P4P, cognitive bias, and ethical care

I have written before of my belief that P4P as often practiced is unethical because it attempts to tilt the clinician away from shared decision making with an autonomous patient, and towards directed decision making serving the interest of the clinician or the institution. Some have disagreed, suggesting that good clinicians will not be swayed. My initial response to this line of argument has been: if these financial incentives do not work to align the clinician’s goals with the institution’s (rather than with the patient’s), why are institutions offering them? There is a better response, and it comes from this past week’s NEJM Perspective by Pat Croskerry, MD, PhD.

Among the points made in his discussion of cognitive bias and its impact on clinical decision making are the following:

  • Cognitive bias, a universal human trait, is a common and significant barrier to accurate diagnosis and objective treatment decisions in medicine. (The diagnostic failure rate is estimated at 10 to 15%, and cognitive errors are the largest contributing cause.)
  • The human mind, including that of clinicians, is prone to more than 100 identified and studied biases that affect clinical decision making.
  • Humans process information with two separate systems: System 1, which is intuitive, quick, always on, easy, automatic, and serves us well for the day-in, day-out activities of life; and System 2, which is analytic, slow, effortful, requires practice and training to master, must be voluntarily engaged, and is well suited for complicated or nuanced decision making where an instant response is less important than an accurate one. (Note: these are not simply philosophical constructs. fMRI shows quite dramatically that different brain areas and pathways are involved, and that each of these systems tends to inhibit the other.)
  • For medical decisions, System 2 is without question more accurate and more appropriate, although clearly still susceptible to cognitive biases, especially if care is not taken to explicitly evaluate for and eliminate triggers for cognitive error.
  • It would be impractical to use System 2 exclusively. System 1 is a time and energy saver, and it quickly becomes the default under time pressure.

Manifestations of System 1 include standardized care, algorithms, care driven by EBM recommendations, and learned patterns of response to common situations. Because these shortcuts bypass deliberate analysis, they carry a serious risk of errors and suboptimal or even dangerous care. Since we cannot avoid cognitive bias, we have two obligations to minimize the harm it can cause:

  • Structure our systems and design our workflows to minimize opportunities for built-in bias, in the form of inappropriate defaults or incentives. This includes providing adequate time for decision making and supplying tools for point-of-care decision support.
  • Be mindful: “Becoming alert to the influence of bias requires maintaining keen vigilance and mindfulness of one’s own thinking. When a bias is identified by a decision maker, a deliberate decoupling from the intuitive mode is required…” This is not as easy as it sounds. We are rarely aware of our biases, and few of us understand their impact on our decision making. “Debiasing” in a setting prone to cognitive error is extremely hard and rarely successful.

In view of the known harmful impact of cognitive bias on clinical decision making, public policy groups, governments, payers, health care organizations and individual clinicians should be vigilant and carefully avoid setting up systems that foster poor clinical decision making. Individuals should be vocal and assertive, willing to point out policies, systems and behaviors that increase the likelihood that cognitive bias will interfere with patient-centered shared clinical decision making. P4P that rewards clinicians for convincing patients to make a specific decision (as opposed to rewarding the clinician for helping the patient reach a decision based on the patient’s own context, values and preferences) is designed to exploit cognitive bias to alter clinician behavior. It undermines patient autonomy, and it is unethical.

Links to more on this topic: