
Consent varies depending on how you ask the question

One of the reasons we undervalue insurance: the future “you” doesn’t get equal treatment

Under stress, we don’t make good decisions (no surprise there)

Behavioral economics may offer a framework for learning the “art of medicine”

 

Alan Pitt: I’m here again with Professor Doug Hough from Johns Hopkins University, and we’ve been talking about behavioral economics as a way to better understand each other, in both provider-to-patient and provider-to-provider communication, so that perhaps we can communicate better and, in doing so, get more value in healthcare. Dr. Hough, the last time we talked, you mentioned the word framing. That term runs throughout behavioral economics, and it comes up often in my world as a practitioner in issues of consent: how I talk to a patient about a procedure. Whether they agree or disagree can really vary based on the words I use.

 

Douglas Hough: Right.

 

AP: How does that work, framing for medical procedures and consent?

 

DH: When you describe an issue or present a potential treatment plan to your patient, do you talk about the benefits first, or the risks first? Now, most clinicians are going to present the benefits first. Why do they do that? Well, for one thing, they believe that this treatment is going to have significant benefits, and then, as good clinicians, they want to describe what the risks are. That makes a lot of sense. Suppose you switched it, and you described the risks of a procedure first. What do you think would happen?

 

AP: A lot of people wouldn’t want to do it.

 

DH: Why? It’s the same information—the same benefits, the same risks. You just present the risks first. What’s the difference?

 

AP: As I think you mentioned in the first session, we’re not really rational. How we’re told things really shapes our decisions, even though a lot of the time it makes no sense.

 

DH: Well, from a behavioral science standpoint, it makes a lot of sense, because by presenting either the benefits or the risks first, you are framing (another term is anchoring) people’s ideas, their thoughts. If you present the benefits first and then start presenting the risks, the risks start to chip away at the benefits. But you’ve already anchored the patient, and they remain mindful of what the benefits are.

 

DH: If you started with the risks, the patient is now thinking, “Oh my gosh! I could be disabled, I could even die because of this procedure!” And then, once they’re anchored there, you say, “Oh, by the way, there are all these benefits, too.” Well, the patient has already been anchored by the risks. So as a result, you are absolutely right: the patient is more likely to agree to the procedure if you present the benefits first, rather than the risks first. Another thing that’s kind of interesting is that a number of studies have shown that if you put the risks in a survival frame, people will respond differently than if you put them in a mortality frame.

 

AP: Say, in percentages: 90–10 versus 10–90.

 

DH: That’s right. If you say, “90 of 100 patients will survive 5 years,” that’s going to be different from saying, “10 of 100 patients will die within 5 years.” Now, standard economics would assume that people can subtract 90 from 100 and get 10, but that’s not how people think. It may not be rational, but it’s how people think. So framing the issues is incredibly important for decision making.
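
[A note for readers: the point is that the two frames carry identical information; only the presentation differs. A minimal sketch in Python makes the equivalence concrete. The frame helper below is hypothetical, written for this transcript, not something discussed on the show.]

```python
def frame(survivors: int, total: int = 100, years: int = 5) -> tuple[str, str]:
    """Render one statistic in a survival frame and a mortality frame."""
    deaths = total - survivors  # the only arithmetic involved: 100 - 90 = 10
    survival = f"{survivors} of {total} patients will survive {years} years"
    mortality = f"{deaths} of {total} patients will die within {years} years"
    return survival, mortality

survival_frame, mortality_frame = frame(90)
print(survival_frame)   # 90 of 100 patients will survive 5 years
print(mortality_frame)  # 10 of 100 patients will die within 5 years
```

Standard economics predicts identical responses to the two strings; the framing studies Dr. Hough cites find systematically different ones.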

 

AP: What about a related issue? At end-of-life, we’re often faced with family members who drive a lot of the decision making. We as clinicians may think that there’s very little hope for the patient, but the family comes in and wants everything done. Is there any guidance behavioral economics might offer for having better conversations in those situations?

 

DH: Oh, absolutely. In one of Kahneman and Tversky’s most famous papers, they noted that people seem to over-weight small probabilities and under-weight large probabilities, and a number of studies have demonstrated this. It goes so far that people really can’t perceive the difference between a 1 in 10,000 chance, a 1 in 100,000 chance, and a 1 in 1,000,000 chance. Even though those risks span a hundredfold difference, people see them as about the same. As a result, if you were to tell the patient, or more importantly the patient’s family, “Well, you know, there’s really only a 1 in a million chance that this procedure is going to work and save your grandfather,” they’re going to think, “Oh, you mean there is a chance?”
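
[Another note for readers: this compression of small probabilities is formalized by the probability-weighting function from Tversky and Kahneman’s 1992 cumulative prospect theory paper. The sketch below, in Python, uses their published curvature estimate for gains, γ ≈ 0.61, as an illustrative parameter; the three probabilities are just the ones from the conversation.]

```python
GAMMA = 0.61  # Tversky & Kahneman (1992) curvature estimate for gains; illustrative

def weight(p: float, gamma: float = GAMMA) -> float:
    """Perceived decision weight: w(p) = p^g / (p^g + (1 - p)^g)^(1/g)."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

for p in (1 / 10_000, 1 / 100_000, 1 / 1_000_000):
    w = weight(p)
    print(f"objective {p:.6f} -> perceived {w:.6f} ({w / p:,.0f}x overweighted)")
```

The three objective risks differ by a factor of 100, but their perceived weights sit within a factor of about 17 of one another, and the 1-in-a-million chance carries roughly the decision weight of a 1-in-4,600 chance, over 200 times its objective value. That is one way to formalize “you mean there is a chance?”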

 

AP: Yes—they’re going to view it as one in 10.

 

DH: Yes, they’ll view it as 1 in 100, or something like that; they’re going to look at it that way. Now, the other part of this: there’s something the behavioral economist George Loewenstein has called the “hot–cold empathy gap.” By that, he means that people make different decisions when they’re in a cold state versus a hot state. Say you’re sitting at home and your physician has said to you, “Well, you know, at some point you’re going to be near death; you need to put together some advance directives.”

 

DH: What would you want? You go through the list and you decide, “Oh, that’s fine.” But then, when you’re in the hot state, when you are under extreme duress, those decisions suddenly change. And even if they stay the same for the patient, the family is now in the hot state. They’re going to make decisions that, if they were thinking about this in a cold, clinical, rational state, would come out significantly differently.

 

AP: One example might be road rage, right? Thinking about it rationally, no one would ever commit road rage, but in that moment, that guy who cut me off, I’m getting him.

 

DH: In that moment, you’re going to do it. And here’s where the hot–cold empathy gap comes in. What Loewenstein has hypothesized and shown is that people don’t understand that they make different decisions in a hot state versus a cold state. They think they’re making rational decisions in both.

 

AP: So what do we do when we’re faced with a patient who’s really—in my world, I work at a big neurological center—we often have brain-dead patients. I mean, literally, there is no hope, but still the family persists. How do we talk them back into that cold state, where they’re much more rational about the next steps? Any thoughts on that?

 

DH: I’m going to have to punt on that one.

 

[laughter]

 

DH: Hey, Alan, I’m an economist, not a counselor…

 

AP: Alright, well I was looking for some guidance here. I thought you might have the answers.

 

DH: Yeah, I guess, once again, behavioral economics is a tool; it’s not the end-all, be-all.

 

AP: Well, I’m expecting you to give a lecture on what to do at end-of-life sometime in the next 12 months, and I’m bringing some clinicians to you.

 

[laughter]

 

AP: Let me switch topics on you. One of the things that has always bothered me is this kind of “CYA,” or cover-your-ass, medicine that many physicians practice. If you think about it, it costs me nothing as a clinician, particularly in my world as a radiologist, to suggest follow-up for you as a patient: “Let’s get another CT or MRI scan in 3 to 6 months.” In fact, I’m actually incentivized to do that, because you’re going to come back and I’m going to get additional revenue out of that imaging. You as the patient have to deal with the risk, worry, and cost for that timeframe, right? Everybody has cancer until they don’t. If I say it could be cancer, you have cancer.

 

DH: Right.

 

AP: From a physician’s perspective, can you offer any thoughts on how to lower that bar to where we practice that less or think about it differently, in economic terms? Or maybe you’re going to tell me it’s an insurmountable problem because the incentives all align with my over-ordering things.

 

DH: A couple of things. Let me start with that “CYA” comment. It isn’t just CYA. There’s an almost-universal cognitive bias called “action bias,” which says, “Above all things, do something. For goodness’ sake, do something.”

 

AP: Right.

 

DH: You see it as a radiologist. Oncologists see it all the time. You’ve alluded to it—if you say it may be cancer, then it’s cancer.

 

DH: Say an oncologist diagnoses someone with prostate cancer, and it’s a Gleason score of, say, 4, 5, or 6. You tell the patient, “Well, you’ve got prostate cancer,” and the patient immediately is going to think, “Oh, what are we going to do?”

 

AP: Yes, and by the way, for the audience: that’s a low score. I have this adage that it’s better to die with a disease than from it; you win if you die with the disease.

 

DH: Yes, exactly. To tell a patient, “Well, yeah you have this cancer, but you’re probably going to die from something else, so what we’re going to engage in is watchful waiting.” The patient will say, “What are you talking about? There’s a cancer in my body. Get it out. You’re telling me that we’re going to wait till it gets much bigger? I thought you and the American Cancer Society always said, ‘early detection, early treatment’, so why are we waiting around for this?”

 

DH: Well, that’s action bias, and physicians are often vulnerable to it as well. So what do you do about that? I think you establish a series of guidelines and/or checklists that a physician can appeal to. Those can appeal to a physician in a couple of ways: from an ethical and moral standpoint, and also from a legal standpoint. As you know, many malpractice trials are based on, “Well, did you adhere to the standard of care?” If the physician can say, “The American Cancer Society, or the American College of Radiology, or another professional organization says that the standard of care for this kind of disease is watchful waiting, and that’s what I did,” that’s going to be more defensible than if he just decides on his own that he’s not going to provide care.

 

AP: Of course, you gave that example in your book about mammography, which got national attention: the societies were actually suggesting overtreatment of women, going in the opposite direction from what the data said. I would assume that was because—and this is my own society, the American College of Radiology—that’s what we’re trained to do. We’re trained to cure disease, right?

 

DH: You’re trained to do something.

 

AP: Yes, we’re trained. So I think sometimes the standards drive overtreatment as much as avoidance of treatment. You gave a really good example of that with the soccer goalie. The percentages show that he shouldn’t move in a penalty-kick situation; he should just stand there, because he has a much higher chance of blocking the kick than if he dives left or right. But he’s derided because he didn’t do anything.

 

DH: Right, he just stood there.

 

AP: People don’t like that. You know, I’m left with the feeling that behavioral economics offers something of a framework for what, in my career, has been very subjective. This idea of the “art of medicine” is a very squishy thing, right? As senior doctors, we tell people, “Well, this is just experience.” I get a sense that behavioral economics might offer a framework for medical students and nursing students to begin to understand how to have better conversations with their patients over their careers. Would you say that’s true?

 

DH: Oh, absolutely. In a sense, behavioral economics gives you permission to do that. Physicians have long resisted what they call the “cookbook method” in the name of the “art of medicine.” We have to recognize that, but at the same time, the cookbooks—the checklists and the guidelines—can be really valuable in helping physicians provide high-quality care. So that’s one way behavioral economics can help. It would also be good to be more mindful about framing and anchoring, to understand what they do. Physicians understand them implicitly. But being more mindful of what they mean lets you use those framing tools honestly, to help patients make the right decisions for them.

 

AP: Have you had any success yet extending behavioral economics courses beyond your economics students to the medical students or nursing students there at Hopkins?

 

DH: Well, I wish I could say yes, [chuckle] but not quite yet.

 

AP: Well, I guess it’s still early days, right?

 

DH: It’s the early days. It’s very much the early days.

 

AP: I do believe, though, that medical school training must evolve. You know, personally, I have a deep-seated belief that healthcare, in America in particular, has gotten to the wrong place, where we’re selling the wrong things. When I started this series on behavioral economics, I really called out that it’s not a new drug, a new device, or a new surgery that’s going to get us out of the healthcare crisis. It’s giving the patient, their family, and, to some degree, the less-experienced provider reassurance. Oftentimes, though, they don’t hear us; that reassurance goes unheard when we’re having the conversation. I guess the overarching concept I’m hearing from you is that behavioral economics offers a window, a framework, for better conversations between doctors and patients, as well as between doctors and doctors. Is that the real take-home message?

 

DH: Yeah, that really is, Alan. In a sense, it provides a new way of thinking. It’s not a perfect way of thinking, but it’s a new way of thinking and of framing issues and problems that I think could help both clinicians and their patients make better healthcare decisions.

 

AP: Doug, I really appreciate the time today. I hope folks listening find this of value. Clinicians out there might consider using some of these techniques in the conversations they have with their patients. Frankly, patients talking to providers and making very important decisions about their own healthcare might take these suggestions under advisement as well. Particularly the hot–cold state: don’t make big decisions, if possible, when you’re wrapped up in some sort of emotional issue. Thank you again, Doug, I very much appreciate it.

 

DH: Exactly. I enjoyed it as well, Alan.