In part 2 of my discussion with Dr. Eliot Siegel, I’d like to talk about artificial intelligence and the role computers will play in healthcare moving forward. In Part 1 of our discussion, we talked about how Dr. Siegel was able to set up the first filmless department in the US back in 1993, almost a quarter century ago, though many patients, consumers, and physicians alike still can’t get those images when they need them, and we discussed how we’re hoping that business rules as well as legislative changes will make that possible. However, there’s a lot of discussion now about how computers may actually replace physicians, and this is particularly true for radiologists, who happen to be the most digital of all healthcare folks in the country. Dr. Siegel has played an integral role as an advisor to IBM Watson in its look at how computers in medicine might help with both textual interpretation—looking at what we say and what’s in the medical record—and imaging interpretation—how computers might add to our reading of those images. And some have suggested recently that we just won’t need radiologists at all, that computers will do it all.

 

Dr. Eliot Siegel, I was hoping that you could comment on some of these topics.

 

It’s a really interesting, super hot topic right now. As you know, just in the last few weeks an article came out in the New England Journal of Medicine by Ezekiel Emanuel and a colleague, a few weeks before that an article appeared in the Journal of the American College of Radiology, and earlier this year, in the spring, Ezekiel Emanuel gave a keynote talk. Essentially his message is that the biggest threat to radiology is machine learning. He has predicted in these talks and articles that radiologists are at great risk of being replaced by computers and machines, potentially in as little as 4 to 5 years. I get emails from all over the world: a resident in Portugal wrote me a few months ago, another doing her residency in Italy—I get dozens of these—and the question is frequently the same: they’ve heard about machines replacing radiologists, and they want to know whether they should stay in their residency, switch to interventional radiology, or leave radiology or medicine altogether. It’s just amazing how much hype is associated with machine learning, and how completely silly and false and premature all of those conclusions are. So one of the major questions is, why? Why has there been so much anxiety just recently? What has changed?

 

Machine learning has been around for 30, 35—arguably 50—years, and it’s really an outgrowth of statistical techniques that have gotten increasingly sophisticated. The greatest excitement and anxiety have been associated with a 2012 entry from Toronto called “SuperVision.” What that team was able to do was apply the graphics processing units used in the video gaming industry to an algorithm called a neural network; they were able to speed it up so much that they got a major jump in image recognition performance. So what has happened over the last few years is that the accuracy in telling the difference between a cat and a dog, a chair and a table, or a motorcycle and a car has improved so much that it’s in the neighborhood of 96 or 97 percent on this image recognition challenge. Because of that, people are starting to declare that image recognition is pretty much a solved problem.
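[A minimal sketch, in Python, to make the GPU-plus-neural-network leap concrete. PyTorch is an assumed framework—the conversation names none—and this toy network is far smaller than the 2012 model; it only illustrates the technique of running a convolutional classifier on gaming-class hardware.]

```python
# A toy convolutional image classifier, e.g. cat vs. dog vs. chair vs. table.
# Assumption: PyTorch; this network is illustrative, not the 2012 "SuperVision" model.
import torch
import torch.nn as nn

class TinyConvNet(nn.Module):
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)  # for 32x32 inputs

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# The "video gaming" part: the same code runs on a GPU when one is present,
# which is what made training deep networks fast enough to be practical.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = TinyConvNet().to(device)
images = torch.randn(8, 3, 32, 32, device=device)  # a dummy batch of 32x32 images
print(model(images).argmax(dim=1))                 # predicted class per image
```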

 

In the meantime, I am contacted by startup companies and many others saying that they’re going to use that software in medical imaging and it’s going to do all sorts of things—find fractures, determine bone age—and everybody believes that that software is going to be useful in medicine. In fact, a recent startup funded with tens of millions of dollars had its CEO declare that within the next few months he was going to replace the “wasted protoplasm” that is the radiologist sitting in front of the workstation. What’s happened is that that CEO is no longer in charge of that particular startup. What people don’t understand is that algorithms that work well on simple, small images of dogs and cats are completely untested for medical imaging applications, and that medical imaging is so much more complicated—with so many different modalities and diseases—that there is absolutely no evidence whatsoever at this point that computers will be able to apply the same techniques to medical imaging.

 

So there is a lot of interest and excitement, but at this point I think all of it is overblown. I’m having a debate with Dr. Brad Erickson from the Mayo Clinic at the RSNA this year, where he’s going to argue for, and I’m going to argue against, the proposition that radiologists will be replaced by computers in 20 years. Even if somebody arrived tomorrow in a spaceship from a more advanced planet, or from the future in a time machine, and handed us software that actually could read better than a radiologist—and we have nothing close—it would take 30 years just for the FDA to test and clear it, to know that it actually does what it’s purported to do. We talked with somebody from the FDA at the conference I ran on machine intelligence in medical imaging, and the answer was that they would have to test it for each and every diagnosis, in each modality and each body part it was claimed to work on. That process would literally take decades. So my level of anxiety about being replaced by computers is incredibly low at this point, despite everything you’re hearing. I go out of my way to reassure all the residents and fellows and attendings who write or call that they have absolutely nothing to worry about—except for all sorts of amazing things that machine learning will be able to do, not to replace us, but to make us better and smarter and safer.

 

What do you think the life of a radiologist, or even the life of a physician, will look like in 5 or 10 years with this computer assistant, if you will, in clinical practice?

 

I think what we’re going to see first are much smarter systems than we have now. When I use my speech recognition system, it sometimes makes really crazy mistakes: if I say an aneurysm is 4.8 centimeters, it might transcribe the word “foreplay.” When it does that, it’s making a mistake a human would never make, because I never use the word “foreplay” in my radiology dictations. So building in the equivalent of a grammar or spell checker for dictation software is one of many different steps, as sketched below. Another would be for a system to read my reports and, when I make a recommendation for follow-up, track it in the background. If there’s a critical finding—say, a subarachnoid hemorrhage in an ER patient who has been in a motor vehicle accident—and I report it, I want the computer to flag it as a critical finding and keep track of it. When I give intravenous contrast, or dye, to a patient I’m scanning, I want the computer, the next time that patient comes in, to estimate the right amount of contrast to give and the right dose to scan with. There are literally hundreds of amazing applications we could start on in the next 5 years or so. Beyond that, in 5 to 10 to 15 years, what we’re going to start seeing in radiology and medicine in general are systems that are better at some of the things we rely on our interns and residents and fellows for—looking for lung nodules on chest CT scans or subarachnoid hemorrhage on an unenhanced CT scan of the brain—very specific types of tasks. When we talk about artificial intelligence, we can talk about narrow or weak artificial intelligence, general artificial intelligence, and then superintelligence. We’re at the narrow phase right now, and I think in a 5-to-10-to-15-year timeframe we’ll have computers doing very specific, very narrow tasks that help us out and make us safer and faster and smarter.
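[A minimal sketch of the “grammar or spell checker for dictation” idea: flag transcribed words that the radiologist effectively never dictates. The vocabulary and function below are hypothetical stand-ins invented for illustration, not any real product’s behavior.]

```python
# Flag speech-recognition output that falls outside the physician's own
# dictation vocabulary -- likely misrecognitions, like "foreplay" for "4.8".
# Assumption: a real system would build the vocabulary from the physician's
# historical report corpus; this tiny set is a stand-in.
RADIOLOGY_VOCAB = {
    "the", "an", "aneurysm", "measures", "centimeters", "in", "diameter",
    "no", "acute", "subarachnoid", "hemorrhage", "is",
}

def flag_unlikely_words(transcript: str) -> list[str]:
    """Return transcribed words the radiologist has never dictated before."""
    words = [w.strip(".,").lower() for w in transcript.split()]
    return [
        w for w in words
        if w and not w.replace(".", "").isdigit() and w not in RADIOLOGY_VOCAB
    ]

# "4.8 centimeters" misheard as "foreplay centimeters" gets flagged:
print(flag_unlikely_words("The aneurysm measures foreplay centimeters in diameter."))
# -> ['foreplay']
```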

 

So you would hold, if you go by the 80-20 rule, that computers will take over the 80 percent—the rather mundane tasks—and leave the 20 percent for the professional to deal with?

 

That’s what I think is going to happen over the next 10 or 20 years—not in a short number of years, as Ezekiel Emanuel and others have suggested. I can tell you that when I do image interpretation, my resident or fellow ends up doing 80 or 85 percent of it, and that makes me better and faster. Sometimes they’ll pick up mistakes that I may have made, they’re certainly able to do a lot of the repetitive tasks well and quickly, and I learn new things from them all the time. I can see that [computer assistance] happening, but I don’t think radiologists will be supplanted by computers at any time in the careers of anybody listening to this podcast.

 

I’m going to be a bit of a cynic here. Six months or so ago, the VA came out with a ruling suggesting that non-radiologists—in effect, nurse practitioners—should start to do anesthesia and image interpretation without supervision. My worry is that, in the name of saving money and improving access, interpretation gets handed to people with less training, and at some point computers would be able to do it better than those humans could. Do you think there will be a business model that pushes to replace individuals in certain environments, with computers being better, cheaper, and faster in some way?

 

I think that’s going to happen very slowly. With regard to the VA and imaging, I think there’s a misunderstanding about what the VA was doing. The VA was not so much suggesting that nurses would actually interpret the radiographs; what I think it meant was that nurses would interpret the reports that radiologists would give. I’m sure the VA will come up with a clarification of that. I know the American College of Radiology doesn’t believe that.

 

With regard to the business case, we’re already seeing some of that. There’s an aid organization that helps out a number of underdeveloped areas in Africa, for example. It turns out it’s really inexpensive to deliver X-ray systems out there, but radiologists’ time to do the interpretation is incredibly expensive. So the National Library of Medicine has created software that makes a diagnosis of normal or abnormal, tuberculosis or not tuberculosis. I’m hoping to be involved with a challenge where I take the millions of images in a tuberculosis collection we’ve built at the University of Maryland and release them to the machine learning community, so that by training on massively large data sets they can actually test whether they can diagnose tuberculosis, or tell whether a chest radiograph is normal or abnormal. So I think we’re going to see—in areas that don’t have access to a radiologist, and I think it’s going to start outside the United States—software programs that do fairly specific functions: Is this chest radiograph normal, or is there evidence of tuberculosis, and what’s the likelihood? And it’ll be done for triage purposes. So instead of looking through the 10,000 studies that might be done in a particular month in a screening area within Africa, maybe you would only have to look at 50 or 100 radiographs. In areas without radiologist access or coverage, we’re going to start seeing computers and machines do some of that interpretation. In the meantime it’s going to help radiologists be better and faster, but I don’t think in the United States we’re going to see computers supplanting radiologists and providing primary interpretation for studies—other than perhaps DEXA scans and a few others that are inherently quantitative.
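[A minimal sketch of the triage logic described here: score every screening radiograph with a classifier and surface only the most suspicious for human review. The scoring function is a hypothetical stand-in for a trained normal-versus-abnormal (or tuberculosis) model; only the ranking-and-budget idea is illustrated.]

```python
# Triage: rank studies by an abnormality score and keep a small worklist
# for the human reader, instead of reading all 10,000 screening studies.
from typing import Callable

def triage(study_ids: list[str],
           score_abnormality: Callable[[str], float],  # hypothetical trained model
           review_budget: int = 100) -> list[str]:
    """Return the `review_budget` studies most likely to be abnormal."""
    ranked = sorted(study_ids, key=score_abnormality, reverse=True)
    return ranked[:review_budget]

# Usage: a month of 10,000 screening radiographs shrinks to a 100-study
# human worklist; everything else is deferred or spot-checked.
# worklist = triage(monthly_studies, model.predict_tb_probability, 100)
```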

 

Let me take a different sidetrack. The consumer is really begging for access to this information. Today the consumer goes online and tries to figure out what’s wrong with them, because they can’t get those answers from a human being—doctors don’t have time to talk with them about it. The consumer spends a lot of time, and will come to my office with a pile of papers and tell me what I missed. Do you think IBM Watson, with its textual and imaging interpretation skills, might become a second-opinion tool that consumers use to validate what their physician is telling them?

 

I think what’s going to happen first is that medical information other than images is going to become available to these types of analytic programs. Should I be on a statin? Should I be on a particular hypertension medication, and which one? Here’s my genomic data; can you give me some nutritional advice and suggestions? Here are some of my symptoms; can your computer go through my lab data—maybe genomic data, maybe not—and tell me some possible diagnoses that I can present to my physician, as sort of a second opinion? For me, the low-hanging fruit by far is going to be medical analytic information. If I have cancer, can I electronically send my information and get an opinion from a computer system that might steer me toward a particular clinical trial, or toward thinking I might have an autoimmune disease rather than whatever it’s being diagnosed as? I think we’re going to see that a lot sooner than a computer automatically making radiology diagnoses.

 

One area that is really interesting, though, is dermatology. We already have a number of apps that purport to make skin diagnoses—diagnoses of melanoma, for example. I think we’re going to start seeing apps take on things like that.

 

But getting back to what I was mentioning about the FDA: at least in the US, any app that purports to make a diagnosis is going to have to have a large database of cases and make a very strong case that it can do this safely and consistently. Overall, I think that’s going to be a huge barrier, particularly for medical diagnosis and certainly for imaging diagnosis. I think of these as tools and informational libraries. We’ll see more and more of them come online, but getting FDA clearance for any of them has been extraordinarily problematic, and of course there’s the medicolegal liability issue: I can give a human doctor malpractice insurance, but I’m not sure exactly how I would do that for an algorithm or for a company.

 

You know, we have self-driving cars—in fact I drive a self-driving car myself, on long trips or even driving into work in stop-and-go traffic—and as the months have gone on I’ve come to trust it. But the problem is that I don’t know how to debug really complex software like that in a self-driving car. You can tell me that, on average, half as many people die in car accidents with self-driving cars, but they may die in a completely different way—the car might suddenly take a right turn off a bridge and kill me that way. So the types of accidents could be different, and there could be bugs in the software. How do I debug complex medical software that claims it can do image interpretation, whether for dermatology or radiology? How much of what happens with machine learning is a black box—you run it on a large number of patients, and for the most part it works, but no one knows exactly how?

 

I think the FDA and others are going to feel very uncomfortable with software when we don’t understand exactly how it does what it purports to do. So there are huge barriers and challenges to applying technology that’s doing well outside of medicine to medical applications in general. And imaging [won’t be] the first piece to fall, as many people have predicted—including Andrew Ng, the Stanford professor who gives an amazing machine learning course available for free online. A lot of people who don’t understand medicine and medical imaging have concluded that imaging is going to be low-hanging fruit, whereas in my opinion it’s really at the very top of the tree as far as difficulty.

 

You’ve mentioned the FDA, and you’ve mentioned the barriers to this technology being used in healthcare. Given that it’s a major barrier, do you think the FDA is right to put up these strict limitations, or are they a little too draconian? Where do you see them fitting in? If you were king, where would you set the FDA’s meter for safety?

 

You know, I’m vice-chair of radiology at the University of Maryland and I’m also in charge of radiology at the VA, and what I see as part of the Department of Veterans Affairs and the federal government’s bureaucracy is that things move a lot more slowly and cautiously, and there are also a lot of politics that can hamper things. So I think the FDA has the right idea about protecting patients, and the right idea that if you claim you can make a medical diagnosis, you need good, strong evidence to prove it. But I think the methodologies they’re using slow the rate of approval. For example, with mammography CAD, companies were committed to the idea that CAD had to serve as a second opinion: only after the radiologist made the diagnosis would CAD weigh in. But incorporating CAD software for finding breast cancers on mammography could have been done much better in a way that is interactive and cooperative with the radiologist—conveying a level of certainty about why it thought [the] things [it did]. Plus, with the incredible explosion of new decision-support algorithms and new machine learning and AI applications in medicine, I don’t believe the FDA has the people or funds or resources to ramp up concomitantly with the incredible amount of innovation and development happening in the next few years. Unless there are fundamental changes, I’m very concerned that the FDA will not be able to handle the onslaught of really interesting, innovative, and creative software.

 

[Still,] despite being a skeptic of the hype that radiologists are going to be replaced in 4 or 5 years, I’m an incredible cheerleader for machine learning’s potential to supplement physicians and radiologists and make us much better, safer, and smarter.

 

I think this would be an interesting conversation at a national meeting in terms of the role of the FDA moving forward. By the way, your comment regarding human–machine interaction is really the opportunity: it’s not about whether machines or humans are better; it’s about whether you can make a system that incorporates the best of both.

 

Eliot, I really want to thank you for your time today. This has been really informative.

 

My pleasure and privilege, Alan. It’s always great talking with you. Thank you so much.