
The Future of Medical Imaging and Machine Learning

April 4, 2019 | 7 mins read

Often referred to as “unicorns,” startups worth a billion dollars or more are so rare that only 335 exist today according to the gold standard unicorn list over at CB Insights. If you happen to be a founder of one of these unicorns, odds are better than one in ten that you attended Stanford University, one of the most research-oriented universities in the world. Thirty-one Stanford faculty have won the Nobel Prize since the university’s founding, and 19 Nobel laureates currently teach on the beautiful Stanford campus that sprawls across farmland near San Francisco, with a faculty roster that makes you wonder what you’ve been doing with your life so far:

Stanford Faculty of 2019 accomplishments – Source: Stanford University

We visited the campus this week to attend a Workshop on the Future of Medical Imaging, which drew people from all over the globe to talk about the sorts of technology advancements that just might turn into the unicorns of tomorrow. A few of the startups in attendance were Zebra Medical Vision and Arterys, both of which we wrote about in our article on 9 Artificial Intelligence Startups in Medical Imaging.

At The Peak of Gartner’s Hype Cycle

Speaking at the conference was Founder and CEO of Zebra Medical Vision, Eyal Gura, who talked about how deep learning sits at the peak of the Gartner Hype Cycle. Essentially, this means that expectations are inflated and a great deal more work needs to happen before the technology becomes commonplace. His firm has now developed 48 deep learning algorithms to diagnose medical conditions, eight of which have been CE marked. Providers use Zebra to alert them to patients at high risk of cardiovascular, lung, bone, and other diseases.

Zebra Medical was an early mover and started out by building their own repository containing 15 years of Israeli imaging data from the country’s national healthcare system. While five years ago progress was stymied by radiologists who didn’t want anything to do with the use of machine learning for diagnosis, today the problems have changed. Budget allocations are rigid, approvals can take a while, and it’s especially challenging to scale because each hospital ecosystem differs so much. For a startup, it’s costly to try and deal with all these hurdles on a client-by-client basis. As deep learning crests the peak of hype, the fanfare will fade and it all becomes about execution, and about a whole lot more than just using deep learning to diagnose medical images.

Sharper Imagery Using AI

Also speaking at the conference was co-founder of medical imaging startup Arterys, Shreyas Vasanawala, who also happens to be Director of MRI at Stanford, where he develops new Magnetic Resonance Imaging (MRI) technologies. During the workshop, Dr. Vasanawala talked about how AI isn’t just used to diagnose medical imagery; it’s also being used to help construct the images in the first place.

During the process of capturing an image from the patient, a great deal of raw data is generated, of which about 80% is discarded. Using machine learning algorithms, that data can be put to work: it can further describe the medical image (what IT folks call metadata) and help resolve problems in the imagery caused by imperfections such as patients moving around or breathing while the images are being captured. For example, one image slice might take 20 seconds to generate, during which time the patient is holding their breath. Using machine learning to predict the image output, they might be able to take 15 image slices in the same 20 seconds. It’s another example of how AI is being applied to all phases of the medical imaging life cycle.
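
As a rough illustration of the idea, and not Arterys’ actual method, the toy Python sketch below simulates an accelerated scan by discarding most of the raw frequency-domain (k-space) samples and reconstructing with a naive zero-filled inverse FFT. The aliasing artifacts that result are exactly what a trained reconstruction model learns to remove.

```python
import numpy as np

# Toy illustration only: simulate an accelerated MRI scan by throwing
# away ~80% of the raw frequency-domain (k-space) samples, then
# reconstruct with a naive zero-filled inverse FFT.
rng = np.random.default_rng(0)

image = rng.random((128, 128))        # stand-in for a fully sampled slice
k_space = np.fft.fft2(image)          # the raw data a scanner acquires

# Keep a random ~20% of phase-encode lines (rows of k-space).
mask = rng.random(128) < 0.2
undersampled = np.zeros_like(k_space)
undersampled[mask, :] = k_space[mask, :]

# Zero-filled reconstruction; a learned reconstruction network would
# take this (or the raw samples) as input and predict the clean image.
recon = np.abs(np.fft.ifft2(undersampled))
err = np.linalg.norm(recon - image) / np.linalg.norm(image)
print(f"Relative error of naive reconstruction: {err:.2f}")
```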

How AI Will Transform Radiology

A pervasive theme throughout the workshop was the use of artificial intelligence that extends well beyond simply diagnosing medical images. Just last year, Stanford established their Center for Artificial Intelligence in Medicine & Imaging, led by Director Curtis Langlotz, who talked about how they’ve established relationships across nine departments at Stanford to look at further ways AI can be used for medical imaging. Because radiologists are human, there’s still around a 4% error rate, which can be reduced by having AI offer up a second opinion. They’re also giving the algorithms more data to munch on.

Using Natural Language Processing (NLP), they can extract the contents of an Electronic Health Record (EHR) and then use that additional data to interpret an image. Eventually, they’ll add genomics data as well. Even deciding which type of medical imaging procedure should be performed is a decision that machine learning can contribute to. When it comes to which patients a radiologist prioritizes in their workflow, a machine learning algorithm can quickly sort out who should be looked at first. That’s what Zebra Medical Vision is doing with their “Triage” product offering, where algorithms flag and prioritize cases for pneumothorax and intracranial hemorrhage.

Zebra Triage – Source: Zebra Medical Vision
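
To make the prioritization idea concrete, here’s a minimal, hypothetical Python sketch of a triage worklist, assuming each study arrives with a model-predicted probability of a critical finding. None of the identifiers or scores here are real.

```python
import heapq
from dataclasses import dataclass, field

# Hypothetical sketch of AI-driven worklist triage: each study arrives
# with a model's predicted probability of a critical finding (say,
# pneumothorax), and the radiologist reads the highest-risk cases first.
@dataclass(order=True)
class Study:
    neg_risk: float                  # negated so heapq pops highest risk first
    study_id: str = field(compare=False)

worklist: list[Study] = []
for study_id, p_critical in [("CXR-001", 0.07), ("CXR-002", 0.91), ("CXR-003", 0.34)]:
    heapq.heappush(worklist, Study(neg_risk=-p_critical, study_id=study_id))

while worklist:
    s = heapq.heappop(worklist)
    print(f"Read {s.study_id} next (predicted risk {-s.neg_risk:.0%})")
```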

These are just some of the ways that AI will help transform radiology, not replace radiologists.

AI Won’t Replace Radiologists

The term “scut work” is often used by medical students in residency to describe the less glamorous tasks they are asked to do, which fall under “on the job training” but are menial and supposed to be someone else’s job. There are plenty of tasks on radiologists’ plates that they’d much rather someone else do. Said Dr. Vasanawala, “being a radiologist isn’t as sexy as it sounds, and a good chunk of the day for a radiologist can be mundane and mindless.” Radiologists don’t get to go home early, not even with the help of AI algorithms. Freeing radiologists from mundane tasks lets them do more of what they’re best at, which is helping people.

One of the workshop speakers was a Stanford PhD student named Pranav Rajpurkar, who currently studies under Coursera co-founder Andrew Ng. He talked about a future of continuous monitoring where an ambulatory ECG patch worn on one’s chest could record a million heartbeats over two weeks, all monitored and analyzed in real time, something that would take far too much time for humans to ever do. Those same capabilities could extend radiology expertise to the half of the world that lacks it, in places where hospitals are being built with expensive medical imaging equipment but nobody to read the scans.
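
That million-beat figure survives a quick back-of-the-envelope check:

```python
# Sanity check on "a million heartbeats over two weeks": at a typical
# resting rate of ~60 beats per minute, a two-week recording captures
# on the order of a million beats.
beats = 60 * 60 * 24 * 14     # beats/min * min/hr * hr/day * days
print(f"{beats:,} beats")     # 1,209,600
```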

Mr. Rajpurkar also mentioned a future where you could snap a picture of your own x-ray with a smartphone from anywhere in the world and then upload it to the cloud, where AI algorithms could give you a diagnosis on par with a human radiologist. During the workshop, mention was made of an ongoing Stanford research project of Mr. Rajpurkar’s called Xray4All, described in a Medium post last year as a tool that will “diagnose thoracic diseases from chest x-rays taken as photos at an expert level at low latency using deep learning.” Stanford certainly has the data needed to train their chest x-ray algorithms. It’s a dataset called CheXpert, and Stanford gives it away for free because they believe data drives innovation and progress can be accelerated by sharing data. (Note that “radiograph” is just a synonym for x-ray.)

CheXpert is a large public dataset for chest radiograph interpretation, consisting of 224,316 chest radiographs of 65,240 patients. We retrospectively collected the chest radiographic examinations from Stanford Hospital, performed between October 2002 and July 2017 in both inpatient and outpatient centers, along with their associated radiology reports.
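
Given a model trained on a dataset like that, here’s a minimal sketch of what the “photo in, diagnosis out” inference step might look like, assuming a DenseNet-121 backbone (an architecture used in Stanford’s published CheXpert baselines). The weights below are untrained placeholders, and the finding names and file paths are illustrative, not Xray4All’s actual label set.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Illustrative findings only; a real system would load weights trained
# on CheXpert rather than the untrained model used here.
FINDINGS = ["atelectasis", "cardiomegaly", "consolidation", "edema", "effusion"]

model = models.densenet121(weights=None)
model.classifier = torch.nn.Linear(model.classifier.in_features, len(FINDINGS))
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def score_photo(path: str) -> dict[str, float]:
    """Return per-finding probabilities for a phone photo of a radiograph."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = torch.sigmoid(model(x)).squeeze(0)
    return dict(zip(FINDINGS, probs.tolist()))

# scores = score_photo("my_xray_photo.jpg")  # hypothetical input file
```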

In addition to CheXpert, Stanford also offers other datasets including MURA, one of the largest public radiographic image datasets, consisting of bone x-rays. An ongoing competition lets you run your own algorithms against the dataset, and the results are scored on a leaderboard with Stanford researchers leading the charge.

Image files take up a lot of space, and back-of-the-napkin math says that if the CheXpert x-rays averaged 50 megabytes each, then that’s more than 11 terabytes of images that need to be stored, cataloged, cleansed, and so on, and that’s just for that one dataset. The need to build out infrastructure to house medical imaging data was cited as a major pain point by all of the workshop attendees who commented on the topic. That’s what this next startup hopes to solve.
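
For the curious, here’s that napkin math in Python; the 50-megabyte-per-image figure is our assumption, not a measured average:

```python
# Back-of-the-napkin storage estimate for CheXpert.
n_images = 224_316            # radiographs in the dataset
mb_per_image = 50             # assumed average size, not a measured figure
total_tb = n_images * mb_per_image / 1_000_000
print(f"~{total_tb:.1f} TB")  # ~11.2 TB, for this one dataset alone
```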

Big, Big Imaging Data

One person who understands the pain points of the workshop attendees is Travis Richardson, CEO of Flywheel. The company sells software to support all the other companies doing machine learning for medical imaging. Everyone agrees that the IT challenges inherent in setting up this kind of framework are formidable: working with advanced algorithms in the cloud requires knowing a lot of computer science. Flywheel believes that research scientists shouldn’t have to deal with things like sharing, privacy, multi-site access, data labeling, and search. Its software helps scientists use cloud services (such as Google Cloud Platform) to take care of data storage and cloud-scaling for compute, and it meets the privacy requirements of HIPAA. It only makes sense that as big imaging data becomes centralized, basic IT services for data and computational management should be provided broadly so that research scientists don’t each have to rebuild the same software. Flywheel is filling that need.
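
To give a flavor of the cloud plumbing that gets abstracted away, here’s an illustrative Python snippet that uploads a de-identified scan to Google Cloud Storage using the standard client library; the bucket name and file paths are hypothetical.

```python
# Illustrative only: the sort of storage plumbing a platform like
# Flywheel handles for researchers. Bucket and paths are hypothetical.
from google.cloud import storage  # pip install google-cloud-storage

client = storage.Client()  # uses application-default credentials
bucket = client.bucket("my-imaging-research-bucket")
blob = bucket.blob("deidentified/chest/patient00001_view1.dcm")
blob.upload_from_filename("local/patient00001_view1.dcm")
print(f"Uploaded to gs://{bucket.name}/{blob.name}")
```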

Conclusion

Algorithms are a better autopilot for radiologists, but radiologists will still need to know how to fly the plane. The use of AI only stands to benefit radiologists, but it will take time. The definition of “artificial intelligence for medical imaging” encompasses a lot more than just diagnosing medical image output or helping to reduce medical practice pattern variation. It’s really about using artificial intelligence to solve radiologists’ pain points.

While much of what was presented during the workshop is either commercialized or en route to commercialization, members of the medical community lamented the challenges they face that could be solved with AI but haven’t been yet. One of those problems, medical image segmentation, is something we’re going to cover in our next article on the workshop, in which we’ll look at medical imaging technology being developed for the surgeons of the future.
