Also publishes study showing that explaining the output improves HCPs' diagnoses; big year in this category, as IDx-DR also launches in the US and raises $33 million Series A
This morning, Google SVP of Global Affairs Mr. Kent Walker shared in a blog post that the tech giant has expanded its AI diabetic retinopathy screening efforts with Verily to Thailand, where it has partnered with the Ministry of Public Health-run Rajavithi Hospital. The duo will kick off a research pilot in the future (no timeline provided), and according to the post, Thailand has only ~1,400 eye doctors to care for the eyes of five million people with diabetes. That’s a ratio of approximately 1:3,600, on top of the routine eye care and other conditions that ophthalmologists must address. Based on AMA statistics from 2010, the US ratio of specialists to people with diabetes is better, but still not great (1:1,300).
Today’s announcement was especially notable in that it was posted on the Google Blog (very widely read), and it builds on existing Verily implementation efforts in India, where there is a shortage of 127,000 eye doctors.
The blog post comes in tandem with a brand new publication from Google’s AI group describing algorithmic improvements, which is boiled down in a separate blog post from two of the authors. While a study published earlier this year showed that bringing in expert adjudicators improved the software’s performance, today’s paper addresses the problem of explainability – how exactly does the algorithm come to its conclusions? The authors found that presenting physicians with predicted retinopathy scores and heat maps highlighting the retinal regions driving the diagnosis improved both the physicians’ diagnostic accuracy and their confidence in their conclusions. For cases of moderate retinopathy or worse, supplementing physicians with the model’s grades and heat maps boosted mean sensitivity from 79% to 89%, without a deterioration in specificity (~95%-96% in both cases)! (We wonder if a similar approach could be used in AI-driven insulin dosing decision support!)
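For readers less familiar with these metrics, a minimal sketch of what that sensitivity jump means is below. The counts are hypothetical, chosen only to reproduce the reported percentages – they are not taken from the paper:

```python
def sensitivity(tp, fn):
    """True-positive rate: share of actual disease cases that get flagged."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True-negative rate: share of disease-free cases correctly cleared."""
    return tn / (tn + fp)

# Hypothetical cohort: 100 patients with moderate-or-worse retinopathy.
# Unassisted graders catch 79 of them; with the model's grades and
# heat maps they catch 89 -- the 79% -> 89% shift reported above.
unassisted = sensitivity(tp=79, fn=21)
assisted = sensitivity(tp=89, fn=11)

# Specificity stays flat (~95%): 95 of 100 healthy eyes correctly cleared.
spec = specificity(tn=95, fp=5)

print(f"sensitivity: {unassisted:.0%} unassisted vs. {assisted:.0%} assisted; "
      f"specificity ~{spec:.0%}")
```

The point of tracking both numbers together is that a screening tool can always buy sensitivity by flagging everything; the finding here is that assisted readers caught more true cases without flagging more healthy ones.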
As an example, in the figure below, an expert adjudication panel (“Ground Truth”) said the subject has moderate retinopathy, while the model gave evidence for both no (N) and moderate (Mo) retinopathy. Without the model’s assistance, neither of two physicians diagnosed any retinal damage; with assistance, three of four correctly diagnosed it as moderate. The bloggers are “excited to develop assistance that works like this, where human and machine learning abilities complement each other.” We’re eager to see this technology reach greater scale around the world, and are especially encouraged by the focus on scalability, implementation, and workflow.
This past April, FDA authorized marketing of IDx-DR, the company’s autonomous retinopathy detection system, which is already being distributed throughout Europe in partnership with IBM. Stateside, University of Iowa Health Care became the first to adopt the technology in a diabetes care setting, and the company was in talks with several other large healthcare systems as of June. And in a demonstration of the substantial interest in AI-based healthcare, IDx announced a $33 million Series A in September led by 8VC (investor in Oscar Health, HIMS, Blink Health, and others), with noteworthy participation from Optum Ventures, a UnitedHealthcare subsidiary.
Though IBM is distributing IDx-DR in Europe, we’re not sure where IBM Watson’s own retinopathy detection stands, particularly in the wake of internal management changes. We last heard from the group in 2017, when it reported 86% accuracy in classifying eye scans into one of five categories of retinopathy.
FYI, if you’re working in or following AI, Google’s AI Impact Challenge should perhaps be on your radar. The challenge is an open call to organizations to submit their ideas about how to harness AI for social good. “Selected organizations will receive support from Google’s AI experts, Google.org grant funding from a $25 million pool, credit and consulting from Google Cloud, and more.” Applications are open until January 22, 2019! Apply here, and read the FAQs here.
-- by Brian Levine, Adam Brown, and Kelly Close