For some years, we have been told that artificial intelligence is poised to transform healthcare, whether by detecting clues in patient data that predict conditions such as Parkinson’s, or by automating the examination of scans and thereby making human radiologists largely redundant. For all the hype, however, there are few if any signs of AI in action in a clinical setting.
But in my latest Healthtech Conversation, a doctor who has been a pioneer in developing an AI system to spot serious eye conditions tells me he believes it could make the NHS a world leader in this technology.
Dr Pearse Keane is both a consultant ophthalmologist at London’s Moorfields Hospital and professor of artificial medical intelligence at the Institute of Ophthalmology at University College London. He’s also a great communicator.
Some of the headlines from our conversation:
- The NHS faces a huge challenge in handling patients referred to hospital after eye scans at high street opticians - “to put it in a brutal way, we are drowning in the number of patients that we need to see.”
- This challenge “perfectly embodied” the kind of problem he wanted to solve with AI.
- After reading about Google’s DeepMind, he made contact with one of the firm’s co-founders via LinkedIn.
- By 2018 they had developed a system that was as good as Moorfields’ experts at assessing more than 50 different retinal diseases from high-resolution eye scans.
- But getting from “idea to algorithm” turns out to be a lot easier than going from “code to clinic”.
- Nevertheless, he is confident that within a couple of years a project in which the NHS has played a key role could be “one of the first autonomous AI systems to be used by millions of patients around the world.”
You can see the full interview here:
Or if you just want to listen:
https://www.buzzsprout.com/1919307/episodes/9881331
After Pearse Keane finished medical school in Ireland, he spent a couple of formative years in California and came to London’s Moorfields Hospital determined that technology would play a key role in his research. He says he thought about everything from virtual reality to blockchain before focusing on deep learning, which was then emerging as the most successful AI technique:
“The eureka moment for me was in I think about July 2015. I was reading Wired UK magazine, and there was a big profile of DeepMind’s Mustafa Suleyman. And so I tracked down his profile on LinkedIn, and I actually took out a one month premium subscription so that I could send him a message.”
His persistence paid off. The co-founder of the London-based AI startup, which had been bought by Google the year before, invited him to a meeting. Soon the doctor found himself working with DeepMind - and he immediately knew the problem he wanted to tackle.
Optical coherence tomography (OCT) scanners, which can spot serious eye conditions, used to be found only in hospitals, but now just about every high street optician has one. Dr Keane says that is a good thing - “let me be clear, I don't want to seem negative about it.” But he then illustrates the problem: “Imagine if every GP in the country was given their own MRI scanner, but they didn't necessarily have the training to be able to interpret the scans.”
Every person who had a cough or a headache might end up being referred to hospital, overwhelming the health service. Something similar was happening with OCT scans when opticians thought they showed signs of the dangerous condition age-related macular degeneration, which merits an urgent referral to a consultant. “In 2016 Moorfields had 7,000 urgent referrals from optometrists as possible age related macular degeneration. Now, of those 7,000 urgent referrals, only 800 of them actually had the severe form of macular degeneration that requires urgent intervention.”
To stop hospitals being overwhelmed, some way of triaging those thousands of scans was needed. But developing an AI system to do this would involve patient data being used in a partnership with a powerful US tech giant, and there were obvious sensitivities: “So we put a lot of thought into making sure that we did everything very nicely and very correctly from an ethical and data protection perspective.”
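To make the triage idea concrete, here is a minimal sketch of how such a system might route referrals - purely my own illustration, not the Moorfields/DeepMind system, and it assumes a hypothetical trained model that has already produced an urgency score for each scan:

```python
# Purely illustrative sketch of OCT referral triage - not the actual
# Moorfields/DeepMind system. Assumes a hypothetical trained model has
# already produced an urgency score in [0, 1] for each referral.

from dataclasses import dataclass

URGENT_THRESHOLD = 0.5  # illustrative cut-off, chosen arbitrarily here


@dataclass
class Referral:
    patient_id: str
    urgency_score: float  # hypothetical model output: chance of sight-threatening disease


def triage(referrals):
    """Split incoming optician referrals into urgent and routine queues."""
    urgent = [r for r in referrals if r.urgency_score >= URGENT_THRESHOLD]
    routine = [r for r in referrals if r.urgency_score < URGENT_THRESHOLD]
    # Put the most worrying cases at the front of the urgent queue.
    urgent.sort(key=lambda r: r.urgency_score, reverse=True)
    return urgent, routine
```

The point of the sketch is simply that the scarce resource - a consultant’s time - goes first to the 800 genuinely urgent cases rather than being spread across all 7,000 referrals.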
DeepMind insisted at that time that it was independent of its parent Google and that no NHS data would be passed to the US tech giant. (Later, when the London division’s health work was subsumed into Google Health, that guarantee seemed less than watertight.)
But things moved incredibly fast - by 2018 Pearse Keane’s name was among those on a paper in Nature Medicine describing an AI system which could assess more than 50 eye conditions from OCT scans as well as, or better than, medical experts.
Since then, things have gone rather quiet. So what has happened? “Going from an idea to an algorithm is kind of easy in some ways compared to going from code to clinic,” explains Dr Keane.
All kinds of issues have had to be confronted before the system can actually be put into action and start dealing with the torrent of OCT scans. The team behind it has had to work out whether it can scale up, and whether it can prove equally effective with patients all around the world, not just in London: “We want to prove that if you're a patient with diabetic eye disease in Ghana, in Brazil, in Mexico, in the US, in India or other countries around the world, that essentially the algorithm will work just as well on you as if you were a patient in Moorfields.”
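What that kind of check might look like in practice - a rough sketch of my own, not the study’s actual methodology - is comparing the algorithm’s sensitivity and specificity site by site:

```python
# Rough illustration of per-site external validation - my own example,
# not the published study's methodology. Takes labelled results as
# (site, predicted_urgent, truly_urgent) triples for a binary call.

from collections import defaultdict


def per_site_performance(results):
    """Return {site: (sensitivity, specificity)} from (site, pred, truth) triples."""
    counts = defaultdict(lambda: [0, 0, 0, 0])  # tp, fn, tn, fp per site
    for site, pred, truth in results:
        c = counts[site]
        if truth:
            c[0 if pred else 1] += 1  # true positive or false negative
        else:
            c[2 if not pred else 3] += 1  # true negative or false positive
    return {
        site: (
            tp / (tp + fn) if tp + fn else float("nan"),  # sensitivity
            tn / (tn + fp) if tn + fp else float("nan"),  # specificity
        )
        for site, (tp, fn, tn, fp) in counts.items()
    }
```

A marked gap between, say, a London cohort and a Ghana cohort would be a warning sign that the model does not generalise and needs more work before deployment.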
Then there is the question of regulatory approval, from the MHRA in the UK, the FDA in the US and dozens of other regulators worldwide. They will have to be confident that issues of reliability, of trust in an algorithm to make vital decisions about patient care, and of data privacy have all been addressed.
And Pearse Keane admits there are reasons to be cautious about all of the AI advances we have heard about in recent years: “The fundamental point is that this is a new technology. And that despite all the hype around AI, in the context of healthcare, in the context of self driving cars, in the context of other things, it's still in the early stages.”
Nevertheless, one senses a certain frustration in Dr Keane about the complexity of getting an innovation he believes could make a fundamental difference to healthcare through the thickets of government and regulatory bureaucracy.
Still, when I ask him to predict when the AI system will get across the line, he makes a big optimistic bet: “My prediction is that this is in the next couple of years,” he says. “I genuinely believe that this could be one of the first autonomous AI systems to be used by millions of patients around the world. And for me, I feel very proud that Moorfields and the NHS and the UK could play a central role in that.”
It is an exciting vision. But there are more difficult questions to be answered before it becomes a reality. Who gets the blame if the AI fails to spot a dangerous eye condition? Will this pose a threat to the jobs of medical professionals? Will the NHS share in the rewards of an innovation it has helped to create, or will Google keep all the profits? Expecting answers within a couple of years may prove optimistic.