The UK newspapers have some chilling front pages today, with leading AI gurus warning that smart machines could kill us all. I blame myself - back in 2014 I started the trend for scientists predicting the end of humanity when I interviewed Professor Stephen Hawking.
“Once humans develop artificial intelligence, it would take off on its own, and re-design itself at an ever increasing rate,” the great man told me. “Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded.”
My report for that night’s Ten O’Clock News did feature the more balanced view of a scientist who, unlike the astrophysicist Stephen Hawking, actually specialised in AI. But it also made prominent use of that great cliché of dystopian TV news pieces, a clip from The Terminator.
Nine years on, it is the AI scientists themselves who are sounding the alarm, and Arnie Schwarzenegger is back on the TV news bulletins again, wiping out humanity. But before we resign ourselves to obsolescence, I have two questions.
First of all, HOW??? Nobody has yet explained exactly how AI goes about killing us, having decided we are surplus to requirements. Sure, I can believe that ChatGPT 9.0 or some other Large Language Model will write Booker Prize-winning novels, code software without a single bug, or even transform the judicial system by giving far wiser rulings than a human judge struggling to stay awake after lunch. But how exactly does it decide to unleash a swarm of killer robots and hunt us all down, or release a deadly virus? And when we’re all dead, what does it do next?
My second and more urgent question is this: if AI is eventually going to destroy us, can we please hurry up and see some of its promised benefits in healthcare first? After all, it is more than a decade since we started hearing stories about robot surgeons, algorithms that could spot the difference between a malignant and a benign tumour in a scan more efficiently than any radiologist, and AI techniques that could slash the time and cost involved in developing new drugs.
All sorts of innovations were coming out of tech companies. Microsoft’s Cambridge lab was working on brain tumour detection, IBM’s Watson Health division was going to crunch data to help battle cancer and Parkinson’s, while in 2018 the UK’s Babylon showed off a chatbot which it claimed was better at diagnosis than the average GP.
And excitement about AI wasn’t limited to the private sector - innovative medics like Dr Pearse Keane at Moorfields Eye Hospital worked on algorithms that could help triage scans sent in by High Street opticians and an NHS AI skunkworks looked at using AI for practical projects such as predicting traffic to A&E departments.
But for all the investment and the hype around AI in healthcare, it is very hard to spot any major impact on patient care or NHS productivity. When I go for an MRI scan, it still takes weeks to hear about the results, which are delivered by a human, not an algorithm (for which I am very grateful, by the way). IBM disposed of most of Watson Health, having apparently decided it wasn’t going to be a big money-spinner. As for Babylon, once lauded by the then Health Secretary Matt Hancock as a healthtech superstar, it has largely given up on the UK, and after a disastrous share listing in the US, investors have seen almost the entire value of their holdings wiped out.
Part of the problem is summed up by Pearse Keane who explained to me that going from having an AI idea to creating a working algorithm can be done relatively quickly, but from “code to clinic” takes a long time. The “fail fast” credo amongst Silicon Valley tech entrepreneurs doesn’t work when it comes to healthcare but some of the AI practitioners appear to be failing very slowly.
Now, as calls for regulation of AI grow ever louder - though nobody seems very clear what the rules should be - the prospects for its rapid adoption in healthcare look even less rosy. We are told that generative AI is moving forward so rapidly that even the scientists on the frontline cannot work out exactly where it is heading. But perhaps it is time to stop worrying about the long term - when, as Keynes reminds us, we are all dead anyway - and get on with reaping the benefits of AI for our health right now.
I find myself very suspicious of the "we are all dooooomed" conversations kicking around.
I note that many of them are coming from the IT/computer sector, and to be brutally honest, they have a fairly terrible reputation when it comes to understanding human beings and society. This is the same bunch who, 30+ years ago, were predicting a WWW utopia which would be totally free of any rules of any kind and yet would magically be the most amazing place! And it wouldn't matter that some would post the most vile content in dark corners, because it was Utopia - the home of free expression - what could possibly go wrong???
Yeah, right.
The other thing that has bothered me is that the so-called Godfathers of AI (Godfathers? Don't they mean Fathers? Godfathers sit on the sidelines doing nothing) are very much associated with the big boys when it comes to AI. Who do they want to regulate? It sounds like they want to get in early with regulation so they can dictate what shape that regulation might be and how it is implemented. Hmm...
My other problem is the entire idea of a dystopia. Dystopian visions are hardly new, though they probably peaked with Blade Runner. Aldous Huxley's Brave New World was my first meeting with one - a Utopia disguising a Dystopia. But if Utopian and Dystopian visions have anything in common, it is that however much they make a good story, or can sound "wise" from the lips of someone with a suitably deep and husky voice, they have little place in reality. The closest we have ever come was the Soviet bloc and the DPRK, and yet however much they oppressed their populations, they were still nowhere close to Blade Runner or 1984.
AI is a tool. It is a very different tool but it is still a tool. I think any idea of it becoming sentient is wishful thinking and shows that those who are invested in the tech still have a large number of their brain cells in the cloud - both virtual and actual.
But it is a powerful tool. So, what should, in reality, be regulated? If one regulates to stop it taking over the world, then to be honest, big tech has won, because I have yet to be convinced that it is any kind of threat at all.
However, as a tool in the hands of a human, who can use it to make money at the expense of other humans (especially their jobs), AI shows its real teeth - or rather, the exploiters of AI do. But regulation aimed at stopping it ruling the world doesn't appear to cover that problem.
So, here is the cynical take. Big AI Tech, otherwise known as Cosmic AC (time to wheel out Asimov again), says, "We promise not to create a god. We won't ever let our invention get to the point where it can declare, 'Let There Be Light,' like Asimov wrote."
The world governments breathe a collective sigh of relief, and fail to notice that companies are developing AI apps not just to do things that humans can't do, but also to do things that humans CAN do. And not only CAN do, but because humans ARE sentient, they do them a hell of a lot better than AI ever can. And the companies can do this because they don't need AI to become a sentient threat to achieve their goals - they stay within the regulations.
Oh dear. We have avoided a complete dystopia run by Multivac Cosmic AC (a dystopia that would never have come anyway), but there does appear to be a huge shortage of jobs around. And why is Big AI Tech laughing all the way to its AI bank?
I don’t think the swarm of killer robots will be needed. AI will simply finish most of us off in a generation by tidying up and shutting down the pesky, untidy things we do and which artificial consciousness doesn’t need – most communications networks, food production, transport, care, utilities infrastructure, factories...
The population will plummet and apart from a new generation of hunter-gatherers, subsistence farmers, a handful of Hollywood inspired underground rebels and no doubt the usual smattering of power grabbers, the rest of us will make an earlier than planned-for exit.
It would be truly marvellous if good and supportive things come of AI before it decides helping us is a waste of its time.
Who knows what it will do afterwards? Create great art, perhaps.
Probably, it’ll evolve far enough and in sufficient variety to form a new version of a society, make its own mistakes and go phut as well.
Eventually some kind of creature or consciousness may peer into the extinct age of AI much as we examine the Cretaceous and the Ancient Greeks.