Is ethical AI 'woke'?
Another US/UK divide opens up
“Department of War AI will not be woke,” said Pete Hegseth, the former Fox TV host, who is now America’s Secretary of Defence, sorry, War. He was speaking as the Trump administration prepared last week to launch fire and fury at Anthropic, the AI company that dared to try to place restrictions on the military’s use of its Claude product.
Anthropic’s founder Dario Amodei had drawn two red lines: he did not want the Pentagon to use Claude for mass surveillance or to operate lethal weapons without any human intervention. But worrying about Chinese-style intrusion into every aspect of a citizen’s private life, or killer robots that decide their mission is to kill every human with a brown face who stands over five feet tall, is apparently “woke”.
(I have always struggled to define the “w” word, which obviously includes the Diversity, Equity and Inclusion policies that have been swept away under Trump 2, along with dozens of senior military figures who appeared not to be straight white men. Then again, the genius that is Pete Hegseth, the man who divulged secret battle plans on a Signal group which included a prominent journalist, is obviously himself a diversity hire. He just about snuck into the rather over-filled quota of alleged alcoholic sex pests.)
It should not have been a surprise to the administration that Dario Amodei has some ethical concerns about the ways that artificial intelligence could be used. After all, the Anthropic boss is in the habit of writing 10,000-word essays on the subject. In his latest, the insight I found most chilling was that the vast amount of literature AI models are trained on includes sci-fi novels in which AIs rebel against humanity, and that the models might see these stories as manuals for their future behaviour.
Last Friday when it became clear that Amodei was not going to back down, Trump and Hegseth did not just drop Anthropic’s Pentagon contract and the use of Claude by any part of the federal government, they took an even more radical step. Using legislation usually employed against overseas firms such as China’s Huawei, they designated Anthropic a supply chain risk, which meant that all US defence contractors would have to drop it too. Despite severe doubts about the legality of this move, several major companies have already complied.
And guess which AI company has glided into the empty spot left by Anthropic? That’s right, OpenAI, which appeared to be fine with the customer doing whatever they wanted with the technology. Faced with a backlash, OpenAI’s chief executive Sam Altman admitted its deal with the Pentagon “was definitely rushed, and the optics don’t look good” but insisted that there were safeguards built into the contract.
Still, the whole affair gives us some clarity about the widening gap between the US and Europe over the regulation of AI. The EU has passed an AI Act which appears to be quite interventionist, with a framework which assesses AI projects by their level of risk. For instance, using AI for social scoring, where people are classified according to their behaviour or personal characteristics, would be deemed an unacceptable risk and banned, whereas its use in employment and education would be seen as high-risk, with strict rules on transparency enforced. While the UK is no longer in the EU, it is widely accepted that our businesses will want to comply with the new law so as to have access to the European market.
By contrast, the US appears determined to have as little regulation as possible, citing the need for American AI companies to keep ahead in the fierce competition with China. Indeed, it would seem that the kind of AI operation that would be deemed an unacceptable risk in the EU might look very attractive to a Trump administration which has not hesitated to declare a national emergency to justify all manner of actions that look legally or ethically dubious.
Another reason, then, for the UK government to take a long hard look at American tech companies and ask whether we can trust them. Anthropic does at least appear to have a backbone when it comes to putting its fine words about its ethical code into practice. Dario Amodei says his company’s refusal to give “dictator-style praise” to Trump, unlike “other AI companies” (translation: OpenAI), is the root cause of the breakdown in relations.
So here’s a test of the UK’s backbone. Last July, the UK government signed up to a strategic partnership with OpenAI to “explore where it can deploy AI in areas such as justice, defence and security, and education technology in line with UK standards and guidelines.” Anthropic had signed something rather similar but more vaguely worded a few months earlier.
It looks then as though the two companies could end up competing for contracts across UK government departments, including the Ministry of Defence. But the Trump administration’s order to all its defence contractors to stop using Claude affects companies like Lockheed, which also have big contracts with the MOD. That means a deal with Anthropic may look less attractive, especially if Donald Trump and Pete Hegseth keep bellowing about “woke AI.” So here’s the big question: at a time when the ethical use of AI is under the spotlight as never before, will the government reward a company that has shown it is serious about its principles, or connive at its destruction?
Then there’s the question of the controversial AI software firm Palantir. It has a partnership with Anthropic which means it is the channel through which Claude is used by the US military. But NBC has reported that at a meeting with Palantir an Anthropic employee was concerned to hear that Claude had been used during the raid on Venezuela. The suggestion was that this was how the rift between Anthropic and the Pentagon had begun.
Now, Palantir has recently signed a £240 million agreement with the Ministry of Defence, continuing its advance into UK government, which started with a £1 trial that grew into a £330 million contract to build a Federated Data Platform for the NHS.
That NHS contract is all about sharing the most sensitive of data, patient health records, one of the areas considered high-risk under the new EU law. So maybe the Health Secretary should be calling up Palantir and asking a simple question: does the company oppose the use of its technology for mass surveillance, or does it, like Messrs Hegseth and Trump, regard such concerns as woke nonsense?


“the genius that is Pete Hegseth, the man who divulged secret battle plans on a Signal group which included a prominent journalist, is obviously himself a diversity hire” <<—- Great piece Rory, is it ok that for me this was the stand-out sentence? 😃
Another great article, thank you.
For me, one of the most unfortunate things about AI is that because models are trained on a lot of data generated by humans, it reflects back to us many of the less desirable things about humanity, whether that’s racism, misogyny, hunger for power or whatever. Humans need to be given boundaries; Epstein is a good example of what can happen when a person with no moral compass is not reined in. So AI needs rules and boundaries too. I don’t think that’s about being “woke”; rather, it’s safeguarding against AI amplifying our own failings or, worse, being manipulated for nefarious purposes.