A US Army soldier monitors a US Army 14-foot Shadow surveillance drone from a control room in Afghanistan
As Verity Harding noted in her guest post in May, the debate surrounding Artificial Intelligence (AI) has tended towards the more apocalyptic scenarios, despite the best efforts of the AI community to keep ethical and legal concerns to the fore as the technology develops. The issue now competes with pandemics and climate change as an era-defining challenge. It was the central theme of Prime Minister Rishi Sunak’s recent meeting with President Biden, and Sunak is now calling an international conference on AI safety. Meanwhile Tony Blair and William Hague have come together to warn that Britain may be falling behind in the development of AI, urging a great national effort in technology development and applications (in the civilian sphere) while ensuring that this is all done safely. Amid concerns about redundant workforces, an epidemic of computer-generated fakery, and intelligent machines turning on their human creators, Paul McCartney turned up praising AI for making it possible to extract John Lennon’s voice to produce a new Beatles song.
This is a multi-faceted debate that often lacks clear focus, oscillating between excitement at the possibilities and fear of the dangers. There is no single AI technology and no single AI issue to worry about. AI can develop and present options with a comprehensiveness, speed, and accuracy that humans cannot match. It is safe to assume that when machines are given narrow but non-trivial problems to solve, on matters where there is a vast amount of data, they can and will continue to make a substantial difference to all human affairs. At the same time, the quality of AI’s outputs depends on the quality of the databases it can access, which may suffer from lacunae, pollution, or corruption. Moreover, the ability to develop innovative AI depends not only on clever individuals but also, especially when it comes to the large language models, on extraordinary amounts of computing power and the most advanced chips. That is why only a few countries and organisations can really be big players in developing the technology and its applications.
I claim no specialist expertise in the technology, so I am in no position to explain what is currently going on with AI or its future possibilities. My aim in this post is different. I want to suggest some ways to think about the potential impact of AI in the military sphere, based as much on the character of warfare as a violent duel as on the quality of the AI.
The Military Value of AI
Those aspects of war in which time is of the essence and advanced weapons are up against each other prompt many of the scare stories about an AI-dominated future. The scenarios usually involve machines deciding to go on a killing spree while their notional human controllers take cover. One such alarming possibility appeared in a recent report, quickly denied, describing a simulation in which a drone guided by AI used ‘highly unexpected strategies to achieve its goal’ and ended up (virtually) killing its operator.
According to the story, the drone was instructed to destroy an enemy air defence system but turned on any source of interference with its mission, which eventually included an operator telling it not to attack. After this story created a minor storm (especially when it was assumed an actual operator had been killed), it was explained that this was no more than a ‘thought experiment’ and not even an actual simulation.