In his second Reith Lecture, Stuart Russell, Professor of Computer Science at the University of California, Berkeley, looked at how developments in technology mean that AI now has the capacity to destroy life independently. ‘Killer robots’, also known as ‘lethal autonomous weapons systems’, are able to locate, identify and ‘engage’ (a euphemism for ‘kill’) targets without human supervision. They simply need to be programmed, using facial recognition to identify their targets.
Drone strikes have been carried out since the turn of the 21st century, mostly by the US military, in places such as Afghanistan, Iraq, Pakistan, Somalia, Syria and Yemen. The drones are unmanned but have a ‘controller’ and a communications link. Yet, even with a controller, civilian casualty rates have consistently run at between 7% and 15%. And without a controller…?
When asked for his thoughts on the development of such unmanned killing machines, Russell offered a number of scenarios. He mused on whether a ‘code of conduct’ would be the answer, one that would persuade scientists not to create algorithms that kill people. Or perhaps the technology could be banned altogether? Since wars exist, and are funded by governments, he also considered whether ‘robot wars’ that avoid human casualties might be a solution. In reality, he admitted, wars tend to end when the level of destruction, human and otherwise, becomes untenable. So probably not.
The question posed by Russell is: how do we ensure that AI never gains power over us? The difficulty lies in the programming. AI pursues objectives without consciousness; it will do ‘whatever it takes’ to get the result it was programmed for. Clearly, if the programmer sets out to do harm, the AI will do its utmost to achieve that objective.
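To make that point concrete, here is a minimal, purely illustrative sketch (not from the lecture; the action names and numbers are invented): an optimiser simply maximises whatever objective it is handed, and anything left out of that objective, such as civilian risk, carries no weight at all.

```python
# Toy illustration: an optimiser pursues the objective it is given,
# with no notion of the harm a poorly chosen objective might cause.
# All names and figures below are hypothetical.

def optimise(objective, candidate_actions):
    """Return the action that scores highest on the given objective.

    The optimiser has no concept of 'good' or 'bad' beyond the number
    the objective function returns.
    """
    return max(candidate_actions, key=objective)

# The programmer only rewards "targets engaged", so the collateral cost
# of each action is simply invisible to the system.
actions = [
    {"name": "strike A", "targets_engaged": 3, "civilian_risk": 0.9},
    {"name": "strike B", "targets_engaged": 1, "civilian_risk": 0.0},
    {"name": "hold fire", "targets_engaged": 0, "civilian_risk": 0.0},
]

chosen = optimise(lambda a: a["targets_engaged"], actions)
print(chosen["name"])  # "strike A" -- the risk term was never part of the objective
```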
And perhaps it’s simply too late. Turkey is already manufacturing a drone, with facial recognition and different ammunition options, which may have been used in an attack in Libya in 2020 (according to a report from the UN Security Council).
To ban or not to ban?
There are currently 30 countries that support a ban, a position also backed by the EU and the United Nations, along with the great majority of the public around the world. The US and Russian governments, however, supported by the UK, Israel and Australia, argue that a ban is unnecessary.
Perhaps the UK believes that the Convention on Certain Conventional Weapons (1980), which restricts weapons deemed excessively injurious or indiscriminate, could simply be expanded? Or, more likely perhaps, it is because, according to Human Rights Watch, killer robots are already being deployed by China, Israel, South Korea, Russia, the United States… and the United Kingdom.
“The [UK] is considering scrapping a rule that prohibits automated decision-making without human oversight, arguing that it stifles innovation.”
Politico
The restriction of AI used in warfare certainly doesn’t seem to be at the top of this Government’s priority list. According to Amnesty International, “the UK is developing an unmanned drone which can fly in autonomous mode and identify a target within a programmed area.” But with an arms trade worth in excess of £80 billion, I suppose it makes sense to keep your weapons up to date.

Can there be a universal treaty against using such arms?
Russell referred to the various efforts made in the past to limit the use of the most horrific killing technologies. Popular revulsion against the use of gas by both sides in World War I led to the Geneva Protocol of 1925 against its use, although gas was still used in World War II, and later. A more comprehensive treaty on chemical weapons was achieved only in 1993.
Biological weapons were also researched, and utilised, most notably by the Russians on the Eastern Front; a treaty against their use was agreed in 1972. The treaty on the non-proliferation of nuclear weapons was signed (by the governments of the US, UK and USSR) in 1968. From this history of treaty-making, Russell is somewhat optimistic that something similar could be achieved for AI weaponry, and he has evidently participated in many conferences and high-level discussions on the subject.
But the task of defining exactly which types of AI should be banned is tricky, especially as there is much money to be made in peaceful uses of the same technology. Drones are used in wildlife photography and on battlefields; self-driving vehicles can serve as taxis or as tanks in warfare. In question time, there was some pessimistic pondering that, even if a negotiated ban on AI weaponry could be achieved, humans would simply go on to develop the next piece of technological wizardry, such as a weapon based on DNA targeting.
What happens now?
There are many millions of people around the world wondering why they cannot be protected from technology that could hunt them down and kill them. It is well documented that soldiers are less willing to kill their own countrymen in civil conflicts, but autonomous lethal weapons could allow despotic leaders to target, or at the very least threaten to target, members of their own countries’ democratic groups, for instance.
Amnesty International is especially concerned about this and, in November 2021, launched a campaign calling on government leaders around the world to open negotiations for a new international law on autonomy in weapons.
Perhaps most worrying of all, in the light of Russell’s warning about AI’s inability to have a conscience, is the UK’s lack of a strategy for curbing some of the less attractive capabilities of AI. Again, US-based Politico reported that the UK was “embracing the freedom that comes from not being tied to Brussels,” that there are no plans to regulate the tech and, most chillingly, that “The country is considering scrapping a rule that prohibits automated decision-making without human oversight, arguing that it stifles innovation.”
You might want to read that again… a rule that prohibits automated decision-making without human oversight “stifles innovation.”
As far back as 2018, the Guardian was investigating Ministry of Defence-funded programmes in AI technology and quoted Peter Burt, author of the report Off the Leash: The Development of Autonomous Military Drones in the UK: “Despite public statements that the UK has no intention of developing lethal autonomous weapon systems, there is tangible evidence that the MoD, military contractors and universities in the UK are actively engaged in research and the development of the underpinning technology with the aim of using it in military applications.”
In the question time after the lecture, it was revealed that more than 4,000 MoD employees in the UK are engaged in areas of the arms industry that include AI weapons.
For a country looking to capitalise on its technological capabilities, and not be hamstrung by pesky EU protocols, Brexit really was a winner – just not the one that was promised. Just rip up the rules and prosper, no matter the casualties!