What is it, and what does it mean for our future?
In the latest Reith Lectures, Stuart Russell, Professor of Computer Science at the University of California, Berkeley, explored what he considers the most profound change in human history – the rise of artificial intelligence, or AI. Over four lectures, he looked at our fears about the technology and its impact on jobs and the economy, and asked who will ultimately be in control – us, or the machines?
In his first lecture, Russell revealed that Alan Turing – best known for breaking the Enigma code – was examining the core aspects of AI, including machine learning, as far back as 1950. Turing ominously predicted that “Once the machine thinking method had started, it would not take long to outstrip our feeble powers. At some stage therefore we should have to expect the machines to take control”. But it’s not all bad news.
The benefits of AI
Russell explained that AI tries to replicate human intelligence in order to achieve objectives, and so far it has been very successful. AI’s ability to ‘learn’ was first demonstrated when a draughts programme learnt how to beat its own inventor. Advances in logical reasoning and planning have made it possible to develop robots.
Today AI is all around us, used in everything from search engines to self-driving cars. It has the ability to recognise speech – “Siri, turn on the radio!” – as well as objects, from chess pieces to aeroplanes.
At its best, AI can be used to perform the repetitive tasks that humans no longer want to do, and it can do them more cheaply, more efficiently, and at scale. It’s also estimated that AI could produce a ten-fold increase in GDP. With that wealth, people around the world could be freed from monotonous, exhausting, even dangerous work, enjoy a decent standard of living, and have more leisure time. What’s not to love?
The downsides
But Russell also recognises, as Turing did, that AI – which has no objectives of its own – can be put to more sinister purposes too. Algorithms can be used to amplify racial and gender bias, disseminate disinformation, create deep fakes, and facilitate cybercrime. It’s easy to see how such misuse could threaten stability, democracy, even mankind itself.
“Once the machine thinking method had started, it would not take long to outstrip our feeble powers. At some stage therefore we should have to expect the machines to take control.”
Alan Turing
Turing’s prediction that AI could ultimately take control was realised in the post-apocalyptic film ‘Terminator’, in which a cyborg with the capacity for self-thought goes back in time to assassinate a woman whose as-yet-unborn son will grow up to save mankind from hostile AI.
AI and Brexit
Problems arise when we’re not aware of what AI is capable of, or of how it operates behind the scenes. Social media is a case in point: algorithms provide ‘curated’ content for your news feed or viewing choices based on your previous clicks and decisions, anticipating and predicting what you may want to read or watch. Russell suggests that, while these algorithms aren’t intelligent, “They have more power over people’s cognitive intake than any dictator in history.”
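The feedback loop Russell describes can be made concrete with a toy sketch. The example below is entirely hypothetical (the article names, topics and ranking rule are invented for illustration, not how any real platform works): it ranks candidate articles purely by how often the user previously clicked each topic, so a user who has clicked on one subject keeps being shown more of it.

```python
# Toy sketch of click-based content curation.
from collections import Counter

def rank_feed(candidate_articles, click_history):
    """Rank articles by overlap with topics the user clicked before.

    candidate_articles: list of (title, set_of_topics)
    click_history: list of topic strings from past clicks
    """
    topic_weights = Counter(click_history)  # missing topics count as 0

    def score(article):
        _, topics = article
        return sum(topic_weights[t] for t in topics)

    return sorted(candidate_articles, key=score, reverse=True)

# A user who mostly clicked immigration stories sees more of the same.
articles = [
    ("Football results", {"sport"}),
    ("Migration figures rise", {"immigration", "politics"}),
    ("New EU trade rules", {"politics"}),
]
history = ["immigration", "immigration", "politics"]
feed = rank_feed(articles, history)
# → the immigration story is ranked first
```

Even this crude rule never asks whether the content is true or balanced – it only optimises for what the user has already engaged with, which is exactly the dynamic that makes curated feeds so powerful and so opaque.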
“[Algorithms] have more power over people’s cognitive intake than any dictator in history.”
Stuart Russell
The Sovereign Individual (1997), written by James Dale Davidson and Lord William Rees-Mogg (father of the current Leader of the House, Jacob Rees-Mogg), should also give us pause for thought. In it they anticipated crypto-currencies, internet bots and the gig economy. They also prophesied that the internet would “liberate information from the bounds of reality… that you’ll see any story you wish, true or false, unfold on your computer with greater verisimilitude than anything that NBC or the BBC can now muster”, and predicted that the epidemic of disinformation would fragment society and lead to the death of democracy as we know it. With the bots already among us, it’s not hard to believe that dystopian world has arrived.
I’m reminded of Cambridge Analytica’s role in the EU referendum, when it harvested Facebook users’ data in order to serve targeted content, unseen by others, designed to reinforce an individual’s unconscious biases: those worried about immigration, for example, would see content suggesting that if Turkey joined the EU, millions of migrants would come to the UK. This wasn’t true, but it could have persuaded some to vote to leave. As the margin of the ‘win’ was narrow, there is a good chance that the use of AI swung the referendum. Clearly democracy had been damaged – but to whose advantage?*
According to Elaine Kamarck in her 2018 article ‘Malevolent soft power, AI, and the threat to democracy’, there was evidence of Russian interference in the 2016 US presidential election won by Donald Trump. What may not be widely realised is that Russia was also found to have interfered in elections all over the world – including in Ukraine, Scotland, Austria, Belarus, Bulgaria, the Czech Republic, France, Germany, Italy, Malta, Moldova, Montenegro, the Netherlands, Norway, and Spain – as well as in the Brexit vote in Great Britain.
Our government appears to be untroubled by this clear threat to democracy. The long-awaited Russia Report concluded that no interference could be found, but that was because, as Jonathan Lis pointed out in the Guardian in 2020, “neither the British government nor intelligence agencies made any effort to investigate the alleged hacking of the UK’s most significant democratic event in generations.” So, if you don’t look for it, you won’t find it.
Brexit might be just the start. Aside from the obvious self-harm of removing the country from its closest markets, the government’s eagerness to escape ‘red tape’, rules and regulations also leaves us more vulnerable, especially to some of the less attractive aspects of AI.
Cybercrime
It’s an issue that affects everyone, covering everything from hacking and fake websites to bogus emails requesting security information and automated blackmail – and it costs individuals and UK businesses tens of thousands of pounds a year. And it is a business that is growing. According to Cybercrime Magazine in November 2021, if cybercrime were a country, it would be the world’s third largest economy after the US and China. By the end of 2021 it had cost the world $6.5 trillion, a figure predicted to rise to $10.5 trillion by 2025.
Some governments are trying to tackle this with regulation: 50 countries have signed up to the Council of Europe’s Budapest Convention on Cybercrime, which provides for the exchange of information across borders. However, now that the Brexit transition period is over and the UK has left the EU’s political institutions, it is not certain which protocols the UK will remain signed up to.
The UK no longer has a place on the team that manages Europol, the agency that investigates Europe-wide organised crime, and, according to Dr Tim Stevens, Senior Lecturer in Global Security and Head of the King’s Cyber Security Research Group, the UK’s level of access to EU policing and security databases, essential to fighting cybercrime, will be severely diminished.
The EU is also moving to ban machine impersonation of human beings – where, for instance, a phone call that sounds like your wife (but isn’t) asks you to send money. Deep fakes, in which technology synthesises faces and voices to create media showing an event that never happened, are a further concern. Unfortunately, as Russell admits, the safety rules needed to make sure that humanity doesn’t lose control have not yet been written.
But is our government sufficiently worried? Apparently not. A recent article in the US-based Politico reported that “The UK’s strategy, which markedly contrasts [with] the EU’s own AI proposed rules, indicates that it’s embracing the freedom that comes from not being tied to Brussels, and that it’s keen to ensure that freedom delivers it an economic boost.” It goes on to say, “Absent from the strategy are its plans on how to regulate the tech, which has already demonstrated potential harms.”
“Absent from the [UK’s] strategy are its plans on how to regulate the tech, which has already demonstrated potential harms.”
Politico
A ‘Profit before People’ strategy is unlikely to end well for the majority of us. The next article will cover Reith Lecture 2, on the use of AI in warfare.
*In a TED Talk in 2019, award-winning journalist Carole Cadwalladr said that democracy is under serious threat. Cambridge Analytica (once owned by Robert Mercer, who bankrolled Donald Trump, and heavily involved in the Brexit referendum) harvested the profiles of 87 million Facebook users to “understand their fears and better target them with Facebook ads”. She suggested that “spreading lies in darkness with illegal cash, from God-knows-where, is subversion”, and wondered whether it was possible “to have a free and fair election ever again”. Cadwalladr is currently being sued by Arron Banks (who donated £8 million to the Leave.EU campaign group) over a comment she made in that talk.