Apocalypse soon? Tech giants warn of risks of 'AI arms race'

Tesla's Elon Musk and Stephen Hawking are among those calling for regulations to curb the development of lethal autonomous weapons systems and to guard against cyberterrorism

Scare stories have always held a particular sexiness for the media, and AI is no exception. Essentially, they take two forms: one, that a weaponised AI – particularly if harnessed by unsavoury regimes or terrorists – will unleash a wave of death and destruction; and two, that the technology itself will rapidly evolve into a “super-intelligence” far beyond the scope of humans to understand, let alone control. Ultimately, it might even decide that we are just a messy nuisance to be disposed of.

It would be tempting to dismiss both scenarios as the product of a febrile media, if it weren’t for the fact that some of the people who might be expected to be AI’s biggest cheerleaders are among the Cassandras. Take Tesla’s Elon Musk and Stephen Hawking. They joined other scientists in an open letter warning of the dangers of so-called LAWS (lethal autonomous weapons systems), widely seen as an early application of AI. Russian manufacturer Kalashnikov has already announced it is developing a series of AI “combat modules” (killer robots, basically), and China and the US are working on their own LAWS programmes too.

“Autonomous weapons are ideal for tasks such as assassinations, destabilising nations, subduing populations and...
