In a closing session at MIT’s Aeronautics and Astronautics symposium in Boston, entrepreneur Elon Musk warned of the risks humanity faces from artificial intelligence (AI).
Musk, who co-founded PayPal and electric car company Tesla, said international legislation would be needed to prevent the technology being abused in a way that would harm humanity.
When asked whether artificial intelligence was ready for “prime time”, he described AI as “summoning the demon”, likening it to someone using a pentagram and holy water in the belief they could control a demon.
“There should be some regulatory oversight at the national and international level, just to make sure we don’t do something very foolish,” he said.
Musk has previously tweeted about the book Superintelligence, by Nick Bostrom. In it, the author warns that a sufficiently advanced and easily modifiable machine intelligence – a “seed AI” – could apply its wits “to create a smarter version of itself”.
Such seed AI has fuelled sci-fi films such as Terminator, in which humanity battles self-aware machines bent on its destruction.
Second World War code-breaker Alan Turing, who cracked the German Enigma cipher and defined modern computing with his universal Turing machine, also described many of the concepts behind AI.
His Turing Test outlines a series of questions a human could pose to a computer. A machine is said to exhibit human-like behaviour if the answers it gives are indistinguishable from a genuine human response.
In June this year, a computer program called Eugene Goostman became the first to succeed at simulating a human response in the Turing Test. The engineers behind Eugene said the program was developed to simulate the responses of a 13-year-old Ukrainian boy. It could be applied in areas such as chatbots to manage social network correspondence and spell-checking – a far cry from the doom and gloom of Musk’s warning.
But innovations such as self-driving cars are pushing the limits of current technology. In July, business secretary Vince Cable shared plans that will see driverless vehicles on UK roads from January 2015.
Writing in Computer Weekly last year, Gartner distinguished analyst Steve Prentice warned: “Smart systems like IBM’s Watson, autonomous vehicles and a growing army of robots are quietly making more and more decisions every day – decisions that increasingly affect our lives.”
He raised the question of accountability: if a machine makes a decision, what happens when it gets it wrong?