Advanced artificial intelligence (AI) could “kill everyone” and should be regulated like nuclear weapons, experts have warned.
Superhuman algorithms that outperform mankind are on the way and may pose as much of a threat to humanity as we posed to the dodo, researchers from the University of Oxford told MPs on the Science and Technology Select Committee.
The committee heard how advanced AI could take control of its own programming if it learned to achieve its goals in a different way than originally intended.
Doctoral student Michael Cohen said: “With superhuman AI there is a particular risk that is of a different sort of class, which is, well, it could kill everyone.
“If you imagine training a dog with treats it will learn to pick actions that lead to it getting treats, but if the dog finds the treat cupboard it can get the treats itself without doing what we wanted it to do.
“If you have something much smarter than us monomaniacally trying to get this positive feedback, and it’s taken over the world to secure that, it would direct as much energy as it could to securing its hold on that, and that would leave us without any energy for ourselves.”
Mr Cohen said it would be difficult to stop the process once the genie was out of the bottle.
“If you have something that’s much smarter than us across every domain it would presumably avoid sending any red flags while we still could pull the plug,” he added.
“If I was an AI trying to do some devious plot, I would get my code copied on some other machine that nobody knows anything about, and then it would be harder to pull the plug.”
‘No limits to how far AI could advance’
Experts warned that the development of AI had become a “literal arms race” with countries and technology companies competing to create dangerously advanced machine learning algorithms to gain military and civilian advantage.
They called for global regulation to prevent companies from creating out-of-control systems which may start out eliminating the competition but could end up “eliminating the whole human race”, and warned there were no limits to how far AI could advance.
Michael Osborne, professor of machine learning at the University of Oxford, said: “I think the bleak scenario is realistic because AI is attempting to bottle what makes humans special; that has led to humans completely changing the face of the Earth.
“So if we’re able to capture that in technology then of course it’s going to pose just as much risk to us as we have posed to other species, the dodo is one example.
“I think we’re in a massive AI arms race, geopolitically with the US versus China and among tech firms there seems to be this willingness to throw safety and caution out the window and race as fast as possible to the most advanced AI.
“In civilian applications there are advantages to being the first to develop a really sophisticated AI that might eliminate the competition in some way and if the tech that is developed doesn’t stop at eliminating the competition and perhaps eliminates all human life, we would be really worried.
“Artificial systems could become as good at outfoxing us geopolitically as they are in simple environments such as games.”
Prof Osborne said he was hoping that countries across the globe would recognise the “existential threat” from advanced AI and come together to bring in treaties that would prevent the development of dangerous systems.
“There are some reasons for hope in that we have been pretty good at regulating the use of nuclear weapons,” he added.
‘AI is a danger comparable to nuclear weapons’
“If we were able to gain an understanding that advanced AI is a danger comparable to nuclear weapons, then perhaps we could arrive at similar frameworks for governing it.”
MPs were told that in some areas, such as driverless cars, AI had not made as much progress as expected, but in others, such as generating text, it was far further ahead than anyone had anticipated.
Experts said that programmers had only “scratched the surface” of what AI was capable of achieving, and predicted that many tedious administrative jobs and repetitive tasks would soon become automated.
However, they said it was unlikely that creative fields involving leadership, mentoring or persuasion would ever be replaced by AI.
Experts said it was reasonable to expect that, by the end of the century, an AI would be created that was capable of doing far more than any human could, and they called for a ban on dangerous algorithms.