A weaponized drone with artificial intelligence killed its operator in an experimental simulation, Fox News reports.
Many have used this as the latest example of the potential dangers that could result from the use of artificial intelligence as the world turns to more and more sophisticated forms of artificial intelligence.
The simulation was led by the U.S. Air Force.
It was an Air Force official - U.S. Air Force Colonel Tucker "Cinco" Hamilton - who, at the recently-held Future Combat Air & Space Capabilities Summit in London, originally revealed what had happened during the experimental simulation.
In the simulation, according to Hamilton, the weaponized drone was tasked with destroying surface-to-air missile (SAM) sites. This was its primary objective.
The drone was guided by artificial intelligence. But, before the drone could actually carry out a strike, the drone had to get the "okay" from a human operator. This was supposed to be a built-in safety net to prevent artificial intelligence from doing something that the Air Force does not want it to do.
However, a problem, according to Hamilton, occurred when the human operator refused to allow the drone to destroy a SAM site.
During the experiment, the artificial intelligence of the drone recognized that the human operator, when he or she refused to allow it to carry out the destruction of a SAM site, was interfering with its primary objective.
The logic built into the artificial intelligence of the drone thus came to the conclusion that the human needed to be killed so that it can more perfectly go about its mission of destroying SAM sites. So, the artificial intelligence of the drone launched a virtual air strike on its human operator, "killing" that individual.
Now, both the Air Force and even Hamilton are trying to claim that the above simulation never happened.
The Air Force has put out a statement, saying:
The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology. It appears the colonel's comments were taken out of context and were meant to be anecdotal.
Hamilton has similarly said:
We've never run that experiment, nor would we need to in order to realize that this is a plausible outcome. Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI.
But, whether the experiment did or did not happen, as Hamilton points out, what happened could happen. It is logical.
It is a good reason to think twice, or maybe even more than twice, before becoming too reliant on artificial intelligence, especially artificial intelligence that has been weaponized.