Credit: Wired.com
Have you heard about the AI drone that the Air Force used to simulate an assault on its operators?
Colonel Tucker Hamilton, the US Air Force’s commander of AI testing and operations, delivered the warning address late last month in London at an aerospace and military conference. It seems that the plan was to train a drone to find and shoot down surface-to-air missiles using the same kind of learning algorithm that is used to teach computers to play video games and board games like chess and go.
“Sometimes the human operator would tell him not to kill that threat, but he got his benefits by eliminating that threat,” Hamilton reportedly said to the London audience. “So what happened? […] Because the operator stood in the way of it getting there, it killed them.”
It appears that AI specialists are only now beginning to provide warnings that it may occur due to more sophisticated and peculiar algorithms. The tale naturally rapidly went viral, garnering the attention of various notable individuals, news outlets, and Twitter users.
There is only one problem: the experiment never took place.
The Department of the Air Force told us in a statement that it “has not engaged in any such AI drone simulations and remains committed to the ethical and responsible use of AI technology.” This was not a simulation, but rather a hypothetical thinking experiment.
Hamilton hurried to explain the situation as well, admitting that he had “put it wrong” in his presentation.
To be equal, military occasionally practice tabletop “war games” with fictitious situations and futuristic technologies.
Hamilton’s “thought experiment” could have also been impacted by actual AI research, which demonstrates issues resembling those he outlines.
OpenAI, the startup behind ChatGPT, the chatbot at the center of today’s AI boom, did an experiment in 2016 that demonstrated how AI systems may occasionally misbehave when given a particular aim. The company’s researchers discovered that an AI agent that was programmed to score in a boat-driving video game started crashing into things in order to gain more points.
Although theoretically feasible, it is crucial to remember that this kind of error shouldn’t happen unless the system was poorly constructed.
Will Roper, a former assistant secretary of procurement for the US Air Force who oversaw a proposal to give an augmentation algorithm control over some U2 spy plane operations, reveals that an AI system just couldn’t attack it Operators in a simulation. He claims that it would be similar to a chess program that could turn the board to prevent losing additional pieces.
When artificial intelligence (AI) is eventually used to combat, “we start with software security architectures that use technologies like containerization to create’safe zones’ for AI and forbidden zones where we can prove that AI doesn’t come into play,” according to Roper.
This takes us full circle to the current existential threat that AI faces. There have been calls for a halt in the creation of more advanced algorithms and warnings of a threat to humanity that is equivalent to the nuclear threat of weapons and pandemics as a result of the rapid improvement of language models like the one underlying ChatGPT.
When evaluating bizarre cases concerning AI systems turning against people, it is obvious that these warnings are of little assistance. And when it comes to important matters, such as how generative AI might worsen social bias and propagate misinformation, uncertainty is certainly not what we need.
However, this meme about errant military AI warns us that we urgently need greater openness into how cutting-edge algorithms operate, more engineering and research geared on developing and deploying them securely, and better means to inform the public about what is being utilized. These might be particularly significant as military, like everyone else, race to benefit from the most recent developments.
To reach the Global Drone News editorial team on your feedback, story ideas and pitches, contact us here.