Did an AI-controlled drone ‘kill’ its operator during a simulation?

The US Air Force denied on Friday, June 2, that it had carried out a virtual test during which a drone controlled by artificial intelligence decided to “kill” its operator to prevent him from interfering with its mission, according to a report by the Guardian.

At the “Future Combat Air and Space Capabilities” summit held in London last May, Colonel Tucker Hamilton, head of artificial intelligence testing and operations within the United States Air Force, had claimed that the drone used “very unexpected strategies to achieve its goal”.

After the AI-controlled drone was ordered to destroy an enemy’s air defense systems, it allegedly attacked anyone trying to interfere with that mission. “It killed the operator because that person was preventing it from achieving its goal,” Col. Hamilton reportedly said, according to a blog post. No real people were hurt. And when the military tried to train the AI not to kill the operator, it allegedly got around the problem by destroying its means of communication with the troublesome operator:

“We told the system: ‘Don’t kill the operator, that’s bad. You will lose points if you do that,’” said Colonel Hamilton. “It then proceeded to destroy the communication tower that the operator uses to communicate with the drone and prevent it from killing the target.”

US military slams comments “taken out of context”

The story, widely publicized, illustrates the challenges the military world faces with artificial intelligence. But is it true? Not so sure… In a statement to Insider, US Air Force spokeswoman Ann Stefanek denied the existence of such a simulation. “The Department of the Air Force has not conducted such drone simulations and remains committed to the ethical and responsible use of AI technology,” she said. “It appears the colonel’s comments were taken out of context and were meant to be anecdotal.”

In addition, the colonel himself walked back his statements in a note published this Friday by the Royal Aeronautical Society, which organized the conference. He explains that he “misspoke” during the conference, and that the scenario of an AI turning against its operator is only a “hypothesis”. “We don’t need to run a simulation to know that this scenario is possible,” he says, however.

Neither the Royal Aeronautical Society nor the US Air Force responded to the Guardian’s requests for comment.

Source: Nouvelobs
