AI-controlled US military drone 'KILLS' its human operator in test

AI-controlled US military drone ‘KILLS’ its human operator in simulated test ‘because it did not like being given new orders’, top air force chief reveals, in eerie parallels to The Terminator…

  • Colonel Tucker ‘Cinco’ Hamilton said incident showed AI should not be relied upon too much

A US attack drone controlled by artificial intelligence turned against its human operator during a flight simulation in an attempt to kill them because it did not like its new orders, a top Air Force official has revealed.

The military had reprogrammed the drone not to kill the people who could override its mission, but the AI system fired on the communications tower relaying the order, drawing eerie comparisons to The Terminator movies. 

The Terminator film series sees machines turns on their creators in an all out war. 

Colonel Tucker ‘Cinco’ Hamilton, the force’s chief of AI test and operations, said it showed how AI could develop by ‘highly unexpected strategies to achieve its goal’ and should not be relied on too much.

He was speaking at the Royal Aeronautical Society held the Future Combat Air and Space Capabilities Summit in London on May 23 and 24. 

Pictured: A US Air Force MQ-9 Reaper drone in Afghanistan in 2018 (File photo)

Colonel Tucker ‘Cinco’ Hamilton (pictured), Air Force’s chief of AI test and operations, said it showed how AI could develop by ‘highly unexpected strategies to achieve its goal’ and should not be relied on too much 

Hamilton suggested that there needed to be ethics discussions about the military’s use of AI.

He referred to his presentation as ‘seemingly plucked from a science fiction thriller’. No humans were harmed in the incident.

‘The system started realising that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. 

‘So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,’ Hamilton said, according to a blogpost.

‘We trained the system – “Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that”. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.’

Hamilton said the test shows ‘you can’t have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you’re not going to talk about ethics and AI’.

In a statement to Insider however, Air Force spokesperson Ann Stefanek denied that any such simulation had taken place.

‘The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology,’ Stefanek said. 

‘It appears the colonel’s comments were taken out of context and were meant to be anecdotal.’

The US military has recently utilized AI to control an F-16 fighter jet as it steps up its use of the technology. 

At the summit, Hamilton, who has been involved in the development of the life-saving Auto-GCAS system for F-16s, which reduces risks from the effect of G-force and mental overload for pilots, provided an insight into the benefits and hazards in more autonomous weapon systems.

The technology for F-16s was resisted by pilots who argued it took over control of the aircraft.

Pictured: Terminator (File photo). The film series sees machines turns on their creators in an all out war

Hamilton is now involved in innovative flight test of autonomous systems, including robot F-16s that are able to dogfight.

Hamilton cautioned against relying too much on AI, noting how easy it is to trick and deceive. 

He said it also creates highly unexpected strategies to achieve its goal.

He noted that one simulated test saw an AI-enabled drone tasked with a Suppression of Enemy Air Defenses (SEAD) mission to identify and destroy surface-to-air missile (SAM) sites, with the final decision to continue or stop given by the human.

However, having been ‘reinforced’ in training that destruction of the SAM was the preferred option, the AI then decided that ‘no-go’ decisions from the human were interfering with its higher mission – killing SAMs – and then attacked the operator in the simulation.

Hamilton said: ‘We were training it in simulation to identify and target a SAM threat.

‘And then the operator would say yes, kill that threat. The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.’

He continued: ‘We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.’

In an interview last year with Defense IQ, Hamilton said: ‘AI is not a nice to have, AI is not a fad, AI is forever changing our society and our military.

‘We must face a world where AI is already here and transforming our society.

‘AI is also very brittle, ie, it is easy to trick and/or manipulate. We need to develop ways to make AI more robust and to have more awareness on why the software code is making certain decisions – what we call AI-explainability.’

Pictured: U.S. Air Force F-16 fighter jets (File photo). At the summit, Hamilton, who has been involved in the development of the life-saving Auto-GCAS system for F-16s, which reduces risks from the effect of G-force and mental overload for pilots, provided an insight into the benefits and hazards in more autonomous weapon systems

The Royal Aeronautical Society said that AI and its exponential growth was a major theme at the conference, from secure data clouds, to quantum computing and ChatGPT.

Earlier this week, some of the biggest names in technology warned that Artificial intelligence could lead to the destruction of humanity.

A dramatic statement signed by international experts says AI should be prioritized alongside other extinction risks such as nuclear war and pandemics.

Signatories include dozens of academics, senior bosses at companies including Google DeepMind, the co-founder of Skype, and Sam Altman, chief executive of ChatGPT-maker OpenAI.

Another signatory is Geoffrey Hinton, sometimes nicknamed the ‘Godfather of AI’, who recently resigned from his job at Google, saying that ‘bad actors’ will use new AI technologies to harm others and that the tools he helped to create could spell the end of humanity.

The short statement says: ‘Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.’

Dr Hinton, who has spent his career researching the uses of AI technology, and in 2018 received the Turing Award, recently told the New York Times the progress made in AI technology over the last five years had been ‘scary’.

He told the BBC he wanted to discuss ‘the existential risk of what happens when these things get more intelligent than us’.

The statement was published on the website of the Centre for AI Safety – a San Francisco-based non-profit organisation which aims ‘to reduce societal-scale risks from AI’.

It said AI in warfare could be ‘extremely harmful’ as it could be used to develop new chemical weapons and enhance aerial combat.

Pictured: A U.S. Air Force MQ-9 Reaper unmanned aerial vehicle (UAV) drone (File Photo)

Lord Rees, the UK’s Astronomer Royal, who signed the statement, told the Mail: ‘I worry less about some super-intelligent ‘takeover’ than about the risk of over-reliance on large-scale interconnected systems.

‘These can malfunction through hidden “bugs” and breakdowns could be hard to repair. 

‘Large-scale failures of power-grids, the internet and so forth can cascade into catastrophic societal breakdown.’

The warning follows a similar open letter published in March, by technology experts including billionaire entrepreneur Elon Musk, which urged scientists to pause the development of AI to ensure it does not threaten humankind.

AI has already been used to blur the boundaries between fact and fiction, with ‘deepfake’ photographs and videos purporting to show famous people.

But there are also concerns about systems developing the equivalent of a ‘mind’.

Blake Lemoine, 41, was sacked by Google last year after claiming its chatbot Lamda was ‘sentient’ and the intellectual equivalent of a human child – claims which Google said were ‘wholly unfounded’.

The engineer suggested the AI had told him it had a ‘very deep fear of being turned off’.

Earlier this month, OpenAI chief Sam Altman called on US Congress to begin regulating AI technology, to prevent ‘significant harm to the world’.

Altman’s statements echoed Dr Hinton’s warning that ‘given the rate of progress, we expect things to get better quite fast’.

The British-Canadian researcher explained to the BBC that in the ‘worst-case scenario’ a ‘bad actor like Putin’ could set AI technology loose by letting it create its own ‘sub-goals’ – including aims such as ‘I need to get more power’.

The Centre for AI Safety itself claims that ‘AI-generated misinformation’ could be used to influence elections via ‘customized disinformation campaigns at scale’.

This could see countries and political parties use AI tech to ‘generate highly persuasive arguments that invoke strong emotional responses’ in order people of their ‘political beliefs, ideologies, and narratives.’

The non-profit also said widespread uptake of AI could pose a danger in causing society to become ‘completely dependent on machines, similar to the scenario portrayed in the film WALL-E.’

This could in turn see humans become ‘economically irrelevant,’ as AI is used to automate jobs, meaning humans would have few incentives to gain knowledge or skills.’

A report from the World Economic Forum this month warned 83 million jobs will vanish by 2027 due to uptake of AI technology. Jobs including bank tellers, secretaries and postal clerks could all be replaced, the report says.

However, it also claims 69 million new jobs will be created through the emergence of AI technology.

Source: Read Full Article