AI could replace human military commanders in making life or death decisions

Modern military operations – whether combat, medical or disaster relief – require complex decisions to be made very quickly, and AI could be used to make them.

The Defense Advanced Research Projects Agency (DARPA) has launched a new program aimed at introducing artificial intelligence into the decision-making process.

This is because, in a real-world emergency that requires instant choices about who does and doesn’t get help, the answer isn’t always clear and people disagree over the correct course of action – whereas an AI could make the decision quickly.

The latest DARPA initiative, called ‘In the Moment’, will involve new technology that could make difficult decisions in stressful situations, using live analysis of data such as the condition of patients in a mass-casualty event and the availability of drugs.

It comes as the U.S. military increasingly leans on technology to reduce human error, with DARPA arguing that removing human bias from decision-making will ‘save lives’.

The new AI will take two years to train, then another 18 months to prepare before it is likely to be used in a real-world scenario, according to DARPA.

‘AI is great at counting things,’ Sally A. Applin, an expert in the interaction of AI and ethics, told The Washington Post, adding: ‘I think it could set a precedent by which the decision for someone’s life is put in the hands of a machine.’

According to DARPA, the technology is only part of the problem when it comes to switching to AI decision-making; the rest lies in building human trust.

‘As AI systems become more advanced in teaming with humans, building appropriate human trust in the AI’s abilities to make sound decisions is vital,’ a spokesperson for the military research organization explained.

‘Capturing the key characteristics underlying expert human decision-making in dynamic settings and computationally representing that data in algorithmic decision-makers may be an essential element to ensure algorithms would make trustworthy choices under difficult circumstances.’ 

DARPA announced the In the Moment (ITM) program earlier this month, with the first task being to work with trusted human decision-makers to explore the best options when there is no obvious, agreed-upon right answer.

‘ITM is different from typical AI development approaches that require human agreement on the right outcomes,’ said Matt Turek, ITM program manager. 

‘The lack of a right answer in difficult scenarios prevents us from using conventional AI evaluation techniques, which implicitly requires human agreement to create ground-truth data.’

For example, algorithms used by self-driving cars can be trained against a ground truth for right and wrong driving responses, derived from traffic signs and the rules of the road.

When the rules don’t change, hard-coded risk values can be used to train the AI, but this won’t work for the Department of Defense (DoD).
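
To make the distinction concrete, here is a minimal, hypothetical sketch of what hard-coded risk values look like in the self-driving setting: every action carries a fixed risk number, so the safest choice can be read straight off a table. The action names and values below are invented for illustration.

```python
# Hypothetical illustration: a self-driving-style policy can rank actions
# using fixed, hard-coded risk values, because the rules of the road are stable.
HARDCODED_RISK = {
    "run_red_light": 1.0,       # always maximally risky
    "exceed_speed_limit": 0.7,  # risky, but less so
    "proceed_on_green": 0.1,    # routine
}

def safest_action(candidate_actions):
    """Pick the candidate action with the lowest fixed risk value."""
    return min(candidate_actions, key=HARDCODED_RISK.get)

print(safest_action(["run_red_light", "proceed_on_green"]))  # proceed_on_green
```

In a combat scenario, the risk of the very same action shifts with the situation and the commander’s intent, so there is no stable table like this to train against.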

Fully autonomous Black Hawk helicopter takes to the skies without a pilot for the first time 

A fully autonomous Black Hawk helicopter has taken to the skies over the US without a human pilot on board for the first time.

A partnership between Lockheed Martin’s Sikorsky and the Defense Advanced Research Projects Agency (DARPA), it took off from Fort Campbell in Kentucky on February 5.

Without anyone on board, the UH-60A Black Hawk completed a 30-minute flight above the US Army installation, with a second flight held on February 7.

It comes with an optionally piloted cockpit that can be switched from piloted to autonomous mode – allowing an onboard computer brain to control the vehicle.

During the flight the Aircrew Labor In-Cockpit Automation System (ALIAS) autonomous pilot was presented with a series of simulated obstacles to overcome.

It had to execute a series of pedal turns, maneuvers and straightaways before carrying out a perfect landing back on the Fort Campbell runway – without any human intervention.

The autonomous helicopter could be used to deliver supplies to dangerous war zones, or recover soldiers without risking a pilot. 

‘Baking in one-size-fits-all risk values won’t work from a DoD perspective because combat situations evolve rapidly, and commander’s intent changes from scenario to scenario,’ Turek said. 

‘The DoD needs rigorous, quantifiable, and scalable approaches to evaluating and building algorithmic systems for difficult decision-making where objective ground truth is unavailable. 

‘Difficult decisions are those where trusted decision-makers disagree, no right answer exists, and uncertainty, time-pressure, and conflicting values create significant decision-making challenges.’ 

To solve the problem, DARPA is taking inspiration from the medical imaging analysis field.

In this area, techniques have been developed for evaluating systems even when skilled experts may disagree. 

‘Building on the medical imaging insight, ITM will develop a quantitative framework to evaluate decision-making by algorithms in very difficult domains,’ Turek said. 

‘We will create realistic, challenging decision-making scenarios that elicit responses from trusted humans to capture a distribution of key decision-maker attributes. 

‘Then we’ll subject a decision-making algorithm to the same challenging scenarios and map its responses into the reference distribution to compare it to the trusted human decision-makers.’
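
As a rough illustration of that evaluation – a sketch of the general idea, not DARPA’s published method – the toy Python below builds a reference distribution from the choices of a panel of trusted humans on one difficult scenario, then measures how far an algorithm’s responses sit from that distribution. All names and numbers are invented.

```python
import numpy as np

# Hypothetical toy data: for one difficult triage scenario, 20 trusted human
# decision-makers each chose among three defensible options (no single
# 'right' answer exists).
human_choices = np.array([0] * 9 + [1] * 8 + [2] * 3)

# Reference distribution of trusted human decisions over the three options.
reference = np.bincount(human_choices, minlength=3) / len(human_choices)

# The algorithm's (assumed) output: a probability over the same options.
algorithm = np.array([0.50, 0.35, 0.15])

def total_variation(p, q):
    """Half the L1 distance between two discrete distributions (0 = identical)."""
    return 0.5 * np.abs(p - q).sum()

print(f"human reference: {reference}")  # [0.45 0.4 0.15]
print(f"algorithm:       {algorithm}")
print(f"divergence:      {total_variation(reference, algorithm):.3f}")  # 0.050
```

On this scheme, an algorithm whose responses fall well inside the spread of trusted human answers would score as trustworthy, even though no single answer was ever labelled correct.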

The program has four technical areas, covering different aspects of research.

The first looks at decision-maker characterization, which aims to identify the key attributes of humans tasked with making decisions in the field.

The second will create a score comparing a human decision-maker and an algorithm – with the goal of producing algorithmic decisions that humans can trust.

The third will build a program, based on these scores, that can be evaluated, and the fourth will develop policy and practice for its use.

It will be three and a half years before the final stage is reached, according to DARPA, with the first two years spent building a basic AI and testing it on different scenarios.

The second half, covering the final 18 months, will involve expanding the capabilities of the AI and testing it on more complex events with multiple casualties.    

NATO is also working to create AI assistants that can help with decision-making – in this case a triage assistant, developed in collaboration with Johns Hopkins University.

Colonel Sohrab Dalal, head of the medical branch for NATO’s Supreme Allied Command Transformation, told The Washington Post that the triage process could do with a refresh.

Triage – the process by which clinicians assess how urgently wounded soldiers need care – hasn’t changed much in the past 200 years.

His team will use NATO injury data, alongside casualty scoring systems, predictions and input on a patient’s condition, to pick who should get care first.
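
The article doesn’t spell out NATO’s actual scoring formula, but the flavour of such a triage assistant can be sketched with a toy priority function combining the kinds of inputs Dalal describes – injury severity, vital signs and a survival prediction. Every field, weight and formula below is purely illustrative.

```python
from dataclasses import dataclass

@dataclass
class Casualty:
    injury_severity: float     # 0 (minor) to 75 (unsurvivable), ISS-style scale
    vitals_score: float        # 0 (no signs of life) to 12 (normal), T-RTS-style
    predicted_survival: float  # model output in [0, 1]

def triage_priority(c: Casualty) -> float:
    """Toy priority: favour severe injuries with deteriorating vital signs whose
    survival odds are still good. Real triage scoring is far more involved."""
    return (c.injury_severity / 75) * c.predicted_survival * (1 - c.vitals_score / 12)

casualties = [
    Casualty(injury_severity=50, vitals_score=6, predicted_survival=0.80),
    Casualty(injury_severity=70, vitals_score=2, predicted_survival=0.10),
    Casualty(injury_severity=20, vitals_score=10, predicted_survival=0.95),
]

# Treat the highest-priority casualty first.
for c in sorted(casualties, key=triage_priority, reverse=True):
    print(round(triage_priority(c), 3), c)
```

The appeal of automating this step is that the weighing of severity against survivability happens consistently and instantly, rather than resting on a clinician’s snap judgement.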

‘It’s a really good use of artificial intelligence,’ Dalal, a trained doctor, said. ‘The bottom line is that it will treat patients better [and] save lives.’ 

HOW ARTIFICIAL INTELLIGENCES LEARN USING NEURAL NETWORKS

AI systems rely on artificial neural networks (ANNs), which try to simulate the way the brain works in order to learn.

ANNs can be trained to recognise patterns in information – including speech, text data, or visual images – and are the basis for a large number of the developments in AI over recent years.

Conventional AI uses input to ‘teach’ an algorithm about a particular subject by feeding it massive amounts of information.   
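
As a miniature of that teach-by-example process, the sketch below trains a single artificial neuron – the basic unit of an ANN – to recognise one simple pattern (the logical AND of two inputs) from labelled examples. It is a deliberately tiny stand-in for the large networks described above.

```python
import numpy as np

rng = np.random.default_rng(0)
weights, bias = rng.normal(size=2), 0.0  # start with random parameters

inputs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
targets = np.array([0, 0, 0, 1], dtype=float)  # the pattern to learn: AND

for _ in range(5000):  # 'massive amounts of information', in miniature
    preds = 1 / (1 + np.exp(-(inputs @ weights + bias)))  # sigmoid activation
    error = preds - targets
    weights -= 0.5 * inputs.T @ error / len(inputs)  # gradient descent step
    bias -= 0.5 * error.mean()

preds = 1 / (1 + np.exp(-(inputs @ weights + bias)))
print(np.round(preds))  # approaches [0. 0. 0. 1.]
```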

Practical applications include Google’s language translation services, Facebook’s facial recognition software and Snapchat’s image-altering live filters.

The process of inputting this data can be extremely time-consuming, and is limited to one type of knowledge.

A new breed of ANNs, known as generative adversarial networks (GANs), pits two AI bots against each other, which allows them to learn from each other.

This approach is designed to speed up the process of learning, as well as refining the output created by AI systems. 
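
The sketch below caricatures that adversarial set-up on one-dimensional data: a ‘generator’ (here just a single mean parameter) tries to produce samples that look like the real data, while a ‘discriminator’ (a single logistic unit) learns to tell real from fake, each forcing the other to improve. Real systems pit full neural networks against each other; all values here are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
w, b = 0.1, 0.0   # discriminator parameters (one logistic unit)
mu = -2.0         # generator parameter: mean of its fake samples
lr = 0.05

def D(x):
    """Discriminator: estimated probability that x is real data."""
    return 1 / (1 + np.exp(-(w * x + b)))

for _ in range(5000):
    real = rng.normal(4.0, 0.5, size=32)       # real data, centred on 4.0
    fake = mu + rng.normal(0.0, 0.5, size=32)  # generator's samples

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    grad_w = ((D(real) - 1) * real).mean() + (D(fake) * fake).mean()
    grad_b = (D(real) - 1).mean() + D(fake).mean()
    w -= lr * grad_w
    b -= lr * grad_b

    # Generator step: shift mu so the discriminator rates fakes as real.
    mu -= lr * ((D(fake) - 1) * w).mean()

print(f"generator mean after training: {mu:.2f} (real data centred on 4.0)")
```

Each round, whichever bot is behind receives a training signal from the other – the feedback loop that speeds up learning compared with a lone network studying a fixed dataset.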
