Autonomous Killing Machine
In February, the U.S. Army asked experts for ideas on how to build a system that would allow tanks and other ground-combat vehicles to quickly and automatically “acquire, identify, and engage” targets.
Some observers saw this as a step toward autonomous killer robots, prompting the Army to tweak its request.
Yes, it now says, it wants bots to be able to identify and kill targets. But that doesn’t mean “we’re putting the machine in a position to kill anybody,” an Army official told Defense One.
Just a Misunderstanding
According to the Defense One story, the Army decided to revise its request for information to make it clear that the Advanced Targeting and Lethality Automated System (ATLAS) would not violate the Defense Department’s policy requiring that a human always make the decision to use lethal force.
To that end, it added the following paragraph to the request:
All development and use of autonomous and semi-autonomous functions in weapon systems, including manned and unmanned platforms, remain subject to the guidelines in the Department of Defense (DoD) Directive 3000.09, which was updated in 2017. Nothing in this notice should be understood to represent a change in DoD policy towards autonomy in weapon systems. All uses of machine learning and artificial intelligence in this program will be evaluated to ensure that they are consistent with DoD legal and ethical standards.
Bob Stephan, the ATLAS project officer at Picatinny Arsenal, also clarified how the military envisions ATLAS working alongside human soldiers.
“The soldier would have to depress the palm switch to initiate firing,” he told Breaking Defense. “If that is never pulled down, the firing pin will never get to the weapon… That’s how we will make sure ATLAS never is allowed to fire autonomously.”