Pentagon Urges 'Appropriate Levels of Human Judgment' When Dealing With Killer Robots

  • The Department of Defense wants a transparent policy regarding military use of autonomous and artificial intelligence-led weapons.

  • The new directive is an update a decade in the making, reflecting ever-changing technology.

  • The Pentagon is now focused on “appropriate levels of human judgment.”


The United States Department of Defense calls them “autonomous systems.” Others call them “killer robots.” Whatever term you prefer, the Pentagon has laid out an updated directive for how it plans to deal with autonomous and artificial intelligence-led weapons. And it may not be what everyone else in the world wants to see.

“DoD is committed to developing and employing all weapon systems, including those with autonomous features and functions, in a responsible and lawful manner,” Kathleen Hicks, deputy secretary of defense, says in a news release announcing an update to DoD Directive 3000.09, Autonomy in Weapon Systems.

She continues:

“Given the dramatic advances in technology happening all around us, the update to our Autonomy in Weapon Systems directive will help ensure we remain the global leader of not only developing and deploying new systems, but also safety.”

The directive accounts for major technological evolution over the past decade, and the Pentagon says the update establishes a goal of minimizing the “probability and consequences of failures in autonomous and semi-autonomous weapon systems that could lead to unintended engagements.”

While some across the world call for a ban on all remote weapons, the United Nations has spent years discussing the concept. The level of human interaction with autonomous weapons is the main sticking point—well, that and the concern that the weapons could somehow go rogue. For the U.S., that level of human engagement just needs to be “appropriate.”

“Autonomous and semi-autonomous weapon systems will be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force,” the directive states. The DoD says people who authorize the use of the systems “will do so with appropriate care and in accordance with the law of war, applicable treaties, weapon system safety rules, and applicable rules of engagement.”

The DoD directive still permits the use of systems incorporating AI capabilities, so long as they follow the DoD’s AI Ethical Principles and the Responsible AI Strategy and Implementation Pathway.
