Joshua Foust
National Journal
October 11, 2013
Eventually, drones may have the technical ability to make even lethal decisions autonomously: to respond to a programmed set of inputs, select a target, and fire their weapons without a human reviewing or checking the result. Yet the idea of the U.S. military deploying a lethal autonomous robot, or LAR, is sparking controversy. Though autonomy might address some of the current downsides of how drones are used, it introduces new downsides that policymakers are only beginning to grapple with.
The basic conceit behind a LAR is that it can outperform and outthink a human operator. “If a drone’s system is sophisticated enough, it could be less emotional, more selective and able to provide force in a way that achieves a tactical objective with the least harm,” said Purdue University Professor Samuel Liles. “A lethal autonomous robot can aim better, target better, select better, and in general be a better asset with the linked ISR [intelligence, surveillance, and reconnaissance] packages it can run.”
Though the pace of drone strikes has slowed (only 21 have hit Pakistan in 2013, versus 122 in 2010, according to the New America Foundation), unmanned vehicles remain a staple of the American counterinsurgency toolkit. But drones have built-in vulnerabilities that military planners have yet to address. Last year, for example, an aerospace engineer told the House Homeland Security Committee that, with some inexpensive equipment, he could hack into a drone and hijack it for some rogue purpose.
Drones have been hackable for years. In 2009, defense officials told reporters that Iranian-backed militias used $26 worth of off-the-shelf software to intercept the video feeds of drones flying over Iraq. And in 2011, it was reported that a virus had infected some drone control systems at Creech Air Force Base in Nevada, raising concerns about the security of unmanned aircraft.