It’s that time of year. As we count down the final moments of 2016, we watch reviews of the year that make us laugh and cry, prepare our New Year’s resolutions and mean it this time, and take inventory of prediction after prediction about what’s in store in 2017 for celebrities, for techies, for Wall Street. No matter who you are or what your interests, there’s a list of predictions tailored for your planning purposes. Here’s one to watch.


The battle between thinking humans and artificial intelligence is about to begin. Before Christmas, the United Nations in all seriousness decided that 2017 is the year it begins serious consideration of the genuinely complex moral and ethical questions surrounding deployment of what the United Nations Office for Disarmament Affairs calls Lethal Autonomous Weapons Systems (LAWS), more affectionately known as killer robots. Human Rights Watch first brought the issue to the UN back in 2012 in “Losing Humanity: The Case Against Killer Robots,” which is well worth a read. “With the rapid development and proliferation of robotic weapons,” the report begins, “machines are starting to take the place of humans on the battlefield. Some military and robotics experts have predicted that ‘killer robots’—fully autonomous weapons that could select and engage targets without human intervention—could be developed within 20 to 30 years.”

Twenty to 30 years came pretty quickly. In October, the Washington Post’s Vivek Wadhwa and Aaron Johnson reported, “The United States has on its Aegis-class cruisers a defense system that can track and destroy anti-ship missiles and aircraft. Israel has developed a drone, the Harpy [sic, it’s actually HAROP, but Harpy sounds more imposing], that can detect and automatically destroy radar emitters. South Korea has security-guard robots on its border with North Korea that can kill humans.” Here’s the clincher: “All of these can function autonomously,” Wadhwa and Johnson wrote, “without any human intervention.” So, Human Rights Watch was all over it, though its timeline was way too conservative.


Earlier this month, Human Rights Watch announced that in Geneva, UN member states “agreed to formalize their efforts next year to deal with the challenges raised by weapons systems that would select and attack targets without meaningful human control.” Human Rights Watch arms division director Steven Goose wrote for the very serious StopKillerRobots.org, “Governments have heeded the call of civil society to formalize and expand their deliberations on lethal autonomous weapons systems next year. The decision taken today by 89 nations at the Convention on Conventional Weapons (CCW) to establish a Group of Governmental Experts brings the world another step closer towards a prohibition on the weapons.” I think most would agree that Goose is way too glass-half-full about prohibition.

It’s highly unlikely the United Nations will put a dent in the march of technology toward more prolific LAWS. Still, these will be discussions to watch, because they will grow the currently embryonic moral and ethical debate about how artificial intelligence will take its place in our world. And here’s one place you can begin: “Morals and Ethics in Drone and Robotic Warfare.”

Ed Ledford enjoys the most challenging, complex, and high-stakes communications requirements. His portfolio includes everything from policy and strategy to poetry. A native of Asheville, N.C., and a retired Army aviator, Ed is currently writing speeches in D.C. and working on other writing projects from his office in Rockville, MD. He loves baseball and enjoys hiking, camping, and exploring anything. Follow Ed on Twitter @ECLedford.