Autonomous weapons are not a new concept. The oldest known automatically triggered lethal weapon is the landmine, a simple device that is typically binary in nature, with no additional intelligence. But continual advances in war-fighting technology, lethality, and speed are rapidly changing the battle space.
Robotics and AI Make an Impact
The incorporation of robotics and artificial intelligence (AI) into high-tech autonomous and semi-autonomous weaponry is new. Countries around the world are acquiring and integrating a multitude of rapidly changing technologies into their battle systems, plans, and operations.
In engineering, “autonomous” refers to a machine’s ability to operate without human involvement. Not having a “human in the loop” gives weapon systems a rapid response capability. Lethal autonomous weapon systems (LAWS) were originally designed to be defensive in nature. Active protection systems, such as the Phalanx, a radar-guided close-in weapon system (CIWS), have been used to defend ships since the 1970s. Generally, these systems autonomously identify and attack oncoming projectiles, protecting personnel and installations. Similar capabilities exist on some tanks, such as the Russian Arena and the Israeli Trophy. Many missile defense systems, such as Iron Dome, also have autonomous targeting capabilities.
Defense to Offense Capabilities
However, AI is enabling a rapid transition from defensive autonomous weapons to offensive LAWS, and the lines between the two are rapidly blurring. Currently, the Israel Defense Forces (IDF) are deploying one of the first military robots in the world, replacing soldiers on their borders. According to the IDF website, “Jaguar” is being integrated into the Gaza Division in southern Israel to protect the border. The semi-autonomous robot is equipped with numerous sensors, an automated driving system, advanced fire capabilities, and a public address (PA) system. According to the IDF, Jaguar will provide non-stop (24/7/365) surveillance on the Israeli-Gaza border, identifying and thwarting terrorist activities. Carrying a machine gun, Jaguar can drive to designated locations, return fire, and even self-destruct when compromised. However, the robot needs a human operator to initiate firing of the machine gun. Last week, Interesting Engineering News reported that Israel deployed Jaguar during the Gaza conflict in May.
In 2018, U.N. Secretary-General António Guterres stated that “Autonomous machines with the power and discretion to select targets and take lives without human involvement are politically unacceptable, morally repugnant and should be prohibited by international law.” The U.N. chief also noted in his statement that this represented a “line in the sand” that could not be crossed. At that time, some Member States believed new legislation would be required, while others preferred to agree on less stringent measures.
The Line in the Sand Has Been Crossed
A new U.N. report in March concluded that at least one AI-operated autonomous drone may have killed people in Libya. This would be the first documented attack by a drone acting without any human consultation. The report stated the autonomous aircraft may have “hunted down and remotely engaged” soldiers and convoys fighting for Libyan general Khalifa Haftar in the spring of 2020. It is still unknown who exactly deployed these killer drones.
However, remnants of a “Kargu-2” drone, produced by a Turkish military contractor, were found in Libya. Turkish state-owned defense company STM, producer of the Kargu-2, states on its website that the Kargu is a rotary-wing attack drone that can be deployed and operated by a single person in both autonomous and manual modes. STM states, “The system is engineered specifically for anti-terror and asymmetric warfare scenarios, responding against stationary or mobile targets with swarming capability.” The Kargu-2 is reported to use embedded real-time image processing with machine learning algorithms, including facial recognition software. STM recently stated that the Kargu is not capable of launching fully autonomous attacks on targets, countering the U.N. report on Libya and autonomous weapons.
Last week, the Washington Post reported that the Pentagon believes a ban on AI weapons is unnecessary and that concerns are overblown, since humans can effectively control autonomous weapons. The Russian government’s stance is simple: it says true AI weapons do not exist and therefore cannot be banned.
However, the U.N. Secretary-General’s “line in the sand” has now been crossed. Autonomous offensive weapons are here and being used on the battlefield. The recent conflict between Turkey and Syria and the Libyan civil wars effectively demonstrate that weapons may be making their own decisions and killing people.