To kill or not to kill? Do autonomous weapons respect IHL when taking life and death decisions over humans?


Since 2016, a Group of Governmental Experts on emerging technologies has been meeting in Geneva to discuss the latest developments in the field of autonomous weapons. The peculiarity of these armaments is that not only do they move around without the assistance of a remote pilot, but they can also identify targets and even engage them with lethal force, without needing inputs or authorisations from human operators. The possibility that machines make life-and-death decisions over humans on their own raises several concerns, relating in particular to their capacity to abide by International Humanitarian Law (IHL). Indeed, the application of IHL norms entails the ability to carry out complex evaluations, based on shifting factors, that appear beyond the reach of robots at the current state of technology. On the other hand, some characteristics of autonomous weapons may make them the best option for minimising collateral damage. These contradictions have given rise to a heated and still ongoing debate regarding the advisability of a ban on autonomous weapons. This study aims to identify the core issues linked to the employment of autonomous weapons from an IHL perspective, and to analyse the feasibility of possible solutions.