Killer robots: a humanitarian perspective
The laws of war and the protection of human beings require human engagement. Humans must be present to make decisions about the use of force in every individual attack. A preemptive ban on lethal autonomous weapons systems (LAWS) is necessary to ensure the retention of meaningful human control over targeting and attack decisions.
Existing international law is neither strong enough nor clear enough to prevent the development of LAWS. International humanitarian law (IHL) governs the use of weapons systems, but the development, production, deployment, and stockpiling of LAWS must also be outlawed in order to guard against proliferation.
It is imperative to consider the potential human rights implications of lethal autonomous weapons systems in order to adequately assess the legality of these weapons. While complementary to IHL during armed conflict, international human rights law (IHRL) seeks to protect human life and dignity at all times.
Morality and ethics must also be part of the discussion. The inability of a machine to engage in moral reasoning—to consider the implications of an action in relation to the value of human life—means that machines should not be programmed to exercise control over the use of violent force.
Based on the experience with armed drones, there is a clear danger that such weapons will be used outside the geographical scope of established armed conflicts. Beyond the battlefield, possible scenarios for the use of LAWS include counter-terrorism operations and national law enforcement contexts. Because it applies both during armed conflict and in peacetime, IHRL seems particularly relevant for filling the gaps in IHL's application.