Lethal Autonomous Weapons Systems (LAWS), so-called "killer robots", are weapons able to select and fire upon targets on their own, without any human intervention. Although fully autonomous weapons have not yet been deployed, the United States, the United Kingdom, Israel, and the Republic of Korea already field weapons with varying degrees of autonomy and lethality. Research and development on further autonomy for weapon systems is ongoing.
While autonomy in technical systems is a feature of today's world, the decision to weaponise these capabilities is not inevitable. The increasing autonomy of weapons systems poses a major threat to compliance with international human rights law (IHRL) and international humanitarian law (IHL), as well as to morality and ethics. For these reasons, WILPF joined the Campaign to Stop Killer Robots to contribute to a coordinated civil society response to the multiple challenges that autonomous weapons pose to humanity.
WILPF urges all countries to openly elaborate their policies on autonomous weapons, particularly with respect to the ethical, legal, policy, technical, and other concerns raised in the 2013 report of UN Special Rapporteur Christof Heyns. WILPF calls on countries to endorse the recommendations in that report, including its call for a moratorium on lethal autonomous robotics. We also call on states to ensure meaningful human control over every individual attack and to prohibit the development, deployment, stockpiling, and use of autonomous weapon systems.
A bit about killer robots
Acting on the basis of "artificial intelligence", LAWS differ from remote-controlled weapons systems such as armed drones – which are piloted by humans from a distance – in that they operate without human guidance after being deployed. This "intelligence" is, in practice, software: the robot would be programmed to assess the situation on the battlefield and to select and fire on targets based on the information it processes.
This places control over matters of life and death in software and sensors. Such a system lacks the features of human intelligence, moral reasoning, and ethical and legal judgement that make humans subject and accountable to rules and norms – such as assessing proportionality and military necessity, and distinguishing between civilians and combatants. Deploying autonomous weapon systems that operate without meaningful human control is neither legally nor ethically acceptable.
Ongoing research and development in the field of LAWS has reached a critical stage, requiring in-depth reflection. The debate on LAWS raises the following fundamental questions:
- Should control over life and death be left to a machine?
- Can autonomous weapons function in an ethically “correct” manner?
- Are machines capable of acting in accordance with international humanitarian law (IHL) or international human rights law (IHRL)?
- Are these weapon systems able to differentiate between active combatants on the one hand and civilians, the wounded, and defenceless, surrendering, and/or uninvolved persons on the other?
- Can such systems evaluate the proportionality of attacks?
- Who can be held accountable if an autonomous weapon kills civilians or destroys civilian infrastructure?