Killer Robots: The future of Lethal Autonomous Weapons Systems

The exponential advancement of technology since the second half of the 20th century has had a significant impact on warfare. One of the most notable developments has been the increasing autonomy of weapon functions. To date, a variety of weapons with some autonomous functions have been developed, but these largely operate within fairly restricted temporal and spatial contexts, and they are often used for defensive purposes.[1] As the technology continues to advance, however, further autonomy could lead to the continued development of a “class of systems capable of selecting targets and initiating the use of potentially lethal force without the deliberate and specific consideration of humans”, known as Lethal Autonomous Weapons Systems (LAWS).[2]

While the use of autonomous robots in war has notable strategic, operational and tactical military advantages, it can have profound consequences for international peace and security, the nature of warfare and the protection of human lives. Between 13 and 17 April 2015, a group of States, civil society members, and experts convened at the second informal meeting on LAWS, held under the auspices of the Convention on Certain Conventional Weapons (CCW). The meeting addressed some of the most serious legal, technical, security and ethical concerns relating to the use of LAWS, including the implications for international humanitarian law (IHL) and international human rights law (IHRL).

While, currently, States express a clear preference for maintaining humans-in-the-loop, increased research in the field has sparked concerns about the development and future use of LAWS. In the meantime, there is a strong call from parts of civil society to pre-emptively ban Killer Robots due to concerns about their incompatibility with international law and their potential impact on global peace and security. Opponents of a ban, however, argue that it is too early to rule out the possibility that future technological advancements might not only overcome these problems, but could also limit the extent of civilian casualties in conflict. They hold that the existing international legal framework provides adequate safeguards to ensure that weapons systems that would breach international law do not make it onto the battlefield.

In relation to IHL, one of the main questions is whether the use of LAWS could ever comply with the principles of distinction, proportionality, and necessity. The application of IHL on the battlefield is so complex, and the decision-making process so nuanced and situation-dependent, that it would be very difficult for machines to comply with the law, particularly on the basis of an algorithm that is necessarily programmed ex ante.

The difficulty stems from the fact that IHL rules are “unlike the rules of chess in that they require a great deal of interpretative judgement in order to be applied appropriately.” The principle of proportionality, for instance, “requires a distinctively human judgement” (the “common sense” or “reasonable military commander” standard); the realities of a rapidly changing situation make weighing military advantages against collateral harm complex. LAWS “lack discrimination, empathy, and the capacity to make the proportional judgments necessary”. The same applies to the assessment of necessity.

Similarly, in relation to the principle of distinction, while “[w]e might like to believe that the principle […] is like a sorting rule […] however complex, that can definitively sort each individual into one category or the other”, in practice, determining whether a person is actively participating in hostilities, thereby rendering them a legitimate target, is far from straightforward. Delegating this assessment to a machine is difficult, if not impossible.

Nevertheless, supporters of continued research into LAWS suggest that future technological advancements might lead to the development of weapons systems capable of complying with IHL and, additionally, of offering superior civilian protection by relying upon the advanced technical and sensory capabilities of machines; speed in decision-making and action; and clarity of judgment that is not swayed by emotions such as fear or anger. For instance, roboticist Prof. Ronald Arkin argues that “being human is the weakest point in the kill chain, i.e., our biology works against us in complying with IHL”. Subject to future technological advancements, Prof. Eric Talbot Jensen has illustrated the following possible scenario:

Instead of putting a soldier on the ground, subject to emotions and limited by human perceptions, we can put an autonomous weapon which […] tied to multiple layers of sensors [is] able to determine which civilian in the crowd has a metal object that might be a weapon, able to sense an increased pulse and breathing rate amongst the many civilians in the crowd, able to have a 360 degree view of the situation, able to process all that data in milliseconds, detect who the shooter is, and take the appropriate action based on pre-programmed algorithms that would invariably include contacting some human if the potential response to the attack was not sufficiently clear.
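The escalation logic Jensen describes, in which the system acts only when its pre-programmed algorithm is sufficiently certain and otherwise contacts a human, can be sketched in a few lines. Everything below is invented for illustration: the sensor fields, the confidence threshold, and the action labels are assumptions, not a description of any real or proposed system.

```python
# Hypothetical sketch of a "contact a human if unclear" rule. All names and
# thresholds here are illustrative assumptions, not any real system's design.

from dataclasses import dataclass

@dataclass
class SensorReading:
    metal_detected: bool           # e.g. output of a metal-object classifier
    elevated_vitals: bool          # e.g. pulse/breathing anomaly detection
    classifier_confidence: float   # 0.0-1.0 confidence that a threat exists

# Arbitrary cut-off: below this, the decision is deferred to a human.
CONFIDENCE_THRESHOLD = 0.95

def decide(reading: SensorReading) -> str:
    """Return the action this hypothetical system would take."""
    # No corroborating indicators: do nothing.
    if not (reading.metal_detected and reading.elevated_vitals):
        return "no_action"
    # Ambiguity is routed to a human (human-in-the-loop escalation).
    if reading.classifier_confidence < CONFIDENCE_THRESHOLD:
        return "escalate_to_human"
    return "engage"
```

Even this toy sketch makes the critics’ point concrete: the threshold and the fusion rule are themselves judgments fixed ex ante by a programmer, which is precisely the concern about delegating nuanced, situation-dependent assessments to an algorithm.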

Despite the potential benefits that future technologies may bring, however, they are still hypothetical. As the International Committee of the Red Cross (ICRC) has observed, “[b]ased on current and foreseeable robotics technology, it is clear that compliance with the core rules of IHL poses a formidable technological challenge […] there are serious doubts about the ability […] to comply with IHL in all but the narrowest of scenarios and the simplest of environments”. Therefore, while the utopian prospect of LAWS that operate in the best interests of civilians is a possibility, it is by no means a certainty. What is certain is the development of weapons systems with very concerning autonomous functions.

Even in the event of significant technological advancements, delegating life and death decisions to an autonomous machine can create a serious criminal and civil accountability gap.[3] This would run counter to the preventative and retributive functions of criminal justice; breach the right to an effective remedy; and, in the light of the very serious crimes that can be perpetrated by the machines, it would, arguably, be immoral. It has been aptly observed that “[t]he least we owe our enemies is allowing that their lives are of sufficient worth that someone should accept responsibility for their deaths”. This poignant reflection holds equally true in relation to civilians and friendly casualties.

For these reasons, there has been a strong drive towards regulating the further development and eventual use of these machines. Some are advocating a ban on killer robots while others, like the ICRC, are “urging States to consider the fundamental legal and ethical issues raised by autonomy in the ‘critical functions’ of weapon systems before these weapons are further developed or deployed”.

Still, opponents of a ban deem it unnecessary since IHL is “sufficiently robust to safeguard humanitarian values during the use of autonomous weapon systems”. They argue, for instance, that an adequate safeguard against the use of weapons that violate IHL is contained in Article 36 of Additional Protocol I (API) to the 1949 Geneva Conventions, which obliges States to determine in the “study, development, acquisition or adoption of a new weapon, means or method of warfare … whether its employment would, in some or all circumstances, be prohibited by this Protocol or by any other rule of international law applicable”.

However, proponents of a ban have argued that this is insufficient. Some have suggested that opinion may still be divided on whether Article 36 assessments form part of customary international law. Other experts disagree: they argue that customary international law does indeed oblige all States to carry out the assessment in relation to newly acquired means of warfare, and that a question mainly arises in relation to new methods of warfare. They therefore maintain that weapons reviews provide sufficient protection. In any case, it has been argued that an assessment is a corollary of the obligation to ensure compliance with IHL; if the machines cannot comply, they will inevitably breach other provisions of the law when they are deployed.

Nevertheless, from a practical perspective, it is questionable whether Article 36 reviews, which depend on the transparency, openness, and uniform application of IHL to LAWS in such a nebulous context, are sufficient. Moreover, as computer scientist and robotics expert Prof. Noel Sharkey notes, there are serious questions about the future consequences for IHL if LAWS continue to be developed while efforts to make them compliant with the laws of war fail.

Furthermore, Article 36 does not sufficiently consider the IHRL implications of LAWS. In particular, the use of LAWS might lead to violations of IHRL norms including: the right to life; the prohibition of torture and other cruel, inhuman or degrading treatment or punishment; the right to security of person; and, given that a weapons review will not necessarily close the accountability gap, the right to an adequate legal remedy. Finally, proponents of a ban argue that delegating life and death decisions to a machine, effectively “death by algorithm”, violates the basic tenets of human dignity, the principle of humanity and the dictates of public conscience, and is therefore contrary to the Martens Clause.[4]

Discussions on the way forward have centered on the possibility of requiring ‘meaningful human control’ over the operation of weapons systems. However, as William Boothby has observed, a machine requiring meaningful human control is not fully autonomous; while the concept is useful from a policy perspective, he advised refraining from elevating it to ‘some sort of legal criterion’ and suggested focusing on Article 36 weapons reviews. Conversely, supporters of a ban have argued that it is precisely because ‘meaningful human control’ implies that machines are not fully autonomous, and in light of the significant State support for maintaining such control, that a ban is the most obvious course of action.

At this stage, a consolidated way forward needs to be established before States and private contractors invest too much public and private money, time and energy in the further development of LAWS, thereby rendering future regulation much more complex. Time is of the essence; the “opportunity will disappear […] as soon as many arms manufacturers and countries perceive short-term advantages that could accrue to them from a robot arms race”. The consequences for civilians, combatants, and international peace and security generally could be devastating.

[1] For an overview see this 2012 Human Rights Watch report and P.W. Singer’s Wired for War

[2] Although a precise definition of LAWS has not yet been agreed upon, see here and here for their general characteristics

[3] see Human Rights Watch and Harvard Law School’s International Human Rights Clinic’s report Mind the Gap: The Lack of Accountability for Killer Robots

[4] See here for a discussion on some of the challenges



Filed under Human Rights, International Criminal Law, Public International Law

One response to “Killer Robots: The future of Lethal Autonomous Weapons Systems”

  1. El roam

    Thanks for this interesting post; it is a really complicated subject. Just worth noting:

    The author has not presented the plainly problematic aspect of what is called here the “accountability gap”. In terms of remedy, there is no real issue: international legislation could, in principle, solve it (though not without complications). Yet:

    It is a deeply rooted principle that criminal accountability can only be imposed upon individuals (natural persons; see, for example, Article 25(1) of the Rome Statute: “The Court shall have jurisdiction over natural persons pursuant to this Statute.”). A state cannot be held criminally accountable, only a natural person. So:

    We face a problem. Suppose a severely reckless (or even malicious) act is committed by such an autonomous system or robot. Who would be held accountable? The robot cannot be, as it is not a natural person, so one might presume the operator or the designer. Yet both would likely fall outside the picture: the designer would not have been directly and concretely involved, while, if the system is truly autonomous, how could the operator be held accountable? The machine acted in his shoes, exercising what amounts to human discretion.

    A related point: even human beings are never, in fact, fully autonomous. Every soldier and commander is guided by principles concerning the timing of shooting, the selection of targets, proportionality and so forth. So the argument that machines are not fully autonomous is not valid at all!

    The idea that a terrible mistake, recklessness, or a malicious attack could cause huge collateral damage and no one would face criminal justice is unacceptable. This does not mean that robots would not cause far less harm to civilians; but on that principle of non-guilt alone, we must consider the issue over and over.


