
The Martens Clause and its Indictment of Autonomous Weapons Systems

Introduction

“A robot may not injure a human being or, through inaction, allow a human being to come to harm.”

– Isaac Asimov


The Laws of Robotics, invented by Asimov for his many science fiction universes, serve as an elegant, hierarchical set of rules dictating the actions of robots.[1] However, their simplicity and novelty are painfully ill-suited to the contemporary military realities of autonomous weapon systems.[2] Given the controversial nature of such weapon systems and the accountability gap inherent in their frameworks, rapid technological advancements in the industry have drawn constant criticism and pre-emptive recommendations from NGOs, human rights organisations, and think tanks.[3] Of particular relevance is the International Human Rights Clinic’s publication, Losing Humanity: The Case Against Killer Robots, which contended that autonomous weapons systems would violate the laws of armed conflict and the Martens Clause and therefore require a pre-emptive ban.[4] Shortly thereafter, the U.S. Department of Defense released a directive clarifying its position on autonomous weapons systems, paying particular attention to the technical differences between autonomous weapons and those requiring human oversight.[5]


In light of this controversial discourse, it is imperative to note two things: (a) no genuinely fully autonomous weapons are currently in operation,[6] and (b) specific provisions of the laws of armed conflict could be interpreted as providing the necessary legal framework to pre-emptively ban weapons that do not yet exist.[7] Regarding the latter, it would be sufficient to show that such weapons would violate the principles of humanity and the dictates of public conscience (as outlined in the Martens Clause) to justify a pre-emptive ban. Thus, this article aims to reasonably interpret the Martens Clause, analyse its implications for an emerging method of warfare, and examine whether it provides the necessary framework for a pre-emptive ban on such weapon systems.

Background on Autonomous Weapon Systems

Weapon systems are classified by their degree of lethal autonomy into three categories: “human-in-the-loop weapons,” “human-on-the-loop weapons,” and “human-out-of-the-loop weapons.”[8] Human-in-the-loop weapons can target and deliver force only once a human command has been issued. Similarly, human-on-the-loop weapons can target and deliver force under the overarching oversight of a human operator who may override their actions in certain circumstances. In contrast, human-out-of-the-loop weapons can select targets and deliver force without any human intervention. This capacity to act independently is precisely why such potential weapon systems are so controversial.


With rapid advancements in military technology, various weapon systems are already equipped with an element of automation, with human operators observing scenarios rather than controlling every action. The C-RAM system, Phalanx CIWS, SGR-A1 sentry gun, and the Iron Dome are particularly relevant examples. As industry-leading prototypes of the technology behind autonomous weapons systems, these weapons offer enormous benefits in combat scenarios where speed and accuracy are of the utmost importance and human operators cannot achieve comparable results. However, these weapon systems are still commonly classified as human-on-the-loop systems, which raises yet another issue within the framework. Given the blurred lines between these classifications amid constantly evolving technology, it is imperative that operational weapon systems be adequately defined and classified for a pre-emptive ban to be truly effective.


However, it is essential to note that fully autonomous weapon systems (read: human-out-of-the-loop weapons) do not yet exist. Still, given the increasing investment in autonomous military technology, they are almost certainly under active development.[9] Fully autonomous weapon systems nonetheless carry enormous moral and legal risks. In addition to raising substantial compliance issues under international humanitarian law, their use would aggravate the existing accountability gap,[10] posing significant problems in determining individual criminal liability. This is due in part to ill-equipped legal and procedural mechanisms: since international courts exercise jurisdiction only over natural persons, autonomous weapons cannot be held liable under current frameworks. Additionally, one of the fundamental tenets of international criminal law is intent; robots that commit unlawful acts cannot be held liable for their conduct because they are incapable of acting intentionally. Given the lack of consensus across nations about the manufacturing and use of autonomous weapons, it would also be unreasonable to hold all programmers and operators responsible for the actions of a weapon that they could not have reasonably predicted, raising further issues under the doctrine of command responsibility.


Analysis of the Martens Clause and its Elements

Initially introduced at the Hague Peace Conference of 1899, convened to draft rules limiting war, reducing arms spending, and promoting peace, the clause was meant to resolve a stalemate that had halted conference proceedings.[11] The greater powers insisted on additional protections for occupying powers over local populations, whereas the lesser powers sought to protect their own peoples. Fyodor Fyodorovich Martens proposed that situations not covered by a treaty would instead be governed by customary law, “the laws of humanity,” and “the dictates of public conscience,” providing a degree of protection to both blocs.


The clause has since been incorporated into the 1977 Additional Protocols, various disarmament treaties, and the Rome Statute, and is widely considered part of customary international law. In Additional Protocol I to the Geneva Conventions, it reads:


“In cases not covered by this Protocol or by other international agreements, civilians and combatants remain under the protection and authority of the principles of international law derived from established custom, from the principles of humanity and from the dictates of public conscience.”[12]


Although initially intended to preserve customary rules, the Martens Clause has had a widely contested impact on the laws of armed conflict, and its interpretation remains disputed. As with other contested clauses, organisations and nations have predominantly relied on Article 31 of the Vienna Convention on the Law of Treaties to advance differing interpretations of the Martens Clause. Several states have contended that the clause merely affirms that the signatories to a treaty remain governed by customary international law. In contrast, a broader interpretation preferred by NGOs, human rights organisations, and think tanks holds that the Martens Clause serves as an independent source of law, paying particular attention to its elements: “established custom,” “the principles of humanity,” and “the dictates of public conscience.”


While a pre-emptive ban on autonomous weapons relies on a relatively broad interpretation, it is pertinent to take note of the clause’s legal precedents. Most notably, the discussions and negotiations that led to the pre-emptive ban on blinding lasers saw explicit and implicit references to the Martens Clause. During the Convention on Certain Conventional Weapons (CCW) Review Conference, blinding lasers were described as “inhumane” and “abhorrent to the conscience of humanity.”[13] Furthermore, the European Parliament urged nations to ratify Protocol IV, declaring that “deliberate blinding as a method of warfare is abhorrent” and “in contravention of established custom, the principles of humanity and the dictates of the public conscience.”[14]


Such discussions and precedents highlight that the Convention on Certain Conventional Weapons is well within its framework in utilising the Martens Clause to impose pre-emptive bans on controversial weapon systems. Moreover, the fact that fully autonomous weapon systems would be far more devastating than blinding lasers only underscores the urgency of such discussions.


Given that there is legal precedent for applying the Martens Clause, it is pertinent to analyse its elements and their specific implications. While the Martens Clause makes no effort to define these phrases, there exists an enormous body of legal and scholarly discussion. In the context of international humanitarian law, the principles of humanity refer to the doctrine of jus in bello (distinction, proportionality, and the minimisation of unnecessary suffering). Their application to autonomous weapon systems requires actors to treat others humanely and to respect human life and dignity. To treat others humanely, actors must exercise compassion and be capable of making legal and ethical judgements. As described below, autonomous weapon systems would face substantial difficulty in satisfying these requirements.


The International Committee of the Red Cross’s commentary on its fundamental principles defines compassion as the “stirring of the soul which makes one responsive to the distress of others,” clearly referring to an emotional component.[15] Legal and ethical judgement, in turn, requires weighing varied information across differing combat scenarios and making prudent decisions that minimise unnecessary suffering and meet the standards of compassion and empathy. In this regard, compassion and the capacity for legal and ethical judgement are fundamentally human characteristics. While autonomous weapon systems enjoy certain advantages in being impervious to anger and hasty decisions, their inability to feel empathy and act compassionately severely limits their ability to treat others humanely. As Amanda Sharkey writes:


“Current robots, lacking living bodies, cannot feel pain, or even care about themselves, let alone extend that concern to others. How can they empathise with a human’s pain or distress if they are unable to experience either emotion?”[16]


Furthermore, robots would be incapable of making judgement calls across differing cases, given the myriad real-time changes that take place in active combat scenarios. Although such weapon systems would be better than soldiers at processing vast amounts of information, they would still be fundamentally limited by their programming in pursuing their intended objectives, with little to no regard for the actual moral consequences of their actions. This is particularly relevant because compliance with the laws of armed conflict requires a subjective capacity for decision-making: soldiers are given the agency to make complex decisions by navigating the nuances and moral risks of specific situations, an ability that autonomous weapon systems fundamentally lack. These limitations would ultimately force a choice between significantly compromising the fully autonomous nature of such weapon systems (returning ultimate control and decision-making to human operators) and risking violations of the principles of humanity.


Under the more reasonable, broader interpretations of the Martens Clause, the dictates of public conscience serve as another independent source of law. Such a reference to public conscience requires adherence to the opinions of the public, industry experts, and governments. The difficulty, however, lies in objectively reaching a public consensus on so controversial an issue. Nonetheless, this element relies on shared moral guidelines across nations and organisations and a sense of morality commonly agreed upon by the public. In this regard, most proponents of a pre-emptive ban on fully autonomous weapon systems draw on surveys, public opinion, the recommendations of non-governmental organisations, and the resolutions of expert groups to substantiate their overarching strategy.[17] Various governments have supported such arguments, stating that the use of weapon systems without meaningful human control is unacceptable. While approximately 30 countries across the Middle East, Asia, Africa, and Europe have called for an overarching ban on autonomous weapon systems, the 125 member states of the Non-Aligned Movement have, through the CCW, called for a “legally binding international instrument stipulating prohibitions and regulations on lethal autonomous weapons systems.”[18] Unsurprisingly, most states calling for a pre-emptive ban have relatively weaker military industries and are predominantly developing and underdeveloped countries. China is the only exception, calling for a ban on the deployment of lethal autonomous weapon systems but not on their manufacture or research. In contrast, states with advanced militaries such as the United States, the United Kingdom, Israel, and Russia have consistently prioritised increasing their investment and research into implementing artificial intelligence and robotics in their defence industries.[19]


Conclusion

While fully autonomous weapon systems do not yet exist, recent trends in military advancement highlight the need for international humanitarian law to account for future scenarios and enact the necessary pre-emptive prohibitions (as already seen in the case of blinding lasers). Furthermore, as broader interpretations of the Martens Clause form part of the academic and expert consensus, the clause would provide the necessary framework for pre-emptively banning such a weapon. It is, however, imperative to note that even under the broader interpretation of the Martens Clause, states retain legal protections in developing and prototyping autonomous weapon systems. Effectively classifying the plethora of autonomous weapon systems and their varying degrees of autonomy would therefore require additional policy statements and legal reviews.


The principles of humanity and the dictates of public conscience, whether independent sources of law or mere reaffirmations of customary rules, give the CCW the necessary ambit to utilise the Martens Clause in furthering negotiations on this controversial issue. Given that the UN Group of Governmental Experts and a further 125 member states have called for legally binding international agreements concerning the manufacture and use of fully autonomous weapons systems, it is high time we heed the call.

 

[1] Isaac Asimov, Runaround, I, Robot (The Isaac Asimov Collection) (1963).

[2] Ronald Arkin, Governing Lethal Behavior in Autonomous Robots, 37 Industrial Robot: An International Journal (2010).

[3] Bonnie Docherty, Heed the Call: A Moral and Legal Imperative to Ban Killer Robots, Human Rights Watch (2022).

[4] International Human Rights Clinic, Losing Humanity: The Case against Killer Robots (2012).

[5] United States Department of Defense, DoD Directive 3000.09 (2012).

[6] Michael N. Schmitt & Jeffrey S. Thurnher, Out of the Loop: Autonomous Weapon Systems and the Law of Armed Conflict, 4 Harvard National Security Journal 231, 235 (2013).

[7] Antonio Cassese, The Martens Clause: Half a Loaf or Simply Pie in the Sky?, 11 European Journal of International Law 187, 188-92 (2000).

[8] Vincent Boulanin & Maaike Verbruggen, Mapping the Development of Autonomy in Weapon Systems, Stockholm International Peace Research Institute (2017).

[9] Mattha Busby, Killer Robots: Pressure Builds for Ban as Governments Meet, The Guardian (2018).

[10] Thompson Chengeta, Accountability Gap, Autonomous Weapon Systems and Modes of Responsibility in International Law, 45 Denver Journal of International Law & Policy (2016).

[11] Rupert Ticehurst, The Martens Clause and the Laws of Armed Conflict, 37 International Review of the Red Cross 125-134 (1997).

[12] Article 1, Paragraph 2, Protocol Additional to the Geneva Conventions of 12 August 1949, and Relating to the Protection of Victims of International Armed Conflicts (Protocol I), 8 June 1977.

[13] Paragraph 60, Summary of Statement by Human Rights Watch, CCW First Review Conference, “Summary Record of the 6th Meeting,” CCW/CONF.I/SR.6 (1995).

[14] European Parliament, Resolution on the Failure of the International Conference on Anti-Personnel Mines and Laser Weapons (1995).

[15] ICRC, “The Fundamental Principles of the Red Cross: Commentary,” (1979).

[16] Amanda Sharkey, Can we program or train robots to be good?, 22 Ethics and Information Technology 283–295 (2017).

[17] Docherty, supra note 3.

[18] Campaign to Stop Killer Robots, Country Views on Killer Robots (2020); Government of Venezuela, “General Principles on Lethal Autonomous Weapons Systems,” Working Paper submitted on behalf of the Non-Aligned Movement (NAM) and other states parties to the Convention on Conventional Weapons Group of Governmental Experts on lethal autonomous weapons systems (2018).

[19] Stopping Killer Robots: Country Positions on Banning Fully Autonomous Weapons and Retaining Human Control, Human Rights Watch (2020).


This article has been authored by Paritoshvir Singh Baath, Technical Editor, and Shagnik Mukherjea, Assistant Editor at RSRR. This blog is a part of the RSRR Editor’s Column Series.

