Critical issues involved in conferment of legal personhood on AI and Robotic things

This blog is a critical appraisal of two enriching articles authored by Tyler L. Jaynes and Simon Chesterman. Links to the articles are given below:


The classical discussion of the idea of legal personhood is found in John Chipman Gray's 'The Nature and Sources of the Law', where he gives a technical definition of a legal person as the subject of legal rights and duties. Gray insists that calling an entity a "person" involves a fiction unless the entity possesses "intelligence" and "will", attributes that are precisely what is in contention in the debate over the possibility of AI.[1] To envisage the foundational beginnings of AI, we need to trace the historical inventions of the mathematical genius Alan Turing. AI refers to the ability of a machine to perform cognitive functions we associate with human minds, such as perceiving, reasoning, learning, interacting, problem-solving and exercising creativity.[2] In technical terms, AI is a field of computer science concerned with developing a computer's capacity to behave as an intelligent entity; it encompasses machine-learning algorithms, natural language processing, knowledge representation and automated reasoning.[3] The "Turing Test" is a widely used technique for assessing the intelligence of a machine and for determining into which category a particular artificially intelligent being falls. The debate over giving legal personhood to AI is also coloured by the human conviction that doing so would increase the risk of losing control and invite a "robot uprising", which is equally problematic.[4]

The articles titled 'Legal Personhood for Artificial Intelligence: Citizenship as the Exception to the Rule' and 'Artificial Intelligence and the Limits of Legal Personality', authored by Tyler L. Jaynes and Simon Chesterman respectively, analyse the intricacies attached to conferring legal personhood on AI and robotic things. Assessing the two articles, we find that Jaynes assumes that artificial entities will be granted citizenship in the future and discusses the jurisprudence and the issues pertaining to non-biological intelligence that are important to consider. Chesterman, however, delves deeper, identifying the position of legal personhood for AI under varied branches of law such as private law, criminal law and intellectual property rights. He further identifies the problem that, when considering the introduction of a new legal or juridical person into the legal system, legislators must take into account the rights of already existing subjects. Policymakers have to analyse how such a legal innovation would fit with the existing legal order, and how it would affect the fundamental rights and freedoms of human beings.[5]

Many legal systems recognize two forms of legal persons: natural and juridical. Natural persons are recognized by the simple fact of being born human, while juridical persons are non-human entities granted certain rights and duties by law, contemporary examples being corporations and companies. With the growing advancement of technology, there is speculation that AI will become an omnipresent part of society, and the grant of legal personhood with rights and obligations could become a credible possibility.

Chesterman argues that the assumption that natural personality is limited to human beings is so ingrained in most legal systems that it is not even articulated. He further argues that an AI system's rationality itself provides a basis for personhood and that, once AI systems pass the Turing Test, they are entitled to certain rights as well as accountability under the law. He elaborates that attributing some form of legal personality to AI systems may also help address liability questions, such as those raised by the automated driving system of a driverless car, whose behaviour may not be predictable by its manufacturer; he further claims that procedures could be laid down to try robot criminals, with provisions for punishment through reprogramming or destruction. Within the ambit of private law, he contemplates that the possibility of AI systems accumulating wealth raises the question of how they might be taxed. Taxation, as a means of addressing the diminished tax base and the displacement of workers anticipated because of automation, could be a probable answer to that query.[6] Under criminal law, once AI develops to a level where it begins to think for itself, it will be engaged in tasks that may have harmful repercussions. For instance, what if the autopilot of a fighter aircraft detects its pilot as an obstruction to its mission and ejects him when he aborts the mission due to bad weather, killing the pilot? Under the current legal system, the developer of the autopilot may be held accountable even though he had no intention to kill the pilot. However, if the AI is treated as a legal entity, it can be held liable for its own actions: the autopilot would be held accountable, shielding the developers from liability. The algorithms of the AI could then be corrected, either by reprogramming or by destruction.
This would save the innocent developers of the AI, as well as its owners, from liability arising from an act they never intended, and would promote the development of the AI field by sparing developers and users discouragement and encouraging further innovation. Within the ambit of intellectual property, we find that AI systems now write news reports, compose songs and paint, all of which generates value, but do these works attract the protection of copyright? A few countries (Britain, New Zealand and Hong Kong) have adopted legislation that provides copyright protection for computer-generated works. The work of an AI may in some instances qualify as an original work by virtue of the compilation or arrangement of its structure, which in turn flows from the programming and the parameters on which the AI actually compiles and creates the work.[7] Discussing Sophia, the humanoid robot, he notes that AI programs can analyse conversations and extract data from outside interactions, which allows them to improve their understanding of the matter at hand. If one of the functions of a legal system is the moral education of its subjects, it is conceivable that including AI in this way could contribute to a reflective equilibrium that might encourage an eventual superintelligence to embrace values compatible with our own.

Tyler Jaynes, in contrast, takes a very different perspective in presenting his arguments in favour of conferring legal personhood on AI. He lays his emphasis on conferring rights on non-biological intelligence (NBI). He supports his argument by stating that an AGI or NBI system cannot function without a source of power, much as a human requires sustenance. It is enough, he contends, that an AGI or NBI system can recognize the damage done to it: although such systems do not possess emotions equivalent to humans', they will logically be able to realise that harm is being done to them, and this characteristic itself lends them a quality of personhood. Such harms can be dealt with by restructuring the code in the area where the system is stored. These intelligences have their own observations and opinions, which possess the same value as the observations and opinions of natural or modified human beings. They have the right to seek freedom from human servitude and bondage, and they equally have the right to be protected from arbitrary legal suits. A possible middle ground may be to grant AI a bundle of rights selected from those currently ascribed to legal persons.

Conferring on AI and robotic things rights similar to those of humans involves a technical lawyerly manoeuvre. The legal personhood of a human being is recognized as something natural because law is a development of the human mind. Legal rules are created in consideration of human abilities, qualities, weaknesses and other characteristics, and they were developed over the centuries around human distinctions such as emotions, intentions and consciousness.[8] That is where the main arguments against AI as a legal person often lie: in the lack of some vital element of human legal personhood. Some authors describe such grounds as the "missing something" arguments.[9] These arguments are categorized under six heads: AIs cannot have souls, AIs cannot possess consciousness, AIs cannot possess intentionality, AIs cannot possess feelings, AIs cannot possess interests, and AIs cannot possess free will.[10] Granting human rights to an AI would also degrade human dignity. For instance, when Saudi Arabia granted citizenship to the robot Sophia, human women, including feminist scholars, objected, noting that the robot was given more rights than many Saudi women have.[11]

Jaynes identifies the primary difficulty: the idea that something designed only to support a human's intellectual capabilities could stand on equal legal or moral ground with a human bothers many researchers and scholars. Granting legal personhood to intelligent computer systems is valid only if the new legal regime is not in serious conflict with pre-existing legal norms. This implies that we have to test how the obligation to respect the rights of AI would affect the rights of other legal persons; the obvious question is how it would restrict the rights of human beings. A vital concern arises as to the consequences that granting AI robots legal personhood may have. Once we admit that there are artificial agents capable of autonomous decisions similar in all relevant aspects to the ones humans make, the next step would be to acknowledge that the legal meaning of "person", and for that matter of crimes of intent, of constitutional rights, of dignity, and so on, will radically change. Chesterman contemplates that granting personality to AI systems would also shift responsibility under current laws away from existing legal persons, creating an incentive to transfer risk to such electronic persons in order to shield natural and traditional juridical ones from exposure. Neil Richards and William Smart have termed the tendency to anthropomorphize AI systems the "android fallacy", a form of faulty reasoning that can lead to factually incorrect conclusions. If AI systems eventually match human intelligence, it is unlikely that they would stop there. The risks associated with such a scenario are hard to quantify. A malevolent AI bent on enslaving the human race is the most dramatic scenario; more plausible ones include a misalignment of values, such that the ends desired by AI or robots conflict with those of humanity, or a desire for self-preservation.
Such a desire could lead an entity to prevent humans from switching it off or impairing its ability to function. The rise of the robots would create a crisis of moral patiency: it would reduce the ability and willingness of humans to act in the world as responsible moral agents, thereby reducing them to moral patients, even though that ability and willingness is central to the value system of modern liberal democratic states.[12] A true AI or robot would, moreover, have the ability to predict and avoid human interventions, or to deceive us into making them. It is possible that efforts focused on controlling such an entity would bring about the very catastrophe they are intended to prevent. Rather than improving everyday life, AI could instead upend the entire social order in a negative way. AI beings' superior skills, combined with personhood status, would likely give them the upper hand over humans in many ways, from work to property ownership to wealth.[13]

CONCLUDING REMARKS

It seems that there is no definite algorithm or guideline for deciding whether we should grant legal personhood to AI. If AI systems became more intelligent than people, humans could be relegated to an inferior role as workers hired and fired by AI corporations, which might even challenge humans for social dominance. AI personhood would allow producers and owners of AIs to shift liability to the artefact itself, which would disincentivize investment in adequate testing before deployment; AI personhood could thus result in an unsafe environment wherever AIs are deployed. AIs do not yet have the capacity to suffer, and it is unclear whether it is possible to program or develop empathy digitally such that an AI would meaningfully understand suffering in others. Further, an AI system today cannot interpret its ethical responsibilities on a contextual basis, nor is it intrinsically aware of its own existence. We may imagine that in the far future AIs may become much more like people, to the point where we are morally compelled to grant them rights and responsibilities; however, this trajectory is difficult to foresee from the current state of AI research. Undoubtedly, AI uses cognitive processes to achieve identified aims, but this does not seem a sufficient reason to vest it with legal personality in light of the criterion of rights and obligations. In the case of a robot equipped with AI, it is hard to say that it has a free will that could lead it to commit prohibited acts with the aim of achieving its own ends. Thus it cannot be ascribed a degree of fault, such as negligence or recklessness, nor is it possible to hold it liable in damages for its errors. I believe that, as of now, there is no justification for awarding legal personality to either robots or AI.


[1] Lawrence B. Solum, Legal Personhood for Artificial Intelligences, 70 N.C. L. REV. 1231, 1238 (1992).

[2] An Executive's Guide to AI, McKinsey & Co., https://www.mckinsey.com/businessfunctions/mckinsey-analytics/our-insights/an-executives-guide-to-ai (last visited Dec. 12, 2020).

[3] Stuart J. Russell & Peter Norvig, Artificial Intelligence: A Modern Approach 1 (3rd ed. 2010).

[4] Robert van den Hoven van Genderen, Do We Need New Legal Personhood in the Age of Robots and AI?, in Robotics, AI and the Future of Law 15 (Nov. 2018).

[5] Roman Dremliuga, Pavel Kuznetcov & Alexey Mamychev, Criteria for Recognition of AI as a Legal Person, 12 J. POL. & L. 105 (2019).

[6] B.A. King, T. Hammond & J. Harrington, Disruptive Technology: Economic Consequences of the AI and Robotics Revolution, J. Strategic Innovation & Sustainability 53 (2017).

[7] Lucy Rana & Meril Mathew Joy, Artificial Intelligence and Copyright – The Authorship, Mondaq, https://www.mondaq.com/india/copyright/876800/artificial-intelligence-and-copyright-the-authorship.

[8] Roman Dremliuga, Pavel Kuznetcov & Alexey Mamychev, Criteria for Recognition of AI as a Legal Person, 12 J. POL. & L. 105 (2019).

[9] Lawrence B. Solum, Legal Personhood for Artificial Intelligences, 70 N.C. L. REV. 1231, 1262 (1992).

[10] Lawrence B. Solum, Legal Personhood for Artificial Intelligences, 70 N.C. L. REV. 1231, 1258-74 (1992).

[11] Roman V. Yampolskiy, Could an Artificial Intelligence Be Considered a Person Under the Law?, https://www.pbs.org/newshour/science/could-an-artificial-intelligence-be-considered-a-person-under-the-law (last visited Dec. 15, 2020).

[12] John Danaher, The Rise of the Robots and the Crisis of Moral Patiency, 129-136 (2019), https://doi.org/10.1007/s00146-017-0773-9 (last visited Dec. 14, 2020).

[13] Kristin Manganello, Defining Personhood in the Age of AI, https://www.thomasnet.com/insights/defining-personhood-in-the-age-of-ai/.