The Rights of Robots:
Technology, Culture and Law in the 21st Century

Phil McNally and Sohail Inayatullah*

INTRODUCTION  

In the last five years, the Hawaii Judiciary has developed, as part of its comprehensive planning program, a futures research component.  Initially, futures research was largely concerned with identifying emerging issues; that is, issues that are low in awareness among decision makers and high in potential impact.1  

At present the Court's futures program is engaged in a variety of activities.  Researchers study the impact of possible legislation on the Judiciary, forecast future caseloads, publish a newsletter of emerging issues, trends, and research findings,2 and provide research information to decision makers on the future of technology, the economy, population, management and crime.  

However, in the past few years of concentrating on short‑ and medium‑term futures, we have remained fascinated by one long‑term emerging issue‑‑the Rights of Robots.  

The predictable response to the question of whether robots should have rights has been one of disbelief.  Those in government often question the credibility of an agency that funds such research.  Many futurists, too, especially those concerned with environmental or humanistic futures, react unfavorably.  They assume that we are unaware of the second and third order effects of robotics‑‑the potential economic dislocations, the strengthening of the world capitalist system, and the development of belief systems that view the human brain as only a special type of computer.  

Why then, in the face of constant ridicule, should we pursue such a topic?  We believe that the development of robots and their emerging rights is a compelling issue which will significantly and dramatically impact not only the judicial and criminal justice system, but also the philosophical and political ideas that govern our societal institutions.  

In the coming decades, and perhaps even years, sophisticated thinking devices will be developed and installed in self‑propelled casings which will be called robots.  Presently robots are typically viewed as machines: as inanimate objects and therefore devoid of rights.  Since robots have restricted mobility, must be artificially programmed for "thought," lack senses as well as the emotions associated with them, and most importantly cannot experience suffering or fear, they, it is argued, lack the essential attributes to be considered alive.  However, the robot of tomorrow will undoubtedly have many of these characteristics and may perhaps become an intimate companion to its human counterpart.  

We believe that robots will one day have rights.  This will undoubtedly be a historically significant event.  Such an extension of rights obviously presupposes a future that will be fundamentally different from the present.  The expansion of rights to robots may promote a new appreciation of the interrelated rights and responsibilities of humans, machines and nature.   

With such a holistic extension of rights to all things in nature, from animals and trees to oceans, comes a renewed sense of responsibility, obligation and respect for all things.  Certainly these concepts are foreign to the worldview of most of us today.  The burden of this paper, then, is to convince the reader that there is a strong possibility that within the next 25 to 50 years robots will have "rights."  

CULTURAL PERSPECTIVES  

The definition of rights has been historically problematic.  In part, it is an unresolved problem because there are numerous disparate definitions of what constitutes "rights."  These fundamentally different views are largely politically, institutionally and culturally based.  Those in or with power tend to define rights differently than those out of or without power.  In addition, cultures with alternative cosmologies define notions of natural, human, and individual rights quite differently.  

Historically, humanity has developed ethnocentric and egocentric views of rights.  Many notions of "rights" reflect the 17th century views of Newton's clockwork universe and Descartes' rationality as well as the emerging Protestant ethic.  The impact of such views upon thinkers of the Enlightenment like John Locke, Jean Jacques Rousseau and Thomas Hobbes was profound.  In Leviathan, Hobbes vividly illustrated the problem of existence.  For Hobbes, life without legal rights (as provided by governing institutions) was one of "continual fear, of violent death; with the life of man, solitary, poor, nasty, brutish and short."3  With the development of Western capitalism and rationality, man suddenly assumed dominance over nature and replaced God as the center of the universe.  Thus the natural rights of man became institutionalized, bureaucratized and formalized and, like legal systems, developed along rational lines so as to provide the necessary stability and predictability for the growth of market capitalism.  

In addition, this Western capitalistic notion of governance led to the loss of individual efficacy as well as the elimination or subjugation of the rights of nature, women, non‑whites, and religious groups.  For capitalism to thrive, for surplus to be appropriated, a division of capital, labor and resources must exist; that is, there must be capitalists who exploit and an underclass‑‑the environment, the internal proletariat and the external colonies‑‑who must be exploited.  To provide an ideological justification of exploitation, it was necessary to perceive the exploited as the "other": as less than human, as less than the agents of dominance.  Thus, nature, those in the colonies and the underclass within industrialized nations (women and the proletariat) had to be denied certain rights.  The denial of rights for nature, in addition, found its ideological justification in Christianity and the classical Cartesian separation in Western thought between mind/body, self/environment and self/nature.  Similarly and unfortunately from our perspective, the possibility of robotic rights in the future is tied to the expansion of the world capitalist system.  Robots will gain rights only insofar as such an event will lead to the further strengthening of the capitalist system.  Most likely they will gain rights during a system crisis, when the system is threatened by anarchy and legal unpredictability‑‑a condition that paradoxically may result from developments in artificial intelligence and robotics.  

Other cultures, however, provide a different if not fresh perception of the meaning and purpose of rights, one in marked contrast to the historical and present Western position.  For example, the American Indian writer Jamake Highwater states in The Primal Mind, "whites are extremely devoted to limiting the rights of individuals and preventing anarchy, which is greatly feared in individualized cultures...by contrast the Indian, generally speaking, does not recognize the individual and therefore has not formulated strict regulations for its control."4   

The Indian recognizes the collective.  This collective is more than the aggregate of individuals in his tribe.  It is rocks, trees, sacred grounds, animals‑‑the universe itself.  Thus for the Indian, there exists a harmony between Nature and the individual; a relationship characterized by sharing, caring and gratitude, not dominance.  

Social philosopher, activist and mystic P.R. Sarkar states in Neo‑Humanism: The Liberation of the Intellect5 that we must develop a new humanism that transcends the narrow outlooks of the ego.  We must transcend our attachments to our nation, to our religion and to our class.  In addition, humans must include animals and plants and all of life in definitions of what constitutes "real" and "important."  We cannot neglect the life of animals and plants.  Of course, this is not to say there should not be hierarchy among species, especially as human life is rare and precious; still, our economic development decisions and our food decisions must take plants and animals into consideration as participants.  The rights of technology is a legitimate concern from the Eastern perspective because all‑that‑is is alive.  The universe is alive.  

Sarkar also forecasts the day when technology will have "mind" in it.  While this may seem foreign to the Western notion of mind, for Sarkar "mind" is in all things.  Evolution is the reflection, the development, of this mind towards total awareness, towards Godhood, Self‑realization.  Humans in general have the most developed mind, animals less, plants even less and rocks the least.  Once technology develops and becomes more subtle, then it, like the brain, can become a better carrier of the mind.  Mind is constantly "looking" for vehicles to express itself.  Nothing is soulless, although there are gradations of awareness.6  

The Buddhist notion is similar to this.  For the Buddhist, the self is always changing: evolving and devolving.  Defining humans as the sole inheritors of the planet at the expense of other sentient beings leads to hubris and evil.  Again, neither the Buddha nor any of his future disciples developed an explicit doctrine of rights for robots; however, his perspective certainly involves seeing All as persons, not as things.   

From the American Indian, the Yogic Sarkarian and the Buddhist perspectives, we must live in harmony with nature and with technology‑‑things do not exist solely for our use as humans; life exists for itself, or as a reflection of the Supreme Consciousness.  Animals and plants, then, as well as robots, should have rights not because they are like humans, but because of what they are, as‑themselves.  

Chinese cultural attitudes towards the notion of rights also offer a decidedly different approach from that of the West.  From this perspective, the legal norms of rights, established by man, are held as secondary to natural rights.  Clarence Morris in The Justification of the Law argues that for the Chinese, harmony, rather than dominance, is what matters.7  For example, "few Chinese scholars prize law in general or the imperial codes in particular: most of them hold that proper conduct is consonant with the cosmic order and therefore is determined not by law but by natural propriety."8  

Morris continues in the vein of natural law, noting that "we live in an unsuperstitious world‑‑in which enforceable legal obligations (are) human artifices, and the laws of nature, in themselves, (do) not indicate where earthly rights (lay)‑‑man inevitably (has given) up the primitive practice of prosecuting brutes and things.  So beasts and trees no longer (have) any legal duties.  Westerners who gave up the conceit that nature had legal duties also became convinced that nature has no legal rights."9  

Morris believes that nature should be a party to any case, not for man's purpose but for its purpose.  Nature should have rights.  "Nature should no longer be dislocated on whim or without forethought about the harm that may ensue; he who proposes dislocation should justify it before he starts."10  Certain authorities should then be designated as nature's guardians in the same way that children who cannot represent themselves have guardians.  In addition, writes Morris:11  

When legal rights are, by statute, conferred on feral beasts, green forests, outcroppings of stone, and sweet air, and when these legal rights are taken seriously, men will respect these duties in much the same way as they respect their other legal obligations.  

NATURE AND ROBOTS  

This neo‑humanistic type of thinking can and, we believe, should apply to robots as well.  Eventually humans may see robots not only as our mechanical slaves, not only as our products, as ours to buy and sell, but also as entities in their own right.  Of course, at present the notion of robots with rights is unthinkable, whether one argues from an "everything is alive" Eastern perspective or an "only man is alive" Western perspective.  Yet as Christopher Stone argues in Should Trees Have Standing?‑‑Toward Legal Rights for Natural Objects, "throughout legal history, each successive extension of rights to some new entity has been, theretofore, a bit unthinkable.  We are inclined to suppose the rightlessness of rightless 'things' to be a decree of Nature, not a legal convention acting in support of some status quo."12  

Stone reminds us of the obvious but easily forgotten.  Human history is the history of exclusion and power.  Humans have defined numerous groups as less than human: slaves, women, the "other races," children and foreigners.  These are the wretched who have been defined as stateless, personless, suspect, rightless.  This is the present realm of robotic rights.  

The concept of extending rights to nature represents a dialectical return to a holistic sense of natural rights.  Once a renewed respect for the rights of all things to exist is established, an understanding of the legal dimensions of human‑made creations, such as robots, can emerge.  

As we enter a post‑industrial, technology‑driven society, we need to reassess our interconnected relationship with nature and machines, as well as the notions of rights associated with this new relationship.  

Computer and robotic technology are not only modernizing traditional industries, they are also creating numerous new opportunities and problems in space, genetic engineering, and war and defense systems.  The adoption of these new technologies in education, healthcare and in our institutions, as well as in our models of thought, is inevitable and may, through proper forecasting and control, be positive.  Any continued attempt to ignore the needs of technology or to deter its use would be foolish and impossible.  Yet, in many ways, that is precisely what we continue to do.  Presently, the foundation of the American Constitution and the Bill of Rights "obviously reflects the technological and political issues of 18th century English society...what we continue to do is restructure and reinterpret it to fit an ever more rapidly evolving technological society."13  Perhaps what we really need to do is to rewrite, or video, the Constitution in the light of future trends and issues.  

The Constitution could be rewritten to include the rights of trees and streams, robots and humans.  Of course, we are not arguing that robots should have the same rights as humans, rather that they be seen as an integral part of the known universe.  In addition, although we are not advocating the worship of technology, with "the genie of technology having been let out of the bottle and (as it cannot be) force(d) back in,"14 social planning for robots must be attempted.  

ROBOT TECHNOLOGY  

The rapid impact of computers upon the world since the development of the first electronic computer, ENIAC, in 1946 has been profound.  As little as ten years ago, the thought of having a personal computer at one's office desk, home, or grade school seemed far‑fetched indeed.  Now personal computers are accepted complacently as part of our modern world.  Computer brains run cars, stereos, televisions, refrigerators, phone systems, factories, offices, airplanes, and defense systems, to name a few examples.  The next progression of the computer, as a mobile unit‑‑the robot‑‑may, like the personal computer, become a common and essential companion at home and in the workplace.   

At the vanguard of computer technology is the development of artificial intelligence (AI) and the creation of living computer circuitry called "biochips."  The development of AI requires the computer to make a jump in inference, a quantum leap over miscellaneous data, something a programmed machine has been unable to do.  Literally, the computer must skip variables rather than measure each one.  It is not quite a mirror of the human gestalt "aha" illumination of a decision, but it is similar.  

One of the essential difficulties in developing such a thinking computer is the problem of converting the holistic process of thought into the linear description of written language.  Common sense reasoning does not conform to the logic of computer languages such as FORTRAN.  "For instance there is no program around today that will tell the difference between a dish and a cup."15  What is needed is the development of a new language for programming which combines the multiple meanings of Chinese pictography with the precision of Western script.  

The development of living "biochips" will further blur the definition of a living machine.  By synthesizing living bacteria, scientists have found a way to program the bacteria's genetic development to mimic the on‑and‑off switching of electronic circuitry.  Many scientists presently feel silicon miniaturization has reached its limit because of the internal heat that densely packed chips generate.  The "biochip" is then expected to greatly expand the capabilities of computerization by reaching the ultimate in miniaturization.  "Biochips" also will have the unique ability to correct design flaws.  Moreover, James McAlear of Gentronix Labs notes, "because proteins have the ability to assemble themselves the (organic) computer would more or less put itself together."16  

In the creation of a living computer system "we are, according to Kevin Ulmer of The Genex Corporation, making a computer from the very stuff of life."17  Eventually it is expected that these systems will be so miniaturized that they may be implanted in humans so as to regulate chemical and systemic imbalances.  As these chips are used to operate mechanical arms or to negate brain or nerve damage, the issue of man‑robots‑‑cyborgs‑‑will arise.  The development of such organic computers is expected in the early 1990s.  This new technological development will force a redefinition of our conception of life.  

During this explosive era of high‑tech innovation, contact between machines with artificial intelligence and humans will rapidly increase.  Intelligent computer devices, especially expert systems, are now making decisions in medicine, oil exploration, space travel, air traffic control, train conduction, and graphic design, to mention a few areas of impact.

The greatest attribute of an expert system is its virtually infinite capacity to store the most minute information and its tremendous speed in recalling and cross‑referencing information to reach instantaneous conclusions.  The greatest drawback will be in convincing people to trust the computer's decisions.  This mistrust, however, will be significantly reduced as robots in human form (voice, smell, sight, shape)‑‑androids‑‑are developed.  

In deciding if computers can make expert decisions, we must first delineate the attributes of an expert.  Randall Davis of MIT provides the following definition: "(1) they can solve problems; (2) they can explain results; (3) they can learn by experience; (4) they can restructure their knowledge; (5) they are able to break rules when necessary; (6) they can determine relevance; and (7) their performance degrades gracefully as they reach the limits of their knowledge."18  Presently computers are capable of achieving the first three stages but cannot reprogram themselves or break rules, a decidedly human trait.  

ARE ROBOTS ALIVE?  

Robots presently are construed to be dead, inanimate.  However, an argument can be made that with advances in artificial intelligence, robots will be considered "alive."  Sam N. Lehman‑Wilzig, in his essay "Frankenstein Unbound: Towards a Legal Definition of Artificial Intelligence,"19 presents evidence that artificial intelligence (AI) machines already created or theoretically possible will be, by most definitions, alive.  We quote extensively from this landmark article:  

By any definition the present powers of AI machines are both impressive and worrisome.  Cyberneticists have already created or proven that AI constructs can do the following:20  

(1)  "Imitate the behavior of any other machine."21

(2)  Exhibit curiosity (i.e., are always moving to investigate their environment); display self‑recognition (i.e., react to the sight of themselves); and manifest mutual recognition of members of their own machine species.22

(3)  Learn from their own mistakes.23

(4)  Be as "creative" and "purposive" as are humans, even to the extent of "look[ing] for purposes which they can fulfill."24

(5)  Reproduce themselves, in five fundamentally different modes, of which the fifth‑‑the "probabilistic mode of self‑reproduction"‑‑closely parallels biological evolution through mutations (which in the case of [machines] means random changes of elements), so that "highly efficient, complex, powerful automata can evolve from inefficient, simple, weak automata."25

(6)  "Can have an unbounded life span through self‑repairing mechanisms."26

In short, "a generation of robots is rapidly evolving, a breed that can see, read, talk, learn, and even feel [emotions]."27
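The "probabilistic mode of self‑reproduction" in item (5) can be illustrated with a toy mutation‑and‑selection loop.  The sketch below is our illustrative analogy, not code from the cyberneticists cited; the bit‑string genomes, the fitness measure and every parameter are invented for the example.  Random bit‑flips play the role of "random changes of elements," and retaining the fitter automata each generation lets efficient genomes evolve from weak ones.

```python
import random

def fitness(genome):
    # Toy measure of an automaton's "efficiency": the count of 1-bits.
    return sum(genome)

def reproduce(genome, mutation_rate=0.02):
    # Probabilistic self-reproduction: each element may randomly flip.
    return [bit ^ 1 if random.random() < mutation_rate else bit
            for bit in genome]

def evolve(population_size=20, genome_length=32, generations=200):
    # Start from uniformly "weak" automata (all-zero genomes).
    population = [[0] * genome_length for _ in range(population_size)]
    for _ in range(generations):
        offspring = [reproduce(g) for g in population]
        # Selection: the fitter automata survive to the next generation.
        population = sorted(population + offspring,
                            key=fitness, reverse=True)[:population_size]
    return max(fitness(g) for g in population)

random.seed(0)
print(evolve())  # fitness climbs well above the initial value of 0
```

Because parents are retained alongside their mutated offspring, the best fitness in the population never decreases, so "powerful automata" reliably accumulate from random variation alone.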

 

But the essential question remains‑‑can these machines be considered to be "alive?"  Kemeny presents six criteria which distinguish living from inanimate matter: metabolism, locomotion, reproducibility, individuality, intelligence, and a "natural" (non‑artificial) composition.28  In all six, he concludes, AI servo‑mechanisms clearly pass the test.29  Even a critic of AI such as Weizenbaum admits that computers are sufficiently "complex and autonomous" to be called an "organism" with "self‑consciousness" and an ability to be "socialized."  He sees "no way to put a bound on the degree of intelligence such an organism could, at least in principle, attain," although from his critical vantage point, not in the "visible future."30

 

Viewed from this perspective, robots are indeed "alive."  However, we should note the worldview behind this perspective; it is based on the assumption that we can compare a human brain to a computer brain, that creativity is not divinely inspired but is simply the "juxtaposing of previously existing information"31‑‑thus humans and computers can be equally creative.  "Humanness" then is defined by aliveness: the ability to make decisions, to reflect, learn and discriminate‑‑reflective awareness, the ability to ask the questions, Do I exist?  Who am I? 

 

AI enthusiasts seriously argue that robots not only have the theoretical possibility of "life" but inevitably will be perceived as alive.  It is only our "humancentricness," our insistence that life must be judged strictly on human terms‑‑as evidenced, for instance, by the structural bias in our language‑‑that prevents us from understanding the similarity of robots‑‑now and in the future‑‑to humans.  Of course, there are numerous arguments against this perspective.  From the Western religious view, Man's soul is given directly by God; robots are soulless, thus dead and thereby rightless.  From a humanistic perspective, only by the clever use of language‑‑comparing our brains to robots' memories, and other reductionist arguments‑‑can it be argued that robots are alive.  Aliveness is flesh and bones; aliveness is blood.  Thus, robots remain dead, complex machines that can be made to act and look like humans, but they will always remain robots, not humans.  As with B.F. Skinner's pigeons, trained to hit a ping‑pong ball back and forth, we should not be fooled into believing that they are really "playing" ping‑pong.

 

However compelling these arguments against robots‑as‑humans, they may lose some of their instinctive truth once computers and robots increasingly become a part of our lives, as we slowly renegotiate the boundaries between us and them.  We have seen this with household pets, who certainly are perceived as having human traits and who have certain rights.  Of course, the notion that dogs and cats have rights is contentious, since it can be argued that cruelty‑to‑animals statutes only confer a right on the human public, represented by the State, to have a culprit punished.  Conversely, it can be argued that humans are simply acting as agents and that animals themselves are the real parties in interest.  We will further develop the contours of the definition of a rightholder later in this paper.

 

In addition, arguing from the perspective of robots' rights, AI and robotics are relatively new innovations.  If we assume that growth in computer memory continues, we can safely forecast that computers and robots by the year 2100 will differ only in physical form from humans.

Already, computers that perform psychotherapy cannot be distinguished from doctors who do the same, although clearly the computers are not thinking.  For example, in the 1960s MIT Professor Joseph Weizenbaum invented a computer program, ELIZA, to parody a therapist in a doctor‑patient format: picking up key phrases, making grammatical substitutions and providing encouraging, non‑committal responses.  "Weizenbaum was soon shocked to see people become emotionally involved with the computer, believing that ELIZA understood them... the computer program had properties and powers that he had not anticipated."32  Nor had he anticipated the need of humans to attribute human characteristics to gods, animals and inanimate objects.
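The keyword‑spotting technique described above can be sketched in a few lines.  This is not Weizenbaum's actual program; the rules, pronoun reflections and fallback phrases below are invented for illustration.  But the mechanism‑‑spot a key phrase, apply grammatical (pronoun) substitutions to the patient's own words, and otherwise fall back on an encouraging non‑committal response‑‑is the one the passage describes.

```python
import random
import re

# Pronoun swaps applied to the patient's own words before reflecting them back.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# Keyword rules: a spotted phrase and a template that reuses the remainder.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]

# Encouraging, non-committal responses used when no keyword matches.
FALLBACKS = ["Please go on.", "I see.", "How does that make you feel?"]

def reflect(fragment):
    # Grammatical substitution: swap first-person words for second-person.
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(statement):
    text = statement.lower().rstrip(".!?")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(reflect(match.group(1)))
    return random.choice(FALLBACKS)

print(respond("I feel trapped by my job"))  # -> Why do you feel trapped by your job?
```

Even this toy version shows why users projected understanding onto ELIZA: the reflected phrasing echoes the patient's own words back as an apparently attentive question, while the program manipulates nothing but character strings.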

 

Programs such as ELIZA, however, are only a beginning.  Far more complex programs will be developed until distinctions between human thought and computer‑generated thought become impossible.  Our perceptions of thinking, of life, will continue to change in response to changing technology and changing beliefs about what is natural.  These perceptions may change to such a degree that, one day, robots may have legal rights.

 

DEFINING RIGHTS

 

But what does it mean to have legal rights?  At present, but not necessarily so in the future, an entity cannot have a right "unless and until some public authoritative body is prepared to give some amount of review to actions that are colorably inconsistent with that right."33  However, according to Christopher Stone, for a thing to be a holder of legal rights, the following criteria must be satisfied: (1) the thing can institute legal actions at its behest; (2) in determining the granting of legal relief, the Court must take injury to it into account; and (3) the relief must run to its benefit.  If these conditions are satisfied, then the thing counts jurally; it has legally recognized worth and dignity for its own sake.34

 

For example, writes Stone, the action of an owner suing and collecting damages if his slave is beaten is quite different from the slave instituting legal action himself, for his own recovery, because of his pain and suffering.35  Of course, a suit could be brought by a guardian in the subject's name in the case of a child or a robot, for the child's or robot's sake, for damages to it.

This is equally true for Nature as well.  We cannot always rely on individuals to protect Nature, as they may not have standing and as it may not be cost‑effective for an individual owner, for example, to sue for damages for downstream pollution.  However, a stream may be protected by giving it legal rights.  If Nature had rights, Courts would weigh not only the concerns of the polluter against those of the individual plaintiff but the rights of the stream as well.  With Nature rightless, Courts presently can rule that it is in the greater public interest to allow Business to continue polluting, as Industry serves a larger public interest.  "The stream," writes Stone, "is lost sight of in a quantitative compromise between two conflicting interests."36

 

Similarly, we can anticipate cases and controversies where the needs of robot developers, manufacturers and users will be weighed against the interests of those who oppose robots (whether because they have been injured by a robot, because of their religious perspectives or because of their labor interests).  Judges will have to weigh the issues and decide between the parties.  But unless robots themselves have rights, they will not be a party to the decision.  They will not have standing.  They will not be legally real.

 

But certainly as robot technology develops, as robots are utilized to increase humanity's collective wealth‑‑albeit in a capitalistic framework, within which robots will only increase the gap between rich and poor, between employed and unemployed‑‑their future will be inextricably tied to our future, as is the case with the environment today.

 

EMERGENCE OF RIGHTS

 

As important as defining legal rights is developing a theory of how rights emerge.  They, of course, do not suddenly appear in Courts.  Neal Milner has developed a useful theory of the emergence of rights from a synthesis of the literature on children's rights, women's rights, rights of the physically and mentally handicapped, rights to health, legal mobilization and legal socialization.37

 

His first stage in this theory is imagery.  Here, imagery stressing the rationality of the potential rights‑holder is necessary.  From this perspective, the robot must be defined as a rational actor, an actor with intent.  This, however, is only true from the Western perspective.  From the Eastern perspective, previously outlined, rationality does not define life. 

 

The next stage of rights emergence requires a justifying ideology: ideologies justifying the changes in imagery develop.  These, according to Milner, include ideologies on the part of agents of social control and those on the part of potential rights holders or their representatives.  For robots, these ideologies would be developed by scientists, science fiction writers, philosophers and perhaps even futurists, who would have to argue that robots are a legitimate category of life.

 

The next stage is one of changing authority patterns.  Here the authority patterns of the institutions governing the emerging rights holders begin to change.  It is not clear which institutions directly control robots‑‑the intellectual/academic university sector, business/manufacturers, or government/military?  However, as rights for robots emerge, we can forecast conflicts between the various institutions that control them and within those institutions themselves.  Milner next sees the development of "social networks that reinforce the new ideology and that form ties among potential clients, attorneys and intermediaries."38  We would see the emergence of support groups for robots, with leading scientists joining political organizations.  The next stage involves access to legal representation.  This is followed by routinization, wherein legal representation is made routinely available.  Finally, government uses its processes to represent the emerging rights‑holders.

 

Of course, this is just a general model.  The initial step will be the most difficult.  Arguing that robots have rationality will be difficult, especially from the Western perspective, which reserves rationality for self‑directed, individual, autonomous adult persons.  Given the dominance of the West, it may be that robots will not gain rights until they are seen or imaged in the above manner.

 

ECONOMIC ISSUES

 

However, eventually AI technology may reach a genesis stage which will bring robots to a new level of awareness, wherein they can be considered alive and will be perceived as rational actors.  At this stage, we can expect robot creators, human companions and robots themselves to demand some form of recognized rights as well as responsibilities.  What types of rights will be demanded?  Basic human rights of life, friendship and caring?  The right to reproduce?  The right to self‑programming (self‑expression)?  The right to be wrong?  The right to intermarry with humans?  The right to an income?  The right to time off from the job?  The right to a trial by its peers (computers)?  The right to be recognized as a victim of crime?  The right to protection from unwarranted search and seizure of its memory bank?  The right to protection from cruel and unusual punishments, such as the termination of its power supply?

 

In a brief play script Don Mitchell vividly illustrates the future image of the blue collar industrial robot on the assembly line as one of danger, monotony and despair.  Here the exploitation of robots is a reflection of the human exploitation incurred during early 20th century industrialization.  However, unlike their human counterparts, these robots have no way to voice their suffering.  This situation raises questions such as: "How do you measure value?  By the price tag?  By the need?  By the blood and sweat that goes into making something?  Robots do not produce labor value, though.  There is no mechanical Karl Marx to save them."39

 

Obviously, questions like the above are difficult to answer in the discussion of robot rights.  Yet robots continue to replace their human counterparts on the assembly line and at the factory at a rapidly increasing pace.  They are replacing humans because of their high productivity and low cost: robots are faster and do not tire; they are more reliable, without family problems, drinking, or drugs; and they are cheaper to maintain, never striking for wages and fringe benefits.

 

The initial question that will soon be raised is: how are robot-generated goods and services to be distributed in the community?  The distribution of this wealth requires a new conception of ownership, production, and consumption.  In a potential world without work, some form of redistribution of wealth will be necessary.  "In Sweden employers pay the same taxes for robots that they do for human employees.  In Japan some companies pay union dues for robots."40  Supporters of robotic rights might say that computers are paying these taxes and dues from their labor and should derive rights from such labor.

 

Following questions of distribution of wealth come questions of ownership.  In the very near future it is expected that computers will begin to design their own software programs.  Considering that "the Copyright Act limits copyright protection to the author's lifetime, which is clearly inappropriate for a computer, it would then seem that a change in the law may be needed to provide proper protection for programs with non-human authors."41

 

Legal rights and responsibilities will then be needed to protect humans and robots alike.  This need should give rise to a new legal specialty akin to environmental law: robotic law.  With this new specialty we may find lawyers defending the civil rights of self-aware robots, which could take the following form: "to protect the super-robot from total and irreversible loss of power (LIFE); to free the robot from slave labor (LIBERTY); and allow it to choose how it spends its time (THE PURSUIT OF HAPPINESS)."42

 

NEW CASES

 

We will then see an avalanche of cases: robots that have killed humans, robots that have been killed by humans, robots who have stolen state secrets, robots who have been stolen; robots who have taken hostages, robots who have been held hostage, robots who carry illegal drugs across borders, and robots who themselves illegally cross national borders.  Cases will occur, in general, when robots damage something or someone, or when a robot itself is damaged or terminated.  In addition, robots will soon enter our homes as machines to save labor, and as machines to provide child care and protection.  Eventually these entities will become companions to be loved, defended and protected.

 

Robots that are damaged or that break other human laws will raise various complex issues.  Of course, at present, robot damage will be simply a tort case, just as if one's car was damaged.  But an attorney will one day surely argue that the robot has priceless worth.  It is not a car.  It talks, it is loved and it "loves."  The robot, then, like a human, has been injured, its program and wires damaged.  In this scenario, we will then need special tort laws for robots.

The legal system is today unprepared for the development of robotic crimes.  Recently, the Morbidity and Mortality Weekly Report cited the first death caused by a robot.  This accident occurred when a machinist at a Michigan company entered a robot's work envelope.  Apparently not programmed to take human frailty into account, the robot used its arm to pin the man to a safety pole, killing him with the force.43  This case is considered an industrial accident and could possibly have been avoided if the robot had an improved sense of sight and more careful programming.  In the future, robotic legislation may require laws similar to Isaac Asimov's First Law of Robotics that prevent the injury of humans by robots.  These laws could be coded into the robot's memory such that robots will have to terminate themselves if a conflict arises.44 However, we can easily imagine scenarios where a robot will have to choose between one human and many, or situations wherein its own termination may cause injuries to humans.  These issues and conflicts will tax programmers, the legal system, and robots themselves.
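How such laws might be "coded into the robot's memory" can be sketched in a few lines.  The following is only an illustrative assumption, not any real robotics interface: the Action structure and the permitted() function are invented here to show Asimov's three laws as an ordered rule check.

```python
# Hypothetical sketch: Asimov-style laws as an ordered rule check.
# The Action structure and function names are invented for illustration.

from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool        # would this action injure a human?
    disobeys_order: bool     # does it conflict with a human order?
    endangers_self: bool     # does it risk the robot's own existence?

def permitted(action: Action, inaction_harms_human: bool = False) -> bool:
    """Return True if the action is allowed under the three ordered laws."""
    # First Law: never injure a human, nor through inaction allow harm.
    if action.harms_human:
        return False
    if inaction_harms_human:
        return True  # must act; the First Law overrides the lower laws
    # Second Law: obey human orders unless that conflicts with the First Law.
    if action.disobeys_order:
        return False
    # Third Law: protect own existence unless that conflicts with Laws 1-2.
    return not action.endangers_self

print(permitted(Action(harms_human=True, disobeys_order=False,
                       endangers_self=False)))  # False: First Law forbids it
```

Even this toy version exposes the conflicts the text anticipates: when both acting and refraining harm someone, no rule ordering yields a clean answer.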

 

Once (the computers within) robots begin to program themselves according to external stimuli, a robot may begin to commit crimes completely independent of earlier human programming.  If a robot can commit a crime, then a number of problematic questions will arise.  "Can a robot intend to commit a crime?  How is a robot to be punished?  Is it sufficient to reprogram it?  To take it apart?  To penalize its owner?  Its designer?  Its manufacturer?  Its programmer?"45

 

Such questions also raise problems concerning criminal trials that involve robots.  Many court procedures will need to be adapted to accommodate the needs of such cases.  This situation will be exacerbated by the development of robots who serve as witnesses for robots or provide expert testimony.  Certainly, "a trial by a jury of peers seems inappropriate and certainly the 6th and 14th amendments guarantees to such a trial do not apply to robots."46  Or do they?

 

To understand the legal principles that can be applied to robots we must first have an understanding of the emerging electronic Judiciary.

 

THE ELECTRONIC JUDICIARY

 

Also relevant in developing scenarios is the future of the Judiciary and the legal system itself.47  The courts themselves may be robot-computer run in the next fifty years.  Judges are faced with a rapidly expanding caseload in which they must analyze legal documents, settle plea bargains, determine sentences, and keep abreast of social, economic and political issues, as well as act as court administrators.  Furthermore, as the courts continue to act as political and social decisionmakers, judges must cope with complex scientific and technological issues.  Of this situation critics note that "judges have little or no training or background to understand and resolve problems of nuclear physics, toxicology, hydrology, biotechnology or a myriad of other specialties."48  Computer technology should then be incorporated into the judicial process to aid in decisionmaking.

The first step will be judges using computers to aid in searching out the most appropriate precedent to fit the present case.  The development of a legal reasoning robot could serve as a valuable adjunct to a judge's ability to render fair decisions.  "As computers grow more elaborate and versatile (they) can better cope with the complexities of law, judgements and precedence."49  A legal reasoning robot could "serve as a repository of knowledge outlining the general parameters of the law...assisting in the reasoning process necessary to reach a conclusion."50  As a logic-oriented companion and a massive knowledge bank with the ability to instantly recall legal facts, precedent and procedure, a legal robot would greatly assist the judicial system by speeding up court procedure, minimizing appeals based on court error, and preventing legal maneuvering, resulting in fewer cases brought to court.
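A crude sketch of such precedent searching, under the assumption that past cases are tagged with fact keywords (the corpus and case names below are wholly invented):

```python
# Hypothetical sketch: rank prior cases by overlap of fact tags with a
# new case.  The corpus, case names and tags are invented for illustration.

def best_precedents(facts, corpus, top_n=2):
    """Return the top_n prior cases sharing the most fact tags with `facts`."""
    scored = []
    for name, case_facts in corpus.items():
        overlap = len(set(facts) & set(case_facts))
        scored.append((overlap, name))
    scored.sort(reverse=True)  # highest overlap first
    return [name for overlap, name in scored[:top_n] if overlap > 0]

corpus = {
    "Smith v. RoboCorp": ["robot", "workplace", "injury"],
    "Doe v. AutoWorks": ["car", "defect", "injury"],
    "State v. Unit-7": ["robot", "theft"],
}
print(best_precedents(["robot", "injury", "negligence"], corpus))
```

A real legal reasoning system would of course need far richer representations of facts and holdings; the point is only that precedent retrieval is, at bottom, a matching problem.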

 

Eventually, as enough statistics are compiled, judges may not be that necessary except at the appellate level.  Judges could then be free to vigorously pursue the legal and philosophical dimensions of societal problems.  Of course, initially, during the pre-trial phase, humans would be necessary.  Attorneys would enter the facts into computers (manually or through voice telecommunications) and a motions judge could monitor discovery and fact finding.  Computers would then decide the case outcome.51  In addition, as most cases are negotiated (only about 5 percent ever end up in trial52), we will see the continued development and sophistication of negotiation and mediation programs.  Disputants would enter their side of the problem, and the computer-robot would interact with each side and aid in reaching a settlement.  Computers might inspire trust as they can instantaneously and anonymously provide relevant previous cases to both disputants.  They can inform the disputants how the case might be settled (in terms of probabilities) if they went to trial or if they settled; that is, they could provide a range of alternative choices and solutions.  In addition, AI programs, as we are seeing in computerized psychotherapy, allow individuals to relax and "open up easier."  Besides being impressed by the "intelligence" of robot-judges, we might gain trust in the machines because of the magic they invoke and the authority they command.  This magic and authority may lead to an increased belief in the fairness of the Judiciary.
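The simplest form such outcome probabilities could take is the base rate among comparable past disputes.  The function and case records below are assumptions for illustration, not a description of any actual mediation program:

```python
# Illustrative sketch: estimate outcome odds from prior similar cases.
# The outcome labels and history are invented for this example.

def settlement_odds(similar_cases):
    """Map each outcome to its observed frequency among comparable cases."""
    if not similar_cases:
        return {}
    total = len(similar_cases)
    return {outcome: round(similar_cases.count(outcome) / total, 2)
            for outcome in set(similar_cases)}

history = ["settled", "settled", "plaintiff wins", "settled", "defendant wins"]
print(settlement_odds(history))
```

Presenting both disputants with the same frequencies is what would make the mediation "instantaneous and anonymous" in the sense described above.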

 

Of course, fairness is not a given; it is a political issue.  Law, unlike mathematics, is laden with assumptions and biases.  Decisionmaking is an act of power.  Initially the use of computers will shift power in the court system from judges to programmers.  Judges, of course, if they allow AI to enter their courtrooms, will do their best to keep control of the law and the programmers.  However, given the anticipated development of robotics, eventually we may see computers changing their programming and developing novel solutions to cases.  If computers can develop creativity, then judges and other experts will have to find new roles and purposes for themselves.

 

Finally, although it is presently ludicrous, a day may come when robot attorneys negotiate or argue in front of a robot judge with a robot plaintiff and defendant.

LEGAL PRINCIPLES

 

To understand in more concrete terms the legal future of robots, we must understand what legal principles will be applied to conflicts that involve robots.  Lehman-Wilzig's article on the legal definition of artificial intelligence is extremely useful.  He presents various legal principles which may be of relevance to robot cases.  They include: product liability, dangerous animals, slavery, diminished capacity, children, and agency.53

 

Product liability would be applied as long as robots are believed to be complex machines.  Not only will the manufacturer be liable, say in the case when a robot guard shoots an intruder, but so will "importers, wholesalers, and retailers (and their individual employees if personally negligent); repairers, installers, inspectors, and certifiers."54  Thus those that produce, regulate, transport, and use the robot will be liable to some degree.  Certainly, as caseloads for robot liability mount, insurance companies will be cautious about insuring robots.  Moreover, we can imagine the day when manufacturers will argue that the robot is alive and that the company is not liable.  Although the company may have manufactured the robot, they will argue that since then the robot has either (1) reprogrammed itself or (2) been reprogrammed by its new owner.  The argument then will be that it is the robot which should suffer damages, and if it has no money, other parties who are partially liable under joint and several liability should pay the entire bill: the deep-pockets principle.  When the first attorney will call a robot to the stand is difficult to forecast but not impossible to imagine.

 

Product liability will be especially problematic for AI because of the present distinction between hardware and software.  For the robot that kills, is the manufacturer of the arms liable, or the software designer, or the owner, or is there no liability at all?  Human beware, computer around!  Will we see no-fault computer insurance law?

 

The danger that robots may cause would logically increase as they become auto-locomotive, that is, once they can move.  At this stage, law relating to dangerous animals may be applicable to robots.  Like animals, they move, and like animals they give a sense of intelligence, although whether they actually are intelligent is a political-philosophical question.  Lehman-Wilzig writes:55

 

While the difference in tort responsibility between product liability and dangerous animals is relatively small, the transition does involve a quantum jump from a metaphysical standpoint.  As long as AI robots are considered mere machines no controversial evaluative connotations are placed on their essence‑‑they are inorganic matter pure and simple.  However, applying the legal principle of dangerous animals (among others) opens a jurisprudential and definitional Pandora's Box, for ipso facto the "machine" will have been transformed into a legal entity with properties of consciousness, if not some semblance of free will.  Once begun, the legal development towards the "higher" categories will be as inexorable as the physical expansion of robotic powers.  In short, the move from the previous legal category to the present one is the most critical step; afterwards, further jurisprudential evolution becomes inevitable.

 

It is important to remember here that, as important as legal rights (those rights that can be resolved or judged by a public authority) are, there are also human rights.  These often cannot be resolved by any judicial authority.  The right to employment, the right to minimum basic necessities, and other United Nations Charter human rights, although stated morally and unequivocally, cannot be guaranteed, given that rights are politically won and lost.  Rights, thus, are gained through ideological battles, philosophical as well as militant.

 

Given the structure of dominance in the world today, between nations, peoples, races, and sexes, the most likely body of legal theory that will be applied to robots will be that which sees robots as slaves.  They will be ours to use and abuse.  Of course, as Stone has pointed out, this means that they will have no legal status.  The slave, like the robot, cannot institute proceedings himself for his own recovery, wherein damages are recovered for his pain and suffering.  Will errant robots have to be responsible for their actions, or will owners who argue that the slave understood the intent of his or her actions make the slave responsible?  If the manufacturer or owner is liable in civil cases and guilty of wrongdoing in criminal cases, then he will certainly argue that the robot understands intent, understands its programming.  If this line of argument succeeds, then the robot can pursue his own case.  Most likely, as mentioned earlier, it will be the programmer or group of programmers who will be responsible.

 

The question of punishment is also problematic.  Robots have neither money nor property.  One approach would be to give the robot to the injured party for his economic use.  Another would be to eliminate the robot or to reprogram it.  This may be analogous to the present debate on the rights of the foetus: is it alive, and do we have the right to terminate it?  Also, who has the right to terminate a robot who has taken a human life, or a robot who is no longer economically useful?  We would not be surprised if in the 21st century we have right-to-life groups for robots.

 

Lehman-Wilzig argues that another category for robots would be that of diminished capacity, "used for those individuals who are legally independent but have a diminished capacity for initiating actions or understanding the consequences of such actions at the time they are being committed."56  Of course, what is important here is intent.  However, robots will not be the stupidest of species; more likely they will be the most intelligent.  At question will be their morality, their ethical decisionmaking.

A far more useful category is that of children, or the whiz kid: high in brain power and low in wisdom.  More useful still, yet ultimately problematic, is the law of agency.  As Lehman-Wilzig writes:57

To begin with, the common law in some respects relates to the agent as a mere instrument.  It is immaterial whether the agent himself has any legal capacity, for since he is a sort of tool for his principal he could be a slave, infant, or even insane.58 ..."it is possible for one not sui juris to exercise an agency power."59  Indeed, the terms automaton and human machine have been used in rulings to describe the agent.60  Nor must there be any formal acceptance of responsibility on the part of the agent[.]...The only element required for authority to do acts or conduct transactions61...is the communication by one person to another that the other is to act on his account and subject to his orders.  Acceptance by the other is unnecessary.  Thus, ...[g]enerally speaking, anyone can be an agent who is in fact capable of performing the functions involved.  Here, then, is a legal category already tailor-made for such a historical novelty as the humanoid.

 

While it is true that the law of agency may be tailor-made, given that law is itself changing, that in the next ten years there may emerge a science court to deal with questions of science and technology (questions that lawyers and judges devoid of scientific and technological training can rarely adequately understand), and given rapid changes in robotics and computers, is it at all possible to forecast the legal principles by which AI robots can be understood?

 

Thus, although the legal categories presented, from product liability to agency, are useful heuristics, the fantastic notion of robotic rights behooves us to remember that developments in robotics may result in (or may need) entirely new legal principles and futures.

 

Another perspective and useful heuristic in understanding the rights of robots involves developing two continua at right angles to each other.  At one end of the x-axis would be life as presently defined: real life, flesh and bones, reflective consciousness and soul.  At the other end would be robots in much the way that many see them today: a mechanical-electronic gadget that runs programs designed by humans.  Along this continuum we can imagine humans with a majority of robotic parts (artificial limbs, heart, eyes) and robots with human-like responses and reactions (creativity, ability to learn).  We would also have robots that look like humans and humans that increasingly look like robots.

 

On the y-axis we can develop a rights dimension.  At one end of this continuum would be a condition of total "human rights" and at the other end a state of rightlessness.  Along this continuum, we can visualize robots representing themselves and robots represented by guardians.  Finally, we can develop a moving-stationary dimension as well as various economic dimensions (household robots to military robots).  By juxtaposing these dimensions (flesh-mechanical; rights-rightless; moving-stationary) and visualizing them across time, we can develop various alternative scenarios of the future of robots.
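The two-axis heuristic can be made concrete by placing entities on the continua.  The numeric scales and the example placements below are invented assumptions purely to illustrate the juxtaposition, not measurements of anything:

```python
# Sketch of the flesh-mechanical / rights-rightless scenario space
# described above; all placements are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Entity:
    name: str
    flesh: float   # x-axis: 0.0 = purely mechanical, 1.0 = fully biological
    rights: float  # y-axis: 0.0 = rightless, 1.0 = total "human rights"
    mobile: bool   # the moving-stationary dimension

entities = [
    Entity("stationary factory arm", flesh=0.0, rights=0.0, mobile=False),
    Entity("household companion robot", flesh=0.2, rights=0.3, mobile=True),
    Entity("human with artificial limbs and heart", flesh=0.7, rights=1.0, mobile=True),
    Entity("adult human", flesh=1.0, rights=1.0, mobile=True),
]

# Scenarios emerge where an entity's "life-likeness" outruns its rights.
for e in entities:
    print(f"{e.name}: rights gap {e.flesh - e.rights:+.1f}")
```

Entities whose flesh score exceeds their rights score mark exactly the contested zone where the bold lawyer of the next paragraph would make her argument.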

 

Along these timelines and dimensions, we can imagine the day when a bold lawyer rewrites history and argues that a robot should be treated legally as a person.  On that day an entirely new future will emerge.

 

CONCLUSION

 

Technological change is growing at an exponential rate.  Genetic engineering, lasers, space settlement, telecommunications, computers, and robotics are bringing economic, social and political changes like no other period of human history.  Unfortunately it is difficult for individuals and institutions to keep pace with such change.  In order to minimize the stress caused by the expanding role of robotics, it is vital that the judiciary and legislators make proactive decisions and plan for the eventual development of robotic rights before the issue reaches a crisis point.

 

We feel the issue of robotic rights and responsibilities to be an eventuality.  Considering the "question of rights" in this new dimension offers the unique opportunity to reconceptualize our very notion of "rights" and what they will mean in a global society.  This issue generates a larger question of man's relationship with his world.  As a quantum change in our perspective of ourselves, it signals a new understanding of and appreciation for the concerns of everything.  This is the underlying theme of this paper.

 

John Haught, professor of theology at Georgetown, has identified a higher spiritual dimension to the growing planetary interconnectedness that the computer age is establishing.  He likens "the spread of satellites and computer networks over the Earth as comparable to the complexification of the primate nervous system as the condition for the birth of thought.  Now the complexification is taking on a planetary dimension.  So the whole planet is being prepared by technology for the eventual birth of a far higher form of consciousness...we are participating in a magnificent process of bringing about a physiological base for a higher and dramatically novel form of consciousness."62 It is with such a global transformation in mind that we should consider the rights of robots as well as rights for all things.

 

Someday robots will be in our houses as playmates for children and servants for adults.  They may become sex surrogates.  They will be in the courts as judges.  They will be in hospitals as caretakers.  They will perform dangerous military and space tasks for us.  They will clean pollution and save us from numerous hazards.  The child who loses her robot to malfunction will, when she grows up, always remember her robot.  She may, at the insistence of her parents, relegate robots to the world of fairies, goblins and ghosts, the unreal and the impossible.  Or she may decide that her robot, like her family, friends and pets, is part of her, is part of life itself.

 

We must remember that the impossible is not always the fantastic and the fantastic not always the impossible.


 

                                                                                     NOTES

 

 

 

*Phil McNally and Sohail Inayatullah are planners/futurists with the Hawaii Judiciary.  Both are active with the World Futures Studies Federation.  Mr. McNally, in addition, provides strategic planning advice to the YMCA.  Mr. Inayatullah provides strategic planning advice to Mid‑Pacific Institute and various self‑reliance, spiritual associations.

 

The authors would like to thank the Judiciary for research time to complete this article.  Research by Sally Taylor and an earlier paper on the history of Robots by Anne Witebsky was also helpful.

 

The views expressed in this paper are not necessarily shared by the Judiciary or any other organizations and associations the authors are affiliated with.  

 

1.         See Gregory Sugimoto, Comprehensive Planning in the Hawaii Judiciary (Honolulu, Hawaii, Hawaii Judiciary, 1981); also see Sohail Inayatullah, "Futures and the Organization," Futures (June 1984), pp. 302‑315.  

2.         See the Hawaii Judiciary Newsletter, Nu Hou Kanawai: Justice Horizons for the most recent reviews and comments on the legal impacts of emerging technologies and social changes.  

3.   Thomas Hobbes, "Leviathan," Social and Political Philosophy, Eds. John Somerville and Ronald Santoni (Garden City, New York: Doubleday & Co., Inc., 1963), p. 143.  

4.   Jamake Highwater, The Primal Mind (New York: Harper and Row, Inc., 1981), p. 180.  

5.         P.R. Sarkar, Neo‑Humanism: The Liberation of Intellect (Ananda Nagar, Ananda Press, 1984).  See also Sohail Inayatullah, ­"P.R. Sarkar as Futurist," Renaissance Universal Journal (forthcoming 1987) as well as 1985 and 1986 issues of Renaissance Universal Journal for articles by P.R. Sarkar.  

6.         See Michael Towsey, Eternal Dance of Macrocosm (Copenhagen, Denmark: PROUT Publications, 1986).   

7.         Clarence Morris, The Justification of the Law (Philadelphia: University of Pennsylvania Press, 1971), p. 192.  

8.         Morris, The Justification of the Law, p. 194.  

9.         Ibid, p. 196.  

10.       Ibid, p. 198.

11.       Ibid, p. 199.  

12.       Christopher D. Stone, Should Trees Have Standing: Towards Legal Rights for Natural Objects (Los Altos, California: William A. Kaufman, 1974).  

13.  Joseph Coates, "The Future of Law: A Diagnosis and Prescription," Judging the Future, Eds. James Dator and Clem Bezold (Honolulu, Hawaii: University of Hawaii Social Science Research Institute, 1981), p. 54.  

14.  Francis Allan, "1984: The End of Intimacy," Human Rights (Winter 1984), p. 55.  

15.  Joel Shirkin, "The Expert System: The Practical Face of AI," Technology Review (Nov/Dec 1983), p. 78.  See also Margaret Boden, "Impacts of Artificial Intelligence," Futures (Feb. 1984); Clark Holloway, "Strategic Management and Artificial Intelligence," Long Range Planning (Oct. 1983); Richard Bold, "Conversing With Computers," Technology Review (Feb./March 1985); "Artificial Intelligence is Here," Business Week (July 9, 1985).  

16.  Stanley Wellborn, "Race to Create A Living Computer," U.S. News and World Report (Dec. 31,1984/Jan. 7, 1986), p.50.  

17.  Ibid, p.50.  

18.  Shirkin, "The Expert System: The Practical Face of AI," p.75.  

19.       Sam N. Lehman-Wilzig, "Frankenstein Unbound: Towards a Legal Definition of Artificial Intelligence," Futures (December 1981), pp. 442-457.  

20.       Ibid, p. 443.  

21.       J. von Neumann, The Computer and the Brain (New Haven: Yale University Press, 1974) quoted in Ibid.  

22.       W.G. Walter, The Living Brain (New York: W.W. Norton and Co, 1953) quoted in Ibid.  

23.       N. Wiener, God and Golem, Inc (Cambridge, MA: MIT Press, 1966) quoted in Ibid.  

24.       N. Wiener, The Human Use of Human Beings (Garden City, NY: Doubleday, 1954) quoted in Ibid.  

25.       J. von Neumann, Theory of Self‑Reproducing Automata (Urbana: University of Illinois Press, 1966) quoted in Ibid.  

26.       M. Arbib, Brains, Machines and Mathematics (New York: McGraw‑Hill, 1964) quoted in Ibid.  

27.       D. Rorvik, As Man Becomes Machine (New York: Pocket Books, 1971) quoted in Ibid.

28.       J. G. Kemeny, Man and the Computer (New York: Charles Scribner's Sons, 1972) quoted in Ibid.  

29.       Kemeny, Man and the Computer quoted in Ibid.  

30.       J. Weizenbaum, Computer Power and Human Reason (San Francisco: W. H. Freeman and Co., 1976) quoted in Ibid.  

31.       Lehman-Wilzig, "Frankenstein Unbound," p. 444.  

32.  Marion Long, "Turncoat of the Computer Revolution," New Age Journal (Dec. 1985), p.48.  

33.       Stone, Should Trees Have Standing?, p. 11.  

34.       Ibid, p. 11.  

35.       Ibid, p. 13.  

36.       Ibid, p. 15.  

37.       Neal Milner, "The Emergence of Rights," Proposal to the National Science Foundation (Honolulu, Hawaii: University of Hawaii Department of Political Science, 1980).  

38.  Ibid, p. 4.  

39.  Don Mitchell, "Metal Lunch," Whole Earth Review (Jan. 1985), p.4.

See also Jerry Mander, "Six Grave Doubts About Computers," Whole Earth Review (Jan. 1985).  

40.  Edith Weiner and Arnold Brown, "Issues For The 1990's," The Futurist (March/April 1986), p. 10.   

41.  Robert Anderson, "Piracy and New Technologies: The Protection of Computer Software Against Piracy" (London: American Bar Association Conference Paper 7/17/85), p. 176.  See also the following conference papers: Robert Bigelow, "Computers and Privacy in the United States"; David Calcutt, "The Entertainment Industry, Piracy and Remedies"; Colin Tapper, "From Privacy to Data Protection"; Arthur Levine, "Piracy and the New Technologies."  Stewart Brand, "Keep Designing," Whole Earth Review (May 1985).  

42.       Mike Higgins, "The Future of Personal Robots," The Futurist (May/June 1986), p. 46.  See also James Albus, "Robots and the Economy" The Futurist (December 1984).  

43.  "Death by Robot," Science Digest (Aug. 1985), p. 67.

44.  See Isaac Asimov, The Naked Sun (London: Granada Publishing, 1975).

"The Second Law of Robotics: A robot must obay the orders given it by Human beings except where such orders would conflict with the First Law; The Third Law of Robotics: A robot must protect its own existance as long as such protection does not conflict with either the First or Second Law.  

45.  Raymond August, "Turning The Computer Into A Criminal," Barrister (Fall 1983), p. 53.  See also Don Parker, Fighting Computer Crime (New York: Charles Scribners and Sons, 1983); Ted Singer, "Controlling Computer Crime," Security Management (January 1984).  

46.  Ibid, p. 54.  

47.       See Sohail Inayatullah, "Challenges Ahead for State Judiciaries," Futurics (Vol 9, No. 2, 1985), pp. 16‑18; see also James Dator and Clement Bezold, Judging the Future (Honolulu, Hawaii: University of Hawaii Social Science Research Institute, 1981); and, Orville Richardson, "A Glimpse of Justice to Come," Trial (June 1983‑‑November 1983, a six part series on law in the future).  

48.  David Bazelon, "Risk and Responsibility," Science Technology and National Policy, Eds. Thomas Keuhn and Alan Porter (New York: Cornell University Press, 1981), p. 358.  

49.  Isaac Asimov, "The Next 20 Years For Law and Lawyers," American Bar Association Journal (Jan. 1985), p. 59.  See also Anthony D'Amato, "Can/Should Computers Replace Judges?," Georgia Law Review (Sept. 1977).  According to D'Amato such a computer would work in this fashion.  The computer program is essentially that of a multiple regression analysis.  The dependent variables are plaintiff wins (+1) and defendant wins (-1); the facts of the case are independent variables.  The computer receives all the facts and performs a complex multivariate analysis of them.  The facts will be regressed to fit other clusters of facts previously programmed into the computer.  The fit will never be exact: the only question the computer then decides is whether the new facts as programmed fit more closely or cluster around the dependent variables "plaintiff wins" or "defendant wins."  
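The regression procedure D'Amato describes can be sketched as follows.  The fact variables and past cases are invented here purely to illustrate the idea of coding outcomes as +1/-1 and regressing on facts; this is not his actual program.

```python
# Sketch of D'Amato's regression-based case predictor: outcomes coded
# +1 (plaintiff wins) / -1 (defendant wins), binary fact variables as
# predictors, fit by ordinary least squares.  All data are invented.

import numpy as np

# Past cases: each row is a vector of binary fact variables, e.g.
# [robot involved, contract dispute, injury occurred].
facts = np.array([
    [1, 0, 1],
    [1, 1, 0],
    [0, 1, 1],
    [0, 0, 1],
], dtype=float)
outcomes = np.array([1, -1, -1, 1], dtype=float)  # +1 plaintiff, -1 defendant

# Fit least-squares weights, with an intercept column prepended.
X = np.hstack([np.ones((len(facts), 1)), facts])
w, *_ = np.linalg.lstsq(X, outcomes, rcond=None)

def predict(new_facts):
    """Positive score clusters with 'plaintiff wins', negative with 'defendant wins'."""
    score = w[0] + np.dot(w[1:], new_facts)
    return "plaintiff wins" if score > 0 else "defendant wins"

print(predict([1, 0, 1]))
```

As D'Amato notes, the fit is never exact; the program only decides which outcome cluster the new fact pattern lies closer to.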

50.  Gary Grossman and Lewis Solomon, "Computers and Legal Reasoning," American Bar Association Journal (Jan. 1983), p. 66.  See also Larry Polansky, "Technophobia: Confronting the New Technology and Shaping Solutions to Court Problems," State Management Journal (1984).  

51.       See Guy M. Bennet and Signa Treat, "Selected Bibliographical Material on Computer-Assisted Legal Analysis," Jurimetrics (Spring 1984), pp. 283-290.  This excellent bibliography includes a wide range of entries, ranging from articles on searchable databases to articles on computer decisionmaking.  Particularly useful is L. T. McCarty, "Reflections on TAXMAN: An Experiment in Artificial Intelligence and Legal Reasoning," 90 Harvard Law Review 837-93 (March 1977).

52.       Howard Bedlin and Paul Nejelski, "Unsettling Issues About Settling Civil Litigation," Judicature (June‑July 1984), p. 10.  

53.       Lehman-Wilzig, "Frankenstein Unbound," p. 447.  

54.       S. M. Waddams, Product Liability (Toronto: Carswell, 1974) quoted in Ibid.  

55.       Lehman-Wilzig, "Frankenstein Unbound," pp. 448-449.

 

56.       Ibid, p. 450.

 

57.       Ibid, p. 451.

 

58.       S.J. Stoljar, The Law of Agency (London: Sweet and Maxwell, 1961) quoted in Ibid.

 

59.       W.A. Seavey, Handbook of the Law of Agency (St. Paul: West Publishing Co, 1974) quoted in Ibid.

 

60.       Seavey Handbook of the Law of Agency quoted in Ibid.

 

61.       Seavey Handbook of the Law of Agency quoted in Ibid.

 

62.  Brad Lemley, "Other Voices Other Futures," P.C. Magazine (Jan. 8, 1985), p. 135.
