The 21st Century Artilect

Moral Dilemmas Concerning the Ultra Intelligent Machine

Hugo de Garis’ first publication of the Terran vs. Cosmist “Artilect War” scenario. He would later add the ‘Cyborgian’ camp and the ‘gigadeath’ prediction, but otherwise repeat the same scenario.
Author

Hugo de Garis

Published

May 1, 1989

Abstract

Within one to two human generations, it is likely that computer technology will be capable of building brain-like computers containing millions if not billions of artificial neurons. This development will allow neuroengineers and neurophysiologists to combine forces to discover the principles of the functioning of the human brain. These principles will then be translated into more sophisticated computer architectures, until a point is reached in the 21st Century when the primary global political issue will become, “Who or what is to be the dominant species on this planet — human beings, or artilects (artificial intellects)?”

A new branch of applied moral philosophy is needed to study the profound implications of the prospect of life in a world in which it is generally recognised to be only a question of time before our computers become smarter than we are. Since human beings could never be sure of the attitudes of advanced artilects towards us, due to their unfathomable complexity and possible “Darwinian” self-modification, the prospect of possible hostilities between human beings and artilects cannot be excluded.

Keywords: Artilect, Ultra Intelligent Machine, Neuro-Engineering, Dominant Species, Artificial Neuron.

Introduction

Artificial Intelligence is a branch of computer science which is attempting to make machines intelligent, and in so doing, to cast light on the mysteries of biological intelligence. This valiant enterprise has not escaped the critical eye of the philosophers over the years, some of whom have attempted to show that certain claims of the intelligists (AI researchers) are excessive (see for example Searle 1981, Dennett 1981, Dreyfus 1986, and the replies of the intelligists, Hofstadter 1981, Gregory 1987). However, this article does not address itself to such traditional “philosophical-AI” concerns as the mind-brain distinction, the freedom of the will, or the impossibility or otherwise of artificial intelligence. It assumes that artificial intelligence is a reasonable endeavor, and raises new questions concerning the moral consequences for humanity when AI eventually succeeds.

A revolution is taking place in the field of Artificial Intelligence. This revolution, called “Connectionism”, attempts to understand the functioning of the human brain in terms of interactions between artificial abstract neuron-like components, and hopes to provide computer science with design principles sufficiently powerful to be able to build genuine artificial electronic (optical, molecular) brains (Kohonen 1987, McClelland et al 1986, Mead 1987). Progress in microelectronics and related fields, such as optical computing, has been so impressive over the last few years that building a true artilect within a human generation or two becomes a real possibility and not merely a science fiction pipe dream.

However, if the idea of the 21st Century artilect is to be taken seriously (and a growing number of Artificial Intelligence specialists are doing just that (Michie 1974, Waltz 1988, de Garis 1989)), then a large number of profound political and philosophical questions arise. This paper addresses some of the philosophical and moral issues concerning the fundamental question, “Who or what is to be the dominant species on this planet — human beings or the artilects?”

A Moral Dilemma

In order to understand the disquiet which has been growing amongst an increasing number of intelligists (specialists in Artificial Intelligence) around the world in the late 1980s (Waltz 1988, de Garis 1989), it is useful to make a historical analogy with the nuclear physicists of the 1930s and their developing awareness of the possibility of a chain reaction when splitting the uranium atom. At the time, that is, immediately after the announcement of the splitting, very few nuclear physicists thought hard about the consequences to humanity of life in a nuclear age and the possibility of a large scale nuclear war in which billions of human beings would die.

Some intelligists feel that a similar situation is developing now with the connectionist revolution. The intelligists concerned are worried that if the artificial intelligence community simply rushes ahead with the construction of increasingly sophisticated artilects, without thinking about the possible long term political, social and philosophical consequences, then humanity may end up in the same sort of diabolical situation as in the present era of possible nuclear holocaust.

Within a single human generation, computer scientists will be building brain-like computers based on the technology of the 21st Century. These true “electronic (optical, molecular) brains” will allow neurophysiologists to perform experiments on machines instead of being confined to biological specimens. The marriage between neuroengineers and neurophysiologists will be extremely fruitful, and artificial intelligence can expect to make rapid progress towards its long term goal of building a machine that can “think”, a machine usually called an “artificial intellect”, or “artilect”. However, since an artilect is, by definition, highly intelligent (and in the limit, ultra intelligent, that is, having an intelligence which is orders of magnitude superior to ours), if ever such a machine should turn against humanity, it could be extremely dangerous. An atomic bomb has the enormous advantage, from the point of view of human beings, of being totally stupid. It has no intelligence. It is human beings who control it. But an artilect is a different kettle of fish entirely.

Artilects, unlike the human species, will probably be capable of extremely rapid evolution and will, in a very short time (as judged by human standards), reach a state of sophistication beyond human comprehension. Remember that human neurons communicate at hundreds of meters per second, whereas electronic components communicate near the speed of light, a million times faster. Remember that our brains, although containing some trillion neurons, have a fixed architecture, as specified by our genes. The artilects could choose to undertake “Darwinian experiments” on themselves, or parts of themselves, and incorporate the more successful results into their structure. Artilects have no obvious limit as to the number of components they may choose to integrate into themselves. To them, our trillion neurons may seem puny. Not only may artilects be superior to humans in quantitative terms, they may be greatly our superiors in qualitative terms as well. They may discover whole new principles of “intelligence theory” which they may use in restructuring themselves. This continuous updating may grow exponentially — the smarter the machine, the better and faster the redesigning phase, so that a take-off point may be reached, beyond which we human beings will appear to artilects as mice do to us.
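(The “million times faster” figure is a back-of-the-envelope ratio. Assuming a fast myelinated axon conducts at roughly $3 \times 10^{2}$ m/s, a representative value rather than one given in this paper:

$$\frac{c}{v_{\text{neuron}}} \approx \frac{3 \times 10^{8}\ \text{m/s}}{3 \times 10^{2}\ \text{m/s}} = 10^{6}.)$$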

This notion of Darwinian experimentation is important in this discussion, because it runs counter to the opinions of many people who believe (rather naively, in my view) that it will be possible to construct artilects which will obey human commands with docility. Such machines are not artilects according to my conception of the word.

I accept that machines will be built which will show some obvious signs of real intelligence and yet remain totally obedient. However, this is not the issue being discussed in this paper. What worries me is the type of machine which is so smart that it is capable of modifying itself, of searching out new structures and behaviours, that is, the “Darwinian Artilect”.
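To make the notion concrete, a “Darwinian experiment” of this kind can be caricatured as an evolutionary search over the machine’s own structure: copy, mutate, evaluate against some survivability criterion, and keep the mutant only if it scores better. The following is a minimal sketch, not anything from the original paper; the genome encoding and the survivability function are invented placeholders.

    import random

    def mutate(genome, rate=0.05):
        """Return a copy of the genome with some parameters randomly perturbed."""
        return [g + random.gauss(0, 1) if random.random() < rate else g
                for g in genome]

    def survivability(genome):
        """Placeholder for a 'survivability criterion': here, arbitrarily,
        parameters closer to zero count as fitter."""
        return -sum(g * g for g in genome)

    def darwinian_experiment(genome, generations=1000):
        """Mutate-and-select loop: keep a structural change only if it improves
        survivability; 'terminate' (discard) the experiment otherwise."""
        score = survivability(genome)
        for _ in range(generations):
            candidate = mutate(genome)
            candidate_score = survivability(candidate)
            if candidate_score > score:  # the experiment is judged a success
                genome, score = candidate, candidate_score
        return genome

    random.seed(42)
    initial = [random.uniform(-10.0, 10.0) for _ in range(8)]
    improved = darwinian_experiment(initial)
    print("initial survivability:", survivability(initial))
    print("final survivability:  ", survivability(improved))

The difficulty raised in the next paragraphs is precisely that a genuinely self-modifying artilect has no externally supplied survivability function for its own top level structures; it must somehow judge changes to its own “givens”.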

Since any machine, no matter how intelligent, is subject to the same physical laws as is any other material object in the universe, there will be upper limits to the level of self-control of its intellectual functions. At some level in its architectural design, there will be “givens”, that is, top level structures determining the artilect’s functioning, which are not “judged” by any higher level structures. If the artilect is to modify these top level structures, how can it judge the quality of the change? What is meant by quality in such a context?

This problem is universal for biological systems. Quality, in a biological context, is defined as increased survivability. Structural innovations such as reproduction, mutation, sex, death, etc., are all “measured” according to the survivability criterion. It is just possible that there may be no alternative for the artilect than taking the same route. Survivability, however, only has meaning in a context in which the concept of death has meaning. But would not an artilect be essentially immortal, as are cancer cells, and would a fully autonomous artilect, resulting from an artilectual reproductive process, but with modified structures, accept being “terminated” by its “parent” artilects, if the latter consider the experiment to have been a failure?

If the offspring artilects do not agree to being “killed”, they might be allowed to live, but this would imply that every artilectual experiment would create a new immortal being, which would consume scarce resources. There seem to be at least three possible solutions to this problem. Either a limit is placed on the number of experiments being performed, a philosophy inevitably leading to evolutionary stagnation, or artilects are replaced by newer versions (processes called reproduction and death in biological contexts), or the growing population of artilects could undertake a mass migration into the cosmos in search of other resources. This Darwinian modification is, by its nature, random and chancy. The problem for human beings is that an artilect, by definition, is beyond our control. As human beings, with our feeble intellects (by artilectual standards), we are unable to understand the implications of structural changes to the artilect’s “brain”, because this requires a greater intellect than we possess. We can only sit back and observe the impact of artilectual change upon us. But this change may not necessarily be to our advantage. The “moral circuits” of the artilects may change so that they no longer feel any “sympathy” for human beings and decide that, given a materials shortage on the planet, it might be advisable, from an artilectual point of view, to reduce the “ecological load” by removing the “hungriest” of the inferior species, namely human beings.

Since human moral attitudes, like any other psychological attitudes, are ultimately physical/chemical phenomena, human beings could not be sure of the attitudes of artilects towards human beings, once the artilects had evolved to a highly advanced state. What human beings consider as moral is merely the result of our biological evolution. As human beings we have no qualms about killing mosquitoes or cattle. To us, they are such inferior creatures that we do not question our power of life or death over them. This uncertainty raises the inevitable fear of the unknown in human beings. With artilects undertaking experiments to “improve” themselves (however the artilects define improvement), we humans could never be sure that the changing intelligences and attitudes of the artilects would remain favorable to us, even if we humans did our best to instil some sort of initial “Asimovian”, human-oriented moral code into them. Personally, I believe that Asimov’s “Three Laws of Robotics” are inappropriate for machines making random changes to themselves to see whether they lead to “improvements”. Asimov’s robots were not artilects.

A World Divided

With many intelligists agreeing that it will be technologically possible to build electronic (optical, molecular) brains within a human generation or two, what are the moral problems presented to humanity, and particularly to applied moral philosophers? The biggest question in many people’s minds will be, “Do we, or do we not, allow such artilects to be built?” Given the time frame we are talking about, namely 20 to 50 years from now, it is unlikely that human societies will have evolved sufficiently to have formed a world state having the power to enforce a world-wide ban on artilectual development beyond an agreed point. What will probably happen is that military powers will argue that they cannot afford to stop the development of artilects, in case the “other side” creates smarter “soldier robots” than they do. Military/political pressures may ensure artilect funding and research until it is too late.

The artilect question alone is sufficient to provide a very strong motivation for the establishment of a skeleton world government within the next human generation. With the rapid development of global telecommunications and the corresponding development of a world language, the establishment of a skeleton world government within such a short time may not be as naive as it sounds.

For the purposes of discussion, imagine that such a ban, or at least a moratorium, on artilectual development is established. Should such a ban remain in force forever? Could one not argue that mankind has not only the power, but the moral duty, to initiate the next major phase in evolution, and that it would be a “crime” on a universal or cosmic scale not to exercise that power?

One can imagine new ideological political factions being established, comparable with the capitalist/communist factions of the 20th Century. Those in favour of giving the artilects freedom to evolve as they wish, I have labelled the “Cosmists”, and those opposed, the “Terras” (or Terrestrialists). I envisage a bitter ideological conflict between these two groups, taking on a planetary and military scale. The Cosmists are so named because of the idea that it is unlikely, once the artilects have evolved beyond a certain point, that they will want to remain on this provincial little planet we call Earth. After all, there are some trillion trillion other stars to choose from. It seems more credible that the artilects will leave our planet and move into the Cosmos, perhaps in search of other ultraintelligences.

The Terras are so named because they wish to remain dominant on this planet. Their horizons are terrestrial. To the Cosmists, this attitude is provincial in the extreme. To the Terras, the aspirations of the Cosmists are fraught with danger, and are to be resisted at any cost. The survival of humanity is at stake.

There may be a way out of this moral dilemma. With 21st Century space technology, it may be entirely feasible to transport whole populations of Cosmist scientists and technicians to some distant planet, where they can build their artilects and suffer the consequences. However, even this option may be too risky for some Terran politicians, because the artilects may choose to return to the Earth, and with their superior intellects, they could easily overcome the military precautions installed by the Terras.

Summary

This article claims that intelligists will be able to construct true electronic (optical, molecular) brains, called artilects, within one to two human generations. It is argued that this possibility is not a piece of science fiction, but is an opinion held by a growing number of professional intelligists. This prospect raises the moral dilemma of whether human beings should or should not allow the artilects to be built, and whether they should or should not be allowed to modify themselves into super beings, beyond human comprehension. This dilemma will probably dominate political and philosophical discussion in the 21st Century. A new branch of applied moral philosophy needs to be established to consider the artilect problem.

References

  • (de Garis 1989) “What if AI Succeeds? The Rise of the 21st Century Artilect”, de Garis H., AI Magazine, Summer 1989.
  • (Dennett 1981) Brainstorms: Philosophical Essays on Mind and Psychology, Dennett D. C., MIT Press, 1981.
  • (Dreyfus 1986) Mind Over Machine, Dreyfus H., Blackwell Oxford, 1986
  • (Evans 1979) The Mighty Micro, Evans C., Coronet Books, London.
  • (Gregory 1987) “In Defense of Artificial Intelligence - A Reply to John Searle”, R. Gregory, in Mindwaves, eds C. Blakemore and S. Greenfield, Blackwell, 1987
  • (Hofstadter 1981) The Mind’s I, Hofstadter D. R., Bantam, 1981
  • (Jastrow 1981) The Enchanted Loom, Jastrow R., Simon & Schuster, New York.
  • (Kelly 1987) “Intelligent Machines: What Chance?”, in Advances in Artificial Intelligence, Wiley.
  • (Kohonen 1987) Self-Organization and Associative Memory, 2nd edn., Kohonen T., Springer-Verlag, Berlin, Heidelberg.
  • (McClelland et al 1986) Parallel Distributed Processing, Vols 1 and 2, McClelland J. L. & Rumelhart D. E. (Eds), MIT Press, Cambridge, Mass.
  • (McCorduck 1979) “Forging the Gods”, in Machines Who Think, McCorduck P., Freeman.
  • (Mead 1987) Analog VLSI and Neural Systems, Mead C., Addison Wesley, Reading, Mass.
  • (Michie 1974) On Machine Intelligence, Michie D., Edinburgh University Press, Edinburgh
  • (Searle 1981) “Minds, Brains, and Programs”, Searle J., in The Mind’s I, see (Hofstadter 1981)
  • (Waltz 1988) “The Prospects for Building Truly Intelligent Machines”, Waltz D., in The Artificial Intelligence Debate, MIT Press, Cambridge, Mass.

Appendix: Metadata

This essay was written in 1989-05 and published in 1990 as De Garis, Hugo. “The 21st Century Artilect Moral Dilemmas Concerning the Ultra Intelligent Machine.” Revue Internationale de Philosophie (1990): 131-138.

It was hosted as a plaintext file on his homepage, with the following author information:

Dr. Hugo de Garis,
Head, Brain Builder Group,
Evolutionary Systems Department,
ATR Human Information Processing Research Labs,
2-2 Hikaridai, Seika-cho, Soraku-gun,
Kansai Science City, Kyoto-fu, 619-02, Japan.
tel. + 81 774 95 1079,
fax. + 81 774 95 1008,
email. degaris@hip.atr.co.jp
web. http://www.hip.atr.co.jp/~degaris