Andrea Meibos
Professor Norton
Honors 200
April 8, 1998

Artificial Intelligence: An Oxymoron

"The most fascinating question about artificial intelligence remains whether or not machines can have minds" (Fetzer xiii). The possibility of human created, nonliving intelligence, whether in the form of Mary Shelley's monster in Frankenstein or the near- normal android Data in Star Trek: The Next Generation, has fascinated Western culture for years. While it is true that "machine learning is one of the fastest-growing technologies today" (Abu-Mostafa 64), giving us devices that certainly seem intelligent, is it possible for anything nonliving to have intelligence? Based on the definition of intelligence, machines lack of creativity and consciousness, and the fundamental differences between humans and computers, it is not currently possible for "artificial intelligence" (or AI) to truly be intelligent.

"Securing an adequate grasp of the nature of the artificial would do only as long as we were already in possession of a suitable understanding of the idea of intelligence" (Fetzer 3). One must therefore define exactly what one means by "intelligence." Webster's Dictionary defines it as "the ability to learn or understand from experience . . . use of the faculty of reason in solving problems." Although it is possible for machines to apply programmed information and even new information from its surroundings to solve problems, it is not possible for computers to understand the significance of the problem or use reason. When we think of reason and problem solving, we usually think of logic and other consistent methods that would be perfect ground for computer implementation. However, "nonprogrammable human capacities are involved in all forms of intelligent behavior" (Dreyfus 285); thus reason involves not only logic, but cultural values, "common sense," and/or intuition, and therefore computers would need to implement these qualities to use "the faculty of reason in solving problems."

AI proponents have sought to sidestep this hurdle by redefining intelligence for machines. Alan Turing, a computer scientist of the 1940s, proposed the following test for determining whether a machine is intelligent. In a room there are two terminals at which one can type questions. One terminal is connected to a human who supplies the responses, the other to a computer. If, after questioning both, one is unable to determine which is which, the machine passes the test and can be considered intelligent (Fetzer 3-4). While there are some computers that could possibly pass the Turing test in one specific area, such as medical diagnosis or mathematics, the only thing the test measures is whether the program can simulate intelligence. In order to measure true learning or understanding, one would have to show that the machine sees beyond the program it is executing, the very thing that makes it seem intelligent.
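To make the structure of the test concrete, here is a minimal sketch in Python of the setup described above, assuming a single judge and text-only exchanges; the respondent functions and canned replies are hypothetical stand-ins invented for illustration, not real programs.

    import random

    def human_respondent(question):
        # Stand-in for the person typing at the hidden terminal.
        return input(f"(human) {question}\n> ")

    def machine_respondent(question):
        # Stand-in for a program; canned replies merely simulate conversation.
        canned = {"What is 2 + 2?": "4.",
                  "How do you feel today?": "Fine, thank you. And you?"}
        return canned.get(question, "Could you rephrase the question?")

    def turing_test(questions):
        # Hide which respondent is which behind the labels A and B.
        a, b = random.sample([human_respondent, machine_respondent], 2)
        for q in questions:
            print(f"Q: {q}\n  A: {a(q)}\n  B: {b(q)}")
        guess = input("Which respondent is the machine, A or B? ")
        machine_label = "A" if a is machine_respondent else "B"
        print("Machine passes." if guess != machine_label else "Machine fails.")

    # turing_test(["What is 2 + 2?", "How do you feel today?"])

Note that nothing in this procedure ever inspects how the answers are produced; it rewards convincing output alone, which is exactly the objection raised above.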

William Rapaport suggests that a machine's ability to understand natural language be used as a criterion for intelligence. Here the difficulty lies in the word "understand"; speech-recognition programs today seem to understand when they type what the user says to the screen and obey verbal commands. This comprehension, however, is very shallow, consisting only of the translation of sound waves into letters that form words (Fetzer 5-9). In short, there is no magic criterion for determining whether a machine is intelligent. As a result, we must look at specific examples of seemingly intelligent behavior, such as creativity and consciousness, to ascertain whether the computer really "understand[s] from experience" and uses "the faculty of reason in solving problems."

If computers could be programmed to be creative and imaginative, they would have tools besides logic to solve problems, just as humans use other faculties besides logic when reasoning. "Imagination is a pervasive structuring activity by means of which we achieve coherent, patterned, unified representations. . . . Imagination is absolutely central to human rationality" (Johnson qtd. in Dreyfus xxi). Ada, Countess of Lovelace, the first computer programmer, seemed to think such creativity was impossible: "The Analytical Engine [a mechanical computer planned in the 1830s] has no pretensions whatever to originate anything. It can do [only] whatever we know how to order it to perform" (qtd. in Matthews 30).

How, then, can one explain the seemingly imaginative or creative behavior of several recent computer programs? One program can improvise modern jazz at the intermediate-beginner level, and another creates surprisingly artistic and unique images with no required input. Still other programs, starting from only a few basic ground rules of math, have deduced mathematical concepts known to humans only within the last 300 years (Matthews 31-32). The apparent creativity of these behaviors, however, is based on random elements of the program constrained within fixed bounds, not on the expression of emotion or a break from accepted standards, both necessary for true creativity. "All these programs are inherently incapable of breaking out of 'standard' ways of thinking -- the hallmark of the ultimate type of creativity" (Matthews 32). While they may simulate imagination, these programs have no understanding beyond interpreting what the programmer has told them to do, as the sketch below illustrates. Moreover, even if true creativity could be achieved, it would undermine the fundamental qualities of computers that make them so useful -- dependability, honesty, and reliability (Seufert 53-54). Creativity in machines is not only undesirable; it is fundamentally impossible.
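As a minimal sketch of "random elements constrained within fixed bounds," consider a hypothetical improviser that picks notes at random but may never leave a fixed scale or leap too far; the scale, the leap limit, and the function name are illustrative assumptions, not any of the programs Matthews describes.

    import random

    C_MAJOR = ["C", "D", "E", "F", "G", "A", "B"]  # the fixed bounds

    def improvise(length=8, max_leap=2):
        # Start on a random note of the scale.
        melody = [random.choice(C_MAJOR)]
        while len(melody) < length:
            i = C_MAJOR.index(melody[-1])
            # Random choice, but constrained: stay in the scale, and never
            # move more than max_leap scale steps from the previous note.
            lo, hi = max(0, i - max_leap), min(len(C_MAJOR) - 1, i + max_leap)
            melody.append(C_MAJOR[random.randint(lo, hi)])
        return melody

    print(improvise())  # e.g. ['E', 'F', 'D', 'C', 'D', 'E', 'G', 'F']

Every run sounds different, yet the program can never produce a note outside C_MAJOR or violate its leap rule -- novelty without any break from the standards it was handed.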

According to Robert Schank, understanding is a spectrum between two ends: complete empathy and making sense. In order for computers to "understand" and therefore be intelligent, they would have to be able both to interpret data and to relate it in some way to their experience. Although computers are very good at interpreting and organizing data, and can be programmed to some extent to "learn," or modify their algorithms (methods) based on new data, empathy would require both emotion and consciousness, or self-awareness (Schank 44-46). While current computers clearly do not have emotions or self-awareness, neuroscientists and computer engineers are currently collaborating to create a computer whose structure more closely resembles that of the human brain, with many parallel processors interconnected much as neurons are interconnected in the brain. Cognitive scientists believe that emotions result from neurotransmitters polarizing various neurons in the brain, and that these polarizations are either positive or negative. Because of the binary nature of current digital computers, positive and negative could be easily represented, leading some AI experts to theorize that emotion could be implemented in computers (Nadeau 55). Because the required technology and knowledge are lacking on both the neurological and the computer-engineering sides of this possibility, AI scientists still do not know for certain whether "consciousness could be an emergent property of a massively parallel computer system" (Nadeau 61-62).
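A toy sketch of the binary-valence idea reported from Nadeau might look like the following, where each unit reports +1 or -1 and the system's "emotion" is just the sign of the sum; the unit count, weights, and threshold rule are all invented for illustration and make no claim about real brains.

    import random

    def unit_polarity(inputs, weights):
        # Each unit is "polarized" positive (+1) or negative (-1) by the
        # sign of its weighted input, mimicking the binary valence described.
        s = sum(x * w for x, w in zip(inputs, weights))
        return 1 if s >= 0 else -1

    # Many interconnected units, evaluated here with a plain loop where a
    # massively parallel machine would evaluate them all at once.
    inputs = [random.uniform(-1, 1) for _ in range(8)]
    units = [[random.uniform(-1, 1) for _ in range(8)] for _ in range(100)]
    valences = [unit_polarity(inputs, w) for w in units]
    print("net valence:", "positive" if sum(valences) > 0 else "negative")

Representing valence this way is trivial; the essay's point is that nothing in such a sum amounts to actually feeling anything.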

These computer engineers and neuroscientists, in attempting to build such a self-aware system, fail to take into account the possibility that "consciousness has an existence that is somehow anterior to or separate from the neuronal activities of the brain" (Nadeau 159). Clearly, if we have not even discovered the physical organ or device that controls human consciousness (if it physically exists at all), and therefore do not fully understand it, we cannot attempt to set up such a device in a machine. For us to make a machine that knows that it exists (a conscious machine), we would have to understand how we know that we exist. Thus while we may be able to simulate emotions or self-awareness, they will be only "skin" deep, and will disappear once one looks past the programs the machine runs. Such simulation is neither necessary nor desirable, however; instead of trying to force emotions or irrationality on an inherently unemotional, logical device, we would be better off leaving "values and key decisions" to humans (Hayes-Roth 112).

If the "central dogma" of AI -- that "what the brain does may be thought of at some level as a kind of computation" (Charniak & McDermott 6) -- is wrong, then artificial intelligence cannot be intelligence at all. The Basic Model of AI states that humans react to stimuli based on certain processes in much the same way that computers turn input into output using a program. While for many low-level activities this model is accurate, the deeper one attempts to simulate, the clearer it becomes that "there are few but crucial differences that distinguish human beings from digital machines" (Fetzer 271). One main difference is that humans are not digital, as our entire thinking process is not based on numbers like computer thinking is, and thus is not really a computation at all.

"Machines might be able to do some of the things that humans can do (add, subtract, etc.), but they may or may not be doing them in the same way that humans do them" (Fetzer 18). If a computer simply gets the same result with the same input that a human would, regardless of process, it is called simulation. Emulation, on the other hand, is getting the same result with a machine and using the same method a human would. For example, computers simulate addition by translating numbers into binary numbers, translating binary numbers into electrical impulses, and then turning on and off other electrical impulses based on certain conditions -- quite different from the way we add. In order for computers to emulate human addition, they would have to have a table of values for adding numbers zero through nine like humans memorize, and then rules for carrying and correct digit placement. Although a computer can do some things that in a human would be considered intelligent, simulation programs do not "understand" what they are doing; they are simply crunching numbers by translating electricity. Therefore, if the computer programs don't match the human processes for the same data, the Basic Model is no longer valid, and the central dogma of AI is no longer relevant.

While the media and extreme AI proponents would have the world believe that "if all knowledge can be formalized, then the human self can be matched . . . by a machine" (Bronowski qtd. in Nadeau 24), we see that mere knowledge and data-crunching algorithms, essentially all that today's computers are capable of, do not amount to intelligence. Schank explains the only way computers could truly be intelligent:

		Computers do only what they have been programmed to do, and this
		essential fact of computers will not change.  Any intelligence computers may
		have will result from an evolution of our ideas about the nature of
		intelligence -- not as a result of advances in electronics. (7)

Although such human-like characteristics as creativity and consciousness may be simulated quite convincingly, underneath it all the computer is still the same, with no understanding of its data manipulation, and therefore no intelligence. "Perhaps the time has come to face the possibility that [machine intelligence] will never be realized with a digital computer" (Wilkes 17).


Works Cited

Abu-Mostafa, Yaser S. "Machines That Learn from Hints." Scientific American Apr. 1995: 64-69.

Charniak, Eugene, and Drew McDermott. Introduction to Artificial Intelligence. Reading, MA: Addison-Wesley, 1985.

Dreyfus, Hubert L. What Computers Still Can't Do. Cambridge, MA: The MIT Press, 1992.

Fetzer, James H. Artificial Intelligence: Its Scope and Limits. Dordrecht, The Netherlands: Kluwer Academic Publishers, 1990.

Hayes-Roth, Frederick. "Artificial Intelligence: What Works and What Doesn't?" AI Magazine Summer 1997: 99-113.

Matthews, Robert. "Computers at the Dawn of Creativity." New Scientist Dec. 1994: 30-34.

Nadeau, Robert. Minds, Machines, and Human Consciousness: Are There Limits to AI? Chicago: Contemporary Books, 1991.

Schank, Roger C. The Cognitive Computer. Reading, MA: Addison-Wesley, 1984.

Seufert, Wolf. "A Measure of Brains to Come." New Scientist Oct. 1994: 53-54.

Webster's New World Dictionary of the American Language. Cleveland, OH: The World Publishing Company, 1964.

Wilkes, Maurice V. "Artificial Intelligence as the Year 2000 Approaches." Communications of the ACM Aug. 1992: 17-18.
