In two previous articles, “A robot wrote this entire article. Are you scared yet?” and “The Machine,” I’ve written about Artificial Intelligence not being all that it is cracked up to be. This third article is what I’ve been squawking about since the beginning: the so-called “Artificial Intelligence” being shoved down the throats of everyone on the planet is not what it seems in the light of day.
The argument and thought experiment now generally known as the Chinese Room Argument was first published in a 1980 article by American philosopher John Searle. It has become one of the best known arguments in recent philosophy. Searle imagines himself alone in a room following a computer program for responding to Chinese characters slipped under the door.
Searle understands nothing of Chinese, and yet, by following the program for manipulating symbols and numerals just as a computer does, he sends appropriate strings of Chinese characters back out under the door, and this leads those outside to mistakenly suppose there is a Chinese speaker in the room.
The narrow conclusion of the argument is that programming a digital computer may make it appear to understand language but could not produce real understanding. Hence the Turing Test is inadequate. Searle argues that the thought experiment underscores the fact that computers merely use syntactic rules to manipulate symbol strings, but have no understanding of meaning or semantics.
The broader conclusion of the argument is that it refutes the theory that human minds are computer-like computational or information-processing systems. Instead, minds must result from biological processes; computers can at best simulate these biological processes. Thus the argument has large implications for semantics, philosophy of language and mind, theories of consciousness, computer science, and cognitive science generally. As a result, there have been many critical replies to the argument.
Work in Artificial Intelligence has produced computer programs that can beat the world chess champion, control autonomous vehicles, complete our email sentences, and defeat the best human players on the television quiz show Jeopardy. AI has also produced programs with which one can converse in natural language, including customer service virtual agents, and Amazon’s Alexa and Apple’s Siri.
Our experience shows that playing chess or Jeopardy, and carrying on a conversation, are activities that require understanding and intelligence. Does computer prowess at conversation and challenging games then show that computers can understand language and be intelligent? Will further development result in digital computers that fully match or even exceed human intelligence? Alan Turing (1950), one of the pioneer theoreticians of computing, believed the answer to these questions was “yes.” Turing proposed what is now known as “The Turing Test.”
If a computer can pass for human in online chat, we should grant that it is intelligent. By the late 1970s, some AI researchers claimed that computers already understood at least some natural language. In 1980, University of California, Berkeley philosopher John Searle introduced a short and widely discussed argument intended to show conclusively that it is impossible for digital computers to understand language or think.
Searle argues that a good way to test a theory of mind, say a theory that holds that understanding can be created by doing this or that, is to imagine what it would be like to actually do what the theory says will create understanding. Searle (1999) summarized his Chinese Room Argument as follows.
Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols, together with a book of instructions for manipulating the symbols. Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese. And imagine that by following the instructions in the book, the man in the room is able to pass out Chinese symbols which are correct answers to the questions. The book enables the person in the room to pass the Turing Test for understanding Chinese, but he does not understand a word of Chinese.
Searle goes on to say, “The point of the argument is this: if the man in the room does not understand Chinese on the basis of implementing the appropriate book for understanding Chinese then neither does any other digital computer solely on that basis because no computer has anything the man does not have.”
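To make the setup concrete, here is a minimal sketch, in Python, of what a purely syntactic “rule book” amounts to. Everything in it is hypothetical: the Chinese strings are just example question-and-answer pairs, and the function name is mine, not Searle’s. The point is only that the program matches and emits character sequences without representing what any of them mean.

```python
# A minimal sketch of the "rule book" as pure symbol manipulation.
# The mapping below is hypothetical: the strings are example
# question/answer pairs; the program never represents what they mean.

RULE_BOOK = {
    "你叫什么名字？": "我叫王小明。",   # "What is your name?" -> "My name is Wang Xiaoming."
    "你住在哪里？": "我住在北京。",     # "Where do you live?" -> "I live in Beijing."
}

def chinese_room(symbols_under_door: str) -> str:
    """Return whatever string the rule book pairs with the input.

    The lookup is purely syntactic: the function matches character
    sequences and emits other character sequences. Nothing here
    encodes the meaning of any Chinese sentence.
    """
    return RULE_BOOK.get(symbols_under_door, "对不起，我不明白。")  # "Sorry, I don't understand."

if __name__ == "__main__":
    print(chinese_room("你叫什么名字？"))
```

However large such a table, or a far more elaborate program, became, it would still be doing nothing but symbol matching, which is exactly the position of the man in the room.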
Searle demonstrated years ago with the so-called Chinese Room Argument that the implementation of a computer program is not by itself sufficient for consciousness or intentionality. Computation is defined purely formally or syntactically, whereas minds have actual mental or semantic contents, and we cannot get from the syntactical to the semantic just by having the syntactical operations and nothing else.
To put this point slightly more technically, the notion “same implemented program” defines an equivalence class that is specified independently of any specific physical realization. But such a specification necessarily leaves out the biologically specific powers of the brain to cause cognitive processes. A system, me, for example, would not acquire an understanding of Chinese just by going through the steps of a computer program that simulated the behavior of a Chinese speaker.
“Intentionality” is a technical term for a feature of mental and certain other things, namely being about something. Thus a desire for a piece of chocolate and thoughts about real Manhattan or fictional Harry Potter all display intentionality.
Searle’s shift from machine understanding to consciousness and intentionality is not directly supported by the original 1980 argument. However, the re-description of the conclusion indicates the close connection between understanding and consciousness in Searle’s later accounts of meaning and intentionality. Those who don’t accept Searle’s linking account might hold that running a program can create understanding without necessarily creating consciousness, and conversely that a fancy robot might have dog-level consciousness, desires, and beliefs, without necessarily understanding natural language.
In moving to discussion of intentionality, Searle seeks to develop the broader implications of his argument. It aims to refute the functionalist approach to understanding minds, that is, the approach that holds that mental states are defined by their causal roles, not by the neurons or transistors that play those roles. The argument counts especially against the form of functionalism known as the Computational Theory of Mind, which treats minds as information-processing systems.
As a result of its scope, as well as Searle’s clear and forceful writing style, the Chinese Room argument has probably been the most widely discussed philosophical argument in cognitive science to appear since the Turing Test. By 1991, computer scientist Pat Hayes had defined Cognitive Science as the ongoing research project of refuting Searle’s argument. Cognitive psychologist Steven Pinker (1997) pointed out that by the mid-1990s well over 100 articles had been published on Searle’s thought experiment, and that discussion of it was so pervasive on the Internet that Pinker found it a compelling reason to remove his name from all Internet discussion lists.
This interest has not subsided, and the range of connections with the argument has broadened, including papers making connections between the argument and topics ranging from embodied cognition to theater, and from talk psychotherapy to postmodern views of truth, as well as discussions of group or collective minds and of the role of intuitions in philosophy.
Searle’s argument has four important antecedents. The first of these is an argument set out by the philosopher and mathematician Gottfried Leibniz (1646–1716). This argument, often known as “Leibniz’ Mill”, appears as section 17 of Leibniz’ Monadology. Like Searle’s argument, Leibniz’ argument takes the form of a thought experiment. Leibniz asks us to imagine a physical system, a machine, that behaves in such a way that it supposedly thinks and has experiences.
Moreover, it must be confessed that perception and that which depends upon it are inexplicable on mechanical grounds, that is to say, by means of figures and motions. And supposing there were a machine, so constructed as to think, feel, and have perception, it might be conceived as increased in size, while keeping the same proportions, so that one might go into it as into a mill. That being so, we should, on examining its interior, find only parts which work one upon another, and never anything by which to explain a perception. Thus it is in a simple substance, and not in a compound or in a machine, that perception must be sought for.
A second antecedent to the Chinese Room argument is the idea of a paper machine, a computer implemented by a human. This idea is found in the work of Alan Turing, for example in “Intelligent Machinery” (1948). Turing writes there that he wrote a program for a “paper machine” to play chess. A paper machine is a kind of program, a series of simple steps like a computer program, but written in natural language, and implemented by a human.
The human operator of the paper chess-playing machine need not know how to play chess. All the operator does is follow the instructions for generating moves on the chess board. In fact, the operator need not even know that he or she is involved in playing chess; the input and output strings, such as “N–QB7”, need mean nothing to the operator of the paper machine.
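As a rough illustration, and not Turing’s actual 1948 program, a fragment of such a paper machine might look like the sketch below: a short table of instructions the operator follows by matching characters, with moves written in the descriptive notation of the day. The specific entries are invented for the example.

```python
# A toy fragment of a "paper machine" for chess: each entry is an
# instruction the operator could follow with pencil and paper, mapping
# the string slipped into the room to a string to slip back out.
# The entries are illustrative opening replies, not Turing's program.

INSTRUCTION_BOOK = {
    "P-K4": "P-K4",     # if the slip reads "P-K4", write "P-K4" and pass it out
    "P-Q4": "P-Q4",
    "N-KB3": "N-QB3",
}

def operate_paper_machine(incoming_slip: str) -> str:
    """Follow the book mechanically; no knowledge of chess is required."""
    # The operator only matches characters: "N-QB3" need mean nothing to them.
    return INSTRUCTION_BOOK.get(incoming_slip, "P-K3")  # fallback reply when no rule matches

print(operate_paper_machine("P-K4"))  # -> "P-K4"
```

Whoever executes these steps is doing nothing but matching and copying strings, which is precisely what makes the paper machine a forerunner of the Chinese Room.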
As part of the WWII project to decipher German military encryption, Turing had written English language programs for human “computers,” as these specialized workers were then known, and these human computers did not need to know what the programs that they implemented were doing.
One reason the idea of a human-plus-paper machine is important is that it already raises questions about understanding similar to those in the Chinese Room Argument. Suppose I am alone in a closed room and follow an instruction book for manipulating strings of symbols. I thereby implement a paper machine that generates symbol strings such as “N-KB3” that I write on pieces of paper and slip under the door to someone outside the room.
Suppose further that prior to going into the room I don’t know how to play chess, or even that there is such a game. However, unbeknownst to me, in the room I am running Turing’s chess program and the symbol strings I generate are chess notation and are taken as chess moves by those outside the room. They reply by sliding the symbols for their own moves back under the door into the room.
If all you see is the resulting sequence of moves displayed on a chess board outside the room, you might think that someone in the room knows how to play chess very well. Do I now know how to play chess? Or is it a system that is playing chess? If I memorize the program and do the symbol manipulations inside my head, do I then know how to play chess, albeit with an odd phenomenology? Do someone’s conscious states matter for whether or not they know how to play chess? If a digital computer implements the same program, does the computer then play chess, or merely simulate this?
A third antecedent of Searle’s argument was the work of Searle’s colleague at Berkeley, Hubert Dreyfus. Dreyfus was an early critic of the optimistic claims made by AI researchers. In 1965, when Dreyfus was at MIT, he published a roughly hundred-page report titled “Alchemy and Artificial Intelligence.” Dreyfus argued that key features of human mental life could not be captured by formal rules for manipulating symbols. Dreyfus moved to Berkeley in 1968 and in 1972 published his extended critique, “What Computers Can’t Do.”
Dreyfus’ primary research interests were in Continental philosophy, with its focus on consciousness, intentionality, and the role of intuition and the unarticulated background in shaping our understanding. Dreyfus identified several problematic assumptions in AI, including the view that brains are like digital computers and, again, the assumption that understanding can be codified as explicit rules.
However, by the late 1970s, as computers became faster and less expensive, some in the burgeoning AI community started to claim that their programs could understand English sentences, using a database of background information. The work of one of these researchers, Yale’s Roger Schank (Schank & Abelson 1977), came to Searle’s attention. Schank developed a technique called “conceptual representation” that used “scripts” to represent conceptual relations (related to Conceptual Role Semantics). Searle’s argument was originally presented as a response to the claim that AI programs such as Schank’s literally understand the sentences that they respond to.
A fourth antecedent to the Chinese Room argument comes from thought experiments involving myriad humans acting as a computer. In 1961, Anatoly Mickevich published “The Game,” a story in which a stadium full of 1,400 math students is arranged to function as a digital computer. For four hours, each repeatedly does a bit of calculation on binary numbers received from someone near them, then passes the binary result on to someone nearby. They learn the next day that they collectively translated a sentence from Portuguese into their native Russian. Mickevich’s protagonist concludes, “We’ve proven that even the most perfect simulation of machine thinking is not the thinking process itself.”
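The same division of labor can be sketched in code. In the toy example below, each “student” performs a single-bit addition step and passes a carry to a neighbor; the overall sum emerges from these local steps even though no individual step refers to it. The adder is my own illustration; in Mickevich’s story the students collectively ran a translation program, not an addition.

```python
# A minimal sketch in the spirit of Mickevich's stadium: each "student"
# performs one full-adder step on the bits handed to them plus the carry
# passed along the row, and hands the carry to the next student. No one
# sees the overall computation; the result emerges from local steps.

def student(bit_a: int, bit_b: int, carry_in: int) -> tuple[int, int]:
    """One student's entire job: a single-bit full add."""
    total = bit_a + bit_b + carry_in
    return total % 2, total // 2  # (sum bit to write down, carry to pass on)

def stadium_add(a: int, b: int, width: int = 8) -> int:
    """Chain the students together along the row to add two numbers."""
    carry, result = 0, 0
    for i in range(width):  # student i handles bit position i
        s, carry = student((a >> i) & 1, (b >> i) & 1, carry)
        result |= s << i
    return result

print(stadium_add(19, 23))  # -> 42
```

No individual student needs to know, or could tell from their own step, that an addition (let alone a translation) is being carried out.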
Apparently independently, a similar consideration emerged in early discussion of functionalist theories of minds and cognition. Functionalists hold that mental states are defined by the causal role they play in a system, just as a doorstop is defined by what it does, not by what it is made of. Critics of functionalism were quick to turn its proclaimed virtue of multiple realizability against it. While functionalism was consistent with a materialist or biological understanding of mental states, it did not identify types of mental states, such as experiencing pain or wondering about the feeling of love, with particular types of neurophysiological states, as “type-type identity theory” did. In contrast with type-type identity theory, functionalism allowed sentient beings with different physiology to have the same types of mental states as humans, pains, for example. But it was pointed out that if extraterrestrial aliens, with some other complex system in place of brains, could realize the functional properties that constituted mental states, then presumably so could systems even less like human brains.
The computational form of functionalism, which holds that the defining role of each mental state is its role in information processing or computation, is particularly vulnerable to this maneuver, since a wide variety of systems with simple components are computationally equivalent. Critics asked if it was really plausible that these inorganic systems could have mental states or feel pain.
Let a functionalist theory of pain be instantiated by a system the sub-assemblies of which are not such things as C-fibers and reticular systems but telephone lines and offices staffed by people. Perhaps it is a giant robot controlled by an army of human beings that inhabit it. When the theory’s functionally characterized conditions for pain are now met we must say, if the theory is true, that the robot is in pain. That is, real pain, as real as our own, would exist in virtue of the perhaps disinterested and businesslike activities of these bureaucratic teams, executing their proper functions.
As you can see, a robot is not capable of feeling.