Alec Nevala-Lee

Thoughts on art, creativity, and the writing life.

The Chinese Room

In 1980, the philosopher John Searle presented a thought experiment that has become known as the Chinese Room. I first encountered it in William Poundstone’s book Labyrinths of Reason, which describes it as follows:

Imagine that you are confined to a locked room. The room is virtually bare. There is a thick book in the room with the unpromising title What to Do If They Shove Chinese Writing Under the Door. One day a sheet of paper bearing Chinese script is shoved underneath the locked door. To you, who know nothing of Chinese, it contains meaningless symbols, nothing more…You are supposed to scan the text for certain Chinese characters and keep track of their occurrences according to complicated rules outlined in the book…The next day, you receive another sheet of paper with more Chinese writing on it…The book has further instructions for correlating and manipulating the Chinese symbols on the second sheet, and combining this information with your work from the first sheet. The book ends with instructions to copy certain Chinese symbols…onto a fresh sheet of paper. Which symbols you copy depends, in a very complicated way, on your previous work. Then the book says to shove the new sheet under the door of your locked room. This you do.

Unknown to you, the first sheet of Chinese characters was a Chinese short story, and the second sheet was questions about the story, such as might be asked in a reading test…You have been manipulating the characters via a very complicated algorithm written in English…The algorithm is so good that the “answers” you gave are indistinguishable from those that a native speaker of Chinese would give, having read the same story and been asked the same questions.

Searle concludes that this scenario is essentially identical to that of a computer program operating on a set of symbols, and that it refutes the position of strong artificial intelligence, which he characterizes as the belief that “the appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds.” According to Searle, it’s clear that there isn’t any “mind” or “understanding” involved here:

As regards the first claim, it seems to me quite obvious in the example that I do not understand a word of the Chinese stories. I have inputs and outputs that are indistinguishable from those of the native Chinese speaker, and I can have any formal program you like, but I still understand nothing.

I’ve never been convinced by this argument, in part because I approached it through the work of Douglas R. Hofstadter, who calls it “a quintessential ‘bad meme’—a fallacious but contagious virus of an idea, similar to an annoying childhood disease such as measles or chicken pox.” (If it’s a bad meme, it’s one of the all-time greats: the computer scientist Pat Hayes once jokingly defined cognitive science as “the ongoing research program of showing Searle’s Chinese Room Argument to be false.”) The most compelling counterargument, at least to me, is that Searle is deliberately glossing over how this room really would look. As Hofstadter notes, any program capable of performing in the manner described would consist of billions or trillions of lines of code, which would require a library the size of an aircraft carrier. Similarly, even the simplest response would require millions of individual decisions, and the laborious approach that Searle presents here would take years for a single exchange. If you try to envision a version of the Chinese Room that could provide answers in real time, you end up with something considerably more impressive, of which the human being in the room—with whom we intuitively identify—is just a single component. In this case, the real “understanding” resides in the fantastically complicated and intricate system as a whole, a stance of which Searle dismissively writes in his original paper: “It is not easy for me to imagine how someone who was not in the grip of an ideology would find the idea at all plausible.”
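To see the shape of the trick, it helps to notice what the pencil-and-rulebook picture actually amounts to in computational terms: a lookup table. The following is a hypothetical toy sketch, not anyone's actual program; the rule book, its two entries, and the function name `chinese_room` are all invented for illustration. Hofstadter's objection is precisely that a convincing version of this table would need billions of entries and a library the size of an aircraft carrier.

```python
# Hypothetical toy sketch: the Chinese Room as a bare lookup table.
# The occupant applies the book's rules mechanically, understanding
# neither the questions nor the answers. All entries are invented.

RULE_BOOK = {
    "谁在房间里？": "一个不懂中文的人。",      # "Who is in the room?"
    "故事讲了什么？": "一个人照着书抄符号。",  # "What is the story about?"
}

def chinese_room(sheet: str) -> str:
    """Mechanically look up a reply; no understanding is involved."""
    return RULE_BOOK.get(sheet, "？")  # unknown input: a shrug

reply = chinese_room("谁在房间里？")
print(reply)  # the occupant cannot read what was just "answered"
```

The obvious limitation, which one commenter below also raises, is that a finite table can only replay answers its author anticipated, which is one more reason a version of the room that performed as described would have to be unimaginably larger and stranger than the one Searle asks us to picture.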

In other news, a lawsuit was filed last week against John Searle and the Regents of the University of California, where he has taught for decades, accusing him of sexual harassment. The plaintiff is a twenty-four-year-old woman, Joanna Ong, who was employed as Searle’s research assistant for three months. The complaint states:

On or about July 22, 2016, after only a week of working together, Searle sexually assaulted Ong. On that date, he asked his previous research assistant to leave his office. He then locked the door behind the assistant and then went directly to Ong to grope her. Professor Searle slid his hands down the back of her spine to her buttocks and told Ong that “they were going to be lovers,” that he had an “emotional commitment to making her a public intellectual,” and that he was “going to love her for a long time.”

When Ong took her story to the director of the John Searle Center for Social Ontology, she was allegedly told that Searle “has had sexual relationships with his students and others in the past in exchange for academic, monetary, or other benefits.” No further attempt was made to investigate or respond to her claim, and the incidents continued. According to Ong, Searle asked her to log onto a “sugar daddy” website on his behalf and watched online pornography in her presence. The complaint adds: “On one occasion, when Ong”—who is Asian-American—“brought up the topic of American Imperialism as a discussion topic, Searle responded: ‘American Imperialism? Oh boy, that sounds great honey! Let’s go to bed and do that right now.’” When Ong complained again, the lawsuit states, she was informed that none of these issues would be addressed, and she ultimately lost her job. Earlier this month, Searle ceased to teach his undergraduate course on “Philosophy of Mind,” with university officials alluding to undisclosed “personal reasons.” As far as I know, neither Searle’s attorney nor anyone at the university has commented on the allegations.

Now let’s get back to the Chinese Room. At its heart, the argument comes down to a contest between dueling intuitions. Proponents of strong artificial intelligence have the intuition, or the “ideology,” that consciousness can emerge from a substrate other than the biological material of the brain, and Searle doesn’t. To support his position, he offers up a thought experiment, which Daniel C. Dennett once called “an intuition pump,” that is skewed to encourage the reader to arrive at a misleading conclusion. As Hofstadter puts it: “Either Searle…[has] a profound disrespect for the depth of the human mind, or—far more likely—he knows it perfectly well but is being coy about it.” It reduces an incomprehensibly complicated system to a user’s manual and a pencil, and it encourages us to identify with a human figure who is really just a cog in a much vaster machine. Even the use of Chinese itself, which Searle says he isn’t sure he could distinguish from “meaningless squiggles,” is a rhetorical trick: it would come off as subtly different to many readers if it involved, say, Hungarian. (In a response to one of his critics, Searle conceives of a system of water pipes in which “each water connection corresponds to a synapse in the Chinese brain,” while a related scenario asks what would happen if every Chinese citizen were asked to play the role of a single neuron. I understand that these thought experiments are taking their cues from Searle’s original paper, but maybe we should just leave the Chinese alone.) And while I don’t know if Searle’s actions amounted to sexual harassment, Ong’s sense of humiliation seems real enough, which implies that he was guilty, if nothing else, of a failure of empathy—which is really just a word for our intuition about the inner life of another person.

In many cases, sexual harassment can be generously viewed as a misreading of what another person needs, wants, or feels, and it’s often a willful one: the harasser skews the evidence to justify a pattern of behavior that he has already decided to follow. If the complaint can be believed, Searle evidently has trouble empathizing with or understanding minds that are different from his own. Maybe he even convinced himself that he was in the right. But it wouldn’t have been the first time.

Written by nevalalee

March 27, 2017 at 9:07 am

4 Responses

  1. >a very complicated algorithm written in English

    This part, usually ignored, is key. Who wrote the algorithm?

    The person who writes this algorithm is the conscious mind. Everything else is just translation.

    Imagine the algorithm written by an old hippie Hillary-supporting old-style-rock-and-roller versus a preteen conservative Trump-supporting rapper. Or by any well-rounded fictional character. Or by an untalented hack who doesn’t truly understand a certain well-rounded fictional character but is trying to write in the voice of that character.

    The Chinese room is just a recording of the algorithm author.

    As stated, the Chinese room can only give the answers the algorithm author already created, and can’t generate new answers, not even randomly. The room couldn’t, therefore, pass a true Turing test.

    Suppose I read a story, wrote down in my notebook (in English) answers to questions about the story. Ten years later, you ask the questions and read the answers out of my notebook. Is the notebook intelligent?

    About the other matter…
    I don’t know which is more horrific: What Searle (allegedly) did, or the fact that apparently the only official response of the John Searle Center for Social Ontology was to punish the victim.

    dellstories

    March 27, 2017 at 4:01 pm

  2. Senior male academics using their position of power to sexually assault graduate students and postdocs with relative impunity is not something I have ever seen myself, or received a complaint about, but it is an ongoing source of rumours in academia. While Searle’s theories of AI may tie in with his alleged behaviour, I think that is over-thinking it. Major academics with international reputations and a record of bringing in millions of dollars of funding (on which many people depend for jobs), yet who show ‘unfortunate foibles,’ get protected by the old boys’ network around them, whether they’re researching the nature of intelligence or the magnetic behaviour of superconductors. It’s human nature, sadly, that some fraction of people will, when in a position that permits it with relatively little risk, take advantage of, and hurt, others. Maybe when we’ve got a computer that can do _that_, then we’ll really know AI has come to be.

    Darren

    March 27, 2017 at 4:04 pm

  3. @dellstories: “As stated, the Chinese room can only give the answers the algorithm author already created, and can’t generate new answers, not even randomly. The room couldn’t, therefore, pass a true Turing test.” That’s a very good point. Hofstadter has a nice dialogue, “A Conversation With Einstein’s Brain,” that addresses some of these issues.

    nevalalee

    April 25, 2017 at 5:39 pm

  4. @Darren: You’re right that I was probably overthinking it. But it was hard to resist.

    nevalalee

    April 25, 2017 at 5:39 pm

