Alec Nevala-Lee

Thoughts on art, creativity, and the writing life.


The Chinese Room


In 1980, the philosopher John Searle presented a thought experiment that has become known as the Chinese Room. I first encountered it in William Poundstone’s book Labyrinths of Reason, which describes it as follows:

Imagine that you are confined to a locked room. The room is virtually bare. There is a thick book in the room with the unpromising title What to Do If They Shove Chinese Writing Under the Door. One day a sheet of paper bearing Chinese script is shoved underneath the locked door. To you, who know nothing of Chinese, it contains meaningless symbols, nothing more…You are supposed to scan the text for certain Chinese characters and keep track of their occurrences according to complicated rules outlined in the book…The next day, you receive another sheet of paper with more Chinese writing on it…The book has further instructions for correlating and manipulating the Chinese symbols on the second sheet, and combining this information with your work from the first sheet. The book ends with instructions to copy certain Chinese symbols…onto a fresh sheet of paper. Which symbols you copy depends, in a very complicated way, on your previous work. Then the book says to shove the new sheet under the door of your locked room. This you do.

Unknown to you, the first sheet of Chinese characters was a Chinese short story, and the second sheet was questions about the story, such as might be asked in a reading test…You have been manipulating the characters via a very complicated algorithm written in English…The algorithm is so good that the “answers” you gave are indistinguishable from those that a native speaker of Chinese would give, having read the same story and been asked the same questions.

Searle concludes that this scenario is essentially identical to that of a computer program operating on a set of symbols, and that it refutes the position of strong artificial intelligence, which he characterizes as the belief that “the appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds.” According to Searle, it’s clear that there isn’t any “mind” or “understanding” involved here:

As regards the first claim, it seems to me quite obvious in the example that I do not understand a word of the Chinese stories. I have inputs and outputs that are indistinguishable from those of the native Chinese speaker, and I can have any formal program you like, but I still understand nothing.

I’ve never been convinced by this argument, in part because I approached it through the work of Douglas R. Hofstadter, who calls it “a quintessential ‘bad meme’—a fallacious but contagious virus of an idea, similar to an annoying childhood disease such as measles or chicken pox.” (If it’s a bad meme, it’s one of the all-time greats: the computer scientist Pat Hayes once jokingly defined cognitive science as “the ongoing research program of showing Searle’s Chinese Room Argument to be false.”) The most compelling counterargument, at least to me, is that Searle is deliberately glossing over how this room really would look. As Hofstadter notes, any program capable of performing in the manner described would consist of billions or trillions of lines of code, which would require a library the size of an aircraft carrier. Similarly, even the simplest response would require millions of individual decisions, and the laborious approach that Searle presents here would take years for a single exchange. If you try to envision a version of the Chinese Room that could provide answers in real time, you end up with something considerably more impressive, of which the human being in the room—with whom we intuitively identify—is just a single component. In this case, the real “understanding” resides in the fantastically complicated and intricate system as a whole, a stance of which Searle dismissively writes in his original paper: “It is not easy for me to imagine how someone who was not in the grip of an ideology would find the idea at all plausible.”
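For what it's worth, the "systems" intuition is easier to feel if you sketch the room in code. Here is a deliberately toy version in Python — the story questions, answers, and rulebook entries are invented for illustration, and the whole point is how inadequate a literal lookup table is:

```python
# A toy "Chinese Room": the operator needs no understanding of the
# symbols, only a rulebook mapping input sheets to output sheets.
# The entries below are hypothetical placeholders, not real test data.
RULEBOOK = {
    "谁是主人公?": "一位老渔夫。",      # "Who is the protagonist?" -> "An old fisherman."
    "故事的主题是什么?": "友谊与牺牲。",  # "What is the theme?" -> "Friendship and sacrifice."
}

def operator(sheet: str) -> str:
    """Mechanically look up a reply; no 'understanding' is involved."""
    # Default reply for unmatched input ("no comment").
    return RULEBOOK.get(sheet, "无可奉告。")
```

Hofstadter's objection, restated in these terms, is that a rulebook adequate to open-ended conversation could never be a finite table like this one: the space of possible inputs grows combinatorially, so any system that actually passed the test would have to be vastly larger and more intricately structured than the pencil-and-manual picture suggests — and that system, not the clerk executing it, is the natural locus of any understanding.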

In other news, a lawsuit was filed last week against John Searle and the Regents of the University of California, where he has taught for decades, accusing him of sexual harassment. The plaintiff is a twenty-four-year-old woman, Joanna Ong, who was employed as Searle’s research assistant for three months. The complaint states:

On or about July 22, 2016, after only a week of working together, Searle sexually assaulted Ong. On that date, he asked his previous research assistant to leave his office. He then locked the door behind the assistant and then went directly to Ong to grope her. Professor Searle slid his hands down the back of her spine to her buttocks and told Ong that “they were going to be lovers,” that he had an “emotional commitment to making her a public intellectual,” and that he was “going to love her for a long time.”

When Ong took her story to the director of the John Searle Center for Social Ontology, she was allegedly told that Searle “has had sexual relationships with his students and others in the past in exchange for academic, monetary, or other benefits.” No further attempt was made to investigate or respond to her claim, and the incidents continued. According to Ong, Searle asked her to log onto a “sugar daddy” website on his behalf and watched online pornography in her presence. The complaint adds: “On one occasion, when Ong”—who is Asian-American—“brought up the topic of American Imperialism as a discussion topic, Searle responded: ‘American Imperialism? Oh boy, that sounds great honey! Let’s go to bed and do that right now.’” When Ong complained again, the lawsuit states, she was informed that none of these issues would be addressed, and she ultimately lost her job. Earlier this month, Searle ceased to teach his undergraduate course on “Philosophy of Mind,” with university officials alluding to undisclosed “personal reasons.” As far as I know, neither Searle’s attorney nor anyone at the university has commented on the allegations.

Now let’s get back to the Chinese Room. At its heart, the argument comes down to a contest between dueling intuitions. Proponents of strong artificial intelligence have the intuition, or the “ideology,” that consciousness can emerge from a substrate other than the biological material of the brain, and Searle doesn’t. To support his position, he offers up a thought experiment, which Daniel C. Dennett once called “an intuition pump,” that is skewed to encourage the reader to arrive at a misleading conclusion. As Hofstadter puts it: “Either Searle…[has] a profound disrespect for the depth of the human mind, or—far more likely—he knows it perfectly well but is being coy about it.” It reduces an incomprehensibly complicated system to a user’s manual and a pencil, and it encourages us to identify with a human figure who is really just a cog in a much vaster machine. Even the use of Chinese itself, which Searle says he isn’t sure he could distinguish from “meaningless squiggles,” is a rhetorical trick: it would come off as subtly different to many readers if it involved, say, Hungarian. (In a response to one of his critics, Searle conceives of a system of water pipes in which “each water connection corresponds to a synapse in the Chinese brain,” while a related scenario asks what would happen if every Chinese citizen were asked to play the role of a single neuron. I understand that these thought experiments are taking their cues from Searle’s original paper, but maybe we should just leave the Chinese alone.) And while I don’t know if Searle’s actions amounted to sexual harassment, Ong’s sense of humiliation seems real enough, which implies that he was guilty, if nothing else, of a failure of empathy—which is really just a word for our intuition about the inner life of another person.

In many cases, sexual harassment can be generously viewed as a misreading of what another person needs, wants, or feels, and it’s often a willful one: the harasser skews the evidence to justify a pattern of behavior that he has already decided to follow. If the complaint can be believed, Searle evidently has trouble empathizing with or understanding minds that are different from his own. Maybe he even convinced himself that he was in the right. But it wouldn’t have been the first time.

Written by nevalalee

March 27, 2017 at 9:07 am

The dancer from the dance


The Voyager golden record

Note: Every Friday, The A.V. Club, my favorite pop cultural site on the Internet, throws out a question to its staff members for discussion, and I’ve decided that I want to join in on the fun. This week’s topic: “What one piece of pop culture would you use to teach an artificial intelligence what it means to be human?”

When I was growing up, one of the books I browsed through endlessly was Murmurs of Earth by Carl Sagan, which told the story behind the Voyager golden records. Attached to the two Voyager spacecraft and engraved with instructions for playback, each record was packed with greetings in multiple languages, sounds, encoded images of life on earth, and, most famously, music. The musical selection opens with the first movement of Bach’s Brandenburg Concerto No. 2, which is about as solid a choice as it gets, and the remaining tracks are eclectic and inspired, ranging from a Pygmy girls’ initiation song to Blind Willie Johnson’s “Dark Was the Night, Cold Was the Ground.” (The inclusion of “Johnny B. Goode” led to a legendary joke on Saturday Night Live, purporting to predict the first message from an alien civilization: “Send more Chuck Berry.”) Not included, alas, was “Here Comes the Sun,” which the Beatles were happy to contribute, only to be vetoed by their record company. Evidently, EMI was concerned about the distribution of royalties from any commercial release of the disc—which says more about our society than we’d like any alien culture to know.

Of course, the odds of either record ever being found and played are infinitesimal, but it was still a valuable exercise. What, exactly, does it mean to be us, and how can we convey this to a nonhuman intelligence? Other solutions have been proposed, some simpler and more elegant than others. In The Lives of a Cell, Lewis Thomas writes:

Perhaps the safest thing to do at the outset, if technology permits, is to send music. This language may be the best we have for explaining what we are like to others in space, with least ambiguity. I would vote for Bach, all of Bach, streamed out into space, over and over again. We would be bragging of course, but it is surely excusable to put the best possible face on at the beginning of such an acquaintance. We can tell the harder truths later.

If such thought experiments so often center on music, it’s because we intuitively see it as our most timeless, universal production, even if that’s as much a cultural construct as anything else. All art, Walter Pater says, aspires to the condition of music, in which form and content can’t be separated, so it’s natural to regard it as the best we have to offer.

Ballets Russes

Yet music, for all its merits, only hints at a crucial aspect of human existence: its transience. It’s true that every work of music has a beginning and an end, but once written, it potentially exists forever—if not as a single performance, then as an act of crystallized thought—and it can be experienced in pretty much the form that Bach or Beethoven intended. In that sense, it’s an idealized, aspirational, and not particularly accurate representation of human life, in which so much of what matters is ephemeral and irreproducible. We may never have a chance to explain this to an alien civilization, but it’s likely that we’ll have to convey it sooner or later to another form of nonhuman consciousness that arises closer to home. Assuming we’re not convinced, like John Searle, of the philosophical impossibility of artificial intelligence, it’s only a matter of time before we have to take this problem seriously. And when we do, it’s our sense of mortality and impermanence that might pose the greatest obstacle to mutual comprehension. Unless its existence is directly threatened, as with HAL in 2001, an A.I., which is theoretically immortal, might have trouble understanding how we continue to find meaning in a life that is defined largely by the fact that it ends.

When I ask myself what form of art expresses this fact the most vividly, it has to be dance. And although I’d be tempted to start with The Red Shoes, my favorite movie of all time, there’s an even better candidate: the extraordinary documentary Ballets Russes, available now for streaming on Hulu, which celebrates its tenth anniversary this year. (I didn’t even realize this until I looked up its release date shortly before typing this sentence, which is just another reminder of how quickly time slips away.) Just as the Voyager record was a kind of exercise to determine what art we find most worthy of preservation, the question of what to show a nonhuman intelligence is really more about what works can teach us something about what it means to be human. Ballets Russes qualifies as few other movies do: I welled up with tears within the first minute, which juxtaposes archival footage of dancers in their prime with the same men and women sixty years later. In the space of a cut, we see the full mystery of human existence, and it’s all the more powerful when we reflect that these artists have devoted their lives to creating a string of moments that can’t be recaptured—as we all do, in our different ways. An artificial intelligence might wonder if there was any point. I don’t have an answer to that. But if one exists at all, it’s here.
