Alec Nevala-Lee

Thoughts on art, creativity, and the writing life.

Posts Tagged ‘Wired’

The planetary chauvinists

In a profile in the latest issue of Wired, the journalist Steven Levy speaks at length with Jeff Bezos, the world’s richest man, about his dream of sending humans permanently into space. Levy was offered a rare glimpse into the operations of the Amazon founder’s spaceflight company, Blue Origin, but it came with one condition: “I had to promise that, before I interviewed [Bezos] about his long-term plans, I would watch a newly unearthed 1975 PBS program.” He continues:

So one afternoon, I opened my laptop and clicked on the link Bezos had sent me. Suddenly I was thrust back into the predigital world, where viewers had more fingers than channels and remote shopping hadn’t advanced past the Sears catalog. In lo-res monochrome, a host in suit and tie interviews the writer Isaac Asimov and physicist Gerard O’Neill, wearing a cool, wide-lapeled blazer and white turtleneck. To the amusement of the host, O’Neill describes a future where some ninety percent of humans live in space stations in distant orbits of the blue planet. For most of us, Earth would be our homeland but not our home. We’d use it for R&R, visiting it as we would a national park. Then we’d return to the cosmos, where humanity would be thriving like never before. Asimov, agreeing entirely, called resistance to the concept “planetary chauvinism.”

The discussion, which was conducted by Harold Hayes, was evidently lost for years before being dug up in a storage locker by the Space Studies Institute, the organization that O’Neill founded in the late seventies. You can view the entire program here, and it’s well worth watching. At one point, Asimov, whom Hayes describes as “our favorite jack of all sciences,” alludes briefly to my favorite science fiction concept, the gravity gauge: “Well once you land on the moon, you know the moon is a lot easier to get away from than the earth is. The earth has a gravity six times as strong as that of the moon at the surface.” (Asimov must have known all of this without having to think twice, but I’d like to believe that he was also reminded of it by The Moon is a Harsh Mistress.) And in response to the question of whether he had ever written about space colonies in his own fiction, Asimov gives his “legendary” response:

Nobody did, really, because we’ve all been planet chauvinists. We’ve all believed people should live on the surface of a planet, of a world. I’ve had colonies on the moon—so have a hundred other science fiction writers. The closest I came to a manufactured world in free space was to suggest that we go out to the asteroid belt and hollow out the asteroids, and make ships out of them [in the novelette “The Martian Way”]. It never occurred to me to bring the material from the asteroids in towards the earth, where conditions are pleasanter, and build the worlds there.

Of course, it isn’t entirely accurate that science fiction writers had “all” been planet chauvinists—Heinlein had explored similar concepts in such stories as “Waldo” and “Delilah and the Space Rigger,” and I’m sure there are other examples. (Asimov had even discussed the idea ten years earlier in the essay “There’s No Place Like Spome,” which he later described as “an anticipation, in a fumbling sort of way, of Gerard O’Neill’s concept of space settlements.”) And while there’s no doubt that O’Neill’s notion of a permanent settlement in space was genuinely revolutionary, there’s also a sense in which Asimov was the last writer you’d expect to come up with it. Asimov was a notorious acrophobe and claustrophile who hated flying and suffered a panic attack on the roller coaster at Coney Island. When he was younger, he loved enclosed spaces, like the kitchen at the back of his father’s candy store, and he daydreamed about running a newsstand on the subway, where he could put up the shutters and just read magazines. Years later, he refused to go out onto the balcony of his apartment, which overlooked Central Park, because of his fear of heights, and he was always happiest while typing away in his office. And his personal preferences were visible in the stories that he wrote. The theme of an enclosed or underground city appears in such stories as The Caves of Steel, while The Naked Sun is basically a novel about agoraphobia. In his interview with Hayes, Asimov speculates that space colonies will attract people looking for an escape from earth: “Once you do realize that you have a kind of life there which represents a security and a pleasantness that you no longer have on earth, the difficulty will be not in getting people to go but in making them line up in orderly fashion.” But he never would have gone there voluntarily.

Yet this is a revealing point in itself. Unlike Heinlein, who dreamed of buying a commercial ticket to the moon, Asimov never wanted to go into space. He just wanted to write about it, and he was better—or at least more successful—at this than just about anybody else. (In his memoirs, Asimov recalls taping the show with O’Neill on January 7, 1975, adding that he was “a little restless” because he was worried about being late for dinner with Lester and Judy-Lynn del Rey. After he was done, he hailed a cab. On the road, as they were making the usual small talk, the driver revealed that he had once wanted to be a writer. Asimov, who hadn’t mentioned his name, told him consolingly that no one could make a living as a writer anyway. The driver responded: “Isaac Asimov does.”) And the comparison with Bezos is an enlightening one. Bezos obviously built his career on books, and he was a voracious reader of science fiction in his youth, as Levy notes: “[Bezos’s] grandfather—a former top Defense Department official—introduced him to the extensive collection of science fiction at the town library. He devoured the books, gravitating especially to Robert Heinlein and other classic writers who explored the cosmos in their tales.” With his unimaginable wealth, Bezos is in a position remarkably close to that of the protagonist in such stories, with the ability to “painlessly siphon off a billion dollars every year to fund his boyhood dream.” But the ideas that he has the money to put into practice were originated by writers and other thinkers whose minds went in unusual directions precisely because they didn’t have the resources, financial or otherwise, to do it personally. Vast wealth can generate a chauvinism of its own, and the really innovative ideas tend to come from unexpected places. This was true of Asimov, as well as O’Neill, whose work was affiliated in fascinating ways with the world of Stewart Brand and the Whole Earth Catalog. I’ll have more to say about O’Neill—and Bezos—tomorrow.

The twilight of the skeptics

A few years ago, I was working on an idea for a story—still unrealized—that required a sidelong look at the problem of free will. As part of my research, I picked up a copy of the slim book of the same name by the prominent skeptic Sam Harris. At the time, I don’t think I’d even heard of Harris, and I was expecting little more than a readable overview. What I remember about it the most, though, is how it began. After a short opening paragraph about the importance of his subject, Harris writes:

In the early morning of July 23, 2007, Steven Hayes and Joshua Komisarjevsky, two career criminals, arrived at the home of Dr. William and Jennifer Petit in Cheshire, a quiet town in central Connecticut. They found Dr. Petit asleep on a sofa in the sunroom. According to his taped confession, Komisarjevsky stood over the sleeping man for some minutes, hesitating, before striking him in the head with a baseball bat. He claimed that his victim’s screams then triggered something within him, and he bludgeoned Petit with all his strength until he fell silent.

Harris goes on to provide a graphically detailed account, which I’m not going to retype here, of the sexual assault and murder of Petit’s wife and two daughters. Two full pages are devoted to it, in a book that is less than a hundred pages long, and only at the end does Harris come to the point: “As sickening as I find their behavior, I have to admit that if I were to trade places with one of these men, atom for atom, I would be him: there is no extra part of me that could decide to see the world differently or resist the impulse to victimize other people.”

I see what Harris is trying to say here, and I don’t think that he’s even wrong. Yet his choice of example—a horrifying crime that was less than five years old when he wrote Free Will, which the surviving victim, William Petit, might well have read—bothered me a lot. It struck me as a lapse of judgment, or at least of good taste, and it remains the one thing that I really remember about the book. And I’m reminded of it now only because of an excellent article in Wired, “Sam Harris and the Myth of Perfectly Rational Thought,” that neatly lays out many of my old misgivings. The author, Robert Wright, documents multiple examples of his subject falling short of his professed standards, but he focuses on an exchange with the journalist Ezra Klein, whom Harris accused of engaging in “a really indissoluble kind of tribalism, which I keep calling identity politics.” When Klein pointed out that this might be a form of tribal thinking in itself, Harris replied: “I know I’m not thinking tribally.” Wright continues:

Reflecting on his debate with Klein, Harris said that his own followers care “massively about following the logic of a conversation” and probe his arguments for signs of weakness, whereas Klein’s followers have more primitive concerns: “Are you making political points that are massaging the outraged parts of our brains? Do you have your hands on our amygdala and are you pushing the right buttons?”

Just a few years earlier, however, Harris didn’t have any qualms about pushing the reader’s buttons by devoting the first two pages of Free Will to an account of a recent, real-life home invasion that involved unspeakable acts of sexual violence against women—when literally any other example of human behavior, good or bad, would have served his purposes equally well.

Harris denies the existence of free will entirely, so perhaps he would argue that he didn’t have a choice when he wrote those words. More likely, he would say that the use of this particular example was entirely deliberate, because he was trying to make a point by citing the most extreme case of deviant behavior that he could imagine. Yet it’s the placement, as much as the content, that gives me pause. Harris puts it right up front, at the place where most books try for a narrative or argumentative hook, which suggests two possible motivations. One is that he saw it as a great “grabber” opening, and he opportunistically used it for no other reason than to seize the reader’s attention, only to never mention it again. This would be bad enough, particularly for a man who claims to disdain anything so undignified as an appeal to the amygdala, and it strikes me as slightly unscrupulous, in that it literally indicates a lack of scruples. (I’ll have more to say about this word later.) Yet there’s an even more troubling possibility that didn’t occur to me at the time. Harris’s exploitation of these murders, and the unceremonious way in which he moves on, is a signal to the reader. This is the kind of book that you’re getting, it tells us, and if you can’t handle it, you should close it now and walk away. In itself, this amounts to false advertising—the rest of Free Will isn’t much like this at all, even if Harris is implicitly playing to the sort of person who hopes that it might be. More to the point, the callousness of the example probably repelled many readers who didn’t appreciate the rhetorical deployment, without warning, of a recent rape and multiple murder. I was one of them. But I also suspect that many women who picked up the book were just as repulsed. And Harris doesn’t seem to have been overly concerned about this possibility.

Yet maybe he should have been. Wright’s article in Wired includes a discussion of the allegations against the physicist and science writer Lawrence Krauss, who has exhibited a pattern of sexual misconduct convincingly documented by an article in Buzzfeed. Krauss is a prominent member of the skeptical community, as well as friendly toward Harris, who stated after the piece appeared: “Buzzfeed is on the continuum of journalistic integrity and unscrupulousness somewhere toward the unscrupulous side.” Whether or not the site is any less scrupulous than a writer who would use the sexual assault and murder of three women as the opening hook—and nothing else—in his little philosophy book is possibly beside the point. More relevant is the fact that, as Wright puts it, Harris’s characterization of the story’s source “isn’t true in any relevant sense.” Buzzfeed does real journalism, and the article about Krauss is as thoroughly reported and sourced as the most reputable investigations into any number of other public figures. With his blanket dismissal, Harris doesn’t sound much like a man who cares “massively” about logic or rationality. (Neither did Krauss, for that matter, when he said last year in the face of all evidence: “Science itself overcomes misogyny and prejudice and bias. It’s built in.”) But he has good reason to be uneasy. The article in Buzzfeed isn’t just about Krauss, but about the culture of behavior within the skeptical community itself:

What’s particularly infuriating, said Lydia Allan, the former cohost of the Dogma Debate podcast, is when male skeptics ask how they could draw more women into their circles. “I don’t know, maybe not put your hands all over us? That might work,” she said sarcastically. “How about you believe us when we tell you that shit happens to us?”

Having just read the first two pages of Free Will again, I can think of another way, too. But that’s probably just my amygdala talking.

Written by nevalalee

May 21, 2018 at 9:38 am

Crossing the Rhine

Zener cards

Note: I’m out of town today for the Grappling with the Futures symposium at Harvard and Boston University, so I’m republishing a piece from earlier in this blog’s run. This post originally appeared, in a slightly different form, on March 1, 2017.

Two groups of very smart people are looking at the exact same data and coming to wildly different conclusions. Science hates that.

—Katie M. Palmer, Wired

In the early thirties, the parapsychologist J.B. Rhine conducted a series of experiments at Duke University to investigate the existence of extrasensory perception. His most famous test involved a deck of Zener cards, printed variously with the images of a star, a square, three waves, a circle, or a cross, in which subjects were invited to guess the symbol on a card drawn at random. The participants in the study, most of whom were college students, included the young John W. Campbell, who displayed no discernible psychic ability. At least two, however, Adam Linzmayer and Hubert Pearce, were believed by Rhine to have consistently named the correct cards at a higher rate than chance alone would predict. Rhine wrote up his findings in a book titled Extrasensory Perception, which was published in 1934. I’m not going to try to evaluate its merits here—but I do want to note that attempts to replicate his work were made almost at once, and they failed to reproduce his results. Within two years, W.S. Cox of Princeton University had conducted a similar run of experiments, of which he concluded: “There is no evidence of extrasensory perception either in the ‘average man’ or of the group investigated or in any particular individual of that group. The discrepancy between these results and those obtained by Rhine is due either to uncontrollable factors in experimental procedure or to the difference in the subjects.” By 1938, four other studies had taken place, to similar effect. Rhine’s results were variously attributed to methodological flaws, statistical misinterpretation, sensory leakage, or outright cheating, and in consequence, fairly or not, parapsychological research was all but banished from academic circles.

Decades later, another study was conducted, and its initial reception was very different. Its subject was ego depletion, or the notion that willpower draws on a finite reservoir of internal resources that can be reduced with overuse. In its most famous demonstration, the psychologists Roy Baumeister and Dianne Tice of Case Western Reserve University baked chocolate chip cookies, set them on a plate next to a bowl of radishes, and brought a series of participants into the room. They were all told to wait, but some were allowed to eat the cookies, while the others were instructed to snack only on the radishes. Then they were all given the same puzzle to complete, although they weren’t told that it was impossible to solve. According to the study, students who had been asked to stick to the radishes spent an average of just eight minutes on the puzzle, while those who had been allowed to eat the cookies spent nineteen minutes. The researchers concluded that our willpower is a limited quantity, and it can even be exhausted, like a muscle. Their work was enormously influential, and dozens of subsequent studies seemed to confirm it. In 2010, however, an analysis of published papers on the subject was unable to find any ego depletion effect, and last year, it got even worse—an attempt to replicate the core findings, led by the psychologist Martin Hagger, found zero evidence to support its existence. And this is just the most notable instance of what has been called a replication crisis in the sciences, particularly in psychology. One ambitious attempt to duplicate the results of such studies, the Reproducibility Project, has found that only about a third can be reproduced at all.

J.B. Rhine

But let’s consider the timelines involved. With Rhine, it took only two years before an attempt was made to duplicate his work, and two more years for the consensus in the field to turn against it decisively. In the case of ego depletion, twelve years passed before any questions were raised, and close to two decades before the first comprehensive effort to replicate it. And you don’t need to be a psychologist to understand why. Rhine’s results cut so radically against what was known about the brain—and the physical universe—that accepting them would have required a drastic overhaul of multiple disciplines. Not surprisingly, they inspired immediate skepticism, and they were subjected to intense scrutiny right away. Ego depletion, by contrast, was an elegant theory that seemed to confirm ordinary common sense. It came across as an experimental verification of something that we all know instinctively, and it was widely accepted almost at once. Many successful studies also followed in its wake, in large part because experiments that seemed to confirm it were more likely to be submitted for publication, while those that failed to produce interesting results simply disappeared. (When it came to Rhine, a negative result wouldn’t be discarded, but embraced as a sign that the system was working as intended.) Left to itself, the lag time between a study and any serious attempt to reproduce it seems to be much longer when the answer is intuitively acceptable. As the Reproducibility Project has shown, however, when we dispassionately pull studies from psychological journals and try to replicate them without regard to their inherent interest or plausibility, the results are often no better than they were with Rhine. It can leave psychologists sounding a lot like parapsychologists who have suffered a crisis of faith. As the psychologist Michael Inzlicht wrote: “Have I been chasing puffs of smoke for all these years?”

I’m not saying that Rhine’s work didn’t deserve to be scrutinized closely, because it did. And I’m also not trying to argue that social psychology is a kind of pseudoscience. But I think it’s worth considering whether psychology and parapsychology might have more in common than we’d like to believe. This isn’t meant to be a knock against either one, but an attempt to nudge them a little closer together. As Alex Holcombe of the University of Sydney put it: “The more optimistic interpretation of failures to replicate is that many of the results are true, but human behavior is so variable that the original researchers had to get lucky to find the result.” Even Martin Hagger says much the same thing: “I think ego-depletion effect is probably real, but current methods and measures are problematic and make it difficult to find.” The italics, as usual, are mine. Replace “human behavior” and “ego depletion” with “extrasensory perception,” and you end up with a concise version of the most widely cited explanation for why psychic abilities resist scientific verification, which is that these phenomena are real, but difficult to reproduce. You could call this wishful thinking, and in most cases, it probably is. But it also raises the question of whether meaningful phenomena exist that can’t be reproduced in a laboratory setting. Regardless of where you come down on the issue, the answer shouldn’t be obvious. Intuition, for instance, is often described as a real phenomenon that can’t be quantified or replicated, and whether or not you buy into it, it’s worth taking seriously. A kind of collective intuition—or a hunch—is often what determines what results the scientific community is likely to accept. And the fact that this intuition is so frequently wrong means that we need to come to terms with it, even if it isn’t in a lab.

The end of applause

On July 8, 1962, at a performance of Bach’s The Art of Fugue, the pianist Glenn Gould asked his audience not to applaud at the end. Most of his listeners complied, although the request clearly made them uneasy. A few months earlier, Gould had published an essay, “Let’s Ban Applause!”, in which he presented the case against the convention. (I owe my discovery of this piece to an excellent episode of my wife’s podcast, Rework, which you should check out if you haven’t done so already.) Gould wrote:

I have come to the conclusion, most seriously, that the most efficacious step which could be taken in our culture today would be the gradual but total elimination of audience response…I believe that the justification of art is the internal combustion it ignites in the hearts of men and not its shallow, externalized, public manifestations. The purpose of art is not the release of a momentary ejection of adrenaline but is, rather, the gradual, lifelong construction of a state of wonder and serenity.

Later that year, Gould expanded on his position in an interview with The Globe and Mail. When asked why he disliked applause, he replied:

I am rebellious about the institution of the concert—of the mob, which sits in judgment. Some artists seem to place too much reliance on the sweaty mass response of the moment. If we must have a public response at all, I feel it should be much less savage than it is today…Applause tells me nothing. Like any other artist, I can always pull off a few musical tricks at the end of a performance and the decibel count will automatically go up ten points.

The last line is the one that interests me the most. Gould, I think, was skeptical of applause largely because it reminded him of his own worst instincts as a performer—the part that would fall back on a few technical tricks to milk a more enthusiastic response from his audience in the moment. The funny thing about social media, of course, is that it places all of us in this position. If you’ve spent any time on Twitter or Facebook, you know that some messages will generate an enthusiastic response from followers, while others will go over like a lead balloon, and we quickly learn to intuitively sense the difference. Even if it isn’t conscious, it quietly affects the content that we decide to put out there in the world, as well as the opinions and the sides of ourselves that we reveal to others. And while this might seem like a small matter, it has had a real impact on our politics, which are increasingly driven by ideas that thrive in certain corners of the social marketplace, where they inspire the “momentary ejection of adrenaline” that Gould decried. Last month, Antonio García Martínez, a former Facebook employee, wrote on Wired of the logistics of the site’s ad auction system:

During the run-up to the election, the Trump and Clinton campaigns bid ruthlessly for the same online real estate in front of the same swing-state voters. But because Trump used provocative content to stoke social media buzz, and he was better able to drive likes, comments, and shares than Clinton, his bids received a boost from Facebook’s click model, effectively winning him more media for less money. In essence, Clinton was paying Manhattan prices for the square footage on your smartphone’s screen, while Trump was paying Detroit prices.

And in the aftermath, Trump’s attitudes toward important issues often seem driven by the response that he gets on Twitter, which leads to a cycle in which he’s encouraged to become even more like what he already is. (In the past, I’ve drawn a comparison between his evolution and that of L. Ron Hubbard, and I think that it still holds up.) In many ways, Trump is the greatest embodiment so far of the tendency that Gould diagnosed half a century ago, in which the performer is driven to change himself in response to the collective feedback that he receives from applause. It’s no accident that Trump only seems truly alive on camera, in front of a cheering crowd, or while tweeting, or that he displays such an obsession with polls and television ratings. Applause may have told Gould nothing, but it tells Trump everything. Social media was a pivotal factor in his victory, but only at the cost of transforming him into a monster that his younger self—as craven and superficial as he was—might not have recognized. And it worked horrifyingly well. In an interview in January, Trump admonished reporters: “The fact is, you people won’t say this, but I’ll say it: I was a much better candidate than [Clinton]. You always say she was a bad candidate; you never say I was a good candidate. I was one of the greatest candidates. Someday you’re going to say that.” Well, I’m ready to say it now. Before the election, I argued in a blog post that Trump’s candidacy would establish the baseline of the popular vote that could be won by the worst possible campaign, and by any conventional measure, I was right. Like everyone else, though, I missed the larger point. Even as we mocked Trump for boasting about the attendance at his rallies, he was listening to the applause, and he evolved in real time into something that would raise the decibel count to shattering levels.

It almost makes me wish that we had actually banned applause back in the sixties, at least for the sake of a thought experiment. In his essay, Gould sketched a picture of how a concert might conclude under his new model:

In the early stages…the performers may feel a moment of unaccustomed tension at the conclusion of their selection, when they must withdraw to the wings unescorted by the homage of their auditors. For orchestral players this should provide no hazard: a platoon of cellists smartly goose-stepping offstage is an inspiring sight. For the solo pianist, however, I would suggest a sort of lazy-Susan device which would transport him and his instrument to the wings without his having to rise. This would encourage performance of those sonatas which end on a note of serene reminiscence, and in which the lazy Susan could be set gently in motion some moments before the conclusion.

It’s hard to imagine Trump giving a speech in such a situation. If it weren’t for the rallies, he never would have run for president at all, and much of his administration has consisted of his wistful efforts to recapture that glorious moment. (The infamous meeting in which he was showered with praise by his staff members—half a dozen of whom are now gone—now feels like an attempt to recreate that dynamic in a smaller room, and his recent request for a military parade channels that impulse in an even more troubling direction.) Instead of banning applause, of course, we did exactly the opposite. We enabled it everywhere—and then we upvoted its ultimate creation into the White House.

Written by nevalalee

March 16, 2018 at 9:02 am

The closed circle

In his wonderful book The Nature of Order, the architect Christopher Alexander lists fifteen properties that characterize places and buildings that feel alive. (“Life” itself is a difficult concept to define, but we can come close to understanding it by comparing any two objects and asking the one question that Alexander identifies as essential: “Which of the two is a better picture of my self?”) These properties include such fundamentals of design as “Levels of Scale,” “Local Symmetries,” and “Positive Space,” and elements that are a bit trickier to pin down, including “Echoes,” “The Void,” and “Simplicity and Inner Calm.” But the final property, and the one that Alexander suggests is the most important, bears the slightly clunky name of “Not-Separateness.” He points to the Tower of the Wild Goose in China as an example of this quality at its best, and he says of its absence:

When a thing lacks life, is not whole, we experience it as being separate from the world and from itself…In my experiments with shapes and buildings, I have discovered that the other fourteen ways in which centers come to life will make a center which is compact, beautiful, determined, subtle—but which, without this fifteenth property, can still often somehow be strangely separate, cut off from what lies around it, lonely, awkward in its loneliness, too brittle, too sharp, perhaps too well delineated—above all, too egocentric, because it shouts, “Look at me, look at me, look how beautiful I am.”

The fact that he refers to this property as “Not-Separateness,” rather than the more obvious “Connectedness,” indicates that he sees it as a reaction against the marked tendency of architects and planners to strive for distinctiveness and separation. “Those unusual things which have the power to heal…are never like this,” Alexander explains. “With them, usually, you cannot really tell where one thing breaks off and the next begins, because the thing is smokily drawn into the world around it, and softly draws this world into itself.” It’s a characteristic that has little to do with the outsized personalities who tend to be drawn to huge architectural projects, and Alexander firmly skewers the motivations behind it:

This property comes about, above all, from an attitude. If you believe that the thing you are making is self-sufficient, if you are trying to show how clever you are, to make something that asserts its beauty, you will fall into the error of losing, failing, not-separateness. The correct connection to the world will only be made if you are conscious, willing, that the thing you make be indistinguishable from its surroundings; that, truly, you cannot tell where one ends and the next begins, and you do not even want to be able to do so.

This doesn’t happen by accident, particularly when millions of dollars and correspondingly inflated egos are involved. (The most blatant way of separating a building from its surroundings is to put your name on it.) And because it explicitly asks the designer to leave his or her cleverness behind, it amounts to the ultimate test of the subordination of the self to the whole. You can do great work and still falter at the end, precisely because of the strengths that allowed you to get that far in the first place.

It’s hard for me to read these words without thinking of Apple’s new headquarters in Cupertino, variously known as the Ring and the Mothership, which is scheduled to open later this year. A cover story in Wired by Steven Levy describes it in enraptured terms, in which you can practically hear Also Sprach Zarathustra:

As we emerge into the light, the Ring comes into view. As the Jeep orbits it, the sun glistens off the building’s curved glass surface. The “canopies”—white fins that protrude from the glass at every floor—give it an exotic, retro-future feel, evoking illustrations from science fiction pulp magazines of the 1950s. Along the inner border of the Ring, there is a walkway where one can stroll the three-quarter-mile perimeter of the building unimpeded. It’s a statement of openness, of free movement, that one might not have associated with Apple. And that’s part of the point.

There’s a lot to unpack here, from the reference to pulp science fiction to the notion of “orbiting” the building to the claim that the result is “a statement of openness.” As for the contrary view, here’s what another article in Wired, this one by Adam Rogers, had to say about it a month later:

You can’t understand a building without looking at what’s around it—its site, as the architects say. From that angle, Apple’s new [headquarters] is a retrograde, literally inward-looking building with contempt for the city where it lives and cities in general. People rightly credit Apple for defining the look and feel of the future; its computers and phones seem like science fiction. But by building a mega-headquarters straight out of the middle of the last century, Apple has exacerbated the already serious problems endemic to twenty-first-century suburbs like Cupertino—transportation, housing, and economics. Apple Park is an anachronism wrapped in glass, tucked into a neighborhood.

Without delving into the economic and social context, which a recent article in the New York Times explores from another perspective, I think it’s fair to say that Apple Park is an utter failure from the point of view of “Not-Separateness.” But this isn’t surprising. Employees may just be moving in now, but its public debut dates back to June 7, 2011, when Steve Jobs himself pitched it to the Cupertino City Council. Jobs was obsessed by edges and boundaries, both physical and virtual, insisting that the NeXT computer be a perfect cube and introducing millions of consumers to the word “bezel.” Compare this to what Alexander writes of boundaries in architecture:

In things which have not-separateness, there is often a fragmented boundary, an incomplete edge, which destroys the hard line…Often, too, there is a gradient of the boundary, a soft edge caused by a gradient in which scale decreases…so that at the edge it seems to melt indiscernibly into the next thing…Finally, the actual boundary is sometimes rather careless, deliberately placed to avoid any simple complete sharp cutting off of the thing from its surroundings—a randomness in the actual boundary line which allows the thing to be connected to the world.

The italics are mine, because it’s hard to imagine anything less like Jobs or the company he created. Apple Park is being positioned as Jobs’s posthumous masterpiece, which reminds me of the alternate wording to Alexander’s one question: “Which one of these two things would I prefer to become by the day of my death?” (If the building is a monument to Jobs, it’s also a memorial to the ways in which he shaded imperceptibly into Trump, who also has a fixation with borders.) It’s the architectural equivalent of the design philosophy that led Apple to glue in its batteries and made it impossible to upgrade the perfectly cylindrical Mac Pro. Apple has always loved the idea of a closed system, and now its employees get to work in one.

Written by nevalalee

July 5, 2017 at 8:59 am

The Watergate Fix

Gore Vidal

“I must get my Watergate fix every morning,” Gore Vidal famously said to Dick Cavett in the final days of the Nixon administration. In his memoir In Joy Still Felt, Isaac Asimov writes: “I knew exactly what he meant.” He elaborates:

I knew we had [Nixon]…From that point on, I took to combing the Times from cover to cover every morning, skipping only the column by Nixon’s minion William Safire. I sometimes bought the New York Post so I could read additional commentary. I listened to every news report on the radio.

I read and listened with greater attention and fascination than in even the darkest days of World War II. Thus my diary entry for May 11, 1973, says, “Up at six to finger-lick the day’s news on Watergate.”

I could find no one else as hooked on Watergate as I was, except for Judy-Lynn [del Rey]. Almost every day, she called me or I called her and we would talk about the day’s developments in Watergate. We weren’t very coherent and mostly we laughed hysterically.

Now skip ahead four decades, and here’s what Wired reporter Marcus Wohlsen wrote earlier this week of a “middle-age software developer” with a similar obsession:

Evan is a poll obsessive, FiveThirtyEight strain—a subspecies I recognize because I’m one of them, too. When he wakes up in the morning, he doesn’t shower or eat breakfast before checking the Nate Silver-founded site’s presidential election forecast (sounds about right). He keeps a tab open to FiveThirtyEight’s latest poll list; a new poll means new odds in the forecast (yup). He gets push alerts on his phone when the forecast changes (check). He follows the 538 Forecast Bot, a Twitter account that tweets every time the forecast changes (same). In all, Evan says he checks in hourly, at least while he’s awake (I plead the Fifth).

Wohlsen notes that the design of FiveThirtyEight encourages borderline addictive behavior: its readers are like the lab rats who repeatedly push a button to send a quick, pleasurable jolt coursing through their nervous systems. The difference is that polls and political news, no matter how favorable to one side, deliver a more complicated mix of emotions—hope, uncertainty, apprehension. But as long as the numbers are trending in the right direction, we can’t get enough of them.

Princeton Election Consortium

And it’s striking to see how little the situation has changed since the seventies, apart from a few advances in technology. Asimov had to buy two physical newspapers to get his fix, while we can click effortlessly from one source to another. On the weekend that the Access Hollywood recording was released, I found myself cycling nonstop between the New York Times, Politico, Talking Points Memo, the Washington Post—where I rapidly used up my free articles for the month—and other political sites, like Daily Kos, that I hadn’t visited in years. (I don’t think I’ve been as hooked on political analysis since George W. Bush nominated Harriet Miers to the Supreme Court, which still stands out as a golden age in my memories.) Like Asimov, who skipped William Safire’s column, I also know what to avoid. Instead of calling a friend to talk about the day’s developments, I read blog posts and comment threads. Not surprisingly, the time I spend on all this is inversely correlated to the trajectory of the Trump campaign. During a rough stretch in September, I deleted FiveThirtyEight from my bookmarks because it was causing me more anxiety than it was worth. I still haven’t put it back, perhaps on the assumption that if I have to type it into my address bar, rather than clicking on a shortcut, I won’t go back as often. In practice, I’ll often use a quick spin through FiveThirtyEight, Politico, and Talking Points Memo as my reward for getting through half an hour of work, which is the only positive behavior on my part to come out of this entire election.

Of course, there are big differences between Vidal and Asimov’s Watergate fix and its equivalent today. By the time Haldeman and Ehrlichman resigned, Nixon’s goose was pretty much cooked, and someone like Asimov could take unmixed pleasure in his comeuppance. Trump, by contrast, could still get elected. More surprising is the fact that the overall arc of this presidential campaign has been mostly unresponsive to the small daily movements that analytics are meant to track. As Sam Wang of the Princeton Election Consortium recently pointed out, this election has actually been less volatile than usual, and its shape has remained essentially unchanged for months, with Clinton holding a national lead of between two and six points over Trump. It seems noisy, but only because every move is subjected to such scrutiny. In other words, our obsession with polls creates the psychological situation that we’re presumably trying to avoid: we’re subjectively experiencing this race as more volatile than it really is. Our polling fix isn’t rational, at least not from the point of view of minimizing anxiety. As Wohlsen says in Wired, it’s more like a species of magical thinking, in which we place our trust in a certain kind of magician—a data wizard—to see us through an election in which the facts have been treated with disdain. At my lowest moments last month, I would console myself with the thought of Elan Kriegel, Clinton’s director of analytics. The details didn’t matter; it was enough that he existed, and that I could halfway believe that he had access to magic that allowed him to exercise some degree of control over an inherently uncontrollable future. Or as the Wired headline put it: “I just want Nate Silver to tell me it’s all going to be fine.”

The forced error

R2-D2 and J.J. Abrams on the set of The Force Awakens

Note: Oblique spoilers follow for Star Wars Episode VII: The Force Awakens.

At this point, it might seem that there isn’t anything new left to say about The Force Awakens, but I’d like to highlight a revealing statement from director J.J. Abrams that, to my knowledge, hasn’t been given its due emphasis before. It appears in an interview that was published by Wired on November 9, or over a month in advance of the film’s release. When the reporter Scott Dadich asks if there are any moments from the original trilogy that stand out to him, Abrams replies:

It would be a much shorter conversation to talk about the scenes that didn’t stand out. As a fan of Star Wars, I can look at those movies and both respect and love what they’ve done. But working on The Force Awakens, we’ve had to consider them in a slightly different context. For example, it’s very easy to love “I am your father.” But when you think about how and when and where that came, I’m not sure that even Star Wars itself could have supported that story point had it existed in the first film, Episode IV. Meaning: It was a massively powerful, instantly classic moment in movie history, but it was only possible because it stood on the shoulders of the film that came before it. There had been a couple of years to allow the idea of Darth Vader to sink in, to let him emerge as one of the greatest movie villains ever. Time built up everyone’s expectations about the impending conflict between Luke and Vader. If “I am your father” had been in the first film, I don’t know if it would have had the resonance. I actually don’t know if it would have worked.

Taken in isolation, the statement is interesting but not especially revelatory. When we revisit it in light of what we now know about The Force Awakens, however, it takes on a startling second meaning. It’s hard to read it today without thinking of a particular reveal about one new character and the sudden departure of another important player. When I first saw the film, without having read the interview in Wired, it immediately struck me that these plot points were in the wrong movie: they seemed like moments that would have been more at home in the second installment of the sequel trilogy, and not merely because the sequence in question openly pays homage to the most memorable scene in The Empire Strikes Back. To venture briefly into spoilerish territory: if Kylo Ren had been allowed to dominate the entirety of The Force Awakens “as one of the greatest movie villains ever,” to use Abrams’s own words, the impact of his actions and what we learn about his motivations would have been far more powerful—but only if they had been saved for Episode VIII. As it stands, we’re introduced to Ren and his backstory all but in the same breath, and it can’t help but feel rushed. Similarly, when another important character appears and exits the franchise within an hour or so of screentime, it feels like a wasted opportunity. They only had one chance to do it right, and compressing what properly should have been the events of two films into one is a real flaw in an otherwise enjoyable movie.

The Empire Strikes Back

And what intrigues me the most about the quote above is that Abrams himself seems agonizingly aware of the issue. When you read over his response again, it becomes clear that he isn’t quite responding to the question that the interviewer asked. Instead, he goes off on a tangent that wouldn’t even have occurred to him if it hadn’t already been on his mind. I have no way of looking into Abrams’s brain, Jedi style, but it isn’t difficult to imagine what happened. Abrams, Lawrence Kasdan, and Michael Arndt—the three credited screenwriters, which doesn’t even take into account the countless other producers and executives who took a hand in the process—must have discussed the timing of these plot elements in detail, along with so many others, and at some point, the question would have been raised as to whether they might not better be saved for a later movie. Abrams’s statement to Wired feels like an undigested excerpt from those discussions that surfaced in an unrelated context, simply because he happened to remember it in the course of the interview. (Anyone who has ever been interviewed, and who wants to give well-reasoned responses, will know how this works: you often end up repurposing thoughts and material that you’ve worked up elsewhere, if they have even the most tangential relevance to the topic at hand.) If you replace “Darth Vader” with “Kylo Ren” in Abrams’s reply, and make a few other revisions to square it with Episode VII, you can forensically reconstruct one side of an argument that must have taken place in the offices of Bad Robot on multiple occasions. And Abrams never forgot it.

So what made him decide to ignore an insight so good that he practically internalized it? There’s no way of knowing for sure, but it seems likely that contract negotiations with one of the actors involved—and those who have seen the movie will know which one I mean—affected the decision to move this scene up to where it appears now. Dramatically speaking, it’s in the wrong place, but Abrams and his collaborators may not have had a choice. As he implies throughout this interview and elsewhere, The Force Awakens was made under conditions of enormous pressure: it isn’t just a single movie, but the opening act in the renewal of a global entertainment franchise, and the variables involved are so complicated that no one filmmaker can have full control over the result. (It’s also tempting to put some of the blame on Abrams’s directing style, which rushes headlong from one plot point to another as if this were the only new Star Wars movie we were ever going to get. The approach works wonderfully in the first half, which is refreshingly eager to get down to business and slot the necessary pieces into place, but it starts to backfire in the second and third acts, which burn through big moments so quickly that we’re left scrambling to feel anything about what we’ve seen.) Tomorrow, I’m going to talk a little more about how the result left me feeling both optimistic and slightly wary of what the future of Star Wars might bring. But in this particular instance, Abrams made an error. Or he suspects that he did. And when he searches his feelings, he knows it to be true.

Written by nevalalee

December 28, 2015 at 10:16 am

Cool tools and hot ideas

The Next Whole Earth Catalog

In 1968, in a garage in Menlo Park, California, a remarkable publication was born. It was laid out with an IBM Selectric typewriter and a Polaroid industrial camera, in an office furnished with scrap doors and plywood, and printed cheaply on rough paper. Modeled after the L.L. Bean catalog, it opened with Buckminster Fuller, ended with the I Ching, and included listings for portable sawmills, kits for geodesic domes, and books on everything from astronomy to beekeeping to graphic design, interspersed with a running commentary that cheerfully articulated an entire theory of civilization. The result was the original manual of soft innovation, a celebration of human ingenuity that sold millions of copies while retaining an endearing funkiness, and it profoundly influenced subcultures as different as the environmental movement and Silicon Valley. As I’ve said before, the Whole Earth Catalog is both a guide to good reading and living and a window onto an interlocking body of approaches to managing the complicated problems that modern life presents. Its intended readers, both then and now, are ambitious, but resistant to specialization; interested in technology as a means of greater personal freedom; and inspired by such practical intellectuals as Fuller, Gregory Bateson, and Catalog founder Stewart Brand himself, who move gracefully from one area of expertise to the next.

And it had an enormous impact on my own life. I grew up in the Bay Area, not far from where the Catalog was born, and I’ve been fascinated by it for over twenty years. Leafing through its oversized pages was like browsing through the world’s greatest bookstore, and as I photocopied my favorite sections and slowly acquired the works it recommended, it subtly guided my own reading and thinking. In its physical format, with its double spreads on subjects from computers to ceramics, it emphasized the connections between disciplines, and the result was a kind of atlas for living in boundary regions, founded on an awareness of how systems evolve and how individuals fit within the overall picture. I became a novelist because it seemed like the best way of living as a generalist, tackling big concepts, and studying larger patterns. It provided me with an alternative curriculum that took up where my university education left off, an array of tools for addressing my own personal and professional challenges. Looking at my bookshelves now, the number of books whose presence in my life I owe to the Catalog is staggering: A Pattern Language, Zen in English Literature and Oriental Classics, On Growth and Form, The Plan of St. Gall in Brief, and countless others.

Cool Tools by Kevin Kelly

The Catalog has been out of print for a long time, and although the older editions are still available in PDF form online, I’ve often wished for an updated version that could survey the range of books and tools that have appeared in the fifteen years since the last installment was published. Much to my delight, I’ve recently discovered that such a work exists, in a somewhat different form. Kevin Kelly, a former Brand protégé who later became the executive editor of Wired, once wrote: “It is no coincidence that the Whole Earth Catalogs disappeared as soon as the web and blogs arrived. Everything the Whole Earth Catalogs did, the web does better.” It seems that Kelly has slightly modified his point of view, because last year he released Cool Tools, an oversized, self-published overview of hardware, gadgets, books, and software that comes as close as anything in decades to recapturing the spirit of the Catalog itself. Cool Tools originally appeared as a series of reviews on Kelly’s blog, but in book form, it gains a critical sense of serendipity: you’re constantly exposed to ideas that you never knew you needed. I’ve been browsing through it happily for days, and I’ve already found countless books that I can’t believe I didn’t know about before: Scott McCloud’s Making Comics, Richard D. Pepperman’s The Eye is Quicker, James P. Carse’s Finite and Infinite Games, and many more.

I can quibble with Cool Tools in small ways. Personally, I'd prefer to see more books and fewer gadgets, and I especially wish that Kelly hadn't confined himself to works that were still in print: some of the most exciting, interesting ideas can be found in authors who have fallen off the radar, and with used copies so easily accessible online, there's no reason not to point readers in their direction. And we get only glimpses of the overarching philosophy of life that was so great a part of the original Catalog's appeal. But I'm still profoundly grateful that it exists. It serves as a kind of sanity check, or a course correction, and I'm gratified whenever I see something in its pages that I've discovered on my own. My favorite entry may be for the Honda Fit, my own first car, because it sits next to a parallel entry for the blue Volvo 240 station wagon—"the cheapest reliable used car"—that my parents owned when I was growing up in the East Bay. I spent a lot of time in both vehicles, which serves as a reminder that who I am and what I might become are inextricably tied to the culture from which the Catalog emerged. Cool Tools probably won't have the cultural impact of its predecessors, but it's going to change more than a few lives, especially if it falls into the hands of bright, curious kids. And that's more than enough.

Written by nevalalee

April 21, 2014 at 9:39 am

“A dream about going to Shaolin Temple…”

leave a comment »

Wired: Speaking of writers of color, I saw you say that one of your ambitions was to be a Dominican Samuel R. Delany or Octavia E. Butler.

Díaz: Did I actually say that? That’s so deranged! I think that was one of my younger ambitions. Sort of like when you used to have a dream about going to Shaolin Temple. Me trying to be Octavia Butler or Samuel R. Delany really is like the 40-year-old guy wistfully thinking about how if only he had run away when he was 14 and gone on a tramp steamer off to Hong Kong, and from there slipped across the border into the new territories and gone up to Shaolin Temple and practiced his wushu, my god, if only I’d done that I’d be already the absolute master killer. Let me tell you something, that tramp steamer has sailed and gone, my friend. I’ll be lucky if I can write another two books before I’m in the grave.

Junot Díaz, to Wired

Written by nevalalee

October 7, 2012 at 9:50 am

Quote of the Day

with one comment

You can’t possibly get a good technology going without an enormous number of failures. It’s a universal rule. If you look at bicycles, there were thousands of weird models built and tried before they found the one that really worked. You could never design a bicycle theoretically. Even now, after we’ve been building them for 100 years, it’s very difficult to understand just why a bicycle works—it’s even difficult to formulate it as a mathematical problem. But just by trial and error, we found out how to do it, and the error was essential.

Freeman Dyson, in a Wired interview with Stewart Brand

Written by nevalalee

May 18, 2011 at 8:01 am

The Pixar problem

leave a comment »

A week ago, in my appreciation of Hayao Miyazaki, I wrote the following about Pixar:

Pixar has had an amazing run, but it’s a singularly corporate excellence. The craft, humor, and love of storytelling that we see in the best Pixar movies feels learned, rather than intuitive; it’s the work of a Silicon Valley company teaching itself to be compassionate.

Which I still believe is true. But the more I think about this statement, the more I realize that it raises as many questions as it answers. Yes, Pixar's excellence is a corporate one—but why does it strive to be compassionate and creative, when so many other studios seem ready to settle for less? When you're faced with Pixar's historic run of eleven quality blockbusters in fifteen years, it's easy to fall into the trap of saying that Pixar's culture is simply different from that of other studios, or that it has a special, mysterious genius for storytelling, which, again, simply avoids the question. So what is it, really, that sets Pixar apart?

It’s tempting to reduce it to a numbers game. Pixar releases, at most, one movie per year, while the other major studios release dozens. This means that Pixar can devote all of its considerable resources to a single flagship project, rather than spreading them across a larger slate of films. If every studio released only one picture a year, it’s nice to think that, instead of a hundred mostly forgettable movies, we’d get a handful of big, ambitious films like Inception, or even Avatar. Of course, we might also end up with a dozen variations on Transformers: Revenge of the Fallen. So I suspect that there’s something else going on here that can’t be explained by the numbers alone.

And as much as I hate to say it, Pixar's special quality does, in fact, seem to boil down to a question of culture. So where does culture come from? Two places. The first, more accidental source is history: studios, like artists, tend to be subconsciously defined by their first successful works. In Pixar's case, it was Toy Story; for DreamWorks, it was Shrek. And the contrast between these two films goes a long way toward accounting for the differences between their respective studios. Because its first movie was a classic, Pixar was encouraged to aim high, especially once it saw how audiences responded. If the first Pixar movie had been, say, Cars, I don't think we'd be having this conversation.

The second factor is even more important. For reasons of luck, timing, and corporate politics, the creative side of Pixar is essentially run by John Lasseter, a director of genius. And his genius is less important than the fact that he's a director at all. Most studios are run by men and women who have never directed a movie or written a screenplay, and as talented as some of these executives may be, there's a world of difference between receiving notes from a Wharton MBA and from the man who directed Toy Story. The result, at best, is a climate where criticism is seen as a chance to make a movie better, rather than as interference from overhead. As a recent Wired article on Pixar pointed out:

The upper echelons also subject themselves to megadoses of healthy criticism. Every few months, the director of each Pixar film meets with the brain trust, a group of senior creative staff. The purpose of the meeting is to offer comments on the work in progress, and that can lead to some major revisions. “It’s important that nobody gets mad at you for screwing up,” says Lee Unkrich, director of Toy Story 3. “We know screwups are an essential part of making something good. That’s why our goal is to screw up as fast as possible.” [Italics mine.]

In other words, it isn’t true that Pixar has never made a bad movie: it makes bad movies—or parts of movies—all the time. The difference is that the bad movies are reworked until they get better, which isn’t the case at most other studios. (And at Pixar, if they still aren’t any good, they get canceled.) And because the cultural factors that made this climate possible are as much the result of timing and luck as intentional planning, the situation is more fragile than it seems. A real Pixar flop, with its ensuing loss of confidence, could change things overnight. Which is why, in the end, what I said of Miyazaki is also true of Pixar: if it goes away, we may never see anything like it again.

Written by nevalalee

January 14, 2011 at 12:02 pm

Nolan’s Run

with 3 comments

To continue my recent run of stating the obvious: I know I'm not alone in considering Christopher Nolan to be the most interesting director of the past ten years. In just over a decade, he's gone from Memento to Inception, with The Dark Knight as one big step along the way, a run that ranks with Powell and Pressburger's golden period as one of the most impressive in the history of movies. And his excellent interview with Wired last week, timed to coincide with Inception's release on DVD, serves as a reminder that Nolan's example is valuable for reasons that go far beyond his intelligence, skill, and massive popular success.

Nolan’s artistic trajectory has been a fascinating one. While most artists start with passion and gradually work their way toward craft, Nolan has always been a consummate craftsman, and is just now starting to piece together the emotional side of the equation. He’s been accused of being overly cold and cerebral, a criticism that has some basis in fact. But his careful, deliberate efforts to invest his work with greater emotion—and humor—have been equally instructive. As he says to Wired:

The problem was that I started [Inception] with a heist film structure. At the time, that seemed the best way of getting all the exposition into the beginning of the movie—heist is the one genre where exposition is very much part of the entertainment. But I eventually realized that heist films are usually unemotional. They tend to be glamorous and deliberately superficial. I wanted to deal with the world of dreams, and I realized that I really had to offer the audience a more emotional narrative, something that represents the emotional world of somebody’s mind. So both the hero’s story and the heist itself had to be based on emotional concepts. That took years to figure out. [Italics mine.]

Nolan’s masterstroke, of course, was to make the ghost that haunts Inception—originally that of a dead business partner—the main character’s wife. He also made strategic choices about where to keep things simple, in order to pump up the complexity elsewhere: the supporting cast is clearly and simply drawn, as is the movie’s look, which gives necessary breathing room to the story’s multiple layers. For a writer, the lesson is obvious: if you’re going to tell a complicated story, keep an eye out for ways to ease up on the reader in other respects.

In the case of Inception, the result is a film that is both intellectually dense and emotionally involving, and which famously rewards multiple viewings. In that light, this exchange is especially interesting:

Wired: I know that you’re not going to tell me [what the ending means], but I would have guessed that really, because the audience fills in the gaps, you yourself would say, “I don’t have an answer.”

Nolan: Oh no, I’ve got an answer.

Wired: You do?!

Nolan: Oh yeah. I’ve always believed that if you make a film with ambiguity, it needs to be based on a sincere interpretation. If it’s not, then it will contradict itself, or it will be somehow insubstantial and end up making the audience feel cheated. I think the only way to make ambiguity satisfying is to base it on a very solid point of view of what you think is going on, and then allow the ambiguity to come from the inability of the character to know, and the alignment of the audience with that character.

Wired: Oh. That’s a terrible tease.

Well, yes. But it’ll be interesting to see where Nolan goes from here. After Inception and The Dark Knight, he has as much power as any director in Hollywood. (Worldwide, Inception is the fourth highest-grossing movie in history based on an original screenplay, behind only Avatar, Titanic, and Finding Nemo.) He continues to grow in ambition and skill with every film. He seems determined to test the limits of narrative complexity in movies intended for a mass audience.

And he’s still only forty years old.

Written by nevalalee

December 11, 2010 at 7:39 am
