Alec Nevala-Lee

Thoughts on art, creativity, and the writing life.

Posts Tagged ‘Wired’

The planetary chauvinists

In a profile in the latest issue of Wired, the journalist Steven Levy speaks at length with Jeff Bezos, the world’s richest man, about his dream of sending humans permanently into space. Levy was offered a rare glimpse into the operations of the Amazon founder’s spaceflight company, Blue Origin, but it came with one condition: “I had to promise that, before I interviewed [Bezos] about his long-term plans, I would watch a newly unearthed 1975 PBS program.” He continues:

So one afternoon, I opened my laptop and clicked on the link Bezos had sent me. Suddenly I was thrust back into the predigital world, where viewers had more fingers than channels and remote shopping hadn’t advanced past the Sears catalog. In lo-res monochrome, a host in suit and tie interviews the writer Isaac Asimov and physicist Gerard O’Neill, wearing a cool, wide-lapeled blazer and white turtleneck. To the amusement of the host, O’Neill describes a future where some ninety percent of humans live in space stations in distant orbits of the blue planet. For most of us, Earth would be our homeland but not our home. We’d use it for R&R, visiting it as we would a national park. Then we’d return to the cosmos, where humanity would be thriving like never before. Asimov, agreeing entirely, called resistance to the concept “planetary chauvinism.”

The discussion, which was conducted by Harold Hayes, was evidently lost for years before being dug up in a storage locker by the Space Studies Institute, the organization that O’Neill founded in the late seventies. You can view the entire program here, and it’s well worth watching. At one point, Asimov, whom Hayes describes as “our favorite jack of all sciences,” alludes briefly to my favorite science fiction concept, the gravity gauge: “Well once you land on the moon, you know the moon is a lot easier to get away from than the earth is. The earth has a gravity six times as strong as that of the moon at the surface.” (Asimov must have known all of this without having to think twice, but I’d like to believe that he was also reminded of it by The Moon is a Harsh Mistress.) And in response to the question of whether he had ever written about space colonies in his own fiction, Asimov gives his “legendary” response:

Nobody did, really, because we’ve all been planet chauvinists. We’ve all believed people should live on the surface of a planet, of a world. I’ve had colonies on the moon—so have a hundred other science fiction writers. The closest I came to a manufactured world in free space was to suggest that we go out to the asteroid belt and hollow out the asteroids, and make ships out of them [in the novelette “The Martian Way”]. It never occurred to me to bring the material from the asteroids in towards the earth, where conditions are pleasanter, and build the worlds there.

Of course, it isn’t entirely accurate that science fiction writers had “all” been planet chauvinists—Heinlein had explored similar concepts in such stories as “Waldo” and “Delilah and the Space Rigger,” and I’m sure there are other examples. (Asimov had even discussed the idea ten years earlier in the essay “There’s No Place Like Spome,” which he later described as “an anticipation, in a fumbling sort of way, of Gerard O’Neill’s concept of space settlements.”) And while there’s no doubt that O’Neill’s notion of a permanent settlement in space was genuinely revolutionary, there’s also a sense in which Asimov was the last writer you’d expect to come up with it. Asimov was a notorious acrophobe and claustrophile who hated flying and suffered a panic attack on the roller coaster at Coney Island. When he was younger, he loved enclosed spaces, like the kitchen at the back of his father’s candy store, and he daydreamed about running a newsstand on the subway, where he could put up the shutters and just read magazines. Years later, he refused to go out onto the balcony of his apartment, which overlooked Central Park, because of his fear of heights, and he was always happiest while typing away in his office. And his personal preferences were visible in the stories that he wrote. The theme of an enclosed or underground city appears in such works as The Caves of Steel, while The Naked Sun is basically a novel about agoraphobia. In his interview with Hayes, Asimov speculates that space colonies will attract people looking for an escape from earth: “Once you do realize that you have a kind of life there which represents a security and a pleasantness that you no longer have on earth, the difficulty will be not in getting people to go but in making them line up in orderly fashion.” But he never would have gone there voluntarily.

Yet this is a revealing point in itself. Unlike Heinlein, who dreamed of buying a commercial ticket to the moon, Asimov never wanted to go into space. He just wanted to write about it, and he was better—or at least more successful—at this than just about anybody else. (In his memoirs, Asimov recalls taping the show with O’Neill on January 7, 1975, adding that he was “a little restless” because he was worried about being late for dinner with Lester and Judy-Lynn del Rey. After he was done, he hailed a cab. On the road, as they were making the usual small talk, the driver revealed that he had once wanted to be a writer. Asimov, who hadn’t mentioned his name, told him consolingly that no one could make a living as a writer anyway. The driver responded: “Isaac Asimov does.”) And the comparison with Bezos is an enlightening one. Bezos obviously built his career on books, and he was a voracious reader of science fiction in his youth, as Levy notes: “[Bezos’s] grandfather—a former top Defense Department official—introduced him to the extensive collection of science fiction at the town library. He devoured the books, gravitating especially to Robert Heinlein and other classic writers who explored the cosmos in their tales.” With his unimaginable wealth, Bezos is in a position remarkably close to that of the protagonist in such stories, with the ability to “painlessly siphon off a billion dollars every year to fund his boyhood dream.” But the ideas that he has the money to put into practice were originated by writers and other thinkers whose minds went in unusual directions precisely because they didn’t have the resources, financial or otherwise, to do it personally. Vast wealth can generate a chauvinism of its own, and the really innovative ideas tend to come from unexpected places. This was true of Asimov, as well as O’Neill, whose work was affiliated in fascinating ways with the world of Stewart Brand and the Whole Earth Catalog. I’ll have more to say about O’Neill—and Bezos—tomorrow.

The twilight of the skeptics

A few years ago, I was working on an idea for a story—still unrealized—that required a sidelong look at the problem of free will. As part of my research, I picked up a copy of the slim book of the same name by the prominent skeptic Sam Harris. At the time, I don’t think I’d even heard of Harris, and I was expecting little more than a readable overview. What I remember about it the most, though, is how it began. After a short opening paragraph about the importance of his subject, Harris writes:

In the early morning of July 23, 2007, Steven Hayes and Joshua Komisarjevsky, two career criminals, arrived at the home of Dr. William and Jennifer Petit in Cheshire, a quiet town in central Connecticut. They found Dr. Petit asleep on a sofa in the sunroom. According to his taped confession, Komisarjevsky stood over the sleeping man for some minutes, hesitating, before striking him in the head with a baseball bat. He claimed that his victim’s screams then triggered something within him, and he bludgeoned Petit with all his strength until he fell silent.

Harris goes on to provide a graphically detailed account, which I’m not going to retype here, of the sexual assault and murder of Petit’s wife and two daughters. Two full pages are devoted to it, in a book that is less than a hundred pages long, and only at the end does Harris come to the point: “As sickening as I find their behavior, I have to admit that if I were to trade places with one of these men, atom for atom, I would be him: there is no extra part of me that could decide to see the world differently or resist the impulse to victimize other people.”

I see what Harris is trying to say here, and I don’t think that he’s even wrong. Yet his choice of example—a horrifying crime that was less than five years old when he wrote Free Will, which the surviving victim, William Petit, might well have read—bothered me a lot. It struck me as a lapse of judgment, or at least of good taste, and it remains the one thing that I really remember about the book. And I’m reminded of it now only because of an excellent article in Wired, “Sam Harris and the Myth of Perfectly Rational Thought,” that neatly lays out many of my old misgivings. The author, Robert Wright, documents multiple examples of his subject falling short of his professed standards, but he focuses on an exchange with the journalist Ezra Klein, whom Harris accused of engaging in “a really indissoluble kind of tribalism, which I keep calling identity politics.” When Klein pointed out that this might be a form of tribal thinking in itself, Harris replied: “I know I’m not thinking tribally.” Wright continues:

Reflecting on his debate with Klein, Harris said that his own followers care “massively about following the logic of a conversation” and probe his arguments for signs of weakness, whereas Klein’s followers have more primitive concerns: “Are you making political points that are massaging the outraged parts of our brains? Do you have your hands on our amygdala and are you pushing the right buttons?”

Just a few years earlier, however, Harris didn’t have any qualms about pushing the reader’s buttons by devoting the first two pages of Free Will to an account of a recent, real-life home invasion that involved unspeakable acts of sexual violence against women—when literally any other example of human behavior, good or bad, would have served his purposes equally well.

Harris denies the existence of free will entirely, so perhaps he would argue that he didn’t have a choice when he wrote those words. More likely, he would say that the use of this particular example was entirely deliberate, because he was trying to make a point by citing the most extreme case of deviant behavior that he could imagine. Yet it’s the placement, as much as the content, that gives me pause. Harris puts it right up front, at the place where most books try for a narrative or argumentative hook, which suggests two possible motivations. One is that he saw it as a great “grabber” opening, and he opportunistically used it for no other reason than to seize the reader’s attention, only to never mention it again. This would be bad enough, particularly for a man who claims to disdain anything so undignified as an appeal to the amygdala, and it strikes me as slightly unscrupulous, in that it literally indicates a lack of scruples. (I’ll have more to say about this word later.) Yet there’s an even more troubling possibility that didn’t occur to me at the time. Harris’s exploitation of these murders, and the unceremonious way in which he moves on, is a signal to the reader. This is the kind of book that you’re getting, it tells us, and if you can’t handle it, you should close it now and walk away. In itself, this amounts to false advertising—the rest of Free Will isn’t much like this at all, even if Harris is implicitly playing to the sort of person who hopes that it might be. More to the point, the callousness of the example probably repelled many readers who didn’t appreciate the rhetorical deployment, without warning, of a recent rape and multiple murder. I was one of them. But I also suspect that many women who picked up the book were just as repulsed. And Harris doesn’t seem to have been overly concerned about this possibility.

Yet maybe he should have been. Wright’s article in Wired includes a discussion of the allegations against the physicist and science writer Lawrence Krauss, who has exhibited a pattern of sexual misconduct convincingly documented by an article in Buzzfeed. Krauss is a prominent member of the skeptical community, as well as friendly toward Harris, who stated after the piece appeared: “Buzzfeed is on the continuum of journalistic integrity and unscrupulousness somewhere toward the unscrupulous side.” Whether or not the site is any less scrupulous than a writer who would use the sexual assault and murder of three women as the opening hook—and nothing else—in his little philosophy book is possibly beside the point. More relevant is the fact that, as Wright puts it, Harris’s characterization of the story’s source “isn’t true in any relevant sense.” Buzzfeed does real journalism, and the article about Krauss is as thoroughly reported and sourced as the most reputable investigations into any number of other public figures. With his blanket dismissal, Harris doesn’t sound much like a man who cares “massively” about logic or rationality. (Neither did Krauss, for that matter, when he said last year in the face of all evidence: “Science itself overcomes misogyny and prejudice and bias. It’s built in.”) But he has good reason to be uneasy. The article in Buzzfeed isn’t just about Krauss, but about the culture of behavior within the skeptical community itself:

What’s particularly infuriating, said Lydia Allan, the former cohost of the Dogma Debate podcast, is when male skeptics ask how they could draw more women into their circles. “I don’t know, maybe not put your hands all over us? That might work,” she said sarcastically. “How about you believe us when we tell you that shit happens to us?”

Having just read the first two pages of Free Will again, I can think of another way, too. But that’s probably just my amygdala talking.

Written by nevalalee

May 21, 2018 at 9:38 am

Crossing the Rhine

[Image: Zener cards]

Note: I’m out of town today for the Grappling with the Futures symposium at Harvard and Boston University, so I’m republishing a piece from earlier in this blog’s run. This post originally appeared, in a slightly different form, on March 1, 2017.

Two groups of very smart people are looking at the exact same data and coming to wildly different conclusions. Science hates that.

—Katie M. Palmer, Wired

In the early thirties, the parapsychologist J.B. Rhine conducted a series of experiments at Duke University to investigate the existence of extrasensory perception. His most famous test involved a deck of Zener cards, printed variously with the images of a star, a square, three waves, a circle, or a cross, in which subjects were invited to guess the symbol on a card drawn at random. The participants in the study, most of whom were college students, included the young John W. Campbell, who displayed no discernible psychic ability. At least two, however, Adam Linzmayer and Hubert Pearce, were believed by Rhine to have consistently named the correct cards at a higher rate than chance alone would predict. Rhine wrote up his findings in a book titled Extrasensory Perception, which was published in 1934. I’m not going to try to evaluate its merits here—but I do want to note that attempts to replicate his work were made almost at once, and they failed to reproduce his results. Within two years, W.S. Cox of Princeton University had conducted a similar run of experiments, of which he concluded: “There is no evidence of extrasensory perception either in the ‘average man’ or of the group investigated or in any particular individual of that group. The discrepancy between these results and those obtained by Rhine is due either to uncontrollable factors in experimental procedure or to the difference in the subjects.” By 1938, four other studies had taken place, to similar effect. Rhine’s results were variously attributed to methodological flaws, statistical misinterpretation, sensory leakage, or outright cheating, and in consequence, fairly or not, parapsychological research was all but banished from academic circles.
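The underlying statistics are easy to check from first principles. With five equally likely symbols, chance predicts one hit in five, or five hits per twenty-five-card run, and the question is how improbable a given surplus of hits would be if nothing but luck were involved. Here is a minimal sketch of that arithmetic, using hypothetical hit counts rather than Linzmayer’s or Pearce’s actual scores:

```python
from math import comb

def p_at_least(hits: int, trials: int, p: float = 1 / 5) -> float:
    """Exact binomial tail: the probability of scoring `hits` or more
    correct guesses by pure chance out of `trials` Zener card draws."""
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(hits, trials + 1))

# Chance alone predicts 5 hits in a 25-card run.
print(p_at_least(7, 25))   # a mildly lucky run: ~0.22, unremarkable
print(p_at_least(10, 25))  # double the chance rate: ~0.017, suggestive
print(p_at_least(15, 25))  # ~1.3e-5, hard to explain as luck alone
```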

Decades later, another study was conducted, and its initial reception was very different. Its subject was ego depletion, or the notion that willpower draws on a finite reservoir of internal resources that can be reduced with overuse. In its most famous demonstration, the psychologists Roy Baumeister and Dianne Tice of Case Western Reserve University baked chocolate chip cookies, set them on a plate next to a bowl of radishes, and brought a series of participants into the room. They were all told to wait, but some were allowed to eat the cookies, while the others were instructed to snack only on the radishes. Then they were all given the same puzzle to complete, although they weren’t told that it was impossible to solve. According to the study, students who had been asked to stick to the radishes spent an average of just eight minutes on the puzzle, while those who had been allowed to eat the cookies spent nineteen minutes. The researchers concluded that our willpower is a limited quantity, and it can even be exhausted, like a muscle. Their work was enormously influential, and dozens of subsequent studies seemed to confirm it. In 2010, however, an analysis of published papers on the subject was unable to find any ego depletion effect, and last year, it got even worse—an attempt to replicate the core findings, led by the psychologist Martin Hagger, found zero evidence to support its existence. And this is just the most notable instance of what has been called a replication crisis in the sciences, particularly in psychology. One ambitious attempt to duplicate the results of such studies, the Reproducibility Project, has found that only about a third can be reproduced at all.

[Image: J.B. Rhine]

But let’s consider the timelines involved. With Rhine, it took only two years before an attempt was made to duplicate his work, and two more years for the consensus in the field to turn against it decisively. In the case of ego depletion, twelve years passed before any questions were raised, and close to two decades before the first comprehensive effort to replicate it. And you don’t need to be a psychologist to understand why. Rhine’s results cut so radically against what was known about the brain—and the physical universe—that accepting them would have required a drastic overhaul of multiple disciplines. Not surprisingly, they inspired immediate skepticism, and they were subjected to intense scrutiny right away. Ego depletion, by contrast, was an elegant theory that seemed to confirm ordinary common sense. It came across as an experimental verification of something that we all know instinctively, and it was widely accepted almost at once. Many successful studies also followed in its wake, in large part because experiments that seemed to confirm it were more likely to be submitted for publication, while those that failed to produce interesting results simply disappeared. (When it came to Rhine, a negative result wouldn’t be discarded, but embraced as a sign that the system was working as intended.) Left to itself, the lag time between a study and any serious attempt to reproduce it seems to be much longer when the answer is intuitively acceptable. As the Reproducibility Project has shown, however, when we dispassionately pull studies from psychological journals and try to replicate them without regard to their inherent interest or plausibility, the results are often no better than they were with Rhine. It can leave psychologists sounding a lot like parapsychologists who have suffered a crisis of faith. As the psychologist Michael Inzlicht wrote: “Have I been chasing puffs of smoke for all these years?”
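The publication filter described above is easy to model, too. In the toy simulation below, every lab studies an effect that doesn’t exist, but only runs that happen to produce a sizable difference get written up, so the published record converges on a consistent effect anyway. All the numbers, including the publication threshold, are invented for illustration:

```python
import random

random.seed(1975)

def run_study(n: int = 30) -> float:
    """One two-group study of a nonexistent effect: both groups are
    drawn from the same distribution, so any observed difference is
    pure sampling noise."""
    control = [random.gauss(0, 1) for _ in range(n)]
    treated = [random.gauss(0, 1) for _ in range(n)]
    return sum(treated) / n - sum(control) / n

effects = [run_study() for _ in range(1000)]
published = [e for e in effects if e > 0.5]  # only "interesting" results get submitted

print(f"mean effect across all 1000 studies: {sum(effects) / len(effects):+.3f}")
if published:  # the rest stay in the file drawer
    print(f"mean effect across {len(published)} published studies: "
          f"{sum(published) / len(published):+.3f}")
```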

I’m not saying that Rhine’s work didn’t deserve to be scrutinized closely, because it did. And I’m also not trying to argue that social psychology is a kind of pseudoscience. But I think it’s worth considering whether psychology and parapsychology might have more in common than we’d like to believe. This isn’t meant to be a knock against either one, but an attempt to nudge them a little closer together. As Alex Holcombe of the University of Sydney put it: “The more optimistic interpretation of failures to replicate is that many of the results are true, but human behavior is so variable that the original researchers had to get lucky to find the result.” Even Martin Hagger says much the same thing: “I think ego-depletion effect is probably real, but current methods and measures are problematic and make it difficult to find.” The italics, as usual, are mine. Replace “human behavior” and “ego depletion” with “extrasensory perception,” and you end up with a concise version of the most widely cited explanation for why psychic abilities resist scientific verification, which is that these phenomena are real, but difficult to reproduce. You could call this wishful thinking, and in most cases, it probably is. But it also raises the question of whether meaningful phenomena exist that can’t be reproduced in a laboratory setting. Regardless of where you come down on the issue, the answer shouldn’t be obvious. Intuition, for instance, is often described as a real phenomenon that can’t be quantified or replicated, and whether or not you buy into it, it’s worth taking seriously. A kind of collective intuition—or a hunch—is often what determines what results the scientific community is likely to accept. And the fact that this intuition is so frequently wrong means that we need to come to terms with it, even if it isn’t in a lab.

The end of applause

On July 8, 1962, at a performance of Bach’s The Art of Fugue, the pianist Glenn Gould asked his audience not to applaud at the end. Most of his listeners complied, although the request clearly made them uneasy. A few months earlier, Gould had published an essay, “Let’s Ban Applause!”, in which he presented the case against the convention. (I owe my discovery of this piece to an excellent episode of my wife’s podcast, Rework, which you should check out if you haven’t done so already.) Gould wrote:

I have come to the conclusion, most seriously, that the most efficacious step which could be taken in our culture today would be the gradual but total elimination of audience response…I believe that the justification of art is the internal combustion it ignites in the hearts of men and not its shallow, externalized, public manifestations. The purpose of art is not the release of a momentary ejection of adrenaline but is, rather, the gradual, lifelong construction of a state of wonder and serenity.

Later that year, Gould expanded on his position in an interview with The Globe and Mail. When asked why he disliked applause, he replied:

I am rebellious about the institution of the concert—of the mob, which sits in judgment. Some artists seem to place too much reliance on the sweaty mass response of the moment. If we must have a public response at all, I feel it should be much less savage than it is today…Applause tells me nothing. Like any other artist, I can always pull off a few musical tricks at the end of a performance and the decibel count will automatically go up ten points.

The last line is the one that interests me the most. Gould, I think, was skeptical of applause largely because it reminded him of his own worst instincts as a performer—the part that would fall back on a few technical tricks to milk a more enthusiastic response from his audience in the moment. The funny thing about social media, of course, is that it places all of us in this position. If you’ve spent any time on Twitter or Facebook, you know that some messages will generate an enthusiastic response from followers, while others will go over like a lead balloon, and we quickly learn to intuitively sense the difference. Even if it isn’t conscious, it quietly affects the content that we decide to put out there in the world, as well as the opinions and the sides of ourselves that we reveal to others. And while this might seem like a small matter, it had a real impact on our politics, which became increasingly driven by ideas that thrived in certain corners of the social marketplace, where they inspired the “momentary ejection of adrenaline” that Gould decried. Last month, Antonio García Martínez, a former Facebook employee, wrote in Wired of the logistics of the site’s ad auction system:

During the run-up to the election, the Trump and Clinton campaigns bid ruthlessly for the same online real estate in front of the same swing-state voters. But because Trump used provocative content to stoke social media buzz, and he was better able to drive likes, comments, and shares than Clinton, his bids received a boost from Facebook’s click model, effectively winning him more media for less money. In essence, Clinton was paying Manhattan prices for the square footage on your smartphone’s screen, while Trump was paying Detroit prices.
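What García Martínez is describing is an engagement-weighted auction: ads are ranked not by the cash bid alone but by the bid multiplied by the platform’s predicted probability of a reaction, so content that reliably provokes likes, comments, and shares buys the same impression at a discount. Here is a deliberately simplified sketch of that ranking rule, with invented numbers; it isn’t Facebook’s actual model:

```python
from dataclasses import dataclass

@dataclass
class Ad:
    campaign: str
    bid: float         # dollars offered for the impression
    engagement: float  # platform's predicted chance of a like, comment, or share

def effective_bid(ad: Ad) -> float:
    """The quantity the auction actually ranks on: cash bid
    weighted by predicted engagement."""
    return ad.bid * ad.engagement

ads = [
    Ad("measured campaign", bid=10.0, engagement=0.02),    # effective bid 0.20
    Ad("provocative campaign", bid=4.0, engagement=0.08),  # effective bid 0.32
]

winner = max(ads, key=effective_bid)
print(f"{winner.campaign} wins the impression")
```

In this toy auction, the provocative campaign wins the impression while offering less than half the cash, which is the “Manhattan prices versus Detroit prices” effect in miniature.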

And in the aftermath, Trump’s attitudes toward important issues often seem driven by the response that he gets on Twitter, which leads to a cycle in which he’s encouraged to become even more like what he already is. (In the past, I’ve drawn a comparison between his evolution and that of L. Ron Hubbard, and I think that it still holds up.) In many ways, Trump is the greatest embodiment so far of the tendency that Gould diagnosed half a century ago, in which the performer is driven to change himself in response to the collective feedback that he receives from applause. It’s no accident that Trump only seems truly alive on camera, in front of a cheering crowd, or while tweeting, or that he displays such an obsession with polls and television ratings. Applause may have told Gould nothing, but it tells Trump everything. Social media was a pivotal factor in his victory, but only at the cost of transforming him into a monster that his younger self—as craven and superficial as he was—might not have recognized. And it worked horrifyingly well. In an interview in January, Trump admonished reporters: “The fact is, you people won’t say this, but I’ll say it: I was a much better candidate than [Clinton]. You always say she was a bad candidate; you never say I was a good candidate. I was one of the greatest candidates. Someday you’re going to say that.” Well, I’m ready to say it now. Before the election, I argued in a blog post that Trump’s candidacy would establish the baseline of the popular vote that could be won by the worst possible campaign, and by any conventional measure, I was right. Like everyone else, though, I missed the larger point. Even as we mocked Trump for boasting about the attendance at his rallies, he was listening to the applause, and he evolved in real time into something that would raise the decibel count to shattering levels.

It almost makes me wish that we had actually banned applause back in the sixties, at least for the sake of a thought experiment. In his essay, Gould sketched a picture of how a concert might conclude under his new model:

In the early stages…the performers may feel a moment of unaccustomed tension at the conclusion of their selection, when they must withdraw to the wings unescorted by the homage of their auditors. For orchestral players this should provide no hazard: a platoon of cellists smartly goose-stepping offstage is an inspiring sight. For the solo pianist, however, I would suggest a sort of lazy-Susan device which would transport him and his instrument to the wings without his having to rise. This would encourage performance of those sonatas which end on a note of serene reminiscence, and in which the lazy Susan could be set gently in motion some moments before the conclusion.

It’s hard to imagine Trump giving a speech in such a situation. If it weren’t for the rallies, he never would have run for president at all, and much of his administration has consisted of his wistful efforts to recapture that glorious moment. (The infamous meeting in which he was showered with praise by his staff members—half a dozen of whom are now gone—feels now like an attempt to recreate that dynamic in a smaller room, and his recent request for a military parade channels that impulse into an even more troubling direction.) Instead of banning applause, of course, we did exactly the opposite. We enabled it everywhere—and then we upvoted its ultimate creation into the White House.

Written by nevalalee

March 16, 2018 at 9:02 am

The closed circle

In his wonderful book The Nature of Order, the architect Christopher Alexander lists fifteen properties that characterize places and buildings that feel alive. (“Life” itself is a difficult concept to define, but we can come close to understanding it by comparing any two objects and asking the one question that Alexander identifies as essential: “Which of the two is a better picture of my self?”) These properties include such fundamentals of design as “Levels of Scale,” “Local Symmetries,” and “Positive Space,” and elements that are a bit trickier to pin down, including “Echoes,” “The Void,” and “Simplicity and Inner Calm.” But the final property, and the one that Alexander suggests is the most important, bears the slightly clunky name of “Not-Separateness.” He points to the Tower of the Wild Goose in China as an example of this quality at its best, and he says of its absence:

When a thing lacks life, is not whole, we experience it as being separate from the world and from itself…In my experiments with shapes and buildings, I have discovered that the other fourteen ways in which centers come to life will make a center which is compact, beautiful, determined, subtle—but which, without this fifteenth property, can still often somehow be strangely separate, cut off from what lies around it, lonely, awkward in its loneliness, too brittle, too sharp, perhaps too well delineated—above all, too egocentric, because it shouts, “Look at me, look at me, look how beautiful I am.”

The fact that he refers to this property as “Not-Separateness,” rather than the more obvious “Connectedness,” indicates that he sees it as a reaction against the marked tendency of architects and planners to strive for distinctiveness and separation. “Those unusual things which have the power to heal…are never like this,” Alexander explains. “With them, usually, you cannot really tell where one thing breaks off and the next begins, because the thing is smokily drawn into the world around it, and softly draws this world into itself.” It’s a characteristic that has little to do with the outsized personalities who tend to be drawn to huge architectural projects, and Alexander firmly skewers the motivations behind it:

This property comes about, above all, from an attitude. If you believe that the thing you are making is self-sufficient, if you are trying to show how clever you are, to make something that asserts its beauty, you will fall into the error of losing, failing, not-separateness. The correct connection to the world will only be made if you are conscious, willing, that the thing you make be indistinguishable from its surroundings; that, truly, you cannot tell where one ends and the next begins, and you do not even want to be able to do so.

This doesn’t happen by accident, particularly when millions of dollars and correspondingly inflated egos are involved. (The most blatant way of separating a building from its surroundings is to put your name on it.) And because it explicitly asks the designer to leave his or her cleverness behind, it amounts to the ultimate test of the subordination of the self to the whole. You can do great work and still falter at the end, precisely because of the strengths that allowed you to get that far in the first place.

It’s hard for me to read these words without thinking of Apple’s new headquarters in Cupertino, variously known as the Ring and the Mothership, which is scheduled to open later this year. A cover story in Wired by Steven Levy describes it in enraptured terms, in which you can practically hear Also Sprach Zarathustra:

As we emerge into the light, the Ring comes into view. As the Jeep orbits it, the sun glistens off the building’s curved glass surface. The “canopies”—white fins that protrude from the glass at every floor—give it an exotic, retro-future feel, evoking illustrations from science fiction pulp magazines of the 1950s. Along the inner border of the Ring, there is a walkway where one can stroll the three-quarter-mile perimeter of the building unimpeded. It’s a statement of openness, of free movement, that one might not have associated with Apple. And that’s part of the point.

There’s a lot to unpack here, from the reference to pulp science fiction to the notion of “orbiting” the building to the claim that the result is “a statement of openness.” As for the contrary view, here’s what another article in Wired, this one by Adam Rogers, had to say about it a month later:

You can’t understand a building without looking at what’s around it—its site, as the architects say. From that angle, Apple’s new [headquarters] is a retrograde, literally inward-looking building with contempt for the city where it lives and cities in general. People rightly credit Apple for defining the look and feel of the future; its computers and phones seem like science fiction. But by building a mega-headquarters straight out of the middle of the last century, Apple has exacerbated the already serious problems endemic to twenty-first-century suburbs like Cupertino—transportation, housing, and economics. Apple Park is an anachronism wrapped in glass, tucked into a neighborhood.

Without delving into the economic and social context, which a recent article in the New York Times explores from another perspective, I think it’s fair to say that Apple Park is an utter failure from the point of view of “Not-Separateness.” But this isn’t surprising. Employees may just be moving in now, but its public debut dates back to June 7, 2011, when Steve Jobs himself pitched it to the Cupertino City Council. Jobs was obsessed by edges and boundaries, both physical and virtual, insisting that the NeXT computer be a perfect cube and introducing millions of consumers to the word “bezel.” Compare this to what Alexander writes of boundaries in architecture:

In things which have not-separateness, there is often a fragmented boundary, an incomplete edge, which destroys the hard line…Often, too, there is a gradient of the boundary, a soft edge caused by a gradient in which scale decreases…so that at the edge it seems to melt indiscernibly into the next thing…Finally, the actual boundary is sometimes rather careless, deliberately placed to avoid any simple complete sharp cutting off of the thing from its surroundings—a randomness in the actual boundary line which allows the thing to be connected to the world.

The italics are mine, because it’s hard to imagine anything less like Jobs or the company he created. Apple Park is being positioned as Jobs’s posthumous masterpiece, which reminds me of the alternate wording to Alexander’s one question: “Which one of these two things would I prefer to become by the day of my death?” (If the building is a monument to Jobs, it’s also a memorial to the ways in which he shaded imperceptibly into Trump, who also has a fixation with borders.) It’s the architectural equivalent of the design philosophy that led Apple to glue in its batteries and made it impossible to upgrade the perfectly cylindrical Mac Pro. Apple has always loved the idea of a closed system, and now its employees get to work in one.

Written by nevalalee

July 5, 2017 at 8:59 am

The Watergate Fix

[Image: Gore Vidal]

“I must get my Watergate fix every morning,” Gore Vidal famously said to Dick Cavett in the final days of the Nixon administration. In his memoir In Joy Still Felt, Isaac Asimov writes: “I knew exactly what he meant.” He elaborates:

I knew we had [Nixon]…From that point on, I took to combing the Times from cover to cover every morning, skipping only the column by Nixon’s minion William Safire. I sometimes bought the New York Post so I could read additional commentary. I listened to every news report on the radio.

I read and listened with greater attention and fascination than in even the darkest days of World War II. Thus my diary entry for May 11, 1973, says, “Up at six to finger-lick the day’s news on Watergate.”

I could find no one else as hooked on Watergate as I was, except for Judy-Lynn [del Rey]. Almost every day, she called me or I called her and we would talk about the day’s developments in Watergate. We weren’t very coherent and mostly we laughed hysterically.

Now skip ahead four decades, and here’s what Wired reporter Marcus Wohlsen wrote earlier this week of a “middle-age software developer” with a similar obsession:

Evan is a poll obsessive, FiveThirtyEight strain—a subspecies I recognize because I’m one of them, too. When he wakes up in the morning, he doesn’t shower or eat breakfast before checking the Nate Silver-founded site’s presidential election forecast (sounds about right). He keeps a tab open to FiveThirtyEight’s latest poll list; a new poll means new odds in the forecast (yup). He gets push alerts on his phone when the forecast changes (check). He follows the 538 Forecast Bot, a Twitter account that tweets every time the forecast changes (same). In all, Evan says he checks in hourly, at least while he’s awake (I plead the Fifth).

Wohlsen notes that the design of FiveThirtyEight encourages borderline addictive behavior: its readers are like the lab rats who repeatedly push a button to send a quick, pleasurable jolt coursing through their nervous systems. The difference is that polls and political news, no matter how favorable to one side, deliver a more complicated mix of emotions—hope, uncertainty, apprehension. But as long as the numbers are trending in the right direction, we can’t get enough of them.

[Image: Princeton Election Consortium]

And it’s striking to see how little the situation has changed since the seventies, apart from a few advances in technology. Asimov had to buy two physical newspapers to get his fix, while we can click effortlessly from one source to another. On the weekend that the Access Hollywood recording was released, I found myself cycling nonstop between the New York Times, Politico, Talking Points Memo, the Washington Post—where I rapidly used up my free articles for the month—and other political sites, like Daily Kos, that I hadn’t visited in years. (I don’t think I’ve been as hooked on political analysis since George W. Bush nominated Harriet Miers to the Supreme Court, which still stands out as a golden age in my memories.) Like Asimov, who skipped William Safire’s column, I also know what to avoid. Instead of calling a friend to talk about the day’s developments, I read blog posts and comment threads. Not surprisingly, the time I spend on all this is inversely correlated with the trajectory of the Trump campaign. During a rough stretch in September, I deleted FiveThirtyEight from my bookmarks because it was causing me more anxiety than it was worth. I still haven’t put it back, perhaps on the assumption that if I have to type it into my address bar, rather than clicking on a shortcut, I won’t go back as often. In practice, I’ll often use a quick spin through FiveThirtyEight, Politico, and Talking Points Memo as my reward for getting through half an hour of work, which is the only positive behavior on my part to come out of this entire election.

Of course, there are big differences between Vidal and Asimov’s Watergate fix and its equivalent today. By the time Haldeman and Ehrlichman resigned, Nixon’s goose was pretty much cooked, and someone like Asimov could take unmixed pleasure in his comeuppance. Trump, by contrast, could still get elected. More surprising is the fact that the overall arc of this presidential campaign has been mostly unresponsive to the small daily movements that analytics are meant to track. As Sam Wang of the Princeton Election Consortium recently pointed out, this election has actually been less volatile than usual, and its shape has remained essentially unchanged for months, with Clinton holding a national lead of between two and six points over Trump. It seems noisy, but only because every move is subjected to such scrutiny. In other words, our obsession with polls creates the psychological situation that we’re presumably trying to avoid: we’re subjectively experiencing this race as more volatile than it really is. Our polling fix isn’t rational, at least not from the point of view of minimizing anxiety. As Wohlsen says in Wired, it’s more like a species of magical thinking, in which we place our trust in a certain kind of magician—a data wizard—to see us through an election in which the facts have been treated with disdain. At my lowest moments last month, I would console myself with the thought of Elan Kriegel, Clinton’s director of analytics. The details didn’t matter; it was enough that he existed, and that I could halfway believe that he had access to magic that allowed him to exercise some degree of control over an inherently uncontrollable future. Or as the Wired headline put it: “I just want Nate Silver to tell me it’s all going to be fine.”
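Wang’s point about volatility is partly a matter of sampling error: even a race whose true margin never budges will produce individual polls that swing by several points, and checking each one hourly makes that bounce feel like movement. A quick simulation sketch, assuming a frozen four-point national lead and a typical sample size, with both numbers illustrative:

```python
import random

random.seed(538)

TRUE_SHARE = 0.52  # assume a frozen 52-48 two-candidate race: a +4.0 margin
SAMPLE_SIZE = 900  # a typical national poll

def one_poll() -> float:
    """Poll the unchanging race once; return the observed margin in points."""
    hits = sum(random.random() < TRUE_SHARE for _ in range(SAMPLE_SIZE))
    return 100 * (2 * hits / SAMPLE_SIZE - 1)

print([f"{one_poll():+.1f}" for _ in range(15)])
# The true margin is +4.0 every single time, but the readings scatter
# across six points or more from sampling error alone.
```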
