Alec Nevala-Lee

Thoughts on art, creativity, and the writing life.

Posts Tagged ‘New York Times’

The purity test

Earlier this week, The New York Times Magazine published a profile by Taffy Brodesser-Akner of the novelist Jonathan Franzen. It’s full of fascinating moments, including a remarkable one that seems to have happened entirely by accident—the reporter was in the room when Franzen received a pair of phone calls, including one from Daniel Craig, to inform him that production had halted on the television adaptation of his novel Purity. Brodesser-Akner writes: “Franzen sat down and blinked a few times.” That sounds about right to me. And the paragraph that follows gets at something crucial about the writing life, in which the necessity of solitary work clashes with the pressure to put its fruits at the mercy of the market:

He should have known. He should have known that the bigger the production—the more people you involve, the more hands the thing goes through—the more likely that it will never see the light of day resembling the thing you set out to make in the first place. That’s the real problem with adaptation, even once you decide you’re all in. It just involves too many people. When he writes a book, he makes sure it’s intact from his original vision of it. He sends it to his editor, and he either makes the changes that are suggested or he doesn’t. The thing that we then see on shelves is exactly the thing he set out to make. That might be the only way to do this. Yes, writing a novel—you alone in a room with your own thoughts—might be the only way to get a maximal kind of satisfaction from your creative efforts. All the other ways can break your heart.

To be fair, Franzen’s status is an unusual one, and even successful novelists aren’t always in the position of taking for granted the publication of “exactly the thing he set out to make.” (In practice, it’s close to all or nothing. In my experience, the novel that you see on store shelves mostly reflects what the writer wanted, while the ones in which the vision clashes with those of other stakeholders in the process generally don’t get published at all.) And I don’t think I’m alone when I say that some of the most interesting details that Brodesser-Akner provides are financial. A certain decorum still surrounds the reporting of sales figures in the literary world, so there’s a real frisson in seeing them laid out like this:

And, well, sales of his novels have decreased since The Corrections was published in 2001. That book, about a Midwestern family enduring personal crises, has sold 1.6 million copies to date. Freedom, which was called a “masterpiece” in the first paragraph of its New York Times review, has sold 1.15 million since it was published in 2010. And 2015’s Purity, his novel about a young woman’s search for her father and the story of that father and the people he knew, has sold only 255,476.

For most writers, selling a quarter of a million copies of any book would exceed their wildest dreams. Having written one of the greatest outliers of the last twenty years, Franzen is simply reverting to a very exalted mean. But there’s still a lot to unpack here.

For one thing, while Purity was a commercial disappointment, it doesn’t seem to have been an unambiguous disaster. According to Publishers Weekly, its first printing—which is where you can see a publisher calibrating its expectations—came to around 350,000 copies, which wasn’t even the largest print run for that month. (That honor went to David Lagercrantz’s The Girl in the Spider’s Web, which had half a million copies, while a new novel by the likes of John Grisham can run to over a million.) I don’t know what Franzen was paid in advance, but the loss must have fallen well short of the one on a book like Tom Wolfe’s Back to Blood, for which Wolfe received $7 million and sold 62,000 copies, meaning that his publisher paid over a hundred dollars for every copy that someone actually bought. And any financial hit would have been modest compared to the prestige of keeping a major novelist on one’s list, which is unquantifiable, but no less real. If there’s one thing that I’ve learned about publishing over the last decade, it’s that it’s a lot like the movie industry, in which apparently inexplicable commercial and marketing decisions are easier to understand when you consider their true audience. In many cases, when they buy or pass on a book, editors aren’t making decisions for readers, but for other editors, and they’re very conscious of what everyone in their imprint thinks. A readership is an abstraction, except when quantified in sales, but editors have their judgment calls reflected back on them by the people they see every day. Giving up a writer like Franzen might make financial sense, but it would be devastating to Farrar, Straus and Giroux, to say nothing of the relationship that can grow between an editor and a prized author over time.

You find much the same dynamic in Hollywood, in which some decisions are utterly inexplicable until you see them as a manifestation of office politics. In theory, a film is made for moviegoers, but the reactions of the producer down the hall are far more concrete. The difference between publishing and the movies is that the latter publish their box office returns, often in real time, while book sales remain opaque even at the highest level. And it’s interesting to wonder how both industries might differ if their approaches were more similar. After years of work, the success of a movie can be determined by the Saturday morning after its release, while a book usually has a little more time. (The exception is when a highly anticipated title doesn’t make it onto the New York Times bestseller list, or falls off it with alarming speed. The list doesn’t disclose any sales figures, which means that success is relative, not absolute—and which may be a small part of the reason why writers seldom wish one another well.) In the absence of hard sales, writers establish the pecking order with awards, reviews, and the other signifiers that have allowed Franzen to assume what Brodesser-Akner calls the mantle of “the White Male Great American Literary Novelist.” But the real takeaway is how narrow a slice of the world this reflects. Even if we place the most generous interpretation imaginable onto Franzen’s numbers, it’s likely that well under one percent of the American population has bought or read any of his books. You’ll find roughly the same number on any given weeknight playing HQ Trivia. If we acknowledged this more widely, it might free writers to return to their proper cultural position, in which the difference between a bestseller and a disappointment fades rightly into irrelevance. Who knows? They might even be happier.

Written by nevalalee

June 28, 2018 at 7:49 am

The Big One

In a heartfelt appreciation of the novelist Philip Roth, who died earlier this week, the New York Times critic Dwight Garner describes him as “the last front-rank survivor of a generation of fecund and authoritative and, yes, white and male novelists…[that] included John Updike, Norman Mailer and Saul Bellow.” These four names seem fated to be linked together for as long as any of them is still read and remembered, and they’ve played varying roles in my own life. I was drawn first to Mailer, who for much of my adolescence was my ideal of what a writer should be, less because of his actual fiction than thanks to my repeated readings of the juiciest parts of Peter Manso’s oral biography. (If you squint hard and think generously, you can even see Mailer’s influence in the way I’ve tried to move between fiction and nonfiction, although in both cases it was more a question of survival.) Updike, my favorite, was a writer I discovered after college. I agree with Garner that he probably had the most “sheer talent” of them all, and he represents my current model, much more than Mailer, of an author who could apparently do anything. Bellow has circled in and out of my awareness over the years, and it’s only recently that I’ve started to figure out what he means to me, in part because of his ambiguous status as a subject of biography. And Roth was the one I knew least. I’d read Portnoy’s Complaint and one or two of the Zuckerman novels, but I always felt guilty over having never gotten around to such late masterpieces as American Pastoral—although the one that I should probably check out first these days is The Plot Against America.

Yet I’ve been thinking about Roth for about as long as I’ve wanted to be a writer, largely because he came as close as anyone ever could to having the perfect career, apart from the lack of the Nobel Prize. He won the National Book Award for his debut at the age of twenty-six; he had a huge bestseller at an age when he was properly equipped to enjoy it; and he closed out his oeuvre with a run of major novels that critics seemed to agree were among the best that he, or anyone, had ever written. (As Garner nicely puts it: “He turned on the afterburners.”) But he never seemed satisfied by his achievement, which you can take as an artist’s proper stance toward his work, a reflection of the fleeting nature of such rewards, a commentary on the inherent bitterness of the writer’s life, or all of the above. Toward the end of his career, Roth actively advised young writers not to become novelists, and in his retirement announcement, which he delivered almost casually to a French magazine, he quoted Joe Louis: “I did the best I could with what I had.” A month later, in an interview with Charles McGrath of the New York Times, he expanded on his reasoning:

I know I’m not going to write as well as I used to. I no longer have the stamina to endure the frustration. Writing is frustration—it’s daily frustration, not to mention humiliation. It’s just like baseball: you fail two-thirds of the time…I can’t face any more days when I write five pages and throw them away. I can’t do that anymore…I knew I wasn’t going to get another good idea, or if I did, I’d have to slave over it.

And on his computer, he posted a note that gave him strength when he looked at it each day: “The struggle with writing is over.”

Roth’s readers, of course, rarely expressed the same disillusionment, and he lives most vividly in my mind as a reference point against which other authors could measure themselves. In an interview with The Telegraph, John Updike made one of the most quietly revealing statements that I’ve ever heard from a writer, when asked if he felt that he and Roth were in competition:

Yes, I can’t help but feel it somewhat. Especially since Philip really has the upper hand in the rivalry as far as I can tell. I think in a list of admirable novelists there was a time when I might have been near the top, just tucked under Bellow. But since Bellow died I think Philip has…he’s certainly written more novels than I have, and seems more dedicated in a way to the act of writing as a means of really reshaping the world to your liking. But he’s been very good to have around as far as goading me to become a better writer.

I think about that “list of admirable novelists” all the time, and it wasn’t just a joke. In an excellent profile in The New Yorker, Claudia Roth Pierpont memorably sketched in all the ways in which other writers warily circled Roth. When asked if the two of them were friends, Updike said, “Guardedly,” and Bellow seems to have initially held Roth at arm’s length, until his wife convinced him to give the younger writer a chance. Pierpont concludes of the relationship between Roth and Updike: “They were mutual admirers, wary competitors who were thrilled to have each other in the world to up their game: Picasso and Matisse.”

And they also remind me of another circle of writers whom I know somewhat better. If Bellow, Mailer, Updike, and Roth were the Big Four of the literary world, they naturally call to mind the Big Three of science fiction—Heinlein, Asimov, and Clarke. In each case, the group’s members were perfectly aware of how exceptional they were, and they carefully guarded their position. (Once, in a conference call with the other two authors, Asimov jokingly suggested that one of them should die to make room for their successors. Heinlein responded: “Fuck the other writers!”) Clarke and Asimov seem to have been genuinely “thrilled to have each other in the world,” but their relationship with the third point of the triangle was more fraught. Toward the end, Asimov started to “avoid” the combative Heinlein, who had a confrontation with Clarke over the Strategic Defense Initiative that effectively ended their friendship. In public, they remained cordial, but you can get a hint of their true feelings in a remarkable passage from the memoir I. Asimov:

[Clarke] and I are now widely known as the Big Two of science fiction. Until early 1988, as I’ve said, people spoke of the Big Three, but then Arthur fashioned a little human figurine of wax and with a long pin— At least, he has told me this. Perhaps he’s trying to warn me. I have made it quite plain to him, however, that if he were to find himself the Big One, he would be very lonely. At the thought of that, he was affected to the point of tears, so I think I’m safe.

As it turned out, Clarke, like Roth, outlived all the rest, and perhaps they felt lonely in the end. Longevity can amount to a kind of victory in itself. But it must be hard to be the Big One.

From Montgomery to Bilbao

On August 16, 2016, the Equal Justice Initiative, a legal rights organization, unveiled its plans for the National Memorial for Peace and Justice, which would be constructed in Montgomery, Alabama. Today, less than two years later, it opens to the public, and the timing could hardly seem more appropriate, in ways that even those who conceived of it might never have imagined. As Campbell Robertson writes for the New York Times:

At the center is a grim cloister, a walkway with eight hundred weathered steel columns, all hanging from a roof. Etched on each column is the name of an American county and the people who were lynched there, most listed by name, many simply as “unknown.” The columns meet you first at eye level, like the headstones that lynching victims were rarely given. But as you walk, the floor steadily descends; by the end, the columns are all dangling above, leaving you in the position of the callous spectators in old photographs of public lynchings.

And the design represents a breakthrough in more ways than one. As the critic Philip Kennicott points out in the Washington Post: “Even more remarkable, this memorial…was built on a budget of only $15 million, in an age when major national memorials tend to cost $100 million and up.”

Of course, if the memorial had been more costly, it might not exist at all, and certainly not with the level of independence and the clear point of view that it expresses. Yet if there’s one striking thing about the coverage of the project, it’s the absence of the name of any one architect or designer. Neither of these two words even appears in the Times article, and in the Post, we only read that the memorial was “designed by [Equal Justice Initiative founder Bryan] Stevenson and his colleagues at EJI in collaboration with the Boston-based MASS Design Group.” When you go to the latter’s official website, twelve people are credited as members of the project design team. This is markedly different from the way in which we tend to talk about monuments, museums, and other architectural works that are meant to invite our attention. In many cases, the architect’s identity is a selling point in itself, as it invariably is with Frank Gehry, whose involvement in a project like the Guggenheim Museum Bilbao is consciously intended to rejuvenate an entire city. In Montgomery, by contrast, the designer is essentially anonymous, or part of a collaboration, which seems like an aesthetic choice as conscious as the design of the space itself. The individual personality of the architect departs, leaving the names and events to testify on their own behalf. Which is exactly as it should be.

And it’s hard not to compare this to the response to the design of the Vietnam Veterans Memorial in 1981. The otherwise excellent documentary by Ken Burns and Lynn Novick alludes to the firestorm that it caused, but it declines to explore how much of the opposition was personal in nature. As James Reston, Jr. writes in the definitive study A Rift in the Earth:

After Maya Lin’s design was chosen and announced, the public reaction was intense. Letters from outraged veterans poured into the Memorial Fund office. One claimed that Lin’s design had “the warmth and charm of an Abyssinian dagger.” “Nihilistic aesthetes” had chosen it…Predictably, the names of incendiary antiwar icons, Jane Fonda and Abbie Hoffman, were invoked as cheering for a design that made a mockery of the Vietnam dead…As for the winner with Chinese ancestry, [donor H. Ross] Perot began referring to her as “egg roll.”

If anything, the subject matter of the National Memorial for Peace and Justice is even more fraught, and the decision to place the designers in the background seems partially intended to focus the conversation on the museum itself, and not on those who made it.

Yet there’s a deeper lesson here about architecture and its creators. At first, you might think that a building with a singular message would need to arise from—or be identified with—an equally strong personality, but if anything, the trend in recent years has gone the other way. As Reinier de Graaf notes in Four Walls and a Roof, one of the more curious developments over the last few decades is the way in which celebrity architects, like Frank Gehry, have given up much of their own autonomy for the sake of unusual forms that no human hand or brain could properly design:

In partially delegating the production of form to the computer, the antibox has seemingly boosted the production of extravagant shapes beyond any apparent limits. What started as a deliberate meditation on the notion of form in the early antiboxes has turned into a game of chance. Authorship has become relative: with creation now delegated to algorithms, the antibox’s main delight is the surprise it causes to the designers.

Its opposite number is the National Memorial for Peace and Justice, which was built with simple materials and techniques that rely for their impact entirely on the insight, empathy, and ingenuity of the designer, who then quietly fades away. The architect can afford to disappear, because the work speaks for those who are unable to speak for themselves. And that might be the most powerful message of all.

Who Needs the Kwik-E-Mart?

Who needs the Kwik-E-Mart?
Now here’s the tricky part…

“Homer and Apu”

On October 8, 1995, The Simpsons aired the episode “Bart Sells His Soul,” which still hasn’t stopped rattling around in my brain. (A few days ago, my daughter asked: “Daddy, what’s the soul?” I may have responded with some variation on Lisa’s words: “Whether or not the soul is physically real, it’s the symbol of everything fine inside us.” On a more typical morning, though, I’m likely to mutter to myself: “Remember Alf? He’s back—in pog form!”) It’s one of the show’s finest installments, but it came close to being about something else entirely. On the commentary track for the episode, the producer Bill Oakley recalls:

There’s a few long-lived ideas that never made it. One of which is David Cohen’s “Homer the Narcoleptic,” which we’ve mentioned on other tracks. The other one was [Greg Daniels’s] one about racism in Springfield. Do you remember this? Something about Homer and Dr. Hibbert? Well, you pitched it several times and I think we were just…It was some exploration of the concept of race in Springfield, and we just said, you know, we don’t think this is the forum. The Simpsons can’t be the right forum to deal with racism.

Daniels—who went on to create Parks and Recreation and the American version of The Office—went with the pitch for “Bart Sells His Soul” instead, and the other premise evidently disappeared forever, including from his own memory. When Oakley brings it up, Daniels only asks: “What was it?”

Two decades later, The Simpsons has yet to deal with race in any satisfying way, even when the issue seems unavoidable. Last year, the comedian Hari Kondabolu released the documentary The Problem With Apu, which explores the complicated legacy of one of the show’s most prominent supporting characters. On Sunday, the show finally saw fit to respond to these concerns directly, and the results weren’t what anyone—apart perhaps from longtime showrunner Al Jean—might have wanted. As Sopan Deb of the New York Times describes it:

The episode, titled “No Good Read Goes Unpunished,” featured a scene with Marge Simpson sitting in bed with her daughter Lisa, reading a book called “The Princess in the Garden,” and attempting to make it inoffensive for 2018. At one point, Lisa turns to directly address the TV audience and says, “Something that started decades ago and was applauded and inoffensive is now politically incorrect. What can you do?” The shot then pans to a framed picture of Apu at the bedside with the line, “Don’t have a cow!” inscribed on it. Marge responds: “Some things will be dealt with at a later date.” Followed by Lisa saying, “If at all.”

Kondabolu responded on Twitter: “This is sad.” And it was. As Linda Holmes of NPR aptly notes: “Apu is not appearing in a fifty-year-old book by a now-dead author. Apu is a going concern. Someone draws him, over and over again.” And the fact that the show decided to put these words into the mouth of Lisa Simpson, whose importance to viewers everywhere was recently underlined, makes it doubly disappointing.

But there’s one obvious change that The Simpsons could make, and while it wouldn’t be perfect, it would be a step in the right direction. If the role of Apu were recast with an actor of South Asian descent, it might not be enough in itself, but I honestly can’t see a downside. Hank Azaria would still be allowed to voice dozens of characters. Even if Apu sounded slightly different than before, this wouldn’t be unprecedented—Homer’s voice changed dramatically after the first season, and Julie Kavner’s work as Marge is noticeably more gravelly than it used to be. Most viewers who are still watching probably wouldn’t even notice, and the purists who might object undoubtedly left a long time ago. It would allow the show to feel newsworthy again, and not just on account of another gimmick. And even if we take this argument to its logical conclusion and ask that Carl, Officer Lou, Akira, Bumblebee Man, and all the rest be voiced by actors of the appropriate background, well, why not? (The show’s other most prominent minority character, Dr. Hibbert, seems to be on his way out for other reasons, and he evidently hasn’t appeared in almost two years.) For a series that has systematically undermined its own legacy in every conceivable way out of little more than boredom, it seems shortsighted to cling to the idea that Azaria is the only possible Apu. And even if it leaves many issues unresolved on the writing level, it also seems like a necessary precondition for change. At this late date, there isn’t much left to lose.

Of course, if The Simpsons were serious about this kind of effort, we wouldn’t be talking about its most recent episode at all. And the discussion is rightly complicated by the fact that Apu—like everything else from the show’s golden age—was swept up in the greatness of those five or six incomparable seasons. Before that unsuccessful pitch on race in Springfield, Greg Daniels was credited for “Homer and Apu,” which deserves to be ranked among the show’s twenty best episodes, and the week after “Bart Sells His Soul,” we got “Lisa the Vegetarian,” which gave Apu perhaps his finest moment, as he ushered Lisa to the rooftop garden to meet Paul and Linda McCartney. But the fact that Apu was a compelling character argues not against further change, but in its favor. And what saddens me the most about the show’s response is that it undermines what The Simpsons, at its best, was supposed to be. It was the cartoon that dared to be richer and more complex than any other series on the air; it had the smartest writers in the world and a network that would leave them alone; it was just plain right about everything; and it gave us a metaphorical language for every conceivable situation. The Simpsons wasn’t just a sitcom, but a vocabulary, and it taught me how to think—or it shaped the way that I do think so deeply that there’s no real distinction to be made. As a work of art, it has quietly fallen short in ways both small and large for over fifteen years, but I was able to overlook it because I was no longer paying attention. It had done what it had to do, and I would be forever grateful. But this week, when the show was given the chance to rise again to everything that was fine inside of it, it faltered. Which only tells me that it lost its soul a long time ago.

The dawn of man

Note: To celebrate the fiftieth anniversary of the release of 2001: A Space Odyssey, which held its premiere on April 2, 1968, I’ll be spending the week looking at various aspects of what remains the greatest science fiction movie ever made.

Almost from the moment that critics began to write about 2001, it became fashionable to observe that the best performance in the movie was by an actor playing a computer. In his review in Analog, for example, P. Schuyler Miller wrote:

The actors, except for the gentle voice of HAL, are thoroughly wooden and uninteresting, and I can’t help wondering whether this isn’t Kubrick’s subtle way of suggesting that the computer is really more “human” than they and fully justified in trying to get rid of them before they louse up an important mission. Someday we may know whether the theme of this part is a Clarke or a Kubrick contribution. I suspect it was the latter…perhaps just because Stanley Kubrick is said to like gadgets.

This criticism is often used to denigrate the other performances or the film’s supposed lack of humanity, but I prefer to take it as a tribute to the work of actor Douglas Rain, Kubrick and Clarke’s script, and the brilliant design of HAL himself. The fact that a computer is the character we remember best isn’t a flaw in the movie, but a testament to its skill and imagination. And as I’ve noted elsewhere, the acting is excellent—it’s just so understated and naturalistic that it seems vaguely incongruous in such spectacular settings. (Compare it to the performances in Destination Moon, for instance, and you see how good Keir Dullea and William Sylvester really are here.)

But I also think that the best performance in 2001 isn’t by Douglas Rain at all, but by Vivian Kubrick, in her short appearance on the phone as Heywood Floyd’s daughter. It’s a curious scene that breaks many of the rules of good storytelling—it doesn’t lead anywhere, it’s evidently designed to do nothing but show off a piece of hardware, and it peters out even as we watch it. The funniest line in the movie may be Floyd’s important message:

Listen, sweetheart, I want you to tell mommy something for me. Will you remember? Well, tell mommy that I telephoned. Okay? And that I’ll try to telephone tomorrow. Now will you tell her that?

But that’s oddly true to life as well. And when I watch the scene today, with a five-year-old daughter of my own, it seems to me that there’s no more realistic little girl in all of the movies. (Kubrick shot the scene himself, asking the questions from offscreen, and there’s a revealing moment when the camera rises to stay with Vivian as she stands. This is sometimes singled out as a goof, although there’s no reason why a sufficiently sophisticated video phone wouldn’t be able to track her automatically.) It’s a scene that few other films would have even thought to include, and now that video chat is something that we all take for granted, we can see through the screen to the touchingly sweet girl on the other side. On some level, Kubrick simply wanted his daughter to be in the movie, and you can’t blame him.

At the time, 2001 was criticized as a soulless hunk of technology, but now it seems deeply human, at least compared to many of its imitators. Yesterday in the New York Times, Bruce Handy shared a story from Keir Dullea, who explained why he breaks the glass in the hotel room at the end, just before he comes face to face with himself as an old man:

Originally, Stanley’s concept for the scene was that I’d just be eating and hear something and get up. But I said, “Stanley, let me find some slightly different way that’s kind of an action where I’m reaching—let me knock the glass off, and then in mid-gesture, when I’m bending over to pick it up, let me hear the breathing from that bent-over position.” That’s all. And he says, “Oh, fine. That sounds good.” I just wanted to find a different way to play the scene than blankly hearing something. I just thought it was more interesting.

I love this anecdote, not just because it’s an example of an evocative moment that arose from an actor’s pragmatic considerations, but because it feels like an emblem of the production of the movie as a whole. 2001 remains the most technically ambitious movie of all time, but it was also a project in which countless issues were being figured out on the fly. Every solution was a response to a specific problem, and it covered a dizzying range of challenges—from the makeup for the apes to the air hostess walking upside down—that might have come from different movies entirely.

2001, in short, was made by hand—and it’s revealing that many viewers assume that computers had to be involved, when they didn’t figure in the process at all. (All of the “digital” readouts on the spacecraft, for instance, were individually animated, shot on separate reels of film, and projected onto those tiny screens on set, which staggers me even to think about. And even after all these years, I still can’t get my head around the techniques behind the Star Gate sequence.) It reminds me, in fact, of another movie that happens to be celebrating an anniversary this year. As a recent video essay pointed out, if the visual effects in Jurassic Park have held up so well, it’s because most of them aren’t digital at all. The majority consist of a combination of practical effects, stop motion, animatronics, raptor costumes, and a healthy amount of misdirection, with computers used only when absolutely necessary. Each solution is targeted at the specific problems presented by a snippet of film that might last just for a few seconds, and it moves so freely from one trick to another that we rarely have a chance to see through it. It’s here, not in A.I., that Spielberg got closest to Kubrick, and it hints at something important about the movies that push the technical aspects of the medium. They’re often criticized for an absence of humanity, but in retrospect, they seem achingly human, if only because of the total engagement and attention that was required for every frame. Most of their successors lack the same imaginative intensity, which is a greater culprit than the use of digital tools themselves. Today, computers are used to create effects that are perfect, but immediately forgettable. And one of the wonderful ironies of 2001 is that it used nothing but practical effects to create a computer that no viewer can ever forget.

The axioms of behavior

Earlier this week, Keith Raniere, the founder of an organization known as Nxivm, was arrested in Mexico, to which he had fled last year in the wake of a devastating investigation published in the New York Times. The article described a shady operation that combined aspects of a business seminar, a pyramid scheme, and a sex cult, with public workshops shading into a “secret sisterhood” that required its members to provide nude photographs or other compromising materials and be branded with Raniere’s initials. (In an email obtained by the Times, Raniere reassured one of his followers: “[It was] not originally intended as my initials but they rearranged it slightly for tribute.”) According to the report, about sixteen thousand people have taken the group’s courses, which are marketed as leading to “greater self-fulfillment by eliminating psychological and emotional barriers,” and some went even further. As the journalist Barry Meier wrote:

Most participants take some workshops, like the group’s “Executive Success Programs,” and resume their lives. But other people have become drawn more deeply into Nxivm, giving up careers, friends and families to become followers of its leader, Keith Raniere, who is known within the group as “Vanguard”…Former members have depicted [Raniere] as a man who manipulated his adherents, had sex with them and urged women to follow near-starvation diets to achieve the type of physique he found appealing.

And it gets even stranger. In 2003, Raniere sued the Cult Education Institute for posting passages from his training materials online. In his deposition for the suit, which was dismissed just last year, Raniere stated:

I discovered I had an exceptional aptitude for mathematics and computers when I was twelve. It was at the age of twelve I read The Second Foundation [sic] by Isaac Asimov and was inspired by the concepts on optimal human communication to start to develop the theory and practice of Rational Inquiry. This practice involves analyzing and optimizing how the mind handles data. It involves mathematical set theory applied in a computer programmatic fashion to processes such as memory and emotion. It also involves a projective methodology that can be used for optimal communication and decision making.

Raniere didn’t mention any specific quotations from Asimov, but they were presumably along the lines of the following, which actually appears in Foundation and Empire, spoken by none other than the Mule:

Intuition or insight or hunch-tendency, whatever you wish to call it, can be treated as an emotion. At least, I can treat it so…The human mind works at low efficiency. Twenty percent is the figure usually given. When, momentarily, there is a flash of greater power it is termed a hunch, or insight, or intuition. I found early that I could induce a continual use of high brain-efficiency. It is a killing process for the person affected, but it is useful.

At this point, one might be tempted to draw parallels to other cults, such as Aum Shinrikyo, that are also said to have taken inspiration from Asimov’s work. In this case, however, the connection to the Foundation series seems tangential at best. A lot of us read science fiction at the golden age of twelve, and while we might be intrigued by psychohistory or mental engineering, few of us take it in the direction that Raniere evidently did. (As one character observes in Umberto Eco’s Foucault’s Pendulum: “People don’t get the idea of going back to burn Troy just because they read Homer.”) In fact, Raniere comes off a lot more like L. Ron Hubbard, at least in the version of himself that he presents in public. In the deposition, he provided an exaggerated account of his accomplishments that will strike those who know Hubbard as familiar:

In 1988, I was accepted into the Mega Society. The requirements to be accepted into the Mega Society were to have a demonstrated IQ of 176…In 1989, I was accepted into the Guinness Book of World Records under the category “Highest IQ.” I also left my position as a Computer Programmer/Analyst and resumed business consulting with the intention to raise money to start the “Life Learning Institute.” At this point in time I became fascinated with how human motivation affected behavior. I started to refine my projective mathematical theory of the human mind to include a motivational behavior equation.

And when Raniere speaks of developing “a set of consistent axioms of how human behavior interfaced with the world,” it’s just a variation on an idea that has been recycled within the genre for decades.

Yet it’s also worth asking why the notion of a “mathematical psychology” appeals to these manipulative personalities, and why many of them have repackaged these ideas so successfully for their followers. You could argue that Raniere—or even Charles Manson—represents the psychotic fringe of an impulse toward transformation that has long been central to science fiction, culminating in the figure of the superman. (It’s probably just a coincidence, but I can’t help noting that two individuals who have been prominently linked with the group, the actresses Kristin Kreuk and Allison Mack, both appeared on Smallville.) And many cults hold out a promise of change for which the genre provides a convenient vocabulary. As Raniere said in his deposition:

In mathematics, all things are proven based on axioms and a step by step systematic construction. Computers work the same way. To program a computer one must first understand the axioms of the computer language, and then the step by step systematic construction of the problem-solution methodology. Finally, one must construct the problem-solution methodology in a step by step fashion using the axioms of the language. I discovered the human mind works the same way and I formalized the process.

This sounds a lot like Hubbard, particularly in the early days of dianetics, in which the influence of cybernetics was especially strong. But it also represents a limited understanding of what the human mind can be, and it isn’t surprising that it attracts people who see others as objects to be altered, programmed, and controlled. The question of whether such figures as Hubbard or Raniere really buy into their own teachings resists any definitive answer, but one point seems clear enough. Even if they don’t believe it, they obviously wish that it were true.

Life on the last mile

In telecommunications, there’s a concept called “the last mile,” which states that the final leg of a network—the one that actually reaches the user’s home, school or office—is the most difficult and expensive to build. It’s one thing to construct a massive trunkline, which is basically a huge but relatively straightforward feat of engineering, and quite another to deal with the tangle of equipment, wiring, and specifications on the level of thousands of individual households. More recently, the concept has been extended to public transportation, delivery and distribution services, and other fields that depend on connecting an industrial operation on the largest imaginable scale with specific situations on the retail side. (For instance, Amazon has been trying to cross the last mile through everything from its acquisition of Whole Foods to drone delivery, and the fact that these are seen as alternative approaches to the same problem points to how complicated it really is.) This isn’t just a matter of infrastructure, either, but of the difficulties inherent to any system in which a single pipeline has to split into many smaller branches, whether it’s carrying blood, water, mail, or data. Ninety percent of the wiring can be in that last mile, and success lies less in any overall principles than in the irritating particulars. It has to be solved on the ground, rather than in a design document, and you’ll never be able to anticipate all of the obstacles that you’ll face once those connections start to multiply. It’s literally about the ramifications.

I often feel the same way when it comes to writing. When I think back on how I’ve grown as a writer over the last decade or so, I see clear signs of progress. Thanks mostly to the guidelines that David Mamet presents in On Directing Film, it’s much easier for me to write a decent first draft than it was when I began. I rarely leave anything unfinished; I know how to outline and how to cut; and I’m unlikely to make any huge technical mistakes. In his book Which Lie Did I Tell?, William Goldman says something similar about screenwriting:

Stephen Sondheim once said this: “I cannot write a bad song. You begin it here, build, end there. The words will lay properly on the music so they can be sung, that kind of thing. You may hate it, but it will be a proper song.” I sometimes feel that way about my screenplays. I’ve been doing them for so long now, and I’ve attempted most genres. I know about entering the story as late as possible, entering each scene as late as possible, that kind of thing. You may hate it, but it will be a proper screenplay.

Craft, in other words, can take you most of the way—but it’s the final leg that kills you. As Goldman concludes of his initial pass on the script for Absolute Power: “This first draft was proper as hell—you just didn’t give a shit.” And sooner or later, most writers find that they spend most of their time on that last mile.

Like most other art forms, creative writing can indeed be taught—but only to the point that it still resembles an engineering problem. There are a few basic tricks of structure and technique that will improve almost anyone’s work, much like the skills that you learn in art books like Drawing on the Right Side of the Brain, and that kind of advancement can be enormously satisfying. When it comes to the last mile between you and your desired result, however, many of the rules start to seem useless. You aren’t dealing with the general principles that have gotten you this far, but with problems that arise on the level of individual words or sentences, each one of which needs to be tackled on its own. There’s no way of knowing whether or not you’ve made the right choice until you’ve looked at them all in a row, and even if something seems wrong, you may not know how to fix it. The comforting shape of the outline, which can be assembled in a reasonably logical fashion, is replaced by the chaos of the text, and the fact that you’ve done good work on this level before is no guarantee that you can do it right now. I’ve learned a lot about writing over the years, but to the extent that I’m not yet the writer that I want to be, it lies almost entirely in that last mile, where the ideal remains tantalizingly out of reach.

As a result, I end up revising endlessly, even at a late stage, and although the draft always gets better, it never reaches perfection. After a while, you have to decide that it’s as good as it’s going to get, and then move on to something else—which is why it helps to have a deadline. But you can take comfort in the fact that the last mile affects even the best of us. In a recent New York Times profile of the playwright Tony Kushner, Charles McGrath writes:

What makes Angels in America so complicated to stage is not just Mr. Kushner’s need to supervise everything, but that Perestroika, the second part, is to a certain extent a work in progress and may always be. The first part, Millennium Approaches, was already up and running in the spring of 1991, when, with a deadline looming, Mr. Kushner retreated to a cabin in Northern California and wrote most of Perestroika in a feverish eight-day stint, hardly sleeping and living on junk food. He has been tinkering with it ever since…Even during rehearsal last month he was still cutting, rewriting, restructuring.

If Tony Kushner is still revising Angels in America, it makes me feel a little better about spending my life on that last mile. Or as John McPhee says about knowing when to stop: “What I know is that I can’t do any better; someone else might do better, but that’s all I can do; so I call it done.”

Instant karma

Last year, my wife and I bought an Instant Pot. (If you’re already dreading the rest of this post, I promise in advance that it won’t be devoted solely to singing its praises.) If you somehow haven’t encountered one before, it’s basically a programmable pressure cooker. It has a bunch of other functions, including slow cooking and making yogurt, but aside from its sauté setting, I haven’t had a chance to use them yet. At first, I suspected that it would be another appliance, like our bread maker, that we would take out of the box once and then never touch again, but somewhat to my surprise, I’ve found myself using it on a regular basis, and not just as a reliable topic for small talk at parties. Its great virtue is that it allows you to prepare certain tasty but otherwise time-consuming recipes—like the butter chicken so famous that it received its own writeup in The New Yorker—with a minimum of fuss. As I write these lines, my Instant Pot has just finished a batch of soft-boiled eggs, which is its most common function in my house these days, and I might use it tomorrow to make chicken adobo. Occasionally, I’ll be mildly annoyed by its minor shortcomings, such as the fact that an egg set for four minutes at low pressure might have a perfect runny yolk one day and verge on hard-boiled the next. It saves time, but when you add in the waiting period to build and then release the pressure, which isn’t factored into most recipes, it can still take an hour or more to make dinner. But it still marks the most significant step forward in my life in the kitchen since Mark Bittman taught me how to use the broiler more than a decade ago.

My wife hasn’t touched it. In fact, she probably wouldn’t mind if I said that she was scared of the Instant Pot—and she isn’t alone in this. A couple of weeks ago, the Wall Street Journal ran a feature by Ellen Byron titled “America’s Instant-Pot Anxiety,” with multiple anecdotes about home cooks who find themselves afraid of their new appliance:

Missing from the enclosed manual and recipe book is how to fix Instant Pot anxiety. Debbie Rochester, an elementary-school teacher in Atlanta, bought an Instant Pot months ago but returned it unopened. “It was too scary, too complicated,” she says. “The front of the thing has so many buttons.” After Ms. Rochester’s friends kept raving about their Instant Pot meals, she bought another one…Days later, Ms. Rochester began her first beef stew. After about ten minutes of cooking, it was time to release the pressure valve, the step she feared most. Ms. Rochester pulled her sweater over her hand, turned her back and twisted the knob without looking. “I was praying that nothing would blow up,” she says.

Elsewhere, the article quotes Sharon Gebauer of San Diego, who just wanted to make beef and barley soup, only to be filled with sudden misgivings: “I filled it up, started it pressure cooking, and then I started to think, what happens when the barley expands? I just said a prayer and stayed the hell away.”

Not surprisingly, the article has inspired derision from Instant Pot enthusiasts, among whom one common response seems to be: “People are dumb. They don’t read instruction manuals.” Yet I can testify firsthand that the Instant Pot can be intimidating. The manual is thick and not especially organized, and it does a poor job of explaining such crucial features as the steam release and float valve. (I had to watch a video to learn how to handle the former, and I didn’t figure out what the latter was until I had been using the pot for weeks.) But I’ve found that you can safely ignore most of it and fall back on a few basic tricks—as soon as you manage to get through at least one meal. Once I successfully prepared my first dish, my confidence increased enormously, and I barely remember how it felt to be nervous around it. And that may be the single most relevant point about the cult that the Instant Pot has inspired, which rivals the most fervent corners of fan culture. As Kevin Roose noted in a recent article in the New York Times:

A new religion has been born…Its deity is the Instant Pot, a line of electric multicookers that has become an internet phenomenon and inspired a legion of passionate foodies and home cooks. These devotees—they call themselves “Potheads”—use their Instant Pots for virtually every kitchen task imaginable: sautéing, pressure-cooking, steaming, even making yogurt and cheesecakes. Then, they evangelize on the internet, using social media to sing the gadget’s praises to the unconverted.

And when you look at the Instant Pot from a certain angle, you realize that it has all of the qualities required to create a specific kind of fan community. There’s an initial learning curve that’s daunting enough to keep out the casuals, but not so steep that it prevents a critical mass of enthusiasts from forming. Once you learn the basics, you forget how intimidating it seemed when you were on the outside. And it has a huge body of associated lore that discourages newbies from diving in, even if it doesn’t matter much in practice. (In the months that I’ve been using the Instant Pot, I’ve never used anything except the manual pressure and sauté functions, and I’ve disregarded the rest of the manual, just as I draw a blank on pretty much every element of the mytharc on The X-Files.) Most of all, perhaps, it takes something that is genuinely good, but imperfect, and elevates it into an object of veneration. There are plenty of examples in pop culture, from Doctor Who to Infinite Jest, and perhaps it isn’t a coincidence that the Instant Pot has a vaguely futuristic feel to it. A science fiction or fantasy franchise can turn off a lot of potential fans because of its history and complicated externals, even if most are peripheral to the actual experience. Using the Instant Pot for the first time is probably easier than trying to get into Doctor Who, or so I assume—I’ve steered clear of that franchise for many of the same reasons, reasonable or otherwise. There’s nothing wrong with being part of a group drawn together by the shared object of your affection. But once you’re on the inside, it can be hard to put yourself in the position of someone who might be afraid to try it because it has so many buttons.

Written by nevalalee

February 15, 2018 at 8:45 am

Reorganizing the peace

In his book Experiment in Autobiography, which he wrote when he was in his sixties, the novelist H.G. Wells defined what he saw as his great task ahead: “To get the primaries of life under control and to concentrate the largest possible proportion of my energy upon the particular system of effort that has established itself for me as my distinctive business in the world.” He explained:

I do not now in the least desire to live longer unless I can go on with what I consider to be my proper business…And that is where I am troubled now. I find myself less able to get on with my work than ever before. Perhaps the years have something to do with that, and it may be that a progressive broadening and deepening of my conception of what my work should be, makes it less easy than it was; but the main cause is certainly the invasion of my time and thought by matters that are either quite secondary to my real business or have no justifiable connection with it. Subordinate and everyday things, it seems to me in this present mood, surround me in an ever-growing jungle. My hours are choked with them; my thoughts are tattered by them. All my life I have been pushing aside intrusive tendrils, shirking discursive consequences, bilking unhelpful obligations, but I am more aware of them now and less hopeful about them than I have ever been. I have a sense of crisis; that the time has come to reorganize my peace, if the ten or fifteen years ahead, which at the utmost I may hope to work in now, are to be saved from being altogether overgrown.

As it turns out, Wells was exactly right, and he lived for another fourteen years. And his notion of rethinking one’s life by “reorganizing the peace” has preoccupied me for a long time, too, although it wasn’t until I read this passage that I was able to put it into words. Wells associated such problems with the lives of creative professionals, whom he compares to “early amphibians” trying to leave the water for the first time, but these days, they seem to affect just about everyone. What troubled Wells was the way in which the work of artists and writers, which is usually done in solitude, invariably involves its practitioners in entanglements that take them further away from whatever they wanted to do in the first place. To some extent, that’s true of all pursuits—success in any field means that you spend less time on the fundamentals than you did when you started—but it’s especially hard on the creative side, since its rewards and punishments are so unpredictable. Money, if it comes at all, arrives at unreliable intervals, and much of your energy is devoted to dealing with problems that can’t be anticipated in advance. It’s exhausting and faintly comic, as Wells beautifully phrased it:

Imperfection and incompleteness are the certain lot of all creative workers. We all compromise. We all fall short. The life story to be told of any creative worker is therefore by its very nature, by its diversions of purpose and its qualified success, by its grotesque transitions from sublimation to base necessity and its pervasive stress towards flight, a comedy.

But the artists were just ahead of the curve, and these “grotesque transitions” are now part of all our lives. Instead of depending on the simple sources of gratification—family, work, religion—that served human beings for so long, we’re tied up in complicated networks that offer more problematic forms of support. A few days ago, the social psychologist Jane Adams published an article in the New York Times about the epidemic of “perfectionism” in college students, as young people make unreasonable demands on themselves to satisfy the expectations of social media:

As college students are returning to school after their winter breaks, many parents are concerned about the state of their mental health. The parents worry about the pressure their kids are putting on themselves. Thinking that others in their social network expect a lot of them is even more important to young adults than the expectations of parents and professors…Parents in my practice say they’re noticing how often their kids come away from Facebook and Instagram feeling depressed, ashamed and anxious, and how vulnerable they are to criticism and judgment, even from strangers, on their social media feeds.

And this simply places more of us in the predicament that Wells identified in artists, whose happiness was tied up with the fickle responses of tastemakers, gatekeepers, and the anonymous public. The need to deal with such factors, which were impossible to anticipate from one day to the next, was the source of many of the “entanglements” that he saw as interfering with his work. And the only difference these days is that everyone’s a critic, and we’re all selling ourselves.

But the solution remains the same. Wells spoke fondly of his vision of what he called the Great Good Place, borrowing a phrase from his friend Henry James:

I require a pleasant well-lit writing room in good air and a comfortable bedroom to sleep in—and, if the mood takes me, to write in—both free from distracting noises and indeed all unexpected disturbances. There should be a secretary or at least a typist within call and out of earshot, and, within reach, an abundant library and the rest of the world all hung accessibly on to that secretary’s telephone. (But it would have to be a one-way telephone, so that when we wanted news we could ask for it, and when we were not in a state to receive and digest news, we should not have it forced upon us.)

This desire for a “one-way telephone” makes me wonder how Wells would view our online lives, which over the last decade have evolved to a point where the flow of energy seems to pass both ways. Wells, of course, was most famous for his science fiction, in which he foresaw such future innovations as space travel and nuclear weapons, but this might be his most prescient observation of all: “We are therefore, now and for the next few hundred years at least, strangers and invaders of the life of every day. We are all essentially lonely. In our nerves, in our bones. We are too preoccupied and too experimental to give ourselves freely and honestly to other people, and in the end other people fail to give themselves fully to us.”

Written by nevalalee

January 23, 2018 at 8:45 am

American Stories #9: 808s & Heartbreak

Note: As we enter what Joe Scarborough justifiably expects to be “the most consequential political year of our lives,” I’m looking back at ten works of art—books, film, television, and music—that deserve to be reexamined in light of where America stands today. You can find the earlier installments here.

If there’s a common thread that connects many of the works of art that I’ve been discussing here, it’s the way in which our private selves can be invaded by our lives as members of a larger nation, until the two become neurotically fused into one. This is probably true of all countries, but its deeper connection with the notion of personal reinvention feels especially American, and no celebrity embodies it as much as Kanye West. It might seem impossible to make sense of the political evolution of a man who once told us that President Bush didn’t care about black people and then ended up—despite the efforts of a concerned time traveler—taking a very public meeting with Donald Trump. Yet if one of our most ambitious, talented, and inventive artists can be frequently dismissed by critics as “oblivious,” it may only be because he’s living two years ahead of the rest of us, and he’s unusually committed to working out his confusions in public. We should all feel bewildered these days, and West doesn’t have the luxury of keeping it to himself. It might seem strange to single out 808s & Heartbreak, which looks at first glance like his least political work, but if this is the most important album of the last ten years, and it is, it’s largely because it reminded us of how unbearable emotion can be expressed through what might seem to casual listeners like cold detachment. It’s an insight that has crucial implications for those of us who just want to get through the next few years, and while West wasn’t the first to make it, he was remarkably candid about acknowledging his sources to the New York Times:

I think the fact that I can’t sing that well is what makes 808s so special…808s was the first album of that kind, you know? It was the first, like, black new wave album. I didn’t realize I was new wave until this project. Thus my connection with Peter Saville, with Raf Simons, with high-end fashion, with minor chords. I hadn’t heard new wave! But I am a black new wave artist.

This is exactly right, and it gets at why this album, which once came off as a perverse dead end, feels so much now like the only way forward. When I think of its precursors, my mind naturally turns to the Pet Shop Boys, particularly on Actually, which was first released in 1987. A song like “Shopping” anticipates 808s in its vocal processing, its dry drum machine, its icy synthesizers, and above all in how it was widely misconstrued as a reflection of the Thatcherite consumerism that it was criticizing. That’s the risk that you run as an ironist, and West has been punished for it more often than anybody else. And while these two worlds could hardly seem further apart, the underlying impulses are weirdly similar. New wave is notoriously hard to define, but I like to think of it as a movement occupied by those who aren’t comfortable in rock or punk. Maybe you’re just a huge nerd, or painfully shy, or not straight or white, or part of a group that has traditionally been penalized for expressing vulnerability or dissent. One solution is to remove as much of yourself from the work as possible, falling back on irony, parody, or Auto-Tune. You make a virtue of reticence and understatement, trusting that your intentions will be understood by those who feel the same way. This underlies the obsessive pastiches of Stephin Merritt and the Magnetic Fields, whose 69 Love Songs is the other great album of my adult life, as well as West’s transformation of himself into a robot programmed to feel pain, like an extended version of the death of HAL in 2001: A Space Odyssey. West has taken it further in the years since—“Blood on the Leaves” may be his most scandalous mingling of the political and the personal—but it was 808s that introduced it to his successors, for whom it serves both as a formula for making hits and as an essential means of survival. Sometimes the only way to make it through the coldest winter is to turn it into the coldest story ever told.

The Hedgehog, the Fox, and the Fatted Ram, Part 1

leave a comment »

Over the long weekend, both the New York Times and the Washington Post published lead articles on the diminishing public profile of Jared Kushner. The timing may have been a coincidence, but the pieces had striking similarities. Both made the argument that Kushner’s portfolio, once so vast, has been dramatically reduced by the arrival on the scene of White House chief of staff John F. Kelly; both ran under a headline that included some version of the word “shrinking”; and both led off with memorable quotes from their subject. In the Times, it was Kushner’s response when asked by Reince Priebus what his Office of American Innovation would really do: “What do you care?” (The newspaper of record, proper as ever, added: “He emphasized his point with an expletive.”) Meanwhile, the Post, which actually scored an interview, came away with something even stranger. Here’s what Kushner said of himself:

During the campaign, I was more like a fox than a hedgehog. I was more of a generalist having to learn about and master a lot of skills quickly. When I got to D.C., I came with an understanding that the problems here are so complex—and if they were easy problems, they would have been fixed before—and so I became more like the hedgehog, where it was more taking issues you care deeply about, going deep and devoting the time, energy and resources to trying to drive change.

The Post merely noted that this is Kushner’s “version of the fable of the fox, who knows many things, and the hedgehog, who knows one important thing,” but as the Washington Examiner pointed out, the real source is Isaiah Berlin’s classic book The Hedgehog and the Fox, which draws its famous contrast between foxes and hedgehogs as a prelude to a consideration of Leo Tolstoy’s theory of history.

Berlin’s book, which is one of my favorites, is so unlike what I’d expect Jared Kushner to be reading that I can’t resist trying to figure out what this reference to it means. If I were conspiratorially minded, I’d observe that if Kushner had wanted to put together a reading list to quickly bring himself up to speed on the history and culture of Russia—I can’t imagine why—then The Hedgehog and the Fox, which can be absorbed in a couple of hours, would be near the top. But the truth, unfortunately, is probably more prosaic. If there’s a single book from the last decade that Kushner, who was briefly touted as the prodigy behind Trump’s data operation, can be assumed to have read, or at least skimmed, it’s Nate Silver’s The Signal and the Noise. And Silver talks at length about the supposed contrast between foxes and hedgehogs, courtesy of a professor of psychology and political science named Philip E. Tetlock, who conducted a study of predictions by experts in various fields:

Tetlock was able to classify his experts along a spectrum between what he called hedgehogs and foxes. The reference to hedgehogs and foxes comes from the title of an Isaiah Berlin essay on the Russian novelist Leo Tolstoy—The Hedgehog and the Fox…Foxes, Tetlock found, are considerably better at forecasting than hedgehogs. They had come closer to the mark on the Soviet Union, for instance. Rather than seeing the USSR in highly ideological terms—as an intrinsically “evil empire,” or as a relatively successful (and perhaps even admirable) example of a Marxist economic system—they instead saw it for what it was: an increasingly dysfunctional nation that was in danger of coming apart at the seams. Whereas the hedgehogs’ forecasts were barely any better than random chance, the foxes’ demonstrated predictive skill.

As intriguing as we might find this reference to Russia, which Kushner presumably read, it also means that in all likelihood, he never even opened Berlin’s book. (Silver annoyingly writes: “Unless you are a fan of Tolstoy—or of flowery prose—you’ll have no particular reason to read Berlin’s essay.”) But it doesn’t really matter where he encountered these classifications. As much as I love the whole notion of the hedgehog and the fox, it has one big problem—as soon as you read it, you’re immediately tempted to apply it to yourself, as Kushner does, when in fact its explanatory power applies only to geniuses. Like John Keats’s celebrated concept of negative capability, which is often used to excuse sloppy, inconsistent thinking, Berlin’s essay encourages us to think of ourselves as foxes or hedgehogs, when we’re really just dilettantes or suffering from tunnel vision. And this categorization has its limits even when applied to unquestionably exceptional personalities. Here’s how Berlin lays it out on the very first page of his book:

There exists a great chasm between those, on one side, who relate everything to a single central vision, one system less or more coherent or articulate, in terms of which they understand, think and feel—a single, universal, organizing principle in terms of which alone all that they are and say has significance—and, on the other side, those who pursue many ends, often unrelated and even contradictory, connected, if at all, only in some de facto way, for some psychological or physiological cause, related by no moral or aesthetic principle; these last lead lives, perform acts, and entertain ideas that are centrifugal rather than centripetal, their thought is scattered or diffused, moving on many levels…without, consciously or unconsciously, seeking to fit [experiences and objects] into, or exclude them from, any one unchanging, all-embracing, sometimes self-contradictory and incomplete, at times fanatical, unitary inner vision.

The contrast that Berlin draws here could hardly seem more stark, but it falls apart as soon as we apply it to, say, Kushner’s father-in-law. On the one hand, Trump has succeeded beyond his wildest dreams by harping monotonously on a handful of reliable themes, notably white nationalism, xenophobia, and resentment of liberal elites. Nothing could seem more like the hedgehog. On the other hand, from one tweet to the next, he’s nothing if not “centrifugal rather than centripetal,” driven by his impulses, embracing contradictory positions, undermining his own surrogates, and resisting all attempts to pin him down to a conventional ideology. It’s all very foxlike. The most generous reading would be to argue that Trump, as Berlin contends of Tolstoy, is “by nature a fox, but [believes] in being a hedgehog,” a comparison that seems ridiculous even as I type it. It’s far more plausible that Trump lacks the intellectual rigor, or even the basic desire, to assemble anything like a coherent politics out of his instinctive drives for power and revenge. Like most of us, he’s a mediocre thinker, and his confusions, which reflect those of his base, have gone a long way toward enabling his rise. Trump bears much the same relationship to his fans that Emerson saw in the man who obsessed Tolstoy so deeply:

Among the eminent persons of the nineteenth century, Bonaparte is far the best known and the most powerful; and owes his predominance to the fidelity with which he expresses the tone of thought and belief, the aims of the masses…If Napoleon is France, if Napoleon is Europe, it is because the people whom he sways are little Napoleons.

Faced with a Trump, little or big, Berlin’s categories lose all meaning—not out of any conceptual weakness, but because that isn’t what they were designed to do. But that doesn’t mean that Berlin doesn’t deserve our attention. In fact, The Hedgehog and the Fox has more to say about our current predicament than any other book I know, and if Kushner ever bothered to read it, it might give him reason to worry. I’ll have more to say about this tomorrow.

Of texts and textiles

leave a comment »

If you spend as much time as I do browsing random news articles online, your eye might have been caught yesterday by a story with the headline “‘Allah’ is Found on Viking Funeral Clothes.” Similar pieces ran in multiple publications, but I’ll stick with the one in the New York Times, which I think is where I saw it first. Here’s how it begins:

The discovery of Arabic characters that spell “Allah” and “Ali” on Viking funeral costumes in boat graves in Sweden has raised questions about the influence of Islam in Scandinavia. The grave where the costumes were found belonged to a woman dressed in silk burial clothes and was excavated from a field in Gamla Uppsala, north of Stockholm, in the 1970s, but its contents were not cataloged until a few years ago, Annika Larsson, a textile archaeologist at Uppsala University, said on Friday.

Larsson says that she was examining the patterns when she “remembered seeing them in similar Moorish designs in silk ribbons from Spain. I understood it had to be a kind of Arabic character, not Nordic.” The article continues: “Upon closer examination of the band from all angles, she said, she realized she was looking at Kufic script. The words Allah and Ali appeared in the silk found in Boat Grave 36 and in many other graves—and, most intriguing, the word Allah could be seen when reflected in a mirror.” It’s “most intriguing” indeed, particularly because it’s consistent with the hypothesis, which is widely credited, that “the Viking settlements in the Malar Valley of Sweden were, in fact, a western outpost of the Silk Road that stretched through Russia to silk-producing centers east of the Caspian Sea.”

Unfortunately, this particular piece of evidence began to fall apart almost at once. I’d like to say that I felt a flicker of doubt even as I read the article, particularly the part about the pattern being “reflected in a mirror,” but I can’t be entirely sure—like a lot of other readers, I glanced over it briefly and moved on. A few hours later, I saw another story headlined “That Viking Textile Probably Didn’t Actually Have ‘Allah’ On It.” It linked to a very persuasive blog post by Carolyn Priest-Dorman, a textile historian and Viking reenactor who seems perfectly positioned to identify the flaws in Larsson’s argument. As the Times article neglects to mention, Larsson’s reconstruction doesn’t just depend on reflecting the design, but in extending it conjecturally on either side, on the assumption that portions of the original are missing. Priest-Dorman points out that this is unwarranted on the evidence:

This unexplained extrapolation practically doubles the width of the band, and here’s why that’s a problem…If you consult…a photo of Band 6, you can clearly see the continuous metallic weft of the band turning at each selvedge to enter back in the other direction. If Larsson were correct that Band 6 was originally significantly wider, you would not see those turning loops; you’d see a series of discontinuous single passes of brocading weft with cut or broken ends at each edge.

In other words, if the pattern were incomplete, we’d see the breaks, but we don’t. And even if this point were up for debate, you clearly increase the risk of subjective readings when you duplicate, reflect, and otherwise distort the raw “text.”

No one has accused Larsson of intentional fraud, but it appears that the right combination of elements—a source of ambiguous patterns, some erudition, and a certain amount of wishful thinking—resulted in a “solution” to a problem that wasn’t there. If this sounds familiar, it might be because I’ve discussed similar cases on this blog before. One is The Great Cryptogram by Ignatius L. Donnelly, who argued that Francis Bacon was the true author of the works of Shakespeare and left clues to his identity in a code in the plays. An even better parallel is the scholar William Romaine Newbold, who died believing that he had cracked the mysterious Voynich Manuscript. As David Kahn recounts in his masterpiece The Codebreakers, Newbold fell victim to much the same kind of error that Larsson did, except at far greater length and complexity:

Newbold saw microscopic shorthand symbols in the macroscopic characters of the manuscript text and began his decipherment by transliterating them into Roman letters. A secondary text of seventeen different letters resulted. He doubled all but the first and last letters of each section…The resultant quaternary text was then “translated”: Newbold replaced the pairs of letters with a single letter, presumably according to a key, which, however, he never made clear…Finally, Newbold anagrammed the letters of this senary text to produce the alleged plaintext in Latin.

The result, of course, was highly suspect. Anagramming chunks of over a hundred characters at a time, as Newbold did, could result in almost any text you wanted, and the “microscopic shorthand symbols” were nothing but “the breaking up of the thick ink on the rough surface of the vellum into shreds and filaments that Newbold had imagined were individual signs.”
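
To get a feel for why anagramming at that scale proves nothing, here is a minimal sketch in Python. The hundred-odd-letter “chunk” is an arbitrary stand-in, not actual Voynich text, and the candidate phrases are my own inventions; the only point is that a pool of letters that size can be rearranged into almost any short message you like.

    from collections import Counter

    def can_anagram(chunk, candidate):
        """True if every letter of `candidate` can be drawn from the letters of `chunk`."""
        pool = Counter(c for c in chunk.lower() if c.isalpha())
        need = Counter(c for c in candidate.lower() if c.isalpha())
        return all(pool[letter] >= count for letter, count in need.items())

    # An arbitrary pool of roughly a hundred letters (a stand-in, not Voynich text).
    chunk = "the quick brown fox jumps over the lazy dog " * 3

    # Three unrelated "plaintexts" can all be assembled from the same pool.
    for phrase in ["roger bacon wrote this",
                   "there is no hidden message",
                   "send more coffee"]:
        print(phrase, "->", can_anagram(chunk, phrase))

All three candidates come back True, which is exactly the problem: a method that can confirm anything confirms nothing.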

Donnelly and Newbold were working before an era of instantaneous news coverage, but I don’t doubt that they would have received plenty of sympathetic, or at least credulous, attention if they had published their results today—and, in fact, hardly a month goes by without reports of a new “breakthrough” in the Voynich Manuscript. (I’m reminded of the Beale cipher, a similar enigma encoding an alleged hidden treasure that inspired an entire society, the Beale Cypher Association, devoted to solving it. In his book Biggest Secrets, the author William Poundstone examined a copy of the society’s quarterly newsletter, which is available online. It contained no fewer than three proposed solutions.) In the aftermath of the Larsson debacle, a number of observers, including Stephennie Mulder of the University of Texas, raised concerns about how the theory was reported: “It should go without saying that a single scholar’s un-peer-reviewed claim does not truth make.” She’s right. But I think there’s a more specific lesson here. Both Larsson and Newbold started with a vast source of raw material, selected a tiny piece of it, and subjected it to a series of analogous permutations. Larsson doubled the pattern and reflected it in a mirror; Newbold doubled the illusory characters and then anagrammed the result. The first step increased the amount of text that could be “studied,” while the second rearranged it arbitrarily to facilitate additional readings. Each transformation moved further away from the original, which should have been a red flag for any skeptical reader. But when you summarize the process by providing only the first and the last steps, while omitting the intermediate stages, the conclusion looks a lot more impressive. This is exactly what happened with Larsson, and when we turn to Newbold, who announced his findings in 1921, we see how little anything has changed. As Kahn writes in The Codebreakers: “The public at large was fascinated. Sunday supplements had a field day.”

Quote of the Day

leave a comment »

I have a theatrical temperament. I’m not interested in the middle road—maybe because everyone’s on it. Rationality, reasonableness bewilder me. I think it comes out of being a “daughter of the Golden West.” A lot of the stories I was brought up on had to do with extreme actions—leaving everything behind, crossing the trackless wastes, and in those stories the people who stayed behind and had their settled ways—those people were not the people who got the prize. The prize was California.

Joan Didion, in an interview with Michiko Kakutani in the New York Times

Written by nevalalee

October 6, 2017 at 7:30 am

The man up the tree

with 2 comments

In his remarkable book The Sound of the One Hand, the author Yoel Hoffmann provides a translation and commentary for one of the most famous of all Zen koans, which is usually known as “The Man Up the Tree.” Here’s Hoffmann’s version:

Zen Master Kyōgen said, “Let us suppose that a man climbs up a tree. He grips the branches with his teeth, his hands do not hold onto the tree, and his feet do not touch the ground. A monk below asks him about the meaning of our founder coming from the west. If he does not answer, he will be avoiding the monk’s question. But if he opens his mouth and utters a word, he will fall to his death. Under such circumstances, what should the man do?” A certain monk by the name of Koto said, “Once the man is up the tree, no question should be raised. The man should ask the monk if the latter has anything to say to him before he goes up the tree.” On hearing this, Kyōgen laughed out loud. Later, Master Setchō commented, “It is easy to say it up on the tree. To say it under the tree is difficult. So I shall climb the tree myself. Come, ask me a question!”

A koan is a question traditionally posed by a Zen master to a novice, and according to Hoffmann, there’s a “correct” answer for each one, in the form of a ritual response or gesture: “In some cases, the answer simply consists of a repetition of the essential phrase within the koan. In other cases, it adds a somewhat different variation of what is already implied in the koan. The best answers are those which through an unexpected phrase or action provide a flash of insight into the koan’s meaning.” And I’ll get to the “answer” to this koan in a moment.

I found myself thinking about the man up the tree shortly after yesterday’s horrific mass shooting in Las Vegas. More specifically, it came to mind after I read the comments from White House press secretary Sarah Huckabee Sanders, who was clearly shaken by the attack, but who also responded to questions about gun control: “There will certainly be a time for that policy discussion to take place, but that’s not the place that we’re in at this moment.” If this rings a bell, it’s because it’s highly reminiscent—as David Dayen of The New Republic has pointed out—of the statement made last month by Scott Pruitt, the head of the Environmental Protection Agency, about the debate over climate change in advance of Hurricane Irma:

To have any kind of focus on the cause and effect of the storm versus helping people, or actually facing the effect of the storm, is misplaced…To use time and effort to address it at this point is very, very insensitive to this people in Florida.

I don’t want to overanalyze the political calculation here, which seems both instinctive and fairly obvious—if this isn’t a good time to discuss these issues, it’s because there will never be a good time. But it also left me with the picture of an entire culture hanging over a precipice, afflicted by existential risk and unable to open its mouth to talk about it. As Koto says: “Once the man is up the tree, no question should be raised.” Or as Lisa Friedman of the New York Times wrote more than three weeks ago: “In Washington, where science is increasingly political, the fact that oceans and atmosphere are warming and that the heat is propelling storms into superstorms has become as sensitive as talking about gun control in the wake of a mass shooting.”

A koan isn’t the same thing as an argument, and the image that this one presents isn’t entirely clear. (I’m not even sure who the man in the tree is supposed to be in this scenario. Is it me? The government? All of us? Scott Pruitt?) But it rings true as a commentary on life itself, in which we’re constantly suspended by the teeth. Two months ago, I wrote of the state of perpetual emergency that Jesus saw in the coming of the kingdom of heaven, which the historian Michael Grant insightfully discusses in light of the parable of the unjust steward:

How shocking…to find Jesus actually praising this shady functionary. He praised him because, when confronted with a crisis, he had acted. You, declared Jesus to his audience, are faced with a far graver crisis, a far more urgent need for decision and action. As this relentless emergency approaches you cannot just sit with your hands folded. Keep your eyes open and be totally alert and prepared to act if you want to be among the Remnant who will endure the terrible time.

I quoted these lines in August in response to the violence in Charlottesville, which seemed at the time like the most urgent manifestation so far of our own emergency. Now its memory threatens to fade, effaced by the seemingly endless succession of crises—large, small, and ludicrous—that have followed. It isn’t a political strategy or a run of bad luck, but the way of life that we’ve bought for ourselves. This is how it’s going to feel to be alive for the foreseeable future. And the best image that I’ve found for it is that of the man clinging by his teeth to the branch.

So what’s the answer? Master Setchō says that it’s easier to reply to the question when you’re in the tree than under it. Hoffmann explains: “Setchō’s quasi-paradoxical comment implies that the concrete problem of being caught up in a tree…is not to be confused with abstract speculations.” But it might also mean that it’s exactly in the moment of greatest danger that the best answer is likely to be given, if only we can manage to say it. Meanwhile, here’s the “correct” answer that the student is supposed to offer, which, at first glance, doesn’t seem especially helpful:

The pupil stands up and takes the pose of hanging down from a tree. With certain masters, there are pupils who may stick a finger in the mouth, utter, “Uh…uh”; and, shaking the body slightly, give the pretense of one trying to answer but unable to…The pupil pretends to fall from a tree. Landing on his bottom, he says, “Ouch! That hurt!”

But there’s a message here that I find faintly encouraging. The man falls from the tree—but he doesn’t die. Instead, in a moment of slapstick that recalls the comic hero, he lands on his bottom. It stings, but he’ll recover, which implies that the risks of opening one’s mouth are less hazardous than the alternative. And perhaps Hoffmann gets closest to the truth when he says:

It is plausible to assume that a man who holds onto a tree with his teeth would fall anyway. Answering or not answering the question is not his most urgent problem. What he needs is not philosophy, but somebody who is kind and courageous enough to help him down.

Written by nevalalee

October 3, 2017 at 8:03 am

The tendency blanket

leave a comment »

The Russian writer Mikhail Zoshchenko wrote, “Man is excellently made and eagerly lives the kind of life that is being lived.” I love the idea that there’s this thing we might call human tendency, and it’s like a big blanket that gets draped over whatever conditions a given time period has produced. So you know, the Spanish Inquisition comes along, and human tendency gets draped over that historical reality, and “being human” plays out in a certain way. Or it’s 1840, and you’re living in Iceland, and human tendency drapes itself over whatever is going on there and—“being human” looks another way. Same blanket, different manifestation. The Internet shows up, and social media and so on, and the blanket of our human tendency gets draped over all of that, and “being human” looks yet another way.

Likewise, if we drape that tendency blanket over some imagined future time where everybody’s eighty percent prosthetic, it’s still the same blanket. So the writer’s ultimate concentration should be on the blanket, not on what’s underneath it. What writing can do uniquely, I think, is show us fundamental human tendencies, and the ways these tendencies lead to suffering—Faulkner’s good old “human heart in conflict with itself” idea. That’s what we’re really interested in, I think, and why we turn to literature.

George Saunders, in the New York Times

Written by nevalalee

September 30, 2017 at 7:30 am

Dancing in a box

leave a comment »

In her book The Creative Habit, the choreographer Twyla Tharp devotes an entire chapter to a cardboard box. Before I get to it, though, I wanted to highlight another anecdote that she shares. When she was developing the idea for what became the musical Movin’ Out, Tharp put together a twenty-minute videotape of dancers performing to the music of Billy Joel—at her own expense—as a proof of concept. Only then did she tell Joel himself what she had in mind. Tharp explains:

The tape was a critical piece of preparation and vital to selling the idea to the two people who could make or break the project. The first person was me: I had to see that Billy’s music could “dance.” The tape was visual evidence of something I felt. The second person, of course, was Billy. That’s why I called him the moment I was sure. I have learned over the years that you should never save for two meetings what you can accomplish in one. The usual routine for selling an idea is that you set up a first meeting to explain it and then a second meeting to show it. I didn’t want to leave anything to chance. Who knew if I would ever get a second meeting? When busy people are involved, a lot of things can happen to foul up even well intentioned plans, so I decided to go for it all in one shot and invested my time and money into producing and editing the twenty-minute tape.

Much of Tharp’s book alternates between inspiring bromides and useful advice, but this paragraph is the real deal. Nassim Nicholas Taleb writes of such meetings in The Black Swan: “I am sometimes shocked at how little people realize that these opportunities do not grow on trees.” He’s right. When you pitch a project to someone in a position to make it happen, you give it everything you’ve got. Even if you’re Twyla Tharp.

As soon as Tharp and Joel had a handshake deal to make the musical, Tharp began to prepare the box that she uses for all her projects, which she describes as a cardboard carton of the kind that you can pick up in bulk at Office Depot. She writes:

I start every dance with a box. I write the project name on the box, and as the piece progresses I fill it up with every item that went into the making of the dance. This means notebooks, news clippings, CDs, videotapes of me working alone in my studio, videos of the dancers rehearsing, books and photographs and pieces of art that may have inspired me.

In short, it’s a place to put ideas—which I’ve elsewhere identified as an essential creative tool—and Tharp prefers the humble banker’s box for its sheer practicality: “They’re easy to buy, and they’re cheap…They’re one hundred percent functional; they do exactly what I want them to do: hold stuff.” For Movin’ Out, the first thing that went into the box was the twenty-minute videotape, followed by two blue index cards on which Tharp wrote her objectives for the show, which in this case were “Tell a story” and “Make dance pay for the dancers.” (These statements of purpose remain there throughout the process, even if you can’t see them: “They sit there as I write this, covered by months of research, like an anchor keeping me connected to my original impulse.” I’ll return to this point later on.) Other items included notebooks, news clippings, movies like Full Metal Jacket and The Wild One, the green beret once worn by her military adviser, and photographs of location research. Ultimately, that one box grew to twelve. And in the end, it paid off—Movin’ Out broke out of the jukebox musical mold to run for three years on Broadway and win Tony Awards for both Tharp and Joel.

But that isn’t the box that I want to talk about today. Several years after the critical and commercial triumph of Movin’ Out, Tharp tried again, this time with the music of Bob Dylan—and the result, The Times They Are A-Changin’, was such a resounding flop that I don’t even remember it, even though I was living in New York at the time. And there’s no reason to think that Tharp’s process had changed. She began working with Dylan around two years after The Creative Habit was published, and the preparatory phase, if anything, was even more intense, as Tharp relates: “The Times They Are A-Changin’ was the product of one year of research and preparation and another year and a half of casting, rehearsing, and workshops.” Tharp surely put together a wonderful box, just as she did with Joel, but the result seems to have underwhelmed nearly everyone who saw it. (The critic Ben Brantley wrote in the New York Times: “When a genius goes down in flames, everybody feels the burn.”) Like The Lord of the Rings and The Hobbit, it serves as a cautionary tale for what happens when everything looks the same on paper, down to the dropped “g” in the title, but lightning fails to strike twice. In her subsequent book The Collaborative Habit, Tharp pins part of the blame on “Dylan’s possessive fan base,” who didn’t like the liberties that she took with the material: “I did not prepare them for the fact that my Dylan might not be theirs.” Another red flag was the fact that Dylan approached Tharp, not the other way around:

Bob Dylan is charming, smart, funny—and, like Billy Joel, very busy. When he called to suggest that we collaborate on a dance musical, it was clear that I would be filling in most of the dotted lines. And that was a blinking yellow light, for Dylan’s catalog is massive. Before I started looking through it in search of a dramatic thread, I thought to prove to myself—and to reassure us both—that his songs were danceable.

At first, this seems like another reminder that success in art has as much to do with luck as with skill, and perhaps Tharp was simply due for a regression to the mean. But there’s another explanation, and it comes back down to that box. Tharp remembers:

When I first started working with Dylan’s music, I had an idea that really appealed to me—to use only Dylan’s love songs. Those songs aren’t what most of us think of when we list our favorite Dylan music, and Dylan’s greatest hits were very important to the producers. We’re used to hearing him angry and accusing, exhorting us to protest, scorning a friend who has betrayed him. But the fact is, he’s also written a sheaf of gorgeous love songs and it was the sentiment in these that made me want to dance. To have used them and dramatized the relationship they suggest might have produced a show I could feel more intensely. But I had walked away from my original instinct—thus violating another of my cardinal rules—and instead, created an evening rich in pageantry and metaphor, a kind of Fellini circus.

I can picture Tharp writing “Dylan love songs” on a blue index card, putting it in the box—only to have it covered up by clippings, photographs, and sketches of circuses. It was there, but it got buried. (After the show folded, Tharp worked through her grief by dancing in her apartment to Dylan’s music: “That is, to the music I would have used had I not veered off my original path—to the love songs.”) The box evidently has its risks, as well as its rewards. But it can also have a surprising afterlife. Tharp writes of the cardboard cartons for her old projects: “I may have put the box away on a shelf, but I know it’s there. The project name on the box in bold black lettering is a constant reminder that I had an idea once and may come back to it very soon.” And just last week, ten years after her first attempt failed, she presented a new show for the current season of Twyla Tharp Dance. It’s called “Dylan Love Songs.” She held onto the box.

Handbook for morals

with one comment

Yesterday, the pop culture site Pajiba broke the strange story behind the novel Handbook for Mortals, which topped the New York Times bestseller list for Young Adult Fiction, despite not being available at most of the big chains or on Amazon. It soon became clear that somebody was gaming the system, calling stores, asking if they were among the retailers who reported sales data to the Times, and then placing bulk orders of the book. (Whoever did this was smart enough to keep all purchases below the threshold that would flag it as a corporate sale, which is usually around thirty copies.) But why bother doing this in the first place? An update on the site sheds some light on the subject:

Pajiba received details from two separate anonymous sources who got in touch, each claiming that author Lani Sarem herself admitted plans in multiple meetings with potential business partners and investors to push the book onto the New York Times bestseller list by fudging the numbers. Both sources also noted that the author and publisher’s primary concerns were to get a film deal, with the movie having been promised funding if it became a bestseller, hence a bulk-buying strategy with a focus on reaching the convention circuit.

In other words, the book, which has since been pulled from the Times list, didn’t have any value in itself, but as an obligatory stepping stone on the way to a movie deal—an important point that I’ll discuss later. For now, I’ll content myself with observing that the plan, if anything, succeeded too well. If the book had debuted a few notches further down, it might have raised eyebrows, but not to the extent that it did by clumsily clawing its way to the top. As Ace Rothstein notes wearily of a con artist in Casino: “If he wasn’t so fuckin’ greedy, he’d have been tougher to spot.”

The coverage on Pajiba is excellent, but it doesn’t mention the most famous precedent for this kind of self-defeating strategy. On April 15, 1990, the San Diego Union published an article by Mike McIntyre headlined “Hubbard Hot-Author Status Called Illusion,” which remains the best piece ever written on the tactics used by the Church of Scientology to get its founder on the bestseller lists. It begins:

In 1981, St. Martin’s Press was offered a sure thing. L. Ron Hubbard, the pulp writer turned religious leader, had written his first science-fiction novel in more than thirty years. If St. Martin’s published it, Hubbard aides promised the firm, subsidiary organizations of Hubbard’s Church of Scientology would buy at least fifteen thousand copies…”Five, six, seven people at a time would come in, with cash in hand, buying [Battlefield Earth],” said Dave Dutton, of Dutton’s Books, a group of four stores in the Los Angeles area. “They’d blindly ask for the book. They would buy two or three copies at a time with fifty-dollar bills. I had the suspicion that there was something not quite right about it.”

Michael Denneny, a senior editor at St. Martin’s, confirmed the arrangement, saying that Author Services—the affiliate of the church devoted to Hubbard’s literary work—promised to purchase between fifteen and twenty thousand copies, but ultimately went even further: “The Author Services people were very rambunctious. They wanted to make it a New York Times best seller. They were obsessed by that.” And in another article from the Los Angeles Times, a former sales manager for the church’s Bridge Publications revealed: “My orders for the week were to find the New York Times’ reporting stores anywhere in the east so they could send people into the stores to buy [Hubbard’s] books.”

If this sounds a lot like Handbook for Mortals, it wouldn’t be the first time that the church’s tactics have been imitated by others. After Hubbard’s death in 1986, the same bulk-buying techniques were applied to all ten volumes of the Mission Earth “dekalogy,” with the added goal of securing a Hugo Award nomination—which turned out to be substantially easier. The writer Charles Platt had become annoyed by a loophole in the nominating process, in which anyone who paid a small fee to become a supporting member of the World Science Fiction Convention could nominate a book. Platt wasn’t a Scientologist, but he wrote to the church suggesting that they exploit this technicality, hoping that it would draw attention to the problem. A few years later, the Church of Scientology apparently took his advice, buying memberships to vote in droves for Black Genesis, the second volume in the series. As the fan Paul Kincaid noted:

At least fifty percent of the nominations for Black Genesis came from people taking out supporting membership with their nominations. A large number of these came from people in Britain whom I’ve never heard of in any sort of fannish context, either before or since the convention. A lot of the nominations from people in Britain came on photocopied ballots with [Scientologist] Robert Springall’s name written on the bottom. It is within the rules to photocopy ballots and circulate them, providing that the person who has done the photocopying puts his or her name on the bottom…I didn’t make any record of this, but my impression was that a large number of people who took out supporting memberships to nominate Hubbard’s book didn’t actually vote in the final ballot.

In the end, Black Genesis came in dead last, behind “No Award,” but the structural weakness in the Hugos remained. Two decades later, the groups known collectively as the Sad Puppies and the Rabid Puppies took advantage of it in similar fashion, encouraging followers to purchase supporting memberships and vote for their recommended slates. And the outcome was much the same.

It’s striking, of course, that methods pioneered by Scientology have been appropriated by other parties, either for personal gain or to make a political point. But it isn’t surprising. What the author of Handbook for Mortals, the Church of Scientology, and the Puppies all have in common is a shared disregard for the act of reading. Their actions can only be justified if bestsellerdom or an award nomination is taken as a means to an end, rather than as a reflection of success among actual readers. Sarem wanted a movie deal, the Puppies wanted to cause trouble, and the Scientologists wanted “to establish an identity for Hubbard other than as the founder of a controversial religious movement…to recruit new members into the Church of Scientology.” And the real irony here is that Hubbard himself wouldn’t have approved. The Union article notes of his novel’s publication history:

Harvey Haber, a former Scientologist who served as Hubbard’s literary aide, was dispatched to New York to sell the manuscript [of Battlefield Earth]. Hubbard demanded that the book be represented by a major literary agency and placed with one of the ten largest publishers. The church and Bridge Publications were to play no role. “He wanted to prove to everyone and all that he still had it,” Haber said. “That he was the best in the world.”

But fifty-eight New York literary agencies thought otherwise, Haber said. “Not one of them would touch it.” In Haber’s opinion, “The book was a piece of shit.” Church officials didn’t dare tell Hubbard his book was unmarketable, said Haber. “You would’ve been handed your head.” Thus, he said, was hatched the plan to offer guaranteed sales in return for publication.

Hubbard never learned that the church was buying his books in bulk, and he might have been furious if he had found out. Instead, he died believing that he had reached the bestseller list on his own merits. Whatever his other flaws, he genuinely wanted to be read. And this might be one of the few cases on record in which his integrity was greater than that of his followers.

The public eye

leave a comment »

Last month, the New York Times announced that it was eliminating its public editor, an internal watchdog position that dates back over a decade to the Jayson Blair scandal. In a memo to employees, publisher Arthur Sulzberger outlined the reasoning:

The responsibility of the public editor—to serve as the reader’s representative—has outgrown that one office…Today, our followers on social media and our readers across the Internet have come together to collectively serve as a modern watchdog, more vigilant and forceful than one person could ever be. Our responsibility is to empower all of those watchdogs, and to listen to them, rather than to channel their voice through a single office.

We are dramatically expanding our commenting platform. Currently, we open only ten percent of our articles to reader comments. Soon, we will open up most of our articles to reader comments. This expansion, made possible by a collaboration with Google, marks a sea change in our ability to serve our readers, to hear from them, and to respond to them.

The decision was immediately criticized, as much for its optics and timing as for its underlying rationale. As Zach Schonfeld wrote for Newsweek: “The Times’s ability to hold the [Trump] administration accountable relies on its ability to convince readers that it’s holding itself accountable—to convince the country that it’s not ‘fake news,’ as Trump frequently charges, and that it is getting the story right.”

This seems obvious to me. Even if it was a legitimate call, it looks bad, especially at this particular moment. The public editor hasn’t always been as empowered or vocal as the role demands, but those are problems that should have been addressed by strengthening the position, not eliminating it, even if the Times itself lacked the inclination to do so. (Tom Scocca observed on Politico: “Sulzberger seemed to approach the routine duty of holding his paper accountable the same way a surly twelve-year-old approaches the task of mowing the lawn—if he could do it badly enough, maybe people would decide he shouldn’t have been made to do it at all.”) But I’m more concerned by the argument that the public editor’s role could somehow be outsourced to comments, both on the site itself and on unaffiliated platforms like Twitter. As another article in the Times explains:

We have implemented a new system called Moderator, and starting today, all our top stories will allow comments for an eight-hour period on weekdays. And for the first time, comments in both the News and Opinion sections will remain open for twenty-four hours.

Moderator was created in partnership with Jigsaw, a technology incubator that’s part of Alphabet, Google’s parent company. It uses machine-learning technology to prioritize comments for moderation, and sometimes, approves them automatically…The Times struck a deal with Jigsaw that we outlined last year: In exchange for the Times’s anonymized comments data, Jigsaw would build a machine learning algorithm that predicts what a Times moderator might do with future comments.
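
I have no window into what Jigsaw actually built, but the general shape of such a system is easy to sketch: train a text classifier on past comments and the decisions that human moderators made about them, then use its scores to triage new submissions. Everything below (the sample comments, the labels, the choice of model) is invented for illustration, not a description of the real pipeline.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Toy history: past comments and whether a human moderator approved them.
    past_comments = [
        "Thoughtful piece, though I think the analysis misses the local angle",
        "BUY CHEAP PILLS NOW click here",
        "I disagree with the author, and here is my reasoning",
        "you people are all idiots",
    ]
    approved = [1, 0, 1, 0]

    # Bag-of-words features plus a linear classifier stand in for the real model.
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(past_comments, approved)

    # Score incoming comments; the highest-confidence ones might be approved
    # automatically, and the rest queued for a human moderator.
    incoming = ["Interesting reporting, thank you", "click here for free money"]
    print(model.predict_proba(incoming)[:, 1])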

Without delving into the merits of this approach or the deal that made it possible, it seems clear that the Times wants us to associate the removal of the public editor with the overhaul of its comments section, as if one development were a response to the other. In his memo, Sulzberger wrote that the relationship between the newspaper and its readers was too important to be “outsourced”—which is a strange way to describe an internal position—to any one person. And by implication, it’s outsourcing it to its commenters instead.

But is that really what’s happening here? To my eyes, it seems more likely that the Times is mentioning two unrelated developments in one breath in hopes that we’ll assume that they’re solutions to the same problem, when, in fact, the paper has done almost nothing to build a comments section that could conceivably take on a watchdog role. In the article on the partnership with Jigsaw, we read: “The community desk has long sought quality of comments over quantity. Surveys of Times readers have made clear that the approach paid off—readers who have seen our comment sections love them.” Well, whenever I’ve seen those comment sections, which is usually by mistake, I’ve clicked out right away—and if these are what “quality” comments look like, I’d hate to see those that didn’t make the cut. But even if I’m not the intended audience, it seems to me that there are a number of essential factors that go into making a viable commentariat, and that the Times has implemented none of them. Namely:

  1. A sense of ownership. A good comment system provides users with a profile that archives all of their submissions in one place, which keeps them accountable and provides a greater incentive to put more effort into what they write. The Times, to my knowledge, doesn’t offer this.
  2. A vibrant community. The best comment sections, like the ones on The A.V. Club and the mid-sized communities on Reddit, benefit from a relatively contained pool of users, which allows you to recognize the names of prolific commenters and build up an identity for yourself. The Times may be too huge and sprawling to allow for this at all, and while workarounds might exist, as I’ll note below, they haven’t really been tried. Until now, the comments sections have appeared too unpredictably on articles to attract readers who aren’t inclined to seek them out, and there’s no support for threads, which allow real conversations to take place.
  3. A robust upvoting system. This is the big one. Comment sections are readable to the extent that they allow the best submissions to float to the top. When I click on an article on the Times, the column on the right automatically shows me the most recent comments, which, on average, are mediocre or worse, and it leaves me with little desire to explore further. The Times offers a “Reader’s Picks” category, but it isn’t the default setting, and it absolutely needs to be. (A minimal sketch of the two sort orders follows this list.) Until then, the paper might get better policing from readers simply by posting every article as a link on Reddit and letting the comments live there.
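
Here, for what it’s worth, is a minimal sketch of the two defaults at issue; the fields and the tie-breaking rule are my own assumptions, not anything the Times has published.

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class Comment:
        text: str
        votes: int
        posted: datetime

    def newest_first(comments):
        """The current default: most recent on top, regardless of quality."""
        return sorted(comments, key=lambda c: c.posted, reverse=True)

    def best_first(comments):
        """A "Reader's Picks"-style default: votes first, recency as the tiebreaker."""
        return sorted(comments, key=lambda c: (c.votes, c.posted), reverse=True)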

It’s important to note that even if all these changes were implemented, they couldn’t replace a public editor, a high-profile position with access to the thought processes of editors and reporters that no group of outside commenters could provide. A good comment section can add value, but it’s a solution to a different problem. Claiming that beefing up the one allows you to eliminate the other is like removing the smoke alarm from your house because you’ve got three carbon monoxide detectors. But even if the Times was serious about turning its commenters into the equivalent of a public editor, like replacing one horse-sized duck with a hundred duck-sized horses, it hasn’t made the changes that would be required to make its comment sections useful. (Implementing items one and three would be fairly straightforward. Item two would be harder, but it might work if the Times pushed certain sections, like Movies or Sports, as portals in themselves, and then tried to expand the community from there.) It isn’t impossible, but it’s hard, and while it would probably cost less than paying a public editor, it would be more expensive than the deal with Google, in which the paper provides information about its readers to get an algorithm for free. And this gets at the real reason for the change. “The community desk has long sought quality of comments over quantity,” the Times writes—so why suddenly emphasize quantity now? The only answer is that it’s easier and cheaper than the alternative, which requires moderation by human beings who have to be paid a salary, rather than an algorithmic solution that is willing to work for data. Given the financial pressures on a site like the Times, which outlined the changes in the same article in which it announced that it would be offering buyouts to its newsroom staff, this is perfectly understandable. But pretending that a move based on cost efficiency is somehow better than the alternative is disingenuous at best, and the effort to link the two decisions points at something more insidious. Correlation isn’t causation, and just because Sulzberger mentions two things in successive paragraphs doesn’t mean they have anything to do with each other. I hate to say it, but it’s fake news. And the Times has just eliminated the one person on its staff who might have been able or willing to point this out.

Written by nevalalee

June 16, 2017 at 8:54 am

The logic of birdsong

with one comment

My favorite theory is that the structure of a bird song is determined by what will carry best in its home environment. Let’s say, you have one bird that lives in a coniferous forest and another in an oak forest. Since the song is passed down by tradition, then let’s say there’s an oak woodland dialect and coniferous woodland dialect. If you reproduce the sounds, you will find that the oak sound carries farther in an oak forest than it does in a coniferous forest, and vice versa…

[Bird songs] have an exposition of a theme. Very often, they have variations in theme reminiscent of canonical variations like Mozart’s Sonata in A major, where you have theme and variation. And eventually, they come back to the original theme. They probably do it for the same reasons that humans compose sonatas. Both humans and birds get bored with monotony. And to counter monotony, you always have to do something new to keep the brain aroused.

Luis F. Baptista, in an interview with the New York Times

Written by nevalalee

May 28, 2017 at 7:30 am

The A/B Test

with 2 comments

In this week’s issue of The New York Times Magazine, there’s a profile of Mark Zuckerberg by Farhad Manjoo, who describes how the founder of Facebook is coming to terms with his role in the world in the aftermath of last year’s election. I find myself thinking about Zuckerberg a lot these days, arguably even more than I use Facebook itself. We just missed overlapping in college, and with one possible exception, which I’ll mention later, he’s the most influential figure to emerge from those ranks in the last two decades. Manjoo depicts him as an intensely private man obliged to walk a fine line in public, leading him to be absurdly cautious about what he says: “When I asked if he had chatted with Obama about the former president’s critique of Facebook, Zuckerberg paused for several seconds, nearly to the point of awkwardness, before answering that he had.” Zuckerberg is trying to figure out what he believes—and how to act—under conditions of enormous scrutiny, but he also has more resources at his disposal than just about anyone else in history. Here’s the passage in the article that stuck with me the most:

The people who work on News Feed aren’t making decisions that turn on fuzzy human ideas like ethics, judgment, intuition, or seniority. They are concerned only with quantifiable outcomes about people’s actions on the site. That data, at Facebook, is the only real truth…This ideal runs so deep that the people who make News Feed often have to put aside their own notions of what’s best. “One of the things we’ve all learned over the years is that our intuition can be wrong a fair amount of the time,” John Hegeman, the vice president of product management and a News Feed team member, told me. “There are things you don’t expect will happen. And we learn a lot from that process: Why didn’t that happen, and what might that mean?”

Reading this, I began to reflect on how rarely we actually test our intuitions. I’ve spoken a lot on this blog about the role of intuitive thinking in the arts and sciences, mostly because it doesn’t get the emphasis it deserves, but there’s also no guarantee that intuition will steer us in the right direction. The psychologist Daniel Kahneman has devoted his career to showing how we tend to overvalue our gut reactions, particularly if we’ve been fortunate enough to be right in the past, and the study of human irrationality has become a rich avenue of research in the social sciences, which are often undermined by poor hunches of their own. It may not even be a matter of right or wrong. An intuitive choice may be better or worse than the alternative, but for the most part, we’ll never know. One of the quirks of Silicon Valley culture is that it claims to base everything on raw data, but it’s often in the service of notions that are outlandish, untested, and easy to misrepresent. Facebook comes closer than any company in existence to the ideal of an endless A/B test, in which the user base is randomly divided into two or more groups to see which approaches are the most effective. It’s the best lab ever developed for testing our hunches about human behavior. (Most controversially, Facebook modified the news feeds of hundreds of thousands of users to adjust the number of positive or negative posts, in order to gauge the emotional impact, and it has conducted similar tests on voter turnout.) And it shouldn’t surprise us if many of our intuitions turn out to be mistaken. If anything, we should expect them to be right about half the time—and if we can nudge that percentage just a little bit upward, in theory, it should give us a significant competitive advantage.
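
For anyone who has never seen one up close, the mechanics of an A/B test are simple enough to sketch in a few lines of Python. The “click” metric and the underlying rates below are invented, and a real experiment adds sample-size planning and a significance test, but the core is just random assignment followed by a comparison of outcomes.

    import random
    from statistics import mean

    random.seed(42)

    def simulate_ab_test(n_users=200_000, rate_a=0.050, rate_b=0.053):
        """Randomly assign users to A or B and compare an invented 'click' rate."""
        outcomes = {"A": [], "B": []}
        for _ in range(n_users):
            group = random.choice("AB")                 # random assignment is the whole trick
            rate = rate_a if group == "A" else rate_b   # hidden "true" effect of each variant
            outcomes[group].append(1 if random.random() < rate else 0)
        return {group: mean(clicks) for group, clicks in outcomes.items()}

    # With this many users, the small edge for variant B becomes reliably visible.
    print(simulate_ab_test())

The nudge from five percent to a little over five percent is exactly the kind of difference that no intuition could detect, which is why the endless test is so seductive.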

So what good is intuition, anyway? I like to start with William Goldman’s story about the Broadway producer George Abbott, who once passed a choreographer holding his head in his hands while the dancers stood around doing nothing. When Abbott asked what was wrong, the choreographer said that he couldn’t figure out what to do next. Abbott shot back: “Well, have them do something! That way we’ll have something to change.” Intuition, as I’ve argued before, is mostly about taking you from zero ideas to one idea, which you can then start to refine. John W. Campbell makes much the same argument in what might be his single best editorial, “The Value of Panic,” which begins with a maxim from the Harvard professor Wayne Batteau: “In total ignorance, try anything. Then you won’t be so ignorant.” Campbell argues that this provides an evolutionary rationale for panic, in which an animal acts “in a manner entirely different from the normal behavior patterns of the organism.” He continues:

Given: An organism with N characteristic behavior modes available. Given: An environmental situation which cannot be solved by any of the N available behavior modes, but which must be solved immediately if the organism is to survive. Logical conclusion: The organism will inevitably die. But…if we introduce Panic, allowing the organism to generate a purely random behavior mode not a member of the N modes characteristically available?

Campbell concludes: “When the probability of survival is zero on the basis of all known factors—it’s time to throw in an unknown.” In extreme situations, the result is panic; under less intense circumstances, it’s a blind hunch. You can even see them as points on a spectrum, the purpose of which is to provide us with a random action or idea that can then be revised into something better, assuming that we survive for long enough. But sometimes the animal just gets eaten.

The idea of refinement, revision, or testing is inseparable from intuition, and Zuckerberg has been granted the most powerful tool imaginable for asking hard questions and getting quantifiable answers. What he does with it is another matter entirely. But it’s also worth looking at his only peer from college who could conceivably challenge him in terms of global influence. On paper, Mark Zuckerberg and Jared Kushner have remarkable similarities. Both are young Jewish men—although Kushner is more observant—who were born less than four years and sixty miles apart. Kushner, whose acceptance to Harvard was so manifestly the result of his family’s wealth that it became a case study in a book on the subject, was a member of the final clubs that Zuckerberg badly wanted to join, or so Aaron Sorkin would have us believe. Both ended up as unlikely media magnates of a very different kind: Kushner, like Charles Foster Kane, took over a New York newspaper from a man named Carter. Yet their approaches to their newfound positions couldn’t be more different. Kushner has been called “a shadow secretary of state” whose portfolio includes Mexico, China, the Middle East, and the reorganization of the federal government, but it feels like one long improvisation, on the apparent assumption that he can wing it and succeed where so many others have failed. As Bruce Bartlett writes in the New York Times, without a staff, Kushner “is just a dilettante meddling in matters he lacks the depth or the resources to grasp,” and we may not have a chance to recover if his intuitions are wrong. In other words, he resembles his father-in-law, as Frank Bruni notes:

I’m told by insiders that when Trump’s long-shot campaign led to victory, he and Kushner became convinced not only that they’d tapped into something that everybody was missing about America, but that they’d tapped into something that everybody was missing about the two of them.

Zuckerberg and Kushner’s lives ran roughly in parallel for a long time, but now they’re diverging at a point at which they almost seem to be offering us two alternate versions of the future, like an A/B test with only one possible outcome. Neither is wholly positive, but that doesn’t make the choice any less stark. And if you think this sounds farfetched, bookmark this post, and read it again in about six years.
