Alec Nevala-Lee

Thoughts on art, creativity, and the writing life.

The Big One

In a heartfelt appreciation of the novelist Philip Roth, who died earlier this week, the New York Times critic Dwight Garner describes him as “the last front-rank survivor of a generation of fecund and authoritative and, yes, white and male novelists…[that] included John Updike, Norman Mailer and Saul Bellow.” These four names seem fated to be linked together for as long as any of them is still read and remembered, and they’ve played varying roles in my own life. I was drawn first to Mailer, who for much of my adolescence was my ideal of what a writer should be, less because of his actual fiction than thanks to my repeated readings of the juiciest parts of Peter Manso’s oral biography. (If you squint hard and think generously, you can even see Mailer’s influence in the way I’ve tried to move between fiction and nonfiction, although in both cases it was more a question of survival.) Updike, my favorite, was a writer I discovered after college. I agree with Garner that he probably had the most “sheer talent” of them all, and he represents my current model, much more than Mailer, of an author who could apparently do anything. Bellow has circled in and out of my awareness over the years, and it’s only recently that I’ve started to figure out what he means to me, in part because of his ambiguous status as a subject of biography. And Roth was the one I knew least. I’d read Portnoy’s Complaint and one or two of the Zuckerman novels, but I always felt guilty over having never gotten around to such late masterpieces as American Pastoral—although the one that I should probably check out first these days is The Plot Against America.

Yet I’ve been thinking about Roth for about as long as I’ve wanted to be a writer, largely because he came as close as anyone ever could to having the perfect career, apart from the lack of the Nobel Prize. He won the National Book Award for his debut at the age of twenty-six; he had a huge bestseller at an age when he was properly equipped to enjoy it; and he closed out his oeuvre with a run of major novels that critics seemed to agree were among the best that he, or anyone, had ever written. (As Garner nicely puts it: “He turned on the afterburners.”) But he never seemed satisfied by his achievement, which you can take as an artist’s proper stance toward his work, a reflection of the fleeting nature of such rewards, a commentary on the inherent bitterness of the writer’s life, or all of the above. Toward the end of his career, Roth actively advised young writers not to become novelists, and in his retirement announcement, which he delivered almost casually to a French magazine, he quoted Joe Louis: “I did the best I could with what I had.” A month later, in an interview with Charles McGrath of the New York Times, he expanded on his reasoning:

I know I’m not going to write as well as I used to. I no longer have the stamina to endure the frustration. Writing is frustration—it’s daily frustration, not to mention humiliation. It’s just like baseball: you fail two-thirds of the time…I can’t face any more days when I write five pages and throw them away. I can’t do that anymore…I knew I wasn’t going to get another good idea, or if I did, I’d have to slave over it.

And on his computer, he posted a note that gave him strength when he looked at it each day: “The struggle with writing is over.”

Roth’s readers, of course, rarely expressed the same disillusionment, and he lives most vividly in my mind as a reference point against which other authors could measure themselves. In an interview with The Telegraph, John Updike made one of the most quietly revealing statements that I’ve ever heard from a writer, when asked if he felt that he and Roth were in competition:

Yes, I can’t help but feel it somewhat. Especially since Philip really has the upper hand in the rivalry as far as I can tell. I think in a list of admirable novelists there was a time when I might have been near the top, just tucked under Bellow. But since Bellow died I think Philip has…he’s certainly written more novels than I have, and seems more dedicated in a way to the act of writing as a means of really reshaping the world to your liking. But he’s been very good to have around as far as goading me to become a better writer.

I think about that “list of admirable novelists” all the time, and it wasn’t just a joke. In an excellent profile in The New Yorker, Claudia Roth Pierpont memorably sketched in all the ways in which other writers warily circled Roth. When asked if the two of them were friends, Updike said, “Guardedly,” and Bellow seems to have initially held Roth at arm’s length, until his wife convinced him to give the younger writer a chance. Pierpont concludes of the relationship between Roth and Updike: “They were mutual admirers, wary competitors who were thrilled to have each other in the world to up their game: Picasso and Matisse.”

And they also remind me of another circle of writers whom I know somewhat better. If Bellow, Mailer, Updike, and Roth were the Big Four of the literary world, they naturally call to mind the Big Three of science fiction—Heinlein, Asimov, and Clarke. In each case, the group’s members were perfectly aware of how exceptional they were, and they carefully guarded their position. (Once, in a conference call with the other two authors, Asimov jokingly suggested that one of them should die to make room for their successors. Heinlein responded: “Fuck the other writers!”) Clarke and Asimov seem to have been genuinely “thrilled to have each other in the world,” but their relationship with the third point of the triangle was more fraught. Toward the end, Asimov started to “avoid” the combative Heinlein, who had a confrontation with Clarke over the Strategic Defense Initiative that effectively ended their friendship. In public, they remained cordial, but you can get a hint of their true feelings in a remarkable passage from the memoir I. Asimov:

[Clarke] and I are now widely known as the Big Two of science fiction. Until early 1988, as I’ve said, people spoke of the Big Three, but then Arthur fashioned a little human figurine of wax and with a long pin— At least, he has told me this. Perhaps he’s trying to warn me. I have made it quite plain to him, however, that if he were to find himself the Big One, he would be very lonely. At the thought of that, he was affected to the point of tears, so I think I’m safe.

As it turned out, Clarke, like Roth, outlived all the rest, and perhaps they felt lonely in the end. Longevity can amount to a kind of victory in itself. But it must be hard to be the Big One.

From Montgomery to Bilbao

On August 16, 2016, the Equal Justice Initiative, a legal rights organization, unveiled its plans for the National Memorial for Peace and Justice, which would be constructed in Montgomery, Alabama. Today, less than two years later, it opens to the public, and the timing could hardly seem more appropriate, in ways that even those who conceived of it might never have imagined. As Campbell Robertson writes for the New York Times:

At the center is a grim cloister, a walkway with eight hundred weathered steel columns, all hanging from a roof. Etched on each column is the name of an American county and the people who were lynched there, most listed by name, many simply as “unknown.” The columns meet you first at eye level, like the headstones that lynching victims were rarely given. But as you walk, the floor steadily descends; by the end, the columns are all dangling above, leaving you in the position of the callous spectators in old photographs of public lynchings.

And the design represents a breakthrough in more ways than one. As the critic Philip Kennicott points out in the Washington Post: “Even more remarkable, this memorial…was built on a budget of only $15 million, in an age when major national memorials tend to cost $100 million and up.”

Of course, if the memorial had been more costly, it might not exist at all, and certainly not with the level of independence and the clear point of view that it expresses. Yet if there’s one striking thing about the coverage of the project, it’s the absence of the name of any one architect or designer. Neither of these two words even appears in the Times article, and in the Post, we only read that the memorial was “designed by [Equal Justice Initiative founder Bryan] Stevenson and his colleagues at EJI in collaboration with the Boston-based MASS Design Group.” When you go to the latter’s official website, twelve people are credited as members of the project design team. This is markedly different from the way in which we tend to talk about monuments, museums, and other architectural works that are meant to invite our attention. In many cases, the architect’s identity is a selling point in itself, as it invariably is with Frank Gehry, whose involvement in a project like the Guggenheim Museum Bilbao is consciously intended to rejuvenate an entire city. In Montgomery, by contrast, the designer is essentially anonymous, or part of a collaboration, which seems like an aesthetic choice as conscious as the design of the space itself. The individual personality of the architect departs, leaving the names and events to testify on their own behalf. Which is exactly as it should be.

And it’s hard not to compare this to the response to the design of the Vietnam Veterans Memorial in 1981. The otherwise excellent documentary by Ken Burns and Lynn Novick alludes to the firestorm that it caused, but it declines to explore how much of the opposition was personal in nature. As James Reston, Jr. writes in the definitive study A Rift in the Earth:

After Maya Lin’s design was chosen and announced, the public reaction was intense. Letters from outraged veterans poured into the Memorial Fund office. One claimed that Lin’s design had “the warmth and charm of an Abyssinian dagger.” “Nihilistic aesthetes” had chosen it…Predictably, the names of incendiary antiwar icons, Jane Fonda and Abbie Hoffman, were invoked as cheering for a design that made a mockery of the Vietnam dead…As for the winner with Chinese ancestry, [donor H. Ross] Perot began referring to her as “egg roll.”

If anything, the subject matter of the National Memorial for Peace and Justice is even more fraught, and the decision to place the designers in the background seems partially intended to focus the conversation on the museum itself, and not on those who made it.

Yet there’s a deeper lesson here about architecture and its creators. At first, you might think that a building with a singular message would need to arise from—or be identified with—an equally strong personality, but if anything, the trend in recent years has gone the other way. As Reinier de Graaf notes in Four Walls and a Roof, one of the more curious developments over the last few decades is the way in which celebrity architects, like Frank Gehry, have given up much of their own autonomy for the sake of unusual forms that no human hand or brain could properly design:

In partially delegating the production of form to the computer, the antibox has seemingly boosted the production of extravagant shapes beyond any apparent limits. What started as a deliberate meditation on the notion of form in the early antiboxes has turned into a game of chance. Authorship has become relative: with creation now delegated to algorithms, the antibox’s main delight is the surprise it causes to the designers.

Its opposite number is the National Memorial for Peace and Justice, which was built with simple materials and techniques that rely for their impact entirely on the insight, empathy, and ingenuity of the designer, who then quietly fades away. The architect can afford to disappear, because the work speaks for those who are unable to speak for themselves. And that might be the most powerful message of all.

Who Needs the Kwik-E-Mart?

Who needs the Kwik-E-Mart?
Now here’s the tricky part…

“Homer and Apu”

On October 8, 1995, The Simpsons aired the episode “Bart Sells His Soul,” which still hasn’t stopped rattling around in my brain. (A few days ago, my daughter asked: “Daddy, what’s the soul?” I may have responded with some variation on Lisa’s words: “Whether or not the soul is physically real, it’s the symbol of everything fine inside us.” On a more typical morning, though, I’m likely to mutter to myself: “Remember Alf? He’s back—in pog form!”) It’s one of the show’s finest installments, but it came close to being about something else entirely. On the commentary track for the episode, the producer Bill Oakley recalls:

There’s a few long-lived ideas that never made it. One of which is David Cohen’s “Homer the Narcoleptic,” which we’ve mentioned on other tracks. The other one was [Greg Daniels’s] one about racism in Springfield. Do you remember this? Something about Homer and Dr. Hibbert? Well, you pitched it several times and I think we were just…It was some exploration of the concept of race in Springfield, and we just said, you know, we don’t think this is the forum. The Simpsons can’t be the right forum to deal with racism.

Daniels—who went on to create Parks and Recreation and the American version of The Office—went with the pitch for “Bart Sells His Soul” instead, and the other premise evidently disappeared forever, including from his own memory. When Oakley brings it up, Daniels only asks: “What was it?”

Two decades later, The Simpsons has yet to deal with race in any satisfying way, even when the issue seems unavoidable. Last year, the comedian Hari Kondabolu released the documentary The Problem With Apu, which explores the complicated legacy of one of the show’s most prominent supporting characters. On Sunday, the show finally saw fit to respond to these concerns directly, and the results weren’t what anyone—apart perhaps from longtime showrunner Al Jean—might have wanted. As Sopan Deb of the New York Times describes it:

The episode, titled “No Good Read Goes Unpunished,” featured a scene with Marge Simpson sitting in bed with her daughter Lisa, reading a book called “The Princess in the Garden,” and attempting to make it inoffensive for 2018. At one point, Lisa turns to directly address the TV audience and says, “Something that started decades ago and was applauded and inoffensive is now politically incorrect. What can you do?” The shot then pans to a framed picture of Apu at the bedside with the line, “Don’t have a cow!” inscribed on it. Marge responds: “Some things will be dealt with at a later date.” Followed by Lisa saying, “If at all.”

Kondabolu responded on Twitter: “This is sad.” And it was. As Linda Holmes of NPR aptly notes: “Apu is not appearing in a fifty-year-old book by a now-dead author. Apu is a going concern. Someone draws him, over and over again.” And the fact that the show decided to put these words into the mouth of Lisa Simpson, whose importance to viewers everywhere was recently underlined, makes it doubly disappointing.

But there’s one obvious change that The Simpsons could make, and while it wouldn’t be perfect, it would be a step in the right direction. If the role of Apu were recast with an actor of South Asian descent, it might not be enough in itself, but I honestly can’t see a downside. Hank Azaria would still be allowed to voice dozens of characters. Even if Apu sounded slightly different than before, this wouldn’t be unprecedented—Homer’s voice changed dramatically after the first season, and Julie Kavner’s work as Marge is noticeably more gravelly than it used to be. Most viewers who are still watching probably wouldn’t even notice, and the purists who might object undoubtedly left a long time ago. It would allow the show to feel newsworthy again, and not just on account of another gimmick. And even if we take this argument to its logical conclusion and ask that Carl, Officer Lou, Akira, Bumblebee Man, and all the rest be voiced by actors of the appropriate background, well, why not? (The show’s other most prominent minority character, Dr. Hibbert, seems to be on his way out for other reasons, and he evidently hasn’t appeared in almost two years.) For a series that has systematically undermined its own legacy in every conceivable way out of little more than boredom, it seems shortsighted to cling to the idea that Azaria is the only possible Apu. And even if it leaves many issues unresolved on the writing level, it also seems like a necessary precondition for change. At this late date, there isn’t much left to lose.

Of course, if The Simpsons were serious about this kind of effort, we wouldn’t be talking about its most recent episode at all. And the discussion is rightly complicated by the fact that Apu—like everything else from the show’s golden age—was swept up in the greatness of those five or six incomparable seasons. Before that unsuccessful pitch on race in Springfield, Greg Daniels was credited for “Homer and Apu,” which deserves to be ranked among the show’s twenty best episodes, and the week after “Bart Sells His Soul,” we got “Lisa the Vegetarian,” which gave Apu perhaps his finest moment, as he ushered Lisa to the rooftop garden to meet Paul and Linda McCartney. But the fact that Apu was a compelling character shouldn’t argue against further change, but in its favor. And what saddens me the most about the show’s response is that it undermines what The Simpsons, at its best, was supposed to be. It was the cartoon that dared to be richer and more complex than any other series on the air; it had the smartest writers in the world and a network that would leave them alone; it was just plain right about everything; and it gave us a metaphorical language for every conceivable situation. The Simpsons wasn’t just a sitcom, but a vocabulary, and it taught me how to think—or it shaped the way that I do think so deeply that there’s no real distinction to be made. As a work of art, it has quietly fallen short in ways both small and large for over fifteen years, but I was able to overlook it because I was no longer paying attention. It had done what it had to do, and I would be forever grateful. But this week, when the show was given the chance to rise again to everything that was fine inside of it, it faltered. Which only tells me that it lost its soul a long time ago.

The dawn of man

Note: To celebrate the fiftieth anniversary of the release of 2001: A Space Odyssey, which held its premiere on April 2, 1968, I’ll be spending the week looking at various aspects of what remains the greatest science fiction movie ever made.

Almost from the moment that critics began to write about 2001, it became fashionable to observe that the best performance in the movie was by an actor playing a computer. In his review in Analog, for example, P. Schuyler Miller wrote:

The actors, except for the gentle voice of HAL, are thoroughly wooden and uninteresting, and I can’t help wondering whether this isn’t Kubrick’s subtle way of suggesting that the computer is really more “human” than they and fully justified in trying to get rid of them before they louse up an important mission. Someday we may know whether the theme of this part is a Clarke or a Kubrick contribution. I suspect it was the latter…perhaps just because Stanley Kubrick is said to like gadgets.

This criticism is often used to denigrate the other performances or the film’s supposed lack of humanity, but I prefer to take it as a tribute to the work of actor Douglas Rain, Kubrick and Clarke’s script, and the brilliant design of HAL himself. The fact that a computer is the character we remember best isn’t a flaw in the movie, but a testament to its skill and imagination. And as I’ve noted elsewhere, the acting is excellent—it’s just so understated and naturalistic that it seems vaguely incongruous in such spectacular settings. (Compare it to the performances in Destination Moon, for instance, and you see how good Keir Dullea and William Sylvester really are here.)

But I also think that the best performance in 2001 isn’t by Douglas Rain at all, but by Vivian Kubrick, in her short appearance on the phone as Heywood Floyd’s daughter. It’s a curious scene that breaks many of the rules of good storytelling—it doesn’t lead anywhere, it’s evidently designed to do nothing but show off a piece of hardware, and it peters out even as we watch it. The funniest line in the movie may be Floyd’s important message:

Listen, sweetheart, I want you to tell mommy something for me. Will you remember? Well, tell mommy that I telephoned. Okay? And that I’ll try to telephone tomorrow. Now will you tell her that?

But that’s oddly true to life as well. And when I watch the scene today, with a five-year-old daughter of my own, it seems to me that there’s no more realistic little girl in all of movies. (Kubrick shot the scene himself, asking the questions from offscreen, and there’s a revealing moment when the camera rises to stay with Vivian as she stands. This is sometimes singled out as a goof, although there’s no reason why a sufficiently sophisticated video phone wouldn’t be able to track her automatically.) It’s a scene that few other films would have even thought to include, and now that video chat is something that we all take for granted, we can see through the screen to the touchingly sweet girl on the other side. On some level, Kubrick simply wanted his daughter to be in the movie, and you can’t blame him.

At the time, 2001 was criticized as a soulless hunk of technology, but now it seems deeply human, at least compared to many of its imitators. Yesterday in the New York Times, Bruce Handy shared a story from Keir Dullea, who explained why he breaks the glass in the hotel room at the end, just before he comes face to face with himself as an old man:

Originally, Stanley’s concept for the scene was that I’d just be eating and hear something and get up. But I said, “Stanley, let me find some slightly different way that’s kind of an action where I’m reaching—let me knock the glass off, and then in mid-gesture, when I’m bending over to pick it up, let me hear the breathing from that bent-over position.” That’s all. And he says, “Oh, fine. That sounds good.” I just wanted to find a different way to play the scene than blankly hearing something. I just thought it was more interesting.

I love this anecdote, not just because it’s an example of an evocative moment that arose from an actor’s pragmatic considerations, but because it feels like an emblem of the production of the movie as a whole. 2001 remains the most technically ambitious movie of all time, but it was also a project in which countless issues were being figured out on the fly. Every solution was a response to a specific problem, and it covered a dizzying range of challenges—from the makeup for the apes to the air hostess walking upside down—that might have come from different movies entirely.

2001, in short, was made by hand—and it’s revealing that many viewers assume that computers had to be involved, when they didn’t figure in the process at all. (All of the “digital” readouts on the spacecraft, for instance, were individually animated, shot on separate reels of film, and projected onto those tiny screens on set, which staggers me even to think about it. And even after all these years, I still can’t get my head around the techniques behind the Star Gate sequence.) It reminds me, in fact, of another movie that happens to be celebrating an anniversary this year. As a recent video essay pointed out, if the visual effects in Jurassic Park have held up so well, it’s because most of them aren’t digital at all. The majority consist of a combination of practical effects, stop motion, animatronics, raptor costumes, and a healthy amount of misdirection, with computers used only when absolutely necessary. Each solution is targeted at the specific problems presented by a snippet of film that might last just for a few seconds, and it moves so freely from one trick to another that we rarely have a chance to see through it. It’s here, not in A.I., that Spielberg got closest to Kubrick, and it hints at something important about the movies that push the technical aspects of the medium. They’re often criticized for an absence of humanity, but in retrospect, they seem achingly human, if only because of the total engagement and attention that was required for every frame. Most of their successors lack the same imaginative intensity, which is a greater culprit than the use of digital tools themselves. Today, computers are used to create effects that are perfect, but immediately forgettable. And one of the wonderful ironies of 2001 is that it used nothing but practical effects to create a computer that no viewer can ever forget.

The axioms of behavior

Earlier this week, Keith Raniere, the founder of an organization known as Nxivm, was arrested in Mexico, to which he had fled last year in the wake of a devastating investigation published in the New York Times. The article described a shady operation that combined aspects of a business seminar, a pyramid scheme, and a sex cult, with public workshops shading into a “secret sisterhood” that required its members to provide nude photographs or other compromising materials and be branded with Raniere’s initials. (In an email obtained by the Times, Raniere reassured one of his followers: “[It was] not originally intended as my initials but they rearranged it slightly for tribute.”) According to the report, about sixteen thousand people have taken the group’s courses, which are marketed as leading to “greater self-fulfillment by eliminating psychological and emotional barriers,” and some went even further. As the journalist Barry Meier wrote:

Most participants take some workshops, like the group’s “Executive Success Programs,” and resume their lives. But other people have become drawn more deeply into Nxivm, giving up careers, friends and families to become followers of its leader, Keith Raniere, who is known within the group as “Vanguard”…Former members have depicted [Raniere] as a man who manipulated his adherents, had sex with them and urged women to follow near-starvation diets to achieve the type of physique he found appealing.

And it gets even stranger. In 2003, Raniere sued the Cult Education Institute for posting passages from his training materials online. In his deposition for the suit, which was dismissed just last year, Raniere stated:

I discovered I had an exceptional aptitude for mathematics and computers when I was twelve. It was at the age of twelve I read The Second Foundation [sic] by Isaac Asimov and was inspired by the concepts on optimal human communication to start to develop the theory and practice of Rational Inquiry. This practice involves analyzing and optimizing how the mind handles data. It involves mathematical set theory applied in a computer programmatic fashion to processes such as memory and emotion. It also involves a projective methodology that can be used for optimal communication and decision making.

Raniere didn’t mention any specific quotations from Asimov, but they were presumably along the lines of the following, which actually appears in Foundation and Empire, spoken by none other than the Mule:

Intuition or insight or hunch-tendency, whatever you wish to call it, can be treated as an emotion. At least, I can treat it so…The human mind works at low efficiency. Twenty percent is the figure usually given. When, momentarily, there is a flash of greater power it is termed a hunch, or insight, or intuition. I found early that I could induce a continual use of high brain-efficiency. It is a killing process for the person affected, but it is useful.

At this point, one might be tempted to draw parallels to other cults, such as Aum Shinrikyo, that are also said to have taken inspiration from Asimov’s work. In this case, however, the connection to the Foundation series seems tangential at best. A lot of us read science fiction at the golden age of twelve, and while we might be intrigued by psychohistory or mental engineering, few of us take it in the direction that Raniere evidently did. (As one character observes in Umberto Eco’s Foucault’s Pendulum: “People don’t get the idea of going back to burn Troy just because they read Homer.”) In fact, Raniere comes off a lot more like L. Ron Hubbard, at least in the version of himself that he presents in public. In the deposition, he provided an exaggerated account of his accomplishments that will strike those who know Hubbard as familiar:

In 1988, I was accepted into the Mega Society. The requirements to be accepted into the Mega Society were to have a demonstrated IQ of 176…In 1989, I was accepted into the Guinness Book of World Records under the category “Highest IQ.” I also left my position as a Computer Programmer/Analyst and resumed business consulting with the intention to raise money to start the “Life Learning Institute.” At this point in time I became fascinated with how human motivation affected behavior. I started to refine my projective mathematical theory of the human mind to include a motivational behavior equation.

And when Raniere speaks of developing “a set of consistent axioms of how human behavior interfaced with the world,” it’s just a variation on an idea that has been recycled within the genre for decades.

Yet it’s also worth asking why the notion of a “mathematical psychology” appeals to these manipulative personalities, and why many of them have repackaged these ideas so successfully for their followers. You could argue that Raniere—or even Charles Manson—represents the psychotic fringe of an impulse toward transformation that has long been central to science fiction, culminating in the figure of the superman. (It’s probably just a coincidence, but I can’t help noting that two individuals who have been prominently linked with the group, the actresses Kristin Kreuk and Allison Mack, both appeared on Smallville.) And many cults hold out a promise of change for which the genre provides a convenient vocabulary. As Raniere said in his deposition:

In mathematics, all things are proven based on axioms and a step by step systematic construction. Computers work the same way. To program a computer one must first understand the axioms of the computer language, and then the step by step systematic construction of the problem-solution methodology. Finally, one must construct the problem-solution methodology in a step by step fashion using the axioms of the language. I discovered the human mind works the same way and I formalized the process.

This sounds a lot like Hubbard, particularly in the early days of dianetics, in which the influence of cybernetics was particularly strong. But it also represents a limited understanding of what the human mind can be, and it isn’t surprising that it attracts people who see others as objects to be altered, programmed, and controlled. The question of whether such figures as Hubbard or Raniere really buy into their own teachings resists any definitive answer, but one point seems clear enough. Even if they don’t believe it, they obviously wish that it were true.

Life on the last mile

In telecommunications, there’s a concept called “the last mile,” which states that the final leg of a network—the one that actually reaches the user’s home, school or office—is the most difficult and expensive to build. It’s one thing to construct a massive trunkline, which is basically a huge but relatively straightforward feat of engineering, and quite another to deal with the tangle of equipment, wiring, and specifications on the level of thousands of individual households. More recently, the concept has been extended to public transportation, delivery and distribution services, and other fields that depend on connecting an industrial operation on the largest imaginable scale with specific situations on the retail side. (For instance, Amazon has been trying to cross the last mile through everything from its acquisition of Whole Foods to drone delivery, and the fact that these are seen as alternative approaches to the same problem points to how complicated it really is.) This isn’t just a matter of infrastructure, either, but of the difficulties inherent to any system in which a single pipeline has to split into many smaller branches, whether it’s carrying blood, water, mail, or data. Ninety percent of the wiring can be in that last mile, and success lies less in any overall principles than in the irritating particulars. It has to be solved on the ground, rather than in a design document, and you’ll never be able to anticipate all of the obstacles that you’ll face once those connections start to multiply. It’s literally about the ramifications.

I often feel the same way when it comes to writing. When I think back on how I’ve grown as a writer over the last decade or so, I see clear signs of progress. Thanks mostly to the guidelines that David Mamet presents in On Directing Film, it’s much easier for me to write a decent first draft than it was when I began. I rarely leave anything unfinished; I know how to outline and how to cut; and I’m unlikely to make any huge technical mistakes. In his book Which Lie Did I Tell?, William Goldman says something similar about screenwriting:

Stephen Sondheim once said this: “I cannot write a bad song. You begin it here, build, end there. The words will lay properly on the music so they can be sung, that kind of thing. You may hate it, but it will be a proper song.” I sometimes feel that way about my screenplays. I’ve been doing them for so long now, and I’ve attempted most genres. I know about entering the story as late as possible, entering each scene as late as possible, that kind of thing. You may hate it, but it will be a proper screenplay.

Craft, in other words, can take you most of the way—but it’s the final leg that kills you. As Goldman concludes of his initial pass on the script for Absolute Power: “This first draft was proper as hell—you just didn’t give a shit.” And sooner or later, most writers find that they spend most of their time on that last mile.

Like most other art forms, creative writing can indeed be taught—but only to the point that it still resembles an engineering problem. There are a few basic tricks of structure and technique that will improve almost anyone’s work, much like the skills that you learn in art books like Drawing on the Right Side of the Brain, and that kind of advancement can be enormously satisfying. When it comes to the last mile between you and your desired result, however, many of the rules start to seem useless. You aren’t dealing with the general principles that have gotten you this far, but with problems that arise on the level of individual words or sentences, each one of which needs to be tackled on its own. There’s no way of knowing whether or not you’ve made the right choice until you’ve looked at them all in a row, and even if something seems wrong, you may not know how to fix it. The comforting shape of the outline, which can be assembled in a reasonably logical fashion, is replaced by the chaos of the text, and the fact that you’ve done good work on this level before is no guarantee that you can do it right now. I’ve learned a lot about writing over the years, but to the extent that I’m not yet the writer that I want to be, it lies almost entirely in that last mile, where the ideal remains tantalizingly out of reach.

As a result, I end up revising endlessly, even at a late stage, and although the draft always gets better, it never reaches perfection. After a while, you have to decide that it’s as good as it’s going to get, and then move on to something else—which is why it helps to have a deadline. But you can take comfort in the fact that the last mile affects even the best of us. In a recent New York Times profile of the playwright Tony Kushner, Charles McGrath writes:

What makes Angels in America so complicated to stage is not just Mr. Kushner’s need to supervise everything, but that Perestroika, the second part, is to a certain extent a work in progress and may always be. The first part, Millennium Approaches, was already up and running in the spring of 1991, when, with a deadline looming, Mr. Kushner retreated to a cabin in Northern California and wrote most of Perestroika in a feverish eight-day stint, hardly sleeping and living on junk food. He has been tinkering with it ever since…Even during rehearsal last month he was still cutting, rewriting, restructuring.

If Tony Kushner is still revising Angels in America, it makes me feel a little better about spending my life on that last mile. Or as John McPhee says about knowing when to stop: “What I know is that I can’t do any better; someone else might do better, but that’s all I can do; so I call it done.”

Instant karma

Last year, my wife and I bought an Instant Pot. (If you’re already dreading the rest of this post, I promise in advance that it won’t be devoted solely to singing its praises.) If you somehow haven’t encountered one before, it’s basically a programmable pressure cooker. It has a bunch of other functions, including slow cooking and making yogurt, but aside from its sauté setting, I haven’t had a chance to use them yet. At first, I suspected that it would be another appliance, like our bread maker, that we would take out of the box once and then never touch again, but somewhat to my surprise, I’ve found myself using it on a regular basis, and not just as a reliable topic for small talk at parties. Its great virtue is that it allows you to prepare certain tasty but otherwise time-consuming recipes—like the butter chicken so famous that it received its own writeup in The New Yorker—with a minimum of fuss. As I write these lines, my Instant Pot has just finished a batch of soft-boiled eggs, which is its most common function in my house these days, and I might use it tomorrow to make chicken adobo. Occasionally, I’ll be mildly annoyed by its minor shortcomings, such as the fact that an egg set for four minutes at low pressure might have a perfect runny yolk one day and verge on hard-boiled the next. It saves time, but when you add in the waiting period to build and then release the pressure, which isn’t factored into most recipes, it can still take an hour or more to make dinner. But it still marks the most significant step forward in my life in the kitchen since Mark Bittman taught me how to use the broiler more than a decade ago.

My wife hasn’t touched it. In fact, she probably wouldn’t mind if I said that she was scared of the Instant Pot—and she isn’t alone in this. A couple of weeks ago, the Wall Street Journal ran a feature by Ellen Byron titled “America’s Instant-Pot Anxiety,” with multiple anecdotes about home cooks who find themselves afraid of their new appliance:

Missing from the enclosed manual and recipe book is how to fix Instant Pot anxiety. Debbie Rochester, an elementary-school teacher in Atlanta, bought an Instant Pot months ago but returned it unopened. “It was too scary, too complicated,” she says. “The front of the thing has so many buttons.” After Ms. Rochester’s friends kept raving about their Instant Pot meals, she bought another one…Days later, Ms. Rochester began her first beef stew. After about ten minutes of cooking, it was time to release the pressure valve, the step she feared most. Ms. Rochester pulled her sweater over her hand, turned her back and twisted the knob without looking. “I was praying that nothing would blow up,” she says.

Elsewhere, the article quotes Sharon Gebauer of San Diego, who just wanted to make beef and barley soup, only to be filled with sudden misgivings: “I filled it up, started it pressure cooking, and then I started to think, what happens when the barley expands? I just said a prayer and stayed the hell away.”

Not surprisingly, the article has inspired derision from Instant Pot enthusiasts, among whom one common response seems to be: “People are dumb. They don’t read instruction manuals.” Yet I can testify firsthand that the Instant Pot can be intimidating. The manual is thick and not especially organized, and it does a poor job of explaining such crucial features as the steam release and float valve. (I had to watch a video to learn how to handle the former, and I didn’t figure out what the latter was until I had been using the pot for weeks.) But I’ve found that you can safely ignore most of it and fall back on a few basic tricks—as soon as you manage to get through at least one meal. Once I successfully prepared my first dish, my confidence increased enormously, and I barely remember how it felt to be nervous around it. And that may be the single most relevant point about the cult that the Instant Pot has inspired, which rivals the most fervent corners of fan culture. As Kevin Roose noted in a recent article in the New York Times:

A new religion has been born…Its deity is the Instant Pot, a line of electric multicookers that has become an internet phenomenon and inspired a legion of passionate foodies and home cooks. These devotees—they call themselves “Potheads”—use their Instant Pots for virtually every kitchen task imaginable: sautéing, pressure-cooking, steaming, even making yogurt and cheesecakes. Then, they evangelize on the internet, using social media to sing the gadget’s praises to the unconverted.

And when you look at the Instant Pot from a certain angle, you realize that it has all of the qualities required to create a specific kind of fan community. There’s an initial learning curve that’s daunting enough to keep out the casuals, but not so steep that it prevents a critical mass of enthusiasts from forming. Once you learn the basics, you forget how intimidating it seemed when you were on the outside. And it has a huge body of associated lore that discourages newbies from diving in, even if it doesn’t matter much in practice. (In the months that I’ve been using the Instant Pot, I’ve never used anything except the manual pressure and sauté functions, and I’ve disregarded the rest of the manual, just as I draw a blank on pretty much every element of the mytharc on The X-Files.) Most of all, perhaps, it takes something that is genuinely good, but imperfect, and elevates it into an object of veneration. There are plenty of examples in pop culture, from Doctor Who to Infinite Jest, and perhaps it isn’t a coincidence that the Instant Pot has a vaguely futuristic feel to it. A science fiction or fantasy franchise can turn off a lot of potential fans because of its history and complicated externals, even if most are peripheral to the actual experience. Using the Instant Pot for the first time is probably easier than trying to get into Doctor Who, or so I assume—I’ve steered clear of that franchise for many of the same reasons, reasonable or otherwise. There’s nothing wrong with being part of a group drawn together by the shared object of your affection. But once you’re on the inside, it can be hard to put yourself in the position of someone who might be afraid to try it because it has so many buttons.
