Alec Nevala-Lee

Thoughts on art, creativity, and the writing life.

Posts Tagged ‘The New Yorker’

Life on the last mile


In telecommunications, there’s a concept called “the last mile,” which holds that the final leg of a network—the one that actually reaches the user’s home, school, or office—is the most difficult and expensive to build. It’s one thing to construct a massive trunkline, which is basically a huge but relatively straightforward feat of engineering, and quite another to deal with the tangle of equipment, wiring, and specifications on the level of thousands of individual households. More recently, the concept has been extended to public transportation, delivery and distribution services, and other fields that depend on connecting an industrial operation on the largest imaginable scale with specific situations on the retail side. (For instance, Amazon has been trying to cross the last mile through everything from its acquisition of Whole Foods to drone delivery, and the fact that these are seen as alternative approaches to the same problem points to how complicated it really is.) This isn’t just a matter of infrastructure, either, but of the difficulties inherent to any system in which a single pipeline has to split into many smaller branches, whether it’s carrying blood, water, mail, or data. Ninety percent of the wiring can be in that last mile, and success lies less in any overall principles than in the irritating particulars. It has to be solved on the ground, rather than in a design document, and you’ll never be able to anticipate all of the obstacles that you’ll face once those connections start to multiply. It’s literally about the ramifications.

I often feel the same way when it comes to writing. When I think back on how I’ve grown as a writer over the last decade or so, I see clear signs of progress. Thanks mostly to the guidelines that David Mamet presents in On Directing Film, it’s much easier for me to write a decent first draft than it was when I began. I rarely leave anything unfinished; I know how to outline and how to cut; and I’m unlikely to make any huge technical mistakes. In his book Which Lie Did I Tell?, William Goldman says something similar about screenwriting:

Stephen Sondheim once said this: “I cannot write a bad song. You begin it here, build, end there. The words will lay properly on the music so they can be sung, that kind of thing. You may hate it, but it will be a proper song.” I sometimes feel that way about my screenplays. I’ve been doing them for so long now, and I’ve attempted most genres. I know about entering the story as late as possible, entering each scene as late as possible, that kind of thing. You may hate it, but it will be a proper screenplay.

Craft, in other words, can take you most of the way—but it’s the final leg that kills you. As Goldman concludes of his initial pass on the script for Absolute Power: “This first draft was proper as hell—you just didn’t give a shit.” And sooner or later, most writers find that they spend most of their time on that last mile.

Like most other art forms, creative writing can indeed be taught—but only to the point that it still resembles an engineering problem. There are a few basic tricks of structure and technique that will improve almost anyone’s work, much like the skills that you learn in art books like Drawing on the Right Side of the Brain, and that kind of advancement can be enormously satisfying. When it comes to the last mile between you and your desired result, however, many of the rules start to seem useless. You aren’t dealing with the general principles that have gotten you this far, but with problems that arise on the level of individual words or sentences, each one of which needs to be tackled on its own. There’s no way of knowing whether or not you’ve made the right choice until you’ve looked at them all in a row, and even if something seems wrong, you may not know how to fix it. The comforting shape of the outline, which can be assembled in a reasonably logical fashion, is replaced by the chaos of the text, and the fact that you’ve done good work on this level before is no guarantee that you can do it right now. I’ve learned a lot about writing over the years, but to the extent that I’m not yet the writer that I want to be, it lies almost entirely in that last mile, where the ideal remains tantalizingly out of reach.

As a result, I end up revising endlessly, even at a late stage, and although the draft always gets better, it never reaches perfection. After a while, you have to decide that it’s as good as it’s going to get, and then move on to something else—which is why it helps to have a deadline. But you can take comfort in the fact that the last mile affects even the best of us. In a recent New York Times profile of the playwright Tony Kushner, Charles McGrath writes:

What makes Angels in America so complicated to stage is not just Mr. Kushner’s need to supervise everything, but that Perestroika, the second part, is to a certain extent a work in progress and may always be. The first part, Millennium Approaches, was already up and running in the spring of 1991, when, with a deadline looming, Mr. Kushner retreated to a cabin in Northern California and wrote most of Perestroika in a feverish eight-day stint, hardly sleeping and living on junk food. He has been tinkering with it ever since…Even during rehearsal last month he was still cutting, rewriting, restructuring.

If Tony Kushner is still revising Angels in America, it makes me feel a little better about spending my life on that last mile. Or as John McPhee says about knowing when to stop: “What I know is that I can’t do any better; someone else might do better, but that’s all I can do; so I call it done.”

The Indian Project


A few weeks ago, the art critic Peter Schjeldahl published a rave review in The New Yorker of an exhibition at the National Museum of the American Indian in Washington, D.C. Writing of “Americans,” which focuses on the depiction of Native Americans in popular culture, Schjeldahl concluded:

The project gains drama, and a degree of peril, from occurring in the tax-funded Mall museum that is physically the nearest to the Capitol Building. Absent any correct attitude or even argument on offer, viewers will be thrown back on their own assumptions, if they think about them—and I expect that many will. The show’s disarming sweetness and its bracing challenge come down to the same thing: a Whitmanesque idea of what Americanness means not only involving Indians but as a possible solvent of antagonisms past, present, and fated.

Elsewhere, Schjeldahl praised the essay collection Everything You Know About Indians is Wrong by Paul Chaat Smith, one of the show’s curators, as “one of my favorite books of recent years.” This inspired me to check it out myself, and I read it from cover to cover over the span of about a week, despite the fact that I had plenty of other work to do—and I agree that it’s pretty great. It’s a perceptive, often funny book that, as Schjeldahl said, “make[s] me feel smart,” and Smith gets bonus points from me for his ability to quote freely from the Pet Shop Boys. But it also made me deeply uneasy, just as “Americans” is evidently meant to do, and in a way that seems important to value and preserve at the cultural moment in which we’ve all found ourselves.

Smith’s most memorable essay, by far, is “Americans Without Tears,” which opens with a startling assertion: “Generally speaking, white people who are interested in Indians are not very bright. Generally speaking, white people who take an active interest in Indians, who travel to visit Indians and study Indians, who seek to help Indians, are even more not very bright. I theorize that in the case of white North Americans, the less interest they have in Indians, the more likely it is that one (and here I mean me or another Indian person) could have an intelligent conversation with them.” He qualifies this at once by saying that there are plenty of exceptions, and his real point is one that implicates all of us:

I further theorize that, generally speaking, smart white people realize early on, probably even as children, that the whole Indian thing is an exhausting, dangerous, and complicated snake pit of lies. And that the really smart ones somehow intuit that these lies are mysteriously and profoundly linked to the basic construction of the reality of daily life, now and into the foreseeable future. And without it ever quite being a conscious thought, these intelligent white people come to understand that there is no percentage, none, in considering the Indian question, and so the acceptable result is to, at least subconsciously, acknowledge that everything they are likely to learn about Indians in school, from books and movies and television programs, from dialogue with Indians, from Indian art and stories, from museum exhibits about Indians, is probably going to be crap, so they should be avoided.

This observation would have stuck in my head in any case, but I soon found myself living it out in practice. In the very next essay in the book, Smith quotes the artist Jimmie Durham: “Europe is an Indian project.” This essentially means that if we acknowledge that the European discovery of the Americas is the pivotal event of the last thousand years, and I think that it is, Native Americans can only be at the center of it. I loved this observation, but when I dug deeper, I found a thicket of issues that were, well, exhausting and complicated. Last summer, in response to a major retrospective of Durham’s work, a group of Cherokee artists wrote in a scathing editorial: “Jimmie Durham is not a Cherokee in any legal or cultural sense. This is not a small matter of paperwork but a fundamental matter of tribal self-determination and self-governance. Durham has no Cherokee relatives; he does not live in or spend time in Cherokee communities; he does not participate in dances and does not belong to a ceremonial ground.” Durham, for his part, claims to be a quarter Cherokee, and an exhaustive look at his genealogy leaves the question, at best, unresolved. But there are questions of identity here that can’t be easily dismissed either way. In an excellent article on the dispute, Michael Slenske of Vulture quotes Gene Norris, the senior genealogist at the Cherokee Heritage Center in Oklahoma:

Almost every one of the recognized tribes of the United States use a federally mandated document called a base roll, so it’s a paper trail, it’s not genes and chromosomes flowing through your blood kind of evidence. The records that we use for genealogy are not meant for genealogy. They are very ambiguous and very incomplete and imprecise but that’s all we have to go by. That’s why records don’t reflect everything.

It’s impossible to cover this story, in other words, without wading into problems of tribal enrollment and identity that can’t be adequately grasped in ten minutes of research, so maybe the prudent course of action is to keep out of it altogether. (Even Smith, a longtime fan of Durham’s, declined to be interviewed by Vulture, which quotes his sly line from the catalog of the artist’s work: “Jimmie Durham is an Indian project.”) But maybe that confusion—or what Schjeldahl calls the absence of “any correct attitude”—is exactly the place from which an honest discussion of countless other issues has to begin, which is something we tend to forget. At a time in which so many debates are inflamed by what seems like utter certainty on both sides, Everything You Know About Indians is Wrong is like a dispatch from an aspect of American culture that remains unresolved because it’s largely unseen, unless it’s the other way around. When you read it and then turn to the opinion page of the New York Times, you feel the shock of newness being restored to problems to which we’ve long since become settled and numb. Smith writes:    

Because of the centrality of the Indian experience, and because of the particular place of privilege white people inhabit in relationship to that experience, many whites have only a few choices. They can become “interested in Indians” and completely ignore that centrality; they can recognize the centrality but shy away from engaging the issue because it’s all too complicated (the smarter ones); or, if they’re both smart and brave, they can honestly engage in dialogue.

Replace “the Indian experience” with your social cause of choice—or, even better, keep it exactly where it is—and you’re left with the indispensable starting point from which real understanding has to emerge. And what Smith says about the resulting dialogue is something that we should keep in mind: “This isn’t only subversive, it’s really difficult. Few can do it at all; hardly anybody knows how to do it well.”

The Potion of Circe


Daniel Ellsberg

Note: I’m taking a few days off, so I’ll be republishing some of my favorite pieces from earlier in this blog’s run. This post originally appeared, in a slightly different form, on December 29, 2016.

In 1968, Daniel Ellsberg, the military analyst who would later become famous for leaking the Pentagon Papers, had a meeting with Henry Kissinger. At the time, Kissinger had spent most of his career as a consultant and an academic, and he was about to enter government service—as the National Security Advisor to Richard Nixon—for the first time. (Their conversation is described in Ellsberg’s memoir Secrets, and I owe my own discovery of it to a surprisingly fine article on Ellsberg and Edward Snowden by Malcolm Gladwell in The New Yorker.) Ellsberg, who had been brought in for a discussion about the Vietnam War, had a word of advice for Kissinger. He said:

Henry, there’s something I would like to tell you, for what it’s worth, something I wish I had been told years ago. You’ve been a consultant for a long time, and you’ve dealt a great deal with top secret information. But you’re about to receive a whole slew of special clearances, maybe fifteen or twenty of them, that are higher than top secret…I have a pretty good sense of what the effects of receiving these clearances are on a person who didn’t previously know they even existed. And the effects of reading the information that they will make available to you.

At first, Ellsberg said, Kissinger would feel “exhilarated” at having access to so much information. But he cautioned: 

Second, almost as fast, you will feel like a fool for having studied, written, talked about these subjects, criticized and analyzed decisions made by presidents for years without having known of the existence of all this information, which presidents and others had and you didn’t, and which must have influenced their decisions in ways you couldn’t even guess…You will feel like a fool, and that will last for about two weeks. Then, after you’ve started reading all this daily intelligence input and become used to using what amounts to whole libraries of hidden information…you will forget there ever was a time when you didn’t have it, and you’ll be aware only of the fact that you have it now and most others don’t…and that all those other people are fools.

Over a longer period of time—not too long, but a matter of two or three years—you’ll eventually become aware of the limitations of this information…In the meantime it will have become very hard for you to learn from anybody who doesn’t have these clearances. Because you’ll be thinking as you listen to them: ‘What would this man be telling me if he knew what I know?”

Henry Kissinger and Richard Nixon

After a while, Ellsberg concluded, this “mental exercise” would become so tortuous that Kissinger might cease to pay attention altogether: “The danger is, you’ll become something like a moron. You’ll become incapable of learning from most people in the world, no matter how much experience they may have in their particular areas that may be much greater than yours.” Ellsberg compared this sort of secret information to the potion of Circe, which turned Odysseus’s men into swine and left them incapable of working with other humans. And it’s a warning worth bearing in mind even for those of us who don’t have access to classified intelligence. Ellsberg’s admonition is really about distinguishing between raw information—which can be acquired with nothing but patience, money, or the right clearances—and the more elusive quality of insight. It applies to everyone who has ever wound up with more facts on a specific subject than anybody else he or she knows, which is just as true of writers of theses, research papers, and works of nonfiction as it is of government advisors. In researching my book Astounding, for instance, I’ve seen thousands of pages of letters and other documents that very few other living people have studied. They aren’t classified, but they’re hard to obtain and inconvenient to read, and I’m reasonably sure that I’m the only person in recent years who has tried to absorb them in their entirety. But a lot of other people could have done it. I didn’t have to be smart: I just had to be willing to reach out to the right librarians, sit in a chair for long periods, stare at a microfilm reader, and take decent notes.

There’s something to be said, of course, for being the one who actually goes out and does it. And there’s a sense in which this kind of drudgery is an indispensable precursor to insight: you’re more likely to come up with something worthwhile if you’ve mined the ore yourself, and there’s a big difference between taking the time to unearth it personally and having it handed to you. Reading a hundred grainy pages to discover the one fact you need isn’t the same thing as finding it on Wikipedia. It’s necessary, if not sufficient, and as Ellsberg notes, the “moron” stage is one that everyone needs to pass through in order to emerge on the other side. (A lot of us are also feeling nostalgic these days for the kind of government moron whom Ellsberg describes, who at least respected the information he had, rather than ignoring or dismissing any data that didn’t suit his political needs.) But it’s important to draw a line between the kind of expertise that accumulates steadily as a function of time—which any good drudge can acquire—and the kind that builds up erratically through thought and experience. It’s obvious in other people, but it can be hard to see it in ourselves. For long stretches, we’ll have acquired just enough knowledge to be dangerous, and we can only hope that we won’t do any lasting damage. And even if we’ve been warned, it’s a lesson that has to be learned firsthand. As Ellsberg ends the story: “Kissinger hadn’t interrupted this long warning…He seemed to understand that it was heartfelt, and he didn’t take it as patronizing, as I’d feared. But I knew it was too soon for him to appreciate fully what I was saying. He didn’t have the clearances yet.”


This post has no title


In John McPhee’s excellent new book on writing, Draft No. 4, which I mentioned here the other day, he shares an anecdote about his famous profile of the basketball player Bill Bradley. McPhee was going over a draft with William Shawn, the editor of The New Yorker, “talking three-two zones, blind passes, reverse pivots, and the setting of picks,” when he realized that he had overlooked something important:

For some reason—nerves, what else?—I had forgotten to find a title before submitting the piece. Editors of every ilk seem to think that titles are their prerogative—that they can buy a piece, cut the title off the top, and lay on one of their own. When I was young, this turned my skin pink and caused horripilation. I should add that I encountered such editors almost wholly at magazines other than The New Yorker—Vogue, Holiday, the Saturday Evening Post. The title is an integral part of a piece of writing, and one of the most important parts, and ought not to be written by anyone but the writer of what follows the title. Editors’ habit of replacing an author’s title with one of their own is like a photo of a tourist’s head on the cardboard body of Mao Zedong. But the title missing on the Bill Bradley piece was my oversight. I put no title on the manuscript. Shawn did. He hunted around in the text and found six words spoken by the subject, and when I saw the first New Yorker proof the piece was called “A Sense of Where You Are.”

The dynamic that McPhee describes at other publications still exists today—I’ve occasionally bristled at the titles that have appeared over the articles that I’ve written, which is a small part of the reason that I’ve moved most of my nonfiction onto this blog. (The freelance market also isn’t what it used to be, but that’s a subject for another post.) But a more insidious factor has invaded even the august halls of The New Yorker, and it has nothing to do with the preferences of any particular editor. Opening the most recent issue, for instance, I see that there’s an article by Jia Tolentino titled “Safer Spaces.” On the magazine’s website, it becomes “Is There a Smarter Way to Think About Sexual Assault on Campus?”, with a line at the bottom noting that it appears in the print edition under its alternate title. Joshua Rothman’s “Jambusters” becomes “Why Paper Jams Persist.” A huge piece by David Grann, “The White Darkness,” which seems destined to get optioned for the movies, earns slightly more privileged treatment, and it merely turns into “The White Darkness: A Journey Across Antarctica.” But that’s the exception. When I go back to the previous issue, I find that the same pattern holds true. Michael Chabon’s “The Recipe for Life” is spared, but David Owen’s “The Happiness Button” is retitled “Customer Satisfaction at the Push of a Button,” Rachel Aviv’s “The Death Debate” becomes “What Does It Mean to Die?”, and Ian Frazier’s “Airborne” becomes “The Trippy, High-Speed World of Drone Racing.” Which suggests to me that if McPhee’s piece appeared online today, it would be titled something like “Basketball Player Bill Bradley’s Sense of Where He Is.” And that’s if he were lucky.

The reasoning here isn’t a mystery. Headlines are written these days to maximize clicks and shares, and The New Yorker isn’t immune, even if it sometimes raises an eyebrow. Back in 2014, Maria Konnikova wrote an article for the magazine’s website titled “The Six Things That Make Stories Go Viral Will Amaze, and Maybe Infuriate, You,” in which she explained one aspect of the formula for online headlines: “The presence of a memory-inducing trigger is also important. We share what we’re thinking about—and we think about the things we can remember.” Viral headlines can’t be allusive, make a clever play on words, or depend on an evocative reference—they have to spell everything out. (To build on McPhee’s analogy, it’s less like a tourist’s face on the cardboard body of Mao Zedong than an oversized foam head of Mao himself.) A year later, The New Yorker ran an article by Andrew Marantz on the virality expert Emerson Spartz, and it amazed and maybe infuriated me. I’ve written about this profile elsewhere, but looking it over again now, my eye was caught by these lines:

Much of the company’s success online can be attributed to a proprietary algorithm that it has developed for “headline testing”—a practice that has become standard in the virality industry…Spartz’s algorithm measures which headline is attracting clicks most quickly, and after a few hours, when a statistically significant threshold is reached, the “winning” headline automatically supplants all others. “I’m really, really good at writing headlines,” he told me.

And it’s worth noting that while Marantz’s piece appeared in print as “The Virologist,” in an online search, it pops up as “King of Clickbait.” Even as the magazine gently mocked Spartz, it took his example to heart.
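As a purely illustrative aside, here is a minimal Python sketch of what headline testing of the sort Marantz describes might look like in miniature: candidate headlines are served at random, clicks are tallied, and the “winning” headline locks in once it beats the runner-up at a chosen significance level. Every name, threshold, and statistical choice below is my own assumption for the sake of the example, not anything Spartz or The New Yorker actually uses.

import math
import random

class HeadlineTest:
    """A toy headline test: serve candidates at random until one 'wins.'"""

    def __init__(self, headlines, alpha=0.05):
        # Assumes at least two candidate headlines; alpha is the (arbitrary)
        # significance threshold at which a winner is declared.
        self.stats = {h: {"shown": 0, "clicked": 0} for h in headlines}
        self.alpha = alpha
        self.winner = None

    def serve(self):
        # Show the winning headline if one has been decided, else pick at random.
        if self.winner:
            return self.winner
        return random.choice(list(self.stats))

    def record(self, headline, clicked):
        # Log one impression and whether it drew a click, then re-test.
        self.stats[headline]["shown"] += 1
        self.stats[headline]["clicked"] += int(clicked)
        self._check_for_winner()

    def _check_for_winner(self):
        # Rank headlines by click-through rate and compare the top two with a
        # one-sided two-proportion z-test.
        ranked = sorted(self.stats.items(),
                        key=lambda kv: kv[1]["clicked"] / max(kv[1]["shown"], 1),
                        reverse=True)
        (top, a), (_, b) = ranked[0], ranked[1]
        if a["shown"] < 100 or b["shown"] < 100:
            return  # not enough impressions yet
        p1 = a["clicked"] / a["shown"]
        p2 = b["clicked"] / b["shown"]
        pooled = (a["clicked"] + b["clicked"]) / (a["shown"] + b["shown"])
        se = math.sqrt(pooled * (1 - pooled) * (1 / a["shown"] + 1 / b["shown"]))
        if se == 0:
            return
        z = (p1 - p2) / se
        p_value = 0.5 * math.erfc(z / math.sqrt(2))  # one-sided P(Z > z)
        if p_value < self.alpha:
            self.winner = top  # the "winning" headline supplants all others

In use, you would construct it with the candidate titles, call serve() for each page view, and feed each impression back through record(); the hundred-impression floor and the z-test are stand-ins for whatever “statistically significant threshold” the real algorithm applies.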

None of this is exactly scandalous, but when you think of a title as “an integral part of a piece of writing,” as McPhee does, it’s undeniably sad. There isn’t any one title for an article anymore, and most readers will probably only see its online incarnation. And this isn’t because of an editor’s tastes, but the result of an impersonal set of assumptions imposed on the entire industry. Emerson Spartz got his revenge on The New Yorker—he effectively ended up writing its headlines. And while I can’t blame any media company for doing whatever it can to stay viable, it’s also a real loss. McPhee is right when he says that selecting a title is an important part of the process, and in a perfect world, it would be left up to the writer. (It can even lead to valuable insights in itself. When I was working on my article on the fiction of L. Ron Hubbard, I was casting about randomly for a title when I came up with “Xenu’s Paradox.” I didn’t know what it meant, but it led me to start thinking about the paradoxical aspects of Hubbard’s career, and the result was a line of argument that ended up being integral not just to the article, but to the ensuing book. And I was amazed when it survived intact on Longreads.) When you look at the grindingly literal, unpoetic headlines that currently populate the homepage of The New Yorker, it’s hard not to feel nostalgic for an era in which an editor might nudge a title in the opposite direction. In 1966, when McPhee delivered a long piece on oranges in Florida, William Shawn read it over, focused on a quotation from the poet Andrew Marvell, and called it “Golden Lamps in a Green Night.” McPhee protested, and the article was finally published under the title that he had originally wanted. It was called “Oranges.”


Instant karma


Last year, my wife and I bought an Instant Pot. (If you’re already dreading the rest of this post, I promise in advance that it won’t be devoted solely to singing its praises.) If you somehow haven’t encountered one before, it’s basically a programmable pressure cooker. It has a bunch of other functions, including slow cooking and making yogurt, but aside from its sauté setting, I haven’t had a chance to use them yet. At first, I suspected that it would be another appliance, like our bread maker, that we would take out of the box once and then never touch again, but somewhat to my surprise, I’ve found myself using it on a regular basis, and not just as a reliable topic for small talk at parties. Its great virtue is that it allows you to prepare certain tasty but otherwise time-consuming recipes—like the butter chicken so famous that it received its own writeup in The New Yorker—with a minimum of fuss. As I write these lines, my Instant Pot has just finished a batch of soft-boiled eggs, which is its most common function in my house these days, and I might use it tomorrow to make chicken adobo. Occasionally, I’ll be mildly annoyed by its minor shortcomings, such as the fact that an egg set for four minutes at low pressure might have a perfect runny yolk one day and verge on hard-boiled the next. It saves time, but when you add in the waiting period to build and then release the pressure, which isn’t factored into most recipes, it can still take an hour or more to make dinner. But it still marks the most significant step forward in my life in the kitchen since Mark Bittman taught me how to use the broiler more than a decade ago.

My wife hasn’t touched it. In fact, she probably wouldn’t mind if I said that she was scared of the Instant Pot—and she isn’t alone in this. A couple of weeks ago, the Wall Street Journal ran a feature by Ellen Byron titled “America’s Instant-Pot Anxiety,” with multiple anecdotes about home cooks who find themselves afraid of their new appliance:

Missing from the enclosed manual and recipe book is how to fix Instant Pot anxiety. Debbie Rochester, an elementary-school teacher in Atlanta, bought an Instant Pot months ago but returned it unopened. “It was too scary, too complicated,” she says. “The front of the thing has so many buttons.” After Ms. Rochester’s friends kept raving about their Instant Pot meals, she bought another one…Days later, Ms. Rochester began her first beef stew. After about ten minutes of cooking, it was time to release the pressure valve, the step she feared most. Ms. Rochester pulled her sweater over her hand, turned her back and twisted the knob without looking. “I was praying that nothing would blow up,” she says.

Elsewhere, the article quotes Sharon Gebauer of San Diego, who just wanted to make beef and barley soup, only to be filled with sudden misgivings: “I filled it up, started it pressure cooking, and then I started to think, what happens when the barley expands? I just said a prayer and stayed the hell away.”

Not surprisingly, the article has inspired derision from Instant Pot enthusiasts, among whom one common response seems to be: “People are dumb. They don’t read instruction manuals.” Yet I can testify firsthand that the Instant Pot can be intimidating. The manual is thick and not especially organized, and it does a poor job of explaining such crucial features as the steam release and float valve. (I had to watch a video to learn how to handle the former, and I didn’t figure out what the latter was until I had been using the pot for weeks.) But I’ve found that you can safely ignore most of it and fall back on a few basic tricks—as soon as you manage to get through at least one meal. Once I successfully prepared my first dish, my confidence increased enormously, and I barely remember how it felt to be nervous around it. And that may be the single most relevant point about the cult that the Instant Pot has inspired, which rivals the most fervent corners of fan culture. As Kevin Roose noted in a recent article in the New York Times:

A new religion has been born…Its deity is the Instant Pot, a line of electric multicookers that has become an internet phenomenon and inspired a legion of passionate foodies and home cooks. These devotees—they call themselves “Potheads”—use their Instant Pots for virtually every kitchen task imaginable: sautéing, pressure-cooking, steaming, even making yogurt and cheesecakes. Then, they evangelize on the internet, using social media to sing the gadget’s praises to the unconverted.

And when you look at the Instant Pot from a certain angle, you realize that it has all of the qualities required to create a specific kind of fan community. There’s an initial learning curve that’s daunting enough to keep out the casuals, but not so steep that it prevents a critical mass of enthusiasts from forming. Once you learn the basics, you forget how intimidating it seemed when you were on the outside. And it has a huge body of associated lore that discourages newbies from diving in, even if it doesn’t matter much in practice. (In the months that I’ve been using the Instant Pot, I’ve never used anything except the manual pressure and sauté functions, and I’ve disregarded the rest of the manual, just as I draw a blank on pretty much every element of the mytharc on The X-Files.) Most of all, perhaps, it takes something that is genuinely good, but imperfect, and elevates it into an object of veneration. There are plenty of examples in pop culture, from Doctor Who to Infinite Jest, and perhaps it isn’t a coincidence that the Instant Pot has a vaguely futuristic feel to it. A science fiction or fantasy franchise can turn off a lot of potential fans because of its history and complicated externals, even if most are peripheral to the actual experience. Using the Instant Pot for the first time is probably easier than trying to get into Doctor Who, or so I assume—I’ve steered clear of that franchise for many of the same reasons, reasonable or otherwise. There’s nothing wrong with being part of a group drawn together by the shared object of your affection. But once you’re on the inside, it can be hard to put yourself in the position of someone who might be afraid to try it because it has so many buttons.


The lantern battery and the golem


Science means simply the aggregate of all the recipes that are always successful. All the rest is literature.

—Paul Valéry

Yesterday morning, my wife asked me: “Have you seen the illustration for Michael Chabon’s new essay?” She thrust the latest issue of The New Yorker in my direction, and when I looked down, I saw a drawing by Greg Clarke of a little boy reading what was unmistakably a copy of Astounding Science Fiction. The kid is evidently meant to be Chabon himself, and his article, “The Recipe for Life,” is about nothing less than how his inner life was shaped by his father’s memories of an earlier era. Chabon writes:

He talked about comic books, radio dramas, Astounding magazine, and the stories they’d all told: of rocket-powered heroes, bug-eyed monsters, mad scientists bent on ruling the world. He described to me how he had saved box tops from cold cereals like Post Toasties, and redeemed them by mail for Junior G-Man badges or cardboard Flying Fortresses that carried payloads of black marbles. He told me about playing games like potsy, stickball, handball, and ringolevio, and, for the first time but by no means the last, about an enchanted pastry called a charlotte russe, a rosette of whipped cream on a disk of sponge cake served in a scalloped paper cup, topped with a Maraschino cherry. He described having spent weeks in the cellar of his Flatbush apartment building as a young teen-ager, with some mail-order chemicals, five pounds of kosher salt, and a lantern battery, trying to re-create “the original recipe for life on earth,” as detailed in the pages of Astounding.

The younger Chabon listened to his father intently, and all the while, he was “riding the solitary rails of my imagination into our mutual story, into the future we envisioned and the history we actually accumulated; into the vanished world that he once inhabited.”

Chabon’s father seems to have been born around 1938, or right around the time that John W. Campbell took over Astounding, positioning him to barely catch the tail end of the golden age. He would have been about twelve when the article “Dianetics: The Evolution of a Science” appeared in the May 1950 issue, which means that he snuck in right under the wire. (As the fan Peter Graham once said: “The golden age of science fiction is twelve.”) In fact, when you account for a gap in age of about eighteen years, the fragments of his childhood that we glimpse here are intriguingly reminiscent of Isaac Asimov. Both were bright Jewish boys growing up in Brooklyn—otherwise known as the center of the universe—and they shared the same vocabulary of nostalgia. Robert Chabon reminisced about stickball and the charlotte russe; Asimov lamented the disappearance of the egg cream and wrote in his memoirs:

We used to play “punchball,” for instance. This was a variant of baseball, played without a lot and without a bat. All you needed was a street (we called it a “gutter”) and a rubber ball. You hit the ball with your clenched fist and from then on it was pretty much like baseball.

I don’t know if kids these days still play punchball, but it survived for long enough to be fondly remembered by Stephen Jay Gould, who was born in 1941 in Queens. For Gould, punchball was nothing less than “the canonical ‘recess’ game…It was the game we would play unless kids specifically called for another form.”

Like many polymaths who thrived at the intersection between science and the arts, Gould and Asimov were raised in secular Jewish households, and Chabon’s essay unfolds against a similar, largely unstated cultural background. He notes that his father knew “the birth names of all five Marx Brothers,” as well as the rather startling fact that Underdog’s archenemy was named Simon Bar Sinister. Recalling his father’s “expression of calm intensity,” Chabon links it to another Jewish icon: “A few years later, I will watch Leonard Nimoy, as Mr. Spock, look up from his scanner on the bridge of the USS Enterprise, and catch an echo of my father’s face.” As he silently watches Fritz Lang’s science fiction epic Metropolis in his ailing father’s bedroom, he imagines the conversation that might have unfolded between them under happier circumstances: “Lang’s mother was Jewish. His wife was a member of the Nazi Party.” “Hey, that would make a great sitcom.” Chabon doesn’t emphasize these connections, perhaps because he’s explored them endlessly elsewhere. In his earlier essay “Imaginary Homelands,” he writes:

For a long time now I’ve been busy, in my life and in my work, with a pair of ongoing, overarching investigations: into my heritage—rights and privileges, duties and burdens—as a Jew and as a teller of Jewish stories; and into my heritage as a lover of genre fiction…Years spent writing novels and stories about golems and the Jewish roots of American superhero comic books, Sherlock Holmes and the Holocaust, medieval Jewish freebooters, Passover Seders attended by protégés of forgotten Lovecraftian horror writers, years of writing essays, memoirs, and nervous manifestos about genre fiction or Jewishness.

This is one of the richest veins imaginable for cultural exploration, and Chabon has conducted it so expertly for so long that he can trust us to make many of the associations for ourselves. Revealingly, this is actually the second essay that he has written under the title “The Recipe for Life.” The first, published almost two decades ago, was a meditation on the myth of the golem, a prototypical science fiction story with anticipatory shades of Frankenstein. In his earlier piece, Chabon quotes the philosopher Gershom Scholem: “Golem-making is dangerous; like all major creation it endangers the life of the creator—the source of danger, however, is not the golem…but the man himself.” Chabon continues:

When I read these words, I saw at once a connection to my own work. Anything good that I have written has, at some point during its composition, left me feeling uneasy and afraid. It has seemed, for a moment at least, to put me at risk…I have come to see this fear, this sense of my own imperilment by my creations, as not only an inevitable, necessary part of writing fiction but as virtual guarantor, insofar as such a thing is possible, of the power of my work: as a sign that I am on the right track, that I am following the recipe correctly, speaking the proper spells.

The recipe, Chabon implies, can come from either “The Idea of the Golem” or Astounding, and we owe much of his remarkable career to that insight, which he implicitly credits, in turn, to his father: “The past and the future became alloyed in my imagination: magic and science, heroes and villains, brick-and-steel Brooklyn and the chromium world of tomorrow.”

The minor key


“What keeps science fiction a minor genre, for all the brilliance of its authors and apparent pertinence of its concerns?” The critic who asked this question was none other than John Updike, in his New Yorker review of David G. Hartwell’s anthology The World Treasury of Science Fiction, which was published at the end of the eighties. Updike immediately responded to his own question with his usual assurance:

The short answer is that each science-fiction story is so busy inventing its environment that little energy is left to be invested in the human subtleties. Ordinarily, “mainstream” fiction snatches what it needs from the contemporary environment and concentrates upon surprising us with details of behavior; science fiction tends to reverse the priorities…It rarely penetrates and involves us the way the quest realistic fiction can…”The writer,” Edmund Wilson wrote, “must always find expressions for something which has never yet been exposed, must master a new set of phenomena which has never yet been mastered.” Those rhapsodies, for instance, which Proust delivered upon the then-fresh inventions of the telephone, the automobile, and the airplane point up the larger relativities and magical connections of his great novel, as well as show the new century breaking upon a fin-de-siècle sensibility. The modest increments of fictional “news,” of phenomena whose presentation is unprecedented, have the cumulative weight of true science—a nudging, inching fidelity to human change ultimately far more impressive and momentous than the great glittering leaps of science fiction.

I’ll concede that Updike’s underlying point here is basically correct, and that a lot of science fiction has to spend so much time establishing the premise and the background that it has to shortchange or underplay other important qualities along the way. (At its highest level, this is less a reflection of the author’s limitations than a courtesy to the reader. It’s hard to innovate along every parameter at once, so complex works of speculative fiction as different as Gravity’s Rainbow and Inception need to strategically simplify wherever they can.) But there’s also a hidden fallacy in Updike’s description of science fiction as “a minor genre.” What, exactly, would a “major” genre look like? It’s hard to come up with a definitive list, but if we’re going to limit ourselves to a conception of genre that encompasses science fiction and not, say, modernist realism, we’d probably include fantasy, horror, western, romance, erotica, adventure, mystery, suspense, and historical fiction. When we ask ourselves whether Updike would be likely to consider any of these genres “major,” it’s pretty clear that the answer is no. Every genre, by definition, is minor, at least to many literary critics, which not only renders the distinction meaningless, but raises a host of other questions. If we honestly ask what keeps all genres—although not individual authors—in the minor category, there seem to be three possibilities. Either genre fiction fails to attract or keep major talent; it suffers from various systemic problems of the kind that Updike identified for science fiction; or there’s some other quirk in the way we think about fiction that relegates these genres to a secondary status, regardless of the quality of specific works or writers.

And while all three of these factors may play a role, it’s the third one that seems most plausible. (After all, when you average out the quality of all “literary fiction,” from Updike, Bellow, and Roth down to the work put out by the small presses and magazines, it seems fairly clear that Sturgeon’s Law applies here as much as anywhere else, and ninety percent of everything is crud. And modernist realism, like every category coherent enough to earn its own label, has plenty of clichés of its own.) In particular, if a genre writer is deemed good enough, his or her reward is to be elevated out of it entirely. You clearly see this with such authors as Jorge Luis Borges, perhaps the greatest writer of speculative fiction of the twentieth century, who was plucked out of that category to compete more effectively with Proust, Joyce, and Kafka—the last of whom was arguably also a genre writer who was forcibly promoted to the next level. It means that the genre as a whole can never win. Its best writers are promptly confiscated, freeing up critics to speculate about why it remains “minor.” As Daniel Handler noted in an interview several years ago:

I believe that children’s literature is a genre. I resisted the idea that children’s literature is just anything that children are reading. And I certainly resisted the idea that certain books should get promoted out of children’s literature just because adults are reading them. That idea is enraging too. That’s what happens to any genre, right? First you say, “Margaret Atwood isn’t really a science fiction writer.” Then you say, “There really aren’t any good science fiction writers.” That’s because you promoted them all!

And this pattern isn’t a new one. It’s revealing that Updike quoted Edmund Wilson, who in his essays “Why Do People Read Detective Stories?” and “Who Cares Who Killed Roger Ackroyd?” dismissed the entire mystery genre as minor or worse. Yet when it came to defending his fondness for one author in particular, he fell back on a familiar trick:

I will now confess, in my turn, that, since my first looking into this subject last fall, I have myself become addicted, in spells, to reading myself to sleep with Sherlock Holmes, which I had gone back to, not having looked at it since childhood, in order to see how it compared with Conan Doyle’s latest imitators. I propose, however, to justify my pleasure in rereading Sherlock Holmes on grounds entirely different from those on which the consumers of the current product ordinarily defend their taste. My contention is that Sherlock Holmes is literature on a humble but not ignoble level, whereas the mystery writers most in vogue now are not. The old stories are literature, not because of the conjuring tricks and the puzzles, not because of the lively melodrama, which they have in common with many other detective stories, but by virtue of imagination and style. These are fairy-tales, as Conan Doyle intimated in his preface to his last collection, and they are among the most amusing of fairy-tales and not among the least distinguished.

Strip away the specifics, and the outlines of the argument are clear. Sherlock Holmes is good, and mysteries are bad, so Sherlock Holmes must be something other than mystery fiction. It’s maddening, but from the point of view of a working critic, it makes perfect sense. You get to hold onto the works that you like, while keeping the rest of the genre safely minor—and then you can read yourself happily to sleep.
