Alec Nevala-Lee

Thoughts on art, creativity, and the writing life.

Posts Tagged ‘Louis Menand’

Thinkers of the unthinkable


At the symposium that I attended over the weekend, the figure whose name seemed to come up the most was Herman Kahn, the futurologist and military strategist best known for his book On Thermonuclear War. Kahn died in 1983, but he still looms large over futures studies, and there was a period in which he was equally inescapable in the mainstream. As Louis Menand writes in a harshly critical piece in The New Yorker: “Herman Kahn was the heavyweight of the Megadeath Intellectuals, the men who, in the early years of the Cold War, made it their business to think about the unthinkable, and to design the game plan for nuclear war—how to prevent it, or, if it could not be prevented, how to win it, or, if it could not be won, how to survive it…The message of [his] book seemed to be that thermonuclear war will be terrible but we’ll get over it.” And it isn’t surprising that Kahn engaged in a dialogue throughout his life with science fiction. In her book The Worlds of Herman Kahn, Sharon Ghamari-Tabrizi relates:

Early in life [Kahn] discovered science fiction, and he remained an avid reader throughout adulthood. While it nurtured in him a rich appreciation for plausible possibilities, [his collaborator Anthony] Wiener observed that Kahn was quite clear about the purposes to which he put his own scenarios. “Herman would say, ‘Don’t imagine that it’s an arbitrary choice as though you were writing science fiction, where every interesting idea is worth exploring.’ He would have insisted on that. The scenario must focus attention on a possibility that would be important if it occurred.” The heuristic or explanatory value of a scenario mattered more to him than its accuracy.

Yet Kahn’s thinking was inevitably informed by the genre. Ghamari-Tabrizi, who refers to nuclear strategy as an “intuitive science,” sees hints of “the scientist-sleuth pulp hero” in On Thermonuclear War, which is just another name for the competent man, and Kahn himself openly acknowledged the speculative thread in his work: “What you are doing today fundamentally is organizing a Utopian society. You are sitting down and deciding on paper how a society at war works.” On at least one occasion, he invoked psychohistory directly. In the revised edition of the book Thinking About the Unthinkable, Kahn writes of one potential trigger for a nuclear war:

Here we turn from historical fact to science fiction. Isaac Asimov’s Foundation novels describe a galaxy where there is a planet of technicians who have developed a long-term plan for the survival of civilization. The plan is devised on the basis of a scientific calculation of history. But the plan is upset and the technicians are conquered by an interplanetary adventurer named the Mule. He appears from nowhere, a biological mutant with formidable personal abilities—an exception to the normal laws of history. By definition, such mutants rarely appear but they are not impossible. In a sense, we have already seen a “mule” in this century—Hitler—and another such “mutant” could conceivably come to power in the Soviet Union.

And it’s both frightening and revealing, I think, that Kahn—even as he was thinking about the unthinkable—doesn’t take the next obvious step, and observe that such a mutant could also emerge in the United States.

Asimov wouldn’t have been favorably inclined toward the notion of a “winnable” nuclear war, but Kahn did become friendly with a writer whose attitudes were more closely aligned with his own. In the second volume of Robert A. Heinlein: In Dialogue with His Century, William H. Patterson describes the first encounter between the two men:

By September 20, 1962, [the Heinleins] were in Las Vegas…[They] met Dr. Edward Teller, who had been so supportive of the Patrick Henry campaign, as well as one of Teller’s colleagues, Herman Kahn. Heinlein’s ears pricked up when he was introduced to this jolly, bearded fat man who looked, he said, more like a young priest than one of the sharpest minds in current political thinking…Kahn was a science fiction reader and most emphatically a Heinlein fan.

Three years later, Heinlein attended a seminar, “The Next Ten Years: Scenarios and Possibilities,” that Kahn held at the Hudson Institute in New York. Heinlein—who looked like Quixote to Kahn’s Sancho Panza—was flattered by the reception:

If I attend an ordinary cocktail party, perhaps two or three out of a large crowd will know who I am. If I go to a political meeting or a church or such, I may not be spotted at all…But at Hudson Institute, over two-thirds of the staff and over half of the students button-holed me. This causes me to have a high opinion of the group—its taste, IQ, patriotism, sex appeal, charm, etc. Writers are incurably conceited and pathologically unsure of themselves; they respond to stroking the way a cat does.

And it wasn’t just the “stroking” that Heinlein liked, of course. He admired Thinking About the Unthinkable and On Thermonuclear War, both of which would be interesting to read alongside Farnham’s Freehold, published just a few years later. Both Heinlein and Kahn thought about the future through stories, in a pursuit that carried a slightly disreputable air, as Kahn implied in his use of the word “scenario”:

As near as I can tell, the term scenario was first used in this sense in a group I worked with at the RAND Corporation. We deliberately chose the word to deglamorize the concept. In writing the scenarios for various situations, we kept saying “Remember, it’s only a scenario,” the kind of thing that is produced by Hollywood writers, both hacks and geniuses.

You could say much the same about science fiction. And perhaps it’s appropriate that Kahn’s most lasting cultural contribution came out of Hollywood. Along with Wernher von Braun, he was one of the two most likely models for the title character in Dr. Strangelove. Stanley Kubrick immersed himself in Kahn’s work—the two men met a number of times—and Kahn’s reaction to the film was that of a writer, not a scientist. As Ghamari-Tabrizi writes:

The Doomsday Machine was Kahn’s idea. “Since Stanley lifted lines from On Thermonuclear War without change but out of context,” Kahn told reporters, he thought he was entitled to royalties from the film. He pestered him several times about it, but Kubrick held firm. “It doesn’t work that way!” he snapped, and that was that.

The back matter


“Annotation may seem a mindless and mechanical task,” Louis Menand wrote a few years ago in The New Yorker. “In fact, it calls both for superb fine-motor skills and for adherence to the most exiguous formal demands.” Like most other aspects of writing, it can be all these things at once: mindless and an exercise of meticulous skill, mechanical and formally challenging. I’ve been working on the notes for Astounding for the last week and a half, and although I was initially dreading it, the task has turned out to be weirdly absorbing, in the way that any activity that requires repetitive motion but also continuous mild engagement can amount to a kind of hypnotism. The current draft has about two thousand notes, and I’m roughly three quarters of the way through. So far, the process has been relatively painless, although I’ve naturally tended to postpone the trickier ones for later, which means that I’ll end up with a big stack of problem cases to work through at the end. (My plan is to focus on notes exclusively for two weeks, then address the leftovers at odd moments until the book is due in December.) In the meantime, I’m spending hours every day organizing notes, which feels like a temporary career change. They live in their own Word file, like an independent work in themselves, and the fact that they’ll be bundled together as endnotes, rather than footnotes, encourages me to see them as a kind of bonus volume attached to the first, like a vestigial twin clinging to the book, a withered but still vigorous version of its larger sibling.

When you spend weeks at a time on your notes, you end up with strong opinions about how they should be presented. I don’t like numbered endnotes, mostly because the numeric superscripts disrupt the text, and it can be frustrating to match them up with the back matter when you’re looking for one in particular. (When I read Nate Silver’s The Signal and the Noise, I found myself distracted by his determination to provide a numbered footnote for seemingly every factual statement, from the date of the Industrial Revolution to the source of the phrase “nothing new under the sun,” and that’s just the first couple of pages. Part of the art of notation is knowing what information you can leave out, and no two writers will come to exactly the same conclusions.) I prefer the keyword system, in which notes are linked to their referent in the body of the book by the page number and a snippet of text. This can lead to a telegraphic, even poetic summary of the contents when you run your eye down the left margin of the page, as in the section of my book about L. Ron Hubbard in the early sixties: “Of course Scientology,” “If President Kennedy did grant me an audience,” “Things go well,” “[Hubbard] chases able people away,” “intellectual garbage,” “Some of [Hubbard’s] claims,” “It is carefully arranged,” “very space opera.” They don’t thrust themselves on your attention until you need them, but when you do, they’re right there. These days, it’s increasingly common for the notes to be provided online, and I can’t guarantee that mine won’t be. But I hope that they’ll take their proper place at the end, where they’ll live unnoticed until readers realize that their book includes the original bonus feature.

The notion that endnotes can take on a life of their own is one that novelists from Nabokov to David Foster Wallace have brilliantly exploited. When reading Wallace’s Infinite Jest, the first thing that strikes most readers, aside from its sheer size, is its back matter, which takes up close to a hundred pages of closely printed notes at the end of the book. Most of us probably wish that the notes were a little more accessible, as did Dave Eggers, who observes of his first experience reading it: “It was frustrating that the footnotes were at the end of the book, rather than at the bottom of the page.” Yet this wasn’t an accident. As D.T. Max recounts in his fascinating profile of Wallace:

In Bloomington, Wallace struggled with the size of his book. He hit upon the idea of endnotes to shorten it. In April, 1994, he presented the idea to [editor Michael] Pietsch…He explained that endnotes “allow…me to make the primary-text an easier read while at once 1) allowing a discursive, authorial intrusive style w/o Finneganizing the story, 2) mimic the information-flood and data-triage I expect’d be an even bigger part of US life 15 years hence. 3) have a lot more technical/medical verisimilitude 4) allow/make the reader go literally physically ‘back and forth’ in a way that perhaps cutely mimics some of the story’s thematic concerns…5) feel emotionally like I’m satisfying your request for compression of text without sacrificing enormous amounts of stuff.” He also said, “I pray this is nothing like hypertext, but it seems to be interesting and the best way to get the exfoliating curve-line plot I wanted.” Pietsch countered with an offer of footnotes, which readers would find less cumbersome, but eventually agreed.

What’s particularly interesting here is that the endnotes physically shrink the size of Infinite Jest—simply because they’re set in smaller type—while also increasing how long it takes the diligent reader to finish it. Notes allow a writer to play games not just with space, but with time. (This is true even of the most boring kind of scholarly note, which amounts to a form of postponement, allowing readers to engage with it at their leisure, or even never.) In a more recent piece in The New Yorker, Nathan Heller offers a defense of notes in their proper place at the end of the book:

Many readers, and perhaps some publishers, seem to view endnotes, indexes, and the like as gratuitous dressing—the literary equivalent of purple kale leaves at the edges of the crudités platter. You put them there to round out and dignify the main text, but they’re too raw to digest, and often stiff. That’s partly true…Still, the back matter is not simply a garnish. Indexes open a text up. Notes are often integral to meaning, and, occasionally, they’re beautiful, too.

An index turns the book into an object that can be read across multiple dimensions, while notes are a set of tendrils that bind the text to the world, in Robert Frost’s words, “by countless silken ties of love and thought.” As Heller writes of his youthful job at an academic press: “My first responsibility there was proofreading the back matter of books…The tasks were modest, but those of us who carried them out felt that we were doing holy work. We were taking something intricate and powerful and giving it a final polish. I still believe in that refinement.” And so should we.

You are here


Adam Driver in Star Wars: The Force Awakens

Remember when you were watching Star Wars: The Force Awakens and Adam Driver took off his mask, and you thought you were looking at some kind of advanced alien? You don’t? That’s strange, because it says you did, right here in Anthony Lane’s review in The New Yorker:

So well is Driver cast against type here that evil may turn out to be his type, and so extraordinary are his features, long and quiveringly gaunt, that even when he removes his headpiece you still believe that you’re gazing at some form of advanced alien.

I’m picking on Lane a little here, because the use of the second person is so common in movie reviews and other types of criticism—including this blog—that we hardly notice it, any more than we notice the “we” in this very sentence. Film criticism, like any form of writing, evolves its own language, and using that insinuating “you,” as if your impressions had melded seamlessly with the critic’s, is one of its favorite conventions. (For instance, Manohla Dargis, in her New York Times review of the same film, writes: “It also has appealingly imperfect men and women whose blunders and victories, decency and goofiness remind you that a pop mythology like Star Wars needs more than old gods to sustain it.”) But who is this “you,” exactly? And why has it started to irk me so much?

The second person has been used by critics for a long time, but in its current form, it almost certainly goes back to Pauline Kael, who employed it in the service of images or insights that could have occurred to no other brain on the planet, as when she wrote of Madeline Kahn in Young Frankenstein: “When you look at her, you see a water bed at just the right temperature.” This tic of Kael’s has been noted and derided for almost four decades, going back to Renata Adler’s memorable takedown in the early eighties, in which she called it “the intrusive ‘you’” and noted shrewdly: “But ‘you’ is most often Ms. Kael’s ‘I,’ or a member or prospective member of her ‘we.’” Adam Gopnik later said: “It wasn’t her making all those judgments. It was the Pop Audience there beside her.” And “the second-person address” clearly bugged Louis Menand, too, although his dislike of it was somewhat undermined by the fact that he internalized it so completely:

James Agee, in his brief service as movie critic of The Nation, reviewed many nondescript and now long-forgotten pictures; but as soon as you finish reading one of his pieces, you want to read it again, just to see how he did it…You know what you think about Bonnie and Clyde by now, though, and so [Kael’s] insights have lost their freshness. On the other hand, she is a large part of the reason you think as you do.

Pauline Kael

Kael’s style was so influential—I hear echoes of it in almost everything I write—that it’s no surprise that her intrusive “you” has been unconsciously absorbed by the generations of film critics that followed. If it bothers you as it does me, you can quietly replace it throughout with “I” without losing much in the way of meaning. But that’s part of the problem. The “you” of film criticism conceals a neurotic distrust of the first person that prevents critics from honoring their opinions as their own. Kael said that she used “you” because she didn’t like “one,” which is fair enough, but there’s also nothing wrong with “I,” which she wasn’t shy about using elsewhere. To a large extent, Kael was forging her own language, and I’m willing to forgive that “you,” along with so much else, because of the oceanic force of the sensibilities to which it was attached. But separating the second person from Kael’s unique voice and turning it into a crutch to be indiscriminately employed by critics everywhere yields a more troubling result. It becomes a tactic that distances the writer slightly from his or her own judgments, creating an impression of objectivity and paradoxical intimacy that has no business in a serious review. Frame these observations in “I,” and the critic would feel more of an obligation to own them and make sense of them; stick them in a convenient “you,” and they’re just one more insight to be tossed off, as if the critic happened to observe it unfolding in your brain and can record it here without comment.

Obviously, there’s nothing wrong with wanting to avoid the first person in certain kinds of writing. It rarely has a place in serious reportage, for instance, despite the efforts of countless aspiring gonzo journalists who try to do what Norman Mailer, Hunter S. Thompson, and only a handful of others have ever done well. (It can even plague otherwise gifted writers: I was looking forward to Ben Lerner’s recent New Yorker piece about art conservation, but I couldn’t get past his insistent use of the first person.) But that “I” absolutely belongs in criticism, which is fundamentally a record of a specific viewer, listener, or reader’s impressions of his or her encounter with a piece of art. All great critics, whether they use that “you” or not, are aware of this, and it can be painful to read a review by an inexperienced writer that labors hard to seem “objective.” But if our best critics so often fall into the “you” trap, it’s a sign that even they aren’t entirely comfortable with giving us all of themselves, and I’ve started to see it as a tiny betrayal—meaningful or not—of what ought to be the critic’s intensely personal engagement with the work. And if it’s only a tic or a trick, then we sacrifice nothing by losing it. Replace that “you” with “I” throughout, making whatever other adjustments seem necessary, and the result is heightened and clarified, with a much better sense of who was really sitting there in the dark, feeling emotions that no other human being would ever feel in quite the same way.

Should a writer go to college?


A few years ago, I woke up with the startling realization that of all my friends from college, I was by far the least educated. I don’t mean that in any kind of absolute sense, but simply as a matter of numbers: most of my college friends went on to get master’s or professional degrees, and many of them have gone much further. By contrast, I, who loved college and would happily have spent the rest of my life in Widener Library, took my bachelor’s degree and went looking for a job, with the idea that I’d go back to school at some point after seeing something of the larger world. The reality, of course, was very different. And while I don’t regret any of the choices I’ve made, I do sometimes wonder if I might have benefited from, or at least enjoyed, some sort of postgraduate education.

Of course, it’s also possible that even my bachelor’s degree was a bad investment, a sentiment that seems increasingly common these days. College seniors, we’re frequently reminded, are graduating into a lousy job market. As Louis Menand points out in this week’s New Yorker, it’s unclear whether the American college system is doing the job it’s intended to do, whether you think of it primarily as a winnowing system or as a means of student enrichment. And then we have the controversial Thiel Fellowship, which is designed to encourage gifted entrepreneurs to drop out of college altogether. One of the fellowship’s first recipients recently argued that “higher education is broken,” a position that might be easier to credit if he weren’t nineteen years old and hadn’t just received a $100,000 check to drop out of school. Which doesn’t necessarily make him wrong.

More interesting, perhaps, is the position of David Mamet, whose new book The Secret Knowledge includes a remarkable jeremiad against the whole idea of a liberal education. “Though much has been made of the necessity of a college education,” Mamet writes, “the extended study of the Liberal Arts actually trains one for nothing.” Mamet has said this before, most notably two years ago in a speech at Stanford University, where he compared the process of higher education to that of a laboratory rat pulling a lever to get a pellet. Of course, he’s been saying the same thing for a long time with respect to the uselessness of education for playwrights (not to mention ping-pong players). And as far as playwrights are concerned, I suspect he may be right, although he gets into trouble when he tries to expand the argument to everyone else.

So is college useful? In particular, is it useful for aspiring members of the creative class? Anecdotal information cuts both ways: for every Tom Stoppard, who didn’t go to college at all, there’s an Umberto Eco, who became a famous novelist after—and because of—a lifetime of academic achievement. Considered objectively, though, the answer seems to lie somewhere in the middle. In Origins of Genius, Dean Simonton writes:

Indeed, empirical research has often found that achieved eminence as a creator is a curvilinear, inverted-U function of the level of formal education. That is, formal education first increases the probability of attaining creative success, but after an optimum point, additional formal education may actually lower the odds. The location of this peak varies according to the specific type of creativity. In particular, for creators in the arts and humanities, the optimum is reached in the last two years of undergraduate instruction, whereas for scientific creators the optimum may be delayed until the first couple of years of graduate school. [Italics mine.]

Which implies that a few years of higher education is useful for artists, since it exposes them to interesting people and gives them a basic level of necessary knowledge, but that too much is unhelpful, or even damaging, if it encourages greater conformity. The bottom line, not surprisingly, is that if you want to be a writer, yes, you should probably go to college. But that doesn’t mean you need to stay there.
