Posts Tagged ‘Dave Eggers’
The back matter
“Annotation may seem a mindless and mechanical task,” Louis Menand wrote a few years ago in The New Yorker. “In fact, it calls both for superb fine-motor skills and for adherence to the most exiguous formal demands.” Like most other aspects of writing, it can be all these things at once: mindless and an exercise of meticulous skill, mechanical and formally challenging. I’ve been working on the notes for Astounding for the last week and a half, and although I was initially dreading it, the task has turned out to be weirdly absorbing, in the way that any activity that requires repetitive motion but also continuous mild engagement can amount to a kind of hypnotism. The current draft has about two thousand notes, and I’m roughly three quarters of the way through. So far, the process has been relatively painless, although I’ve naturally tended to postpone the trickier ones for later, which means that I’ll end up with a big stack of problem cases to work through at the end. (My plan is to focus on notes exclusively for two weeks, then address the leftovers at odd moments until the book is due in December.) In the meantime, I’m spending hours every day organizing notes, which feels like a temporary career change. They live in their own Word file, like an independent work in their own right, and the fact that they’ll be bundled together as endnotes, rather than footnotes, encourages me to see them as a kind of bonus volume attached to the first, a vestigial twin clinging to the book, withered but still vigorous beside its larger sibling.
When you spend weeks at a time on your notes, you end up with strong opinions about how they should be presented. I don’t like numbered endnotes, mostly because the numeric superscripts disrupt the text, and it can be frustrating to match them up with the back matter when you’re looking for one in particular. (When I read Nate Silver’s The Signal and the Noise, I found myself distracted by his determination to provide a numbered footnote for seemingly every factual statement, from the date of the Industrial Revolution to the source of the phrase “nothing new under the sun,” and that’s just the first couple of pages. Part of the art of notation is knowing what information you can leave out, and no two writers will come to exactly the same conclusions.) I prefer the keyword system, in which notes are linked to their referent in the body of the book by the page number and a snippet of text. This can lead to a telegraphic, even poetic summary of the contents when you run your eye down the left margin of the page, as in the section of my book about L. Ron Hubbard in the early sixties: “Of course Scientology,” “If President Kennedy did grant me an audience,” “Things go well,” “[Hubbard] chases able people away,” “intellectual garbage,” “Some of [Hubbard’s] claims,” “It is carefully arranged,” “very space opera.” They don’t thrust themselves on your attention until you need them, but when you do, they’re right there. These days, it’s increasingly common for the notes to be provided online, and I can’t guarantee that mine won’t be. But I hope that they’ll take their proper place at the end, where they’ll live unnoticed until readers realize that their book includes the original bonus feature.
The notion that endnotes can take on a life of their own is one that novelists from Nabokov to David Foster Wallace have brilliantly exploited. The first thing that strikes most readers of Wallace’s Infinite Jest, aside from its sheer size, is its back matter, which runs to nearly a hundred pages of closely printed notes at the end of the book. Most of us probably wish that the notes were a little more accessible, as did Dave Eggers, who observed of his first experience reading it: “It was frustrating that the footnotes were at the end of the book, rather than at the bottom of the page.” Yet this wasn’t an accident. As D.T. Max recounts in his fascinating profile of Wallace:
In Bloomington, Wallace struggled with the size of his book. He hit upon the idea of endnotes to shorten it. In April, 1994, he presented the idea to [editor Michael] Pietsch…He explained that endnotes “allow…me to make the primary-text an easier read while at once 1) allowing a discursive, authorial intrusive style w/o Finneganizing the story, 2) mimic the information-flood and data-triage I expect’d be an even bigger part of US life 15 years hence. 3) have a lot more technical/medical verisimilitude 4) allow/make the reader go literally physically ‘back and forth’ in a way that perhaps cutely mimics some of the story’s thematic concerns…5) feel emotionally like I’m satisfying your request for compression of text without sacrificing enormous amounts of stuff.” He also said, “I pray this is nothing like hypertext, but it seems to be interesting and the best way to get the exfoliating curve-line plot I wanted.” Pietsch countered with an offer of footnotes, which readers would find less cumbersome, but eventually agreed.
What’s particularly interesting here is that the endnotes physically shrink the size of Infinite Jest—simply because they’re set in smaller type—while also increasing how long it takes the diligent reader to finish it. Notes allow a writer to play games not just with space, but with time. (This is true even of the most boring kind of scholarly note, which amounts to a form of postponement, allowing readers to engage with it at their leisure, or even never.) In a more recent piece in The New Yorker, Nathan Heller offers a defense of notes in their proper place at the end of the book:
Many readers, and perhaps some publishers, seem to view endnotes, indexes, and the like as gratuitous dressing—the literary equivalent of purple kale leaves at the edges of the crudités platter. You put them there to round out and dignify the main text, but they’re too raw to digest, and often stiff. That’s partly true…Still, the back matter is not simply a garnish. Indexes open a text up. Notes are often integral to meaning, and, occasionally, they’re beautiful, too.
An index turns the book into an object that can be read across multiple dimensions, while notes are a set of tendrils that bind the text to the world, in Robert Frost’s words, “by countless silken ties of love and thought.” As Heller writes of his youthful job at an academic press: “My first responsibility there was proofreading the back matter of books…The tasks were modest, but those of us who carried them out felt that we were doing holy work. We were taking something intricate and powerful and giving it a final polish. I still believe in that refinement.” And so should we.
The problem of narrative complexity
Earlier this month, faced with a break between projects, I began reading Infinite Jest for the first time. If you’re anything like me, this is a book you’ve been regarding with apprehension for a while now—I bought my copy five or six years ago, and it’s followed me through at least three moves without being opened beyond the first page. At the moment, I’m a couple of hundred pages in, and although I’m enjoying it, I’m also glad I waited: Wallace is tremendously original, but he also pushes against his predecessors, particularly Pynchon, in fascinating ways, and I’m better equipped to engage him now than I would have been earlier on. The fact that I’ve published two novels in the meantime also helps. As a writer, I’m endlessly fascinated by the problem of managing complexity—of giving a reader enough intermediate rewards to justify the demands the author makes—and Wallace handles this beautifully. Dave Eggers, in the introduction to the edition I’m reading now, does a nice job of summing it up:
A Wallace reader gets the impression of being in a room with a very talkative and brilliant uncle or cousin who, just when he’s about to push it too far, to try our patience with too much detail, has the good sense to throw in a good lowbrow joke.
And the ability to balance payoff with frustration is a quality shared by many of our greatest novels. It’s relatively easy to write an impenetrable book that tries the reader’s patience, just as it’s easy to create a difficult video game that drives players up the wall, but parceling out small satisfactions to offset the hard parts takes craft and experience. Mike Meginnis of Uncanny Valley makes a similar point in an excellent blog post about the narrative lessons of video games. While discussing the problem of rules and game mechanics, he writes:
In short, while it might seem that richness suggests excess and maximal inclusion, we actually need to be selective about the elements we include, or the novel will not be rich so much as an incomprehensible blur, a smear of language. Think about the very real limitations of Pynchon as a novelist: many complain about his flat characters and slapstick humor, but without those elements to manage the text and simplify it, his already dangerously complex fiction would become unreadable.
Pynchon, of course, casts a huge shadow over Wallace—sometimes literally, as when two characters in Infinite Jest contemplate their vast silhouettes while standing on a mountain range, as another pair does in Gravity’s Rainbow. And I’m curious to see how Wallace, who seems much more interested than Pynchon in creating plausible human beings, deals with this particular problem.
The problem of managing complexity is one that has come up on this blog several times, notably in my discussion of the work of Christopher Nolan: Inception’s characters, however appealing, are basically flat, and the action is surprisingly straightforward once we’ve accepted the premise. Otherwise, the movie would fall apart from trying to push complexity in more than one direction at once. Even works that we don’t normally consider accessible to a casual reader often incorporate elements of selection or order into their design. The Homeric parallels in Joyce’s Ulysses are sometimes dismissed as an irrelevant trick—Borges, in particular, didn’t find them interesting—but they’re very helpful for a reader trying to cut a path through the novel for the first time. When Joyce dispensed with that device, the result was Finnegans Wake, a novel greatly admired and rarely read. That’s why encyclopedic fictions, from The Divine Comedy to Moby-Dick, tend to be organized around a journey or other familiar structure, which gives the reader a compass and map to navigate the authorial wilderness.
On a more modest level, I’ve frequently found myself doing this in my own work. I’ve mentioned before that I wanted one of the three narrative strands in The Icon Thief to be a police procedural, which, with its familiar beats and elements, would serve as a kind of thread to pull the reader past some of the book’s complexities. More generally, this is the real purpose of plot. Kurt Vonnegut, who was right about almost everything, says as much in one of those writing aphorisms that I never tire of quoting:
I guarantee you that no modern story scheme, even plotlessness, will give a reader genuine satisfaction, unless one of those old-fashioned plots is smuggled in somewhere. I don’t praise plots as accurate representations of life, but as ways to keep readers reading.
The emphasis is mine. Plot is really a way of easing the reader into that greatest of imaginative leaps, which all stories, whatever their ambitions, have in common: the illusion that these events are really taking place, and that characters who never existed are worthy of our attention and sympathy. Plot, structure, and other incidental pleasures are what keep the reader nourished while the real work of the story is taking place. If we take it for granted, it’s because it’s a trick that most storytellers learned a long time ago. But the closer we look at its apparent simplicity, the sooner we realize that, well, it’s complicated.
Are authors really too nice?
Like it or not, authors have to live with other authors. Some may prefer otherwise, and do their best to keep their distance, but most of us end up spending a fair amount of time—in person, in print, and online—interacting with our fellow writers. You can chalk it up to camaraderie, careerism, or the simple sense that there’s no one else with whom we can talk about the things that matter most to us, as well as the knowledge that, for better or worse, we’re going to be collaborating and competing with these people for a long time. As a result, most of us generally avoid criticizing one another’s work, at least in public. Which isn’t to say that writers aren’t neurotic, needy, petty people—most of us certainly are. But while we may secretly begrudge a friend’s success or agree that this year’s big book is a big bore, we generally keep these opinions to ourselves or share them only in private. That’s why only a handful of major novelists—Updike, Vidal, maybe a few others—have also been major critics. It isn’t for lack of intelligence; it’s more out of prudence or caution.
That’s why I don’t agree with Dwight Garner’s recent assertion that Twitter has somehow made writers less willing to criticize one another in public. Most writers have long since concluded, and rightly so, that it isn’t worth the headache. At best, we tend to reserve our critical arrows for those unlikely to be hurt by what we say, or even to read it at all, which is the real reason why the dead, the famous, and the canonized are such tempting targets. But when it comes to writers on our own level, there’s little to gain and much to lose by criticizing them in print. This isn’t omerta, or a gentlemen’s agreement, but a modus vivendi that avoids problems down the line. Even Norman Mailer, no stranger to conflict, came to the same conclusion. Fifty years ago, in his essay “Some Children of the Goddess,” he took potshots at contemporaries like Styron, Salinger, and Roth, and some never forgave him for it. From then on, he avoided criticizing his peers, or lobbed his missiles at more resilient targets like Tom Wolfe. And if Mailer, of all people, decided that being a critic was more trouble than it was worth, I can’t blame other writers for concluding the same thing.
And yet it’s also a genuine loss. Dave Eggers isn’t wrong when he advises us not to dismiss a novel until we’ve written one, or a movie until we’ve made one. There’s no question that we’d avoid a lot of the nonsense written about movies and books—like the idea, for instance, that a director is the sole author of a film, despite all evidence to the contrary—if more criticism were written by people with experience in the creative field in question. As someone who has done a bit of freelancing myself, I can say that while critics can be driven by ambitions and impulses of their own, these are qualitatively different from the process that underlies the creation of any extended, original work of art. Ideally, then, a literary critic would know something about how a novel is put together, with all the compromises, accidents, and bear traps involved—and there’s no one more qualified to do this than working novelists themselves. But for the reasons I’ve listed above, most writers prefer to keep out of it, especially when it comes to the contemporaries about whom they know the most.
In short, the people best equipped to write intelligently about contemporary literature—the writers themselves—have more than enough reason to stand down, and it isn’t necessarily realistic or fair to expect otherwise. Consequently, our best literary critics have often been those with some experience of creative work who have since thrown in their lot with the critics, which is how we end up with valuable voices like Edmund Wilson or James Wood, who have written novels of their own but found their true calling elsewhere. This isn’t a perfect solution, but it’s a pretty good one, and I’d much rather be reviewed by a critic who at least knew what writing a publishable novel was like. In the end, though, this will always be an issue for literary criticism, which differs from most other fields in that critics and their subjects use the same tools and draw on the same pool of talent. It makes objectivity, bravery, and expertise in a critic all the more precious. And if you want to know what a writer really thinks of his peers—well, just corner him at a party, and believe me, you’ll get an earful.
Criticizing the critical critic
Last week, Dwight Garner of the New York Times—arguably one of the two or three most famous literary critics now at work, along with his colleague Michiko Kakutani and The New Yorker’s James Wood—wrote a long opinion piece titled “A Critic’s Case for Critics Who Are Actually Critical.” In it, he decries what he sees as the decline of serious criticism, as well as the hostility toward the role of critics themselves, who are seen, at least by authors, as negative, dismissive, and cruel. To illustrate this view, he quotes a decade-old interview with Dave Eggers, who says:
Do not dismiss a book until you have written one, and do not dismiss a movie until you have made one, and do not dismiss a person until you have met them. It is a fuckload of work to be open-minded and generous and understanding and forgiving and accepting, but Christ, that is what matters. What matters is saying yes.
(Incidentally, Eggers conducted this interview with my old college literary magazine, whose fiction board I joined a few months later. Garner doesn’t quote the interview’s last few lines, which, if I recall correctly, became something of a running joke around the Advocate building for years afterward: “And if anyone wants to hurt me for that, or dismiss me for saying that, for saying yes, I say Oh do it, do it you motherfuckers, finally, finally, finally.”)
Well, Garner finally, finally, finally goes after Eggers, a writer he says he admires, saying that he “deplores” the stance expressed above: “The sad truth about the book world,” Garner writes, “is that it doesn’t need more yes-saying novelists and certainly no more yes-saying critics. We are drowning in them.” What the world really needs, he argues, are uncompromising critics who are willing to honestly engage with works of art, both good and bad, and to be harsh when the situation requires it. He says that the best work of critics like Pauline Kael “is more valuable—and more stimulating—than all but the most first-rate novels.” He points out that any writer who consents to have his or her novel published tacitly agrees to allow critics to review it however they like. And he bemoans the fact that social media has made it hard for critics to be as honest and tough as they should be. Twitter, he says, has degenerated into a mutual lovefest between authors, and doesn’t allow for anything like real criticism: “On it, negative words have the same effect as a bat flying into a bridal shower.”
The trouble with Garner’s argument, aside from its quixotic attempt to persuade authors to feel kindly toward critics, is that I don’t think it’s factually correct. Garner quotes Jonah Peretti’s observation that “Twitter is a simple service used by smart people,” which isn’t true at all—Twitter, for better or worse, is used by all kinds of people, and when we venture out of our own carefully cultivated circles, we’re treated to the sight of humanity in its purest form, including people who didn’t realize the Titanic was real. The same goes for the comments section of any news or opinion site, which is generally a swamp of negativity. The trouble with social media isn’t that it encourages people to be uncritically positive or negative: it’s that it encourages unconsidered discourse of all kinds. Twitter, by design, isn’t a place for reasoned commentary; at its best, it’s more like a vehicle for small talk. And we shouldn’t judge it by the same standards that we use for other forms of criticism, any more than we should judge guests at a cocktail party for not saying what they really feel about the people around them. That’s also why attempts at criticism on Twitter tend to look uglier than the author may have intended—it’s the nature of the form.
And when we’re dealing with the choice, admittedly not a great one, between uncritical positivity and negativity, I’d have to say that the former is the lesser of two evils. That’s what Eggers is saying in the interview quoted above: he isn’t proposing, as Garner would have it, “mass intellectual suicide,” but an extreme solution to what he rightly sees as an extreme problem, which is the ease with which we can fall back into dismissive snark, and he said this long before “snark” had even attained its current meaning. It’s best, of course, to make nuanced, perceptive, complex arguments, but if we don’t have the time for it—and being a good critic takes time—then it’s marginally better, at least for our own souls, to be enthusiastic bores. I’ve argued before, and I still believe, that every worthwhile critic builds his or her work on a foundation of genuine enthusiasm for the art in question. Hard intellectual engagement comes later, as a sort of refinement of joy, and when it doesn’t, that’s the worst kind of intellectual suicide, which disguises itself as its opposite. Dwight Garner is a really good critic. But to get where Garner is now, you need to pass through Eggers first.