Posts Tagged ‘Harold Bloom’
Reading the rocks
“[Our] ignorance of planetary history undermines any claims we may make to modernity,” the geologist Marcia Bjornerud writes in her new book Timefulness: How Thinking Like a Geologist Can Help Save the World. In an excerpt that appeared last week on Nautilus, Bjornerud makes a case for geology as a way of seeing that I find poetic and compelling:
Early in an introductory geology course, one begins to understand that rocks are not nouns but verbs—visible evidence of processes: a volcanic eruption, the accretion of a coral reef, the growth of a mountain belt. Everywhere one looks, rocks bear witness to events that unfolded over long stretches of time. Little by little, over more than two centuries, the local stories told by rocks in all parts of the world have been stitched together into a great global tapestry—the geologic timescale. This “map” of Deep Time represents one of the great intellectual achievements of humanity, arduously constructed by stratigraphers, paleontologists, geochemists, and geochronologists from many cultures and faiths. It is still a work in progress to which details are constantly being added and finer and finer calibrations being made.
This is a lovely passage in itself, but I was equally struck by how it resembles the arguments that are often advanced in defense of the great books. One of that movement’s favorite talking points is the notion of “The Great Conversation,” or the idea that canonical books and authors aren’t dead or antiquated, but engaged in a vital dialogue between themselves and the present. And its defenders frequently make their case in terms much like those that Bjornerud employs. In the book The Great Conversation, which serves as the opening volume of Great Books of the Western World, the educator Robert Maynard Hutchins writes: “This set of books is offered in no antiquarian spirit. We have not seen our task as that of taking tourists on a visit to ancient ruins or to the quaint productions of primitive peoples.” And the justifications presented for the two fields are similar as well. As Bjornerud’s subtitle indicates, she suggests that a greater awareness of geologic timescales can serve as a way for us to address the problems of our own era, while Hutchins uses language that has a contemporary ring:
We are as concerned as anybody else at the headlong plunge into the abyss that Western civilization seems to be taking. We believe that the voices that may recall the West to sanity are those which have taken part in the Great Conversation. We want them to be heard again not because we want to go back to antiquity, or the Middle Ages, or the Renaissance, or the Eighteenth Century. We are quite aware that we do not live in any time but the present, and, distressing as the present is, we would not care to live in any other time if we could.
“We want the voices of the Great Conversation to be heard again because we think they may help us to learn to live better now,” Hutchins concludes. Bjornerud strikes much the same note when she speaks on behalf of geology, issuing a dire warning against “temporal illiteracy,” which leads us to ignore our own impact on environmental processes in the present. In both cases, a seemingly static body of knowledge is reimagined as timely and urgent. I’ve spent much of my life in service to this notion, in one way or another, and I badly want to believe it. Yet I sometimes have my doubts. The great books have been central to my thinking for decades, and their proponents tend to praise their role in building cultural and civic awareness, but the truth isn’t quite that simple. As Harold Bloom memorably points out in The Western Canon: “Reading the very best writers—let us say Homer, Dante, Shakespeare, Tolstoy—is not going to make us better citizens.” And a few pages later, he makes a case that strikes me as more convincing than anything that Hutchins says:
The silliest way to defend the Western Canon is to insist that it incarnates all of the seven deadly moral virtues that make up our supposed range of normative values and democratic principles. This is palpably untrue…The West’s greatest writers are subversive of all values, both ours and their own…If we read the Western Canon in order to form our social, political, or personal moral values, I firmly believe we will become monsters of selfishness and exploitation. To read in the service of any ideology is not, in my judgment, to read at all.
And while I’m certainly sympathetic to Bjornerud’s argument, I suspect that the same might hold true if we turn to geology for lessons about time. Good science, like great literature, is morally neutral, and we run into trouble when we ask it to stand for anything but itself. (Bjornerud notes in passing that many geologists are employed by petroleum companies, which doesn’t help her case that access to knowledge about the “deep, rich, grand geologic story” of our planet will lead to a better sense of environmental stewardship.) And this line of argument has a way of highlighting a field’s supposed relevance at the moments when it seems most endangered. The humanities have long fought against the possibility, as Bloom dryly puts it, that “our English and other literature departments [will] shrink to the dimensions of our current Classics departments,” and Bjornerud is equally concerned for geology:
Lowly geology has never achieved the glossy prestige of the other sciences. It has no Nobel Prize, no high school Advanced Placement courses, and a public persona that is musty and dull. This of course rankles geologists, but it also has serious consequences for society…The perceived value of a science profoundly influences the funding it receives.
When a field seems threatened, it’s tempting to make it seem urgently necessary. I’ve done plenty of this sort of thing myself, and I hope that it works. In the end, though, I have a feeling that Bjornerud’s “timefulness” has exactly the same practical value as the virtue that Bloom attributes to books, which is priceless enough: “All that the Western Canon can bring one is the proper use of one’s own solitude, that solitude whose final form is one’s confrontation with one’s own mortality.”
Shakespeare and the art of revision
Note: I’m taking the day off, so I’m republishing a piece from earlier in this blog’s run. This post originally appeared, in a slightly different form, on April 22, 2016.
When we think of William Shakespeare, we don’t tend to see him as an author who meticulously revised his work. His reputation as a prodigy of nature, pouring out raw poetry onto the page, owes a lot to Ben Jonson’s short reminiscence of his friend, which is still the most valuable portrait we have of how Shakespeare seemed to those who knew him best:
I remember the players have often mentioned it as an honor to Shakespeare, that in his writing, whatsoever he penned, he never blotted out a line. My answer hath been, “Would he had blotted a thousand,” which they thought a malevolent speech. I had not told posterity this but for their ignorance, who chose that circumstance to commend their friend by wherein he most faulted; and to justify mine own candor, for I loved the man, and do honor his memory on this side idolatry as much as any. He was, indeed, honest, and of an open and free nature; had an excellent fancy, brave notions, and gentle expressions, wherein he flowed with that facility that sometime it was necessary he should be stopped…His wit was in his own power; would the rule of it had been so too…But he redeemed his vices with his virtues. There was ever more in him to be praised than to be pardoned.
And even Shakespeare’s admirers admit that his sheer imaginative fertility—the greatest of any writer who ever lived—led him to produce bad lines as well as good, often side by side. (My favorite example is the last stanza of “The Phoenix and the Turtle.” I don’t think it’s possible to read “For these dead birds sigh a prayer” as anything other than one of the worst lines of poetry ever written.)
But he did revise, both on the overarching levels of character and theme and on the level of the individual line. Harold Bloom, among others, has advanced the ingenious theory that the lost Ur-Hamlet, which we know only through offhand references by contemporaries, was nothing less than an early draft by the young Shakespeare himself. We know that it wasn’t particularly good: the author Thomas Lodge refers to the king’s ghost crying “Hamlet, revenge!” in a way that implies that it became a running joke among theatergoers. But the idea that Shakespeare went back and revised it so many years later is inherently revealing. We know that the story was personally meaningful to him—his only son was named Hamnet, a variant of the same name—and that the lost version would have been one of the first plays he ever wrote. And Hamlet itself, when we read it in this light, looks a lot like a play that found its final form through repeated acts of revision. F. Scott Fitzgerald once called himself a “taker-outer,” while his friend Thomas Wolfe was a “putter-inner,” which prompted Wolfe to reply:
You say that the great writer like Flaubert has consciously left out the stuff that Bill or Joe will come along presently and put in. Well, don’t forget, Scott, that a great writer is not only a leaver-outer but also a putter-inner, and that Shakespeare and Cervantes and Dostoevsky were great putter-inners—greater putter-inners, in fact, than taker-outers and will be remembered for what they put in—remembered, I venture to say, as long as Monsieur Flaubert will be remembered for what he left out.
And Hamlet stands as the one instance in which Shakespeare, while revising the first draft, put in everything he wanted, even if the result was close to unplayable on stage.
There’s an even more compelling glimpse of Shakespeare the reviser, and it comes in the unlikely form of Timon of Athens, which, by any measure, was the weirdest play he ever wrote. Scholars have attributed its stranger qualities—the loose ends, the characters who are introduced only to disappear for no reason—to a collaboration between Shakespeare and Thomas Middleton, and textual analysis seems to bear this out. But it also looks like a rough draft that Shakespeare never had a chance to revise, and if we take it as a kind of snapshot of his creative process, it’s a document of unbelievable importance. One of the servants’ speeches, for example, starts out as prose, then shifts halfway through to verse, a peculiar transition that occurs repeatedly in Timon but has few parallels in the other plays. This suggests that Shakespeare began by roughing out large sections of the play in prose form, and then went back to convert it into poetry. Timon just happens to be the one play in which the process of revision was interrupted, leaving the work in an unfinished state. It implies that Shakespeare’s approach wasn’t so different from the one that I’ve advocated here in the past: you write an entire first draft before going back to polish it, just as a painter might do a sketch or cartoon of the whole canvas before drilling down to the fine details. It isn’t until you’ve written a story that you know what it’s really about. And the little that we know about Shakespeare’s methods seems to confirm that he followed this approach.
But his revisions didn’t end there, either. These plays were meant for performance, and like all theatrical works, they evolved in response to rehearsals, the needs of the actors, and the reactions of the audience. (The natural fluidity of the text on the stage goes a long way toward explaining why certain plays, like King Lear, exist in radically different versions in folio or quarto form. Some scholars seem bewildered by the fact that Shakespeare could be so indifferent to his own work that he didn’t bother to finalize a definitive version of Lear, but it may not have even struck him as a problem. The plays took different shapes in response to the needs of the moment, and Shakespeare, the ultimate pragmatist, knew that there was always more where that came from.) And the idea of ongoing revision is inseparable from his conception of the world. Bloom famously talks about Shakespearean characters “overhearing” themselves, which lies at the center of his imaginative achievement: figures like Richard II and Hamlet seem to listen to themselves speaking, and they evolve and deepen before our eyes in response to what they hear in their own words. But what Bloom calls “the depiction of self-change on the basis of self-overhearing” is a lesson that could only have come out of the revision process, in which the writer figures out his own feelings through the act of rewriting. As E.M. Forster wrote in Aspects of the Novel: “How can I tell what I think till I see what I say?” Shakespeare knew this, too. And thanks to his work—and his revisions—we can echo it in our own lives: “How can we know who we are until we hear what we think?”
From Venice to Yale
In a recent issue of The New Yorker, the scholar Stephen Greenblatt offers an insightful consideration of a Shakespearean comedy toward which he—like most of us—can hardly help having mixed feelings: “There is something very strange about experiencing The Merchant of Venice when you are somehow imaginatively implicated in the character and actions of its villain.” After recalling his uncomfortable experience as a Jewish undergraduate at Yale in the sixties, Greenblatt provides a beautiful summation of the pragmatic solution at which he arrived:
I wouldn’t attempt to hide my otherness and pass for what I was not. I wouldn’t turn away from works that caused me pain as well as pleasure. Instead, insofar as I could, I would pore over the whole vast, messy enterprise of culture as if it were my birthright…I was eager to expand my horizons, not to retreat into a defensive crouch. Prowling the stacks of Yale’s vast library, I sometimes felt giddy with excitement. I had a right to all of it, or, at least, to as much of it as I could seize and chew upon. And the same was true of everyone else.
Greenblatt, of course, went on to become one of our most valuable literary critics, and his evaluation of The Merchant of Venice is among the best I’ve seen: “If Shylock had behaved himself and remained a mere comic foil…there would have been no disturbance. But Shakespeare conferred too much energy on his Jewish usurer for the boundaries of native and alien, us and them, to remain intact…He did so not by creating a lovable alien—his Jew is a villain who connives at legal murder—but by giving Shylock more theatrical vitality, quite simply more urgent, compelling life, than anyone else in his world has.”
I’ve spent more time thinking about The Merchant of Venice than all but a handful of Shakespeare’s plays, precisely because of the “excess of life” that Greenblatt sees in Shylock, which is at its most impressive in a context where it has no business existing at all. Elsewhere, I’ve argued that Shakespeare’s genius is most visible when you compare him to his sources, which he transforms so completely that it destroys the notion that he was an opportunist who simply borrowed most of his plots. The Merchant of Venice is unique because its models are somehow right there on stage, existing simultaneously with the text, since we can hardly watch it and be unaware of the contrast between the antisemitic caricature of the original and Shylock’s uncanny power. Harold Bloom captures this quality in an extraordinary passage from Shakespeare: The Invention of the Human:
I have never seen The Merchant of Venice staged with Shylock as comic villain, but that is certainly how the play should be performed…If I were a director, I would instruct my Shylock to act like a hallucinatory bogeyman, a walking nightmare flamboyant with a big false nose and a bright red wig, that is to say, to look like Marlowe’s Barabas. We can imagine the surrealistic effect of such a figure when he begins to speak with the nervous intensity, the realistic energy of Shylock, who is so much of a personality as to at least rival his handful of lively precursors in Shakespeare: Faulconbridge the Bastard in King John, Mercutio and the Nurse in Romeo and Juliet, and Bottom the Weaver in A Midsummer Night’s Dream. But these characters all fit their roles, even if we can conceive of them as personalities outside of their plays. Shylock simply does not fit his role; he is the wrong Jew in the right play.
On some level, Shylock is a darker miracle of characterization than even Hamlet or Lear, because so much of his impact seems involuntary, even counterproductive. Shakespeare had no particular reason to make him into anything more than a stock villain, and in fact, his vividness actively detracts from the logic of the story itself, as Greenblatt notes: “Shylock came perilously close to wrecking the comic structure of the play, a structure that Shakespeare only barely rescued by making the moneylender disappear for good at the end of the fourth act.” Bloom, in turn, speaks of “the gap between the human that Shakespeare invents and the role that as playmaker he condemns Shylock to act,” a cognitive divide that tells us more about his art than the plays in which every part has been revised to fit like magic. I often learn more about craft from works of art that I resist than ones with which I agree completely, which only makes sense. When we want to believe in a story’s message, we’re less likely to scrutinize its methods, and we may even forgive lapses of taste or skill because we want to give it the benefit of the doubt. (This is the real reason why aspiring authors should avoid making overt political statements in a story, which encourages friendly critics to read the result more generously than it deserves. It’s gratifying in the moment, but it also can lead to faults going unaddressed until it’s too late to fix them.) Its opposite number is a work of art that we’d love to dismiss on moral or intellectual grounds, but which refuses to let us go. Since we have no imaginable reason to grant it a free pass, its craft stands out all the more clearly. The Merchant of Venice is the ultimate example. It’s the first play that I’d use to illustrate Shakespeare’s gift at creating characters who can seem more real to us than ourselves—which doesn’t make it any easier to read, teach, or perform.
This brings us back to the figure of Greenblatt at Yale, who saw the works that pained him as an essential part of his education. He writes:
I’m now an English professor at Harvard, and in recent years some of my students have seemed acutely anxious when they are asked to confront the crueler strains of our cultural legacy. In my own life, that reflex would have meant closing many of the books I found most fascinating, or succumbing to the general melancholy of my parents. They could not look out at a broad meadow from the windows of our car without sighing and talking about the number of European Jews who could have been saved from annihilation and settled in that very space. (For my parents, meadows should have come with what we now call “trigger warnings.”) I was eager to expand my horizons, not to retreat into a defensive crouch.
The question of how students should confront the problematic works of the past is one that I don’t expect to resolve here, except by noting that The Merchant of Venice represents a crucial data point. Without it, our picture of Shakespeare—and even of his greatness as a writer—is necessarily incomplete. When it comes to matters of education, it helps to keep a simple test in mind: the humanities have an obligation to enable this kind of confrontation, while also providing the framework within which it can be processed. Instead of working forward from a set of abstract principles, perhaps we should work backward from the desired result, which is to have the tools that we need when we reach the end of the labyrinth and find Shylock waiting for us. Even if we aren’t ready for him, we may not have a choice. As Bloom observes: “It would have been better for the Jews, if not for most of The Merchant of Venice’s audiences, had Shylock been a character less conspicuously alive.”
The space between us all
In an interview published in the July 12, 1970 issue of Rolling Stone, the rock star David Crosby said: “My time has gotta be devoted to my highest priority projects, which starts with tryin’ to save the human race and then works its way down from there.” The journalist Ben Fong-Torres prompted him gently: “But through your music, if you affect the people you come in contact with in public, that’s your way of saving the human race.” And I’ve never forgotten Crosby’s response:
But somehow operating on that premise for the last couple of years hasn’t done it, see? Somehow Sgt. Pepper’s did not stop the Vietnam War. Somehow it didn’t work. Somebody isn’t listening. I ain’t saying stop trying; I know we’re doing the right thing to live, full on. Get it on and do it good. But the inertia we’re up against, I think everybody’s kind of underestimated it. I would’ve thought Sgt. Pepper’s could’ve stopped the war just by putting too many good vibes in the air for anybody to have a war around.
He was right about one thing—the Beatles didn’t stop the war. And while it might seem as if there’s nothing new left to say about Sgt. Pepper’s Lonely Hearts Club Band, which celebrates its fiftieth anniversary today, it’s worth asking what it tells us about the inability of even our greatest works of art to inspire lasting change. It’s probably ridiculous to ask this of any album. But if a test case exists, it’s here.
It seems fair to say that if any piece of music could have changed the world, it would have been Sgt. Pepper. As the academic Langdon Winner famously wrote:
The closest Western Civilization has come to unity since the Congress of Vienna in 1815 was the week the Sgt. Pepper album was released…At the time I happened to be driving across the country on Interstate 80. In each city where I stopped for gas or food—Laramie, Ogallala, Moline, South Bend—the melodies wafted in from some far-off transistor radio or portable hi-fi. It was the most amazing thing I’ve ever heard. For a brief while, the irreparably fragmented consciousness of the West was unified, at least in the minds of the young.
The crucial qualifier, of course, is “at least in the minds of the young,” which we’ll revisit later. To the critic Michael Bérubé, it was nothing less than the one week in which there was “a common culture of widely shared values and knowledge in the United States at any point between 1956 and 1976,” which seems to undervalue the moon landing, but never mind. Yet even this transient unity is more apparent than real. By the end of the sixties, the album had sold about three million copies in America alone. It’s a huge number, but even if you multiply it by ten to include those who were profoundly affected by it on the radio or on a friend’s record player, you end up with a tiny fraction of the population. To put it another way, three times as many people voted for George Wallace for president as bought a copy of Sgt. Pepper in those years.
But that’s just how it is. Even our most inescapable works of art seem to fade into insignificance when you consider the sheer number of human lives involved, in which even an apparently ubiquitous phenomenon is statistically unable to reach a majority of adults. (Fewer than one in three Americans paid to see The Force Awakens in theaters, which is as close as we’ve come in recent memory to total cultural saturation.) The art that feels axiomatic to us barely touches the lives of others, and it may leave only the faintest of marks on those who listen to it closely. The Beatles undoubtedly changed lives, but they were more likely to catalyze impulses that were already there, providing a shape and direction for what might otherwise have remained unexpressed. As Roger Ebert wrote in his retrospective review of A Hard Day’s Night:
The film was so influential in its androgynous imagery that untold thousands of young men walked into the theater with short haircuts, and their hair started growing during the movie and didn’t get cut again until the 1970s.
We shouldn’t underestimate this. But if you were eighteen when A Hard Day’s Night came out, it also means that you were born the same year as Donald Trump, who decisively won voters who were old enough to buy Sgt. Pepper on its initial release. Even if you took its message to heart, there’s a difference between the kind of change that marshals you the way that you were going and the sort that realigns society as a whole. It just isn’t what art is built to do. As David Thomson writes in Rosebud, alluding to Trump’s favorite movie: “The world is very large and the greatest films so small.”
If Sgt. Pepper failed to get us out of Vietnam, it was partially because those who were most deeply moved by it were more likely to be drafted and shipped overseas than to affect the policies of their own country. As Winner says, it united our consciousness, “at least in the minds of the young,” but all the while, the old men, as George McGovern put it, were dreaming up wars for young men to die in. But it may not have mattered. Wars are the result of forces that care nothing for what art has to say, and their operations are often indistinguishable from random chance. Sgt. Pepper may well have been “a decisive moment in the history of Western civilization,” as Kenneth Tynan hyperbolically claimed, but as Harold Bloom reminds us in The Western Canon:
Reading the very best writers—let us say Homer, Dante, Shakespeare, Tolstoy—is not going to make us better citizens. Art is perfectly useless, according to the sublime Oscar Wilde, who was right about everything.
Great works of art exist despite, not because of, the impersonal machine of history. It’s only fitting that the anniversary of Sgt. Pepper happens to coincide with a day on which our civilization’s response to climate change will be decided in a public ceremony with overtones of reality television—a more authentic reflection of our culture, as well as a more profound moment of global unity, willing or otherwise. If the opinions of rock stars or novelists counted for anything, we’d be in a very different situation right now. In “Within You Without You,” George Harrison laments “the people who gain the world and lose their soul,” which neatly elides the accurate observation that they, not the artists, are the ones who do in fact tend to gain the world. (They’re also “the people who hide themselves behind a wall.”) All that art can provide is private consolation, and joy, and the reminder that there are times when we just have to laugh, even when the news is rather sad.
My ten great books #3: The Magic Mountain
Whenever I think of Thomas Mann’s The Magic Mountain, I always begin with the blankets. They’re a pair of lovely camel-hair blankets, “extra long and wide, in a natural beige fabric that was delightfully soft to the touch,” used by the residents of a sanitarium in the Alps while lounging on their balconies for the daily rest cure, which can last for hours. They certainly sound cozy:
Whether it was the texture of the cushions, the perfect slant of the back support, the proper height and width of the armrests, or simply the practical consistency of the neck roll—whatever it was, nothing could possibly have offered more humane benefits for a body at rest than this splendid lounge chair.
If you can relate to the appeal of those blankets—and of their promise of a life spent in blissful inactivity—you can begin to grasp what makes this novel so fascinating, despite its imposing appearance. As I’ve mentioned before, The Magic Mountain may be the least inviting of all major twentieth-century novels: it lacks the snob appeal of Ulysses or Proust, its structure is classical and crystalline, and a plot summary doesn’t exactly make it sound like a page-turner. The first necessary step is a leap of the imagination, a willingness to acknowledge the part of yourself that, like the young Hans Castorp, is drawn to the idea of giving up all advancement, all ambition, all action, for the sake of a life spent in the confines of a comfortable chair. Hans’s reasoning may not be airtight, but it’s hard to deny its power, especially in the decade before the First World War:
On the whole, however, it seemed to him that although honor had its advantages, so, too, did disgrace, and that indeed the advantages of the latter were almost boundless.
In the end, Hans, a perfectly healthy young man, ends up staying at the sanitarium for seven years. Of course, both he and the reader soon find that this apparent retreat into inactivity is secretly a plunge into something else. Despite its unlikely subject matter, The Magic Mountain vibrates on every page with life, intelligence, and insight. Mann likes to remind us, a bit too insistently, that Hans is “ordinary,” but really, as Harold Bloom points out, he’s immensely likable and curious, and you come to identify with him enormously. The story in which he finds himself has often been called a novel of ideas, and it is, but it’s much more: Mann stuffs it with compelling set pieces—Walpurgis Night, Hans’s nearly fatal misadventure in the snowstorm, the séance, the duel between Naphta and Settembrini—that would be high points in any novel, and it isn’t hard to see why the book was a massive bestseller in its time. Like Proust, Mann has useful insights into a dazzling variety of subjects, ranging from medicine to music to the nature of time, even as he depicts a world in which these ideas are on the verge of being destroyed. (As Clive James wrote: “The worst you can say about Thomas Mann is that his ego was so big he took even history personally; but at least he knew it was history.”) The characters are rendered with uncanny vividness: when you’re done, you feel as if you’ve passed half a lifetime in their company, and the memory is charged with nostalgia, longing, and regret. It took me a long time to come around to this book, and it sat unread on my shelf for years. When I finally started it for real, it was with a distinct sense of obligation. And what I found, much to my surprise, was that it was the novel for which I’d been searching my entire life.
Hamlet’s birthday
Last year, on my birthday, I wrote a post reflecting on how it felt to turn thirty-five, drawing liberally on The Divine Comedy, which opens when Dante is the same age—or, as he puts it, “When I had journeyed half of our life’s way.” When I look back, the comparison seems even more forced now than it did then, but it came out of a place of real feeling. I was going through a rough period as a writer, after a number of projects had failed to gain traction, and I was thinking more intensely than usual about what might come next. “A human life,” I wrote at the time, “makes a pattern that none of us can predict. And even as we reach the halfway point, its true shape may only be beginning.” When I typed those words, there was an element of wishful thinking involved, but they turned out to be more true than I could have guessed. Today, I’m working on a book that I couldn’t possibly have anticipated a year ago, and I’m already feeling the impact. In startup jargon, it was a career pivot, or a course correction, and although it emerged naturally from my background and interests, it still took me by surprise. In all likelihood, Astounding will turn out to be the most interesting book I’ve ever written, or ever will, which means that when I wrote that birthday post, I was on the verge of providing an inadvertent case study of how even the most considered plan can continue to generate surprises long after you think its outlines have been fixed. Which, I suppose, is what Dante was saying all along.
It might seem strange to use the age of a literary character as a benchmark for evaluating your own life, but it’s no weirder than measuring yourself against peers your own age or, ugh, even younger, which all writers inevitably do. (My favorite observation on the subject comes courtesy of Tom Lehrer: “It is a sobering thought, for example, that when Mozart was my age, he had been dead for two years.”) And it isn’t just Dante who inspires this kind of reflection. You can hear an echo of it in the trendy notion of “the Jesus year,” which, if anything, is even more pretentious. Most intriguing of all is the case of Hamlet, whose age is as vague as Dante’s is precise. In the first four acts of the play that bears his name, Hamlet strikes us, as Harold Bloom puts it, as “a young man of about twenty or less,” which squares neatly with the fact that he’s a student at Wittenberg University. Yet in Act V, the gravedigger explicitly says that the prince is thirty. This has been explained away as a mistake in the text or an artifact of Shakespeare’s repeated revisions, which overlooks how psychologically and dramatically sound it is: the Hamlet of the last act seems far wiser and more mature than the one we’ve met before, and I actually prefer the joke theory that he somehow ages a decade or more in his brief trip overseas. Hamlet has undergone a dramatic change in his absence, and his illogical increase in age is a subliminal clue as to how we’re supposed to perceive his transformation.
And that curious fusion of the twenty- and thirty-year-old versions of the prince hints at one of the most unforgettable qualities of his character, even as it also explains why the actors with the ability to play him tend to be closer to forty. In Shakespeare: The Invention of the Human, Bloom notes that “no one else in all Shakespeare seems potentially so free as the crown prince of Denmark,” and he goes on to list a few of the possibilities:
There is a bewildering range of freedoms available to Hamlet: he could marry Ophelia, ascend to the throne after Claudius if waiting was bearable, cut Claudius down at almost any time, leave for Wittenberg without permission, organize a coup (being the favorite of the people), or even devote himself to botching plays for the theater. Like his father, he could center upon being a soldier, akin to the younger Fortinbras, or conversely he could turn his superb mind to more organized speculation, philosophical or hermetic, than has been his custom. Ophelia describes him, in her lament for his madness, as having been courtier, soldier, and scholar, the exemplar of form and fashion for all Denmark. If The Tragedy of Hamlet, Prince of Denmark is “poem unlimited,” beyond genre and rules, then its protagonist is character unlimited, beyond even such precursors as the biblical David or the classical Brutus. But how much freedom can be afforded Hamlet by a tragic play? What project can be large enough for him?
But that’s how everyone feels at twenty. Or at least it’s how I did. You think you’re capable of anything, and there were times in my twenties when I felt as potentially free as Hamlet at the beginning of the play. But age closes off the number of paths available, one by one, until you’re more like Hamlet at the end: resigned, with equanimity or otherwise, to the role that fate has assigned to you. That’s why Hamlet continues to fascinate us. He’s our greatest image of youthful potential, until he isn’t, which is why he somehow manages to seem both twenty and thirty within the span of a few weeks. Yet that juxtaposition, for all its absurdity, gets at something fundamental in how we all see ourselves: as a superimposition of all the people we were in the past, coexisting together in the more limited person we necessarily embody today. (Or as Frank Sinatra says more eloquently in Sinatra at the Sands: “Now I guess you folks have heard, or read, or been told somewhere that recently I became fifty years old, and I’m here to tell you right now, it’s a dirty Communist lie. Direct from Hanoi—it came right outta there! My body may be fifty, but I’m twenty-eight!” Sinatra goes on to add: “And I would further like to say that I’d be twenty-two if I hadn’t spent all those years drinking with Joe E. Lewis, who nearly wrecked me.”) Shakespeare, as it happens, was thirty-seven when he wrote Hamlet, or just a year older than I am now. That’s enough to make a mockery of anyone’s ambitions, but it also gives me hope. We’re all walking the same path through the forest—and our greatest consolation is that Dante and Shakespeare have been there before us.
Is storytelling kid’s stuff?
Over the last few months, I’ve been spending a lot of time with my daughter at the main branch of the Oak Park Public Library. When you’re a full-time dad, you’re constantly in search of places where your child can romp happily for half an hour without continuous supervision, and our library fills that need admirably: it’s full of physical books, toys, activities, and new faces and friends, so I can grab a chair in the corner and take a richly deserved minute or two for myself while Beatrix goes exploring within my line of sight. Sometimes, when it looks like she’ll be staying put for a while, I’ll get up to browse the books on the shelves, both with an eye to my daughter’s reading and to my own. I’ll often pick up a title I remember and find myself lost in it all over again, and it’s a pleasure to discover that old favorites as different as The Way Things Work, The Eleventh Hour, and D’Aulaires’ Norse Myths have lost none of their fascination. There’s a considerable overlap between what kids and adults find interesting, and the best children’s books, like the best movies, can hold anyone’s attention.
I recently found myself thinking about this more intently, after discovering a shelf at the library that I’d somehow overlooked before. It’s a section devoted to classic literature for kids, and all of the usual suspects are here, from Anne of Green Gables to Alice’s Adventures in Wonderland—the latter of which is still the best children’s book ever written, and possibly, as Alan Perlis observed, the best book ever written about anything. But there were also many titles that weren’t originally written for younger readers but have been retroactively absorbed into the young adult canon. There was a generous selection of Dickens, for example, not far from Richmond Lattimore’s translation of the Iliad and the collected stories of Edgar Allan Poe, and the same process has already gone to work on J.R.R. Tolkien. Novels of an earlier era that were written by grownups for other grownups start to look like children’s books: neither The Last of the Mohicans nor Huckleberry Finn nor To Kill a Mockingbird was conceived as a work for young readers, but now we’re as likely to see them here as Laura Ingalls Wilder.
There are a lot of possible explanations for this phenomenon, none of which are especially mysterious. Most of these books were four-quadrant novels in the first place: Dickens, like J.K. Rowling, was devoured by everyone at the time who could read. Many feature younger protagonists, so we naturally tend to classify them, rightly or wrongly, as children’s books, which also applies to stories, like the Greek myths, that contain elements of what look today like fantasy. And a lot of them are on school curricula. But there’s also a sense in which the novel, like any art form, advances in such a way as to make its most innovative early examples feel a bit naive, or like more primal forms of storytelling that appeal to readers who are still working their way into the medium. Plato says that if the mythical sculptor Daedalus were to appear and start making statues again, we’d all laugh at him, and something similar seems to take place within literature. As the opening paragraph of James Wood’s recent review of the new David Mitchell novel makes clear, critics have a way of regarding storytelling as somewhat suspicious: “The embrace of sheer occurrence, unburdened by deeper meaning.” It feels, in short, like kid’s stuff.
But it isn’t, not really, and it’s easy to invert the argument I’ve given above: the books that last long enough to be assimilated into children’s literature are the ones that offer universal narrative pleasures that have allowed them to survive. Don Quixote can be found in the children’s section, at least in its abridged form, but it’s also, as Harold Bloom says, “the most advanced work of prose fiction we have.” A bright kid wants to read Homer or Poe because of the virtues that make them appealing to everyone—and it’s worth noting that most libraries keep two sets of each on hand, one in the children’s section, the other for adults. Every generation produces reams of stories written specifically for children, and nearly all of them have gone out of print, leaving only those books that pursued story without regard for any particular audience. The slow creep of classic literature into the children’s library is only a mirror image of the more rapid incursion, which we’ve seen in recent years, of young adult literature into the hands of grownups, and I don’t think there’s any doubt as to which is the most positive trend. But they’re both reflections of the same principle. Storytelling breaks through all the categories we impose, and the real measure of value comes when we see what children are reading, on their own, a hundred years from now.
The hero paradox
Every year, the Academy Awards telecast makes us sit through a bunch of pointless montages, and every year, we get to complain about it. As I mentioned last week, I’ve long since gotten over most of the weird choices made by the Oscars—I like to remind myself that the ceremony isn’t designed for the television audience, but for the movers and shakers sitting in the auditorium itself—and I’ve resigned myself to the prospect of a few pointless production numbers. But the montages always seem particularly strange. They don’t add much in the way of entertainment value, and the opportunity cost for what is already an overlong show is unforgivably high: one fewer montage, and perhaps we might have had room for Dennis Farina in the In Memoriam reel, not to mention the canceled appearance by Batkid. This year’s ceremony, with its “salute to heroes” theme, resulted in an even more random assortment of clips than usual: here’s Gandhi, and Lawrence of Arabia, and just as we start to think there’s a pattern emerging, here’s Sidney Poitier as Mr. Tibbs. (I actually had to look up In the Heat of the Night to reassure myself that it hadn’t been based on a true story.)
The result was inexplicable enough that it inspired Todd VanDerWerff of The A.V. Club to tweet: “Next year: A tribute to protagonists!” But it also raises the larger question of what a hero really is, at least in terms of what we look for in storytelling. From a producer’s point of view, the answer is simple: a hero is the actor with the greatest amount of screen time, or whose face takes up the most room on the poster. (Or as the producer Scott Rudin once said when asked what a movie was about: “It’s about two movie stars.”) A writer might put it somewhat differently. The protagonist of a movie is the character whose actions and decisions drive the plot, and if he or she happens to embody qualities that we associate with heroism—courage, integrity, selflessness, resourcefulness—it’s because these attributes lend themselves both to wishful identification from the audience and to interesting choices and behavior within the confines of the story. All things being equal, a brave, committed individual will end up doing things on camera that we’ll hopefully want to watch. It has nothing to do with morality; it’s a logistical choice that results in more entertaining narratives. Or at least it should be.
The trouble, of course, is that when you’re not sure about your own story, you tend to fixate more on what the hero is than the more crucial matter of what he does. Screenwriters are always told to make their leading characters more heroic and likable, as if this were something that could be separated from the narrative itself. At worst, the movie simply serves up a chosen one, either explicitly or implicitly, which is often an excuse to give us a protagonist who is interesting and important just because we’re told he is. Sometimes, this problem can be a subtle one. Watching The Hunger Games: Catching Fire for the first time over the weekend, I felt that even though Jennifer Lawrence sells the hell out of the part, Katniss Everdeen herself is something of a wet blanket. This isn’t anyone’s fault: Katniss as written is almost unplayable, since she needs to be admirable enough to inspire a revolution and carry a franchise, vulnerable enough to serve as one corner of a love triangle, and a resourceful warrior who also hates the idea of killing. That’s a lot for any one character to shoulder, and it means that poor Katniss herself is often the least interesting person on the screen.
In general, though, it’s hard for a hero to come to life in the way a more incidental character can, simply because he’s under so much pressure to advance the plot. The great character actor Stephen Tobolowsky hinted at this last week on Reddit:
The difference between character actors and the leading men is that everything the leading men do is on film. Character actors have to invent that life off screen and bring that reality on screen. It’s much more imaginative work and the hours are better.
That’s why we often find ourselves wishing that we could spend more time with the supporting cast of a television show: they’re so much more full of life and vitality than the lead, whose every action is designed to carry forward a huge, creaking machine. Being a hero is a thankless role, both in fiction and in real life, and it inevitably leads to a loss of freedom, when in theory the hero should be more free than anyone else. As Harold Bloom observes of Hamlet, he could be anything in the world, but he’s doomed to play out the role in which he has been cast. Finding a way to balance a hero’s narrative burden with the freedom he needs to come alive in the imagination is one of a writer’s greatest challenges. And if the movies succeeded at this more often, those montages at the Oscars would have made a lot more sense.
“And they lived happily ever after…”
In old age, I accept unhappy endings in Shakespearean tragedy, Flaubert, and Tolstoy, but back away from them in lesser works. Desdemona, Cordelia, Emma Bovary, and Anna Karenina are slain by their creators, and we are compelled to absorb the greatness of the loss. Perhaps it trains us to withstand better the terrible deaths of friends, family, and lovers, and to contemplate more stoically our own dissolution. But I increasingly avoid most movies with unhappy endings, since few among them aesthetically earn the suffering they attempt to inflict upon us.
That’s Harold Bloom, and I’m starting to feel the same way. For most of my life, I’ve never shied away from works of art with unhappy endings: in movies, the list begins with Vertigo, the greatest of all sucker punches ever inflicted on an audience, and includes films as different as The Red Shoes, The Third Man, and Dancer in the Dark. When I’m given a choice between ambiguous interpretations, as in Inception, I’m often inclined to go with the darker reading. But as time goes on, I’ve found that I prefer happy endings, both from a purely technical standpoint and as a matter of personal taste.
Which isn’t to say that unhappy endings can’t work. Yesterday, I cited Bruno Bettelheim on the subject of fairy tales, which invariably end on an unambiguously happy note to encourage children to absorb their implicit lessons about life. As adults, our artistic needs are more complicated, if not entirely dissimilar. An unhappy ending of the sort that we find in the myth of Oedipus or Madame Bovary is psychological training of a different sort, preparing us, as Bloom notes, for the tragic losses that we all eventually experience. Just as scary movies acquaint us with feelings of terror that we’d rarely feel under ordinary circumstances, great works of art serve as a kind of exercise room for the emotions, expanding our capacity to feel in ways that would never happen if we only drew on the material of our everyday lives. If the happy endings in fairy tales prepare and encourage children to venture outside the safe confines of family into the wider world, unhappy endings in adult fiction do the opposite: they turn our attention inward, forcing us to scrutinize aspects of ourselves that we’ve been trained to avoid as we focus on our respectable adult responsibilities.
In order for this to work, though, that unhappiness has to be authentically earned, and the number of works that pull it off is vanishingly small. Endings, whether happy or unhappy, are very hard, and a lot of writers, including myself, are often unsure if they’ve found the right way to end a story. But given that uncertainty, it’s wisest to err on the positive side, and to ignore the voice that insists that an unhappy ending is somehow more realistic and uncompromising. In fact, a bleak, unearned ending is just as false to the way the world works as an undeserved happy one, and at greater cost to the reader. A sentimental happy ending may leave us unsatisfied with the author’s work, but that’s nothing compared to our sense of being cheated by a dark conclusion that arises from cynicism or creative exhaustion. Simply as a matter of craft, stories work best when they’re about the restoration of order, and one that ends with the characters dead or destroyed by failure technically meets that requirement. But for most writers, I’d argue that being able to restore a positive order to the tangle of complications they’ve created is a sign of greater artistic maturity.
And while it’s nice to believe that a happy or unhappy ending should flow naturally from the events that came before, a casual look at the history of literature indicates that this isn’t the case. Anna Karenina survived in Tolstoy’s first draft. Until its final act, Romeo and Juliet isn’t so different in tone from many of Shakespeare’s comedies, and if the ending had been changed to happily reunite the two lovers, it’s likely that we’d have trouble imagining it in any other way—although it’s equally likely that we’d file it permanently among his minor plays. On the opposite end of the spectrum, The Winter’s Tale is saved from becoming a tragedy only by the most arbitrary, unconvincing, and deeply moving of authorial contrivances. In practice, the nature of an ending is determined less by the inexorable logic of the plot than by the author’s intuition when the time comes to bring the story to a close, and as we’ve seen, it can often go either way. A writer has no choice but to check his gut to see what feels right, and I don’t think it’s too much to say that the burden lies with the unhappy ending to prove that it belongs there. Any halfway competent writer can herd his characters into the nearest available chasm. But when in doubt, get them out.
On the novelist’s couch
Recently, I’ve been thinking a lot about Freud. Psychoanalysis may be a dying science, or religion, with its place in our lives usurped by neurology and medication, but Freud’s influence on the way we talk about ourselves remains as strong as ever, not least because he was a marvelous writer. Harold Bloom aptly includes him in a line of great essayists stretching back to Montaigne, and he’s far and away the most readable and likable of all modern sages. His writings, especially his lectures and case notes, are fascinating, and they’re peppered with remarkable insights, metaphors, and tidbits of humor and practical advice. Bloom has argued convincingly for Freud as a close reader of Shakespeare, however much he might have resisted acknowledging it—he believed until the end of his days that Shakespeare’s plays had really been written by the Earl of Oxford, a conjecture known endearingly as the Looney hypothesis—and he’s as much a prose poet as he is an analytical thinker. Like most geniuses, he’s as interesting in his mistakes as in his successes, and even if you dismiss his core ideas as an ingeniously elaborated fantasy, there’s no denying that he constructed the central mythology of our century. When we talk about the libido, repression, anal retentiveness, the death instinct, we’re speaking in the terms that Freud established.
And I’ve long been struck by the parallels between psychoanalysis and what writers do for a living. Freud’s case studies read like novels, or more accurately like detective stories, with the analyst and the patient navigating through many wild guesses and wrong turns to reach the heart of the mystery. In her classic study Psychoanalysis: The Impossible Profession, Janet Malcolm writes:
In the Dora paper, Freud illustrates the double vision of the patient which the analyst must maintain in order to do his work: he must invent the patient as well as investigate him; he must invest him with the magic of myth and romance as well as reduce him to the pitiful bits and pieces of science and psychopathology. Only thus can the analyst sustain his obsessive interest in another—the fixation of a lover or a criminal investigator—and keep in sight the benign raison d’être of its relentlessness.
To “the fixation of a lover or a criminal investigator,” I might also add “of a writer.” The major figures in a novel can be as unknowable as the patient on the couch, and to sustain the obsession that finishing a book requires, a writer often has to start with an imperfect, idealized version of each character, then grope slowly back toward something more true. (Journalists, as Malcolm has pointed out elsewhere, sometimes find themselves doing the same thing.)
The hard part, for novelists and analysts alike, is balancing this kind of intense engagement with the objectivity required for good fiction or therapy. James Joyce writes that a novelist, “like the God of the creation, remains within or behind or beyond or above his handiwork, invisible, refined out of existence, indifferent, paring his fingernails,” and that’s as fine a description as any of the perfect psychoanalyst, who sits on a chair behind the patient’s couch, pointedly out of sight. It’s worth remembering that psychoanalysis, in its original form, has little in common with the more cuddly brands of therapy that have largely taken its place: the analyst is told to remain detached, impersonal, a blank slate on which the patient can project his or her emotions. At times, the formal nature of this relationship can resemble a kind of clinical cruelty, with earnest debates, for instance, over whether an analyst should express sympathy if a patient tells him that her mother has died. This may seem extreme, but it’s also a way of guarding against the greatest danger of analysis: that transference, in which the patient begins to use the analyst as an object of love or hate, can run the other way. Analysts do fall in love with their patients, as well as patients with their analysts, and the rigors of the psychoanalytic method are designed to anticipate, deflect, and use this.
It’s in the resulting dance between detachment and connection that psychoanalysis most resembles the creative arts. Authors, like analysts, are prone to develop strong feelings toward their characters, and it’s always problematic when a writer falls in love with the wrong person: witness the case of Thomas Harris and Hannibal Lecter—who, as a psychiatrist himself, could have warned his author of the risk he was taking. Here, authors can take a page from their psychoanalytic counterparts, who are encouraged to turn the same detached scrutiny on their own feelings, not for what it says about themselves, but about their patients. In psychoanalysis, everything, including the seemingly irrelevant thoughts and emotions that occur to the analyst during a session, is a clue, and Freud displays the same endless diligence in teasing out their underlying meaning as a good novelist does when dissecting his own feelings about the story he’s writing. Whether anyone is improved by either process is another question entirely, but psychoanalysis, like fiction, knows to be modest in its moral and personal claims. What Freud said of the patient may well be true of the author: “But you will see for yourself that much has been gained if we succeed in turning your hysterical misery into common unhappiness.”
Honor among writers
Writers, by nature, are highly competitive. In principle, writing isn’t a contest, but it certainly feels like one, and in practical terms, you find yourself competing with other contemporary writers for all sorts of things that seem available only in finite amounts: attention from editors, book sales, awards, an intangible sense of where you rank in the literary pecking order. Near the top, among the handful of great novelists in any generation, the sense of being a member of a tiny club—in which the old guard is periodically pushed out to make room for the new—can turn into a weird kind of office politics. And don’t think that the authors themselves aren’t acutely conscious of where they stand. Shortly before his death, John Updike, speaking of Philip Roth, said this to the Telegraph:
Philip really has the upper hand in the rivalry, as far as I can tell…I think in a list of admirable novelists there was a time when I might have been near the top, just tucked under Bellow.
It’s an illuminating glimpse of what Updike thought of Roth, but I also like that offhand reference to a “list of admirable novelists,” to which Updike seems to have devoted a fair amount of thought.
I found this quote in Claudia Roth Pierpont’s recent piece in The New Yorker about the friendships between Roth and his contemporaries, including Bellow, Updike, and others, with material drawn from her acclaimed new Roth biography. (At this point, Pierpont might as well legally change her name to “Claudia Roth Pierpont, no relation.”) The picture we get from the profile is that of a circle of astoundingly talented writers who were pleased to have rivals worthy of their time, but who weren’t always entirely comfortable in one another’s company. You get a sense of what it must have been like for two ambitious writers of the same age—Updike was “a year and a day” older than Roth—to rub elbows from Roth’s description of Updike’s “leaping, kangaroo-like energy” as a younger man, followed at once by the wry observation: “I was not un-kangaroo-like myself.” It’s hard for two kangaroos to share a room, especially at a New York dinner party, and for all their mutual admiration, there was also an underlying wariness. Roth referred to the two of them as “friends at a distance,” and when asked by the Telegraph if he and Roth were friends, Updike responded: “Guardedly.”
Much the same went for Roth and Saul Bellow, at least in the early days. Ultimately, their acquaintance blossomed into a lasting friendship, but Bellow seems to have initially held the younger writer—eighteen years his junior—at arm’s length. Harold Bloom has famously written of the anxiety of influence, that almost Oedipal ambivalence with which artists regard the predecessors whom they admire and long to imitate, and when two authors are alive at the same time, it runs both ways: a literary mentorship often has less in common with Finding Forrester than with All About Eve. In time, Bellow warmed up to Roth, thanks in part to the influence of his wife, Janis Freedman Bellow, whom Roth imagines saying: “What’s the matter, this guy really likes you, he really admires you, he wants to be your friend.” Freedman Bellow demurs: “I had that conciliatory gene. But it’s not like I was kicking him under the table.” (Bellow’s guardedness toward Roth reminds me a little of how Maxim Gorky described Tolstoy and another rival: “Two bears in one den.” In Tolstoy’s case, the rival was God.)
Yet this kind of rivalry is essential for the cause of art, since it forces the writers themselves to operate at a higher level. Pierpont compares Roth and Updike, fruitfully, to Picasso and Matisse, “wary competitors who were thrilled to have each other in the world to up their game,” and it’s a feeling to which many authors can relate. In his essay “Some Children of the Goddess,” Norman Mailer memorably recalls his feelings about James Jones, one of the few novelists he seemed willing to consider as a peer, and the failure of Jones’s novel Some Came Running:
I was in the doldrums, I needed a charge of dynamite. If Some Came Running had turned out to be the best novel any of us had written since the war, I would have had to get to work. It would have meant the Bitch was in love with someone else, and I would have had to try to win her back.
Artistic rivalry can be murder on the writers themselves—Updike and Roth eventually had a disagreement that led them to break off contact for the last ten years of Updike’s life—but it’s undeniably good for readers, even if the immediate result is what Bellow himself once observed: “Writers seldom wish other writers well.”
My ten great books #3: The Magic Mountain
(Note: For the rest of the month, I’m counting down the ten works of fiction that have had the greatest influence on my life as an author and reader, in order of their first publication. For earlier entries in the series, please see here.)
Whenever I think of Thomas Mann’s The Magic Mountain, I always begin with the blankets. They’re a pair of lovely camel-hair blankets, “extra long and wide, in a natural beige fabric that was delightfully soft to the touch,” and they’re used by the residents of a sanitarium in the Alps while lounging on their balconies for their daily rest cure, which can last for hours. They certainly sound cozy:
Whether it was the texture of the cushions, the perfect slant of the back support, the proper height and width of the armrests, or simply the practical consistency of the neck roll—whatever it was, nothing could possibly have offered more humane benefits for a body at rest than this splendid lounge chair.
If you can understand the appeal of those blankets—and of their promise of a life spent in glorious inactivity—you can begin to grasp what makes this novel so fascinating, despite its daunting appearance. As I’ve mentioned before, The Magic Mountain may be the least inviting of all major twentieth-century novels: it lacks the snob appeal of Ulysses or Proust, its structure is classical and crystalline, and a plot summary doesn’t exactly make it sound like a page-turner. The first necessary step is a leap of the imagination, a willingness to acknowledge the part of yourself that, like the young Hans Castorp, is drawn to the idea of giving up all ambition, all advancement, all action, for the sake of a life spent in the confines of a comfortable chair. Hans Castorp’s reasoning may not be airtight, but it’s hard to deny its power: “On the whole, however, it seemed to him that although honor had its advantages, so, too, did disgrace, and that indeed the advantages of the latter were almost boundless.”
In the end, Hans, a perfectly healthy young man, ends up staying at the sanitarium for seven years. Of course, what he and the reader soon discover is that this retreat into inactivity is secretly a plunge into something else. Despite its unlikely subject matter, The Magic Mountain vibrates on every page with life, intelligence, and insight. Mann likes to remind us, a bit too insistently, that Hans is “ordinary,” but really, as Harold Bloom points out, he’s immensely likable and curious, and you come to identify with him enormously. The story in which he finds himself has often been called a novel of ideas, and it is, but it’s much more: Mann stuffs it with compelling set pieces—Walpurgis Night, Hans’s nearly fatal misadventure in the snowstorm, the séance, the duel between Naphta and Settembrini—that would be high points in any novel, and it isn’t hard to see why the book was a huge bestseller in its time. Like Proust, Mann has useful insights into a dazzling variety of subjects, ranging from medicine to music to the nature of time, even as he depicts a world in which these ideas are on the verge of being destroyed. The characters are rendered with uncanny vividness, and when you’re done, you feel as if you’ve passed half a lifetime in their company, and the memory is charged with nostalgia, longing, and regret. It took me a long time to come around to this book, and it sat unread on my shelf for years. When I finally started it for real, it was with a distinct sense of obligation. And what I found, much to my surprise, was that it was the novel I’d been looking for my entire life.
The better part of valor
This morning, I published an essay in The Daily Beast on Karl Rove’s curious affection for the great Argentine author Jorge Luis Borges, a connection that I’ve found intriguing ever since Rove mentioned it two years ago in a Proust questionnaire for Vanity Fair. Borges, as I’ve mentioned before, is one of my favorite writers, and it’s surprising, to say the least, to find myself agreeing with Rove on something so fundamental. It’s also hard to imagine two men who have less in common. While Rove jumped with both feet into a political career, and was cheerfully engaging in dirty tricks before he was out of college, Borges survived the Perón regime largely by keeping his head down, and in later years seemed pointedly detached from events in Argentina. It’s a mistake to think of him as an entirely apolitical writer—few authors of his time wrote more eloquently against the rise of Nazism—but it’s clear that for much of his life, he just wanted to be left alone. As a result, he’s been criticized, and not without reason, for literally turning a blind eye to the atrocities of the Dirty War, claiming that his loss of eyesight made it impossible to read the newspapers.
This policy of avoidance is one that we often see in the greatest writers, who prudently decline to engage in politics, often for reasons of survival. Shakespeare was more than willing, when the occasion demanded it, to serve as the master of revels for the crown, but as Harold Bloom points out, he carefully avoided any treatment of the political controversies of his time, perhaps mindful of the cautionary fate of Christopher Marlowe. Discretion, as Falstaff advises us, is the better part of valor, and also of poetry, at least if the poet wants to settle into a comfortable retirement in Stratford. Dante, Shakespeare’s only peer among Western poets, might seem like an exception to the rule—he certainly didn’t shy away from political attacks—but his most passionate jeremiads were composed far from Florence. “Beyond a doubt he was the wisest, most resolute man of his time,” Erich Auerbach writes. “According to the Platonic principle which is still valid whenever a man is manifestly endowed with the gift of leadership, he was born to rule; however, he did not rule, but led a life of solitary poverty.”
Borges, too, chose exile, spending his declining years overseas, and finally died in Geneva. It’s a pattern that we see repeatedly in the lives of major poets and artists, especially those who emerge from nations with a history of political strife. The great works of encyclopedic fiction, as Edward Mendelson reminds us, tend to be written beyond the borders of the countries they document so vividly: the closing words of Ulysses, the encyclopedia of Dublin, are “Trieste-Zurich-Paris.” This is partly the product of sensible caution, but it’s also a professional necessity. Most creative work is founded on solitude, quiet, and a prudent detachment from the world, and any degree of immersion in politics tends to destroy the delicate thread of thought necessary for artistic production. Even when writers are tempted by worldly power, they’re usually well aware of the consequences. Norman Mailer, writing of his doomed run for mayor of New York, observes of himself, in the third person: “He would never write again if he were Mayor (the job would doubtless strain his talent to extinction) but he would have his hand on the rump of History, and Norman was not without such lust.”
In the end, as Mailer notes acidly, “He came in fourth in a field of five, and politics was behind him.” Which is all for the best—otherwise, we never would have gotten The Executioner’s Song or Of a Fire on the Moon, not to mention Ancient Evenings, which is the sort of foolhardy masterpiece, written over the course of a decade, that could only be written by a man whose political ambitions have been otherwise frustrated. Besides, as I’ve pointed out elsewhere, novelists don’t make good politicians. And their work is often the better for it. In the case of Borges, there’s no question that much of what makes him great—his obsession with ideas, his receptivity to the structures of speculative fiction, his lifelong dialogue with all of world literature—arose from this tactical refusal to engage in politics. Unable or unwilling to criticize the government, he turned instead to a life of ideas, leaving behind a body of extraordinary fiction defined as much by what it leaves out as by what it includes. And I don’t think any sympathetic reader would want it any other way.
What I really learned from my classical education
Last week, while discussing Cormac McCarthy’s Blood Meridian, I wrote: “This isn’t an unstructured novel by any means, but the structure is paratactic rather than periodic—the plot doesn’t advance so much as proceed inexorably from one bloody set piece to the next.” I chose the terms “paratactic” and “periodic” without thinking, but the more I look at this sentence, the more amused I feel at the return of these particular words. They are, in fact, fossils from my classical education, in which I spent a ridiculous amount of class time dividing authors into one of these two camps. (For those who spent their time in college in more useful ways, “periodic” is a style of prose, typified by Cicero, in which the meaning of a thought depends on the structure of the entire sentence, and often isn’t clear until the very last word, while “paratactic” refers to a style in which relatively short, separate ideas are starkly connected like links in a chain.) And this represents only one example of how my thinking has been shaped by the education I received—although not always in the way I expected at the time.
When I entered college, I wasn’t sure what I wanted to concentrate in, but I knew that I wanted to be a writer. On my application, I wrote “English” as my expected major, but over the course of my freshman year, I cycled variously, and with greater or lesser degrees of seriousness, among psychology, the history of science, and even, briefly, engineering. (I nixed the last of these after realizing that if I were really serious about engineering, I should have gone to the college just down the road.) I arrived at Classics for several reasons: 1) I knew that I wanted to learn Latin and Greek, but quickly realized that since I was starting from scratch, I couldn’t just study them as electives. 2) My college had the most prestigious Department of Classics in the country, so if nothing else, I’d be studying with the best. 3) I’d just read a book by the classicists Victor Davis Hanson and John Heath lamenting the decline of classical education, and I’m a sucker for a lost cause. 4) Most of all, I wanted a major that would give me the kind of broadly generalist education I thought I needed as a writer, and a field that required at least a superficial grounding in grammar, linguistics, rhetoric, history, art, literature, religion, archaeology, and other subjects seemed to fit the bill.
Well, now it’s more than ten years since I graduated, which seems like a good time to take stock of what I really learned from the experience. If I’d hoped to emerge with a permanent knowledge of Greek and Latin, that unfortunately wasn’t the case: it took a long time to ramp up to the point where I was at all competent in these languages, and although there was a period of about six months when I could capably sight-read Euripides, it didn’t last long. (I’m secretly convinced that I could probably regain a lot of these skills if I sat down and tried it, but I haven’t yet put this to the test.) I read a lot of great literature, but aside from Homer, Plato, Antigone, the Gospels, and possibly Virgil, their lasting impact hasn’t been as significant as that of other writers I’ve read before and since. While I still believe strongly in the importance of classical education, I’m no longer as dogmatic about this as I used to be—and it certainly didn’t help to realize that Victor Davis Hanson is basically a crazy neoconservative. Classics is undoubtedly the key to understanding much of the literature that followed, but for most of us, reading these works in good translations is probably more than sufficient. And as Harold Bloom likes to point out, even the greatest literature isn’t likely to turn us into better citizens.
Yet if I had the chance to go back and try again, I’d do exactly the same thing. Classics is in my blood, in ways I’ve internalized so completely that the effects are often invisible. The most important college class I ever took, at least in terms of how it affected my subsequent life, was a single course in introductory Latin prose composition, in which I hacked my way ineptly through Bradley’s Arnold and laboriously composed short paragraphs in a dead language. I was never much of a Latinist, but the experience indelibly shaped my style as a writer, to the point where it sometimes seems too proper—I’ve had to work hard to restore the informality and roughness that fiction sometimes requires. It gave me a permanent distrust of semicolons. And it provided me with a critical vocabulary and toolbox that I use to evaluate everything I write. When I’m working on a short story or a chapter in a novel, I’m never consciously taking classical examples into account, but they’ve quietly enforced many of my feelings about narrative clarity, transparency, and elegance. Would I have come to the same conclusions anyway? Probably—but I couldn’t have implemented them nearly as well. Classics wasn’t an end in itself, but an essential starting point. And even as I’ve largely left it behind, I’m grateful that I had a chance to begin there.
The writer’s toolbox
Last week, I wrote about my enduring fascination with the Great Books of the Western World, having been mildly obsessed with this set ever since first encountering its fifty-four volumes in my high school library. What I didn’t really talk about is how this collection, and the idea of canons and reading lists in general, is intimately tied up with my identity as a writer. I’ve always known that I wanted to be a novelist, and as a result, I spent many years thinking about what a writer’s education ought to look like. What it involved, as best as I could determine, was writing as much as possible; carefully studying one’s own language, and perhaps a few others; exploring a variety of narrative art beyond the printed page, especially film and theater; traveling and seeking out other kinds of life experience; and reading as widely as possible. In my adult life, I’ve often fallen short of these high standards, but I’ve done the best I could. And as far as reading was concerned, even at the age of seventeen, it seemed clear to me that the great books were far from the worst place to start.
So was I right? Reading the great books, as Harold Bloom has pointed out, won’t make us better citizens, but will it make us better writers? The evidence, in my own experience, is mixed: if I’ve learned anything since high school, it’s that an aspiring author will learn more from writing and revising one mediocre novel than from reading a semester’s worth of the world’s classics. But if reading great books doesn’t make us better writers, it’s hard to think of anything else that will. As I’ve said before, writing is such a ridiculously competitive activity that a writer has to seek out sources of incremental advantage wherever possible, and it’s hard not to suspect that we might benefit from reading Moby-Dick or Middlemarch or Anna Karenina, even if it’s tricky to pin down why. Consciously or not, most of a writer’s life is spent acquiring the skills that he or she needs to produce good work, and in the great books, we have what looks like a very enticing toolbox, even if it’s up to the individual writer to put these tools to use.
This might be why writers tend to at least be cautiously respectful of the idea of great books. These days, we don’t hear much about the culture wars, perhaps because pundits on both the right and left are worried about being tagged as elitists. It’s worth pointing out, however, that of all the attacks that the great books have sustained over the years, very few have come from professional writers. The reason, I suspect, is that while writers know that there’s something inherently ridiculous about canons and reading lists, they also can’t afford to ignore them, at least not entirely. For most people, reading Virgil or Milton feels nice and virtuous, but it’s hard to see how useful it is, compared to, say, electrical engineering or wood shop. It’s only for the writer that the apparently contradictory goals of liberal education and vocational training are essentially the same. For a novelist who is serious about acquiring the tools that he or she needs, four years at St. John’s College is as practical as a certificate from DeVry.
So am I telling you to read the great books? Not necessarily. A reader who plows dutifully through all fifty-four volumes in the Great Books set may turn out to be a good writer, but is more likely to end up a drudge. Most novelists are more like the fox than the hedgehog; their education is eclectic, but unsystematic, with lots of wrong turns and diverting detours. True talent will take inspiration from any source: great careers have been nourished by comic books, television, and the movies, and speaking for myself, I’ve been more inspired by the works of Kubrick, Powell and Pressburger, and the like than by most of the authors I read in Latin and Greek. That said, if you’re not sure where to start, it certainly can’t hurt to begin with a reading list: even as your education takes you farther afield, Tom Jones and Tristram Shandy will always be waiting. In the end, the balance between high art and pop culture, and the canonical and the unsung, is one that every writer needs to discover on his or her own, and the fully equipped toolbox will have room for both.
The unstructured magic of Little, Big
Over the past few years, there have been few contemporary novels I approached with such anticipation, aside perhaps from Cloud Atlas, as John Crowley’s Little, Big. Harold Bloom, who praises dead authors effusively but is much more restrained about recent fiction, has famously called it one of the four or five best novels by any living writer, and the consensus seems to be that this is one of the greatest fantasy novels of all time, and certainly one of the best by an American author. Earlier this week, then, after a long, leisurely reading process periodically interrupted and resumed by other commitments, I finally finished it. And while I admire it greatly, my reaction is more complex and ambivalent than I expected, which is perhaps fitting for such a strange, pointedly elusive novel.
First, a word about structure. I love structure, perhaps because I love the movies, which depend utterly on structure for their power. Structure, at its most basic, is an author’s arrangement of narrative elements into an overall whole, which often coincides with plot, but can also reflect a different sort of logic. At its best, a novel’s structure describes a shape—a pyramid, a circle, a series of spirals—that the reader can stand back and admire, something like the Borgesian conception of the divine mind. As a result, I respond strongly both to perfectly structured conventional novels, like Coetzee’s Disgrace, and to novels that make an unusual structure seem inevitable, like Gravity’s Rainbow, in which the author’s engagement with form becomes a character in itself. And, perhaps inevitably, I have trouble enjoying novels that seem deliberately unstructured.
At first glance, Little, Big has the appearance of intricate, almost obsessive structure: six books, twenty-six chapters (half the number of weeks in a year or cards in a deck), each with its own smaller divisions. On a deeper level, however, it seems designed to provoke, then frustrate, our expectations about a conventionally shapely novel. It begins with a leisurely account of the lives of several families in an imaginary New England, hints at the existence of fairies, then abruptly skips forward twenty-five years, alternating languorous descriptions of rooms and scenery with breathless events barely glimpsed or left entirely offstage. The novel’s technique, like that of House of Leaves, is one of implication, postponement, reticence, full of clues, but no answers, with small vivid scenes that promise to break out into a larger narrative, but either remain isolated in the gorgeous swamp of language or fade decorously away.
Reading Little, Big, I was reminded that an unstructured novel is something quite different from a structureless one. Structurelessness in itself is a narrative choice, and if such a work states its intentions early on—as in Terrence Malick’s Tree of Life—it can be as satisfying as any conventional story. The reason why Little, Big often feels so frustrating is that it constantly knocks on the door of structure, only to shy away. It’s an uneasy hybrid of the shapeless family novel and conventional fantasy, with its supernatural events, prophecies, and air of intrigue, and the two elements push endlessly against each other, which can be exhilarating, but more often exhausting. To attribute this to artistic confusion or laziness, as certain commenters have done at the A.V. Club, is to give Crowley insufficient credit: every paragraph of this novel testifies to his intelligence and skill. But it’s fair to wonder if he intended to inspire such bewilderment in many, if not most, readers, while also inspiring rapturous joy in a few.
Little, Big, then, is precisely what its reputation suggests: a cult novel. And while I can’t quite count myself as a member of that cult, I’m at least one of its sympathizers. There are wonderful things here: the dense but lyrical language, the reappropriation of Rosicrucianism and Theosophy, and many of the self-contained set pieces, like George Mouse’s encounter with the changeling, which is a perfect little horror story in itself. Above all, there’s the evocation of a fantastical New England and the family home, Edgewood, which I can’t help but associate with my strong feelings about looking for a house of my own. I may not read Little, Big again—its five hundred pages remain as daunting as before—but I’ll certainly be reading in it for the rest of my life, because there’s magic here. And it’s more magical, perhaps, in that you’re forced to dig for it, without the reassuring map of structure, and always with the promise of finding something more.
The Anatomy of Harold Bloom’s Influence
The release of Harold Bloom’s The Anatomy of Influence, a grand summation of a life in letters by a major critic at the age of eighty, gives me a welcome excuse to reflect on the legacy of our leading reader, canonical champion, and defender of the great books. As I’ll point out below, Bloom has severe limitations as a critic of contemporary literature, and he’s often made himself into a figure of fun. His evolution from serious academic into something close to a brand name hasn’t been entirely painless. But there’s no doubt that he’s one of our greatest living intellectuals—his omission from both editions of the Prospect public intellectuals poll is a crime—and his impact on my own life and reading has been surprisingly substantial.
First, the bad news. Bloom has various minor shortcomings as a writer—notably his tendency to repeat himself endlessly, with slight variations, which makes me suspect that his books lack a strong editorial hand—but his real problem is that he no longer seems capable of discussing authors with anything other than unqualified praise or sweeping condemnation. When he’s talking about Shakespeare or Tolstoy, no one is more eloquent or insightful, but he seems incapable of performing nuanced readings of lesser writers. This leads him to brusquely dismiss certain authors of unquestioned canonicity, such as Poe, and into such travesties as his attack on the National Book Foundation’s medal for Stephen King, in which his only real evidence was an equally sweeping dismissal of J.K. Rowling. (As I pointed out at the time, this is sort of like saying that Steven Spielberg can’t be a good director because Attack of the Clones was a lousy movie.)
It’s clear, then, that we shouldn’t turn to the current Bloom for credible opinions on contemporary culture, but for deep, almost aspirational readings on authors whose canonical eminence is undisputed. And he remains unmatched in this regard, both for his passion and his readability. At times, it isn’t clear what his point is, except to create in us a state of mind receptive to being changed by literature—which is a worthwhile goal in itself. And his isolated insights are often exceptional. His thoughts on the strangeness of the Yahwist—as in the uncanny moment in Exodus 4:24, for instance, when God tries to kill Moses—and his writings on Joseph Smith, whom he considers a great American prophet, have deeply influenced the novel I’m writing now. And his observations on sexual jealousy in Othello have shaped my understanding not only of that play, but of Eyes Wide Shut:
Shakespeare’s greatest insight into male sexual jealousy is that it is a mask for the fear of being castrated by death. Men imagine that there can never be enough time and space for themselves, and they find in cuckoldry, real or imagined, the image of their own vanishing, the realization that the world will go on without them.
In recent years, Bloom has become less a literary critic than a sort of affable cheerleader, moving past his old polemics on “the age of resentment” to simply extol the cause of close reading of great books for the pleasure they provide. It’s a simple message, but a necessary one, and one that he is qualified above all other living critics to convey, with his prodigious reading, infinite memory, and nervous, expansive prose. I’ve always been a sucker for canons—I tried to read all fifty-four volumes of the Britannica Great Books series in high school, came close to applying to a similar program at St. John’s College, and finally ended up in the Classics—and Bloom remains my primary gateway into the great books, as he is for many of us. For that, his influence has been incalculable, and I’m glad we still have him around.
Quote of the Day
Reading the very best writers—let us say Homer, Dante, Shakespeare, Tolstoy—is not going to make us better citizens. Art is perfectly useless, according to the sublime Oscar Wilde, who was right about everything. He also told us that all bad poetry is sincere. Had I the power to do so, I would command that these words be engraved above every gate at every university, so that each student might ponder the splendor of the insight.
24 and art’s dubious morality
Today The A.V. Club tackles an issue that is very close to my own heart: to what extent can we enjoy art that contradicts our own moral beliefs? The ensuing discussion spans a wide range of works, from Gone With the Wind to the films of Roman Polanski and Mel Gibson, but I’m most intrigued by an unspoken implication: that morally problematic works of art are often more interesting, and powerful, than those that merely confirm our existing points of view. When our moral convictions are challenged, it seems, it can yield the same sort of pleasurable dissonance that we get from works that subvert our aesthetic assumptions. The result can be great art, or at least great entertainment.
For me, the quintessential example is 24, a show that I loved for a long time, until it declined precipitously after the end of the fifth season. Before then, it was the best dramatic series on television, and its reactionary politics were inseparable from its appeal. Granted, the show’s politics were more about process than result—nearly every season ended with the exposure of a vast right-wing conspiracy, even if it was inevitably uncovered through massive violations of due process and civil rights—and it seems that the majority of the show’s writers and producers, aside from its creator, were politically liberal to moderate. Still, the question remains: how did they end up writing eight seasons’ worth of stories that routinely endorsed the use of torture?
The answer, I think, is that the writers were remaining true to the rules that the show had established: in a series where the American public is constantly in danger, and where the real-time structure of the show itself rules out the possibility of extended investigations—or even interrogations that last more than five minutes—it’s easier and more efficient to show your characters using torture to uncover information. The logic of torture on 24 wasn’t political, but dramatic. And while we might well debate the consequences of this portrayal on behavior in the real world, there’s no denying that it resulted in compelling television, at least for the first five seasons.
The lesson here, as problematic as it might seem, is that art needs to follow its own premises to their logical conclusion, even if the result takes us into dangerous places. (As Harold Bloom likes to point out, reading Shakespeare will not turn us into better citizens.) And this is merely the flip side of another crucial point, which is that works of art knowingly designed to endorse a particular philosophy are usually awful, no matter where they fall on the political spectrum. At worst, such works are nothing but propaganda; and even at their best, they seem calculated and artificial, rather than honestly derived, however unwillingly, from the author’s own experience. As usual, John Gardner, in The Art of Fiction, says it better than I can:
The question, to pose it one last way, is this: Can an argument manipulated from the start by the writer have the same emotional and intellectual power as an argument to which the writer is forced by his intuition of how life works? Comparisons are odious but instructive: Can a Gulliver’s Travels, however brilliantly executed, ever touch the hem of the garment of a play like King Lear? Or: Why is the Aeneid so markedly inferior to the Iliad?
In my own work, I’ve found that it’s often more productive to deliberately construct a story that contradicts my own beliefs and see where it leads me from there. My novelette “The Last Resort” (Analog, September 2009) is designed to imply sympathy, or even complicity, with ecoterrorism, which certainly goes against my own inclinations. And I’m in the middle of outlining a novel in which the main character is a doubting Mormon whose experiences, at least as I currently conceive the story, actually lead her to become more devout. This sort of thing is harder than writing stories that justify what I already believe, but that’s part of the point. In writing, if not in life, it’s often more useful to do things the hard way.