Posts Tagged ‘Errol Morris’
Broyles’s Law and the Ken Burns effect
For most of my life as a moviegoer, I’ve followed a rule that has served me pretty well. Whenever the director of a documentary narrates the story in the first person, or, worse, appears on camera, I start to get suspicious. I’m not talking about movies like Roger and Me or even the loathsome Catfish, in which the filmmakers, for better or worse, are inherently part of the action, but about films in which the director inserts himself into the frame for no particular reason. Occasionally, I can forgive this, as I did with the brilliant The Cove, but usually, I feel a moment of doubt whenever the director’s voiceover begins. (In its worst form, it opens the movie with a redundant narration: “I first came across the story that you’re about to hear in the summer of 1990…”) But while I still think that this is a danger sign, I’ve recently concluded that I was wrong about why. I had always assumed that it was a sign of ego—that these directors were imposing themselves on a story that was really about other people, because they thought that it was all about them. In reality, it seems more likely that it’s a solution to a technical problem. What happens, I think, is that the director sits down to review his footage and discovers that it can’t be cut together as a coherent narrative. Perhaps there are crucial scenes or beats missing, but the events that the movie depicts are long over, or there’s no budget to go back and shoot more. An interview might bridge the gaps, but maybe this isn’t logistically feasible. In the end, the director is left with just one person who is available to say all the right things on the soundtrack to provide the necessary transitions and clarifications. It’s himself. In a perfect world, if he had gotten the material that he needed, he wouldn’t have to be in his own movie at all, but he doesn’t have a choice. It isn’t a failure of character, but of technique, and the result ends up being much the same.
I got to thinking about this after reading a recent New Yorker profile by Ian Parker of the documentarian Ken Burns, whose upcoming series on the Vietnam War is poised to become a major cultural event. The article takes an irreverent tone toward Burns, whose cultural status encourages him to speechify in private: “His default conversational setting is Commencement Address, involving quotation from nineteenth-century heroes and from his own previous commentary, and moments of almost rhapsodic self-appreciation. He is readier than most people to regard his creative decisions as courageous.” But Parker also shares a fascinating anecdote about which I wish I knew more:
In the mid-eighties, Burns was working on a deft, entertaining documentary about Huey Long, the populist Louisiana politician. He asked two historians, William Leuchtenburg and Alan Brinkley, about a photograph he hoped to use, as a part of the account of Long’s assassination; it showed him protected by a phalanx of state troopers. Brinkley told him that the image might mislead; Long usually had plainclothes bodyguards. Burns felt thwarted. Then Leuchtenburg spoke. He’d just watched a football game in which Frank Broyles, the former University of Arkansas coach, was a commentator. When the game paused to allow a hurt player to be examined, Broyles explained that coaches tend to gauge the seriousness of an injury by asking a player his name or the time of day; if he can’t answer correctly, it’s serious. As Burns recalled it, Broyles went on, “But, of course, if the player is important to the game, we tell him what his name is, we tell him what time it is, and we send him back in.”
Hence Broyles’s Law: “If it’s super-important, if it’s working, you tell him what his name is, and you send him back into the game.” Burns decided to leave the photo in the movie. Parker continues:
Was this, perhaps, a terrible law? Burns laughed. “It’s a terrible law!” But, he went on, it didn’t let him off the hook, ethically. “This would be Werner Herzog’s ‘ecstatic truth’—‘I can do anything I want. I’ll pay the town drunk to crawl across the ice in the Russian village.’” He was referring to scenes in Herzog’s Bells from the Deep, which Herzog has been happy to describe, and defend, as stage-managed. “If he chooses to do that, that’s okay. And then there are other people who’d rather do reenactments than have a photograph that’s vague.” Instead, Burns said, “We do enough research that we can pretty much convince ourselves—in the best sense of the word—that we’ve done the honorable job.”
The reasoning in this paragraph is a little muddled, but Burns seems to be saying that he isn’t relying on “the ecstatic truth” of Herzog, who blurs the line between fiction and reality, or the reenactments favored by Errol Morris, who sometimes seems to be making a feature film interspersed with footage of talking heads. Instead, Burns is assembling a narrative solely out of primary sources, and if an image furthers the viewer’s intellectual understanding or emotional engagement, it can be included, even if it isn’t strictly accurate. These are the compromises that you make when you’re determined to use nothing but the visuals that you have available, trusting your understanding of the material to tell you whether or not you’ve made the “honorable” choice.
On some level, this is basically what every author of nonfiction has to consider when assembling sources, which involves countless judgment calls about emphasis, order, and selection, as I’ve discussed here before. But I’m more interested in how this dilemma emerges from a technical issue inherent in the form of the documentary itself, in which the viewer always has to be looking at something. When the perfect image isn’t available, you have a few different options. You can ignore the problem; you can cut to an interview subject who tells the viewers about what they’re not seeing; or you can shoot a reenactment. (Recent documentaries seem to lean heavily on animation, presumably because it’s cheaper and easier to control in the studio.) Or, like Burns, you can make do with what you have, because that’s how you’ve defined the task for yourself. Burns wants to use nothing but interviews, narration, and archival materials, and the technical tricks that we’ve come to associate with his style—like the camera pan across photos that Apple actually calls the Ken Burns effect—arise directly out of those constraints. The result is often brilliant, in large part because Burns has no choice but to think hard about how to use the materials that he has. Broyles’s Law may be “terrible,” but it’s better than most of the alternatives. Burns has the luxury of big budgets, a huge staff, and a lot of time, which allows him to be fastidious about his solutions to such problems. But a desperate documentary filmmaker, faced with no money and a hole in the story to fill, may have no recourse but to grab a microphone, sit down in the editing bay, and start to speak: “I first came across the story that you’re about to hear in the summer of 1990…”
Thinking inside the panel
Note: Every Friday, The A.V. Club, my favorite pop cultural site on the Internet, throws out a question to its staff members for discussion, and I’ve decided that I want to join in on the fun. This week’s topic: “What non-comic creative type do you want to see make a comic?”
Earlier this year, I discovered Radio: An Illustrated Guide, the nifty little manual written by cartoonist Jessica Abel and Ira Glass of This American Life. At the time, the book’s premise struck me as a subtle joke in its own right, and I wrote:
The idea of a visual guide to radio is faintly amusing in itself, particularly when you consider the differences between the two art forms: comics are about as nonlinear a medium as you can get between two covers, with the reader’s eye prone to skip freely across the page.
The more I think about it, though, the more it seems to me that these two art forms share surprising affinities. They’re both venerable mediums with histories that stretch back for close to a century, and they’ve both positioned themselves in relation to a third, invisible other, namely film and television. On a practical level, whether their proponents like it or not, both radio and comics have come to be defined by the ways in which they depart from what a movie or television show can do. In the absence of any visual cues, radio has to relentlessly manage the listener’s attention—“Anecdote then reflection, over and over,” as Glass puts it—and much of the grammar of the comic book emerged from attempts to replicate, transcend, and improve upon the way images are juxtaposed in the editing room.
And smart practitioners in both fields have always found ways of learning from their imposing big brothers, while remaining true to the possibilities that their chosen formats offer in themselves. As Daniel Clowes once said:
To me, the most useful experience in working in “the film industry” has been watching and learning the editing process. You can write whatever you want and try to film whatever you want, but the whole thing really happens in that editing room. How do you edit comics? If you do them in a certain way, the standard way, it’s basically impossible. That’s what led me to this approach of breaking my stories into segments that all have a beginning and end on one, two, three pages. This makes it much easier to shift things around, to rearrange parts of the story sequence.
Meanwhile, the success of a podcast like Serial represents both an attempt to draw upon the lessons of modern prestige television and a return to the roots of this kind of storytelling. Radio has done serialized narratives better than any other art form, and Serial, for all its flaws, was an ambitious attempt to reframe those traditions in a shape that spoke to contemporary listeners.
What’s a little surprising is that we haven’t witnessed a similar mainstream renaissance in nonfiction comics, particularly from writers and directors who have made their mark in traditional documentaries. Nonfiction has long been central to the comic format, of course, ranging from memoirs like Maus or Persepolis to more didactic works like Logicomix or The Cartoon History of the Universe. More recently, webcomics like The Oatmeal or Randall Munroe’s What If? have explained complicated issues in remarkable ways. What I’d really love to see, though, are original works of documentary storytelling in comic book form, the graphic novel equivalent of This American Life. You could say that the reenactments we see in works like Man on Wire or The Jinx, and even the animated segments in the films of Brett Morgen, are attempts to push against the resources to which documentaries have traditionally been restricted, particularly when it comes to stories set in the past—talking heads, archive footage, and the obligatory Ken Burns effect. At times, such reconstructions can feel like cheating, as if the director were bristling at having to work with the available material. Telling such stories in the form of comics instead would be an elegant way of circumventing those limitations while remaining true to the medium’s logic.
And certain documentaries would work even better as comics, particularly if they require the audience to process large amounts of complicated detail. Serial, with its endless, somewhat confusing discussions of timelines and cell phone towers, might have worked better as a comic book, which would have allowed readers to review the chain of events more easily. And a director like Errol Morris, who has made brilliant use of diagrams and illustrations in his published work, would be a natural fit. There’s no denying that some documentaries would lose something in the translation: the haunted face of Robert Durst in The Jinx has a power that can’t be replicated in a comic panel. But comics, at their best, are an astonishing way of conveying and managing information, and for certain stories, I can’t imagine anything more effective. We’re living in a time in which we seem to be confronting complex systems every day, and as a result, artists of all kinds have begun to address what Zadie Smith has called the problem of “how the world works,” with stories that are as much about data, interpretation, and information overload as about individual human beings. For the latter, narrative formats that can offer us a real face or voice may still hold an edge. But for many of the subjects that documentarians in film, television, or radio will continue to tackle, the comics may be the best solution they’ll ever have.
The Serial jinx
In the weeks since the devastating finale of The Jinx, the conversation around Andrew Jarecki’s brilliant HBO documentary—which played a crucial role in the arrest for murder of millionaire Robert Durst—has revolved around two themes. The first uses The Jinx as a club to beat what remains of the legacy of Serial: we’re told that this is how you tell an extended nonfiction crime story, with a series of tense, surprising revelations building to a conclusion more definitive than any viewer could have imagined. The second, more problematic discussion centers on the inconsistencies in the show’s timeline. It’s a tangled issue, outlined most capably by Kate Aurthur at Buzzfeed, but it seems clear that the filmmakers deliberately misrepresented the timing of their own interactions with Durst, playing with the chronology to create a sense of cause and effect that didn’t exist. This would be troubling enough in itself, but it also raises questions about when and how the producers decided to bring crucial evidence to the police. And while it isn’t enough to diminish Jarecki’s achievement—this is still by far the best television of any kind I’ve watched all year—it can’t help but complicate my feelings about it.
Yet the more you look at those two streams of opinion, the more they feel like variations on the same fact. What separates The Jinx from Serial isn’t artistry, intelligence, or even luck, but the fact that the former show was painstakingly edited together over a long period of postproduction, while the latter was thrown together on the fly. The Jinx goes out of its way to disguise how long its filming lasted, but it appears, at minimum, to have covered four years, two of which came after its final interview with Durst. The result is one of the most beautifully assembled works of nonfiction narrative I’ve ever seen: there’s never any sense, as we often see in other documentaries, that the filmmakers are scrambling to fill gaps in the footage. Each interview subject is presented as articulate and intelligent, without a trace of condescension, and each is allowed to say his or her piece. It’s all here, and it fits together like a fine watch. (There’s a fascinating, unspoken subtext involving the role of wealth on both sides of the camera. Durst’s alleged crimes may have been enabled by his fortune, but so was the investigation: Jarecki, who comes from a wealthy family and became a millionaire himself thanks to his involvement in the founding of Moviefone, has long used his own resources to fund explorations into the darkest sides of human nature, and it’s doubtful whether another director would have had the time or ability to dwell as long on a single subject.)
And it’s hard to overstate the importance of time in this kind of storytelling. The two great variables in any documentary are chance and organization: either the director stumbles across a fantastic piece of material, as Jarecki did with Capturing the Friedmans, or he fits something more recalcitrant into a beautiful shape, as Errol Morris has done consistently for decades. In both cases, time is the critical factor. Obviously, the longer you spend—or the more footage you shoot—on any subject, the greater the odds of collecting a few precious fragments of serendipity: a twist in a human life, a big revelation, an indelible moment caught on camera. It can take the same amount of time, or more, to figure out how to structure the story. Jarecki had the financial means to stick with Durst for as long as necessary, but documentarians with far fewer resources have pulled it off out of sheer will: Crumb, perhaps the best documentary ever made, was shot over a period of nine years, much of which director Terry Zwigoff spent in crippling poverty and physical pain. Shoah took eleven years: six for production, five for editing. I’ve noted before that there seems to be a fixed amount of time in which a work of art has to percolate in the creator’s brain, and for documentaries, that rendering period needs to be multiplied by a factor of five.
The real question, then, is whether Serial might have left us with an impression like that of The Jinx, if it had been edited and refined for years before being released in its entirety. I think the answer is yes. Take the exact same material, boil it down to four hours, construct it so that instead of coming in and out of focus it saves its most devastating questions and answers for the end, and the result would have felt like a definitive case for Adnan Syed’s innocence, whether or not it was right. This is more or less exactly what The Jinx does. (I don’t know why the filmmakers fudged the timeline so blatantly—you could lift out the offending sequence entirely without making the finale any less compelling—but I suspect it had something to do with hanging on to some juicy footage while still ending on Durst’s accidental confession. Once you make the smart decision to conclude the series there, it’s easy for chronological juggling to shade into outright trickery.) Which only reminds us that what Serial tried to accomplish, doing in real time what other forms of storytelling spend years perfecting in private, was close to inherently impossible. I don’t know what form Serial will take next season, and there’s no question that its structure, with the story evolving in public from week to week, was a huge selling point in its favor. But when it comes to telling a satisfying story, it may have already jinxed itself.
The Necker Cube of Serial
On January 13, 1999, a teenage girl named Hae Min Lee disappeared in Baltimore. The following month, shortly after her body was discovered, her former boyfriend, Adnan Syed, was charged with her murder. Listeners of Serial, the extraordinary radio series currently unfolding from the producers of This American Life, know exactly how much this bare description leaves unsaid. I don’t feel qualified to comment on the case itself, and in any event, there are plenty of resources available for those who want to dive into the intricacies of cell phone towers and whether or not there was a pay phone at that particular Best Buy. As a writer, though, I’ve been thinking a lot about the implications of Serial itself. As far as I know, it’s an unprecedented experiment in any medium, an ongoing nonfiction narrative unspooling before an audience of millions. Producer Sarah Koenig has said that she doesn’t know how the series will end, or even what will happen from one week to the next, but this doesn’t mean she lacks information available to others: it’s the shape it will take and her ultimate conclusions that remain unclear. As such, it’s not so different from any kind of serial narrative, whether it’s Tom Wolfe writing The Bonfire of the Vanities week by week, Stephen King publishing installments of The Green Mile without knowing the ending, or even my own experience of writing a trilogy with only the vaguest idea of its final form.
The difference is that Serial is centered on factual events, and the obsessiveness, verging on paranoia, that it encourages in its audience can’t be separated from Koenig’s own efforts to resolve the tangle of problems she has imposed on herself. And its fascination lies less in any particular detail or narrative element than in the overall mindset it encourages. It implicates the listener in Koenig’s own uncertainty, in which every fact, no matter how unambiguous, can be read in at least two ways. To take one minor example: Koenig notes that after Hae’s disappearance, Adnan never tried to page her, despite the fact that he’d called her at home three times the night before she disappeared. On its face, this seems suspicious, as if Adnan knew that Hae could no longer be reached. Think about it a little longer, though, and the detail inverts itself: if Adnan were really the “charming sociopath” that prosecutors implied he was, paging Hae after her murder would have provided a convenient indication of his innocence. The fact that it never occurred to him becomes, paradoxically, a point in his favor. Or maybe not. Everything in Serial starts to take on this double significance: Koenig refers to the case as a Rubik’s Cube she’s trying to solve, but an even better analogy might be that of a Necker cube, which oscillates constantly between two readings. We even sense this in the way Koenig talks about her own objectives. In the beginning, it feels like a quest for Adnan’s exoneration, but as her doubts continue to multiply, it becomes less a crusade than a search for clarity of any kind.
Perhaps inevitably, then, Serial occasionally suffers from the same qualities that make it so addictive, and it often undermines the very clarity it claims to be seeking. Listening to it, I’m frequently reminded of the work of Errol Morris, who exonerated a man wrongfully convicted of murder in The Thin Blue Line and has gone on to explore countless aspects of information, memory, and the interpretation of evidence. But Morris would have covered the relevant points in two densely packed hours, while Koenig is closing in on fifteen hours, if not more. Sometimes the length of time granted by the serial format allows her to explore interesting byways, like the odd backstory of “Mr. S,” who discovered Hae’s body; elsewhere, it feels a little like padding. Koenig devotes most of an episode, for instance, to Deirdre Enright, who runs the Innocence Project at the University of Virginia Law School, but they spend the better part of ten minutes simply commiserating over material we’ve already heard. Morris would have introduced Enright with a brief explanatory caption, given her two vivid minutes on screen, and moved on. Serial is never anything less than absorbing, but there’s often a sense that its expansive runtime has allowed it to avoid the hard choices that other nonfiction narratives demand. As a result, we’re sometimes left with the suspicion that our own confusions have less to do with the ambiguity of the case than with the sheer amount of information—not all of it relevant—we’re being asked to process.
But that’s part of the point. Koenig herself becomes one of her most provocative characters: she has a nice, dry, ingratiating manner that encourages an unusual degree of intimacy with her interview subjects, but her sheer fluency as a radio personality sometimes leaves us questioning how much of that closeness is an illusion. Which is exactly how we’re meant to feel about everyone involved. For me, the most memorable moment in the entire series comes courtesy of Adnan himself, speaking by phone from Maryland Correctional Facility:
I feel like I want to shoot myself if I hear someone else say, I don’t think he did it cause you’re a nice guy, Adnan…I would love someone to say, I don’t think that you did it because I looked at the case and it looks kind of flimsy. I would rather someone say, Adnan, I think you’re a jerk, you’re selfish, you know, you’re a crazy SOB, you should just stay in there for the rest of your life except that I looked at your case and it looks, you know, like a little off. You know, like something’s not right.
If Serial has a message, it’s that it’s necessary to look past our instinctively good or bad impressions of a person to focus on the evidence itself, even if this defies what we’ve been programmed to do as human beings. At its best, it’s a show about how inadequate our intuitions can be when faced with reality in all its complexity, which turns the search for clarity itself into a losing game. It’s a game we’ve all been playing long before the show began, and regardless of how it ends, we’ll be playing it long after it’s over.
Transformed by print
Somewhere in his useful and opinionated book Trial and Error, the legendary pulp writer Jack Woodford says that if you feel that your work isn’t as good as the fiction you see in stores, there’s a simple test to see if this is actually true. Take a page from a recent novel you admire—or one that has seen big sales or critical praise—and type the whole thing out unchanged. When you see the words in your own typewriter or on your own computer screen, stripped of their superficial prettiness, they suddenly seem a lot less impressive. There’s something about professional typesetting that elevates even the sloppiest prose: it attains a kind of dignity and solidity that can be hard to see in its unpublished form. It isn’t just a story now; it’s an art object. And restoring it to the malleable, unpolished medium of a manuscript page often reveals how arbitrary the author’s choices really were, just as we tend to be hard on our own work because our rough drafts don’t look as nice as the stories we see in print.
There’s something undeniably mysterious about how visual cues affect the way we think about the words we’re reading, whether they’re our own or someone else’s. Daniel Kahneman has written about how we tend to read texts more critically when they’re printed in unattractive fonts, and Errol Morris recently ran an online experiment to test this by asking readers for their opinions about a short written statement, without revealing that some saw it in Baskerville and others in Comic Sans. (Significantly more of those who read it in Baskerville thought the argument was persuasive, while those who saw it in Comic Sans were less impressed, presumably because they were too busy clawing out their eyes.) Kindle, as in so many other respects, is the great leveler: it strips books of their protective sheen and forces us to evaluate them on their own merits. And I’d be curious to see a study on how the average review varies between those who read a novel in print and those who read it in electronic form.
This is also why I can’t bear to read my own manuscripts in anything other than Times New Roman, which is the font in which they were originally composed. When I’m writing a story, I’m primarily thinking about the content, yes, but I’m also consciously shaping how the text appears on the screen. As I’ve mentioned before, I’ve acquired a lot of odd tics and aversions from years spent staring at my own words on a computer monitor, and I’ve evolved just as many strategies for coping. I don’t like the look of a ragged right margin, for instance, so all my manuscripts are justified and hyphenated, at least until they go out to readers. I generally prefer it when the concluding line of a paragraph ends somewhere on the left half of the page, and I’ll often rewrite the text accordingly. And I like my short lines of dialogue to be exactly one page width long. All this disappears, of course, the second the manuscript is typeset, but as a way of maintaining my sanity throughout the writing process, these rituals play an important role.
And I don’t seem to mind their absence when I finally see my work in print, which introduces another level of detachment: these words don’t look like mine anymore, but someone else’s. (There are occasional happy exceptions: by sheer accident, the line widths in The Year’s Best Science Fiction Vol. 29 happen to exactly match the ones I use at home, so “The Boneless One” looks pretty much like it did on my computer, down to the shape of the paragraphs.) Last week, I finally received an advance copy of my novel Eternal Empire, hot off the presses, and I was struck by how little it felt like a book I’d written. Part of this is because it’s been almost a year since I finished the first draft, I’ve been working on unrelated projects since then, and a lot has happened in the meantime. But there’s also something about the cold permanence of the printed page that keeps me at arm’s length from my work. Once a story can no longer be changed, it ceases to be quite as alive as it once was. It’s still special. But it’s no longer a part of you.
You are not the story
As I see it, two lessons can be drawn from the Mike Daisey fiasco: 1. If a story seems too good to be true, it probably is. 2. A “journalist” who makes himself the star of his own story is automatically suspect. This last point is especially worth considering. I’ve spoken before about the importance of detachment toward one’s own work, primarily as a practical matter: the more objective you are, the more likely you are to produce something that will be of interest to others. But there’s an ethical component here as well. Every writer, by definition, has a tendency toward self-centeredness: if we didn’t believe that our own thoughts and feelings, or at least our modes of expression, were exceptionally meaningful, we wouldn’t feel compelled to share them. When properly managed, this need to impose our personalities on the world is what results in most works of art. Left unchecked, it can lead to arrogance, solipsism, and a troubling tendency to insert ourselves into the spotlight. This isn’t just an artistic shortcoming, but a moral one. John Gardner called it frigidity: an inability to see what really counts. And frigidity paired with egotism is a dangerous combination.
Simply put, whenever an author, especially of a supposed work of nonfiction, makes himself the star of a story where he obviously doesn’t belong, it’s a warning sign. This isn’t just because it reveals a lack of perspective—a refusal to subordinate oneself to the real source of interest, which is almost never the author himself—but because it implies that other compromises have been made. Mike Daisey is far from the worst such offender. Consider the case of Greg Mortenson, who put himself at the center of Three Cups of Tea in the most self-flattering way imaginable, and was later revealed not only to have fabricated elements of his story, but to have misused the funds his charity raised as a result. At first glance, the two transgressions might not seem to have much in common, but the root cause is the same: a tendency to place the author’s self and personality above all other considerations. On one level, it led to self-aggrandizing falsehood in a supposed memoir; on another, to a charity that spent much of its money, instead of building schools, on Mortenson’s speaking tours and advertisements for his books.
It’s true that some works of nonfiction benefit from the artist’s presence: I wouldn’t want to take Werner Herzog out of Grizzly Man or Claude Lanzmann out of Shoah. But for the most part, documentaries that place the filmmaker at the center of the action should raise our doubts as viewers. Sometimes it leads to a blurring of the message, as when Michael Moore’s ego overwhelms the valid points he makes. Occasionally, it results in a film like Catfish, in which the blatant self-interest of the filmmakers taints the entire movie. And it’s especially problematic in films that try to tackle complex social issues. (It took me a long time to see past the director’s presence in The Cove, for instance, to accept it as the very good movie it really is. But it would have been even better without the director’s face onscreen.)
One could argue, of course, that all forms of journalism, no matter how objective, are implicitly written in the first person, and that every documentary is shaped by an invisible process of selection and arrangement. Which is true enough. But a real artist expresses himself in his choice of details in the editing room, not by inserting himself distractingly into the frame. We rarely, if ever, see Errol Morris in his own movies, while David Simon—who manifestly does not suffer from a lack of ego—appears in Homicide: A Year on the Killing Streets only in the last couple of pages. These are men with real personalities and sensibilities who express themselves unforgettably in the depiction of other strong personalities in their movies and books. In the end, we care about Morris and Simon because they’ve made us care about other people. They’ve earned the right to interest us in their opinions through the painstaking application of craft, not, like Mortenson or Daisey, with self-promoting fabrication. There will always be exceptions, but in most cases, an artist’s best approach lies in invisibility and detachment. Because in the end, you’re only as interesting as the facts you present.
Twenty years later: Oliver Stone’s JFK
Twenty years ago today, Oliver Stone’s JFK was released in theaters, sparking a pop cultural phenomenon that seems all the more peculiar with the passage of time. It wasn’t merely the fact that such a dense, layered film was a big commercial hit, although it was—it grossed more than $70 million domestically, equivalent to over $130 million today—or that it had obviously been made with all the resources of a major studio. It’s that for a few months, even before its release, the movie seemed to occupy the center of the national conversation, inspiring magazine covers, a resurgence of interest in the Kennedy assassination that has never died down, and memorable parodies on Seinfeld and The Simpsons. In my own life, for better or worse, it’s had a curious but undeniable influence: many of my current literary and cultural obsessions can be traced back to three years in my early teens, when I saw JFK, read Foucault’s Pendulum, and became a fan of The X-Files. As a result, for several years, I may have been the only teenager in the world with a JFK poster on his bedroom wall.
Of course, none of this would have happened if the movie itself weren’t so ridiculously entertaining. Over the years, I’ve gone back and forth on the merits of JFK, but these days, I believe that it’s a genuinely great movie, one of the few recent Hollywood films—along with Stone’s equally fascinating but underrated Nixon—to advance and build upon what Orson Welles did with Citizen Kane. It’s hard to imagine this now, in the days of W and Wall Street 2, but there was a time when Oliver Stone was the most interesting director in America. At his peak, when he was in the zone, I don’t think anyone—not Scorsese, not Spielberg—could match Stone for sheer technical ability. JFK, his best movie, is one of the most expertly crafted films ever made, an incredibly detailed movie of over three hours that never allows the eye to wander. In particular, the cinematography and editing (at least in the original version, not the less focused director’s cut available on Blu-ray) set a standard that hasn’t been matched since, even as its use of multiple film stocks and documentary footage has become routine enough to be imitated by Transformers 3.
Watching it again earlier this year, I was newly dazzled by the riches on display. There’s the film’s effortless evocation of New Orleans, Dallas, and Washington in the sixties, with the local color of countless locations and neighborhoods picked up on the fly. There’s the compression of the marriage of Lee Harvey and Marina Oswald into five sad minutes—a compelling short film in itself. There’s Donald Sutherland’s loony, endless monologue as the mysterious X, which covers as much conspiracy material as a season’s worth of The X-Files. There’s the astounding supporting cast, which has proven so central to the Kevin Bacon game, and the mother of all courtroom speeches. And most unexpectedly, there’s Kevin Costner, at the height of his stardom, providing a calm center for all this visual, narrative, and textural complexity. It’s safe to say that JFK would never have been made without Costner, whose considerable charisma does more than anything else to turn Jim Garrison, one of the shiftier characters in recent memory, into something like Eliot Ness.
And that’s the problem. JFK is magnificent as cinema, but ludicrous as history. There’s something frightening about how Stone musters such vibrant craft to such questionable ends: in the years since, nearly every point that the movie makes has been systematically dismantled, and if Stephen King’s 11/22/63 is any indication of the cultural mood, it seems that many of us are finally coming around to the realization that, as unthinkable as it seems, Oswald probably acted alone. It’s perhaps only now, then, that we can watch this film with a cool head, as a great work of fiction that bears only superficial resemblance to actual events, and whose paranoid vision of history is actually less strange than the truth. JFK needs to be seen, studied, and appreciated, but first, one should watch Zodiac, or, even better, Errol Morris’s beguiling “The Umbrella Man,” posted earlier this month at the New York Times website. Morris is working on his own movie about the assassination, and if this sample is any indication, it’s the corrective that JFK, for all its brilliance, sorely needs. As subject Josiah “Tink” Thompson says:
What it means is, if you have any fact which you think is really sinister…Forget it, man. Because you can never, on your own, think up all the non-sinister, perfectly valid explanations for that fact. A cautionary tale!
McKinney versus McGonagall
On Saturday, my wife and I went to see Tabloid, Errol Morris’s hugely entertaining new documentary about the strange life of Joyce McKinney, former beauty queen, dog cloner, and kidnapper of the manacled Mormon. We went to see it at Landmark Century, one of Chicago’s leading art house theaters, and because certain shows can get pretty crowded on the weekends, I made sure that we got there forty minutes early. Once we arrived, though, I was surprised to find that the theater itself was almost dead, and we were the first ones to be seated for Tabloid. And while the other seats gradually filled, the auditorium was never more than halfway full. It was almost, I mused to myself, as if everyone else in the world was off seeing some other movie.
That movie, of course, was Harry Potter and the Deathly Hallows Part 2, which we ended up seeing the following day. The contrast couldn’t have been greater: although we saw Harry Potter at an early matinee on Sunday afternoon, the theater was packed, mostly with adults, all doing their part to contribute to the most lucrative opening weekend of all time. It’s tempting, then, to see these two films as extreme ends of the moviegoing spectrum. Tabloid is a modest production even by Morris’s standards—he doesn’t do any of his usual reenactments or even any shooting on location, with the entire film consisting of talking heads, graphics, and archival footage—while Deathly Hallows is one of the most expensive movies ever made. Taken together, its two parts cost something like $250 million, meaning that Morris’s entire filmography could probably have been financed for the cost of the first five minutes alone.
Beyond their scale and subject matter, the films also differ radically in their conceptions of storytelling. Tabloid is structured around an unfolding sequence of surprises: it’s best to go in without knowing anything about McKinney’s peculiar story, but even if you’ve studied it closely, you’re almost certainly going to be startled by some of the revelations in store. Deathly Hallows, by contrast, is built on a complete absence of surprise: for the most part, viewers are hoping to see the literal realization of events that they’ve been anticipating in detail for years, and in many cases have all but memorized before entering the theater. Deathly Hallows isn’t out to surprise us, but to satisfy us with the exemplary execution of a foreordained plot—which is something that it does very well.
But while I have to admit that I liked Tabloid just a bit better than Deathly Hallows, there’s room in this world for both kinds of stories. They also have more in common than you might think, at least when it comes to fulfilling our expectations. It’s absurd to expect a $250 million movie based on the most popular fantasy series of all time to surprise us in more than superficial ways. (This is the same reason why a Pixar film, as I’ve said before, generally can’t be as beguiling or strange as a Miyazaki movie.) And there’s also something predictable about Morris’s very unpredictability. As much as a Harry Potter fan goes into Deathly Hallows expecting something very specific, I go into a Morris movie expecting eccentricity, odd twists, and weird lights on human behavior. His brand, in some ways, is as consistent as Potter’s. Both are necessary; both are oddly comforting. And there’s room in everyone’s life for both.
Hayao Miyazaki and the future of animation
Yesterday was the seventieth birthday of Japanese filmmaker Hayao Miyazaki, the director of Spirited Away, which makes this as appropriate a time as any to ask whether Miyazaki might be, in fact, the greatest living director in any medium. He certainly presents a strong case. My own short list, based solely on ongoing quality of output rather than the strength of past successes, includes Martin Scorsese, Wong Kar-Wai, and Errol Morris, but after some disappointing recent work from those three, Miyazaki remains the only one who seems incapable of delivering anything less than a masterpiece. And he’s also going to be the hardest to replace.
Why is that? Trying to pin down what makes Miyazaki so special is hard for the same reason that it’s challenging to analyze any great work of children’s fiction: it takes the fun out of it. I’m superstitiously opposed to trying to figure out how the Alice books work, for example, in a way that I’m not for Joyce or Nabokov. Similarly, the prospect of taking apart a Miyazaki movie makes me worry that I’ll come off as a spoilsport—or, worse, that the magic will somehow disappear. That’s one reason why I ration out my viewings of Ponyo, one of the most magical movies ever made, so carefully. And it’s why I’m going to tread cautiously here. But it’s still possible to hint at some of the qualities that set Miyazaki apart from even the greatest animators.
The difference, and I apologize in advance for my evasiveness, comes down to a quality of spirit. Miyazaki is as technically skilled as any animator in history, of course, but his craft would mean little without his compassion, and what I might also call his eccentricity. Miyazaki has a highly personal attachment to the Japanese countryside—his depiction of the satoyama is much of what makes My Neighbor Totoro so charming—as well as to the inner lives of small children, especially girls. He knows how children think, look, and behave, which shapes both his characters and the movies that surround them. His films can seem as capricious and odd as the stories that very young children tell to themselves, so that Spirited Away feels both beguilingly strange and like a story that you’ve always known and only recently rediscovered.
Which is why Miyazaki is greater than Pixar. Don’t get me wrong: Pixar has had an amazing run, but it’s a singularly corporate excellence. The craft, humor, and love of storytelling that we see in the best Pixar movies feels learned, rather than intuitive; it’s the work of a Silicon Valley company teaching itself to be compassionate. Even the interest in children, which is very real, seems like it has been deliberately cultivated. Pixar, I suspect, is run by men who love animation for its own sake, and who care about children only incidentally, which was also true of Walt Disney himself. (If they could make animated movies solely for adults, I think they would, as the career trajectory of Brad Bird seems to indicate. If nothing else, it would make it easier for them to win an Oscar for Best Picture.)
By contrast, the best Miyazaki movies, like the Alice books, are made for children without a hint of condescension, or any sense that children are anything but the best audience in the world. And as traditional animation is replaced by monsters of CGI that can cost $200 million or more, I’m afraid that this quality will grow increasingly rare. We’ve already seen a loss of personality that can’t be recovered: it’s impossible to be entirely original, not to mention eccentric, with so much money on the line. The result, at best, is a technically marvelous movie that seems to have been crafted by committee, even if it’s a committee of geniuses. Toy Story 3 is a masterpiece, and not good enough.
Miyazaki is seventy now, and judging from Ponyo, he’s still at the top of his game. I hope he keeps making movies for a long time to come. Because it’s unclear if the world of animation, as it currently exists, will ever produce anyone quite like him again.