Note: I’m counting down my ten favorite works of nonfiction, in order of the publication dates of their first editions, and with an emphasis on books that deserve a wider readership. You can find the earlier installments here.
When it comes to giving advice on something as inherently unteachable as writing, books on the subject tend to fall into one of three categories. The first treats the writing manual as an extension of the self-help genre, offering what amounts to an extended pep talk that is long on encouragement but short on specifics. A second, more useful approach is to consolidate material on a variety of potential strategies, either through the voices of multiple writers—as George Plimpton did so wonderfully in The Writer’s Chapbook, which assembles the best of the legendary interviews given to The Paris Review—or through the perspective of a writer and teacher, like John Gardner, generous enough to consider the full range of what the art of fiction can be. And the third, exemplified by David Mamet’s On Directing Film, is to lay out a single, highly prescriptive recipe for constructing stories. This last approach might seem unduly severe. Yet after a lifetime of reading what other writers have to say on the subject, I’ve found nothing better than Mamet’s little book, not just for film, but for fiction and narrative nonfiction as well. On one level, it can serve as a starting point for your own thoughts about how the writing process should look: Mamet provides a strict, almost mathematical set of tools for building a plot from first principles, and even if you disagree with his methods, they clarify your thinking in a way that a more generalized treatment might not. But even if you just take it at face value, it’s still the closest thing I know to a foolproof formula for generating rock-solid first drafts. (If Mamet himself has a flaw as a director, it’s that he often stops there.) In fact, it’s so useful, so lucid, and so reliable that I sometimes feel reluctant to recommend it, as if I were giving away an industrial secret to my competitors.
Mamet’s principles are easy to grasp, but endlessly challenging to follow. You start by figuring out what every scene is about, mostly by asking one question: “What does the protagonist want?” You then divide each scene up into a sequence of beats, consisting of an immediate objective and a logical action that the protagonist takes to achieve it, ideally in a form that can be told in visual terms, without the need for expository dialogue. And you repeat the process until the protagonist succeeds or fails at his or her ultimate objective, at which point the story is over. This may sound straightforward, but as soon as you start forcing yourself to think this way consistently, you discover how tough it can be. Mamet’s book consists of a few simple examples, teased out in a series of discussions at a class he taught at Columbia, and it’s studded with insights that once heard are never forgotten: “We don’t want our protagonist to do things that are interesting. We want him to do things that are logical.” “Here is a tool—choose your shots, beats, scenes, objectives, and always refer to them by the names you chose.” “Keep it simple, stupid, and don’t violate those rules that you do know. If you don’t know which rule applies, just don’t muck up the more general rules.” “The audience doesn’t want to read a sign; they want to watch a motion picture.” “A good writer gets better only by learning to cut, to remove the ornamental, the descriptive, the narrative, and especially the deeply felt and meaningful.” “Now, why did all those Olympic skaters fall down? The only answer I know is that they hadn’t practiced enough.” And my own personal favorite: “The nail doesn’t have to look like a house; it is not a house. It is a nail. If the house is going to stand, the nail must do the work of a nail. To do the work of the nail, it has to look like a nail.”
David Thomson’s Biographical Dictionary of Film is one of the weirdest books in all of literature, and more than the work of any other critic, it has subtly changed the way I think about both life and the movies. His central theme—which is stated everywhere and nowhere—is the essential strangeness of turning shadows on a screen into men and women who can seem more real to us than the people in our own lives. His writing isn’t conventional criticism so much as a single huge work of fiction, with Thomson himself as both protagonist and nemesis. It isn’t a coincidence that one of his earliest books was a biography of Laurence Sterne, author of Tristram Shandy: his entire career can be read as one long Shandean exercise, in which Thomson, as a fictional character in his own work, is cheerfully willing to come off as something of a creep, as long as it illuminates our reasons for going to the movies. And his looniness is part of his charm. Edmund Wilson once playfully speculated that George Saintsbury, the great English critic, invented his own Toryism “in the same way that a dramatist or novelist arranges contrasting elements,” and there are times when I suspect that Thomson is doing much the same thing. (If his work is a secret novel, its real precursor is Pale Fire, in which Thomson plays the role of Kinbote, and every article seems to hint darkly at some monstrous underlying truth. A recent, bewildered review of his latest book on The A.V. Club is a good example of the reaction he gets from readers who aren’t in on the joke.)
But if you leave him with nothing but his perversity and obsessiveness, you end up with Armond White, while Thomson succeeds because he’s also lucid, encyclopedically informed, and ultimately sane, although he does his best to hide it. The various editions of The Biographical Dictionary of Film haven’t been revised so much as they’ve accumulated: Thomson rarely goes back to rewrite earlier entries, but tacks on new thoughts to the end of each article, so that it grows by a process of accretion, like a coral reef. The result can be confusing, but when I go back to his earlier articles, I remember at once why this is still the essential book on film. I’ll look at Thomson on Coppola (“He is Sonny and Michael Corleone for sure, but there are traces of Fredo, too”); on Sydney Greenstreet (“Indeed, there were several men trapped in his grossness: the conventional thin man; a young man; an aesthete; a romantic”); or on Eleanor Powell’s dance with Astaire in Broadway Melody of 1940 (“Maybe the loveliest moment in films is the last second or so, as the dancers finish, and Powell’s alive frock has another half-turn, like a spirit embracing the person”). Or, perhaps most memorably of all, his thoughts on Citizen Kane, which, lest we forget, is about the futile search of a reporter named Thompson:
As if Welles knew that Kane would hang over his own future, regularly being used to denigrate his later works, the film is shot through with his vast, melancholy nostalgia for self-destructive talent…Kane is Welles, just as every apparent point of view in the film is warmed by Kane’s own memories, as if the entire film were his dream in the instant before death.
It’s a strange, seductive, indispensable book, and to paraphrase Thomson’s own musings on Welles, it’s the greatest career in film criticism, the most tragic, and the one with the most warnings for the rest of us.
Note: Since I’m taking a deserved break for the holidays, I’m reposting a couple of my favorite entries from early in this blog’s run. This post was originally published, in a slightly different form, on January 13, 2011. Visual spoilers follow. Cover your eyes!
As I’ve noted before, the last line of a novel is almost always of interest, but the last line of a movie generally isn’t. It isn’t hard to understand why: movies are primarily a visual medium, and there’s a sense in which even the most brilliant dialogue can often seem beside the point. And as much as the writer in me wants to believe otherwise, audiences don’t go to the movies to listen to words: they go to look at pictures.
Perhaps inevitably, then, there are significantly more great closing shots in film than there are great curtain lines. Indeed, the last shot of nearly every great film is memorable, so the list of finalists can easily expand into the dozens. Here, though, in no particular order, are twelve of my favorites.
Working to timings and synchronising your musical thoughts with the film can be stimulating rather than restrictive. Scoring is a limitation but like any limitation it can be made to work for you. Verdi, except for a handful of pieces, worked best when he was “turned on” by a libretto. The most difficult problem in music is form, and in a film you already have this problem solved for you. You are presented with a basic structure, a blueprint, and provided the film has been well put together, well edited, it often suggests its own rhythms and tempo. The quality of the music is strictly up to the composer. Many people seem to assume that because film music serves the visual it must be something of secondary value. Well, the function of any art is to serve a purpose in society. For many years, music and painting served religion. The thing to bear in mind is that film is the youngest of the arts, and that scoring is the youngest of the music arts. We have a great deal of development ahead of us.
To me, the most useful experience in working in “the film industry” has been watching and learning the editing process. You can write whatever you want and try to film whatever you want, but the whole thing really happens in that editing room. How do you edit comics? If you do them in a certain way, the standard way, it’s basically impossible. That’s what led me to this approach of breaking my stories into segments that all have a beginning and end on one, two, three pages. This makes it much easier to shift things around, to rearrange parts of the story sequence. It’s something that I’m really interested in trying to figure out, but there are pluses and minuses to every approach. For instance, I think if you did all your panels exactly the same size and left a certain amount of “breathing room” throughout the story, you could make fairly extensive after-the-fact changes, but you’d sacrifice a lot by doing that…
It’s a very mysterious process: you put together a cut of the film and at the first viewing it always seems just terrible, then you work on it for two weeks and you can’t imagine what else you could do with it; then six months later, you’re still working on it and making significant changes every day. It’s very odd, but you kind of know when it’s there.
—Daniel Clowes, quoted by Todd Hignite in In the Studio: Visits with Contemporary Cartoonists
Of all the movies I’ve ever seen, Curtis Hanson’s adaptation of James Ellroy’s L.A. Confidential has influenced my own work the most. This isn’t to say that it’s my favorite movie of all time—although it’s certainly in the top ten—or even that I find its themes especially resonant: I have huge admiration for Ellroy’s talents, but it’s safe to say that he and I are operating under a different set of obsessions. Rather, it’s the structure of the film that I find so compelling: three protagonists, with three main stories, that interweave and overlap in unexpected ways until they finally converge at the climax. It’s a narrative structure that has influenced just about every novel I’ve ever written, or tried to write—and the result, ironically, has made my own work less adaptable for the movies.
Movies, you see, aren’t especially good at multiple plots and protagonists. Most screenplays center, with good reason, on a single character, the star part, whose personal story is the story of the movie. Anything that departs from this form is seen as inherently problematic, which is why L.A. Confidential’s example is so singular, so seductive, and so misleading. As epic and layered as the movie is, Ellroy’s novel is infinitely larger: it covers a longer span of time, with more characters and subplots, to the point where entire storylines—like that of a particularly gruesome serial killer—were jettisoned completely for the movie version. Originally it was optioned as a possible miniseries, which would have made a lot of sense, but to the eternal credit of Hanson and screenwriter Brian Helgeland, they decided that there might also be a movie here.
To narrow things down, they started with my own favorite creative tool: they made a list. As the excellent bonus materials for the film make clear, Hanson and Helgeland began with a list of characters or plot points they wanted to keep: Bloody Christmas, the Nite Owl massacre, Bud White’s romance with Lynn Bracken, and so on. Then they ruthlessly pared away the rest of the novel, keeping the strands they liked, finding ways to link them together, and writing new material when necessary, to the point where some of the film’s most memorable moments—including the valediction of Jack Vincennes and the final showdown at the Victory Motel, which repurposes elements of the book’s prologue—are entirely invented. And the result, as Ellroy says, was a kind of “alternate life” for the characters he had envisioned.
So what are the lessons here? For aspiring screenwriters, surprisingly few: a film like L.A. Confidential appears only a couple of times each decade, and the fact that it was made at all, without visible compromise, is one of the unheralded miracles of modern movies. If nothing else, though, it’s a reminder that adaptation is less about literal faithfulness than fidelity of spirit. L.A. Confidential may keep less than half of Ellroy’s original material, but it feels as turbulent and teeming with possibility, and gives us the sense that some of the missing stories may still be happening here, only slightly offscreen. Any attempt to adapt similarly complex material without that kind of winnowing process, as in the unfortunate Watchmen, usually leaves audiences bewildered. The key is to find the material’s alternate life. And no other movie has done it so well.
It’s been just over twenty years now since The Silence of the Lambs was released in theaters, and the passage of time—and its undisputed status as a classic—sometimes threatens to blind us to the fact that it’s such a peculiar movie. At the time, it certainly seemed like a dubious prospect: it had a director known better for comedy than suspense, an exceptional cast but no real stars, and a story whose violence verged on outright kinkiness. If it emphatically overcame those doubts, it was with its mastery of tone and style, a pair of iconic performances, and, not incidentally, the best movie poster of the modern era. And the fact that it not only became a financial success but took home the Academy Award for Best Picture, as well as the four other major Oscars, remains genre filmmaking’s single most unqualified triumph.
It also had the benefit of some extraordinary source material. I’ve written at length about Thomas Harris elsewhere, but what’s worth emphasizing about his original novel is that it’s the product of several diverse temperaments. Harris began his career as a journalist, and there’s a reportorial streak running through all his best early books, with their fascination with the technical language, tools, and arcana of various esoteric professions, from forensic profiling to brain tanning. He also has a Gothic sensibility that has only grown more pronounced with time, a love of language fed by the poetry of William Blake and John Donne, and, in a quality that is sometimes undervalued, the instincts of a great pulp novelist. The result is an endlessly fascinating book poised halfway between calculated bestseller and major novel, and all the better for that underlying tension.
Which is why it pains me as a writer to say that as good as the book is, the movie is better. Part of this is due to the inherent differences in the way we experience movies and popular fiction: for detailed character studies, novels have the edge, but for a character who is seen mostly from the outside, as an enigma, nothing in Harris prepares us for what Anthony Hopkins does with Hannibal Lecter, even if it amounts to nothing more than a few careful acting decisions for his eyes and voice. It’s also an example of how a popular novel can benefit from an intelligent, respectful adaptation. Over time, Ted Tally’s fine screenplay has come to seem less like a variation on Harris’s novel than a superlative second draft: Tally keeps all that is good in the book, pares away the excesses, and even improves the dialogue. (It’s the difference between eating a census taker’s liver with “a big Amarone” and “a nice Chianti.”)
And while the movie is a sleeker, more streamlined animal, it still benefits from the novel’s strangeness. For better or worse, The Silence of the Lambs created an entire genre—the sleek, modern serial killer movie—but like most founding works, it has a fundamental oddity that leaves it out of place among its own successors. The details of its crimes are horrible, but what lingers are its elegance, its dry humor, and the curious rhythms of its central relationship, which feels like a love story in ways that Hannibal made unfortunately explicit. It’s genuinely concerned with women, even as it subjects them to horrible fates, and in its look and mood, it’s a work of stark realism shading inexorably into a fairy tale. That ability to combine strangeness with ruthless efficiency is the greatest thing a thriller in any medium can do. Few movies, or books, have managed it since, even after twenty years of trying.
A few months ago, after greatly enjoying The Conversations, Michael Ondaatje’s delightful book-length interview with Walter Murch, I decided to read Ondaatje’s The English Patient for the first time. I went through it very slowly, only a handful of pages each day, in parallel with my own work on the sequel to The Icon Thief. Upon finishing it last week, I was deeply impressed, not just by the writing, which had drawn me to the book in the first place, but also by the novel’s structural ingenuity—derived, Ondaatje says, from a long process of rewriting and revision—and the richness of its research. This is one of the few novels where detailed historical background has been integrated seamlessly into the poetry of the story itself, and it reflects a real, uniquely novelistic curiosity about other times and places. It’s a great book.
Reading The English Patient also made me want to check out the movie, which I hadn’t seen in more than a decade, since watching it as part of a special screening for a college course. I recalled admiring it, although in a rather detached way, and found that I didn’t remember much about the story, aside from a few moments and images (and the phrase “suprasternal notch”). But I sensed it would be worth revisiting, both because I’d just finished the book and because I’ve become deeply interested, over the past few years, in the career of editor Walter Murch. Murch is one of film’s last true polymaths, an enormously intelligent man who just happened to settle into editing and sound design, and The English Patient, for which he won two Oscars (including the first ever awarded for a digitally edited movie), is a landmark in his career. It was with a great deal of interest, then, that I watched the film again last night.
First, the good news. The adaptation, by director Anthony Minghella, is very intelligently done. It was probably impossible to film Ondaatje’s full story, with its impressionistic collage of lives and memories, in any kind of commercially viable way, so the decision was wisely made to focus on the central romantic episode, the doomed love affair between Almásy (Ralph Fiennes) and Katharine Clifton (Kristin Scott Thomas). Doing so involved inventing a lot of new, explicitly cinematic material, some satisfying (the car crash and sandstorm in the desert), some less so (Almásy’s melodramatic escape from the prison train). The film also makes the stakes more personal: the mission of Caravaggio (Willem Dafoe) is less about simple fact-finding, as it was in the book, than about revenge. And the new ending, with Almásy silently asking Hana (Juliette Binoche) to end his life, gives the film a sense of resolution that the book deliberately lacks.
These changes, while extensive, are smartly done, and they respect the book while acknowledging its limitations as source material. As Roger Ebert points out in his review of Apocalypse Now, another milestone in Murch’s career, movies aren’t very good at conveying abstract ideas, but they’re great for showing us “the look of a battle, the expression on a face, the mood of a country.” On this level, The English Patient sustains comparison with the works of David Lean, with a greater interest in women, and remains, as David Thomson says, “one of the most deeply textured of films.” Murch’s work, in particular, is astonishing, and the level of craft on display here is very impressive.
Yet the pieces don’t quite come together. The novel’s tentative, intellectual nature infects the movie as well, even though the adaptation never tries to match it. The result feels like an art film that has willed itself into being an epic romance, when in fact the great epic romances need to be a little vulgar—just look at Gone With the Wind. Doomed romances may obsess their participants in real life, but in fiction, seen from the outside, they can seem silly or absurd. The English Patient understands a great deal about the craft of the romantic epic, the genre in which it has chosen to plant itself, but nothing of its absurdity. In the end, it’s just too intelligent, too beautifully made, to move us on more than an abstract level. It’s a heroic effort; I just wish it were something a little more, or a lot less.
In his book A New Theory of Urban Design, which was published thirty years ago, the architect Christopher Alexander opens with a consideration of the basic problem confronting all city planners. He draws an analogy between the process of urban design and that of creating a work of art or studying a biological organism, but he also points out their fundamental differences:
With a city, we don’t have the luxury of either of these cases. We don’t have the luxury of a single artist whose unconscious process will produce wholeness spontaneously, without having to understand it—there are simply too many people involved. And we don’t have the luxury of the patient biologist, who may still have to wait a few more decades to overcome his ignorance.
What happens in the city, happens to us. If the process fails to produce wholeness, we suffer right away. So, somehow, we must overcome our ignorance, and learn to understand the city as a product of a huge network of processes, and learn just what features might make the cooperation of these processes produce a whole.
And wherever he writes “city,” you can replace it with any complicated system—a nation, a government, an environmental crisis—that seems too daunting for any individual to affect on his or her own, and toward which it’s easy to despair over our own helplessness, especially, as Alexander notes, when it’s happening to us.
Alexander continues: “We must therefore learn to understand the laws which produce wholeness in the city. Since thousands of people must cooperate to produce even a small part of a city, wholeness in the city will only be created to the extent that we can make these laws explicit, and can then introduce them, openly, explicitly, into the normal process of urban development.” We can pause here to note that this is as good an explanation as any of why rules play a role in all forms of human activity. It’s easy to fetishize or dismiss the rules to the point where we overlook why they exist in the first place, but you could say that they emerge whenever we’re dealing with a process that is too complicated for us to wing it. Some degree of improvisation enters into much of what we do, and in many cases—when we’re performing a small task for the first time with minimal stakes—it’s fine to make it up as we go along. The larger, more important, or more complex the task, however, the more useful it becomes to have a few guidelines on which we can fall back whenever our intuition or conscience fails us. Rules are nice because they mean that we don’t constantly have to reason from first principles whenever we’re faced with a choice. They often need to be amended, supplemented, or repealed, and we should never stop interrogating them, but they’re unavoidable. Every time we discard a rule, we implicitly replace it with another. And it can be hard to strike the right balance between a reasonable skepticism of the existing rules and an understanding of why they’re pragmatically good to have around.
Before we can develop a set of rules for any endeavor, however, it helps to formulate what Alexander calls “a single, overriding rule” that governs the rest. It’s worth quoting him at length here, because the challenge of figuring out a rule for urban design is much the same as that for any meaningful project that involves a lot of stakeholders:
The growth of a town is made up of many processes—processes of construction of new buildings, architectural competitions, developers trying to make a living, people building additions to their houses, gardening, industrial production, the activities of the department of public works, street cleaning and maintenance…But these many activities are confusing and hard to integrate, because they are not only different in their concrete aspects—they are also guided by entirely different motives…One might say that this hodgepodge is highly democratic, and that it is precisely this hodgepodge which most beautifully reflects the richness and multiplicity of human aspirations.
But the trouble is that within this view, there is no sense of balance, no reasonable way of deciding how much weight to give the different aims within the hodgepodge…For this reason, we propose to begin entirely differently. We propose to imagine a single process…one which works at many levels, in many different ways…but still essentially a single process, in virtue of the fact that it has a single goal.
And Alexander arrives at a single, overriding rule that is so memorable that I seem to think about it all the time: “Every increment of construction must be made in such a way as to heal the city.”
But it isn’t hard to understand why this rule isn’t more widely known. It’s difficult to imagine invoking it at a city planning meeting, and it has a mystical ring to it that I suspect makes many people uncomfortable. Yet this is less a shortcoming in the rule itself than a reflection of the kind of language we need if we hope to develop an intuition about which other rules to follow. Alexander argues that most of us have “a rather good intuitive sense” of what this rule means, and he points out: “It is, therefore, a very useful kind of inner voice, which forces people to pay attention to the balance between different goals, and to put things together in a balanced fashion.” The italics are mine. Human beings have trouble keeping all of their own rules in their heads at once, much less those that apply to others, so our best bet is to develop an inner voice that will guide us when we don’t have ready access to the rules for a specific situation. (As David Mamet says of writing: “Keep it simple, stupid, and don’t violate the rules that you do know. If you don’t know which rule applies, just don’t muck up the more general rules.”) Most belief systems amount to an attempt to cultivate that voice, and if Alexander’s advice has a religious overtone, it’s because we tend to associate such admonitions with the contexts in which they’ve historically arisen. “Love your enemies” is one example. “Desire is suffering” is another. Such precepts naturally give rise to other rules, which lead in turn to others, and one of the shared dangers in city planning and religion is the failure to remember the underlying purpose when faced with a mass of regulations. Ideally, they serve as a system of best practices, but they often have no greater goal than to perpetuate themselves. And as Alexander points out, it isn’t until you’ve taken the time to articulate the one rule that governs the rest that you can begin to tell the difference.
The mingling of object and image in collage, of given fact and conscious artifice, corresponds to the illusion-producing processes of contemporary civilization. In advertisements, news stories, films, and political campaigns, lumps of unassailable data are implanted in preconceived formats in order to make the entire fabrication credible. Documents waved at hearings by Joseph McCarthy to substantiate his fictive accusations were a version of collage, as is the corpse of Lenin, inserted by Stalin into the Moscow mausoleum to authenticate his own contrived ideology. Twentieth-century fictions are rarely made up of the whole cloth, perhaps because the public has been trained to have faith in “information.” Collage is the primary formula of the aesthetics of mystification developed in our time.
Lord Rowton…says that he once asked Disraeli what was the most remarkable, the most self-sustained and powerful sentence he knew. Dizzy paused for a moment, and then said, “Sufficient unto the day is the evil thereof.”
—Augustus J.C. Hare, The Story of My Life
Disraeli was a politician and a novelist, which is an unusual combination, and he knew his business. Politics and writing have less to do with each other than a lot of authors might like to believe, and the fact that you can create a compelling world on paper doesn’t mean that you can do the same thing in real life. (One of the hidden themes of Astounding is that the skills that many science fiction writers acquired in organizing ideas on the page turned out to be notably inadequate when it came to getting anything done during World War II.) Yet both disciplines can be equally daunting and infuriating to novices, in large part because they both involve enormously complicated projects—often requiring years of effort—that need to be approached one day at a time. A single day’s work is rarely very satisfying in itself, and you have to cling to the belief that countless invisible actions and compromises will somehow result in something real. It doesn’t always happen, and even if it does, you may never get credit or praise. The ability to deal with the everyday tedium of politics or writing is what separates professionals from amateurs. And in both cases, the greatest accomplishments are usually achieved by freaks who can combine an overarching vision with a finicky obsession with minute particulars. As Eugène-Melchior de Vogüé, who was both a diplomat and literary critic, said of Tolstoy, it requires “a queer combination of the brain of an English chemist with the soul of an Indian Buddhist.”
And if you go into either field without the necessary degree of patience, the results can be unfortunate. If you’re a writer who can’t subordinate yourself to the routine of writing on a daily basis, the most probable outcome is that you’ll never finish your novel. In politics, you end up with something very much like what we’ve all observed over the last few weeks. Regardless of what you might think about the presidential refugee order, its rollout was clearly botched, thanks mostly to a president and staff that want to skip over all the boring parts of governing and get right to the good stuff. And it’s tempting to draw a contrast between the incumbent, who achieved his greatest success on reality television, and his predecessor, a detail-oriented introvert who once thought about becoming a novelist. (I’m also struck, yet again, by the analogy to L. Ron Hubbard. He spent most of his career fantasizing about a life of adventure, but when he finally got into the Navy, he made a series of stupid mistakes—including attacking two nonexistent submarines off the coast of Oregon—that ultimately caused him to be stripped of his command. The pattern repeated itself so many times that it hints at a fundamental aspect of his personality. He was too impatient to deal with the tedious reality of life during wartime, which failed to live up to the version he had dreamed of himself. And while I don’t want to push this too far, it’s hard not to notice the difference between Hubbard, who cranked out his fiction without much regard for quality, and Heinlein, a far more disciplined writer who was able to consciously channel his own natural impatience into a productive role at the Philadelphia Navy Yard.)
Which brings us back to the sentence that impressed Disraeli. It’s easy to interpret it as an admonition not to think about the future, which isn’t quite right. We can start by observing that it comes at the end of what The Five Gospels notes is possibly “the longest connected discourse that can be directly attributed to Jesus.” It’s the one that asks us to consider the birds of the air and the lilies of the field, which, for a lot of us, prompts an immediate flashback to The Life of Brian. (“Consider the lilies?” “Uh, well, the birds, then.” “What birds?” “Any birds.” “Why?” “Well, have they got jobs?”) But whether or not you agree with the argument, it’s worth noticing that the advice to focus on the evils of each day comes only after an extended attempt at defining a larger set of values—what matters, what doesn’t, and what, if anything, you can change by worrying. You’re only in a position to figure out how best to spend your time after you’ve considered the big questions. As the physician William Osler put it:
[My ideal is] to do the day’s work well and not to bother about tomorrow. You may say that is not a satisfactory ideal. It is; and there is not one which the student can carry with him into practice with greater effect. To it more than anything else I owe whatever success I have had—to this power of settling down to the day’s work and trying to do it well to the best of my ability, and letting the future take care of itself.
This has important implications for both writers and politicians, as well as for progressives who wonder how they’ll be able to get through the next twenty-four hours, much less the next four years. When you’re working on any important project, even the most ambitious agenda comes down to what you’re going to do right now. In On Directing Film, David Mamet expresses it rather differently:
Now, you don’t eat a whole turkey, right? You take off the drumstick and you take a bite of the drumstick. Okay. Eventually you get the whole turkey done. It’ll probably get dry before you do, unless you have an incredibly good refrigerator and a very small turkey, but that is outside the scope of this lecture.
A lot of frustration in art, politics, and life in general comes from attempting to swallow the turkey in one bite. Jesus, I think, was aware of the susceptibility of his followers to grandiose but meaningless gestures, which is why he offered up the advice, so easy to remember and so hard to follow, to simultaneously focus on the given day while keeping the kingdom of heaven in mind. Nearly every piece of practical wisdom in any field is about maintaining that double awareness. Fortunately, it goes in both directions: small acts of discipline aid us in grasping the whole, and awareness of the whole tells us what to do in the moment. As R.H. Blyth says of Zen: “That is all religion is: eat when you are hungry, sleep when you are tired.” And don’t try to eat the entire turkey at once.
“Jean Renoir once suggested that most true creators have only one idea and spend their lives reworking it,” the director Peter Greenaway said in an interview a quarter of a century ago. “But then very rapidly he added that most people don’t have any ideas at all, so one idea is pretty amazing.” I haven’t been able to find the original version of this quote, but it remains true enough even if we attribute it to Greenaway himself, who might otherwise not seem to have much in common with Renoir. Over time, I’ve come to sympathize with the notion that the important thing for an artist is to have an idea, as long as it’s a good one. This wasn’t always how I felt. In college, I was deeply impressed by Isaiah Berlin’s The Hedgehog and the Fox, in which he drew a famous contrast between writers who are hedgehogs, with one overarching obsession that they pursue for all their lives, and the foxes who move restlessly from one idea to another. (Berlin took his inspiration from a fragment of Archilochus—“The fox knows many things, but the hedgehog knows one big thing”—which may mean nothing more than the fact that the fox, for all its cleverness, is ultimately defeated by the hedgehog’s one good defense of rolling itself into a ball.) My natural loyalty at the time was to such foxes as Shakespeare, Joyce, and Pushkin, as much as I came to love such hedgehogs as Dante and Proust. That’s probably how it should be at twenty, when most of us, as Berlin writes, “lead lives, perform acts, and entertain ideas that are centrifugal rather than centripetal.”
Even at the time, however, I sensed that there was a difference between a truly omnivorous intelligence and the simple inability to make up one’s mind. And as I’ve grown older, I’ve begun to feel more respect for the hedgehogs. (It’s worth noting, by the way, that this classification really only makes sense when applied to exceptional creative geniuses. For the rest of us, identifying as a fox is more likely to become an excuse for a lack of fixed ideas, while a hedgehog’s perspective can become indistinguishable from tunnel vision. I’m neither a hedgehog nor a fox, and neither are you—we’re just trying to muddle along and make sense of the world as best as we can.) It takes courage to devote your entire career to a single idea, more so, in many ways, than building it around the act of creation itself. Neither approach is inherently better than the other, but both have their associated pitfalls. When you stick to one idea, you run the obvious risk of being unable to change your mind even if you’re wrong, and of distorting the evidence around you to fit your preconceived notions. But the danger of throwing in your lot with process is no less real. It can result in the sort of empty technical facility that levels all values until they become indistinguishable, and it can lead you astray just as surely as a fixation on a single argument can. These wrong turns may last just a year or two, rather than a lifetime, but a life made up of twenty dead ends in succession isn’t that much different than one spent tunneling for decades in the wrong direction. You wind up repeating the same behaviors in an endless cycle of tiny variations, and if it were a movie, you could call it Hedgehog Day.
I don’t mean to denigrate the acquisition of technical experience, which is a difficult and honorable calling in itself. But it’s necessary to remember that once we become competent in any art, the skills that we’ve acquired are largely fungible, and we become part of a stratum of practitioners who are mostly interchangeable with others at the same level. You can see this most clearly in the movies, which is the medium in which financial and market pressures tend to equalize talent the most ruthlessly. It’s rare to see a film these days that isn’t shot, lit, mixed, and scored with a high degree of proficiency, simply because the competition within those fields is so intense, and based solely on ability, that any movie with a reasonable budget can get excellent craftspeople to fill those roles. It’s in the underlying idea and its execution that films tend to fall short. (There are countless examples, but the one that has been on my mind the most is Batman v. Superman. There’s a perfectly legitimate story that could be told by a film of that title—in which Superman stands for unyielding law and order and Batman represents a more ambiguous form of vigilante justice—but the movie, for whatever reason, declines to use it. Instead, it tries to graft its showdown onto the alien messiah narrative of Man of Steel, which isn’t a bad concept in itself: it just happens to be fundamentally incompatible with the ethical conflict between these two superheroes. Zack Snyder has a great eye, the cast is excellent, and the technical elements are all exquisite. But it’s a movie so misconceived that it could only have been saved by throwing out the entire script and starting again.)
Good ideas, as I’ve often said before, are cheap, but the ones worthy of fueling a great novel or movie or even a lifetime are indescribably precious, and the whole point of developing technical proficiency is to defend those ideas from those who would destroy them, even inadvertently. There’s a reason why screenwriting is the one aspect of filmmaking that doesn’t seem to have advanced at all over the last century. It’s because most studio executives wouldn’t dream of trying to interfere with sound mixing, lighting, or cinematography, but they also believe that their story ideas are as good as anyone else’s. This attitude is particularly stark in the movies, but it’s present in almost any field where ideas are evaluated less on their own merits than on their convenience to the structures that are already in place. We claim to value ideas, but we’re all too willing to drop or ignore uncomfortable truths, or, even more damagingly, to quietly replace them with their counterfeit equivalents. Even a hedgehog needs to be something of a fox to keep an idea alive in the face of all the forces that would oppose it or kill it with indifference. Not every belief is worth fighting or dying for, and history is full of otherwise capable men and women—John W. Campbell among them—who sacrificed their reputations on the altar of an unexamined idea. We need to be willing to change course in light of new evidence and to be as crafty as Odysseus to find our way home. But all that cleverness and tenacity and tactical brilliance become worthless if they aren’t given shape by a clear vision, even if it’s a modest one. Not all of us can be hedgehogs or foxes. But we can’t afford to be ostriches, either.
I think America is going through a paroxysm of rage…But I think there’s going to be a happy ending in November.
—Steven Spielberg, to Sky News, July 17, 2016
Last month, Steven Spielberg celebrated his seventieth birthday. Just a few weeks later, Yale University Press released Steven Spielberg: A Life in Films by the critic Molly Haskell, which has received a surprising amount of attention for a relatively slender book from an academic publisher, including a long consideration by David Denby in The New Yorker. I haven’t read Haskell’s book, but it seems likely that its reception is partially a question of good timing. We’re in the mood to talk about Spielberg, and not just because of his merits as a filmmaker or the fact that he’s entering the final phase of his career. Spielberg, it’s fair to say, is the most quintessentially American of all directors, despite a filmography that ranges freely between cultures and seems equally comfortable in the past and in the future. He’s often called a mythmaker, and if there’s a place where his glossy period pieces, suburban landscapes, and visionary adventures meet, it’s somewhere in the nation’s collective unconscious: its secret reveries of what it used to be, what it is, and what it might be again. Spielberg country, as Stranger Things was determined to remind us, is one of small towns and kids on bikes, but it also still vividly remembers how it beat the Nazis, and it can’t keep from turning John Hammond from a calculating billionaire into a grandfatherly, harmless dreamer. No other artist of the last half century has done so much to shape how we feel about ourselves. He took over where Walt Disney left off. But what has he really done?
To put it in the harshest possible terms, it’s worth asking whether Spielberg—whose personal politics are impeccably liberal—is responsible in part for our current predicament. He taught the New Hollywood how to make movies that force audiences to feel without asking them to think, to encourage an illusion of empathy instead of the real thing, and to create happy endings that confirm viewers in their complacency. You can’t appeal to all four quadrants, as Spielberg did to a greater extent than anyone who has ever lived, without consistently telling people exactly what they want to hear. I’ve spoken elsewhere of how film serves as an exercise ground for the emotions, bringing us closer on a regular basis to the terror, wonder, and despair that many of us would otherwise experience only rarely. It reminds the middle class of what it means to feel pain or awe. But I worry that when we discharge these feelings at the movies, it reduces our capacity to experience them in real life, or, even more insidiously, makes us think that we’re more empathetic and compassionate than we actually are. Few movies have made viewers cry as much as E.T., and few have presented a dilemma further removed from anything a real person is likely to face. (Turn E.T. into an illegal alien being sheltered from a government agency, maybe, and you’d be onto something.) Nearly every film from the first half of Spielberg’s career can be taken as a metaphor for something else. But great popular entertainment has a way of referring to nothing but itself, in a cognitive bridge to nowhere, and his images are so overwhelming that it can seem superfluous to give them any larger meaning.
If Spielberg had been content to be nothing but a propagandist, he would have been the greatest one who ever lived. (Hence, perhaps, his queasy fascination with the films of Leni Riefenstahl, who has affinities with Spielberg that make nonsense out of political or religious labels.) Instead, he grew into something that is much harder to define. Jaws, his second film, became the most successful movie ever made, and when he followed it up with Close Encounters, it became obvious that he was in a position with few parallels in the history of art—he occupied a central place in the culture and was also one of its most advanced craftsmen, at a younger age than Damien Chazelle is now. If you’re talented enough to assume that role and smart enough to stay there, your work will inevitably be put to uses that you never could have anticipated. It’s possible to pull clips from Spielberg’s films that make him seem like the cuddliest, most repellent reactionary imaginable, of the sort that once prompted Tony Kushner to say:
Steven Spielberg is apparently a Democrat. He just gave a big party for Bill Clinton. I guess that means he’s probably idiotic…Jurassic Park is sublimely good, hideously reactionary art. E.T. and Close Encounters of the Third Kind are the flagship aesthetic statements of Reaganism. They’re fascinating for that reason, because Spielberg is somebody who has just an astonishing ear for the rumblings of reaction, and he just goes right for it and he knows exactly what to do with it.
Kushner, of course, later became Spielberg’s most devoted screenwriter. And the total transformation of the leading playwright of his generation is the greatest testament imaginable to this director’s uncanny power and importance.
In reality, Spielberg has always been more interesting than he had any right to be, and if his movies have been used to shake people up in the dark while numbing them in other ways, or to confirm the received notions of those who are nostalgic for an America that never existed, it’s hard to conceive of a director of his stature for whom this wouldn’t have been the case. To his credit, Spielberg clearly grasps the uniqueness of his position, and he has done what he could with it, in ways that can seem overly studied. For the last two decades, he has worked hard to challenge some of our assumptions, and at least one of his efforts, Munich, is a masterpiece. But if I’m honest, the film that I find myself thinking about the most is Indiana Jones and the Temple of Doom. It isn’t my favorite Indiana Jones movie—I’d rank it a distant third. For long stretches, it isn’t even all that good. It also trades in the kind of casual racial stereotyping that would be unthinkable today, and it isn’t any more excusable because it deliberately harks back to the conventions of an earlier era. (The fact that it’s even watchable now only indicates how much ground East and South Asians have yet to cover.) But its best scenes are so exciting, so wonderful, and so conducive to dreams that I’ve never gotten over it. Spielberg himself was never particularly pleased with the result, and if asked, he might express discomfort with some of the decisions he made. But there’s no greater tribute to his artistry, which executed that misguided project with such unthinking skill that he exhilarated us almost against his better judgment. It tells us how dangerous he might have been if he hadn’t been so deeply humane. And we should count ourselves lucky that he turned out to be as good a man as he did, because we’d never have known if he hadn’t.
Note: I’m discussing the origins of “Retention,” the episode that I wrote for the audio science fiction anthology series The Outer Reach. It’s available for streaming here on the Howl podcast network, and you can get a free month of access by using the promotional code REACH.
One of the unsung benefits of writing for film, television, or radio is that it requires the writer to conform to a fixed format on the printed page. The stylistic conventions of the screenplay originally evolved for the sake of everyone but the screenwriter: it’s full of small courtesies for the director, actors, sound editor, production designer, and line producer, and in theory, it’s supposed to result in one minute of running time per page—although, in practice, the differences between filmmakers and genres make even this rule of thumb almost meaningless. But it offers certain advantages for writers, too, even if it’s mostly by accident. It can be helpful for authors to force themselves to work within the typeface, margins, and arbitrary formatting rules that the script imposes: it leaves them with minimal freedom except in the choice of the words themselves. Because all the dialogue is indented, you can see the balance between talk and action at a glance, and you eventually develop an intuition about how a good script should look when you flip rapidly through the pages. (The average studio executive, I suspect, rarely does much more than this.) Its typographical constraints amount to a kind of poetic form, and you find yourself thinking in terms of the logic of that space. As the screenwriter Terry Rossio put it:
In retrospect, my dedication—or my obsession—toward getting the script to look exactly the way it should, no matter how long it took—that’s an example of the sort of focus one needs to make it in this industry…If you find yourself with this sort of obsessive behavior—like coming up with inventive ways to cheat the page count!—then, I think, you’ve got the right kind of attitude to make it in Hollywood.
When it came time to write “Retention,” I was looking forward to working within a new template: the radio play. I studied other radio scripts and did my best to make the final result look right. This was more for my own sake than for anybody else’s, and I’m pretty sure that my producer would have been happy to get a readable script in any form. But I had a feeling that it would be helpful to adapt my habitual style to the standard format, and it was. In many ways, this story was a more straightforward piece of writing than most: it’s just two actors talking with minimal sound effects. Yet the stark look of the radio script, which consists of nothing but numbered lines of dialogue alternating between characters, had a way of clarifying the backbone of the narrative. Once I had an outline, I began by typing the dialogue as quickly as I could, almost in real time, without even indicating the identities of the speakers. Then I copied and pasted the transcript—which is how I came to think of it—into the radio play template. For the second draft, I found myself making small changes, as I always do, so that the result would look good on the page, rewriting lines to make for an even right margin and tightening speeches so that they wouldn’t fall across a page break. My goal was to come up with a document that would be readable and compelling in itself. And what distinguished it from my other projects was that I knew that it would ultimately be translated into performance, which was how its intended audience would experience it.
I delivered a draft of the script to Nick White, my producer, on January 8, 2016, which should give you a sense of how long it takes for something like this to come to fruition. Nick made a few edits, and I did one more pass on the whole thing, but we essentially had a finished version by the end of the month. After that, there was a long stretch of waiting, as we ran the script past the Howl network and began the process of casting. It went out to a number of potential actors, and it wasn’t until September that Aparna Nancherla and Echo Kellum came on board. (I also finally got paid for the script, which was noteworthy in itself—not many similar projects can afford to pay their writers. The amount was fairly modest, but it was more than reasonable for what amounted to a week of work.) In November, I got a rough cut of the episode, and I was able to make a few small suggestions. Finally, on December 21, it premiered online. All told, it took about a year to transform my initial idea into fifteen minutes of audio, so I was able to listen to the result with a decent amount of detachment. I’m relieved to say that I’m pleased with how it turned out. Casting Aparna Nancherla as Lisa, in particular, was an inspired touch. And although I hadn’t anticipated the decision to process her voice to make it more obvious from the beginning that she was a chatbot, on balance, I think that it was a valid choice. It’s probably the most predictable of the story’s twists, and by tipping it in advance, it serves as a kind of mislead for listeners, who might catch onto it quickly and conclude, incorrectly, that it was the only surprise in store.
What I found most interesting about the whole process was how it felt to deliver what amounted to a blueprint of a story for others to execute. Playwrights and screenwriters do it all the time, but for me, it was a novel experience: I may not be entirely happy with every story I’ve published, but they’re all mine, and I bear full responsibility for the outcome. “Retention” gave me a taste, in a modest way, of how it feels to hand an idea over to someone else, and of the peculiar relationship between a script and the dramatic work itself. Many aspiring screenwriters like to think that their vision on the page is complete, but it isn’t, and it has to pass through many intermediaries—the actors, the producer, the editor, the technical team—before it comes out on the other side. On balance, I prefer writing my own stuff, but I came away from “Retention” with valuable lessons that I expect to put into practice, whether or not I write for audio again. (I’m hopeful that there will be a second season of The Outer Reach, and I’d love to be a part of it, but its future is still up in the air.) I’ve spent most of my career worrying about issues of clarity, and in the case of a script, this isn’t an abstract goal, but a strategic element that can determine how faithfully the story is translated into its final form. Any fuzzy thinking early on will only be magnified in subsequent stages, so there’s a huge incentive for the writer to make the pieces as transparent and logical as possible. This is especially true when you’re providing a sketch for someone else to finish, but it also applies when you’re writing for ordinary readers, who are doing nothing else, after all, but turning the story into a movie in their heads.
When I was younger, there was a period in which I seriously considered becoming a clown. To understand why, you need to know two things. The first is that a clown lives out of a trunk. I was probably about six years old when I saw the Ringling Bros. and Barnum & Bailey Circus for the first time—it was the year that featured the notorious “living unicorn”—and I don’t remember much about the show itself. What I recall most vividly is the souvenir program, which I brought home and read to pieces. It was packed with information about the performers and their lives, but the tidbit that made the greatest impression on me was the fact that they’re always on the road: they travel by train, and if you want to be a clown, you need to fit all your possessions into that trunk. As a kid, I was always fantasizing about running off to live with nothing, relying on luck and my wits, and this seemed like the ultimate example. It fascinated me for the same reason that I’ve always been intrigued by buskers, except that a clown doesn’t work alone: he’s part of a community of circus folk with their own language and traditions who have managed to survive while moving from one gig to the next. I’ve written here before of how ephemeral the career of a dancer can seem, but clowning takes it to another level. It lacks even the superficial glamor of dance, leaving you with nothing but the life that Homer Simpson once lamented: “When I started this clown thing, I thought it would be nothing but glory. You know, the glory of being a clown?”
The other important thing about clowns is that they have a college. (Or at least they did when I was growing up, although the original Ringling Bros. and Barnum & Bailey Clown College closed its doors nearly twenty years ago.) I first read about clown college in that souvenir program, and I don’t think I’ve ever gotten over the discovery that it existed. Even now, I can recite the description of the curriculum almost from memory. Tuition was free, but students had to pay for their own room, board, and greasepaint. Subjects included costume and makeup design, tumbling, acrobatics, pantomime, juggling, stilt walking, and the history of comedy. The one catch was that if the circus offered you a contract at the end of the term, you were obliged to accept it for a year. Its graduates, I later learned, included Penn Jillette, Bill Irwin, and David Strathairn. But what struck me the most was that this was a place where instructors and students could come together to discuss something as peculiar as the theory and practice of clowning. When I look back at it, it seems possible that it was my first exposure to the idea of college of any kind, and the basic appeal of it never changed, even when I traded my fantasies of Venice, Florida, for Cambridge, Massachusetts. You could argue that learning how to become a clown is more practical than majoring in creative writing or film studies. And in each case, it allows an unlikely community to form around people who are otherwise persistently odd.
It’s that sense of collective effort in the pursuit of strangeness, I think, that makes the circus so enticing. When we talk about running off to join the circus, what we’re actually saying is that we want to leave our responsibilities behind and join a troupe of like-minded individuals: free artists of themselves who require nothing but a vacant lot in order to put on a show. If I was saddened by the recent news that Ringling Bros. is closing after well over a century of operation, it’s because I feel a sense of loss at the end of the dream that it represented. There were aspects of the circus that deserved to be retired, and I wasn’t sorry when they finally put an end to their animal acts. But I never dreamed about being a lion tamer. I identified with the clowns, the trapeze artists, the acrobats, the contortionists, and all the others who symbolized the romance of devoting a life to a form of art that is inherently transient. To some extent, this is true of every artist, but what sets the circus apart is that its performers do it together, on the road, and for years on end. Directors like Fellini and Max Ophüls have been instinctively drawn to circus imagery, because it captures something fundamental about what they do for a living: they’re ringmasters with the ability to harness chaos for just long enough to make a movie. Yet this isn’t quite the same thing. A film, in theory, is something permanent, but a circus is over as soon as the show ends, and to make it last, you have to keep up the act forever.
And I’ve gradually come to realize that I did become a clown, at least in all the ways that count. (As Werner Herzog observes in Werner Herzog Eats His Shoe: “As you see [filmmaking] makes me into a clown. And that happens to everyone—just look at Orson Welles or look at even people like Truffaut. They have become clowns.”) I spent four years at college studying two dead languages that I haven’t used since graduation, which is either a cosmic joke in itself or an acknowledgment that the knowledge you acquire is less important than the fact that you’ve pursued it in the company of others. Later, I left my job to become a writer, an activity that I’ve since begun to understand is as ephemeral, in some respects, as that of a clown or ballet dancer: few of its fruits last for any longer than the time it takes to write them down, and you’re left with nothing but the process. Along the way, I’ve successively joined and departed from various communities of people who share the same goals. We’ve never traveled on a real train together, but we’re all bound for a common destination, and we’ve developed the same set of strategies to get there. The promise of the circus is that you can get paid for being a clown, if you’re willing to sacrifice every practical consideration and assume every indignity along the way, and that you’re not alone. In the end, the joke might be on you. But the joke is ultimately on all of us. And maybe the clowns are the only ones sane enough to understand this.
Over the last few decades, we’ve seen a series of mostly unheralded technological and cultural developments that have allowed movies to be shaped more like novels—that is, as works of art that remain malleable and open to revision almost up to the last minute. Digital editing tools allow for cuts and rearrangements to be made relatively quickly, and they open up the possibility of even more sophisticated adjustments. The Girl With the Dragon Tattoo, for instance, includes shots in which an actor’s performance from one take was invisibly combined with another, while the use of high-definition digital video made it possible to crop the frame, recenter images, and even create camera movement where none existed before. In the old days, the addition of new material in postproduction was mostly restricted to voiceovers that played over an existing shot to clarify a plot point, or to pickup shots, which are usually inserts that can be filmed on the cheap. (It would be amusing to make a list of close-up shots of hands in movies that actually belong to the editor or director. I can think of two examples off the top of my head: The Conversation and The Usual Suspects.) In some cases, you can get the main cast back for new scenes, and directors like Peter Jackson, who learned how useful it could be to have an actor constantly available during the shooting of The Lord of the Rings, have begun to set aside a few weeks in their schedules explicitly for reshoots.
As I was writing this post, my eye was caught by an article in the New York Times that notes that the famous subway grate scene in The Seven Year Itch was actually a reshoot, which reminds us that this isn’t anything new. But it feels more like a standard part of the blockbuster toolbox than it ever did before, and along with the resources provided by digital editing, it means that movies can be continually refined almost up to the release date. (It’s worth noting, of course, that the full range of such tools is available only to big tentpole pictures, which means that millions of dollars are required to recreate the kind of creative freedom that every writer possesses while sitting alone at his or her desk.) But we still tend to associate reshoots with a troubled production. Reports of new footage being shot dogged Rogue One for most of the summer, and the obvious rejoinder, which was made at the time, was that such reshoots are routine. In fact, the truth was a bit more complicated. As the Hollywood Reporter pointed out, the screenwriter Tony Gilroy was initially brought in for a rewrite, but his role quickly expanded:
Tony Gilroy…will pocket north of $5 million for his efforts…[He] first was brought in to help write dialogue and scenes for Rogue’s reshoots and was being paid $200,000 a week, according to several sources. That figure is fairly normal for a top-tier writer on a big-budget studio film. But as the workload (and the reshoots) expanded, so did Gilroy’s time and paycheck.
The article continued: “Gilroy started on Rogue One in June, and by August, he was taking a leading role with Edwards in postproduction, which lasted well into the fall. The reshoots are said to have tackled several issues in the film, including the ending.” This is fairly unprecedented, at least in the way it’s being presented here. You’ll occasionally hear about one director taking over for another, but rarely about someone helming the reshoots for a comparable salary while receiving nothing but a writing credit. In part, this may be a matter of optics: Disney wouldn’t have wanted to openly replace the director for such an important release. It may also reflect Tony Gilroy’s peculiar position in Hollywood. Gilroy never seemed particularly content as a screenwriter, but his track record as a director has been mixed, so he responded by turning himself into a professional fixer, like a Robert Towne upgraded for the digital age. And the reshoots appear to have been both unusually extensive and conceived with a writerly touch. As editor John Gilroy—Tony’s brother—told Yahoo Movies UK:
[The reshoots] gave you the film that you see today. I think they were incredibly helpful. The story was reconceptualized to some degree, there were scenes that were added at the beginning and fleshed out. We wanted to make more of the other characters, like Cassian’s character, and Bodhi’s character…The scene with Cassian’s introduction with the spy, Bodhi traipsing through Jedha on his way to see Saw, these are things that were added. Also Jyn, how we set her up and her escape from the transporter, that was all done to set up the story better.
The editor Colin Goudie added: “The point with the opening scenes that John was just describing was that the introductions in the opening scene, in the prologue, [were] always the same. Jyn’s just a little girl, so when you see her as an adult, what you saw initially was her in a meeting. That’s not a nice introduction. So having her in prison and then a prison breakout, with Cassian on a mission…everybody was a bit more ballsy, or a bit more exciting, and a bit more interesting.” In other words, the new scenes didn’t just clarify what was already there, but brought out character points that didn’t exist at all, which is exactly the sort of thing that a writer does in a rewrite. And it worked. Rogue One can feel a little mechanical at times, but all of the pieces come together in a satisfying way, and it has a cleaner and more coherent narrative line than The Force Awakens. The strategies that it used to get there, from the story reel to the reshoot, were on a larger scale than usual, but that was almost certainly due to the tyranny of the calendar. Even more than its predecessor, Rogue One had to come out on schedule and live up to expectations: it’s the film that sets the pattern for an annual Star Wars movie between now and the end of time. The editorial team’s objective was to deliver it in the window available, and they succeeded. (Goudie notes that the first assembly was just ten minutes longer than the final cut, thanks largely to the insights that the story reel provided—it bought them time at the beginning that they could cash in at the end.) Every film requires some combination of time, money, and ingenuity, and as Rogue One demonstrates, any two of the three can be used to make up for a lack of the third. As Goudie concludes: “It was like life imitating art. Let’s get a band of people and put them together on this secret mission.”