Posts Tagged ‘William Goldman’
When I’m looking for insights into writing, I often turn to the nonliterary arts, and the one that I’ve found the most consistently stimulating is film editing. This is partially because the basic problem that a movie editor confronts—the arrangement and distillation of a huge mass of unorganized material into a coherent shape—is roughly analogous to what a writer does, but at a larger scale and under conditions of greater scrutiny and pressure, which encourages the development of pragmatic technical solutions. This was especially true in the era before digital editing. As Walter Murch, my hero, has pointed out, one minute of film equals a pound of celluloid. A movie like Apocalypse Now generates something like seven tons of raw footage, so an editor, as Murch notes, needs “a strong back and arms.” At the same time, incredibly, he or she also has to keep track of the location of individual frames, which weigh just a few thousandths of an ounce. With such software tools as Final Cut Pro, this kind of bookkeeping becomes much easier, and I doubt that many professional editors are inclined to be sentimental about the old days. But there’s also a sense in which wrestling with celluloid required habits of mind and organization that are slowly being lost. In A Guide for the Perplexed, which I once described as the first book I’d recommend to anyone about almost anything, Werner Herzog writes:
I can edit almost as fast as I can think because I’m able to sink details of fifty hours of footage into my mind. This might have something to do with the fact that I started working on film, when there was so much celluloid about the place that you had to know where absolutely every frame was. But my memory of all this footage never lasts long, and within two days of finishing editing it becomes a blur in my mind.
On a more practical level, editing a movie means keeping good notes, and all editors eventually come up with their own system. Here’s how Herzog describes his method:
The way I work is to look through everything I have—very quickly, over a couple of days—and make notes. For all my films over the past decade I have kept a logbook in which I briefly describe, in longhand, the details of every shot and what people are saying. I know there’s a particularly wonderful moment at minute 4:13 on tape eight because I have marked the description of the action with an exclamation point. These days my editor Joe Bini and I just move from one exclamation point to the next; anything unmarked is almost always bypassed. When it comes to those invaluable clips with three exclamation marks, I tell Joe, “If these moments don’t appear in the finished film, I have lived in vain.”
What I like about Herzog’s approach to editing is its simplicity. Other editors, including Murch, keep detailed notes on each take, but Herzog knows that all he has to do is flag it and move on. When the time comes, he’ll remember why it seemed important, and he has implicit faith in the instincts of his past self, which he trusts to steer him in the right direction. It’s like blazing a trail through the woods. A few marks on a tree or a pile of stones, properly used, are all you need to indicate the path, but instead of trying to communicate with hikers who come after you, you’re sending a message to yourself in the future. As Herzog writes: “I feel safe in my skills of navigation.”
Reading Herzog’s description of his editorial notes, I realized that I do much the same thing with the books that I read for my work, whether it’s fiction or nonfiction. Whenever I go back to revisit a source, I’ll often see underlinings or other marks that I left on a previous pass, and I naturally look at those sections more closely, in order to remind myself why they seemed to matter. (I’ve learned to mark passages with a single vertical line in the outer margin, which allows me to flip quickly through the book to scan for key sections.) The screenwriter William Goldman describes a similar method of signaling to himself in his great book Which Lie Did I Tell?, in which he talks about the process of adapting novels to the screen:
Here is how I adapt and it’s very simple: I read the text again. And I read it this time with a pen in my hand—let’s pick a color, blue. Armed with that, I go back to the book, slower this time than when I was a traveler. And as I go through the book word by word, page by page, every time I hit anything I think might be useful—dialogue line, sequence, description—I make a mark in the margin…Then maybe two weeks later, I read the book again, this time with a different color pen…And I repeat the same marking process—a line in the margin for anything I think might make the screenplay…When I am done with all my various color-marked readings—five or six of them—I should have the spine. I should know where the story starts, where it ends. The people should be in my head now.
Goldman doesn’t say this explicitly, but he implies that if a passage struck him on multiple passes, which he undertook at different times and states of mind, it’s likely to be more useful than one that caught his eye only once. Speaking of a page in Stephen King’s novel Misery that ended up with six lines in the margin—it’s the scene in which Annie cuts off Paul’s foot—Goldman writes: “It’s pretty obvious that whatever the spine of the piece was, I knew from the start it had to pass through this sequence.”
And a line or an exclamation point is sometimes all you need. Trying to keep more involved notes can even be a hindrance: not only do they slow you down, but they can distort your subsequent impressions. If a thought is worth having, it will probably occur to you each time you encounter the same passage. You often won’t know its true significance until later, and in the meantime, you should just keep going. (This is part of the reason why Walter Mosley recommends that writers put a red question mark next to any unresolved questions in the first draft, rather than trying to work them out then and there. Stopping to research something the first time around can easily turn into a form of procrastination, and when you go back, you may find that you didn’t need it at all.) Finally, it’s worth remembering that an exclamation point, a line in the margin, or a red question mark is subtly different on paper than on a computer screen. There are plenty of ways to flag sections in a text document, and I often use the search function in Microsoft Word that allows me to review everything I’ve underlined. But having a physical document that you periodically mark up in ink has benefits of its own. When you repeatedly go back to the same book, manuscript, or journal over the course of a project, you find that you’ve changed, but the pages have stayed the same. It starts to feel like a piece of yourself that you’ve externalized and put in a safe place. You’ll often be surprised by the clues that your past self has left behind, like a hobo leaving signs for others, or Leonard writing notes to himself in Memento, and it helps if the hints are a little opaque. Faced with that exclamation point, you ask yourself: “What was I thinking?” And there’s no better way to figure out what you’re thinking right now.
Note: Spoilers follow for the Westworld episode “The Stray.”
There’s a clever moment in the third episode of Westworld when Teddy, the clean-cut gunslinger played by James Marsden, is finally given a backstory. Teddy has spoken vaguely of a guilty secret in his past, but when he’s pressed for the details, he doesn’t elaborate. That’s the mark of a good hero. As William Goldman points out in his wonderful book Which Lie Did I Tell?, protagonists need to have mystery, and when you give them a sob story, here’s what happens:
They make [him] a wimp. They make him a loser. He’s just another whiny asshole who went to pieces when the gods pissed on him. “Oh, you cannot know the depth of my pain” is what that seems to be saying to the audience. Well, if I’m in that audience, what I think is this: Fuck you. I know people who are dying of cancer, I know people who are close to vegetables, and guess what—they play it as it lays.
Of course, we know that Teddy is really an android, and if he doesn’t talk about his past, it’s for good reason: as Dr. Ford, his creator, gently explains, the writers never bothered to give him one. With a few commands on a touchscreen, a complete backstory is uploaded into his system, and Teddy sets off on a doomed quest in pursuit of his old enemy, Wyatt, against whom he has sworn undying revenge. We don’t know how this plot thread ties into the rest of Dr. Ford’s plan, but we can only assume that it’s going somewhere—and it’s lucky for him that he had a convenient hero available to fill that role.
There are several levels of sly commentary here. When you’re writing a television show—or a series of novels—you want to avoid filling in anybody’s backstory for as long as possible. Part of the reason, as Goldman notes above, is to maintain a sense of mystery, and for the sake of narrative momentum, it makes sense to avoid dwelling on what happened before the story began. But it’s also a good idea to keep this information in your back pocket for when you really need it. If you know how to deploy it strategically, backstory can be very useful, and it can get you out of trouble or provide a targeted nudge when you need to push the plot in a particular direction. If you’re too explicit about it too soon, you narrow your range of options. (You also make it harder for viewers to project their own notions onto the characters, which is what Westworld, the theme park, is all about.) I almost wish that Westworld had saved this moment with Teddy for later in the show’s run, which would underline its narrative point. We’re only a third of the way through the first season, but within the world of the show itself, the park has been running for decades with the same generic storylines. Dr. Ford has a few ideas about how to shake things up, and Teddy is a handy blank slate. Television showrunners make that sort of judgment call all the time. In the internal logic of the park, this isn’t the first season, but more like its fifth or sixth, when a scripted drama tends to go off the rails, and the accumulation of years of backstory starts to feel like a burden.
“The Stray,” in fact, is essentially about backstory, on the level both of the park and of the humans who are running it. Shortly after filling in the details of Teddy’s past, Dr. Ford does exactly the same thing for himself: he delivers a long, not entirely convincing monologue about a mysterious business partner, Arnold, who died in the park and was later removed from its corporate history. At the end of the speech, he looks at Bernard, his head of programming, and tells him that he knows how much his son’s death still haunts him. It’s a little on the nose, but I think it’s supposed to be. It makes us wonder if Bernard might unknowingly be a robot himself, a la Blade Runner, and whether his flashbacks of his son are just as artificial as Teddy’s memories of Wyatt. I hope that this isn’t the big twist, if only because it seems too obvious, but in a way, it doesn’t really matter. Bernard may or may not be a robot, but there’s no question that Bernard, Dr. Ford, and all the other humans in sight are characters on a show called Westworld, and whatever backstories they’ve been given by Jonathan Nolan and Lisa Joy are as calculated as the ones that the androids have received. Even if Bernard’s memories are “real,” we’re being shown them for a reason. (It helps that Dr. Ford and Bernard are played by Anthony Hopkins and Jeffrey Wright, two actors who are good at giving technically exquisite performances that draw subtle attention to their own artifice. Wright’s trademark whisper—he’s like a man of great passion who refuses to raise his voice—draws the viewer into a conspiracy with the actor, as if he’s letting us in on a secret.)
The trouble with this reading, of course, is that it allows us to excuse instances of narrative sloppiness under the assumption that the series is deliberately commenting on itself. I’m willing to see Dr. Ford’s speech about Arnold as a winking nod to the tendency of television shows to dispense backstory in big infodumps, but I’m less sure about the moment in which he berates a lab technician for covering up a robot’s naked body and slashes at the android’s face. It doesn’t seem like the Dr. Ford of the pilot, talking nostalgically to Old Bill in storage, and while we’re presumably supposed to see him as a man of contradictions, it feels more like a juxtaposition of two character beats that weren’t meant to be so close together. (I have a hunch that it also reflects Hopkins’s availability: the show seems to have him for about two scenes per episode, which means that it has to do in five minutes what might have been better done in ten.) Westworld, as you might expect from a show from one of the Nolan brothers, has more ideas than it knows how to handle: it hurries past a reference to Julian Jaynes’s The Origin of Consciousness in the Breakdown of the Bicameral Mind so quickly that it’s as if the writers just want to let us know that they’ve read the book. But I still have faith in this show’s potential. When Teddy is ignominiously killed yet again by Wyatt’s henchmen, it forces Dolores to face the familiar attackers in her own storyline by herself—an ingenious way of getting her to where she needs to be, but also a reminder, I think, of how the choices that a storyteller makes in one place can have unexpected consequences somewhere else. It’s a risk that all writers take. And Westworld is playing the same tricky game as the characters whose stories it tells.
Earlier this week, I devoured the long, excellent article by Josef Adalian and Maria Elena Fernandez of Vulture on the business of peak television. It’s full of useful insights and even better gossip—and it names plenty of names—but there’s one passage that really caught my eye, in a section about the huge salaries that movie stars are being paid to make the switch to the small screen:
A top agent defends the sums his clients are commanding, explaining that, in the overall scheme of things, the extra money isn’t all that significant. “Look at it this way,” he says. “If you’re Amazon and you’re going to launch a David E. Kelley show, that’s gonna cost $4 million an episode [to produce], right? That’s $40 million. You can have Bradley Whitford starring in it, [who is] gonna cost you $150,000 an episode. That’s $1.5 million of your $40 million. Or you could spend another $3.5 million [to get Costner] on what will end up being a $60 million investment by the time you market and promote it. You can either spend $60 [million] and have the Bradley Whitford show, or $63.5 [million] and have the Kevin Costner show. It makes a lot of sense when you look at it that way.”
With all due apologies to Bradley Whitford, I found this thought experiment fascinating, and not just for the reasons that the agent presumably shared it. It implies, for one thing, that television—which is often said to be overtaking Hollywood in terms of quality—is becoming more like feature filmmaking in another respect: it’s the last refuge of the traditional star. We frequently hear that movie stardom is dead and that audiences are drawn more to franchises than to recognizable faces, so the fact that cable and streaming networks seem intensely interested in signing film stars, in a post-True Detective world, implies that their model is different. Some of it may be due to the fact, as William Goldman once said, that no studio executive ever got fired for hiring a movie star: as the new platforms fight to establish themselves, it makes sense that they’d fall back on the idea of star power, which is one of the few things that corporate storytelling has ever been able to quantify or understand. It may also be because the marketing strategy for television inherently differs from that for film: an online series is unusually dependent on media coverage to stand out from the pack, and signing a star always generates headlines. Or at least it once did. (The Vulture article notes that Woody Allen’s new series for Amazon “may end up marking peak Peak TV,” and it seems a lot like a deal that was made for the sake of the coverage it would produce.)
But the most plausible explanation lies in simple economics. As the article explains, Netflix and the other streaming companies operate according to a “cost-plus” model: “Rather than holding out the promise of syndication gold, the company instead pays its studio and showrunner talent a guaranteed up-front profit—typically twenty or thirty percent above what it takes to make a show. In exchange, it owns all or most of the rights to distribute the show, domestically and internationally.” This limits the initial risk to the studio, but also the potential upside: nobody involved in producing the show itself will see any money on the back end. In addition, it means that even the lead actors of the series are paid a flat dollar amount, which makes them a more attractive investment than they might be for a movie. Most of the major stars in Hollywood earn gross points, which means that they get a cut of the box office receipts before the film turns a profit—a “first dollar” deal that makes the mathematics of breaking even much more complicated. The thought experiment about Bradley Whitford and Kevin Costner only makes sense if you can get Costner at a fixed salary per episode. In other words, movie stars are being actively courted by television because its model is a throwback to an earlier era, when actors were held under contract by a studio without any profit participation, and before stars and their agents negotiated better deals that ended up undermining the economic basis of the star system entirely.
And it’s revealing that Costner, of all actors, appears in this example. His name came up mostly because multiple sources told Vulture that he was offered $500,000 per episode to star in a streaming series: “He passed,” the article says, “but industry insiders predict he’ll eventually say ‘yes’ to the right offer.” But he also resonates because he stands for a kind of movie stardom that was already on the wane when he first became famous. It has something to do with the quintessentially American roles that he liked to play—even JFK is starting to seem like the last great national epic—and an aura that somehow kept him in leading parts two decades after his career as a major star was essentially over. That’s weirdly impressive in itself, and it testifies to how intriguing a figure he remains, even if audiences aren’t likely to pay to see him in a movie. Whenever I think of Costner, I remember what the studio executive Mike Medavoy once claimed to have told him right at the beginning of his career:
“You know,” I said to him over lunch, “I have this sense that I’m sitting here with someone who is going to become a great big star. You’re going to want to direct your own movies, produce your own movies, and you’re going to end up leaving your wife and going through the whole Hollywood movie-star cycle.”
Costner did, in fact, end up leaving his first wife. And if he also leaves film for television, even temporarily, it may reveal that “the whole Hollywood movie-star cycle” has a surprising final act that few of us could have anticipated.
“When I start a play, I’ll think, does it matter if this character is a man or a woman?” David Lindsay-Abaire once said. “And if it doesn’t, I make it a woman.” I do pretty much the same thing. And I’d like to think that we both take this approach for an utterly unsentimental reason: it results in better stories. There’s a tendency for writers, male and female alike, to use male characters as default placeholders, especially in genres that have traditionally been dominated by men. By systematically visualizing women instead—even if it’s nothing more than an initial sketch—you’ve already redirected your thought processes at a slightly different angle, which can only be good for the outcome. Whenever I read stories from the golden age of science fiction, I’m struck by the absence of women, which seems less like a sin than a mistake. It’s hard to think of a story from that era that wouldn’t have been improved by turning half of the men into women, without any other revisions aside from the relevant pronouns, as was done, much later, with Ripley in Alien. And I would have addressed this advice squarely to those pragmatic hacks who were only interested in making a living. There are so few writing rules of any value that a professional ought to utilize anything that works on a consistent basis, and the fact that so many of the women we see in these stories are either love interests or secretaries, even in the far future, feels like a missed opportunity.
There’s even a handy empirical test that you can use to verify this. Take a story from any genre in which the genders of the main characters are mostly irrelevant—that is, in which you could rewrite most of the men as women, or vice versa, while leaving the overall plot unchanged. Now mentally change a few of the men into women. The result, in most cases, is more interesting: it generates registers of meaning that weren’t there before. Now mentally turn some of the women in the original story into men. I’m willing to bet that it has the net opposite result: it actually saps the narrative of interest, and makes the whole thing flatter and duller. If you don’t believe me, just try it a few times. Even better, do it when you’re constructing a story, and see which version you like better. In the book Which Lie Did I Tell?, the screenwriter William Goldman writes:
I remember once being in an office with a studio guy and a couple of people were sitting around, fighting the story. And one of the people said this: “What if they’re all women?” Now the story, as I remember, was a male adventure flick. And the studio guy commented on that—“This is an adventure movie here, how stupid a suggestion is that?” Naturally the writer was finished for that day.
The truth, as Goldman points out, is that it was an excellent idea: “Making them all women opened up the world. I use it a lot myself now.” And that’s all the more reason to do it automatically at the earliest possible stage.
Which isn’t to say that you can just change the names and pronouns and be done with it. This exercise is only useful if you follow through on the implications that come with making a character a woman, especially in a genre like suspense, which defines itself so casually in terms of action and violence. In my novels, you could change most of the women to men without affecting the main outlines of the plot, but there would be a real loss of meaning. In part, this is because I unconsciously situated these characters in worlds in which women face particular challenges. For Maddy, it was the world of art and finance; for Wolfe, of law enforcement; and for Asthana, of thieves and criminals. These tensions are mostly just implied, but I’d like to think that they quietly affect the way we see these characters, who are enriched by the choices they must have made before the story began. In retrospect, this explains, for instance, why Wolfe is so much more interesting than Alan Powell, to whom I devoted a third of The Icon Thief before mostly shelving him in Eternal Empire. Wolfe would have had to prove herself in ways that someone like Powell never would, and it shows, even if it’s unstated. And I have a hunch that my endless struggles with Powell as a character might have been avoided entirely if I’d done the logical thing and made him a woman as well.
There’s another missed chance in this series, and it involves the character of Asthana. The only time I come close to exploring the peculiar position she holds—as a woman of color in a criminal world—is in Chapter 48 of Eternal Empire, in which she enters a house in Sochi occupied entirely by Russian thieves. Her thoughts turn briefly to the fact that she’ll always be regarded as an outsider, and I try to show how she establishes herself in the pecking order by being a little smarter than the men around her. But I don’t do nearly enough. Part of this is simply due to a lack of space, and to the fact that it felt more important to define Asthana in relation to Wolfe. Still, her presence here raises a lot of questions that go mostly unanswered, and I can’t help but feel that I could have touched on them more. (If I were doing it all over again today, I would have remembered what Christopher McQuarrie says about Rebecca Ferguson’s character in Mission: Impossible—Rogue Nation: “They’re not men. They’re women that are not trying to be men…You’re here on your own terms and you’re in a shitty situation created by people in power above you. How do you escape this situation and maintain your dignity?”) If anything, the result would have made Asthana an even more formidable antagonist for Wolfe. And although there’s a showdown coming soon between these two women, the most interesting parts of this story will mostly remain unspoken…
A few days ago, my wife sent me a link to “Jamie and Jeff’s Note to the Babysitter,” a McSweeney’s piece by Paul William Davies. I thought it was hilarious, both because I’ve written similar letters myself and because it’s a true rarity: a properly constructed page of humorous writing that fully develops its funny conceit from start to finish. Like many of its peers, it basically takes the form of a list, a format that the Harvard Lampoon pioneered decades ago, but unlike most, it doesn’t rely on that framework as an excuse to string together a loose series of unrelated gags. Instead, it benefits from the fact that its central idea lends itself naturally to the list structure, and above all from its last line, which Davies clearly knows is gold. Like Vijith Assar’s very different but equally excellent “Interactive Guide to Ambiguous Grammar”—which is probably my favorite McSweeney’s piece ever—it has a punchline. And that makes all the difference. (The lack of a punchline is why so many “Shouts and Murmurs” pieces in The New Yorker seem to wither away into nothing: they tend to suffer from what I’ve elsewhere identified as that magazine’s distrust of neat endings, which leads to articles that conclude at the most arbitrary place imaginable, as if the writer had suffered a stroke before typing the final paragraph.)
And it got me thinking about the power of the punchline, not just to end a piece on a strong note, but to enable everything that comes before it. In his commentary track for Mission: Impossible—Rogue Nation, Christopher McQuarrie talks at length about the challenges involved in structuring the fantastic sequence set at the Vienna Opera House. I’ve watched it maybe five times now, and it gets better with every viewing: I’m convinced that if it had been directed by, say, Brian DePalma, we’d already be calling it one of the most virtuosic scenes that the genre has ever produced. It’s an immensely complicated piece of suspense with simultaneous action unfolding on three or four different levels, and it was evidently a nightmare to stage and edit. But McQuarrie had an ace up his sleeve. The moment when Ethan has to figure out how to save the Chancellor of Austria from two different assassins, with only a single bullet at his disposal, is priceless, and the whole crazy machine builds to that punchline. McQuarrie knew it would work. And although I don’t think he says so explicitly, he obviously felt liberated to indulge in such a teasingly long, complex set piece because he had that destination in mind. (And he probably wishes he’d done the same with the rest of the movie, the ending of which was being constantly rewritten even as the film was being shot—not that you can tell from the final result.)
A punchline, in short, can reach backward in a work of art to allow for greater flexibility in the journey, which is something that most writers eventually learn. In Adventures in the Screen Trade, William Goldman makes the same point in a discussion of the famous twenty-minute chase in Butch Cassidy and the Sundance Kid:
There were two reasons I wrote it so long. One: I felt without such an implacable, irresistible enemy, the move to South America wouldn’t wash. Two: I wrote it so long because I had the confidence to be able to do it. And that confidence was born of one thing—I knew the Sundance Kid couldn’t swim…
When you have what you hope is gold in your hands, you can ruin it all by poor placement. If, for example, when Butch and Sundance were fording the stream on their way to Hole-in-the-Wall, Butch had said, “Why do you always get nervous around water?” and Sundance had said, “Because I can’t swim,” that wouldn’t have been so smart.
So I saved it for the moment just before the jump off the cliff. In point of fact, the entire Superposse chase is structured toward that moment. I was positive that no matter how badly the chase as a whole might be done, the swimming revelation, followed by the jump off the cliff, would save me. The jump was, had to be, surefire.
In other words, when you know you’ve got a good punchline, you’re free to develop what comes before it in the fashion it deserves. The opposite point also holds true: when you don’t know where you’re going, you’re more likely to flail around, casting about for ways to make the action more “interesting” when you lack a basic end point. I always try to keep a residue of unresolved problems—to borrow a phrase from the film editor Walter Murch—throughout the writing process, but I also know more or less where a story will conclude, and whenever I’ve broken that rule, as in my short story “Cryptids,” I think the weakness shows. On the plus side of the column, I allowed myself to take The Icon Thief into strange byways because I knew that the ending, in which Maddy breaks into the installation at the Philadelphia Museum of Art, would be memorable no matter what I did, and a story like “The Whale God” hinges almost entirely on its killer last line. And while writing my first radio script, for a project that I hope to be able to discuss in more detail soon, I gained confidence from the knowledge that the ending would work. A good punchline is a great thing in itself, but it’s even more valuable as a kind of seed crystal that shapes the preceding material before the reader is even aware of it, so that the ending comes as both surprising and inevitable. Or in the words of David Mamet: “Turn the thing around in the last two minutes, and you can live quite nicely. Turn it around in the last ten seconds and you can buy a house in Bel Air.”
Difficult to see. Always in motion is the future.
—Yoda, The Empire Strikes Back
At some point over the next few hours, perhaps as you’re reading this post, The Force Awakens is projected to surge past Avatar to become the highest-grossing movie in the history of the North American box office. We usually don’t adjust such figures for inflation, of course, probably because there wouldn’t be as many records broken each year if we did, and it’s all but certain that the original Star Wars will remain tops in the franchise in terms of tickets sold. Yet it’s impossible to discount this achievement. If the latest installment continues on its present trajectory, it has a good chance of cracking the adjusted top ten of all time—it would need to gross somewhere north of $948 million domestic to exceed Snow White and the Seven Dwarfs and earn a spot on that rarefied list, and this is starting to feel like a genuine possibility. Given the changes in the entertainment landscape over the last century, this is beyond flabbergasting. But even this doesn’t get at the real, singular nature of what we’re witnessing today. The most unexpected thing about the success of The Force Awakens is how expected it was. And at a time when Hollywood is moving increasingly toward a tentpole model in which a handful of blockbusters finance all the rest, it represents both a historic high point for the industry and an accomplishment that we’re unlikely to ever see again.
When you look at the lineage of the most successful films at the domestic box office, you have to go back seventy-five years to find a title that even the shrewdest industry insider could have reasonably foreseen. This list, unadjusted for inflation, consists of Gone With the Wind, The Sound of Music, The Godfather, Jaws, Star Wars, E.T., Titanic, and Avatar. Gone With the Wind, which claimed the title that The Birth of a Nation had won a quarter of a century earlier, is the one exception: there’s no doubt that David O. Selznick hoped that it could be the biggest film of its era, even before the first match had been struck for the burning of Atlanta. Every other movie here is a headscratcher. No studio insider at the time would have been willing to bet that The Sound of Music—which Pauline Kael later called The Sound of Money—would outgross not just Doctor Zhivago and Thunderball that year, but every other movie ever made. The Godfather and Jaws were both based on bestselling novels, but that’s hardly a guarantee of success, and both were troubled productions with untested directors at the helm. Star Wars itself hardly needs to be discussed here. Columbia famously passed on E.T., and Titanic was widely regarded before its release as a looming disaster. And even Avatar, which everyone thought would be huge, exceeded all expectations: when you take regression to the mean into account, the idea that James Cameron could break his own record is so implausible that I have a hard time believing it even now.
Which is just another way of saying that these movies were all outliers: unique, idiosyncratic projects, not part of any existing franchise, that audiences discovered gradually, often to the bewilderment of the studios themselves. The Force Awakens was different. It had barely been announced before pundits were speculating that it could set the domestic record, and although Disney spent much of the buildup to its opening weekend downplaying such forecasts—with the implication that rival studios were inflating projections to make its final performance seem disappointing—it’s hard to believe that the possibility hadn’t crossed everybody’s mind. Most movie fans will remember that William Goldman said “Nobody knows anything” in Adventures in the Screen Trade, but it’s worth quoting the relevant paragraph in full. After noting that everyone in town except for Paramount turned down Raiders of the Lost Ark, he continues:
Why did Paramount say yes? Because nobody knows anything. And why did all the other studios say no? Because nobody knows anything. And why did Universal, the mightiest studio of all, pass on Star Wars, a decision that may just cost them, when all the sequels and spinoffs and toy money and book money and video-game money are totaled, over a billion dollars? Because nobody, nobody—not now, not ever—knows the least goddam thing about what is or isn’t going to work at the box office.
If Hollywood has learned anything since, it’s that you don’t pass on Star Wars. Whatever you might think of its merits as a movie, The Force Awakens marks the one and only time that somebody knew something. And it’s probably the last time, too. It may turn into the reassuring bedtime story that studio executives use to lull themselves to sleep, and Disney may plan on releasing a new installment on an annual basis forever, but the triumphant rebirth of the franchise after ten years of dormancy—or three decades, depending on how you feel about the prequels—is the kind of epochal moment that the industry is doing its best to ensure never happens again. We aren’t going to have another chance to miss Star Wars, because it isn’t going to go away, and the excitement that arose around its return can’t be repeated. The Force Awakens is both the ultimate vindication of the blockbuster model and a high-water mark that will make everything that follows seem like diminishing returns. (More insidiously, it may be the Jedi mind trick that convinces the studios that they know more than they do, which can only lead to heartbreak.) Records are made to be broken, and at some point in my lifetime, another movie will take the crown, if only because inflation will proceed to a point where the mathematics become inevitable. But it won’t be a Star Wars sequel. And it won’t be a movie that anyone, not even a Jedi, can see coming.