I cannot listen to Mahler’s Ninth Symphony with anything like the old melancholy mixed with the high pleasure I used to take from this music. There was a time, not long ago, when what I heard, especially in the final movement, was an open acknowledgement of death and at the same time a quiet celebration of the tranquility connected to the process. I took this music as a metaphor for reassurance, confirming my own strong hunch that the dying of every living creature, the most natural of all experiences, has to be a peaceful experience. I rely on nature. The long passages on all the strings at the end, as close as music can come to expressing silence itself, I used to hear as Mahler’s idea of leave-taking at its best. But always, I have heard this music as a solitary, private listener, thinking about death.
Now I hear it differently. I cannot listen to the last movement of the Mahler Ninth without the door-smashing intrusion of a huge new thought: death everywhere, the dying of everything, the end of humanity. The easy sadness expressed with such gentleness and delicacy by that repeated phrase on faded strings, over and over again, no longer comes to me as old, familiar news of the cycle of living and dying…If I were very young, sixteen or seventeen years old, I think I would begin, perhaps very slowly and imperceptibly, to go crazy…If I were sixteen or seventeen years old, I would not feel the cracking of my own brain, but I would know for sure that the whole world was coming unhinged. I can remember with some clarity what it was like to be sixteen. I had discovered the Brahms symphonies. I knew that there was something going on in the late Beethoven quartets that I would have to figure out, and I knew that there was plenty of time ahead for all the figuring I would ever have to do. I had never heard of Mahler. I was in no hurry. I was a college sophomore and had decided that Wallace Stevens and I possessed a comprehensive understanding of everything needed for a life…
The man on television, Sunday midday, middle-aged and solid, nice-looking chap, all the facts at his fingertips, more dependable looking than most high-school principals, is talking about civilian defense, his responsibility in Washington. It can make an enormous difference, he is saying. Instead of the outright death of eighty million American citizens in twenty minutes, he says, we can, by careful planning and practice, get that number down to only forty million, maybe even twenty…If I were sixteen or seventeen years old and had to listen to that, or read things like that, I would want to give up listening and reading. I would begin thinking up new kinds of sounds, different from any music heard before, and I would be twisting and turning to rid myself of human language.
Note: Major spoilers follow for the entire run of Westworld.
“The Adversary” is far from a bad hour of television, but it’s one of the weaker episodes of Westworld. We’re just past the halfway point of the season, which is when a show has to start focusing on its endgame, and in practice, this often means that we get an installment devoted to what showrunners call “laying pipe,” or setting up information that will pay off later on. There’s a lot of material being delivered to the viewer here, but it lacks some of the urgency of earlier installments, and on an emotional level, it’s more detached than usual. (The exception is the gorgeous silent sequence that leans heavily on an orchestral version of Radiohead’s heartbreaking “Motion Picture Soundtrack,” a musical crutch that I’ll forgive because it’s so effective.) For the most part, though, it puts advancing the mystery ahead of spending time with the characters, and when we look back at the season as a whole, I have a feeling it will turn out to have been structurally necessary. I like all the intrigue surrounding the maze, the acts of industrial espionage in the park, and the enigmatic figure of Arnold—which are beginning to look as if they’re just different aspects of the same thing. But it’s all fairly standard for a series like this, and it isn’t the reason I keep watching. Westworld has so much going on, both for good and for bad, that its mystery box aspects seem less like the main attraction than like a convenient spine. And it means that the show sometimes has to take care of a few practical matters to prepare for the big finish.
What surprised me the most about the episode, though, was the reason I found it a little less compelling than usual. It was the absence of Dolores. She’s obviously an important figure—she’s the show’s nominal lead, no less—and her journey is central to the overall arc of the season. If you’d asked me if she was my favorite character, though, I would have said that she wasn’t: I get more pleasure out of our time with Bernard. But if you take her out of an episode entirely, something interesting happens. Westworld, like Game of Thrones, is an ensemble series that spends much of its time checking in on various groups of characters, which means that you often won’t see important players at all, or only for a minute or two. And it’s only in their absences that you start to figure out who is truly essential. When Bernard was offscreen for most of last week, except for a brief conversation with Elsie, I was aware that I missed him, but it didn’t detract from the rest of the story. With Dolores gone, it’s as if the engine of the show has been removed. It’s surprising, because her scenes with William and Logan haven’t exactly jumped off the screen, and her storyline is the one area where the show seems to be stalling, because it’s clearly saving her big moments for closer to the end. But Dolores’s gradual movement toward consciousness is such a crucial thread that removing it leaves the show feeling a bit like Game of Thrones at its worst: a collection of scenes without a center. We aren’t supposed to identify with Dolores, exactly, but she’s the most dynamic character in sight, and her evolution is what gives the series its narrative thrust.
This is why I’m wary of the popular fan theory, which has been exhaustively discussed online, that the show is taking place in different timelines. The gist of the argument, in case you haven’t heard it, is that the scenes involving Dolores, William, and Logan are flashbacks that are occurring more than thirty years before the rest of the show, and that William is really a younger version of the Man in Black. Its proponents bolster their case using details like the two different versions of the Westworld park logo, the changing typeface on a can of condensed milk, and the fact that we never see William or Logan interacting with any of the other human characters. There’s plenty of evidence to the contrary, but nothing that can’t be explained away in isolation as a deliberate mislead, and I don’t think the conspiracy theorists will give up until William and the Man in Black meet face to face. It’s a clever reading, and it isn’t inconsistent with what we know about the past tactics of creator Jonathan Nolan. For all I know, it may turn out to be true. It’s certainly a better surprise than most shows have managed. But I hope it isn’t what’s really happening here—and for many of the same reasons that I gave above. Dolores’s story is the heart of the series, and placing her scenes with William three decades earlier makes nonsense of the show’s central conceit: that Dolores is slowly edging her way toward greater self-awareness because she’s been growing all this time. The flashback theory implies that she was already experiencing flashes of deeper consciousness almost from the beginning, which requires us to throw out most of what we know about her so far.
This isn’t always a bad thing, and some of the most effective twists in the history of storytelling have forced the audience to radically revise what it thinks it knows about the protagonist. But I think it would be a mistake here. It has the advantage of turning William, who has been kind of a bore, into a vastly more interesting figure, but only at the cost of making Dolores considerably less interesting—a puppet of the plot, rather than a character who can drive the narrative forward in her own right. It’s possible that this may turn out to be a commentary on her lack of agency as a robot: the series might be fooling us into reading more into Dolores than we should, just like William does, which would be an inspired trick indeed. But Dolores is such a load-bearing character that I’m worried that the show would lose more than it gained by the reveal. Her story may be nothing but a bridge that can be blown to smithereens as soon as the other characters have crossed safely to the other side, as James Joyce memorably put it. But I’m skeptical. As “The Adversary” demonstrates, when you remove Dolores from the equation, you end up with a show that provides memorable moments but little in the way of an overarching shape. (The scene in which Maeve blackmails Felix and Sylvester into making her more intelligent only highlights how much more intriguing Dolores’s organic discovery of her true nature has been.) The multiple timeline theory, as described, would remove the Dolores we know from the story forever. It would be a fantastic twist. But I’m not sure the show could survive it.
Yesterday, I was leafing through my copy of The Conversations: Walter Murch and the Art of Editing Film, in which the novelist Michael Ondaatje interviews the movie editor whom Lawrence Weschler has called “the smartest person in America.” Murch, who worked on many of the films of Francis Ford Coppola and directed Return to Oz, has long been one of my heroes, and it’s worth listening to just about everything he says. (When my wife recently asked me if I could stand to hear anyone talk for four hours straight, I mentioned Murch first, followed by David Mamet and Werner Herzog.) As I was browsing through the book last night, however, I came across a line that I didn’t remember reading before:
As I’ve gone through life, I’ve found that your chances for happiness are increased if you wind up doing something that is a reflection of what you loved most when you were somewhere between nine and eleven years old.
I was very moved by this, because I’ve often thought the same thing. In the past, I’ve said that my ideal reader is myself in fifth grade—which doesn’t mean that I’m writing for kids—and that I judge my life by how closely it lives up to the hopes and expectations of that eleven-year-old. And although I haven’t always met that high standard, it’s still the closest thing that I have to a reliable moral compass.
Murch evidently agrees, but he also goes much further in identifying why this would be true. He continues:
At that age, you know enough of the world to have opinions of things, but you’re not old enough yet to be overly influenced by the crowd or by what other people are doing or what you think you “should” be doing. If what you do later on ties into that reservoir in some way, then you are nurturing some essential part of yourself. It’s certainly been true in my case. I’m doing now, at fifty-eight, almost exactly what most excited me when I was eleven.
And I think he’s getting at something immensely important here. The ages between nine and eleven strike me as a precious island of rationality, in its deepest and most meaningful sense. A boy of ten is a miniature adult in a lot of ways: it’s an age at which he is able to systematically follow up on his interests without much in the way of outside guidance, which may explain why the obsessions that he acquires around that time can be so lasting. For a few years, he’s thinking independently: he’s old enough to know that there’s more to the world than the toys and television shows that his schoolmates happen to like, and still young enough that he hasn’t started to feel anxious about his own preferences. In the language of biology, which obviously plays a central role here, it’s the narrow window of time in which the brain has achieved a certain structural maturity, but it hasn’t been taken over by puberty yet.
As Murch implies, it’s the choices that we make in that relatively objective life stage that reflect who we really are. A lot of complications are around the corner, which isn’t necessarily a bad thing—they’re the individual experiences that make us special, even if they assemble themselves in ways that we can’t control. I’ve noted before that I’m essentially the product of a handful of books, movies, and other media that I happened to encounter around the age of thirteen, but I don’t think I’ve ever made the connection with the more profound turning point that occurred a few years earlier. By the time I was ten, I knew that I wanted to be a writer, but for the specifics of how that would look, I had to wait until the world had given me a unique set of material. Elsewhere, I’ve described this process as a random one, but that isn’t really true: you’re exposed to dozens or hundreds of discrete influences in your early teens, and if five or six of them survive to shape who you are as an adult, that isn’t arbitrary at all. The result is such a useful source of insight about what truly matters to us that we probably should try to access those memories of ourselves more diligently. I haven’t accomplished everything I’ve tried to do, and I’ve got my share of regrets. But if I’ve been relatively happy in my work and life, it’s because I combined the goals that I set for myself at the age of ten with the pieces that stuck in my head when I was thirteen, as refined by the perspective of an adult. The closer I’ve kept to that standard, the happier I’ve been, and whenever I’ve strayed, I’ve been forcibly corrected.
The trouble, of course, is that the ages between nine and thirteen are exactly the ones that our culture tends to neglect. We’ve never been able to figure out what to do with kids in middle school, in part because they present such a wide range of development that there’s no single approach that makes sense, and perhaps because we’re still too traumatized by our own memories to look at it very closely. It’s also possible—and while I don’t want to believe this, I can’t rule it out entirely—that the neglect is intentional. Adolescence enforces conformity and undermines a lot of dreams, and I doubt many people get out of high school with their childhood ideals still intact. (If anything, it takes a conscious effort, in college and afterward, to go back and retrieve them.) But there’s an incentive for society to allow it to happen. Middle school and high school are particular kinds of hell that are designed to produce functional adults, and individual happiness isn’t a priority. At best, when we grow up, we’re allowed hobbies and side interests that appeal to who we were as children, even if our adult lives take us ever further away from those values. For most people, this isn’t a bad compromise, but it tends to separate the two halves, when we should be trying to bring them together. Our culture only becomes infantilized, paradoxically, when we no longer take our childhood selves seriously, or if we underestimate what we wanted for ourselves as grownups. And if it’s important to return to those dreams whenever we can, it’s not for the sake of the children we once were, but for the adults we could still become.
In the latest issue of The New York Times Magazine, the film critic Wesley Morris has a reflective piece titled “Last Taboo,” the subheadline of which reads: “Why Pop Culture Just Can’t Deal With Black Male Sexuality.” Morris, who is a gay black man, notes that full-frontal male nudity has become more common in recent years in movies and television, but it’s usually white men who are being undressed for the camera, which tells us a lot about the unresolved but highly charged feelings that the culture still has toward the black male body. As Morris writes:
Black men [are] desired on one hand and feared on the other…Here’s our original sin metastasized into a perverted sticking point: The white dick means nothing, while, whether out of revulsion or lust, the black dick means too much.
And although I don’t want to detract from the importance of the point that Morris is making here, I’ll admit that as I read these words, another thought ran through my mind. If the white penis means nothing, then the Asian penis, by extension, must mean—well, less than nothing. I don’t mean to equate the desexualization of Asian males in popular culture with the treatment of black men in fiction and in real life. But both seem to provide crucial data points, from opposite ends, for our understanding of the underlying phenomenon, which is how writers and other artists have historically treated the bodies of those who look different than they do.
I read Morris’s piece after seeing a tweet by the New Yorker critic Emily Nussbaum, who connected it to an awful scene in last night’s episode of Westworld, in which an otherwise likable character makes a joke about a well-endowed black robot. It’s a weirdly dissonant moment for a series that is so controlled in other respects, and it’s possible that it reflects nothing more than Jonathan Nolan’s clumsiness—which he shares with his older brother—whenever he makes a stab at humor. (I also suspect, given the show’s production delays, that the line was written and shot a long time ago, before these questions assumed a more prominent role in the cultural conversation. Which doesn’t make it any easier to figure out what the writers were thinking.) Race hasn’t played much of a role on the series so far, and it may not be fair to pass judgment on a show that has only aired five episodes and clearly has a lot of other stuff on its mind. But it’s hard not to wonder. The cast is diverse, but the guests are mostly white men, undoubtedly because, as Nussbaum notes elsewhere, they’re the natural target audience for the park’s central fantasy. And the show has a strange habit of using its Asian cast members, who are mostly just faces in the background, as verbal punching bags for the other characters, a trend so peculiar that my wife and I both noticed it separately. It’s likely that this has all been muddied by what seems to be shaping up to be an actual storyline for Felix, played by Leonardo Nam, who looks as if he’s about to respond to his casual mistreatment by rising to a larger role in the story. But even for a show with a lot of moving parts, it strikes me as a lazy way of prodding a character into action.
Over the last few months, as it happens, I’ve been thinking a lot about the representation of Asians in science fiction. (As I’ve mentioned before, I’m Eurasian—half Chinese, half Finnish and Estonian.) I may as well start with Robert A. Heinlein’s Sixth Column, a novel that he wrote on assignment for Astounding Science Fiction, based in part on All, an earlier, unpublished serial by John W. Campbell. Both stories, which were written long before Pearl Harbor, are about the invasion of the United States by a combined Chinese and Japanese empire, which inspires an underground resistance movement in the form of a fake religion. Heinlein later wrote that he tried to rework the narrative to tone down its more objectionable elements, but it pains me to say that Sixth Column actually reads as more racist than All, simply because Heinlein was the stronger writer. When you read All, you don’t feel much of anything, because Campbell was a stiff and awkward stylist. Heinlein, by contrast, spent much of his career bringing immense technical skill to even the most questionable projects, and he can’t keep from investing his characters with real rhetorical vigor as they talk about “flat-faced apes” and “our slant-eyed lords.” I don’t even mind the idea of an Asian menace, as long as the bad guys are treated as worthy antagonists, which Heinlein mostly does. But when the leaders of the resistance decide to grow beards in order to fill the invaders with “a feeling of womanly inferiority,” it’s hard to excuse it. And the most offensive moment of all involves Mitsui, the only sympathetic Asian character in sight, who sacrifices himself for the sake of his friends and is rewarded with the epitaph: “But they had no time to dwell on the end of little Mitsui’s tragic life.”
That’s the kind of racism that rankles me: not the diabolical Asian villain, who can be invested with a kind of sinister allure, as much as the legion of little Mitsuis who still populate so much of our fiction. (This may be why I’ve always sort of liked Michael Cimino’s indefensible Year of the Dragon, which at least treats John Lone’s character as a formidable, glamorous foe. It’s certainly less full of hate than The Deer Hunter.) And it complicates my reactions to other issues. When it was announced that Sulu would be unobtrusively presented as gay in Star Trek Beyond, it filled me with mixed feelings, and not just because George Takei didn’t seem to care for the idea. As much as I appreciated what the filmmakers were trying to do, I couldn’t help but think that it would have been just as innovative, if not more so, to depict Sulu as straight. I’m aware that this risks making it all seem like a zero-sum game, which it isn’t. But these points deserve to be raised, if only because they enrich the larger conversation. If a single scene on Westworld can spark a discussion of how we treat black men as sexual objects, we can do the same with the show’s treatment of Asians. The series presumably didn’t invite or expect such scrutiny, but it occupies a cultural position—as a prestige drama on a premium cable channel—in which it has no choice but to play that part. Science fiction, in particular, has always been a sandbox in which these issues can be investigated in ways that wouldn’t be possible in narratives set in the present, from the original run of Star Trek on down. Westworld belongs squarely in that tradition. And these are frontiers that it ought to explore.
In last week’s issue of The New Yorker, the critic Emily Nussbaum delivers one of the most useful takes I’ve seen so far on Westworld. She opens with many of the same points that I made after the premiere—that this is really a series about storytelling, and, in particular, about the challenges of mounting an expensive prestige drama on a premium network during the golden age of television. Nussbaum describes her own ambivalence toward the show’s treatment of women and minorities, and she concludes:
This is not to say that the show is feminist in any clear or uncontradictory way—like many series of this school, it often treats male fantasy as a default setting, something that everyone can enjoy. It’s baffling why certain demographics would ever pay to visit Westworld…The American Old West is a logical fantasy only if you’re the cowboy—or if your fantasy is to be exploited or enslaved, a desire left unexplored…So female customers get scattered like raisins into the oatmeal of male action; and, while the cast is visually polyglot, the dialogue is color-blind. The result is a layer of insoluble instability, a puzzle that the viewer has to work out for herself: Is Westworld the blinkered macho fantasy, or is that Westworld? It’s a meta-cliffhanger with its own allure, leaving us only one way to find out: stay tuned for next week’s episode.
I agree with many of her reservations, especially when it comes to race, but I think that she overlooks or omits one important point: conscious or otherwise, it’s a brilliant narrative strategy to make a work of art partially about the process of its own creation, which can add a layer of depth even to its compromises and mistakes. I’ve drawn a comparison already to Mad Men, which was a show about advertising that ended up subliminally criticizing its own tactics—how it drew viewers into complex, often bleak stories using the surface allure of its sets, costumes, and attractive cast. If you want to stick with the Nolan family, half of Chris’s movies can be read as commentaries on themselves, whether it’s his stricken identification with the Joker as the master of ceremonies in The Dark Knight or his analysis of his own tricks in The Prestige. Inception is less about the construction of dreams than it is about making movies, with characters who stand in for the director, the producer, the set designer, and the audience. And perhaps the greatest cinematic example of them all is Vertigo, in which Scottie’s treatment of Madeleine is inseparable from the use that Hitchcock makes of Kim Novak, as he did with so many other blonde leading ladies. In each case, we can enjoy the story on its own merits, but it gains added resonance when we think of it as a dramatization of what happened behind the scenes. It’s an approach that is uniquely forgiving of flawed masterpieces, which comment on themselves better than any critic can, until we wonder about the extent to which they’re aware of their own limitations.
And this kind of thing works best when it isn’t too literal. Movies about filmmaking are often disappointing, either because they’re too close to their subject for the allegory to resonate or because the movie within the movie seems clumsy compared to the subtlety of the larger film. It’s why Being John Malkovich is so much more beguiling a statement than the more obvious Adaptation. In television, the most unfortunate recent example is UnREAL. You’d expect that a show that was so smart about the making of a reality series would begin to refer intriguingly to itself, and it did, but not in a good way. Its second season was a disappointment, evidently because of the same factors that beset its fictional show Everlasting: interference from the network, conceptual confusion, tensions between producers on the set. It seemed strange that UnREAL, of all shows, could display such a lack of insight into its own problems, but maybe it isn’t so surprising. A good analogy needs to hold us at arm’s length, both to grant some perspective and to allow for surprising discoveries in the gaps. The ballet company in The Red Shoes and the New York Inquirer in Citizen Kane are surrogates for the movie studio, and both films become even more interesting when you realize how much the lead character is a portrait of the director. Sometimes it’s unclear how much of this is intentional, but this doesn’t hurt. So much of any work of art is out of your control that you need to find an approach that automatically converts your liabilities into assets, and you can start by conceiving a premise that encourages the viewer or reader to play along at home.
Which brings us back to Westworld. In her critique, Nussbaum writes: “Westworld [is] a come-hither drama that introduces itself as a science-fiction thriller about cyborgs who become self-aware, then reveals its true identity as what happens when an HBO drama struggles to do the same.” She implies that this is a bug, but it’s really a feature. Westworld wouldn’t be nearly as interesting if it weren’t being produced with this cast, on this network, and on this scale. We’re supposed to be impressed by the time and money that have gone into the park—they’ve spared no expense, as John Hammond might say—but it isn’t all that different from the resources that go into a big-budget drama like this. In the most recent episode, “Dissonance Theory,” the show invokes the image of the maze, as we might expect from a series by a Nolan brother: get to the center of the labyrinth, it says, and you’ve won. But it’s more like what Douglas R. Hofstadter describes in I Am a Strange Loop:
What I mean by “strange loop” is—here goes a first stab, anyway—not a physical circuit but an abstract loop in which, in the series of stages that constitute the cycling-around, there is a shift from one level of abstraction (or structure) to another, which feels like an upwards movement in a hierarchy, and yet somehow the successive “upward” shifts turn out to give rise to a closed cycle. That is, despite one’s sense of departing ever further from one’s origin, one winds up, to one’s shock, exactly where one had started out.
This neatly describes both the park and the series. And it’s only through such strange loops, as Hofstadter has long argued, that any complex system—whether it’s the human brain, a robot, or a television show—can hope to achieve full consciousness.
At last night’s presidential debate, when moderator Chris Wallace asked if he would accept the outcome of the election, Donald Trump replied: “I’ll keep you in suspense, okay?” It was an extraordinary moment that immediately dominated the headlines, and not just because it was an unprecedented repudiation of a crucial cornerstone of the democratic process. Trump’s statement—it seems inaccurate to call it a “gaffe,” since it clearly reflects his actual views—was perhaps the most damaging remark anyone could have made in that setting, and it reveals a curious degree of indifference, or incompetence, in a candidate who has long taken pride in his understanding of the media. It was a short, unforgettable sound bite that could instantly be put to members of both parties for comment. And it wasn’t an arcane matter of policy or an irrelevant personal issue, but an instantly graspable attack on assumptions shared by every democratically elected official in America, and presumably by the vast majority of voters. Even if Trump had won the rest of the debate, which he didn’t, those six words would have erased whatever gains he might have made. Not only was it politically and philosophically indefensible, but it was a ludicrous tactical mistake, an unforced error in response to a question that he and his advisors knew was going to be asked. As Julia Azari put it during the live chat on FiveThirtyEight: “The American presidency is not the latest Tana French novel—leaders can’t keep the people in suspense.”
But the phrase that he used tells us a lot about Trump. I’m speaking as someone who has devoted my fair share of thought to suspense itself: I’ve written a trilogy of thrillers and blogged here about the topic at length. When I think about the subject, I often start with what John Updike wrote in a review of Nabokov’s Glory, which is that it “never really awakens to its condition as a novel, its obligation to generate suspense.” What Updike meant is that stories are supposed to make us wonder about what’s going to happen next, and it’s that state of pleasurable anticipation that keeps us reading. It can be an end in itself, but it can also be a literary tool for sustaining the reader’s interest while the writer tackles other goals. As Kurt Vonnegut once said of plot, it isn’t necessarily an accurate representation of life, but a way to keep readers turning pages. Over time, the techniques of suspense have developed to the point where you can simulate it using purely mechanical tricks. If you watch enough reality television, you start to notice how the grammar of the editing repeats itself, whether you’re talking about Top Chef or Project Runway or Jim Henson’s Creature Shop. The delay before the judges deliver their decision, the closeups of the faces of the contestants, the way in which an editor pads out the moment by inserting cutaways between every word that Padma Lakshmi says—these are all practical tools that can give a routine stretch of footage the weight of the verdict in the O.J. Simpson trial. You can rely on them when you can’t rely on the events of the show itself.
And the best trick of all is to have a host who keeps things moving whenever the contestants or guests start to drag. That’s where someone like Trump comes in. He’s an embarrassment, but he’s far from untalented, at least within the narrow range of competence in which he used to operate. When I spent a season watching The Celebrity Apprentice—my friend’s older sister was on it—I was struck by how little Trump had to do: he was only onscreen for a few minutes in each episode. But he was good at his job, and he was also the obedient instrument of his producers. He has approached the campaign with the same mindset, but with few of the resources that are at an actual reality show’s disposal. Trump’s strategy has been built around the idea that he doesn’t need to spend money on advertising or a ground game, as long as the media provides him with free coverage. It’s an interesting experiment, but there’s a limit to how effective it can be. In practice, Trump is less like the producer or the host than a contestant, which reduces him to acting like a reality star who wants to maximize his screen time: say alarming things, pick fights, act unpredictably, and generate the footage that the show needs, while never realizing that the incentives of the contestants and producers are fundamentally misaligned. (He should have just watched the first season of UnREAL.) When he says that he’ll keep us in suspense about accepting the results of the election, he’s just following the reality show playbook, which is to milk such climactic moments for all they’re worth.
Yet this approach has backfired, and television provides us with some important clues as to why. I once believed that the best analogy to Trump’s campaign was the rake gag made famous by The Simpsons. As producer Al Jean described it: “Sam Simon had a theory that if you repeat a joke too many times, it stops being funny, but if you keep on repeating it, it might get really funny.” Trump performed a rake gag in public for months. First we were offended when he made fun of John McCain’s military service; then he said so many offensive things that we became numb to it; and then it passed a tipping point, and we got really offended. I still think that’s true. But there’s an even better analogy from television, which is the practice of keeping the audience awake by killing off major characters without warning. As I’ve said here before, it’s a narrative trick that used to seem daring, but now it’s a form of laziness: it’s easier to deliver shocking death scenes than to tell interesting stories about the characters who are still alive. In Trump’s case, the victims are ideas, or key constituents of the electorate: minorities, immigrants, women. When Trump turned on Paul Ryan, it was the equivalent of one of those moments, like the Red Wedding on Game of Thrones, when you’re supposed to gasp and realize that nobody is safe. His attack on a basic principle of democracy might seem like more of the same, but there’s a difference. The strategy might work for a few seasons, but there comes a point at which the show cuts itself too deeply, and there aren’t any characters left that we care about. This is where Trump is now. And by telling us that he’s going to keep us in suspense, he may have just made the ending a lot less suspenseful.
Note: Spoilers follow for the Westworld episode “The Stray.”
There’s a clever moment in the third episode of Westworld when Teddy, the clean-cut gunslinger played by James Marsden, is finally given a backstory. Teddy has spoken vaguely of a guilty secret in his past, but when he’s pressed for the details, he doesn’t elaborate. That’s the mark of a good hero. As William Goldman points out in his wonderful book Which Lie Did I Tell?, protagonists need to have mystery, and when you give them a sob story, here’s what happens:
They make [him] a wimp. They make him a loser. He’s just another whiny asshole who went to pieces when the gods pissed on him. “Oh, you cannot know the depth of my pain” is what that seems to be saying to the audience. Well, if I’m in that audience, what I think is this: Fuck you. I know people who are dying of cancer, I know people who are close to vegetables, and guess what—they play it as it lays.
Of course, we know that Teddy is really an android, and if he doesn’t talk about his past, it’s for good reason: as Dr. Ford, his creator, gently explains, the writers never bothered to give him one. With a few commands on a touchscreen, a complete backstory is uploaded into his system, and Teddy sets off on a doomed quest in pursuit of his old enemy, Wyatt, against whom he has sworn undying revenge. We don’t know how this plot thread ties into the rest of Dr. Ford’s plan, but we can only assume that it’s going somewhere—and it’s lucky for him that he had a convenient hero available to fill that role.
There are several levels of sly commentary here. When you’re writing a television show—or a series of novels—you want to avoid filling in anybody’s backstory for as long as possible. Part of the reason, as Goldman notes above, is to maintain a sense of mystery, and for the sake of narrative momentum, it makes sense to avoid dwelling on what happened before the story began. But it’s also a good idea to keep this information in your back pocket for when you really need it. If you know how to deploy it strategically, backstory can be very useful, and it can get you out of trouble or provide a targeted nudge when you need to push the plot in a particular direction. If you’re too explicit about it too soon, you narrow your range of options. (You also make it harder for viewers to project their own notions onto the characters, which is what Westworld, the theme park, is all about.) I almost wish that Westworld had saved this moment with Teddy for later in the show’s run, which would underline its narrative point. We’re only a third of the way through the first season, but within the world of the show itself, the park has been running for decades with the same generic storylines. Dr. Ford has a few ideas about how to shake things up, and Teddy is a handy blank slate. Television showrunners make that sort of judgment call all the time. In the internal logic of the park, this isn’t the first season, but more like its fifth or sixth, when a scripted drama tends to go off the rails, and the accumulation of years of backstory starts to feel like a burden.
“The Stray,” in fact, is essentially about backstory, on the level both of the park and of the humans who are running it. Shortly after filling in the details of Teddy’s past, Dr. Ford does exactly the same thing for himself: he delivers a long, not entirely convincing monologue about a mysterious business partner, Arnold, who died in the park and was later removed from its corporate history. At the end of the speech, he looks at Bernard, his head of programming, and tells him that he knows how much his son’s death still haunts him. It’s a little on the nose, but I think it’s supposed to be. It makes us wonder if Bernard might unknowingly be a robot himself, a la Blade Runner, and whether his flashbacks of his son are just as artificial as Teddy’s memories of Wyatt. I hope that this isn’t the big twist, if only because it seems too obvious, but in a way, it doesn’t really matter. Bernard may or may not be a robot, but there’s no question that Bernard, Dr. Ford, and all the other humans in sight are characters on a show called Westworld, and whatever backstories they’ve been given by Jonathan Nolan and Lisa Joy are as calculated as the ones that the androids have received. Even if Bernard’s memories are “real,” we’re being shown them for a reason. (It helps that Dr. Ford and Bernard are played by Anthony Hopkins and Jeffrey Wright, two actors who are good at giving technically exquisite performances that draw subtle attention to their own artifice. Wright’s trademark whisper—he’s like a man of great passion who refuses to raise his voice—draws the viewer into a conspiracy with the actor, as if he’s letting us in on a secret.)
The trouble with this reading, of course, is that it allows us to excuse instances of narrative sloppiness under the assumption that the series is deliberately commenting on itself. I’m willing to see Dr. Ford’s speech about Arnold as a winking nod to the tendency of television shows to dispense backstory in big infodumps, but I’m less sure about the moment in which he berates a lab technician for covering up a robot’s naked body and slashes at the android’s face. It doesn’t seem like the Dr. Ford of the pilot, talking nostalgically to Old Bill in storage, and while we’re presumably supposed to see him as a man of contradictions, it feels more like a juxtaposition of two character beats that weren’t meant to be so close together. (I have a hunch that it also reflects Hopkins’s availability: the show seems to have him for about two scenes per episode, which means that it has to do in five minutes what might have been better done in ten.) Westworld, as you might expect from a show from one of the Nolan brothers, has more ideas than it knows how to handle: it hurries past a reference to Julian Jaynes’s The Origin of Consciousness in the Breakdown of the Bicameral Mind so quickly that it’s as if the writers just want to let us know that they’ve read the book. But I still have faith in this show’s potential. When Teddy is ignominiously killed yet again by Wyatt’s henchmen, it forces Dolores to face the familiar attackers in her own storyline by herself—an ingenious way of getting her to where she needs to be, but also a reminder, I think, of how the choices that a storyteller makes in one place can have unexpected consequences somewhere else. It’s a risk that all writers take. And Westworld is playing the same tricky game as the characters whose stories it tells.
As we were watching the premiere of Westworld last week, my wife turned to me and said: “Why would they make it a western park?” Or maybe I asked her—I can’t quite remember. But it’s a more interesting question than it sounds. When Michael Crichton’s original movie was released in the early seventies, the western was still a viable genre. It had clearly fallen from its peak, but major stars were doing important work in cowboy boots: Eastwood, of course, but also Newman, Redford, and Hoffman. John Wayne was still alive, which may have been the single most meaningful factor of all. As a result, it wasn’t hard to imagine a theme park with androids designed to fulfill that particular fantasy. These days, the situation has changed. The western is so beleaguered an art form that whenever one succeeds, it’s treated as newsworthy, and that’s been true for the last twenty years. Given the staggering expense and investment involved in a park like this, it’s hard to see why the western would be anybody’s first choice. (Even with the movie, I suspect that Crichton’s awareness of his relatively low budget was part of the decision: it was his first film as a director, with all of the limitations that implies, and a western could be shot cheaply on standing sets in the studio backlot.) Our daydreams simply run along different lines, and it’s easier to imagine a park being, say, set in a medieval fantasy era, or in the future, or with dinosaurs. In fact, there was even a sequel, Futureworld, that explored some of these possibilities, although it’s fair to say that nobody remembers it.
The television series Westworld, which is arriving in a markedly different pop cultural landscape, can’t exactly ditch the premise—it’s right there in the title. But the nice thing about the second episode, “Chestnut,” is that it goes a long way toward explaining why you’d still want to structure an experience like this around those conventions. It does this mostly by focusing on a new character, William, who arrives at the park knowing implausibly little about it, but who allows us to see it through the eyes of someone encountering it for the first time. What he’s told, basically, is that the appeal of Westworld is that it allows you to find out who you really are: you’re limited only by your inhibitions, your abilities, and your sense of right and wrong. That’s true of the real world, to some extent, but we’re also more conscious of the rules. And if the western refuses to go away as a genre, it’s because it’s the purest distillation of that seductive sense of lawlessness. The trouble with telling certain stories in the present day is that there isn’t room for the protagonist that thrillers have taught us to expect: a self-driven hero who solves his problems for himself in matters of life and death. That isn’t how most of us respond to a crisis, and in order to address the issue of why the main character doesn’t just go to the police, writers are forced to fall back on various makeshift solutions. You can focus on liminal figures, like cops or criminals, who can take justice into their own hands; you can establish an elaborate reason why the authorities are helpless, indifferent, or hostile; or you can set your story in a time or place where the rules are different or nonexistent.
The western, in theory, is an ideal setting for a story in which the hero has to rely on himself. It’s a genre made up of limitless open spaces, nonexistent government, unreliable law enforcement, and a hostile native population. If there’s too much civilization for your story to work, your characters can just keep riding. To move west, or to leave the center of the theme park, is to move back in time, increasing the extent to which you’re defined by your own agency. (A western, revealingly, is a celebration of the qualities that we tend to ignore or dismiss in our contemporary immigrant population: the desire for a new life, the ability to overcome insurmountable obstacles, and the plain observation that those who uproot themselves and start from scratch are likely to be more competent and imaginative, on average, than those who remain behind.) The western is the best narrative sandbox ever invented, and if it ultimately exhausted itself, it was for reasons that were inseparable from its initial success. Its basic components were limited: there were only so many ways that you could combine those pieces. Telling escapist stories involved overlooking inconvenient truths about Native Americans, women, and minorities, and the tension between the myth and its reality eventually became too strong to sustain. Most of all, its core parts were taken over by other genres, and in particular by science fiction and fantasy. This began as an accidental discovery of pulp western writers who switched genres and realized that their tricks worked equally well in Astounding, and it was only confirmed by Star Trek—which Gene Roddenberry famously pitched as Wagon Train in space—and Star Wars, which absorbed those clichés so completely that they became new again.
What I like about Westworld, the series, is that it reminds us of how artificial this narrative always was, even in its original form. The Old West symbolizes freedom, but only if you envision yourself in the role of the stock protagonist, who is usually a white male antihero making the journey of his own volition. It falls apart when you try to imagine the lives of the people in the background, who exist in such stories solely to enable the protagonist’s fragile range of options. In reality, the frontier brutally circumscribed the lives of most of those who tried to carve out an existence there, and the whole western genre is enabled by a narrative illusion, or a conspiracy, that keeps its solitary and brutish aspects safely in the hands of the characters at the edges of the frame. Westworld takes that notion to its limit, by casting all the supporting roles with literal automatons. They aren’t meant to have inner lives, any more than the peripheral figures in any conventional western, and the gradual emergence of their consciousness implies that the park will eventually come to deconstruct itself. (The premiere quoted cleverly from The Searchers and Unforgiven, but I almost wish that it had saved those references until later, so that the series could unfold as a miniature history of the genre as it slowly attained self-awareness.) If you want to talk about how we picture ourselves in the heroes of our own stories, while minimizing or reducing the lives of those at the margins, it’s hard to imagine a better place to do it than the western, which depended on a process of historical amnesia and dehumanization from the very beginning. I’m not sure I’d want to visit a park like Westworld. But there will always be those who would.
One of the greatest compliments that we can pay to any story is that it seems shorter than it actually is. It’s obviously best for a narrative to be only as long as it has to be, and no more, which means that the creator needs to be willing to cut wherever necessary. (Sometimes it’s even better if these time or length limits are imposed from the outside. I’ve always maintained that Blue Velvet, my favorite American movie ever, was tremendously improved by a contractual stipulation that forced David Lynch and editor Duwayne Dunham to cut it from three hours down to two. And as much as I’m enjoying the streaming renaissance on Netflix, I sometimes wish that the episodes of these shows were shorter: without a fixed time slot, there’s no incentive to trim any given installment, and a literal hour of television tends to drag toward the end.) But it’s nice when a movie, in particular, grips us so completely that we don’t realize how long we’ve been watching it. I still remember being so absorbed by Michael Mann’s The Insider that I was startled to realize, when I checked my watch after the screening, that it was two and a half hours long: I would have guessed that it was closer to ninety minutes. And you only need to compare the experience of watching the original cut of Seven Samurai with, say, four episodes of the second season of True Detective to realize that three and a half hours can be something very different in subjective and objective time.
But there’s another storytelling trick that deserves just as much attention, which is the ability to make a short work of art seem longer. I’m not talking about the way in which even twenty minutes of a bad sitcom can seem interminable, but about how a story can somehow persuade us that we’ve lived through a longer and more meaningful experience than seems possible to encompass within a limited timeframe. On some level, this is an illusion that you encounter in most narratives of any kind: with the exception of the rare works designed to unfold in real time, we’re asked to believe that the relatively short period that it takes to physically view or read the story really covers days, weeks, or months of action, and occasionally much longer. Many biopics, for instance, ask us to go through an entire lifetime in a couple of hours, and the fact that the result is usually so unsatisfying only indicates how hard it is to pull this off. But it has a greater chance of succeeding when it uses our perceptions of time to convince us, in a pleasurable way, that we’ve seen and felt more than could be packed into a single sitting. We could start with Citizen Kane, which is exactly a minute short of two hours long—which, like Blue Velvet, probably reflects an attempt to meet a contractually mandated length. Yet more than any other movie, it feels like a full picture of a man’s life, and the fact that it asks us to assemble Kane’s story from the fragments of other people’s memories offers a very important clue as to how this kind of thing works.
Because one of the best ways to create a subjective impression of length is through contrasts: the alternation of big and little, loud and soft, fast and slow. I got to thinking about this while listening to “Yorktown (The World Turned Upside Down),” which is one of the two or three best songs in Hamilton. It’s as epic a number as you could imagine, and it leaves you feeling as if you’ve lived through an unforgettable experience, but it lasts just four minutes. In his notes in Hamilton: The Revolution, Lin-Manuel Miranda explains how it works:
Part of the inspiration for the structure of “Yorktown” is what I call the “Busta Rhymes soft-loud-soft technique.” On countless songs, Busta will give you the smoothest, quietest delivery and then full-on scream the next verse. It makes for a delightful tension and release, and it’s entirely vocal. Same here. “I have everything I wanted but I can’t die today / We’re going into battle / Here’s what my friends are doing / Hercules Mulligan!” Thank you and God bless you, Busta Rhymes.
It isn’t hard to see why this kind of alternation creates an impression of length, in much the same way that we see in the experiments with chronology in Kane. With every transition, the listener has to readjust, and the mental effort of these regroupings draws out our perception of time passing. The switching costs of moving from one moment to the next allow the story to do with a juxtaposition what would otherwise require a pause. As the old proverb says, a change is as good as a rest.
And this phenomenon emerges from something fundamental in how our brains are wired. As the neuroscientist David Eagleman says about the perception of time in everyday life:
When our brains receive new information, it doesn’t necessarily come in the proper order. This information needs to be reorganized and presented to us in a form we understand. When familiar information is processed, this doesn’t take much time at all. New information, however, is a bit slower and makes time feel elongated.
In other words, it takes a while for the brain to process new information, leading to a subjective impression of extended time. It’s why travel or a change of scenery can make our lives seem to slow down, and why we’re advised to use surprise or variety to keep the days from turning into a blur. The real challenge for artists is to combine different kinds of time within the same narrative. A movie or book that consists of nothing but action will quickly become boring, and so will a string of talky interior scenes. If you can speed it up and slow it down in the right proportions, the result, at its finest, will make you feel as if you’ve lived a rich, fulfilling life over the course of two hours. Hamilton does this beautifully. So does Kane—and you could even argue that the best reason to use a nonlinear narrative, rather than as a gimmick, is the ability it presents to treat time as a tool. You’re not just painting a picture; you’re asking the audience to assemble a puzzle. And it helps to use different kinds of pieces.
As hard as it is to believe these days, I spent most of my early twenties working at a hedge fund in New York. I got there by a process that was circuitous even by my standards: I’d moved to the city after college, hoping to land a job at a newspaper or magazine while writing fiction on the side, but my prospects weren’t great, and I was nearly at the end of the savings that I’d set aside to get me through the summer. When I was invited to interview at a financial firm that actively recruited Ivy League graduates with good grades and no previous experience, I set the letter aside, and I didn’t pick it up again until my other avenues had dried up. But when I decided to give it a shot, I took it seriously. I checked out a guide to hedge funds from the local library in Queens, along with a book on interview questions along the lines of “How many gas stations are in the United States?” It also seemed like a good idea to pick up a recent book on finance, in case my interviewer asked what I’d been reading on the subject. After browsing at the Strand Bookstore, I picked up a promising title that I’d seen mentioned elsewhere, and I read the whole thing in about an hour. I did one interview over the phone, and I did well enough that they asked me to come by the office in person. In the end, I got the job, and it turned out to be the right choice: I learned a lot, saved some money, and made friends who have had an incalculable impact on my life. That’s a story for another time. But I’m lucky that nobody asked me what I’d been reading—and if they had, I’m not sure they would have hired me. Because the book I chose was Robert Kiyosaki’s Rich Dad, Poor Dad.
Even now, almost fifteen years later, it embarrasses me to type this. Kiyosaki has more or less disappeared from the national consciousness, and he’s remembered now, if at all, as a relic of the peculiar financial bubble of the early twenty-first century, just after the tech bust and shortly before the subprime crisis. His books consist of about a paragraph of actual advice—on the level of a personal finance article in Parade magazine—padded out to a couple of hundred pages with platitudes, misleading examples, and sales pitches for other items in his product line. The autobiographical narrative that he provides in Rich Dad, Poor Dad is blatantly fictionalized. (For a more thorough review of Kiyosaki’s evasions, fabrications, and bad ideas, I urge you to check out the comprehensive takedown by real estate guru John T. Reed, which is more than a decade old, but remains one of my favorite things on the Internet.) But the key point about Kiyosaki is that he’s a branding expert masquerading as a real estate and investing authority. His wealth didn’t come from buying, selling, and managing properties, but from hawking his books through organizations like Amway. When he’s pressed for specifics, Kiyosaki, who spends most of his life promoting his own success, suddenly turns coy, and refuses to provide any details on his holdings. He once claimed that his net worth fluctuated between $50 and $100 million, “depending on the day.” And if any of this sounds familiar, you shouldn’t be surprised: Kiyosaki later partnered with Donald Trump on the books Why We Want You to Be Rich and Midas Touch, most of which were devoted to steering readers to network marketing companies. They feel, frankly, like artifacts of a more innocent time.
But what interests me the most now are the reasons why I bought a copy of Kiyosaki’s book in the first place. First, I didn’t know any better. Second, it had been positioned by many reviewers at the time as a legitimate book on personal finance. Both points, I think, are illuminating. I was a smart kid, and I’d graduated with honors from a good college, but I didn’t know the first thing about finance or investing. Over the next few years, I learned a lot, but I still remember how little I understood when I started, and how dependent I was on outside sources, many of them actively misleading, to point me in the right direction. I wasn’t alone, either. Many of my friends in their twenties were freaking out over how unprepared they were to manage their own money. The language of finance seemed too daunting to master, and there was a palpable sense that we were all faking our way into adulthood. At cocktail parties, whenever I had to explain what I did for a living, I’d ask: “Well, do you know what a mutual fund is?” If the answer was yes, I would go on to explain how a hedge fund was different—but the answer was usually no. And I don’t blame anyone for this. There was good advice to be had: I became a regular on the Bogleheads forum, which is still where I’d advise an aspiring investor to poke around first. But you had to seek it out, at a time when a huckster like Kiyosaki was receiving respectful press as long as his books were selling. It was easier to write stories about his run on the bestseller list than to honestly interrogate the statements he was making. People bought Rich Dad, Poor Dad because they heard that other people were buying it, and it’s what finally gave Kiyosaki the wealth that he claimed to have earned. The snake ate its own tail.
And that was the most insidious phenomenon of all. I’ve been thinking about Kiyosaki a lot recently, and not just because the Republican presidential nominee is a self-help financial guru with an unreliable memoir of his own. If Trump and Kiyosaki were drawn to each other, it’s because they were kindred spirits. Like Kiyosaki, Trump appears to have made most of his current wealth from brand extension, licensing, and his work as a television personality on The Apprentice, and he benefited from indulgent media coverage that treated him for years as a property developer rather than as an entertainer. Trump is uncannily adept at promising the world to his followers while refusing to provide any specifics about how his goals could be achieved, which is a skill that he honed as a financial guru: you always tease the reader by hinting at the answers that will be revealed in the next book, class or seminar. And he benefits, above all, from the same lack of basic knowledge—and the hunger for guidance of any kind—that led an intelligent college graduate, on the verge of applying for a position at a global hedge fund, to turn to Kiyosaki as a source of advice. If it weren’t for that fundamental confusion about how economic value is created, Trump wouldn’t be able to sell himself as someone with the business expertise to run the country, or as someone who “brilliantly” used almost a billion dollars of losses in a single year to avoid paying federal income taxes for two decades. And I can’t fault people for wanting to believe him, any more than I can blame them for buying into the seductive, empty pitch that Kiyosaki peddled for years. Because whenever I feel tempted to condescend to Trump’s supporters, I remind myself that I once fell for it, too.
Note: Spoilers follow for the series premiere of Westworld.
Producing a television series, as I’ve often said here before, is perhaps the greatest test imaginable of the amount of control that a storyteller can impose on any work of art. You may have a narrative arc in mind that works beautifully over five seasons, but before you even begin, you know that you’ll have to change the plan to deal with the unexpected: the departure of a star, budgetary limitations, negotiations with the network. Hanging overhead at all times is the specter of cancellation, which means that you don’t know if your story will be told over an hour, one season, or many years. You may not even be sure what your audience really wants. Maybe you’ve devoted a lot of thought to creating nuanced, complicated characters, only to realize that most viewers are tuning in for sex, violence, and sudden death scenes. It might even be to your advantage to make the story less realistic, keeping it all safely escapist to avoid raising uncomfortable questions. If you’re going to be a four-quadrant hit, you can’t appeal to just one demographic, so you’ve got to target some combination of teenagers and adults of both sexes. This doesn’t even include the critics, who are likely to nitpick the outcome no matter what. All you can really do, in the end, is set the machine going, adjust it as necessary on the fly, try to keep the big picture in mind, and remain open to the possibility that your creation will surprise you—which are conditions that the best shows create on purpose. But it doesn’t always go as it should, and successes and failures alike tend to wreak havoc with the plans of their creators. Television, you might say, finds a way.
The wonderful thing about Westworld, which might have the best pilot of any show since Mad Men, is that it delivers exceptional entertainment while also functioning as an allegory that you can read in any number of ways. Michael Crichton’s original movie, which I haven’t seen, was pitched as a commentary on the artificially cultivated experience offered to us by parks like Disney World, an idea that he later revisited with far more lucrative results. Four decades later, the immersive, open-world experience that Westworld evokes is more likely to remind us of certain video games, which serve as a sandbox in which we can indulge in our best or worst impulses with maximum freedom of movement. (The character played by Ed Harris is like a player who has explored the game so thoroughly that he’s more interested now in looking for exploits or glitches in the code.) Its central premise—a theme park full of androids that are gradually attaining sentience—suggests plenty of other parallels, and I’m sure the series will investigate most of them eventually. But I’m frankly most inclined to see it as a show about the act of making television itself. Series creators Jonathan Nolan and Lisa Joy have evidently mapped out a narrative for something like the next five or six seasons, which feels like an attempt to reassure viewers frustrated by the way in which serialized, mythology-driven shows tend to peter out toward the end, or to endlessly tease mysteries without ever delivering satisfying answers. But I wonder if Nolan and Joy also see themselves in Dr. Ford, played here with unusual restraint and cleverness by Anthony Hopkins, who looks at his own creations and muses about how little control he really has over the result.
It’s always dangerous to predict a show’s future from the pilot alone, and I haven’t seen the other episodes that were sent to critics for review. Westworld’s premise is also designed to make you even more wary than usual about trying to forecast a system as complicated as an ambitious cable series, especially one produced by J.J. Abrams. (There are references to the vagaries of television production in the pilot itself, much of which revolves around a technical problem that forces the park’s head writer to rewrite scenes overnight, cranking up the body count in hopes that guests won’t notice the gaps in the narrative. And one of its most chilling moments comes down to the decision to recast a key supporting role with a more cooperative performer.) After the premiere, which we both loved, my wife worried that we’ll just get disillusioned by the show over time, as we did with Game of Thrones. It’s always possible, and the number of shows over the last decade that have sustained a high level of excellence from first episode to last basically starts and ends with Mad Men—which, interestingly, was also a show about writing, and the way in which difficult concepts have to be sold and marketed to a large popular audience. But I have high hopes. The underlying trouble with Game of Thrones was a structural one: one season after another felt like it was marking time in its middle stretches, cutting aimlessly between subplots and relying on showy moments of violence to keep the audience awake, and many of its issues arose from a perceived need to keep from getting ahead of the books. It became a show that only knew how to stall and shock, and I would have been a lot more forgiving of its sexual politics if I had enjoyed the rest of it, or if I believed that the showrunners were building to something worthwhile.
I have more confidence in Westworld, in part because the pilot is such a confident piece of storytelling, but also because the writers aren’t as shackled by the source. And I feel almost grateful for the prospect of fully exploring this world over multiple seasons with this cast and these writers. Jonathan Nolan, in particular, has been overshadowed at times by his brother Christopher, who would overshadow anyone, but his résumé as a writer is just as impressive: the story for Memento, the scripts for The Dark Knight and The Dark Knight Rises, and that’s just on the movie side. (I haven’t seen Person of Interest, but I’ve heard it described as the best science fiction show on television, camouflaged in plain sight as a procedural.) Nolan has always tended to cram more ideas into one screenplay than a movie can comfortably hold, which is a big part of his appeal: The Dark Knight is so overflowing with invention that it only underlines the limpness of the storytelling in most of the Marvel movies. What excites me about Westworld is the opportunity it presents for Nolan to allow the story to breathe, going down interesting byways and exploring its implications at length. And the signs so far are very promising. The plot is a model of story construction, to the point where I’d use it as an example in a writing class: it introduces its world, springs a few big surprises, tells us something about a dozen characters, and ends on an image that is both inevitable and deliciously unexpected. Even its references to other movies are more interesting than most. A visual tribute to The Searchers seems predictable at first, but when the show repeats it, it becomes a wry commentary on how an homage can take the place of real understanding. And a recurring bit with a pesky fly feels like a nod to Psycho, which implicated the audience in similar ways. As Mrs. Bates says to us in one of her last lines: “I hope they are watching. They’ll see.”
Curtis Hanson, who died earlier this week, directed one movie that I expect to revisit endlessly for the rest of my life, and a bunch of others that I’m not sure I’ll ever watch again. Yet it’s those other films, rather than his one undisputed masterpiece, that fascinate me the most. L.A. Confidential—which I think is one of the three or four best movies made in my lifetime—would be enough to secure any director’s legacy, and you couldn’t have blamed Hanson for trying to follow up that great success with more of the same. Instead, he delivered a series of quirky, shaggy stories that followed no discernible pattern, aside from an apparent determination to strike out in a new direction every time: Wonder Boys, 8 Mile, In Her Shoes, Lucky You, Too Big to Fail, and Chasing Mavericks. I’ve seen them all, except for the last, which Hanson had to quit halfway through after his health problems made it impossible for him to continue. I’ve liked every single one of them, even Lucky You, which made about as minimal an impression on the world as any recent film from a major director. And what I admire the most about the back half of Hanson’s career is its insistence that a filmmaker’s choice of projects can form a kind of parallel narrative, unfolding invisibly in the silences and blank spaces between the movies themselves.
There comes a point in the life of every director, in fact, when each new film is freighted with a significance that wasn’t there in the early days. Watching Bridge of Spies recently, I felt heavy with the knowledge that Spielberg won’t be around forever. We don’t know how many more movies he’ll make, but it’s probably more than five and fewer than ten. As a result, there’s a visible opportunity cost attached to each one, and a year of Spielberg’s time feels more precious now than it did in the eighties. This sort of pressure becomes even more perceptible after a director has experienced a definitive triumph in the genre for which he or she is best known. After Goodfellas, Martin Scorsese seemed anxious to explore new kinds of narrative, and the result—the string of movies that included The Age of Innocence, Kundun, Bringing Out the Dead, and Hugo—was sometimes mixed in quality, but endlessly intriguing in its implications. Years ago, David Thomson wrote of Scorsese: “His search for new subjects is absorbing and important.” You could say much the same of Ridley Scott, Clint Eastwood, or any number of other aging, prolific directors with the commercial clout to pick their own material. In another thirty years or so, I expect that we’ll be saying much the same thing about David Fincher and Christopher Nolan. (If a director is less productive and more deliberate, his unfinished projects can end up carrying more mythic weight than most movies that actually get made, as we’re still seeing with Stanley Kubrick.)
Hanson’s example is a peculiar one because his choices were the subject of intense curiosity, at least from me, at a much earlier stage than usual. This is in part because L.A. Confidential is a movie of such clarity, confidence, and technical ability that it seemed to herald a director who could do just about anything. In a way, it did—but not in a manner that anyone could have anticipated. Hanson’s subsequent choices could come off as eccentric, and not after the fashion of Steven Soderbergh, who settled into a pattern of one for himself, one for the masses. The movies after Wonder Boys are the work of a man who was eager to reach a large popular audience, but not in the sense his fans were expecting, and with a writerly, almost novelistic approach that frustrated any attempt to pin him down to a particular brand. It’s likely that this was also a reflection of how hard it is to make a modestly budgeted movie for grownups, and Hanson’s filmography may have been shaped mostly by what projects he was able to finance. (This also accounts for the confusing career of his collaborator Brian Helgeland, who drifted after L.A. Confidential in ways that make Hanson seem obsessively focused.) His IMDb page was littered with the remains of ideas, like an abortive adaptation of The Crimson Petal and the White, that he was never able to get off the ground. His greatest accomplishment, I suspect, was to make the accidents of a life in Hollywood seem like the result of his own solitary sensibilities.
Yet we’re still left with the boundless gift of L.A. Confidential, which I’ve elsewhere noted is the movie that has had the greatest impact on my writing life. (My three published novels are basically triangulations between L.A. Confidential, Foucault’s Pendulum, and The Day of the Jackal, with touches of Thomas Harris and The X-Files, but it was Hanson, even more than James Ellroy, who first taught me the pleasures of a triple plot.) It has as many great scenes as The Godfather, and as deep a bench of memorable performances, and it’s the last really complicated story that a studio ever allowed itself. When you look at the shine of its images and the density of its screenplay, you realize that its real descendants can be found in the golden age of television, although it accomplishes more in two and a half hours than most prestige dramas can pull off in ten episodes. It’s a masterpiece of organization that still allows itself to breathe, and it keeps an attractive gloss of cynicism while remaining profoundly humane. I’m watching it again as I write this, and I’m relieved to find that it seems ageless: it’s startling to realize that it was released nearly two decades ago, and that a high school student discovering it now will feel much as I did when I saw Chinatown. When it first came out, I was almost tempted to undervalue it because it went down so easily, and it took me a few years to recognize that it was everything I’d ever wanted in a movie. And it still is—even if Hanson himself always seemed conscious of its limitations, and restless in his longing to do more.
Note: Spoilers follow for Stranger Things.
One of the first images we see on the television show Stranger Things is a poster for John Carpenter’s The Thing. (In fact, it’s only as I type this now that it occurs to me that the title of the series, which premiered earlier this summer on Netflix, might be an homage as well.) It’s hanging in the basement of one of the main characters, a twelve-year-old named Mike, who is serving as the Dungeon Master of a role-playing campaign with three of his best friends. You can see the poster in the background for most of the scene, and in a later episode, two adults watch the movie at home, oblivious to the fact that a monster from another dimension is stalking the inhabitants of their town in Indiana. Not surprisingly, I was tickled to see my favorite story by John W. Campbell featured so prominently here: Campbell wrote “Who Goes There?” back in 1937, and the fact that it’s still a reference point for a series like this, almost eighty years later, is astounding. Yet apart from these two glimpses, The Thing doesn’t have much in common with Stranger Things. The former is set in a remote Antarctic wasteland in which no one is what he seems; the latter draws from a different tradition in science fiction, with gruesome events emerging from ordinary, even idyllic, surroundings, and once we’ve identified all the players, everything is more or less exactly what it appears to be. It flirts with paranoia, but it’s altogether cozy, even reassuring, in how cleverly it gives us just what we expect.
That said, Stranger Things is very good at achieving what it sets out to do. The date of the opening scene is November 6, 1983, and once Mike’s best friend Will is pulled by a hideous creature into a parallel universe, the show seems determined to reference every science fiction or fantasy movie of the previous five years. Its most obvious touchstones are E.T., Poltergeist, The Goonies, and Close Encounters of the Third Kind, but there are touches of The Fury as well, and even shades of Stephen King. (Will’s older brother, played by Charlie Heaton, looks eerily like a young King, and the narrative sometimes feels like an attempt to split the difference between Firestarter and It.) Visually, it goes past even Super 8 in its meticulous reconstruction of the look and feel of early Steven Spielberg, and the lighting and cinematography are exquisitely evocative of its source. The characters and situations are designed to trigger our memories, too, and the series gets a lot of mileage out of recombining the pieces: we’re invited to imagine the kids from The Goonies going after whatever was haunting the house in Poltergeist, with a young girl with psychokinetic powers taking the place of E.T. As Will’s mother, Winona Ryder initially comes off as a combination of the Melinda Dillon and Richard Dreyfuss characters from Close Encounters—she’s frantic at Will’s disappearance, but she also develops an intriguing streak of obsession, hanging up holiday lights in her house and watching them flicker in hopes of receiving a message from her missing son. And it can be fun to see these components slide into place.
It’s only when the characters are asked to stand for something more than their precursors that the series starts to falter. Ryder’s character doesn’t develop after the first couple of episodes, and she keeps hitting the same handful of notes. Once the players have been established, they don’t act in ways that surprise us or push against the roles that they’ve been asked to embody, and most of the payoffs are telegraphed well in advance. The only adult character who really sticks in the mind is the police chief played by David Harbour, and that’s due less to the writing than to Harbour’s excellent work as a rock-solid archetype. Worst of all, the show seems oddly uncertain about what to do with its kids, who should be the main attraction. They all look great with their bikes and walkie-talkies, and Gaten Matarazzo’s Dustin is undeniably endearing—he’s the show’s only entirely successful character. But they spend too much time squabbling among themselves, when a story like this really demands that they present a unified front against the adult world. For the most part, the interpersonal subplots do nothing but mark time: we don’t know enough about the characters to be invested in their conflicts or romances, and far too many scenes play like a postponement of the real business at hand. Any story about the paranormal is going to have one character trying to get the others to believe, but it’s all in service of the moment when they put their differences aside. When everyone teams up on Stranger Things, it’s satisfying, but it occurs just one episode before the finale, and before we have a chance to absorb or enjoy it, it’s over.
And part of the problem, I think, is that Stranger Things tells the kind of story that might have been better covered in two hours, rather than eight. When I go back and watch the Spielberg films that the series is trying to evoke, what strikes me first is an unusual absence of human conflict. In both Close Encounters and E.T., the shadowy government operatives turn out to be unexpectedly benevolent, and the worst villains we see are monsters of venality, like the councilmen who keep the beaches open in Jaws or the developers who build on a graveyard in Poltergeist. For the most part, the characters are too busy dealing with the wonders or terrors on display to fight among themselves. In The Goonies, the kids are arguing all the time, like the crew in Jaws, but it never slows down the plot: they keep stumbling into new set pieces. It’s a strategy that works fine for a movie, in which the glow of the images and situations is enough to carry us to the climax, but a season of television can’t run on that battery alone. As a result, Stranger Things feels obliged to bring in conflicts that will keep the wheels turning, even if it lessens the appeal of the whole. The men in black are anonymous bad guys, full stop, and the show isn’t above using them to pad an episode’s body count, with the psychokinetic girl Eleven snapping their necks with her mind. (I kept expecting her to simply blow up the main antagonist, as Amy Irving—Spielberg’s future wife—did to John Cassavetes in The Fury, and I was half right.) Sustaining a sense of awe or dread over multiple episodes would have been a much harder trick than getting the lighting just right. And the strangest thing about Stranger Things is that it makes us think it might have been possible.
Earlier this week, The A.V. Club, which is still the pop culture website at which I spend the vast majority of my online life, announced a new food section called “Supper Club.” It’s helmed by the James Beard Award-winning food critic and journalist Kevin Pang, a talented writer and documentarian whose work I’ve admired for years. On Wednesday, alongside the site’s usual television and movie coverage, seemingly half the homepage was devoted to features like “America’s ten tastiest fast foods,” followed a day later by “All of Dairy Queen’s Blizzards, ranked.” And the reaction from the community was—not good. Pang’s introductory post quickly drew over a thousand comments, with the most upvoted response reading:
I’ll save you about six months of pissed-away cash. Please reallocate the money that will be wasted on this venture to add more shows to the TV Club review section.
Most of the other food features received the same treatment, with commenters ignoring the content of the articles themselves and complaining about the new section on principle. Internet commenters, it must be said, are notoriously resistant to change, and the most vocal segment of the community represents a tiny fraction of the overall readership of The A.V. Club. But I think it’s fair to say that the site’s editors can’t be entirely happy with how the launch has gone.
Yet the readers aren’t altogether wrong, either, and in retrospect, you could make a good case that the rollout should have been handled differently. The A.V. Club has gone through a rough couple of years, with many of its most recognizable writers leaving to start the movie site The Dissolve—which recently folded—even as its signature television coverage has been scaled back. Those detailed reviews of individual episodes might be popular with commenters, but they evidently don’t generate enough page views to justify the same degree of investment, and the site is looking at ways to stabilize its revenue at a challenging time for the entire industry. The community is obviously worried about this, and Supper Club happened to appear at a moment when the commenters were likely to be skeptical about any new move, as if it were all a zero-sum game, which it isn’t. But the launch itself didn’t help matters. It makes sense to start an enterprise like this with a lot of articles on its first day, but taking over half the site with minimal advance warning lost it a lot of goodwill. Pang could also have been introduced more gradually: he’s a celebrity in foodie circles, but to most A.V. Club readers, he’s just a name. (It was also probably a miscalculation to have Pang write the introductory post himself, which placed him in the awkward position of having to drum up interest in his own work for an audience that didn’t know who he was.) And while I’ve enjoyed some of the content so far, and I understand the desire to keep the features lightweight and accessible, I don’t think the site has done itself any favors by leading with articles like “Do we eat soup or do we drink soup?”
This might seem like a lot of analysis for a kerfuffle that will be forgotten within a few weeks, no matter how Supper Club does in the meantime. But The A.V. Club has been a landmark site for pop culture coverage for the last decade, and its efforts to reinvent itself should concern anyone who cares about whether such venues can survive. I found myself thinking about this shortly after reading the excellent New Yorker profile of Pete Wells, the restaurant critic of the New York Times. Its author, Ian Parker, notes that modern food writing has become a subset of cultural criticism:
“A lot of reviews now tend to be food features,” [former Times restaurant critic Mimi Sheraton] said. She recalled a reference to Martin Amis in a Wells review of a Spanish restaurant in Brooklyn; she said she would have mentioned Amis only “if he came in and sat down and ordered chopped liver.”
Craig Claiborne, in a review from 1966, observed, “The lobster tart was palatable but bland and the skewered lamb on the dry side. The mussels marinière were creditable.” Thanks, in part, to the informal and diverting columns of Gael Greene, at New York, and Ruth Reichl, the Times’ critic during the nineties, restaurant reviewing in American papers has since become as much a vehicle for cultural criticism and literary entertainment—or, as Sheraton put it, “gossip”—as a guide to eating out.
If this is true, and I think it is, it means that food criticism, for better or worse, falls squarely within the mandate of The A.V. Club, whether its commenters like it or not.
But that doesn’t mean that we shouldn’t hold The A.V. Club to unreasonably high standards. In fact, we should be harder on it than we would on most sites, for reasons that Parker neatly outlines in his profile of Wells:
As Wells has come to see it, a disastrous restaurant is newsworthy only if it has a pedigree or commercial might. The mom-and-pop catastrophe can be overlooked. “I shouldn’t be having to explain to people what the place is,” he said. This reasoning seems civil, though, as Wells acknowledged, it means that his pans focus disproportionately on restaurants that have corporate siblings. Indeed, hype is often his direct or indirect subject. Of the fifteen no-star evaluations in his first four years, only two went to restaurants that weren’t part of a group of restaurants.
Parker continues: “There are restaurants that exist to have four Times stars. With fewer, they become a kind of paradox.” And when it comes to pop culture, The A.V. Club is the equivalent of a four-star restaurant. It was writing deeply felt, outrageously long essays on film and television before the longread was even a thing—in part, I suspect, because of its historical connection to The Onion: because it was often mistaken for a parody site, it always felt the need to prove its fundamental seriousness, which it did, over and over again. If Supper Club had launched with one of the ambitious, richly reported pieces that Pang has written elsewhere, the response might have been very different. Listicles might make more economic sense, and they can be fun if done right, but The A.V. Club has defined itself as a place where obsessively detailed and personal pop culture writing has a home. That’s what Supper Club should be. And until it is, we shouldn’t be surprised if readers have trouble swallowing it.
I don’t have a lot of time to read for my own pleasure these days, but over the last week, I found myself plowing through all seven hundred pages of Powerhouse: The Untold Story of Hollywood’s Creative Artists Agency by James Andrew Miller. Admittedly, I didn’t read the whole thing with equal attention: it’s an oral history, and like many products of that genre, it’s uneven. It skips over entire years in a few paragraphs and devotes three pages to an anecdote about product placement in the Entourage movie. More disturbingly, it leaves out what feels like necessary material. If you remember anything about the superagent Michael Ovitz, who was once described as the most powerful man in Hollywood, it’s the spectacular fall from grace that ensued after he blamed his public implosion on the “gay mafia” in an interview in Vanity Fair. There’s no mention of it here—an omission that I suspect has something to do with Ovitz’s extensive participation in the project. Instead, we get a lot of inside baseball about the career paths of agents whose names will mean nothing to most readers. Toward the end, I found myself skimming, and the book left me with a lot of raw data but no real sense of how CAA accomplished what it did. Yet I did finish it, and there were points in the middle where I was devouring hundreds of pages at a sitting, which speaks both to Miller’s abilities as an interviewer and to the fascination of the agency itself.
And I have the feeling that this book will become something of a bible, or a user’s manual, for a certain kind of hungry young person in Hollywood. Creative Artists Agency was the most significant force in the industry for decades on end: at one time or another, its clients included Tom Cruise, Steven Spielberg, Tom Hanks, Martin Scorsese, Meryl Streep, Robert Redford, and Michael Crichton, all at the peaks of their careers. It mastered the art of packaging, in which a director and star were bundled internally with a screenplay and presented as a unit to a studio. CAA and its clients seem to have been involved in some capacity with every big movie of the last thirty years, and Miller’s book, not surprisingly, is crammed with good stories. There’s the saga behind Rain Man, for instance, which kicked around town for years—at one point with Dustin Hoffman attached to the Tom Cruise part and Jack Nicholson or Bill Murray under consideration for the title role—before the unworkable script was finally saved when Barry Levinson had the idea to turn it into a road movie. We hear of how Sean Connery, much to his chagrin, found himself committed to star in Just Cause without knowing it. Best of all, there’s the unbelievable saga of how Ovitz was offered the chance to run Universal, negotiated the richest pay package in studio history, and then walked away, opening the door for CAA cofounder Ron Meyer to take the job instead. As Peter Guber puts it: “When you have yourself as an agent and you’re the client, you have a fool for the client.”
There are plenty of other juicy tidbits like this, although they can be hard to find. (The book, unforgivably, lacks an index.) But it’s all shot through with a kind of nostalgia for a brand of influence that no longer exists. I don’t think there’s any doubt that power in Hollywood has been simultaneously consolidated and leveled in ways that aren’t favorable to the agencies: the real value is concentrated in a handful of franchises controlled by the studios, especially Disney, and the talent above the line, while not exactly irrelevant, is more fungible. The practice of packaging a star and a director with a hot script isn’t entirely gone, but it’s not as relevant to the bottom line when billions are riding on Marvel and Star Wars sequels. If anything, the older model is a greater force in television, which I’ve elsewhere argued is the last place where traditional star power has any meaning. A recognizable name in a lead role still carries weight, as do original ideas, which allows the agency to remain a viable player. In fact, when we look back at the golden age of television, I have a hunch that we’ll find that it was driven in large part by a migration of talent on the agency side, as agents and their clients shifted their resources to a medium that was better equipped to exploit what they did best. CAA itself has been a major player in television for years: it famously turned a moldering spec script into ER, earning a huge payout for Crichton for doing basically nothing. And I suspect that it played a considerable role in television’s recent renaissance, although you won’t hear about it here.
But Powerhouse is still worth reading as a sort of dream book, or cautionary tale, of what Hollywood used to be, and could be again. My favorite story is told by the agent David Styne, who describes flying to a farm in Illinois in an attempt to sign John Hughes:
At a certain point John Hughes leaves and comes back with this titanium briefcase. And he opens it up, and he says, “There are fifteen completed screenplays that I’ve written in this briefcase that nobody has ever seen…What I’d like to do if it’s okay for you guys, let me tell you about some of these and I want you to be honest and tell me what you think.” So he starts pitching these movies, and he’s like the greatest pitcher of all time. He was pitching us these whole movies. So the first one—yeah, we like that. Second one—yeah, we like that…So after that, he says, “So, this next one is about a woman, she’s in a hospital in Chicago, she has an abortion, but the abortion is dumped out in the alley at night, and it’s like a partial abortion, and it lives. And this little baby boy kind of grows up in the alley, and he’s like this street urchin. And I call this one Partial Sid.”
The agents glance at each other and finally say: “No, we don’t think that’s a good idea at all.” And Hughes replies: “I am so glad that you guys said that, because Jim Wiatt put me with one of his agents at ICM, and I pitched him Partial Sid, and he said ‘John! That is genius! Johnny Depp is Partial Sid.’” The story tells you a lot about agents, of course, but what I love the most about it is the image of that briefcase full of ideas: it’s like the unattainable object in an agent’s dream. And I’d like to think that most agents still fantasize about finding it—and the writer to whom it belongs—and bringing glory once again to an industry of Partial Sids.
At some point, everyone owns a copy of The Album. The title or the artist might differ, but its impact on the listener is the same: it’s simply the album that alerts you to the fact that it can be worth devoting every last piece of your inner life to music, rather than treating it as a source of background noise or diversion. It’s the first album that leaves a mark on your soul. Usually, it makes an appearance as you’re entering your teens, which means that there’s as much random chance involved here as in any of the other cultural influences that dig in their claws at that age. You don’t have a lot of control over what it will be. Maybe it begins with a song on the radio, or a cover that catches your eye at a record store, or a stab of familiarity that comes from a passing moment of exposure: in your early teens, you’re likely to love something just because you recognize it. Whatever it is, unlike every other album you’ve ever heard, it doesn’t let you go. It gets into your dreams. You draw pictures of the cover art and pick out a few notes from it on every piano. And it shapes you in ways that you can’t fully articulate. The specific album is different for everyone, or so it seems, although logic suggests that it’s probably the same for a lot of teenagers at any given time. And I think you can draw a pretty clear line between those for whom The Album involved them deeply in the culture of their era, and those who wound up estranged from it. I’d be a different person—and maybe a better one—if mine had been something like Nevermind. But it wasn’t. It was the soundtrack from Twin Peaks, followed by Julee Cruise’s Floating Into the Night.
If I’d been born a few years earlier, this might not have been an issue, but I happened to get seriously into Twin Peaks, or at least its score, long after the series itself had peaked as a cultural phenomenon. The finale had aired two full years earlier, and it had been followed shortly thereafter, with what seems today like startling speed, by Twin Peaks: Fire Walk With Me. After that, it mostly disappeared. There wasn’t even a chance for me to belatedly get into the show itself. I’d watched a few episodes back when they first aired, including the pilot and the horrifying scene in which the identity of Laura’s killer is finally revealed. As far as I can remember, the premiere was later released on video, but nothing else, and I had to get by with a few grainy episodes that my parents had recorded. It wasn’t until many years later that the first box set became available, allowing me to fully experience a show that I ultimately ended up loving, but which was far more uneven—and often routine—than its reputation had led me to believe. But it didn’t really matter. Twin Peaks was just a television show, admittedly an exceptional one, but the score by Angelo Badalamenti was something else: a vision of a world that was complete and unlimited in itself. I’d have trouble expressing exactly what it represents, except that it has something to do with the places where a gorgeous nightmare impinges on the everyday. In Blue Velvet, which I still think is David Lynch’s greatest achievement, Jeffrey expresses it as simply as possible: “It’s a strange world.” But you can hear it more clearly in “Laura Palmer’s Theme,” which Badalamenti composed in response to Lynch’s instructions:
Start it off foreboding, like you’re in a dark wood, and then segue into something beautiful to reflect the trouble of a beautiful teenage girl. Then, once you’ve got that, go back and do something that’s sad and go back into that sad, foreboding darkness.
If all forms of art, as Walter Pater puts it, aspire to the condition of music, then it isn’t an exaggeration to say that Twin Peaks aspired to the condition of its own soundtrack. Badalamenti’s score did everything that the series itself often struggled to accomplish, and there were times when I felt that the music was the primary work, with the show as a kind of visual adjunct. (I still feel that way, on some level, about Twin Peaks: Fire Walk With Me. The movie means a lot to me, but I don’t have a lot of interest in rewatching it, while I know every note of the soundtrack by heart, even though I haven’t listened to it in years.) And even if I grant that a soundtrack is never really complete in itself, the Twin Peaks score pointed invisibly toward an even more intriguing artifact. It included three tracks—“The Nightingale,” “Into the Night,” and “Falling”—sung by Julee Cruise, with music by Badalamenti and lyrics by Lynch, who had earlier written her song “Mysteries of Love” for Blue Velvet. I loved them, obviously, and I can still remember the moment when a close reading of the liner notes clued me into the fact that there was an entire album by Cruise, Floating Into the Night, that I could actually own. (In fact, there were two. As it happened, my brainstorm occurred only a few months after the release of The Voice of Love, a much less coherent sophomore album that I wouldn’t have missed for the world.) Listening to it for the first time, I felt like the narrator of Borges’s “Tlön, Uqbar, Orbis Tertius,” who once saw a fragment of an undiscovered country, and now found himself confronted with all of it at once. The next few years of my life were hugely eventful, as they are for every teenager: I read, did, and thought about a lot of things, some of which are paying off only now. But whatever else I was doing, I was probably listening to Floating Into the Night.
So when I heard that the Twin Peaks soundtrack was coming out today in a deluxe new vinyl release, I had mixed feelings. (Of course, I’m going to buy a copy, and so should you.) The plain fact is that toward the end of my teens, I put Badalamenti and Cruise away, and I haven’t listened to them much since. Which isn’t to say that I didn’t give them a lifetime’s worth of listening in the meantime. I became obsessed with Industrial Symphony No. 1: The Dream of the Brokenhearted, the curious performance piece by Lynch in which Cruise floats on wires high above the stage at the Brooklyn Academy of Music. Much later, I saw Cruise perform, rather awkwardly, in person. I tracked down her other collaborations and guest appearances—including the excellent “If I Survive” with Hybrid—and even bought her third album, The Art of Being a Girl, which I liked a lot. Somehow I never got around to buying the next one, though, and long before I graduated from college, Cruise and Badalamenti had ceased to play a role in my life. And I regret this. I still think that Floating Into the Night is a perfect album, although it wasn’t until years later, when I heard Cruise’s real, hilariously brassy voice, that I realized the extent to which I’d fallen in love with an ironic simulation. There are still moments when I believe, with complete seriousness, that I’d be a better person today if I’d kept listening to this music: half of my life has been spent trying to live up to the values of my early adolescence, and I might have had an easier job of integrating all of my past selves if they shared a common soundtrack. Whenever I play it now, it feels like a part of me that has been locked away, ageless and untouched, in the Black Lodge. But life has a way of coming full circle. As Laura says to Cooper: “I’ll see you again in twenty-five years.” And it feels sometimes as if she were talking to me.
There are two sorts of commentary tracks. The first kind is recorded shortly after a movie or television season is finished, or even while it’s still being edited or mixed, and before it comes out in theaters. Because their memories of the production are still vivid, the participants tend to be a little giddy, even punch drunk, and their feelings about the movie are raw: “The wound is still open,” as Jonathan Franzen put it to Slate. They don’t have any distance, and they remember everything, which means that they can easily get sidetracked into irrelevant detail. They don’t yet know what is and isn’t important. Most of all, they don’t know how the film did with viewers or critics, so their commentary becomes a kind of time capsule, sometimes laden with irony. The second kind of commentary is recorded long after the fact, whether for a special edition, for the release of an older movie in a new format, or for a television series that is catching up with its early episodes. These tend to be less predictable in quality: while commentaries on recent work all start to sound more or less the same, the ones that reach deeper into the past are either disappointingly superficial or hugely insightful, without much room in between. Memories inevitably fade with time, but this can also allow the artist to be more honest about the result, and the knowledge of how the work was ultimately received adds another layer of interest. (For instance, one of my favorite commentaries from The Simpsons is for “The Principal and the Pauper,” with writer Ken Keeler and others ranting against the fans who declared it—preemptively, it seems safe to say—the worst episode ever.)
Perhaps most interesting of all are the audio commentaries that begin as the first kind, but end up as the second. You can hear it on the bonus features for The Lord of the Rings, in which, if memory serves, Peter Jackson and his cowriters start by talking about a movie that they finished years ago, continue by discussing a movie that they haven’t finished editing yet, and end by recording their comments for The Return of the King after it won the Oscar for Best Picture. (This leads to moments like the one for The Two Towers in which Jackson lays out his reasoning for pushing the confrontation with Saruman to the next movie—which wound up being cut for the theatrical release.) You also see it, on a more modest level, on the author’s commentaries I’ve just finished writing for my three novels. I began the commentary on The Icon Thief way back on April 30, 2012, or less than two months after the book itself came out. At the time, City of Exiles was still half a year away from being released, and I was just beginning the first draft of the novel that I still thought would be called The Scythian. I had a bit of distance from The Icon Thief, since I’d written a whole book and started another in the meantime, but I was still close enough that I remembered pretty much everything from the writing process. In my earliest posts, you can sense me trying to strike the right balance between providing specific anecdotes about the novel itself and offering more general thoughts on storytelling, while using the book mostly as a source of examples. And I eventually reached a compromise that I hoped would allow those who had actually read the book to learn something about how it was put together, while still being useful to those who hadn’t.
As a result, the commentaries began to stray further from the books themselves, usually returning to the novel under discussion only in the final paragraph. I did this partly to keep the posts accessible to nonreaders, but also because my own relationship with the material had changed. Yesterday, when I posted the last entry in my commentary on Eternal Empire, almost four years had passed since I finished the first draft of that novel. Four years is a long time, and it’s even longer in writing terms. If every new project puts a wall between you and the previous one, a series of barricades stands between these novels and me: I’ve since worked on a couple of book-length manuscripts that never got off the ground, a bunch of short stories, a lot of occasional writing, and my ongoing nonfiction project. With each new endeavor, the memory of the earlier ones grows dimmer, and when I go back to look at Eternal Empire now, not only do I barely remember writing it, but I’m often surprised by my own plot. This estrangement from a work that consumed a year of my life is a little sad, but it’s also unavoidable: you can’t keep all this information in your head and still stay sane. Amnesia is a coping strategy. We’re all programmed to forget many of our experiences—as well as our past selves—to free up capacity for the present. A novel is different, because it exists in a form outside the brain. Any book is a piece of its writer, and it can be as disorienting to revisit it as it is to read an old diary. As François Mauriac put it: “It is as painful as reading old letters…We touch it like a thing: a handful of ashes, of dust.” I’m not quite at that point with Eternal Empire, but I’ll sometimes read a whole series of chapters and think to myself, where did that come from?
Under the circumstances, I should count myself lucky that I’m still reasonably happy with how these novels turned out, since I have no choice but to be objective about it. There are things that I’d love to change, of course: sections that run too long, others that seem underdeveloped, conceits that seem too precious or farfetched or convenient. At times, I can see myself taking the easy way out, going with a shortcut or ignoring a possible implication because I lacked the time or energy to do it justice. (I don’t necessarily regret this: half of any writing project involves conserving your resources for when it really matters.) But I’m also surprised by good ideas or connections that seem to have come from outside of me, as if, to use Isaac Asimov’s phrase, I were writing over my own head. Occasionally, I’ll have trouble following my own logic, and the result is less a commentary than a forensic reconstruction of what I must have been thinking at the time. But if I find it hard to remember my reasoning today, it’s easier now than it will be next year, or after another decade. As I suspected at the time, the commentary exists more for me than for anybody else. It’s where I wrote down my feelings about a series of novels that once dominated my life, and which now seem like a distant memory. While I didn’t devote nearly as many hours to these commentaries as I did to the books themselves, they were written over a comparable stretch of time. And now that I’ve gotten to the point of writing a commentary on my commentary—well, it’s pretty clear that it’s time to stop.
How do you end a series that has lasted for three books and more than a thousand pages? To some extent, no conclusion can be completely satisfying, so it makes sense to focus on what you actually stand a chance of achieving. There’s a reason, for instance, that so few series finales live up to our hopes: a healthy television show has to cultivate and maintain more narrative threads than can be resolved in a single episode, so any finale has to leave certain elements unaddressed. In practice, this means that entire characters and subplots are ignored in favor of others, which is exactly how it should be. During the last season of Mad Men, Matthew Weiner and his writing team prepared a list of story points that they wanted to revisit, and reading it over again now is a fascinating exercise. The show used some of the ideas, but it omitted many more, and we never did get a chance to see what happened to Sal, Dr. Faye, or Peggy’s baby. This kind of creative pruning is undoubtedly good for the whole, and it serves as a reminder of Weiner’s exceptional skill as a showrunner. Mad Men was one of the most intricate dramas ever written, with literally dozens of characters who might have earned a resonant guest appearance in the closing stretch of episodes. But Weiner rightly forced himself to focus on the essentials, while also allowing for a few intriguing digressions, and the result was one of the strongest finales I’ve ever seen—a rare example of a show sticking the landing after maintaining an impossibly high standard from the first episode to the last.
It’s tempting to think of a series finale as a piece of valuable real estate in which every second counts, or as a zero-sum game in which every moment devoted to one character means that another won’t have a chance to appear. (Watching the Mad Men finale, I found myself waiting for my favorite supporting players to pop up, and as soon as they had their scene, I couldn’t help thinking: That’s the last thing I’ll ever see them do.) But it can be dangerous to take such a single-minded approach to any unit of narrative, particularly for shows that have thrived on the unpredictable. My favorite example is the series finale of Twin Peaks, which wasn’t even meant to end the show, but provided as perfect a conclusion as any viewer could want—an opinion that I’ll continue to hold even after the new season premieres on Showtime. Instead of taking time to check in with everyone in their huge cast, David Lynch and Mark Frost indulge in long, seemingly pointless set pieces: the scene in the bank with Audrey, with the decrepit manager shuffling interminably across the floor to get her a drink of water, and especially the sequence in the Black Lodge, which is still the weirdest, emptiest twenty minutes ever to air on network television. You can imagine a viewer almost shouting at the screen for Lynch and Frost to get back to Sheriff Truman or Shelly or Donna, but that wouldn’t have been true to the show’s vision. Similarly, the Mad Men finale devotes a long scene to a character we’ve never seen before or since, the man at the encounter group who ends up inspiring Don’s return to humanity. It might seem like a strange choice, but it was the right call: Don’s relationships with every other character were so burdened with history that it took a new face to carry him over the finish line.
I found myself dealing with many of the same issues when it came to the epilogue of Eternal Empire, which was like the final season of a television series that had gone on for longer than I’d ever expected. Maddy and Wolfe had already received a sendoff in the previous chapter, so I only had to deal with Ilya. Pragmatically, the scene could have been about anything, or nothing at all. Ilya was always a peculiar character: he was defined mostly by action, and I deliberately refrained from detailing large portions of his backstory, on the assumption that he would be more interesting the less we knew about his past. It would have been easy to give him a conclusion that filled in more of his background, or that restored something of what he had lost—his family, a home, his sense of himself as a fundamentally good man. But that didn’t seem right. Another theme that you often see in series finales, particularly for a certain type of sitcom, is the showrunner’s desire to make every character’s dreams come true: the last season of Parks and Recreation, in particular, was a sustained exercise in wish fulfillment. I can understand the need to reward the characters that we love, but in Ilya’s case, what I loved about him was inseparable from the fact of his rootlessness. The novel repeatedly draws a parallel between his situation and that of the Khazars, the tribe of nomads that converted to Judaism before being erased from history, and I once compared him to the tzaddikim, or the unknown men and women for whose sake God refrains from destroying the world. Above all else, he was the Scythian, a wanderer of the steppes. I chose these emblems intuitively, but they clearly all have something in common. And together they implied that Ilya would have to depart the series as he began it: as a man without a country.
What we get, in the end, is this quiet scene, in which Ilya goes to visit the daughter of the woman who had helped him in Yalta. The woman was a bride of the brotherhood, a former convict who gave up her family to work with the thieves, and her daughter ended up as the servant of a gangster in Moldova, five hundred miles away. Ilya gives her some money and her mother’s address, which he hopes will allow them to build a new life together, and then leaves. (The song that is playing on the girl’s cassette deck, incidentally, is Joni Mitchell’s “Cactus Tree.” This might be the nerdiest, most obscure inside joke of the entire series: it’s the song that appears in a deleted epigraph in the page proofs of Gravity’s Rainbow, before Thomas Pynchon removed it prior to publication. I’d wanted to use it, in some form, since The Icon Thief, and the fact that it includes the word “eternity” was a lucky coincidence.) It all makes for a subdued conclusion to the trilogy, and I came up with it fairly late in the process: as far as I can remember, the idea that there was a connection between the women in Yalta and Moldova didn’t occur to me until I’d already outlined the scenes, and this conclusion would have been an equally late addition. And it works, more or less, even if it feels a little too much like the penultimate scene of The Bourne Supremacy. It seemed right to end the series—which was pointedly made up of big, exaggerated gestures—on a gentle note, which implies that reuniting a parent and her child might be an act of greater significance than saving the world. I don’t know where Ilya goes after this, even though I spent the better part of four years trying to see through his eyes. But I suspect that he just wants to be left in peace…
Vladimir Putin is still here. I type these words not because we need to be reminded of that fact—I can’t think of another foreign political leader whose shadow has loomed so ominously over a peacetime presidential race—but to consider what it means. When I began writing The Icon Thief, more than eight years ago, Putin was ostensibly on his way out: he was ineligible to run for a third term, so the reins of power were passed to Dmitry Medvedev, his chosen successor. Instead, Medvedev appointed him prime minister, and a few years later, Putin was back in the presidency, as if he’d never been gone. It isn’t hard to imagine him pulling the same trick forever, or for as long as his health holds out, which might be for quite some time. He’s only in his early sixties now, which is practically his young adulthood compared to some of the decrepit Russian leaders of the past, and he’s in what he takes pains to assure us is peak physical condition. It’s a situation that ought to keep most of us up at night, but it’s also a boon to suspense novelists. As I once pointed out, Putin’s name is the most evocative word in the lexicon of the modern thriller: it calls up an entire world of intrigue and implication, allowing a novel to do in a few sentences what might otherwise require five pages. As a rhetorical device, it isn’t just confined to fiction, either. Putin wouldn’t be evoked so often in this election if he didn’t have such a powerful hold over our imaginations, and recent events have only confirmed, as I’ve said from the beginning, that nothing that a writer can invent about Russia can possibly compare to the reality.
Incorporating a contemporary or historical political figure into a thriller is nothing new, of course. The gold standard was set, as it was in so many other things, by Frederick Forsyth, who built The Day of the Jackal around an assassination attempt on Charles de Gaulle, and who gave prominent speaking parts to Margaret Thatcher in several of his later novels. It’s a trick that grows stale when a writer uses it too often, as Forsyth sometimes does, but it’s easy to understand its appeal. For a certain kind of thriller, the story is less about something that could happen than about what might be happening right now, or that has already happened without our knowledge. Such novels often set up a sliding scale of verisimilitude, starting with big, obvious figures like Putin, working their way down through historical figures or events that aren’t as familiar, and finally entering the realm of pure fiction. Even if you’re reasonably conversant with current events, you can have trouble telling where fact leaves off and invention begins, especially when the novel starts to show its age. (For instance, I have a feeling that most contemporary readers of The Day of the Jackal aren’t aware that the opening sequence, which depicts a failed attempt on de Gaulle’s life, is based on fact—an interesting case of a novel outliving the material that it once used to enhance its own credibility.) Ideally, the transition from someone like Putin to the fictional characters at the bottom of the pecking order should be totally seamless, at least in the moment. We know that Putin is real and that most of the other characters aren’t, but in some cases, we aren’t sure, and the overwhelming fact of Putin himself serves to organize and enhance the rest of the story.
Eternal Empire is literally framed by Putin, both in terms of how the novel was conceived and of how it was finally published. It opens with an epigraph from Rachel Polonsky’s Molotov’s Magic Lantern, which describes how Putin asked to have a fragment of the polar seabed brought back to him as a nod to the underground kingdom of Shambhala, and it ends with an excerpt from a New York Times article from December 10, 2011, which describes the abortive protests that flared up that year against the Putin regime. As I’ve mentioned elsewhere, the entire novel unfolded like a paper flower from those lines in Polonsky’s book, and it isn’t hard to see why they struck me. In juxtaposing the steely figure of Putin, the ultimate pragmatist, with the gauzy myth of Shambhala, it encapsulates the tension that defined the rest of the series, which in many ways is about the collision between practical spycraft and the weirder elements that have a way of impinging on the rational picture. (As Powell says to Wolfe of the Shambhala story: “That doesn’t sound like the Putin I know.”) The closing epigraph attracted me for many of the same reasons. Its image of protesters with white flowers and ribbons was derived from an actual event, but it could easily stand for something more. A white flower can mean just about anything, so it wasn’t hard for me to tweak the story so that the protests seemed to emerge from the Shambhala plot. And the entire narrative was timed to culminate at this moment, which would serve as the visible eruption of the forces that my characters had spent the entire book marshaling in secret.
Now that five years have passed, the image that concludes the trilogy, of Maddy watching the protesters on television, feels very different in tone. The protests themselves are little more than a footnote, and Putin’s hold on power has never been stronger. Since the plot hinges on a plan to change Russian politics from the inside, the historical outcome might seem to undermine the whole story. I’m not sure it does, though. Maddy notes that Tarkovsky has bought himself “a few years” to prepare, which might well mean that his plan is underway even now—although I doubt it. More pragmatically, the characters observe, both here and in the epilogue, that most attempts at reform are crushed, and that a revolution is more likely to die than to endure. (You can picture me typing those lines, more than three years ago, as a way of hedging my bets.) But if there’s a thread that runs through all these novels, it’s the importance of small, private victories in the face of the indifference or hostility of larger systems. I began the series with a conspiracy novel, which is a genre that implicitly raises the issue, even in its pulpiest incarnations, of the relationship between the individual and the impersonal forces to which he or she is subjected. All three books conclude on a similar note, which is that we can try to get a glimpse behind the mask, if only for a moment, and then return to the more achievable task of establishing what little order we can in our own lives. It isn’t much of an answer, but it provides just enough consolation to see us through, both in a novel and in the real world. Putin survives, as I suspect I always knew he would. But so do Wolfe and Maddy. And that’s how their story ends…