Forty years ago, the cinematographer Garrett Brown invented the Steadicam. It was a stabilizer attached to a harness that allowed a camera operator, walking on foot or riding in a vehicle, to shoot the kind of smooth footage that had previously only been possible using a dolly. Before long, it had revolutionized the way in which both movies and television were shot, and not always in the most obvious ways. When we think of the Steadicam, we’re likely to remember virtuoso extended takes like the Copacabana sequence in Goodfellas, but it can also be a valuable tool even when we aren’t supposed to notice it. As the legendary Robert Elswit said recently to the New York Times:
“To me, it’s not a specialty item,” he said. “It’s usually there all the time.” The results, he added, are sometimes “not even necessarily recognizable as a Steadicam shot. You just use it to get something done in a simple way.”
Like digital video, the Steadicam has had a leveling influence on the movies. Scenes that might have been too expensive, complicated, or time-consuming to set up in the conventional manner can be done on the fly, which has opened up possibilities both for innovative stylists and for filmmakers who are struggling to get their stories made at all.
Not surprisingly, there are skeptics. In On Directing Film, which I think is the best book on storytelling I’ve ever read, David Mamet argues that it’s a mistake to think of a movie as a documentary record of what the protagonist does, and he continues:
The Steadicam (a hand-held camera), like many another technological miracle, has done injury; it has injured American movies, because it makes it so easy to follow the protagonist around, one no longer has to think, “What is the shot?” or “Where should I put the camera?” One thinks, instead, “I can shoot the whole thing in the morning.”
This conflicts with Mamet’s approach to structuring a plot, which hinges on dividing each scene into individual beats that can be expressed in purely visual terms. It’s a method that emerges naturally from the discipline of selecting shots and cutting them together, and it’s the kind of hard work that we’re often tempted to avoid. As Mamet adds in a footnote: “The Steadicam is no more capable of aiding in the creation of a good movie than the computer is in the writing of a good novel—both are labor-saving devices, which simplify and so make more attractive the mindless aspects of creative endeavor.” The casual use of the Steadicam seduces directors into conceiving of the action in terms of “little plays,” rather than in fundamental narrative units, and it removes some of the necessity of disciplined thinking beforehand.
But it isn’t until toward the end of the book that Mamet delivers his most ringing condemnation of what the Steadicam represents:
“Wouldn’t it be nice,” one might say, “if we could get this hall here, really around the corner from that door there; or to get that door here to really be the door that opens on the staircase to that door there? So we could just move the camera from one to the next?”
It took me a great deal of effort and still takes me a great deal and will continue to take me a great deal of effort to answer the question thusly: no, not only is it not important to have those objects literally contiguous; it is important to fight against this desire, because fighting it reinforces an understanding of the essential nature of film, which is that it is made of disparate shots, cut together. It’s a door, it’s a hall, it’s a blah-blah. Put the camera “there” and photograph, as simply as possible, that object. If we don’t understand that we both can and must cut the shots together, we are sneakily falling victim to the mistaken theory of the Steadicam.
This might all sound grumpy and abstract, but it isn’t. Take Birdman. You might well love Birdman—plenty of viewers evidently did—but I think it provides a devastating confirmation of Mamet’s point. By playing as a single, seemingly continuous shot, it robs itself of the ability to tell the story with cuts, and it inadvertently serves as an advertisement of how most good movies come together in the editing room. It’s an audacious experiment that never needs to be tried again. And it wouldn’t exist at all if it weren’t for the Steadicam.
But the Steadicam can also be a thing of beauty. I don’t want to discourage its use by filmmakers for whom it means the difference between making a movie under budget and never making it at all, as long as they don’t forget to think hard about all of the constituent parts of the story. There’s also a place for the bravura long take, especially when it depends on our awareness of the unfaked passage of time, as in the opening of Touch of Evil—a long take, made without benefit of a Steadicam, that runs the risk of looking less astonishing today because technology has made this sort of thing so much easier. And there’s even room for the occasional long take that exists only to wow us. De Palma has a fantastic one in Raising Cain, which I watched again recently, that deserves to be ranked among the greats. At its best, it can make the filmmaker’s audacity inseparable from the emotional core of the scene, as David Thomson observes of Goodfellas: “The terrific, serpentine, Steadicam tracking shot by which Henry Hill and his girl enter the Copacabana by the back exit is not just his attempt to impress her but Scorsese’s urge to stagger us and himself with bravura cinema.” The best example of all is The Shining, with its tracking shots of Danny pedaling his Big Wheel down the deserted corridors of the Overlook. It’s showy, but it also expresses the movie’s basic horror, as Danny is inexorably drawn to the revelation of his father’s true nature. (And it’s worth noting that much of its effectiveness is due to the sound design, with the alternation of the wheels against the carpet and floor, which is one of those artistic insights that never grows dated.) The Steadicam is a tool like any other, which means that it can be misused. It can be wonderful, too. But it requires a steady hand behind the camera.
Note: Spoilers follow for the season finale of Westworld.
Over time, as a society, we’ve more or less figured out how we’re all supposed to deal with spoilers. When a movie first comes out, there’s a grace period in which most of us agree not to discuss certain aspects of the story, especially the ending. Usually, reviewers will confine their detailed observations to the first half of the film, which can be difficult for a critic who sees his or her obligation as that of a thoughtful commentator, rather than of a consumer advisor who simply points audiences in the right direction on opening weekend. If there’s a particularly striking development before the halfway mark, we usually avoid talking about that, too. (Over time, the definition of what constitutes a spoiler has expanded to the point where some fans apply it to any information about a film whatsoever, particularly for big franchise installments.) For six months or so, we remain discreet—and most movies, it’s worth noting, are forgotten long before we even get to that point. A movie with a major twist at the end may see that tacit agreement extended for years. Eventually, however, it becomes fair game. Sometimes it’s because a surprise has seeped gradually into the culture, so that a film like Citizen Kane or Psycho becomes all but defined by its secrets. In other cases, as with The Sixth Sense or Fight Club, it feels more like we’ve collectively decided that anyone who wants to see it has already gotten a chance, and now we can talk about it openly. And up until now, it’s a system that has worked pretty well.
But this approach no longer makes sense for a television show that is still on the air, at least if the case of Westworld is any indication. We’re not talking about spoilers, exactly, but about a certain kind of informed speculation. The idea that one of the plotlines on Westworld was actually an extended flashback first surfaced in discussions on communities like Reddit, was picked up by the commenters on the reviews on mainstream websites, led theorists to put together elaborate chronologies and videos to organize the evidence, and finally made its way into think pieces. Long before last night’s finale, it was clear that the theory had to be correct. The result didn’t exactly ruin my enjoyment, since it turned out to be just one thread in a satisfying piece of storytelling, but I’ll never know what it would have been like to have learned the truth along with Dolores, and I suspect that a lot of other viewers felt the same twinge of regret. (To be fair, the percentage of people who keep up with this sort of theorizing online probably amounts to a fraction of the show’s total viewership, and the majority of the audience experienced the reveal pretty much as the creators envisioned it.) There’s clearly no point in discouraging this kind of speculation entirely. But when a show plays fair, as Westworld did, it’s only a matter of time before somebody solves the mystery in advance. And because a plausible theory can spread so quickly through the hive mind, it makes us feel smarter, as individuals, than we really are, which compromises our reactions to what was a legitimately clever and resonant surprise.
Westworld isn’t the first show to be vulnerable to this kind of collective sleuthing: Game of Thrones has been subjected to it for years, especially when it comes to the parentage, status, and ultimate fate of a certain character who otherwise wouldn’t seem interesting enough to survive. In both cases, it’s because the show—or the underlying novels—provided logical clues along the way to prepare us, in the honorable fashion of all good storytelling. The trouble is that these rules were established at a time when most works of narrative were experienced in solitude. Even if one out of three viewers figured out the twist in The Usual Suspects before the movie was halfway done, it didn’t really affect the experience of the others in the theater, since we don’t tend to discuss the story in progress out loud. That was true of television, too, for most of the medium’s history. These days, however, many of us are essentially talking about these stories online while they’re still happening, so it isn’t surprising if the solutions can spread like a virus. I don’t blame the theorists, because this kind of speculation can be an absorbing game in its own right. But it’s so powerful that it needs to be separated from the general population. It requires a kind of self-policing, or quarantine, that has to become second nature to every viewer of this kind of show. Reviewers need to figure out how to deal with it, too. Otherwise, shows will lose the incentive to play fair, relying instead on blunter, more mechanical kinds of surprise. And this would be a real shame, because Westworld has assembled the pieces so effectively that I don’t doubt it will continue to do so in the future.
Watching the finale, I was curious to see how it would manage to explain the chronology of Dolores’s story without becoming hopelessly confusing, and it did a beautiful job, mostly by subordinating it to the larger questions of William’s fate, Dolores’s journey, and Ford’s master plan, which has taken thirty-five years to come to fruition. (In itself, this is a useful insight into storytelling: it’s easier for the audience to make a big conceptual leap when it feeds into an emotional arc that is already in progress, and if it’s treated as a means, not an end.) If anything, the reveal of the identity of Wyatt was even more powerful—although, oddly, the fact that everything has unfolded according to Ford’s design undermines the agency of the very robots that it was supposed to defend. It’s an emblem for why this excellent season remains one notch down from the level of a masterpiece, thanks to the need of its creators, like Ford, to maintain a tight level of control. Still, if it lasts for as long as I think it will, it may not even matter how much of it the Internet figured out on first viewing. For a television show, the lifespan of a spoiler seems to play in reverse: instead of a grace period followed by free discussion after enough time has passed, we get intense speculation while the show airs, giving way to silence once we’ve all moved on to the next big thing. If Westworld endures as a work of art, it will be seen just as it was intended by those who discover it much later, after the flurry of speculation has faded. I don’t know how long it will take before it can be seen again with fresh eyes. But thirty-five years seems about right.
Over the last few months, there’s been a surprising flurry of film and television activity involving the writers featured in my upcoming book Astounding. SyFy has announced plans to adapt Robert A. Heinlein’s Stranger in a Strange Land as a miniseries, with an imposing creative team that includes Hollywood power broker Scott Rudin and Zodiac screenwriter James Vanderbilt. Columbia is aiming to reboot Starship Troopers with producer Neal H. Moritz of The Fast and the Furious, prompting Paul Verhoeven, the director of the original, to comment: “Going back to the novel would fit very much in a Trump presidency.” The production company Legendary has bought the film and television rights to Dune, which first appeared as a serial edited by John W. Campbell in Analog. Meanwhile, Jonathan Nolan is apparently still attached to an adaptation of Isaac Asimov’s Foundation, although he seems rather busy at the moment. (L. Ron Hubbard remains relatively neglected, unless you want to count Leah Remini’s new show, which the Church of Scientology would probably prefer that you didn’t.) The fact that rights have been purchased and press releases issued doesn’t necessarily mean that anything will happen, of course, although the prospects for Stranger in a Strange Land seem strong. And while it’s possible that I’m simply paying more attention to these announcements now that I’m thinking about these writers all the time, I suspect that there’s something real going on.
So why the sudden surge of interest? The most likely, and also the most heartening, explanation is that we’re experiencing a revival of hard science fiction. Movies like Gravity, Interstellar, The Martian, and Arrival—which I haven’t seen yet—have demonstrated that there’s an audience for films that draw more inspiration from Clarke and Kubrick than from Star Wars. Westworld, whatever else you might think of it, has done much the same on television. And there’s no question that the environment for this kind of story is far more attractive now than it was even ten years ago. For my money, the most encouraging development is the movie Life, a horror thriller set on the International Space Station, which is scheduled to come out next summer. I’m tickled by it because, frankly, it doesn’t look like anything special: the trailer starts promisingly enough, but it ends by feeling very familiar. It might turn out to be better than it looks, but I almost hope that it doesn’t. The best sign that a genre is reaching maturity isn’t a series of singular achievements, but the appearance of works that are content to color inside the lines, consciously evoking the trappings of more visionary movies while remaining squarely focused on the mainstream. A film like Interstellar is always going to be an outlier. What we need are movies like what Life promises to be: a science fiction film of minimal ambition, but a certain amount of skill, and a willingness to copy the most obvious features of its predecessors. That’s when you’ve got a trend.
The other key development is the growing market for prestige dramas on television, which is the logical home for Stranger in a Strange Land and, I think, Dune. It may be the case, as we’ve been told in connection with Star Trek: Discovery, that there isn’t a place for science fiction on a broadcast network, but there’s certainly room for it on cable. Combine this with the increased appetite for hard science fiction on film, and you’ve got precisely the conditions in which smart production companies should be snatching up the rights to Asimov, Heinlein, and the rest. Given the historically rapid rise and fall of such trends, they shouldn’t expect this window to remain open for long. (In a letter to Asimov on February 3, 1939, Frederik Pohl noted the flood of new science fiction magazines on newsstands, and he concluded: “Time is indeed of the essence…Such a condition can’t possibly last forever, and the time to capitalize on it is now; next month may be too late.”) What they’re likely to find, in the end, is that many of these stories are resistant to adaptation, and that they’re better off seeking out original material. There’s a reason that there have been so few movies derived from Heinlein and Asimov, despite the temptation that they’ve always presented. Heinlein, in particular, seems superficially amenable to the movies: he certainly knew how to write action in a way that Asimov couldn’t. But he also liked to spend the second half of a story picking apart the assumptions of the first, after sucking in the reader with an exciting beginning, and if you aren’t going to include the deconstruction, you might as well write something from scratch.
As it happens, the recent spike of action on the adaptation front has coincided with another announcement. Analog, the laboratory in which all these authors were born, is cutting back its production schedule to six double issues every year. This is obviously intended to manage costs, and it’s a reminder of how close to the edge the science fiction digests have always been. (To be fair, the change also coincides with a long overdue update of the magazine’s website, which is very encouraging. If this reflects a true shift from print to online, it’s less a retreat than a necessary recalibration.) It’s easy to contrast the game of pennies being played at the bottom with the expenditure of millions of dollars at the top, but that’s arguably how it has to be. Analog, like Astounding before it, was a machine for generating variations, which needs to be done on the cheap. Most stories are forgotten almost at once, and the few that survive the test of time are the ones that get the lion’s share of resources. All the while, the magazine persists as an indispensable form of research and development—a sort of skunk works that keeps the entire enterprise going. That’s been true since the beginning, and you can see this clearly in the lives of the writers involved. Asimov, Heinlein, Herbert, and their estates became wealthy from their work. Campbell, who more than any other individual was responsible for the rise of modern science fiction, did not. Instead, he remained in his little office, lugging manuscripts in a heavy briefcase twice a week on the train. He was reasonably well off, but not in a way that creates an empire of valuable intellectual property. Instead, he ran the lab. And we can see the results all around us.
Note: Spoilers follow for the most recent episode of Westworld.
I’ve written a lot on this blog about the power of ensembles, which allow television shows to experiment with different combinations of characters. Usually, it takes a season or two for the most fruitful pairings to emerge, and they can take even the writers by surprise. When a series begins, characters tend to interact based on where the plot puts them, and those initial groupings are based on little more than the creator’s best guess. Later, when the strengths of the actors have become apparent and the story has wandered in unanticipated directions, you end up with wonderful pairings that you didn’t even know you wanted. Last night’s installment of Westworld features at least two of these. The first is an opening encounter between Bernard and Maeve that gets the episode off to an emotional high that it never quite manages to top: it hurries Bernard to the next—and maybe last—stage of his journey too quickly to allow him to fully process what Maeve tells him. But it’s still nice to see them onscreen together. (They’re also the show’s two most prominent characters of color, but its treatment of race is so deeply buried that it barely even qualifies as subtext.) The second nifty scene comes when Charlotte, the duplicitous representative from the board, shows up in the Man in Black’s storyline. It’s more plot-driven, and it exists mostly to feed us some useful pieces of backstory. But there’s an undeniable frisson whenever two previously unrelated storylines reveal a hidden connection.
I hope that the show gives us more moments like this, but I’m also a little worried that it can’t. The scenes that I liked most in “The Well-Tempered Clavier” were surprising and satisfying precisely because the series has been so meticulous about keeping its plot threads separated. This may well be because at least one subplot is occurring in a different timeline, but more often, it’s a way of keeping things orderly: there’s so much happening in various places that the show is obliged to let each story go its own way. I don’t fault it for this, because this is such a superbly organized series, and although there are occasional lulls, they’ve been far fewer than you’d expect from a show with this level of complexity. But very little of it seems organic or unanticipated. This might seem like a quibble. Yet I desperately want this show to be as great as it shows promise of being. And if there’s one thing that the best shows of the last decade—from Mad Men to Breaking Bad to Fargo—have in common, it’s that they enjoy placing a few characters in a room and simply seeing what happens. You could say that Westworld is an inherently different sort of series, and that’s fine. But it’s such an effective narrative machine that it leaves me a little starved for those unpredictable moments that television, of all media, is the most likely to produce. (Its other great weakness is its general air of humorlessness, which arises from the same cause.) This is one of the most plot-heavy shows I’ve ever seen, but it’s possible to tell a tightly structured story while still leaving room for the unexpected. In fact, that’s one sign of mastery.
And you don’t need to look far for proof. In a pivotal passage in The Films of Akira Kurosawa, one of my favorite books on the movies, Donald Richie writes of “the irrational rightness of an apparently gratuitous image in its proper place,” and he goes on to say:
Part of the beauty of such scenes…is just that they are “thrown away” as it were, that they have no place, that they do not ostensibly contribute, that they even constitute what has been called bad filmmaking. It is not the beauty of these unexpected images, however, that captivates…but their mystery. They must remain unexplained. It has been said that after a film is over all that remains are a few scattered images, and if they remain then the film was memorable…Further, if one remembers carefully one finds that it is only the uneconomical, mysterious images which remain…Kurosawa’s films are so rigorous and, at the same time, so closely reasoned, that little scenes such as this appeal with the direct simplicity of water in the desert.
“Rigorous” and “closely reasoned” are two words that I’m sure the creators of Westworld would love to hear used to describe their show. But when you look at a movie like Seven Samurai—which on some level is the greatest western ever made—you have to agree with Richie: “What one remembers best from this superbly economical film then are those scenes which seem most uneconomical—that is, those which apparently add nothing to it.”
I don’t know if Westworld will ever become confident enough to offer viewers more water in the desert, but I’m hopeful that it will, because the precedent exists for a television series giving us a rigorous first season that it blows up down the line. I’m thinking, in particular, of Community, a show that might otherwise seem to have little in common with Westworld. It’s hard to remember now, after six increasingly nutty seasons, but Community began as an intensely focused sitcom: for its debut season, it didn’t even leave campus. The result gave the show what I’ve called a narrative home base, and even though I’m rarely inclined to revisit that first season, the groundwork that it laid was indispensable. It turned Greendale into a real place, and it provided a foundation for even the wildest moments to follow. Westworld seems to be doing much the same thing. Every scene so far has taken place in the park, and we’ve only received a few scattered hints of what the world beyond might be like—and whatever it is, it doesn’t sound good. The escape of the hosts from the park feels like an inevitable development, and the withholding of any information about what they’ll find is obviously a deliberate choice. This makes me suspect that this season is restricting itself on purpose, to prepare us for something even stranger, and in retrospect, it will seem cautious, compared to whatever else Westworld has up its sleeve. It’s the baseline from which crazier, more unexpected moments will later arise. Or, to take a page from the composer of “The Well-Tempered Clavier,” this season is the aria, and the variations are yet to come.
I first saw Brian De Palma’s Raising Cain when I was fourteen years old. In a weird way, it amounted to a peak moment of my early adolescence: I was on a school trip to our nation’s capital, sharing a hotel room with my friends from middle school, and we were just tickled to get away with watching an R-rated movie on cable. The fact that we ended up with Raising Cain doesn’t quite compare with the kids on The Simpsons cheering at the chance to see Barton Fink, but it isn’t too far off. I think that we liked it, and while I won’t claim that we understood it, that doesn’t mean much of anything—it’s hard for me to imagine anybody, of any age, entirely understanding this movie, which includes both me and De Palma himself. A few years later, I caught it again on television, and while I can’t say I’ve thought about it much since, I never forgot it. Gradually, I began to catch up on my De Palma, going mostly by whatever movies made Pauline Kael the most ecstatic at the time, which in itself was an education in the gap between a great critic’s pet enthusiasms and what exists on the screen. (In her review of The Fury, Kael wrote: “No Hitchcock thriller was ever so intense, went so far, or had so many ‘classic’ sequences.” I love Kael, but there are at least three things wrong with that sentence.) And ultimately De Palma came to mean a lot to me, as he does to just about anyone who responds to the movies in a certain way.
When I heard about the recut version of Raising Cain—in an interview with John Lithgow on The A.V. Club, no less, in which he was promoting his somewhat different role on The Crown—I was intrigued. And its backstory is particularly interesting. Shortly before the movie was first released, De Palma moved a crucial sequence from the beginning to the middle, eliminating an extended flashback and allowing the film to play more or less chronologically. He came to regret the change, but it was too late to do anything about it. Years later, a freelance director and editor named Peet Gelderblom read about the original cut and decided to restore it, performing a judicious edit on a digital copy. He put it online, where, unbelievably, it was seen by De Palma himself, who not only loved it but asked that it be included as a special feature on the new Blu-ray release. If nothing else, it’s a reminder of the true possibilities of fan edits, which so far have mostly been used to advance competing visions of the ideal version of Star Wars. With modern software, a fan can do for a movie what Walter Murch did for Touch of Evil, restoring it to the director’s original version based on a script or a verbal description. In the case of Raising Cain, this mostly just involved rearranging the pieces in the theatrical cut, but other fans have tackled such challenges as restoring all the deleted scenes in Twin Peaks: Fire Walk With Me, and there are countless other candidates.
Yet Raising Cain might be the most instructive case study of all, because simply restoring the original opening to its intended place results in a radical transformation. It isn’t for everyone, and it’s necessary to grant De Palma his usual passes for clunky dialogue and characterization, but if you’re ready to meet it halfway, you’re rewarded with a thriller that twists back on itself like a Möbius strip. De Palma plunders his earlier movies so blatantly that it isn’t clear if he’s somehow paying loving homage to himself—bypassing Hitchcock entirely—or recycling good ideas that he feels like using again. The recut opens with a long mislead that recalls Dressed to Kill, which means that Lithgow barely even appears for the first twenty minutes. You can almost see why De Palma chickened out for the theatrical version: Lithgow’s performance as the meek Carter and his psychotic imaginary brother Cain feels too juicy to withhold. But the logic of the script was destroyed. For a film that tests an audience’s suspension of disbelief in so many other ways, it’s unclear why De Palma thought that a flashback would be too much for the viewer to handle. The theatrical release preserves all the great shock effects that are the movie’s primary reason for existing, but they don’t build to anything, and you’re left with a film that plays like a series of sketches. With the original order restored, it becomes what it was meant to be all along: a great shaggy dog story with a killer punchline.
Raising Cain is gleefully about nothing but itself, and I wouldn’t force anybody to watch it who wasn’t already interested. But the recut also serves as an excellent introduction to its director, just as the older version did for me: when I first encountered it, I doubt I’d seen anything by De Palma, except maybe The Untouchables, and Mission: Impossible was still a year away. It’s safe to say that if you like Raising Cain, you’ll like De Palma in general, and if you can’t get past its archness, campiness, and indifference to basic plausibility—well, I can hardly blame you. Watching it again, I was reminded of Blue Velvet, a far greater movie that presents the viewer with a similar test. It has the same mixture of naïveté and incredible technical virtuosity, with scenes that barely seem to have been written alternating with ones that push against the boundaries of the medium itself. You’re never quite sure if the director is in on the gag, and maybe it doesn’t matter. There isn’t much beauty in Raising Cain, and De Palma is a hackier and more mechanical director than Lynch, but both are so strongly visual that the nonsensory aspects of their films, like the obligatory scenes with the cops, seem to wither before our eyes. (It’s an approach that requires a kind of raw, intuitive trust from the cast, and as much as I enjoy what Lithgow does here, he may be too clever and resourceful an actor to really disappear into the role.) Both are rooted, crucially, in Hitchcock, who was equally obsessive, but was careful to never work from his own script. Hitchcock kept his secret self hidden, while De Palma puts it in plain sight. And if it turns out to be nothing at all, that’s probably part of the joke.
I cannot listen to Mahler’s Ninth Symphony with anything like the old melancholy mixed with the high pleasure I used to take from this music. There was a time, not long ago, when what I heard, especially in the final movement, was an open acknowledgement of death and at the same time a quiet celebration of the tranquility connected to the process. I took this music as a metaphor for reassurance, confirming my own strong hunch that the dying of every living creature, the most natural of all experiences, has to be a peaceful experience. I rely on nature. The long passages on all the strings at the end, as close as music can come to expressing silence itself, I used to hear as Mahler’s idea of leave-taking at its best. But always, I have heard this music as a solitary, private listener, thinking about death.
Now I hear it differently. I cannot listen to the last movement of the Mahler Ninth without the door-smashing intrusion of a huge new thought: death everywhere, the dying of everything, the end of humanity. The easy sadness expressed with such gentleness and delicacy by that repeated phrase on faded strings, over and over again, no longer comes to me as old, familiar news of the cycle of living and dying…If I were very young, sixteen or seventeen years old, I think I would begin, perhaps very slowly and imperceptibly, to go crazy…If I were sixteen or seventeen years old, I would not feel the cracking of my own brain, but I would know for sure that the whole world was coming unhinged. I can remember with some clarity what it was like to be sixteen. I had discovered the Brahms symphonies. I knew that there was something going on in the late Beethoven quartets that I would have to figure out, and I knew that there was plenty of time ahead for all the figuring I would ever have to do. I had never heard of Mahler. I was in no hurry. I was a college sophomore and had decided that Wallace Stevens and I possessed a comprehensive understanding of everything needed for a life…
The man on television, Sunday midday, middle-aged and solid, nice-looking chap, all the facts at his fingertips, more dependable looking than most high-school principals, is talking about civilian defense, his responsibility in Washington. It can make an enormous difference, he is saying. Instead of the outright death of eighty million American citizens in twenty minutes, he says, we can, by careful planning and practice, get that number down to only forty million, maybe even twenty…If I were sixteen or seventeen years old and had to listen to that, or read things like that, I would want to give up listening and reading. I would begin thinking up new kinds of sounds, different from any music heard before, and I would be twisting and turning to rid myself of human language.
Note: Major spoilers follow for the entire run of Westworld.
“The Adversary” is far from a bad hour of television, but it’s one of the weaker episodes of Westworld. We’re just past the halfway point of the season, which is when a show has to start focusing on its endgame, and in practice, this often means that we get an installment devoted to what showrunners call “laying pipe,” or setting up information that will pay off later on. There’s a lot of material being delivered to the viewer here, but it lacks some of the urgency of earlier installments, and on an emotional level, it’s more detached than usual. (The exception is a gorgeous silent sequence that leans heavily on an orchestral version of Radiohead’s heartbreaking “Motion Picture Soundtrack,” a musical crutch that I’ll forgive because it’s so effective.) For the most part, though, it puts advancing the mystery ahead of spending time with the characters, and when we look back at the season as a whole, I have a feeling it will turn out to have been structurally necessary. I like all the intrigue surrounding the maze, the acts of industrial espionage in the park, and the enigmatic figure of Arnold—which are beginning to look as if they’re just different aspects of the same thing. But it’s all fairly standard for a series like this, and it isn’t the reason I keep watching. Westworld has so much going on, both for good and for bad, that its mystery box aspects seem less like the main attraction than like a convenient spine. And it means that the show sometimes has to take care of a few practical matters to prepare for the big finish.
What surprised me the most about the episode, though, was the reason I found it a little less compelling than usual. It was the absence of Dolores. She’s obviously an important figure—she’s the show’s nominal lead, no less—and her journey is central to the overall arc of the season. If you’d asked me if she was my favorite character, though, I would have said that she wasn’t: I get more pleasure out of our time with Bernard. But if you take her out of an episode entirely, something interesting happens. Westworld, like Game of Thrones, is an ensemble series that spends much of its time checking in on various groups of characters, and it means that you often won’t see important players at all, or for no more than a minute or two. And it’s only in their absences that you start to figure out who is truly essential. When Bernard was offscreen for most of last week, except for a brief conversation with Elsie, I was aware that I missed him, but it didn’t detract from the rest of the story. With Dolores gone, it’s as if the engine of the show has been removed. It’s surprising, because her scenes with William and Logan haven’t exactly jumped off the screen, and her storyline is the one area where the show seems to be stalling, because it’s clearly saving her big moments for closer to the end. But Dolores’s gradual movement toward consciousness is such a crucial thread that removing it leaves the show feeling a bit like Game of Thrones at its worst: a collection of scenes without a center. We aren’t supposed to identify with Dolores, exactly, but she’s the most dynamic character in sight, and her evolution is what gives the series its narrative thrust.
This is why I’m wary of the popular fan theory, which has been exhaustively discussed online, that the show is taking place in different timelines. The gist of the argument, in case you haven’t heard it, is that the scenes involving Dolores, William, and Logan are flashbacks that are occurring more than thirty years before the rest of the show, and that William is really a younger version of the Man in Black. Its proponents bolster their case using details like the two different versions of the Westworld park logo, the changing typeface on a can of condensed milk, and the fact that we never see William or Logan interacting with any of the other human characters. There’s plenty of evidence to the contrary, but nothing that can’t be explained away in isolation as a deliberate mislead, and I don’t think the conspiracy theorists will give up until William and the Man in Black meet face to face. It’s a clever reading, and it isn’t inconsistent with what we know about the past tactics of creator Jonathan Nolan. For all I know, it may turn out to be true. It’s certainly a better surprise than most shows have managed. But I hope it isn’t what’s really happening here—and for many of the same reasons that I gave above. Dolores’s story is the heart of the series, and placing her scenes with William three decades earlier makes nonsense of the show’s central conceit: that Dolores is slowly edging her way toward greater self-awareness because she’s been growing all this time. The flashback theory implies that she was already experiencing flashes of deeper consciousness almost from the beginning, which requires us to throw out most of what we know about her so far.
This isn’t always a bad thing, and some of the most effective twists in the history of storytelling have forced the audience to radically revise what it thinks it knows about the protagonist. But I think it would be a mistake here. It has the advantage of turning William, who has been kind of a bore, into a vastly more interesting figure, but only at the cost of making Dolores considerably less interesting—a puppet of the plot, rather than a character who can drive the narrative forward in her own right. It’s possible that this may turn out to be a commentary on her lack of agency as a robot: the series might be fooling us into reading more into Dolores than we should, just like William does, which would be an inspired trick indeed. But Dolores is such a load-bearing character that I’m worried that the show would lose more than it gained by the reveal. Her story may be nothing but a bridge that can be blown to smithereens as soon as the other characters have crossed safely to the other side, as James Joyce memorably put it. But I’m skeptical. As “The Adversary” demonstrates, when you remove Dolores from the equation, you end up with a show that provides memorable moments but little in the way of an overarching shape. (The scene in which Maeve blackmails Felix and Sylvester into making her more intelligent only highlights how much more intriguing Dolores’s organic discovery of her true nature has been.) The multiple timeline theory, as described, would remove the Dolores we know from the story forever. It would be a fantastic twist. But I’m not sure the show could survive it.
Yesterday, I was leafing through my copy of The Conversations: Walter Murch and the Art of Editing Film, in which the novelist Michael Ondaatje interviews the movie editor whom Lawrence Weschler has called “the smartest person in America.” Murch, who worked on many of the films of Francis Ford Coppola and directed Return to Oz, has long been one of my heroes, and it’s worth listening to just about everything he says. (When my wife recently asked me if I could stand to hear anyone talk for four hours straight, I mentioned Murch first, followed by David Mamet and Werner Herzog.) As I was browsing through the book last night, however, I came across a line that I didn’t remember reading before:
As I’ve gone through life, I’ve found that your chances for happiness are increased if you wind up doing something that is a reflection of what you loved most when you were somewhere between nine and eleven years old.
I was very moved by this, because I’ve often thought the same thing. In the past, I’ve said that my ideal reader is myself in fifth grade—which doesn’t mean that I’m writing for kids—and that I judge my life by how closely it lives up to the hopes and expectations of that eleven year old. And although I haven’t always met that high standard, it’s still the closest thing that I have to a reliable moral compass.
Murch evidently agrees, but he also goes much further in identifying why this would be true. He continues:
At that age, you know enough of the world to have opinions of things, but you’re not old enough yet to be overly influenced by the crowd or by what other people are doing or what you think you “should” be doing. If what you do later on ties into that reservoir in some way, then you are nurturing some essential part of yourself. It’s certainly been true in my case. I’m doing now, at fifty-eight, almost exactly what most excited me when I was eleven.
And I think he’s getting at something immensely important here. The ages between nine and eleven strike me as a precious island of rationality, in its deepest and most meaningful sense. A boy of ten is a miniature adult in a lot of ways: it’s an age at which he is able to systematically follow up on his interests without much in the way of outside guidance, which may explain why the obsessions that he acquires around that time can be so lasting. For a few years, he’s thinking independently: he’s old enough to know that there’s more to the world than the toys and television shows that his schoolmates happen to like, and still young enough that he hasn’t started to feel anxious about his own preferences. In the language of biology, which obviously plays a central role here, it’s the narrow window of time in which the brain has achieved a certain structural maturity, but it hasn’t been taken over by puberty yet.
As Murch implies, it’s the choices that we make in that relatively objective life stage that reflect who we really are. A lot of complications are around the corner, which isn’t necessarily a bad thing—they’re the individual experiences that make us special, even if they assemble themselves in ways that we can’t control. I’ve noted before that I’m essentially the product of a handful of books, movies, and other media that I happened to encounter around the age of thirteen, but I don’t think I’ve ever made the connection with the more profound turning point that occurred a few years earlier. By the time I was ten, I knew that I wanted to be a writer, but for the specifics of how that would look, I had to wait until the world had given me a unique set of material. Elsewhere, I’ve described this process as a random one, but that isn’t really true: you’re exposed to dozens or hundreds of discrete influences in your early teens, and if five or six of them survive to shape who you are as an adult, that isn’t arbitrary at all. The result is such a useful source of insight about what truly matters to us that we probably should try to access those memories of ourselves more diligently. I haven’t accomplished everything I’ve tried to do, and I’ve got my share of regrets. But if I’ve been relatively happy in my work and life, it’s because I combined the goals that I set for myself at the age of ten with the pieces that stuck in my head when I was thirteen, as refined by the perspective of an adult. The closer I’ve kept to that standard, the happier I’ve been, and whenever I’ve strayed, I’ve been forcibly corrected.
The trouble, of course, is that the ages between nine and thirteen are exactly the ones that our culture tends to neglect. We’ve never been able to figure out what to do with kids in middle school, in part because they present such a wide range of development that there’s no single approach that makes sense, and perhaps because we’re still too traumatized by our own memories to look at it very closely. It’s also possible—and while I don’t want to believe this, I can’t rule it out entirely—that the neglect is intentional. Adolescence enforces conformity and undermines a lot of dreams, and I doubt many people get out of high school with their childhood ideals still intact. (If anything, it takes a conscious effort, in college and afterward, to go back and retrieve them.) But there’s an incentive for society to allow it to happen. Middle school and high school are particular kinds of hell that are designed to produce functional adults, and individual happiness isn’t a priority. At best, when we grow up, we’re allowed hobbies and side interests that appeal to who we were as children, even if our adult lives take us ever further away from those values. For most people, this isn’t a bad compromise, but it tends to separate the two halves, when we should be trying to bring them together. Our culture only becomes infantilized, paradoxically, when we no longer take our childhood selves seriously, or if we underestimate what we wanted for ourselves as grownups. And if it’s important to return to those dreams whenever we can, it’s not for the sake of the children we once were, but for the adults we could still become.
In the latest issue of The New York Times Magazine, the film critic Wesley Morris has a reflective piece titled “Last Taboo,” the subheadline of which reads: “Why Pop Culture Just Can’t Deal With Black Male Sexuality.” Morris, who is a gay black man, notes that full-frontal male nudity has become more common in recent years in movies and television, but it’s usually white men who are being undressed for the camera, which tells us a lot about the unresolved but highly charged feelings that the culture still has toward the black male body. As Morris writes:
Black men [are] desired on one hand and feared on the other…Here’s our original sin metastasized into a perverted sticking point: The white dick means nothing, while, whether out of revulsion or lust, the black dick means too much.
And although I don’t want to detract from the importance of the point that Morris is making here, I’ll admit that as I read these words, another thought ran through my mind. If the white penis means nothing, then the Asian penis, by extension, must mean—well, less than nothing. I don’t mean to equate the desexualization of Asian males in popular culture with the treatment of black men in fiction and in real life. But both seem to provide crucial data points, from opposite ends, for our understanding of the underlying phenomenon, which is how writers and other artists have historically treated the bodies of those who look different than they do.
I read Morris’s piece after seeing a tweet by the New Yorker critic Emily Nussbaum, who connected it to an awful scene in last night’s episode of Westworld, in which an otherwise likable character makes a joke about a well-endowed black robot. It’s a weirdly dissonant moment for a series that is so controlled in other respects, and it’s possible that it reflects nothing more than Jonathan Nolan’s clumsiness—which he shares with his older brother—whenever he makes a stab at humor. (I also suspect, given the show’s production delays, that the line was written and shot a long time ago, before these questions assumed a more prominent role in the cultural conversation. Which doesn’t make it any easier to figure out what the writers were thinking.) Race hasn’t played much of a role on the series so far, and it may not be fair to pass judgment on a show that has only aired five episodes and clearly has a lot of other stuff on its mind. But it’s hard not to wonder. The cast is diverse, but the guests are mostly white men, undoubtedly because, as Nussbaum notes elsewhere, they’re the natural target audience for the park’s central fantasy. And the show has a strange habit of using its Asian cast members, who are mostly just faces in the background, as verbal punching bags for the other characters, a trend so peculiar that my wife and I both noticed it separately. It’s likely that this has all been muddied by what seems to be shaping up to be an actual storyline for Felix, played by Leonardo Nam, who looks as if he’s about to respond to his casual mistreatment by rising to a larger role in the story. But even for a show with a lot of moving parts, it strikes me as a lazy way of prodding a character into action.
Over the last few months, as it happens, I’ve been thinking a lot about the representation of Asians in science fiction. (As I’ve mentioned before, I’m Eurasian—half Chinese, half Finnish and Estonian.) I may as well start with Robert A. Heinlein’s Sixth Column, a novel that he wrote on assignment for Astounding Science Fiction, based in part on All, an earlier, unpublished serial by John W. Campbell. Both stories, which were written long before Pearl Harbor, are about the invasion of the United States by a combined Chinese and Japanese empire, which inspires an underground resistance movement in the form of a fake religion. Heinlein later wrote that he tried to rework the narrative to tone down its more objectionable elements, but it pains me to say that Sixth Column actually reads as more racist than All, simply because Heinlein was the stronger writer. When you read All, you don’t feel much of anything, because Campbell was a stiff and awkward stylist. Heinlein, by contrast, spent much of his career bringing immense technical skill to even the most questionable projects, and he can’t keep from investing his characters with real rhetorical vigor as they talk about “flat-faced apes” and “our slant-eyed lords.” I don’t even mind the idea of an Asian menace, as long as the bad guys are treated as worthy antagonists, which Heinlein mostly does. But when the leaders of the resistance decide to grow beards in order to fill the invaders with “a feeling of womanly inferiority,” it’s hard to excuse it. And the most offensive moment of all involves Mitsui, the only sympathetic Asian character in sight, who sacrifices himself for the sake of his friends and is rewarded with the epitaph: “But they had no time to dwell on the end of little Mitsui’s tragic life.”
That’s the kind of racism that rankles me: not the diabolical Asian villain, who can be invested with a kind of sinister allure, as much as the legion of little Mitsuis who still populate so much of our fiction. (This may be why I’ve always sort of liked Michael Cimino’s indefensible Year of the Dragon, which at least treats John Lone’s character as a formidable, glamorous foe. It’s certainly less full of hate than The Deer Hunter.) And it complicates my reactions to other issues. When it was announced that Sulu would be unobtrusively presented as gay in Star Trek Beyond, it filled me with mixed feelings, and not just because George Takei didn’t seem to care for the idea. As much as I appreciated what the filmmakers were trying to do, I couldn’t help but think that it would have been just as innovative, if not more so, to depict Sulu as straight. I’m aware that this risks making it all seem like a zero-sum game, which it isn’t. But these points deserve to be raised, if only because they enrich the larger conversation. If a single scene on Westworld can spark a discussion of how we treat black men as sexual objects, we can do the same with the show’s treatment of Asians. The series presumably didn’t invite or expect such scrutiny, but it occupies a cultural position—as a prestige drama on a premium cable channel—in which it has no choice but to play that part. Science fiction, in particular, has always been a sandbox in which these issues can be investigated in ways that wouldn’t be possible in narratives set in the present, from the original run of Star Trek on down. Westworld belongs squarely in that tradition. And these are frontiers that it ought to explore.
In last week’s issue of The New Yorker, the critic Emily Nussbaum delivers one of the most useful takes I’ve seen so far on Westworld. She opens with many of the same points that I made after the premiere—that this is really a series about storytelling, and, in particular, about the challenges of mounting an expensive prestige drama on a premium network during the golden age of television. Nussbaum describes her own ambivalence toward the show’s treatment of women and minorities, and she concludes:
This is not to say that the show is feminist in any clear or uncontradictory way—like many series of this school, it often treats male fantasy as a default setting, something that everyone can enjoy. It’s baffling why certain demographics would ever pay to visit Westworld…The American Old West is a logical fantasy only if you’re the cowboy—or if your fantasy is to be exploited or enslaved, a desire left unexplored…So female customers get scattered like raisins into the oatmeal of male action; and, while the cast is visually polyglot, the dialogue is color-blind. The result is a layer of insoluble instability, a puzzle that the viewer has to work out for herself: Is Westworld the blinkered macho fantasy, or is that Westworld? It’s a meta-cliffhanger with its own allure, leaving us only one way to find out: stay tuned for next week’s episode.
I agree with many of her reservations, especially when it comes to race, but I think that she overlooks or omits one important point: conscious or otherwise, it’s a brilliant narrative strategy to make a work of art partially about the process of its own creation, which can add a layer of depth even to its compromises and mistakes. I’ve drawn a comparison already to Mad Men, which was a show about advertising that ended up subliminally criticizing its own tactics—how it drew viewers into complex, often bleak stories using the surface allure of its sets, costumes, and attractive cast. If you want to stick with the Nolan family, half of Chris’s movies can be read as commentaries on themselves, whether it’s his stricken identification with the Joker as the master of ceremonies in The Dark Knight or his analysis of his own tricks in The Prestige. Inception is less about the construction of dreams than it is about making movies, with characters who stand in for the director, the producer, the set designer, and the audience. And perhaps the greatest cinematic example of them all is Vertigo, in which Scottie’s treatment of Madeleine is inseparable from the use that Hitchcock makes of Kim Novak, as he did with so many other blonde leading ladies. In each case, we can enjoy the story on its own merits, but it gains added resonance when we think of it as a dramatization of what happened behind the scenes. It’s an approach that is uniquely forgiving of flawed masterpieces, which comment on themselves better than any critic can, until we wonder about the extent to which they’re aware of their own limitations.
And this kind of thing works best when it isn’t too literal. Movies about filmmaking are often disappointing, either because they’re too close to their subject for the allegory to resonate or because the movie within the movie seems clumsy compared to the subtlety of the larger film. It’s why Being John Malkovich is so much more beguiling a statement than the more obvious Adaptation. In television, the most unfortunate recent example is UnREAL. You’d expect that a show that was so smart about the making of a reality series would begin to refer intriguingly to itself, and it did, but not in a good way. Its second season was a disappointment, evidently because of the same factors that beset its fictional show Everlasting: interference from the network, conceptual confusion, tensions between producers on the set. It seemed strange that UnREAL, of all shows, could display such a lack of insight into its own problems, but maybe it isn’t so surprising. A good analogy needs to hold us at arm’s length, both to grant some perspective and to allow for surprising discoveries in the gaps. The ballet company in The Red Shoes and the New York Inquirer in Citizen Kane are surrogates for the movie studio, and both films become even more interesting when you realize how much the lead character is a portrait of the director. Sometimes it’s unclear how much of this is intentional, but this doesn’t hurt. So much of any work of art is out of your control that you need to find an approach that automatically converts your liabilities into assets, and you can start by conceiving a premise that encourages the viewer or reader to play along at home.
Which brings us back to Westworld. In her critique, Nussbaum writes: “Westworld [is] a come-hither drama that introduces itself as a science-fiction thriller about cyborgs who become self-aware, then reveals its true identity as what happens when an HBO drama struggles to do the same.” She implies that this is a bug, but it’s really a feature. Westworld wouldn’t be nearly as interesting if it weren’t being produced with this cast, on this network, and on this scale. We’re supposed to be impressed by the time and money that have gone into the park—they’ve spared no expense, as John Hammond might say—but it isn’t all that different from the resources that go into a big-budget drama like this. In the most recent episode, “Dissonance Theory,” the show invokes the image of the maze, as we might expect from a series by a Nolan brother: get to the center of the labyrinth, it says, and you’ve won. But it’s more like what Douglas R. Hofstadter describes in I Am a Strange Loop:
What I mean by “strange loop” is—here goes a first stab, anyway—not a physical circuit but an abstract loop in which, in the series of stages that constitute the cycling-around, there is a shift from one level of abstraction (or structure) to another, which feels like an upwards movement in a hierarchy, and yet somehow the successive “upward” shifts turn out to give rise to a closed cycle. That is, despite one’s sense of departing ever further from one’s origin, one winds up, to one’s shock, exactly where one had started out.
This neatly describes both the park and the series. And it’s only through such strange loops, as Hofstadter has long argued, that any complex system—whether it’s the human brain, a robot, or a television show—can hope to achieve full consciousness.
At last night’s presidential debate, when moderator Chris Wallace asked if he would accept the outcome of the election, Donald Trump replied: “I’ll keep you in suspense, okay?” It was an extraordinary moment that immediately dominated the headlines, and not just because it was an unprecedented repudiation of a crucial cornerstone of the democratic process. Trump’s statement—it seems inaccurate to call it a “gaffe,” since it clearly reflects his actual views—was perhaps the most damaging remark anyone could have made in that setting, and it reveals a curious degree of indifference, or incompetence, in a candidate who has long taken pride in his understanding of the media. It was a short, unforgettable sound bite that could instantly be put to members of both parties for comment. And it wasn’t an arcane matter of policy or an irrelevant personal issue, but an instantly graspable attack on assumptions shared by every democratically elected official in America, and presumably by the vast majority of voters. Even if Trump had won the rest of the debate, which he didn’t, those six words would have erased whatever gains he might have made. Not only was it politically and philosophically indefensible, but it was a ludicrous tactical mistake, an unforced error in response to a question that he and his advisors knew was going to be asked. As Julia Azari put it during the live chat on FiveThirtyEight: “The American presidency is not the latest Tana French novel—leaders can’t keep the people in suspense.”
But the phrase that he used tells us a lot about Trump. I’m speaking as someone who has devoted my fair share of thought to suspense itself: I’ve written a trilogy of thrillers and blogged here about the topic at length. When I think about the subject, I often start with what John Updike wrote in a review of Nabokov’s Glory, which is that it “never really awakens to its condition as a novel, its obligation to generate suspense.” What Updike meant is that stories are supposed to make us wonder about what’s going to happen next, and it’s that state of pleasurable anticipation that keeps us reading. It can be an end in itself, but it can also be a literary tool for sustaining the reader’s interest while the writer tackles other goals. As Kurt Vonnegut once said of plot, it isn’t necessarily an accurate representation of life, but a way to keep readers turning pages. Over time, the techniques of suspense have developed to the point where you can simulate it using purely mechanical tricks. If you watch enough reality television, you start to notice how the grammar of the editing repeats itself, whether you’re talking about Top Chef or Project Runway or Jim Henson’s Creature Shop. The delay before the judges deliver their decision, the closeups of the faces of the contestants, the way in which an editor pads out the moment by inserting cutaways between every word that Padma Lakshmi says—these are all practical tools that can give a routine stretch of footage the weight of the verdict in the O.J. Simpson trial. You can rely on them when you can’t rely on the events of the show itself.
And the best trick of all is to have a host who keeps things moving whenever the contestants or guests start to drag. That’s where someone like Trump comes in. He’s an embarrassment, but he’s far from untalented, at least within the narrow range of competence in which he used to operate. When I spent a season watching The Celebrity Apprentice—my friend’s older sister was on it—I was struck by how little Trump had to do: he was only onscreen for a few minutes in each episode. But he was good at his job, and he was also the obedient instrument of his producers. He has approached the campaign with the same mindset, but with few of the resources that are at an actual reality show’s disposal. Trump’s strategy has been built around the idea that he doesn’t need to spend money on advertising or a ground game, as long as the media provides him with free coverage. It’s an interesting experiment, but there’s a limit to how effective it can be. In practice, Trump is less like the producer or the host than a contestant, which reduces him to acting like a reality star who wants to maximize his screen time: say alarming things, pick fights, act unpredictably, and generate the footage that the show needs, while never realizing that the incentives of the contestants and producers are fundamentally misaligned. (He should have just watched the first season of UnREAL.) When he says that he’ll keep us in suspense about accepting the results of the election, he’s just following the reality show playbook, which is to milk such climactic moments for all they’re worth.
Yet this approach has backfired, and television provides us with some important clues as to why. I once believed that the best analogy to Trump’s campaign was the rake gag made famous by The Simpsons. As producer Al Jean described it: “Sam Simon had a theory that if you repeat a joke too many times, it stops being funny, but if you keep on repeating it, it might get really funny.” Trump performed a rake gag in public for months. First we were offended when he made fun of John McCain’s military service; then he said so many offensive things that we became numb to it; and then it passed a tipping point, and we got really offended. I still think that’s true. But there’s an even better analogy from television, which is the practice of keeping the audience awake by killing off major characters without warning. As I’ve said here before, it’s a narrative trick that used to seem daring, but now it’s a form of laziness: it’s easier to deliver shocking death scenes than to tell interesting stories about the characters who are still alive. In Trump’s case, the victims are ideas, or key constituents of the electorate: minorities, immigrants, women. When Trump turned on Paul Ryan, it was the equivalent of one of those moments, like the Red Wedding on Game of Thrones, when you’re supposed to gasp and realize that nobody is safe. His attack on a basic principle of democracy might seem like more of the same, but there’s a difference. The strategy might work for a few seasons, but there comes a point at which the show cuts itself too deeply, and there aren’t any characters left that we care about. This is where Trump is now. And by telling us that he’s going to keep us in suspense, he may have just made the ending a lot less suspenseful.
Note: Spoilers follow for the Westworld episode “The Stray.”
There’s a clever moment in the third episode of Westworld when Teddy, the clean-cut gunslinger played by James Marsden, is finally given a backstory. Teddy has spoken vaguely of a guilty secret in his past, but when he’s pressed for the details, he doesn’t elaborate. That’s the mark of a good hero. As William Goldman points out in his wonderful book Which Lie Did I Tell?, protagonists need to have mystery, and when you give them a sob story, here’s what happens:
They make [him] a wimp. They make him a loser. He’s just another whiny asshole who went to pieces when the gods pissed on him. “Oh, you cannot know the depth of my pain” is what that seems to be saying to the audience. Well, if I’m in that audience, what I think is this: Fuck you. I know people who are dying of cancer, I know people who are close to vegetables, and guess what—they play it as it lays.
Of course, we know that Teddy is really an android, and if he doesn’t talk about his past, it’s for good reason: as Dr. Ford, his creator, gently explains, the writers never bothered to give him one. With a few commands on a touchscreen, a complete backstory is uploaded into his system, and Teddy sets off on a doomed quest in pursuit of his old enemy, Wyatt, against whom he has sworn undying revenge. We don’t know how this plot thread ties into the rest of Dr. Ford’s plan, but we can only assume that it’s going somewhere—and it’s lucky for him that he had a convenient hero available to fill that role.
There are several levels of sly commentary here. When you’re writing a television show—or a series of novels—you want to avoid filling in anybody’s backstory for as long as possible. Part of the reason, as Goldman notes above, is to maintain a sense of mystery, and for the sake of narrative momentum, it makes sense to avoid dwelling on what happened before the story began. But it’s also a good idea to keep this information in your back pocket for when you really need it. If you know how to deploy it strategically, backstory can be very useful, and it can get you out of trouble or provide a targeted nudge when you need to push the plot in a particular direction. If you’re too explicit about it too soon, you narrow your range of options. (You also make it harder for viewers to project their own notions onto the characters, which is what Westworld, the theme park, is all about.) I almost wish that Westworld had saved this moment with Teddy for later in the show’s run, which would underline its narrative point. We’re only a third of the way through the first season, but within the world of the show itself, the park has been running for decades with the same generic storylines. Dr. Ford has a few ideas about how to shake things up, and Teddy is a handy blank slate. Television showrunners make that sort of judgment call all the time. In the internal logic of the park, this isn’t the first season, but more like its fifth or sixth, when a scripted drama tends to go off the rails, and the accumulation of years of backstory starts to feel like a burden.
“The Stray,” in fact, is essentially about backstory, on the level both of the park and of the humans who are running it. Shortly after filling in the details of Teddy’s past, Dr. Ford does exactly the same thing for himself: he delivers a long, not entirely convincing monologue about a mysterious business partner, Arnold, who died in the park and was later removed from its corporate history. At the end of the speech, he looks at Bernard, his head of programming, and tells him that he knows how much his son’s death still haunts him. It’s a little on the nose, but I think it’s supposed to be. It makes us wonder if Bernard might unknowingly be a robot himself, a la Blade Runner, and whether his flashbacks of his son are just as artificial as Teddy’s memories of Wyatt. I hope that this isn’t the big twist, if only because it seems too obvious, but in a way, it doesn’t really matter. Bernard may or may not be a robot, but there’s no question that Bernard, Dr. Ford, and all the other humans in sight are characters on a show called Westworld, and whatever backstories they’ve been given by Jonathan Nolan and Lisa Joy are as calculated as the ones that the androids have received. Even if Bernard’s memories are “real,” we’re being shown them for a reason. (It helps that Dr. Ford and Bernard are played by Anthony Hopkins and Jeffrey Wright, two actors who are good at giving technically exquisite performances that draw subtle attention to their own artifice. Wright’s trademark whisper—he’s like a man of great passion who refuses to raise his voice—draws the viewer into a conspiracy with the actor, as if he’s letting us in on a secret.)
The trouble with this reading, of course, is that it allows us to excuse instances of narrative sloppiness under the assumption that the series is deliberately commenting on itself. I’m willing to see Dr. Ford’s speech about Arnold as a winking nod to the tendency of television shows to dispense backstory in big infodumps, but I’m less sure about the moment in which he berates a lab technician for covering up a robot’s naked body and slashes at the android’s face. It doesn’t seem like the Dr. Ford of the pilot, talking nostalgically to Old Bill in storage, and while we’re presumably supposed to see him as a man of contradictions, it feels more like a juxtaposition of two character beats that weren’t meant to be so close together. (I have a hunch that it also reflects Hopkins’s availability: the show seems to have him for about two scenes per episode, which means that it has to do in five minutes what might have been better done in ten.) Westworld, as you might expect from a show from one of the Nolan brothers, has more ideas than it knows how to handle: it hurries past a reference to Julian Jaynes’s The Origin of Consciousness in the Breakdown of the Bicameral Mind so quickly that it’s as if the writers just want to let us know that they’ve read the book. But I still have faith in this show’s potential. When Teddy is ignominiously killed yet again by Wyatt’s henchmen, it forces Dolores to face the familiar attackers in her own storyline by herself—an ingenious way of getting her to where she needs to be, but also a reminder, I think, of how the choices that a storyteller makes in one place can have unexpected consequences somewhere else. It’s a risk that all writers take. And Westworld is playing the same tricky game as the characters whose stories it tells.
As we were watching the premiere of Westworld last week, my wife turned to me and said: “Why would they make it a western park?” Or maybe I asked her—I can’t quite remember. But it’s a more interesting question than it sounds. When Michael Crichton’s original movie was released in the early seventies, the western was still a viable genre. It had clearly fallen from its peak, but major stars were doing important work in cowboy boots: Eastwood, of course, but also Newman, Redford, and Hoffman. John Wayne was still alive, which may have been the single most meaningful factor of all. As a result, it wasn’t hard to imagine a theme park with androids designed to fulfill that particular fantasy. These days, the situation has changed. The western is so beleaguered an art form that whenever one succeeds, it’s treated as newsworthy, and that’s been true for the last twenty years. Given the staggering expense and investment involved in a park like this, it’s hard to see why the western would be anybody’s first choice. (Even with the movie, I suspect that Crichton’s awareness of his relatively low budget was part of the decision: it was his first film as a director, with all of the limitations that implies, and a western could be shot cheaply on standing sets in the studio backlot.) Our daydreams simply run along different lines, and it’s easier to imagine a park being, say, set in a medieval fantasy era, or in the future, or with dinosaurs. In fact, there was even a sequel, Futureworld, that explored some of these possibilities, although it’s fair to say that nobody remembers it.
The television series Westworld, which is arriving in a markedly different pop cultural landscape, can’t exactly ditch the premise—it’s right there in the title. But the nice thing about the second episode, “Chestnut,” is that it goes a long way toward explaining why you’d still want to structure an experience like this around those conventions. It does this mostly by focusing on a new character, William, who arrives at the park knowing implausibly little about it, but who allows us to see it through the eyes of someone encountering it for the first time. What he’s told, basically, is that the appeal of Westworld is that it allows you to find out who you really are: you’re limited only by your inhibitions, your abilities, and your sense of right and wrong. That’s true of the real world, to some extent, but we’re also more conscious of the rules. And if the western refuses to go away as a genre, it’s because it’s the purest distillation of that seductive sense of lawlessness. The trouble with telling certain stories in the present day is that there isn’t room for the protagonist that thrillers have taught us to expect: a self-driven hero who solves his problems for himself in matters of life and death. That isn’t how most of us respond to a crisis, and in order to address the issue of why the main character doesn’t just go to the police, writers are forced to fall back on various makeshift solutions. You can focus on liminal figures, like cops or criminals, who can take justice into their own hands; you can establish an elaborate reason why the authorities are helpless, indifferent, or hostile; or you can set your story in a time or place where the rules are different or nonexistent.
The western, in theory, is an ideal setting for a story in which the hero has to rely on himself. It’s a genre made up of limitless open spaces, nonexistent government, unreliable law enforcement, and a hostile native population. If there’s too much civilization for your story to work, your characters can just keep riding. To move west, or to leave the center of the theme park, is to move back in time, increasing the extent to which you’re defined by your own agency. (A western, revealingly, is a celebration of the qualities that we tend to ignore or dismiss in our contemporary immigrant population: the desire for a new life, the ability to overcome insurmountable obstacles, and the plain observation that those who uproot themselves and start from scratch are likely to be more competent and imaginative, on average, than those who remain behind.) The western is the best narrative sandbox ever invented, and if it ultimately exhausted itself, it was for reasons that were inseparable from its initial success. Its basic components were limited: there were only so many ways that you could combine those pieces. Telling escapist stories involved overlooking inconvenient truths about Native Americans, women, and minorities, and the tension between the myth and its reality eventually became too strong to sustain. Most of all, its core parts were taken over by other genres, and in particular by science fiction and fantasy. This began as an accidental discovery of pulp western writers who switched genres and realized that their tricks worked equally well in Astounding, and it was only confirmed by Star Trek—which Gene Roddenberry famously pitched as Wagon Train in space—and Star Wars, which absorbed those clichés so completely that they became new again.
What I like about Westworld, the series, is that it reminds us of how artificial this narrative always was, even in its original form. The Old West symbolizes freedom, but only if you envision yourself in the role of the stock protagonist, who is usually a white male antihero making the journey of his own volition. It falls apart when you try to imagine the lives of the people in the background, who exist in such stories solely to enable the protagonist’s fragile range of options. In reality, the frontier brutally circumscribed the lives of most of those who tried to carve out an existence there, and the whole western genre is enabled by a narrative illusion, or a conspiracy, that keeps its solitary and brutish aspects safely in the hands of the characters at the edges of the frame. Westworld takes that notion to its limit, by casting all the supporting roles with literal automatons. They aren’t meant to have inner lives, any more than the peripheral figures in any conventional western, and the gradual emergence of their consciousness implies that the park will eventually come to deconstruct itself. (The premiere quoted cleverly from The Searchers and Unforgiven, but I almost wish that it had saved those references until later, so that the series could unfold as a miniature history of the genre as it slowly attained self-awareness.) If you want to talk about how we picture ourselves in the heroes of our own stories, while minimizing or reducing the lives of those at the margins, it’s hard to imagine a better place to do it than the western, which depended on a process of historical amnesia and dehumanization from the very beginning. I’m not sure I’d want to visit a park like Westworld. But there will always be those who would.
One of the greatest compliments that we can pay to any story is that it seems shorter than it actually is. It’s obviously best for a narrative to be only as long as it has to be, and no more, which means that the creator needs to be willing to cut wherever necessary. (Sometimes it’s even better if these time or length limits are imposed from the outside. I’ve always maintained that Blue Velvet, my favorite American movie ever, was tremendously improved by a contractual stipulation that forced David Lynch and editor Duwayne Dunham to cut it from three hours down to two. And as much as I’m enjoying the streaming renaissance on Netflix, I sometimes wish that the episodes of these shows were shorter: without a fixed time slot, there’s no incentive to trim any given installment, and a literal hour of television tends to drag toward the end.) But it’s nice when a movie, in particular, grips us so completely that we don’t realize how long we’ve been watching it. I still remember being so absorbed by Michael Mann’s The Insider that I was startled to realize, when I checked my watch after the screening, that it was two and a half hours long: I would have guessed that it was closer to ninety minutes. And you only need to compare the experience of watching the original cut of Seven Samurai with, say, four episodes of the second season of True Detective to realize that three and a half hours can be something very different in subjective and objective time.
But there’s another storytelling trick that deserves just as much attention, which is the ability to make a short work of art seem longer. I’m not talking about the way in which even twenty minutes of a bad sitcom can seem interminable, but of how a story can somehow persuade us that we’ve lived through a longer and more meaningful experience than seems possible to encompass within a limited timeframe. On some level, this is an illusion that you encounter in most narratives of any kind: with the exception of the rare works designed to unfold in real time, we’re asked to believe that the relatively short period that it takes to physically view or read the story really covers days, weeks, or months of action, and occasionally much longer. Many biopics, for instance, ask us to go through an entire lifetime in a couple of hours, and the fact that the result is usually so unsatisfying only indicates how hard it is to pull this off. But it has a greater chance of succeeding when it uses our perceptions of time to convince us, in a pleasurable way, that we’ve seen and felt more than could be packed into a single sitting. We could start with Citizen Kane, which is exactly a minute short of two hours long—which, like Blue Velvet, probably reflects an attempt to meet a contractually mandated length. Yet more than any other movie, it feels like a full picture of a man’s life, and the fact that it asks us to assemble Kane’s story from the fragments of other people’s memories offers a very important clue as to how this kind of thing works.
Because one of the best ways to create a subjective impression of length is through contrasts: the alternation of big and little, loud and soft, fast and slow. I got to thinking about this while listening to “Yorktown (The World Turned Upside Down),” which is one of the two or three best songs in Hamilton. It’s as epic a number as you could imagine, and it leaves you feeling as if you’ve lived through an unforgettable experience, but it lasts just four minutes. In his notes in Hamilton: The Revolution, Lin-Manuel Miranda explains how it works:
Part of the inspiration for the structure of “Yorktown” is what I call the “Busta Rhymes soft-loud-soft” technique. On countless songs, Busta will give you the smoothest, quietest delivery and then full-on scream the next verse. It makes for a delightful tension and release, and it’s entirely vocal. Same here. “I have everything I wanted but I can’t die today / We’re going into battle / Here’s what my friends are doing / Hercules Mulligan!” Thank you and God bless you, Busta Rhymes.
It isn’t hard to see why this kind of alternation creates an impression of length, in much the same way that we find with the experiments with chronology in Kane. With every transition, the listener has to readjust, and the mental effort of these regroupings draws out our perception of time passing. The switching costs of moving from one moment to the next allow the story to do with a juxtaposition what would otherwise require a pause. As the old proverb says, a change is as good as a rest.
And this phenomenon emerges from something fundamental in how our brains are wired. As the neurologist David Eagleman says about the perception of time in everyday life:
When our brains receive new information, it doesn’t necessarily come in the proper order. This information needs to be reorganized and presented to us in a form we understand. When familiar information is processed, this doesn’t take much time at all. New information, however, is a bit slower and makes time feel elongated.
In other words, it takes a while for the brain to process new information, leading to a subjective impression of extended time. It’s why travel or a change of scenery can make our lives seem to slow down, and why we’re advised to use surprise or variety to keep the days from turning into a blur. The real challenge for artists is to combine different kinds of time within the same narrative. A movie or book that consists of nothing but action will quickly become boring, and so will a string of talky interior scenes. If you can speed it up and slow it down in the right proportions, the result, at its finest, will make you feel as if you’ve lived a rich, fulfilling life over the course of two hours. Hamilton does this beautifully. So does Kane—and you could even argue that the best reason to use a nonlinear narrative, rather than as a gimmick, is the ability it presents to treat time as a tool. You’re not just painting a picture; you’re asking the audience to assemble a puzzle. And it helps to use different kinds of pieces.
As hard as it is to believe these days, I spent most of my early twenties working at a hedge fund in New York. I got there by a process that was circuitous even by my standards: I’d moved to the city after college, hoping to land a job at a newspaper or magazine while writing fiction on the side, but my prospects weren’t great, and I was nearly at the end of the savings that I’d set aside to get me through the summer. When I was invited to interview at a financial firm that actively recruited Ivy League graduates with good grades and no previous experience, I set the letter aside, and I didn’t pick it up again until my other avenues had dried up. But when I decided to give it a shot, I took it seriously. I checked out a guide to hedge funds from the local library in Queens, along with a book on interview questions along the lines of “How many gas stations are in the United States?” It also seemed like a good idea to pick up a recent book on finance, in case my interviewer asked what I’d been reading on the subject. After browsing at the Strand Bookstore, I picked up a promising title that I’d seen mentioned elsewhere, and I read the whole thing in about an hour. I did one interview over the phone, and I did well enough that they asked me to come by the office in person. In the end, I got the job, and it turned out to be the right choice: I learned a lot, saved some money, and made friends who have had an incalculable impact on my life. That’s a story for another time. But I’m lucky that nobody asked me what I’d been reading—and if they had, I’m not sure they would have hired me. Because the book I chose was Robert Kiyosaki’s Rich Dad, Poor Dad.
Even now, almost fifteen years later, it embarrasses me to type this. Kiyosaki has more or less disappeared from the national consciousness, and he’s remembered now, if at all, as a relic of the peculiar financial bubble of the early twenty-first century, just after the tech bust and shortly before the subprime crisis. His books consist of about a paragraph of actual advice—on the level of a personal finance article in Parade magazine—padded out to a couple of hundred pages with platitudes, misleading examples, and sales pitches for other items in his product line. The autobiographical narrative that he provides in Rich Dad, Poor Dad is blatantly fictionalized. (For a more thorough review of Kiyosaki’s evasions, fabrications, and bad ideas, I urge you to check out the comprehensive takedown by real estate guru John T. Reed, which is more than a decade old, but remains one of my favorite things on the Internet.) But the key point about Kiyosaki is that he’s a branding expert masquerading as a real estate and investing authority. His wealth didn’t come from buying, selling, and managing properties, but from hawking his books through organizations like Amway. When he’s pressed for specifics, Kiyosaki, who spends most of his life promoting his own success, suddenly turns coy, and refuses to provide any details on his holdings. He once claimed that his net worth fluctuated between $50 and $100 million, “depending on the day.” And if any of this sounds familiar, you shouldn’t be surprised: Kiyosaki later partnered with Donald Trump on the books Why We Want You to Be Rich and Midas Touch, most of which were devoted to steering readers to network marketing companies. They feel, frankly, like artifacts of a more innocent time.
But what interests me the most now are the reasons why I bought a copy of Kiyosaki’s book in the first place. First, I didn’t know any better. Second, it had been positioned by many reviewers at the time as a legitimate book on personal finance. Both points, I think, are illuminating. I was a smart kid, and I’d graduated with honors from a good college, but I didn’t know the first thing about finance or investing. Over the next few years, I learned a lot, but I still remember how little I understood when I started, and how dependent I was on outside sources, many of them actively misleading, to point me in the right direction. I wasn’t alone, either. Many of my friends in their twenties were freaking out over how unprepared they were to manage their own money. The language of finance seemed too daunting to master, and there was a palpable sense that we were all faking our way into adulthood. At cocktail parties, whenever I had to explain what I did for a living, I’d ask: “Well, do you know what a mutual fund is?” If the answer was yes, I would go on to explain how a hedge fund was different—but the answer was usually no. And I don’t blame anyone for this. There was good advice to be had: I became a regular on the Bogleheads forum, which is still where I’d advise an aspiring investor to poke around first. But you had to seek it out, at a time when a huckster like Kiyosaki was receiving respectful press as long as his books were selling. It was easier to write stories about his run on the bestseller list than to honestly interrogate the statements he was making. People bought Rich Dad, Poor Dad because they heard that other people were buying it, and it’s what finally gave Kiyosaki the wealth that he claimed to have earned. The snake ate its own tail.
And that was the most insidious phenomenon of all. I’ve been thinking about Kiyosaki a lot recently, and not just because the Republican presidential nominee is a self-help financial guru with an unreliable memoir of his own. If Trump and Kiyosaki were drawn to each other, it’s because they were kindred spirits. Like Kiyosaki, Trump appears to have made most of his current wealth from brand extension, licensing, and his work as a television personality on The Apprentice, and he benefited from indulgent media coverage that treated him for years as a property developer rather than as an entertainer. Trump is uncannily adept at promising the world to his followers while refusing to provide any specifics about how his goals could be achieved, which is a skill that he honed as a financial guru: you always tease the reader by hinting at the answers that will be revealed in the next book, class, or seminar. And he benefits, above all, from the same lack of basic knowledge—and the hunger for guidance of any kind—that led an intelligent college graduate, on the verge of applying for a position at a global hedge fund, to turn to Kiyosaki as a source of advice. If it weren’t for that fundamental confusion about how economic value is created, Trump wouldn’t be able to sell himself as someone with the business expertise to run the country, or as someone who “brilliantly” used almost a billion dollars of losses in a single year to avoid paying federal income taxes for two decades. And I can’t fault people for wanting to believe him, any more than I can blame them for buying into the seductive, empty pitch that Kiyosaki peddled for years. Because whenever I feel tempted to condescend to Trump’s supporters, I remind myself that I once fell for it, too.
Note: Spoilers follow for the series premiere of Westworld.
Producing a television series, as I’ve often said here before, is perhaps the greatest test imaginable of the amount of control that a storyteller can impose on any work of art. You may have a narrative arc in mind that works beautifully over five seasons, but before you even begin, you know that you’ll have to change the plan to deal with the unexpected: the departure of a star, budgetary limitations, negotiations with the network. Hanging overhead at all times is the specter of cancellation, which means that you don’t know if your story will be told over an hour, one season, or many years. You may not even be sure what your audience really wants. Maybe you’ve devoted a lot of thought to creating nuanced, complicated characters, only to realize that most viewers are tuning in for sex, violence, and sudden death scenes. It might even be to your advantage to make the story less realistic, keeping it all safely escapist to avoid raising uncomfortable questions. If you’re going to be a four-quadrant hit, you can’t appeal to just one demographic, so you’ve got to target some combination of teenagers and adults of both sexes. This doesn’t even include the critics, who are likely to nitpick the outcome no matter what. All you can really do, in the end, is set the machine going, adjust it as necessary on the fly, try to keep the big picture in mind, and remain open to the possibility that your creation will surprise you—which are conditions that the best shows create on purpose. But it doesn’t always go as it should, and successes and failures alike tend to wreak havoc with the plans of their creators. Television, you might say, finds a way.
The wonderful thing about Westworld, which might have the best pilot for any show since Mad Men, is that it delivers exceptional entertainment while also functioning as an allegory that you can read in any number of ways. Michael Crichton’s original movie, which I haven’t seen, was pitched as a commentary on the artificially cultivated experience offered to us by parks like Disney World, an idea that he later revisited with far more lucrative results. Four decades later, the immersive, open world experience that Westworld evokes is more likely to remind us of certain video games, which serve as a sandbox in which we can indulge in our best or worst impulses with maximum freedom of movement. (The character played by Ed Harris is like a player who has explored the game so thoroughly that he’s more interested now in looking for exploits or glitches in the code.) Its central premise—a theme park full of androids that are gradually attaining sentience—suggests plenty of other parallels, and I’m sure the series will investigate most of them eventually. But I’m frankly most inclined to see it as a show about the act of making television itself. Series creators Jonathan Nolan and Lisa Joy have evidently mapped out a narrative for something like the next five or six seasons, which feels like an attempt to reassure viewers frustrated by the way in which serialized, mythology-driven shows tend to peter out toward the end, or to endlessly tease mysteries without ever delivering satisfying answers. But I wonder if Nolan and Joy also see themselves in Dr. Ford, played here with unusual restraint and cleverness by Anthony Hopkins, who looks at his own creations and muses about how little control he really has over the result.
It’s always dangerous to predict a show’s future from the pilot alone, and I haven’t seen the other episodes that were sent to critics for review. Westworld’s premise is also designed to make you even more wary than usual about trying to forecast a system as complicated as an ambitious cable series, especially one produced by J.J. Abrams. (There are references to the vagaries of television production in the pilot itself, much of which revolves around a technical problem that forces the park’s head writer to rewrite scenes overnight, cranking up the body count in hopes that guests won’t notice the gaps in the narrative. And one of its most chilling moments comes down to the decision to recast a key supporting role with a more cooperative performer.) After the premiere, which we both loved, my wife worried that we’ll just get disillusioned by the show over time, as we did with Game of Thrones. It’s always possible, and the number of shows over the last decade that have sustained a high level of excellence from first episode to last basically starts and ends with Mad Men—which, interestingly, was also a show about writing, and the way in which difficult concepts have to be sold and marketed to a large popular audience. But I have high hopes. The underlying trouble with Game of Thrones was a structural one: one season after another felt like it was marking time in its middle stretches, cutting aimlessly between subplots and relying on showy moments of violence to keep the audience awake, and many of its issues arose from a perceived need to keep from getting ahead of the books. It became a show that only knew how to stall and shock, and I would have been a lot more forgiving of its sexual politics if I had enjoyed the rest of it, or if I believed that the showrunners were building to something worthwhile.
I have more confidence in Westworld, in part because the pilot is such a confident piece of storytelling, but also because the writers aren’t as shackled by the source. And I feel almost grateful for the prospect of fully exploring this world over multiple seasons with this cast and these writers. Jonathan Nolan, in particular, has been overshadowed at times by his brother Christopher, who would overshadow anyone, but his résumé as a writer is just as impressive: the story for Memento, the scripts for The Dark Knight and The Dark Knight Rises, and that’s just on the movie side. (I haven’t seen Person of Interest, but I’ve heard it described as the best science fiction show on television, camouflaged in plain sight as a procedural.) Nolan has always tended to cram more ideas into one screenplay than a movie can comfortably hold, which is a big part of his appeal: The Dark Knight is so overflowing with invention that it only underlines the limpness of the storytelling in most of the Marvel movies. What excites me about Westworld is the opportunity it presents for Nolan to allow the story to breathe, going down interesting byways and exploring its implications at length. And the signs so far are very promising. The plot is a model of story construction, to the point where I’d use it as an example in a writing class: it introduces its world, springs a few big surprises, tells us something about a dozen characters, and ends on an image that is both inevitable and deliciously unexpected. Even its references to other movies are more interesting than most. A visual tribute to The Searchers seems predictable at first, but when the show repeats it, it becomes a wry commentary on how an homage can take the place of real understanding. And a recurring bit with a pesky fly feels like a nod to Psycho, which implicated the audience in similar ways. As Mrs. Bates says to us in one of her last lines: “I hope they are watching. They’ll see.”
Curtis Hanson, who died earlier this week, directed one movie that I expect to revisit endlessly for the rest of my life, and a bunch of others that I’m not sure I’ll ever watch again. Yet it’s those other films, rather than his one undisputed masterpiece, that fascinate me the most. L.A. Confidential—which I think is one of the three or four best movies made in my lifetime—would be enough to secure any director’s legacy, and you couldn’t have blamed Hanson for trying to follow up that great success with more of the same. Instead, he delivered a series of quirky, shaggy stories that followed no discernible pattern, aside from an apparent determination to strike out in a new direction every time: Wonder Boys, 8 Mile, In Her Shoes, Lucky You, Too Big to Fail, and Chasing Mavericks. I’ve seen them all, except for the last, which Hanson had to quit halfway through after his health problems made it impossible for him to continue. I’ve liked every single one of them, even Lucky You, which made about as minimal an impression on the world as any recent film from a major director. And what I admire the most about the back half of Hanson’s career is its insistence that a filmmaker’s choice of projects can form a kind of parallel narrative, unfolding invisibly in the silences and blank spaces between the movies themselves.
There comes a point in the life of every director, in fact, when each new film is freighted with a significance that wasn’t there in the early days. Watching Bridge of Spies recently, I felt heavy with the knowledge that Spielberg won’t be around forever. We don’t know how many more movies he’ll make, but it’s probably more than five and fewer than ten. As a result, there’s a visible opportunity cost attached to each one, and a year of Spielberg’s time feels more precious now than it did in the eighties. This sort of pressure becomes even more perceptible after a director has experienced a definitive triumph in the genre for which he or she is best known. After Goodfellas, Martin Scorsese seemed anxious to explore new kinds of narrative, and the result—the string of movies that included The Age of Innocence, Kundun, Bringing Out the Dead, and Hugo—was sometimes mixed in quality, but endlessly intriguing in its implications. Years ago, David Thomson wrote of Scorsese: “His search for new subjects is absorbing and important.” You could say much the same of Ridley Scott, Clint Eastwood, or any number of other aging, prolific directors with the commercial clout to pick their own material. In another thirty years or so, I expect that we’ll be saying much the same thing about David Fincher and Christopher Nolan. (If a director is less productive and more deliberate, his unfinished projects can end up carrying more mythic weight than most movies that actually get made, as we’re still seeing with Stanley Kubrick.)
Hanson’s example is a peculiar one because his choices were the subject of intense curiosity, at least from me, at a much earlier stage than usual. This is in part because L.A. Confidential is a movie of such clarity, confidence, and technical ability that it seemed to herald a director who could do just about anything. In a way, it did—but not in a manner that anyone could have anticipated. Hanson’s subsequent choices could come off as eccentric, and not after the fashion of Steven Soderbergh, who settled into a pattern of one for himself, one for the masses. The movies after Wonder Boys are the work of a man who was eager to reach a large popular audience, but not in the sense his fans were expecting, and with a writerly, almost novelistic approach that frustrated any attempt to pin him down to a particular brand. It’s likely that this was also a reflection of how hard it is to make a modestly budgeted movie for grownups, and Hanson’s filmography may have been shaped mostly by what projects he was able to finance. (This also accounts for the confusing career of his collaborator Brian Helgeland, who drifted after L.A. Confidential in ways that make Hanson seem obsessively focused.) His IMDb page was littered with the remains of ideas, like an abortive adaptation of The Crimson Petal and the White, that he was never able to get off the ground. His greatest accomplishment, I suspect, was to make the accidents of a life in Hollywood seem like the result of his own solitary sensibilities.
Yet we’re still left with the boundless gift of L.A. Confidential, which I’ve elsewhere noted is the movie that has had the greatest impact on my writing life. (My three published novels are basically triangulations between L.A. Confidential, Foucault’s Pendulum, and The Day of the Jackal, with touches of Thomas Harris and The X-Files, but it was Hanson, even more than James Ellroy, who first taught me the pleasures of a triple plot.) It has as many great scenes as The Godfather, and as deep a bench of memorable performances, and it’s the last really complicated story that a studio ever allowed itself. When you look at the shine of its images and the density of its screenplay, you realize that its real descendants can be found in the golden age of television, although it accomplishes more in two and a half hours than most prestige dramas can pull off in ten episodes. It’s a masterpiece of organization that still allows itself to breathe, and it keeps an attractive gloss of cynicism while remaining profoundly humane. I’m watching it again as I write this, and I’m relieved to find that it seems ageless: it’s startling to realize that it was released nearly two decades ago, and that a high school student discovering it now will feel much as I did when I saw Chinatown. When it first came out, I was almost tempted to undervalue it because it went down so easily, and it took me a few years to recognize that it was everything I’d ever wanted in a movie. And it still is—even if Hanson himself always seemed conscious of its limitations, and restless in his longing to do more.
Note: Spoilers follow for Stranger Things.
One of the first images we see on the television show Stranger Things is a poster for John Carpenter’s The Thing. (In fact, it’s only as I type this now that it occurs to me that the title of the series, which premiered earlier this summer on Netflix, might be an homage as well.) It’s hanging in the basement of one of the main characters, a twelve-year-old named Mike, who is serving as the Dungeon Master of a roleplaying campaign with three of his best friends. You can see the poster in the background for most of the scene, and in a later episode, two adults watch the movie at home, oblivious to the fact that a monster from another dimension is stalking the inhabitants of their town in Indiana. Not surprisingly, I was tickled to see my favorite story by John W. Campbell featured so prominently here: Campbell wrote “Who Goes There?” back in 1937, and the fact that it’s still a reference point for a series like this, almost eighty years later, is astounding. Yet apart from these two glimpses, The Thing doesn’t have much in common with Stranger Things. The former is set in a remote Antarctic wasteland in which no one is what he seems; the latter draws from a different tradition in science fiction, with gruesome events emerging from ordinary, even idyllic, surroundings, and once we’ve identified all the players, everything is more or less exactly what it appears to be. It flirts with paranoia, but it’s altogether cozy, even reassuring, in how cleverly it gives us just what we expect.
That said, Stranger Things is very good at achieving what it sets out to do. The date of the opening scene is November 6, 1983, and once Mike’s best friend Will is pulled by a hideous creature into a parallel universe, the show seems determined to reference every science fiction or fantasy movie of the previous five years. Its most obvious touchstones are E.T., Poltergeist, The Goonies, and Close Encounters of the Third Kind, but there are touches of The Fury as well, and even shades of Stephen King. (Will’s older brother, played by Charlie Heaton, looks eerily like a young King, and the narrative sometimes feels like an attempt to split the difference between Firestarter and It.) Visually, it goes past even Super 8 in its meticulous reconstruction of the look and feel of early Steven Spielberg, and the lighting and cinematography are exquisitely evocative of its source. The characters and situations are designed to trigger our memories, too, and the series gets a lot of mileage out of recombining the pieces: we’re invited to imagine the kids from The Goonies going after whatever was haunting the house in Poltergeist, with a young girl with psychokinetic powers taking the place of E.T. As Will’s mother, Winona Ryder initially comes off as a combination of the Melinda Dillon and Richard Dreyfuss characters from Close Encounters—she’s frantic at Will’s disappearance, but she also develops an intriguing streak of obsession, hanging up holiday lights in her house and watching them flicker in hopes of receiving a message from her missing son. And it can be fun to see these components slide into place.
It’s only when the characters are asked to stand for something more than their precursors that the series starts to falter. Ryder’s character doesn’t develop after the first couple of episodes, and she keeps hitting the same handful of notes. Once the players have been established, they don’t act in ways that surprise us or push against the roles that they’ve been asked to embody, and most of the payoffs are telegraphed well in advance. The only adult character who really sticks in the mind is the police chief played by David Harbour, and that’s due less to the writing than to Harbour’s excellent work as a rock-solid archetype. Worst of all, the show seems oddly uncertain about what to do with its kids, who should be the main attraction. They all look great with their bikes and walkie-talkies, and Gaten Matarazzo’s Dustin is undeniably endearing—he’s the show’s only entirely successful character. But they spend too much time squabbling among themselves, when a story like this really demands that they present a unified front against the adult world. For the most part, the interpersonal subplots do nothing but mark time: we don’t know enough about the characters to be invested in their conflicts or romances, and far too many scenes play like a postponement of the real business at hand. Any story about the paranormal is going to have one character trying to get the others to believe, but it’s all in service of the moment when they put their differences aside. When everyone teams up on Stranger Things, it’s satisfying, but it occurs just one episode before the finale, and before we have a chance to absorb or enjoy it, it’s over.
And part of the problem, I think, is that Stranger Things tells the kind of story that might have been better covered in two hours, rather than eight. When I go back and watch the Spielberg films that the series is trying to evoke, what strikes me first is an unusual absence of human conflict. In both Close Encounters and E.T., the shadowy government operatives turn out to be unexpectedly benevolent, and the worst villains we see are monsters of venality, like the councilmen who keep the beaches open in Jaws or the developers who build on a graveyard in Poltergeist. For the most part, the characters are too busy dealing with the wonders or terrors on display to fight among themselves. In The Goonies, the kids are arguing all the time, like the crew in Jaws, but it never slows down the plot: they keep stumbling into new set pieces. It’s a strategy that works fine for a movie, in which the glow of the images and situations is enough to carry us to the climax, but a season of television can’t run on that battery alone. As a result, Stranger Things feels obliged to bring in conflicts that will keep the wheels turning, even if it lessens the appeal of the whole. The men in black are anonymous bad guys, full stop, and the show isn’t above using them to pad an episode’s body count, with the psychokinetic girl Eleven snapping their necks with her mind. (I kept expecting her to simply blow up the main antagonist, as Amy Irving—Spielberg’s future wife—did to John Cassavetes in The Fury, and I was half right.) Sustaining a sense of awe or dread over multiple episodes would have been a much harder trick than getting the lighting just right. And the strangest thing about Stranger Things is that it makes us think it might have been possible.
Earlier this week, The A.V. Club, which is still the pop culture website at which I spend the vast majority of my online life, announced a new food section called “Supper Club.” It’s helmed by the James Beard Award-winning food critic and journalist Kevin Pang, a talented writer and documentarian whose work I’ve admired for years. On Wednesday, alongside the site’s usual television and movie coverage, seemingly half the homepage was devoted to features like “America’s ten tastiest fast foods,” followed a day later by “All of Dairy Queen’s Blizzards, ranked.” And the reaction from the community was—not good. Pang’s introductory post quickly drew over a thousand comments, with the most upvoted response reading:
I’ll save you about six months of pissed-away cash. Please reallocate the money that will be wasted on this venture to add more shows to the TV Club review section.
Most of the other food features received the same treatment, with commenters ignoring the content of the articles themselves and complaining about the new section on principle. Internet commenters, it must be said, are notoriously resistant to change, and the most vocal segment of the community represents a tiny fraction of the overall readership of The A.V. Club. But I think it’s fair to say that the site’s editors can’t be entirely happy with how the launch has gone.
Yet the readers aren’t altogether wrong, either, and in retrospect, you could make a good case that the rollout should have been handled differently. The A.V. Club has gone through a rough couple of years, with many of its most recognizable writers leaving to start the movie site The Dissolve—which recently folded—even as its signature television coverage has been scaled back. Those detailed reviews of individual episodes might be popular with commenters, but they evidently don’t generate enough page views to justify the same degree of investment, and the site is looking at ways to stabilize its revenue at a challenging time for the entire industry. The community is obviously worried about this, and Supper Club happened to appear at a moment when the commenters were likely to be skeptical about any new move, as if it were all a zero-sum game, which it isn’t. But the launch itself didn’t help matters. It makes sense to start an enterprise like this with a lot of articles on its first day, but taking over half the site with minimal advance warning lost it a lot of goodwill. Pang could also have been introduced more gradually: he’s a celebrity in foodie circles, but to most A.V. Club readers, he’s just a name. (It was also probably a miscalculation to have Pang write the introductory post himself, which placed him in the awkward position of having to drum up interest in his own work for an audience that didn’t know who he was.) And while I’ve enjoyed some of the content so far, and I understand the desire to keep the features lightweight and accessible, I don’t think the site has done itself any favors by leading with articles like “Do we eat soup or do we drink soup?”
This might seem like a lot of analysis for a kerfuffle that will be forgotten within a few weeks, no matter how Supper Club does in the meantime. But The A.V. Club has been a landmark site for pop culture coverage for the last decade, and its efforts to reinvent itself should concern anyone who cares about whether such venues can survive. I found myself thinking about this shortly after reading the excellent New Yorker profile of Pete Wells, the restaurant critic of the New York Times. Its author, Ian Parker, notes that modern food writing has become a subset of cultural criticism:
“A lot of reviews now tend to be food features,” [former Times restaurant critic Mimi Sheraton] said. She recalled a reference to Martin Amis in a Wells review of a Spanish restaurant in Brooklyn; she said she would have mentioned Amis only “if he came in and sat down and ordered chopped liver.”
Craig Claiborne, in a review from 1966, observed, “The lobster tart was palatable but bland and the skewered lamb on the dry side. The mussels marinière were creditable.” Thanks, in part, to the informal and diverting columns of Gael Greene, at New York, and Ruth Reichl, the Times’ critic during the nineties, restaurant reviewing in American papers has since become as much a vehicle for cultural criticism and literary entertainment—or, as Sheraton put it, “gossip”—as a guide to eating out.
If this is true, and I think it is, it means that food criticism, for better or worse, falls squarely within the mandate of The A.V. Club, whether its commenters like it or not.
But that doesn’t mean that we shouldn’t hold The A.V. Club to unreasonably high standards. In fact, we should be harder on it than we would on most sites, for reasons that Parker neatly outlines in his profile of Wells:
As Wells has come to see it, a disastrous restaurant is newsworthy only if it has a pedigree or commercial might. The mom-and-pop catastrophe can be overlooked. “I shouldn’t be having to explain to people what the place is,” he said. This reasoning seems civil, though, as Wells acknowledged, it means that his pans focus disproportionately on restaurants that have corporate siblings. Indeed, hype is often his direct or indirect subject. Of the fifteen no-star evaluations in his first four years, only two went to restaurants that weren’t part of a group of restaurants.
Parker continues: “There are restaurants that exist to have four Times stars. With fewer, they become a kind of paradox.” And when it comes to pop culture, The A.V. Club is the equivalent of a four-star restaurant. It was writing deeply felt, outrageously long essays on film and television before the longread was even a thing—in part, I suspect, because of its historical connection to The Onion: because it was often mistaken for a parody site, it always felt the need to prove its fundamental seriousness, which it did, over and over again. If Supper Club had launched with one of the ambitious, richly reported pieces that Pang has written elsewhere, the response might have been very different. Listicles might make more economic sense, and they can be fun if done right, but The A.V. Club has defined itself as a place where obsessively detailed and personal pop culture writing has a home. That’s what Supper Club should be. And until it is, we shouldn’t be surprised if readers have trouble swallowing it.