Alec Nevala-Lee

Thoughts on art, creativity, and the writing life.


Peak television and the future of stardom


Kevin Costner in The Postman

Earlier this week, I devoured the long, excellent article by Josef Adalian and Maria Elena Fernandez of Vulture on the business of peak television. It’s full of useful insights and even better gossip—and it names plenty of names—but there’s one passage that really caught my eye, in a section about the huge salaries that movie stars are being paid to make the switch to the small screen:

A top agent defends the sums his clients are commanding, explaining that, in the overall scheme of things, the extra money isn’t all that significant. “Look at it this way,” he says. “If you’re Amazon and you’re going to launch a David E. Kelley show, that’s gonna cost $4 million an episode [to produce], right? That’s $40 million. You can have Bradley Whitford starring in it, [who is] gonna cost you $150,000 an episode. That’s $1.5 million of your $40 million. Or you could spend another $3.5 million [to get Costner] on what will end up being a $60 million investment by the time you market and promote it. You can either spend $60 [million] and have the Bradley Whitford show, or $63.5 [million] and have the Kevin Costner show. It makes a lot of sense when you look at it that way.”

With all due apologies to Bradley Whitford, I found this thought experiment fascinating, and not just for the reasons that the agent presumably shared it. It implies, for one thing, that television—which is often said to be overtaking Hollywood in terms of quality—is becoming more like feature filmmaking in another respect: it’s the last refuge of the traditional star. We frequently hear that movie stardom is dead and that audiences are drawn more to franchises than to recognizable faces, so the fact that cable and streaming networks seem intensely interested in signing film stars, in a post-True Detective world, implies that their model is different. Some of it may be due to the fact, as William Goldman once said, that no studio executive ever got fired for hiring a movie star: as the new platforms fight to establish themselves, it makes sense that they’d fall back on the idea of star power, which is one of the few things that corporate storytelling has ever been able to quantify or understand. It may also be because the marketing strategy for television inherently differs from that for film: an online series is unusually dependent on media coverage to stand out from the pack, and signing a star always generates headlines. Or at least it once did. (The Vulture article notes that Woody Allen’s new series for Amazon “may end up marking peak Peak TV,” and it seems a lot like a deal that was made for the sake of the coverage it would produce.)

Kevin Costner in JFK

But the most plausible explanation lies in simple economics. As the article explains, Netflix and the other streaming companies operate according to a “cost-plus” model: “Rather than holding out the promise of syndication gold, the company instead pays its studio and showrunner talent a guaranteed up-front profit—typically twenty or thirty percent above what it takes to make a show. In exchange, it owns all or most of the rights to distribute the show, domestically and internationally.” This limits the initial risk to the studio, but also the potential upside: nobody involved in producing the show itself will see any money on the back end. In addition, it means that even the lead actors of the series are paid a flat dollar amount, which makes them a more attractive investment than they might be for a movie. Most of the major stars in Hollywood earn gross points, which means that they get a cut of the box office receipts before the film turns a profit—a “first dollar” deal that makes the mathematics of breaking even much more complicated. The thought experiment about Bradley Whitford and Kevin Costner only makes sense if you can get Costner at a fixed salary per episode. In other words, movie stars are being actively courted by television because its model is a throwback to an earlier era, when actors were held under contract by a studio without any profit participation, and before stars and their agents negotiated better deals that ended up undermining the economic basis of the star system entirely.

And it’s revealing that Costner, of all actors, appears in this example. His name came up mostly because multiple sources told Vulture that he was offered $500,000 per episode to star in a streaming series: “He passed,” the article says, “but industry insiders predict he’ll eventually say ‘yes’ to the right offer.” But he also resonates because he stands for a kind of movie stardom that was already on the wane when he first became famous. It has something to do with the quintessentially American roles that he liked to play—even JFK is starting to seem like the last great national epic—and an aura that somehow kept him in leading parts two decades after his career as a major star was essentially over. That’s weirdly impressive in itself, and it testifies to how intriguing a figure he remains, even if audiences aren’t likely to pay to see him in a movie. Whenever I think of Costner, I remember what the studio executive Mike Medavoy once claimed to have told him right at the beginning of his career:

“You know,” I said to him over lunch, “I have this sense that I’m sitting here with someone who is going to become a great big star. You’re going to want to direct your own movies, produce your own movies, and you’re going to end up leaving your wife and going through the whole Hollywood movie-star cycle.”

Costner did, in fact, end up leaving his first wife. And if he also leaves film for television, even temporarily, it may reveal that “the whole Hollywood movie-star cycle” has a surprising final act that few of us could have anticipated.

Written by nevalalee

May 27, 2016 at 9:03 am

“Asthana glanced over at the television…”


"A woman was standing just over his shoulder..."

Note: This post is the eighteenth installment in my author’s commentary for Eternal Empire, covering Chapter 19. You can read the previous installments here.

A quarter of a century ago, I read a story about the actor Art Carney, possibly apocryphal, that I’ve never forgotten. Here’s the version told by the stage and television actress Patricia Wilson:

During a live performance of the original Honeymooners, before millions of viewers, Jackie [Gleason] was late making an entrance into a scene. He left Art Carney onstage alone, in the familiar seedy apartment set of Alice and Ralph Kramden. Unflappable, Carney improvised action for Ed Norton. He looked around, scratched himself, then went to the Kramden refrigerator and peered in. He pulled out an orange, shuffled to the table, and sat down and peeled it. Meanwhile frantic stage managers raced to find Jackie. Art Carney sat onstage peeling and eating an orange, and the audience convulsed with laughter.

According to some accounts, Carney stretched the bit of business out for a full two minutes before Gleason finally appeared. And while it certainly speaks to Carney’s ingenuity and resourcefulness, we should also take a moment to tip our hats to that humble orange, as well as the prop master who thought to stick it in the fridge—unseen and unremarked—in the first place.

Theatrical props, as all actors and directors know, can be a source of unexpected ideas, just as the physical limitations or possibilities of the set itself can provide a canvas on which the action is conceived in real time. I’ve spoken elsewhere of the ability of vaudeville comedians to improvise routines on the spot using whatever was available on a standing set, and there’s a sense in which the richness of the physical environment in which a scene takes place is a battery from which the performances can draw energy. When a director makes sure that each actor’s pockets are full of the litter that a character might actually carry, it isn’t just a mark of obsessiveness or self-indulgence, or even a nod toward authenticity, but a matter of storing up potential tools. A prop by itself can’t make a scene work, but it can provide the seed around which a memorable moment or notion can grow, like a crystal. In more situations than you might expect, creativity lies less in the ability to invent from scratch than to make effective use of whatever happens to lie at hand. Invention is a precious resource, and most artists have a finite amount of it; it’s better, whenever possible, to utilize what the world provides. And much of the time, when you’re faced with a hard problem to solve, you’ll find that the answer is right there in the background.

"Asthana glanced over at the television..."

This is as true of writing fiction as of any of the performing arts. In the past, I’ve suggested that this is the true purpose of research or location work: it isn’t about accuracy, but about providing raw material for dreams, and any writer faced with the difficult task of inventing a scene would be wise to exploit what already exists. It’s infinitely easier to write a chase scene, for example, if you’re tailoring it to the geography of a particular street. As usual, it comes back to the problem of making choices: the more tangible or physical the constraints, the more likely they’ll generate something interesting when they collide with the fundamentally abstract process of plotting. Even if the scene I’m writing takes place somewhere wholly imaginary, I’ll treat it as if it were being shot on location: I’ll pick a real building or locale that has the qualities I need for the story, pore over blueprints and maps, and depart from the real plan only when I don’t have any alternative. In most cases, the cost of that departure, in terms of the confusion it creates, is far greater than the time and energy required to make the story fit within an existing structure. For much the same reason, I try to utilize the props and furniture you’d naturally find there. And that’s all the more true when a scene occurs in a verifiable place.

Sometimes, this kind of attention to detail can result in surprising resonances. There’s a small example that I like in Chapter 19 of Eternal Empire. Rogozin, my accused intelligence agent, is being held without charges at a detention center in Paddington Green. This is a real location, and its physical setup becomes very important: Rogozin is going to be killed, in an apparent suicide, under conditions of heavy security. To prepare these scenes, I collected reference photographs, studied published descriptions, and shaped the action as much as possible to unfold logically under the constraints the location imposed. And one fact caught my eye, purely as a matter of atmosphere: the cells at Paddington Green are equipped with televisions, usually set to play something innocuous, like a nature video. This had obvious potential as a counterpoint to the action, so I went to work looking for a real video that might play there. And after a bit of searching, I hit on a segment from the BBC series Life in the Undergrowth, narrated by David Attenborough, about the curious life cycle of the gall wasp. The phenomenon it described, as an invading wasp burrows into the gall created by another, happened to coincide well—perhaps too well—with the story itself. As far as I’m concerned, it’s what makes Rogozin’s death scene work. And while I could have made up my own video to suit the situation, it seemed better, and easier, to poke around the stage first to see what I could find…

Written by nevalalee

May 7, 2015 at 9:11 am

The unbreakable television formula


Ellie Kemper in Unbreakable Kimmy Schmidt

Watching the sixth season premiere of Community last night on Yahoo—which is a statement that would have once seemed like a joke in itself—I was struck by the range of television comedy we have at our disposal these days. We’ve said goodbye to Parks and Recreation, we’re following Community into what is presumably its final stretch, and we’re about to greet Unbreakable Kimmy Schmidt as it starts what looks to be a powerhouse run on Netflix. These shows are superficially in the same genre: they’re single-camera sitcoms that freely grant themselves elaborate sight gags and excursions into surrealism, with a cutaway style that owes as much to The Simpsons as to Arrested Development. Yet they’re palpably different in tone. Parks and Rec was the ultimate refinement of the mockumentary style, with talking heads and reality show techniques used to flesh out a narrative of underlying sweetness; Community, as always, alternates between obsessively detailed fantasy and a comic strip version of emotions to which we can all relate; and Kimmy Schmidt takes place in what I can only call Tina Fey territory, with a barrage of throwaway jokes and non sequiturs designed to be referenced and quoted forever.

And the diversity of approach we see in these three comedies makes the dramatic genre seem impoverished. Most television dramas are still basically linear; they’re told using the same familiar grammar of establishing shots, medium shots, and closeups; and they’re paced in similar ways. If you were to break down an episode by shot length and type, or chart the transitions between scenes, an installment of Game of Thrones would look a lot on paper like one of Mad Men. There’s room for individual quirks of style, of course: the handheld cinematography favored by procedurals has a different feel from the clinical, detached camera movements of House of Cards. And every now and then, we get a scene—like the epic tracking shot during the raid in True Detective—that awakens us to the medium’s potential. But the fact that such moments are striking enough to inspire think pieces the next day only points to how rare they are. Dramas are just less inclined to take big risks of structure and tone, and when they do, they’re likely to be hybrids. Shows like Fargo or Breaking Bad are able to push the envelope precisely because they have a touch of black comedy in their blood, as if that were the secret ingredient that allowed for greater formal daring.

Jon Hamm on Mad Men

It isn’t hard to pin down the reason for this. A cutaway scene or extended homage naturally takes us out of the story for a second, and comedy, which is inherently more anarchic, has trained us to roll with it. We’re better at accepting artifice in comic settings, since we aren’t taking the story quite as seriously: whatever plot exists is tacitly understood to be a medium for the delivery of jokes. Which isn’t to say that we can’t care deeply about these characters; if anything, our feelings for them are strengthened because they take place in a stylized world that allows free play for the emotions. Yet this is also something that comedy had to teach us. It can be fun to watch a sitcom push the limits of plausibility to the breaking point, but if a drama deliberately undermines its own illusion of reality, we can feel cheated. Dramas that constantly draw attention to their own artifice, as Twin Peaks did, are more likely to become cult favorites than popular successes, since most of us just want to sit back and watch a story that presents itself using the narrative language we know. (Which, to be fair, is true of comedies as well: the three sitcoms I’ve mentioned above, taken together, have a fraction of the audience of something like The Big Bang Theory.)

In part, it’s a problem of definition. When a drama pushes against its constraints, we feel more comfortable referring to it as something else: Orange is the New Black, which tests its structure as adventurously as any series on the air today, has suffered at awards season from its resistance to easy categorization. But what’s really funny is that comedy escaped from its old formulas by appropriating the tools that dramas had been using for years. The three-camera sitcom—which has been responsible for countless masterpieces of its own—made radical shifts of tone and location hard to achieve, and once comedies liberated themselves from the obligation to unfold as if for a live audience, they could indulge in extended riffs and flights of imagination that were impossible before. It’s the kind of freedom that dramas, in theory, have always had, even if they utilize it only rarely. This isn’t to say that a uniformity of approach is a bad thing: the standard narrative grammar evolved for a reason, and if it gives us compelling characters with a maximum of transparency, that’s all for the better. Telling good stories is hard enough as it is, and formal experimentation for its own sake can be a trap in itself. Yet we’re still living in a world with countless ways of being funny, and only one way, within a narrow range of variations, of being serious. And that’s no laughing matter.

The crowded circle of television


The cast of Mad Men

Note: Every Friday, The A.V. Club, my favorite pop cultural site on the Internet, throws out a question to its staff members for discussion, and I’ve decided that I want to join in on the fun. This week’s question: “What’s your favorite TV show of the year so far?”

There are times when watching television can start to feel like a second job—a pleasurable one, to be sure, but one that demands a lot of work nevertheless. Over the last year, I’ve followed more shows than ever, including Mad Men, Game of Thrones, Orange is the New Black, Hannibal, Community, Parks and Recreation, House of Cards, The Vampire Diaries, and True Detective. For the most part, they’ve all had strong runs, and I’d have trouble picking a favorite. (If pressed, I’d probably go with Mad Men, if only for old times’ sake, with Hannibal as a very close second.) They’re all strikingly different in emphasis, tone, and setting, but they also have a lot in common. With one exception, which I’ll get to in a moment, these are dense shows with large casts and intricate storylines. Many seem devoted to pushing the limits of how much complexity can be accommodated within the constraints of the television format, which may be why the majority run for just ten to thirteen episodes: it’s hard to imagine that level of energy sustained over twenty or more installments.

And while I’m thrilled by the level of ambition visible here, it comes at a price. There’s a sort of arms race taking place between media of all kinds, as they compete for our attention in an increasingly crowded space. Books, even literary novels, are expected to be page-turners; movies offer up massive spectacle to the point where miraculous visual effects are taken for granted; and television has taken to packing every minute of narrative time to the bursting point. (This isn’t true of all shows, of course—a lot of television series are still designed to play comfortably in the background of a hotel room—but it’s generally the case with prestige shows that end up on critics’ lists and honored at award ceremonies.) This trend toward complexity arises from a confluence of factors I’ve tried to unpack here before: just as The Simpsons was the first freeze-frame sitcom, modern television takes advantage of our streaming and binge-watching habits to deliver storytelling that rewards, and even demands, close attention.

Matthew McConaughey on True Detective

For the most part, this is a positive development. Yet there’s also a case to be made that television, which is so good at managing extended narratives and enormous casts of characters, is also uniquely suited for the opposite: silence, emptiness, and contemplation. In a film, time is a precious commodity, and when you’re introducing characters while also setting in motion the machinery of a complicated story, there often isn’t time to pause. Television, in theory, should be able to stretch out a little, interspersing relentless forward momentum with moments of quiet, which are often necessary for viewers to consolidate and process what they’ve seen. Twin Peaks was as crowded and plotty as any show on the air today, but it also found time for stretches of weird, inexplicable inaction, and it’s those scenes that I remember best. Even in the series finale, with so many threads to address and only forty minutes to cover them all, it devotes endless minutes to Cooper’s hallucinatory—and almost entirely static—ordeal in the Black Lodge, and even to a gag involving a decrepit bank manager rising from his desk and crossing the floor of his branch very, very slowly.

So while there’s a lot of fun to be had with shows that constantly accelerate the narrative pace, it can also be a limitation, especially when it’s handled less than fluently. (For every show, like Orange is the New Black, that manages to cut expertly between subplots, there’s another, like Game of Thrones, that can’t quite seem to handle its enormous scope, and even The Vampire Diaries is showing signs of strain.) Both Hannibal and Mad Men know when to linger on an image or revelation—roughly half of Hannibal is devoted to contemplating its other half—and True Detective, in particular, seemed to consist almost entirely of such pauses. We remember such high points as the final chase with the killer or the raid in “Who Goes There,” but what made the show special were the scenes in which nothing much seemed to be happening. It was aided in this by its limited cast and its tight focus on its two leads, so it’s possible that what shows really need to slow things down are a couple of movie stars to hold the eye. But it’s a step in the right direction. If time is a flat circle, as Rust says, so is television, and it’s good to see it coming back around.

The dreamlife of television


Aaron Paul on Breaking Bad

I’ve been dreaming a lot about Breaking Bad. On Wednesday, my wife and I returned from a trip to Barcelona, where we’d spent a beautiful week: my baby daughter was perfectly happy to be toted around various restaurants, cultural sites, and the Sagrada Familia, and it came as a welcome break from my own work. Unfortunately, it also meant that we were going to miss the Breaking Bad finale, which aired the Sunday before we came home. For a while, I seriously considered bringing my laptop and downloading it while we were out of the country, both because I was enormously anxious to see how the show turned out and because I dreaded the spoilers I’d have to avoid for the three days before we returned. In the end, I gritted my teeth and decided to wait until we got home. This meant avoiding most of my favorite news and pop cultural sites—I was afraid to even glance past the top few headlines on the New York Times—and staying off Twitter entirely, which I suppose wasn’t such a great loss. And even as we toured the Picasso Museum and walked for miles along the marina with a baby in tow, my thoughts were rarely very far from Walter White.

This must have done quite a number on my psyche, because I started dreaming about the show with alarming frequency. My dreams included two separate, highly elaborated versions of the finale, one of which was a straightforward bloodbath with a quiet epilogue, the other a weird metafictional conclusion in which the events of the series were played out on a movie screen with the cast and crew watching them unfold—which led me to exclaim, while still dreaming: “Of course that’s how they would end it!” Now that I’ve finally seen the real finale, the details of these dreams are fading, and only a few scraps of imagery remain. Yet the memories are still emotionally charged, and they undoubtedly affected how I approached the last episode itself, which I was afraid would never live up to the versions I’d dreamed for myself. I suspect that a lot of fans, even those who didn’t actually hallucinate alternate endings, probably felt the same way. (For the record, I liked the finale a lot, even if it ranks a notch below the best episodes of the show, which was always best at creating chaos, not resolving it. And I think about its closing moments almost every day.)

Jon Hamm on Mad Men

And it made me reflect on the ways in which television, especially in its modern, highly serialized form, is so conducive to dreaming. Dreams are a way of assembling and processing fragments of the day’s experience, or recollections from the distant past, and a great television series is nothing less than a vast storehouse of memories from another life. When a show is as intensely serialized as Breaking Bad was, it can be hard to remember individual episodes, aside from the occasional formal standout like “Fly”: I can’t always recall what scenes took place when, or in what order, and an especially charged sequence of installments—like the last half of this final season—tends to blend together into a blur of vivid impressions. What I remember are facial expressions, images, bits of dialogue: “Stay out of my territory.” “Run.” “Tread lightly.” And the result is a mine of moments that end up naturally incorporated into my own subconscious. A good movie or novel exists as a piece, and I rarely find myself dreaming alternate lives for, say, Rick and Ilsa or Charles Foster Kane. With Walter White, it’s easy to imagine different paths that the action could have taken, and those byways play themselves out in the deepest parts of my brain.

Which may explain why television is so naturally drawn to dream sequences and fantasies, which are only one step removed from the supposedly factual events of the shows themselves. Don Draper’s dreams have become a huge part of Mad Men, almost to the point of parody, and this has always been an art form that attracts surreal temperaments, from David Lynch to Bryan Fuller, even if they tend to be destroyed by it. As I’ve often said before, it’s the strangest medium I know, and at its best, it’s the outcome of many unresolved tensions. Television can feel maddeningly real, a hidden part of your own life, which is why it can be so hard to say goodbye to a great show. It’s also impossible to get a lasting grip on it or to hold it all in your mind at once, especially if it runs for more than a few seasons, which hints at an even deeper meaning. I’ve always been struck by how poorly we integrate the different chapters in our own past: there are entire decades of my life that I don’t think about for months on end. When they return, it’s usually in the hours just before waking. And by teaching us to process narratives that can last for years, it’s possible that television subtly trains us to better understand the shapes of our own lives, even if it’s only in dreams.

Written by nevalalee

October 7, 2013 at 8:27 am

Posted in Television


Critical television studies


The cast of Community

Television is such a pervasive medium that it’s easy to forget how deeply strange it is. Most works of art are designed to be consumed all at once, or at least in a fixed period of time—it’s physically possible, if not entirely advisable, to read War and Peace in one sitting. Television, by contrast, is defined by the fact of its indefinite duration. House of Cards aside, it seems likely that most of us will continue to watch shows week by week, year after year, until they become a part of our lives. This kind of extended narrative can be delightful, but it’s also subject to risk. A beloved show can change for reasons beyond anyone’s control. Sooner or later, we find out who killed Laura Palmer. An actor’s contract expires, so Mulder is abducted by aliens, and even if he comes back, by that point, we’ve lost interest. For every show like Breaking Bad that has its dark evolution mapped out for seasons to come, there’s a series like Glee, which disappoints, or Parks and Recreation, which gradually reveals a richness and warmth that you’d never guess from the first season alone. And sometimes a show breaks your heart.

It’s clear at this point that the firing of Dan Harmon from Community was the most dramatic creative upheaval for any show in recent memory. This isn’t the first time that a show’s guiding force has departed under less than amicable terms—just ask Frank Darabont—but it’s unusual in a series so intimately linked to one man’s particular vision. Before I discovered Community, I’d never heard of Dan Harmon, but now I care deeply about what this guy feels and thinks. (Luckily, he’s never been shy about sharing this with the rest of us.) And although it’s obvious from the opening minutes of last night’s season premiere that the show’s new creative team takes its legacy seriously, there’s no escaping the sense that they’re a cover band doing a great job with somebody else’s music. Showrunners David Guarascio and Moses Port do their best to convince us out of the gate that they know how much this show means to us, and that’s part of the problem. Community was never a show about reassuring us that things won’t change, but about unsettling us with its endless transformations, even as it delighted us with its new tricks.

The Community episode "Remedial Chaos Theory"

Don’t get me wrong: I laughed a lot at last night’s episode, and I was overjoyed to see these characters again. By faulting the new staff for repeating the same beats I loved before, when I might have been outraged by any major alterations, I’m setting it up so they just can’t win. But the show seems familiar now in a way that would have seemed unthinkable for most of its first three seasons. Part of the pleasure of watching the series came from the fact that you never knew what the hell might happen next, and it wasn’t clear if Harmon knew either. Not all of his experiments worked: there were even some clunkers, like “Messianic Myths and Ancient Peoples,” in the glorious second season, which is one of my favorite runs of any modern sitcom. But as strange as this might have once seemed, it feels like we finally know what Community is about. It’s a show that takes big formal risks, finds the emotional core in a flurry of pop culture references, and has no idea how to use Chevy Chase. And although I’m grateful that this version of the show has survived, I don’t think I’m going to tune in every week wondering where in the world it will take me.

And the strange thing is that Community might have gone down this path with or without Harmon. When a show needs only two seasons to establish that anything is possible, even the most outlandish developments can seem like variations on a theme. Even at the end of the third season, there was the sense that the series was repeating itself. I loved “Digital Estate Planning,” for instance, but it felt like the latest attempt to do one of the formally ambitious episodes that crop up at regular intervals each season, rather than an idea that forced itself onto television because the writers couldn’t help themselves. In my review of The Master, I noted that Paul Thomas Anderson has perfected his brand of hermetic filmmaking to the point where it would be more surprising if he made a movie that wasn’t ambiguous, frustrating, and deeply weird. Community has ended up in much the same place, so maybe it’s best that Harmon got out when he did. It’s doubtful that the series will ever be able to fake us out with a “Critical Film Studies” again, because it’s already schooled us, like all great shows, in how it needs to be watched. And although its characters haven’t graduated from Greendale yet, its viewers, to their everlasting benefit, already have.

Written by nevalalee

February 8, 2013 at 9:50 am

Wouldn’t it be easier to write for television?


Last week, I had dinner with a college friend I hadn’t seen in years, who is thinking about giving up a PhD in psychology to write for television in Los Angeles. We spent a long time commiserating about the challenges of the medium, at least from a writer’s point of view, hitting many of the points that I’ve discussed here before. With the prospects of a fledgling television show so uncertain, I said, especially when the show might be canceled after four episodes, or fourteen, or forty, it’s all but impossible for the creator to tell effective stories over time. Running a television show is one of the hardest jobs in the world, with countless obstacles along the way, even for critical darlings. Knowing all this, I asked my friend, why did he want to do this in the first place?

My friend’s response was an enlightening one. The trouble with writing novels or short stories, he said, is the fact that the author is expected to spend a great deal of time on description, style, and other tedious elements that a television writer can cheerfully ignore. Teleplays, like feature scripts, are nothing but structure and dialogue (or maybe just structure, as William Goldman says), and there’s something liberating in how they strip storytelling down to its core. The writer takes care of the bones of the narrative, which is where his primary interest presumably lies, then outsources the work of casting, staging, and art direction to qualified professionals who are happy to do the work. And while I didn’t agree with everything my friend said, I could certainly see his point.

Yet that’s only half of the story. It’s true that a screenwriter gets to outsource much of the conventional apparatus of fiction to other departments, but only at the price of creative control. You may have an idea about how a character should look, or what kind of home he should have, or how a moment of dialogue, a scene, or an overall story should unfold, but as a writer, you don’t have much control over the matter. Scripts are easier to write than novels for a reason: they’re only one piece of a larger enterprise, which is reflected in the writer’s relative powerlessness. The closest equivalent to a novelist in television isn’t the writer, but the executive producer. Gene Roddenberry, in The Making of Star Trek, neatly sums up the similarity between the two roles:

Producing in television is like storytelling. The choice of the actor, picking the right costumes, getting the right flavor, the right pace—these are as much a part of storytelling as writing out that same description of a character in a novel.

And the crucial point about producing a television series, like directing a feature film, is that it’s insanely hard. As Thomas Lennon and Robert Ben Garant point out in their surprisingly useful Writing Movies for Fun and Profit, as far as directing is concerned, “If you’re doing it right, it’s not that fun.” As a feature director or television producer, you’re responsible for a thousand small but critical decisions that need to be made very quickly, and while you’re working on the story, you’re also casting parts, scouting for locations, dealing with the studio and the heads of various departments, and surviving on only a few hours of sleep a night, for a year or more of your life. In short, the amount of effort required to keep control of the project is greater, not less, than what is required to write a novel—except with more money on the line, in public, and with greater risk that control will eventually be taken away from you.

So is it easier to write for television? Yes, if that’s all you want to do. But if you want control of your work, if you want your stories to be experienced in a form close to what you originally envisioned, it isn’t easier. It’s much harder. Which is why, to my mind, John Irving still puts it best: “When I feel like being a director, I write a novel.”

Lessons from great (and not-so-great) television


It can be hard for a writer to admit being influenced by television. In On Becoming a Novelist, John Gardner struck a disdainful note that hasn’t changed much since:

Much of the dialogue one encounters in student fiction, as well as plot, gesture, even setting, comes not from life but from life filtered through TV. Many student writers seem unable to tell their own most important stories—the death of a father, the first disillusionment in love—except in the molds and formulas of TV. One can spot the difference at once because TV is of necessity—given its commercial pressures—false to life.

In the nearly thirty years since Gardner wrote these words, the television landscape has changed dramatically, but it’s worth pointing out that much of what he says here is still true. The basic elements of fiction—emotion, character, theme, even plot—need to come from close observation of life, or even the most skillful novel will eventually ring false. That said, the structure of fiction, and the author’s understanding of the possibilities of the form, doesn’t need to come from life alone, and probably shouldn’t. To develop a sense of what fiction can do, a writer needs to pay close attention to all types of art, even the nonliterary kind. And over the past few decades, television has expanded the possibilities of narrative in ways that no writer can afford to ignore.

If you think I’m exaggerating, consider a show like The Wire, which tells complex stories involving a vast range of characters, locations, and social issues in ways that aren’t possible in any other medium. The Simpsons, at least in its classic seasons, acquired a richness and velocity that continued to build for years, until it had populated a world that rivaled the real one for density and immediacy. (Like the rest of the Internet, I respond to most situations with a Simpsons quote.) And Mad Men continues to furnish a fictional world of astonishing detail and charm. World-building, it seems, is where television shines: in creating a long-form narrative that begins with a core group of characters and explores them for years, until they can come to seem as real as one’s own family and friends.

Which is why Glee can seem like such a disappointment. Perhaps because the musical is already the archest of genres, the show has always regarded its own medium with an air of detachment, as if the conventions of the after-school special or the high school sitcom were merely a sandbox in which the producers could play. On some level, this is fine: The Simpsons, among many other great shows, has fruitfully treated television as a place for narrative experimentation. But by turning its back on character continuity and refusing to follow any plot for more than a few episodes, Glee is abandoning many of the pleasures that narrative television can provide. Watching the show run out of ideas for its lead characters in less than two seasons simply serves as a reminder of how challenging this kind of storytelling can be.

Mad Men, by contrast, not only gives us characters who take on lives of their own, but consistently lives up to those characters in its acting, writing, and direction. (This is in stark contrast to Glee, where I sense that a lot of the real action is taking place in fanfic.) And its example has changed the way I write. My first novel tells a complicated story with a fairly controlled cast of characters, but Mad Men—in particular, the spellbinding convergence of plots in “Shut the Door, Have a Seat”—reminded me of the possibilities of expansive casts, which allow characters to pair off and develop in unexpected ways. (The evolution of Christina Hendricks’s Joan from eye candy to second lead is only the most obvious example.) As a result, I’ve tried to cast a wider net with my second novel, using more characters and settings in the hopes that something unusual will arise. Television, strangely, has made me more ambitious. I’d like to think that even John Gardner would approve.

Written by nevalalee

March 17, 2011 at 8:41 am

Swallowing the turkey


Benjamin Disraeli

Lord Rowton…says that he once asked Disraeli what was the most remarkable, the most self-sustained and powerful sentence he knew. Dizzy paused for a moment, and then said, “Sufficient unto the day is the evil thereof.”

—Augustus J.C. Hare, The Story of My Life

Disraeli was a politician and a novelist, which is an unusual combination, and he knew his business. Politics and writing have less to do with each other than a lot of authors might like to believe, and the fact that you can create a compelling world on paper doesn’t mean that you can do the same thing in real life. (One of the hidden themes of Astounding is that the skills that many science fiction writers acquired in organizing ideas on the page turned out to be notably inadequate when it came to getting anything done during World War II.) Yet both disciplines can be equally daunting and infuriating to novices, in large part because they both involve enormously complicated projects—often requiring years of effort—that need to be approached one day at a time. A single day’s work is rarely very satisfying in itself, and you have to cling to the belief that countless invisible actions and compromises will somehow result in something real. It doesn’t always happen, and even if it does, you may never get credit or praise. The ability to deal with the everyday tedium of politics or writing is what separates professionals from amateurs. And in both cases, the greatest accomplishments are usually achieved by freaks who can combine an overarching vision with a finicky obsession with minute particulars. As Eugène-Melchior de Vogüé, who was both a diplomat and literary critic, said of Tolstoy, it requires “a queer combination of the brain of an English chemist with the soul of an Indian Buddhist.”

And if you go into either field without the necessary degree of patience, the results can be unfortunate. If you’re a writer who can’t subordinate yourself to the routine of writing on a daily basis, the most probable outcome is that you’ll never finish your novel. In politics, you end up with something very much like what we’ve all observed over the last few weeks. Regardless of what you might think about the presidential refugee order, its rollout was clearly botched, thanks mostly to a president and staff that want to skip over all the boring parts of governing and get right to the good stuff. And it’s tempting to draw a contrast between the incumbent, who achieved his greatest success on reality television, and his predecessor, a detail-oriented introvert who once thought about becoming a novelist. (I’m also struck, yet again, by the analogy to L. Ron Hubbard. He spent most of his career fantasizing about a life of adventure, but when he finally got into the Navy, he made a series of stupid mistakes—including attacking two nonexistent submarines off the coast of Oregon—that ultimately caused him to be stripped of his command. The pattern repeated itself so many times that it hints at a fundamental aspect of his personality. He was too impatient to deal with the tedious reality of life during wartime, which failed to live up to the version he had dreamed of himself. And while I don’t want to push this too far, it’s hard not to notice the difference between Hubbard, who cranked out his fiction without much regard for quality, and Heinlein, a far more disciplined writer who was able to consciously tame his own natural impatience into a productive role at the Philadelphia Navy Yard.)

R.H. Blyth

Which brings us back to the sentence that impressed Disraeli. It’s easy to interpret it as an admonition not to think about the future, which isn’t quite right. We can start by observing that it comes at the end of what The Five Gospels notes is possibly “the longest connected discourse that can be directly attributed to Jesus.” It’s the one that asks us to consider the birds of the air and the lilies of the field, which, for a lot of us, prompts an immediate flashback to The Life of Brian. (“Consider the lilies?” “Uh, well, the birds, then.” “What birds?” “Any birds.” “Why?” “Well, have they got jobs?”) But whether or not you agree with the argument, it’s worth noticing that the advice to focus on the evils of each day comes only after an extended attempt at defining a larger set of values—what matters, what doesn’t, and what, if anything, you can change by worrying. You’re only in a position to figure out how best to spend your time after you’ve considered the big questions. As the physician William Osler put it:

[My ideal is] to do the day’s work well and not to bother about tomorrow. You may say that is not a satisfactory ideal. It is; and there is not one which the student can carry with him into practice with greater effect. To it more than anything else I owe whatever success I have had—to this power of settling down to the day’s work and trying to do it well to the best of my ability, and letting the future take care of itself.

This has important implications for both writers and politicians, as well as for progressives who wonder how they’ll be able to get through the next twenty-four hours, much less the next four years. When you’re working on any important project, even the most ambitious agenda comes down to what you’re going to do right now. In On Directing Film, David Mamet expresses it rather differently:

Now, you don’t eat a whole turkey, right? You take off the drumstick and you take a bite of the drumstick. Okay. Eventually you get the whole turkey done. It’ll probably get dry before you do, unless you have an incredibly good refrigerator and a very small turkey, but that is outside the scope of this lecture.

A lot of frustration in art, politics, and life in general comes from attempting to swallow the turkey in one bite. Jesus, I think, was aware of the susceptibility of his followers to grandiose but meaningless gestures, which is why he offered up the advice, so easy to remember and so hard to follow, to simultaneously focus on the given day while keeping the kingdom of heaven in mind. Nearly every piece of practical wisdom in any field is about maintaining that double awareness. Fortunately, it goes in both directions: small acts of discipline aid us in grasping the whole, and awareness of the whole tells us what to do in the moment. As R.H. Blyth says of Zen: “That is all religion is: eat when you are hungry, sleep when you are tired.” And don’t try to eat the entire turkey at once.

From Xenu to Xanadu


L. Ron Hubbard

I do know that I could form a political platform, for instance, which would encompass the support of the unemployed, the industrialist and the clerk and day laborer all at one and the same time. And enthusiastic support it would be.

L. Ron Hubbard, in a letter to his wife Polly, October 1938

Yesterday, my article “Xenu’s Paradox: The Fiction of L. Ron Hubbard and the Making of Scientology” was published on Longreads. I’d been working on this piece, off and on, for the better part of a year, almost from the moment I knew that I was going to be writing the book Astounding. As part of my research, I had to read just about everything Hubbard ever wrote in the genres of science fiction and fantasy, and I ended up working my way through well over a million words of his prose. The essay that emerged from this process was inspired by a simple question. Hubbard clearly didn’t much care for science fiction, and he wrote it primarily for the money. Yet when the time came to invent a founding myth for Scientology, he turned to the conventions of space opera, which had previously played a minimal role in his work. Both his critics and his followers have looked hard at his published stories to find hints of the ideas to come, and there are a few that seem to point toward later developments. (One that frequently gets mentioned is “One Was Stubborn,” in which a fake religious messiah convinces people to believe in the nonexistence of matter so that he can rule the universe. There’s circumstantial evidence, however, that the premise came mostly from John W. Campbell, and that Hubbard wrote it up on the train ride home from New York to Puget Sound.) Still, it’s a tiny fraction of the whole. And such stories by other writers as “The Double Minds” by Campbell, “Lost Legacy” by Robert A. Heinlein, and The World of Null-A by A.E. van Vogt make for more compelling precursors to dianetics than anything Hubbard ever wrote.

The solution to the mystery, as I discuss at length in the article, is that Hubbard tailored his teachings to the small circle of followers he had available after his blowup with Campbell, many of whom were science fiction fans who owed their first exposure to his ideas to magazines like Astounding. And this was only the most dramatic and decisive instance of a pattern that is visible throughout his life. Hubbard is often called a fabulist who compulsively embellished his own accomplishments and turned himself into something more than he really was. But it would be even more accurate to say that Hubbard transformed himself into whatever he thought the people around him wanted him to be. When he was hanging out with members of the Explorers Club, he became a barnstormer, world traveler, and intrepid explorer of the Caribbean and Alaska. Around his fellow authors, he presented himself as the most productive pulp writer of all time, inflating his already impressive word count to a ridiculous extent. During the war, he spun stories about his exploits in battle, claiming to have been repeatedly sunk and wounded, and even a former naval officer as intelligent and experienced as Heinlein evidently took him at his word. Hubbard simply became whatever seemed necessary at the time—as long as he was the most impressive man in the room. It wasn’t until he found himself surrounded by science fiction fans, whom he had mostly avoided until then, that he assumed the form that he would take for the rest of his career. He had never been interested in past lives, but many of his followers were, and the memories that they were “recovering” in their auditing sessions were often colored by the imagery of the stories they had read. And Hubbard responded by coming up with the grandest, most unbelievable space opera saga of them all.

Donald Trump

This leaves us with a few important takeaways. The first is that Hubbard, in the early days, was basically harmless. He had invented a colorful background for himself, but he wasn’t alone: Lester del Rey, among others, seems to have engaged in the same kind of self-mythologizing. His first marriage wasn’t a happy one, and he was always something of a blowhard, determined to outshine everyone he met. Yet he also genuinely impressed John and Doña Campbell, Heinlein, Asimov, and many other perceptive men and women. It wasn’t until after the unexpected success of dianetics that he grew convinced of his own infallibility, casting off such inconvenient collaborators as Campbell and Joseph Winter as obstacles to his power. Even after he went off to Wichita with his remaining disciples, he might have become little more than a harmless crank. As he began to feel persecuted by the government and professional organizations, however, his mood curdled into something poisonous, and it happened at a time in which he had undisputed authority over the people around him. It wasn’t a huge kingdom, but because of its isolation—particularly when he was at sea—he was able to exercise a terrifying amount of control over his closest followers. Hubbard didn’t even enjoy it. He had wealth, fame, and the adulation of a handful of true believers, but he grew increasingly paranoid and miserable. At the time of his death, his wrath was restricted to his critics and to anyone within arm’s reach, but he created a culture of oppression that his successor cheerfully extended against current and former members in faraway places, until no one inside or outside the Church of Scientology was safe.

I wrote the first draft of this essay in May of last year, but it’s hard to read it now without thinking of Donald Trump. Like Hubbard, Trump spent much of his life as an annoying but harmless windbag: a relentless self-promoter who constantly inflated his own achievements. As with Hubbard, everything that he did had to be the biggest and best, and until recently, he was too conscious of the value of his own brand to risk alienating too many people at once. After a lifetime of random grabs for attention, however, he latched onto a cause—the birther movement—that was more powerful than anything he had encountered before, and, like Hubbard, he began to focus on the small number of passionate followers he had attracted. His presidential campaign seems to have been conceived as yet another form of brand extension, culminating in the establishment of a Trump Television network. He shaped his message in response to the crowds who came to his rallies, and before long, he was caught in the same kind of cycle: a man who had once believed in nothing but himself gradually came to believe his own words. (Hubbard and Trump have both been described as con men, but the former spent countless hours auditing himself, and Trump no longer seems conscious of his own lies.) Both fell upward into positions of power that exceeded their wildest expectations, and it’s frightening to consider what might come next, when we consider how Hubbard was transformed. During his lifetime, Hubbard had a small handful of active followers; the Church of Scientology has perhaps 30,000, although, like Trump, they’re prone to exaggerate such numbers; Trump has millions. It’s especially telling that both Hubbard and Trump loved Citizen Kane. I love it, too. But both men ended up in their own personal Xanadu. And as I’ve noted before, the only problem with that movie is that our affection for Orson Welles distracts us from the fact that Kane ultimately went crazy.

Don’t stay out of Riverdale


Riverdale

In the opening seconds of the series premiere of Riverdale, a young man speaks quietly in voiceover, his words playing over idyllic shots of American life:

Our story is about a town, a small town, and the people who live in the town. From a distance, it presents itself like so many other small towns all over the world. Safe. Decent. Innocent. Get closer, though, and you start seeing the shadows underneath. The name of our town is Riverdale.

Much later, we realize that the speaker is Jughead of Archie Comics fame, played by former Disney child star Cole Sprouse, which might seem peculiar enough in itself. But what I noticed first about this monologue is that it basically summarizes the prologue of Blue Velvet, which begins with images of roses and picket fences and then dives into the grass, revealing the insects ravening like feral animals in the darkness. It’s one of the greatest declarations of intent in all of cinema, and initially, there’s something a little disappointing in the way that Riverdale feels obliged to blandly state what Lynch put into a series of unforgettable images. Yet I have the feeling that series creator Roberto Aguirre-Sacasa, who says that Blue Velvet is one of his favorite movies, knows exactly what he’s doing. And the result promises to be more interesting than even he can anticipate.

Riverdale has been described as The O.C. meets Twin Peaks, which is how it first came to my attention. But it’s also a series on the CW, with all the good, the bad, and the lack of ugly that this implies. This is the network that produced The Vampire Diaries, the first three seasons of which unexpectedly generated some of my favorite television from the last few years, and it takes its genre shows very seriously. There’s a fascinating pattern at work within systems that produce such narratives on a regular basis, whether in pulp magazines or comic books or exploitation pictures: as long as you hit all the obligatory notes and come in under budget, you’re granted a surprising amount of freedom. The CW, like its predecessors, has become an unlikely haven for auteurs, and it’s the sort of place where a showrunner like Aguirre-Sacasa—who has an intriguing background in playwriting, comics, and television—can explore a sandbox like this for years. Yet it also requires certain heavy, obvious beats, like structural supports, to prop up the rest of the edifice. A lot of the first episode of Riverdale, like most pilots, is devoted to setting up its premise and characters for even the most distracted viewers, and it can be almost insultingly on the nose. It’s why it feels obliged to spell out its theme of dark shadows beneath its sunlit surfaces, which isn’t exactly hard to grasp. As Roger Ebert wrote decades ago in his notoriously indignant review of Blue Velvet: “What are we being told? That beneath the surface of Small Town, U.S.A., passions run dark and dangerous? Don’t stop the presses.”

Blue Velvet

As a result, if you want to watch Riverdale at all, you need to get used to being treated occasionally as if you were twelve years old. But Aguirre-Sacasa seems determined to have it both ways. Like Glee before it, it feels as if it’s being pulled in three different directions even before it begins, but in this case, it comes off less as an unwanted side effect than as a strategy. It’s worth noting that not only did Aguirre-Sacasa write for Glee itself, but he’s also the guy who stepped in to rewrite Spider-Man: Turn Off the Dark, which means that he knows something about wrangling intractable material for a mass audience under enormous scrutiny. (He’s also the chief creative officer of Archie Comics, which feels like a dream job in the best sort of way: one of his projects at the Yale School of Drama was a play about Archie encountering the murderers Leopold and Loeb, and he later received a cease and desist order from his future employer over Archie’s Weird Fantasy, which depicted its lead character as coming out of the closet.) Riverdale often plays like the work of a prodigiously talented writer trying to put his ideas into a form that could plausibly air on Thursdays after Supernatural. Like most shows at this stage, it’s also openly trying to decide what it’s supposed to be about. And I want to believe, on the basis of almost zero evidence, that Aguirre-Sacasa is deliberately attempting something almost unworkable, in hopes that he’ll be able to stick with it long enough—on a network that seems fairly indulgent of shows on the margins—to make it something special.

Most great television results from this sort of evolutionary process, and I’ve noted before—most explicitly in my Salon piece on The X-Files—that the best genre shows emerge when a jumble of inconsistent elements is given the chance to find its ideal form, usually because it lucks into a position where it can play under the radar for years. The pressures of weekly airings, fan response, critical reviews, and ratings, along with the unpredictable inputs of the cast and writing staff, lead to far more rewarding results than even the most visionary showrunner could produce in isolation. Writers of serialized narratives like comic books know this intuitively, and consciously or not, Aguirre-Sacasa seems to be trying something similar on television. It’s not an approach that would make sense for a series like Westworld, which was produced for so much money and with such high expectations that its creators had no choice but to start with a plan. But it might just work on the CW. I’m hopeful that Aguirre-Sacasa and his collaborators will use the mystery at the heart of the series much as Twin Peaks did, as a kind of clothesline on which they can hang a lot of wild experiments, only a certain percentage of which can be expected to work. Twin Peaks itself provides a measure of this method’s limitations: it mutated into something extraordinary, but it didn’t survive the departure of its original creative team. Riverdale feels like an attempt to recreate those conditions, and if it utilizes the Archie characters as its available raw material, well, why not? If Lynch had been able to get the rights, he might have used them, too.

Rob and Betty and Don and Laura

with 3 comments

Laura Petrie and Betty Draper

Mary Tyler Moore was the loveliest woman ever to appear on television, but you can only fully appreciate her charms if you also believe that Dick Van Dyke was maybe the most attractive man. I spent much of my youth obsessed with Rob and Laura Petrie on The Dick Van Dyke Show, which I think is the best three-camera sitcom of all time, and the one that secretly had the greatest impact on my inner life. Along with so much else, it was the first show that seemed to mine comedic and narrative material out of the act of its own creation. Rob was a comedy writer, and thanks to his scenes at the office with Sally and Buddy, I thought for a while I might want to do the same thing. I know now that this wouldn’t be a great job for someone like me, but the image of it is still enticing. What made it so appealing, I’ve come to realize, is that when Rob came home, the show never ended—he was married to a woman who was just as smart, funny, and talented as he was. (Looking at Moore, who was only twenty-four when the series premiered, I’m reminded a little of Debbie Reynolds in Singin’ in the Rain, who effortlessly kept up with her older costars under conditions of enormous pressure.) It was my first and best picture of a life that seemed complete both at work and at home. And the fact that both Moore and Van Dyke seem to have been drinking heavily during the show’s production only points at how difficult it must have been to sustain that dream on camera.

What strikes me the most now about The Dick Van Dyke Show is the uncanny way in which it anticipates the early seasons of Mad Men. In both shows, a husband leaves his idyllic home in Westchester each morning to commute to a creative job in Manhattan, where he brainstorms ideas with his wisecracking colleagues. (Don and Betty lived in Ossining, but the house that was used for exterior shots was in New Rochelle, with Rob and Laura presumably just up the road.) His wife is a much younger knockout—Laura was a former dancer, Betty a model—who seems as if she ought to be doing something else besides watching a precocious kindergartener. The storylines are about evenly divided between the home and the office, and between the two, they give us a fuller portrait of the protagonist than most shows ever do. The influence, I can only assume, was unconscious. We know that Matthew Weiner watched the earlier series, as he revealed in a GQ interview when asked about life in the writers' room:

We all came up in this system…When I watch The Dick Van Dyke Show, I’m like, Wow, this is the same job. There’s the twelve-year-old kid on the staff. There’s the guy who delivers lunch. I guarantee you I can walk into [another writer’s office] and, except for where the snack room is, it’s gonna be similar on some level.

And I don’t think it’s farfetched to guess that The Dick Van Dyke Show was Weiner’s introduction, as it was for so many of us, to the idea of writing for television in the first place.

Rob Petrie and Don Draper

The more I think about it, the more these two shows feel like mirror images of each other, just as “Don and Betty Draper” and “Rob and Laura Petrie” share the same rhythm. I’m not the first one to draw this connection, but instead of highlighting the obvious contrast between the sunniness of the former and the darkness of the latter, I’d prefer to focus on what they have in common. Both are hugely romantic visions of what it means to be a man who can afford a nice house in Westchester based solely on his ability to pitch better ideas than anybody else. Mad Men succeeds in large part because it manages to have it both ways. The series implicitly rebukes Don’s personal behavior, but it never questions his intelligence or talent. It doesn’t really sour us on advertising, any more than it does on drinking or smoking, and I don’t have any doubt that there are people who will build entire careers around its example. Both shows are the work of auteurs—Carl Reiner and Matt Weiner, whose names actually rhyme—who can’t help but let their joy in their own technical facility seep into the narrative. Rob and Don are veiled portraits of their creators. One is a lot better and the other a whole lot worse, but both amount to alternate lives, enacted for an audience, that reflect the restless activity behind the scenes.

And the real difference between Mad Men and The Dick Van Dyke Show doesn’t have anything to do with the decades in which they first aired, or even with the light and dark halves of the Camelot era that they both evoke. It comes down to the contrast between Laura and Betty—who, on some weird level, seem to represent opposing sides of the public image of Jacqueline Kennedy, and not just because the hairstyles are so similar. Betty was never a match for Don at home, and the only way in which she could win the game, which she did so emphatically, was to leave him altogether. Laura was Rob’s equal, intellectually and comedically, and she fit so well into the craziness at The Alan Brady Show that it wasn’t hard to envision her working there. In some ways, she was limited by her role as a housewife, and she would find her fullest realization in her second life as Mary Richards. But the enormous gap between Rob and Don boils down to the fact that one was married to a full partner and teammate, while the other had to make do with a glacial symbol of his success. When I think of them, I remember two songs. One is “Song of India,” which plays as Betty descends the hotel steps in “For Those Who Think Young,” as Don gazes at her so longingly that he seems to be seeing the ghost of his own marriage. The other is “Mountain Greenery,” which Rob and Laura sing at a party at their house, in a scene that struck me as contrived even at the time. Were there ever parties like this? It doesn’t really matter. Because I can’t imagine Don and Betty doing anything like it.

Written by nevalalee

January 26, 2017 at 9:05 am

Listening to “Retention,” Part 3

leave a comment »

Retention

Note: I’m discussing the origins of “Retention,” the episode that I wrote for the audio science fiction anthology series The Outer Reach. It’s available for streaming here on the Howl podcast network, and you can get a free month of access by using the promotional code REACH.

One of the unsung benefits of writing for film, television, or radio is that it requires the writer to conform to a fixed format on the printed page. The stylistic conventions of the screenplay originally evolved for the sake of everyone but the screenwriter: it’s full of small courtesies for the director, actors, sound editor, production designer, and line producer, and in theory, it’s supposed to result in one minute of running time per page—although, in practice, the differences between filmmakers and genres make even this rule of thumb almost meaningless. But it also offers certain advantages for writers, too, even if it’s mostly by accident. It can be helpful for authors to force themselves to work within the typeface, margins, and arbitrary formatting rules that the script imposes: it leaves them with minimal freedom except in the choice of the words themselves. Because all the dialogue is indented, you can see the balance between talk and action at a glance, and you eventually develop an intuition about how a good script should look when you flip rapidly through the pages. (The average studio executive, I suspect, rarely does much more than this.) Its typographical constraints amount to a kind of poetic form, and you find yourself thinking in terms of the logic of that space. As the screenwriter Terry Rossio put it:

In retrospect, my dedication—or my obsession—toward getting the script to look exactly the way it should, no matter how long it took—that’s an example of the sort of focus one needs to make it in this industry…If you find yourself with this sort of obsessive behavior—like coming up with inventive ways to cheat the page count!—then, I think, you’ve got the right kind of attitude to make it in Hollywood.

When it came time to write “Retention,” I was looking forward to working within a new template: the radio play. I studied other radio scripts and did my best to make the final result look right. This was more for my own sake than for anybody else’s, and I’m pretty sure that my producer would have been happy to get a readable script in any form. But I had a feeling that it would be helpful to adapt my habitual style to the standard format, and it was. In many ways, this story was a more straightforward piece of writing than most: it’s just two actors talking with minimal sound effects. Yet the stark look of the radio script, which consists of nothing but numbered lines of dialogue alternating between characters, had a way of clarifying the backbone of the narrative. Once I had an outline, I began by typing the dialogue as quickly as I could, almost in real time, without even indicating the identities of the speakers. Then I copied and pasted the transcript—which is how I came to think of it—into the radio play template. For the second draft, I found myself making small changes, as I always do, so that the result would look good on the page, rewriting lines to make for an even right margin and tightening speeches so that they wouldn’t fall across a page break. My goal was to come up with a document that would be readable and compelling in itself. And what distinguished it from my other projects was that I knew that it would ultimately be translated into performance, which was how its intended audience would experience it.

A page from the radio script of "Retention"

I delivered a draft of the script to Nick White, my producer, on January 8, 2016, which should give you a sense of how long it takes for something like this to come to fruition. Nick made a few edits, and I did one more pass on the whole thing, but we essentially had a finished version by the end of the month. After that, there was a long stretch of waiting, as we ran the script past the Howl network and began the process of casting. It went out to a number of potential actors, and it wasn’t until September that Aparna Nancherla and Echo Kellum came on board. (I also finally got paid for the script, which was noteworthy in itself—not many similar projects can afford to pay their writers. The amount was fairly modest, but it was more than reasonable for what amounted to a week of work.) In November, I got a rough cut of the episode, and I was able to make a few small suggestions. Finally, on December 21, it premiered online. All told, it took about a year to transform my initial idea into fifteen minutes of audio, so I was able to listen to the result with a decent amount of detachment. I’m relieved to say that I’m pleased with how it turned out. Casting Aparna Nancherla as Lisa, in particular, was an inspired touch. And although I hadn’t anticipated the decision to process her voice to make it more obvious from the beginning that she was a chatbot, on balance, I think that it was a valid choice. It’s probably the most predictable of the story’s twists, and by tipping it in advance, it serves as a kind of mislead for listeners, who might catch onto it quickly and conclude, incorrectly, that it was the only surprise in store.

What I found most interesting about the whole process was how it felt to deliver what amounted to a blueprint of a story for others to execute. Playwrights and screenwriters do it all the time, but for me, it was a novel experience: I may not be entirely happy with every story I’ve published, but they’re all mine, and I bear full responsibility for the outcome. “Retention” gave me a taste, in a modest way, of how it feels to hand an idea over to someone else, and of the peculiar relationship between a script and the dramatic work itself. Many aspiring screenwriters like to think that their vision on the page is complete, but it isn’t, and it has to pass through many intermediaries—the actors, the producer, the editor, the technical team—before it comes out on the other side. On balance, I prefer writing my own stuff, but I came away from “Retention” with valuable lessons that I expect to put into practice, whether or not I write for audio again. (I’m hopeful that there will be a second season of The Outer Reach, and I’d love to be a part of it, but its future is still up in the air.) I’ve spent most of my career worrying about issues of clarity, and in the case of a script, this isn’t an abstract goal, but a strategic element that can determine how faithfully the story is translated into its final form. Any fuzzy thinking early on will only be magnified in subsequent stages, so there’s a huge incentive for the writer to make the pieces as transparent and logical as possible. This is especially true when you’re providing a sketch for someone else to finish, but it also applies when you’re writing for ordinary readers, who are doing nothing else, after all, but turning the story into a movie in their heads.

Written by nevalalee

January 25, 2017 at 10:30 am

Rogue One and the logic of the story reel

leave a comment »

Gareth Edwards and Felicity Jones on the set of Rogue One

Last week, I came across a conversation on Yahoo Movies UK with John Gilroy and Colin Goudie, two of the editors who worked on Rogue One. I’ve never read an interview with a movie editor that wasn’t loaded with insights into storytelling, and this one is no exception. Here’s my favorite tidbit, in which Goudie describes cutting together a story reel early in the production process:

There was no screenplay, there was just a story breakdown at that point, scene by scene. [Director Gareth Edwards] got me to rip hundreds of movies and basically make Rogue One using other films so that they could work out how much dialogue they actually needed in the film.

It’s very simple to have a line [in the script] that reads “Krennic’s shuttle descends to the planet.” Now that takes maybe two to three seconds in other films, but if you look at any other Star Wars film you realize that takes forty-five seconds or a minute of screen time. So by making the whole film that way—I used a lot of the Star Wars films—but also hundreds of other films, too, it gave us a good idea of the timing.

This is a striking observation in itself. If Rogue One does an excellent job of recreating the feel of its source material, and I think it does, it’s because it honors its rhythms—which differ in subtle respects from those of other films—to an extent that the recent Star Trek movies mostly don’t. Goudie continues:

For example, the sequence of them breaking into the vault, I was ripping the big door closing in WarGames to work out how long does a vault door take to close.

So that’s what I did, and that was three months work to do that, and that had captions at the bottom which explained the action that was going to be taking place, and two thirds of the screen was filled with the concept art that had already been done and one quarter, the bottom corner, was the little movie clip to give you how long that scene would actually take.

Then I used dialogue from other movies to give you a sense of how long it would take in other films for someone to be interrogated. So for instance, when Jyn gets interrogated at the beginning of the film by the Rebel council, I used the scene where Ripley gets interrogated in Aliens.

Rogue One

This might seem like little more than interesting trivia, but there's actually a lot to unpack. You could argue that the ability to construct an entire Star Wars movie out of analogous scenes from other films only points to how derivative the series has always been: it's hard to imagine doing this for, say, Manchester by the Sea, or even Inception. But that's also a big part of the franchise's appeal. Umberto Eco famously said that Casablanca was made up of the memories of other movies, and he suggested that a cult movie—which we can revisit in our imagination from different angles, rather than recalling it as a seamless whole—is necessarily "unhinged":

Only an unhinged movie survives as a disconnected series of images, of peaks, of visual icebergs. It should display not one central idea but many. It should not reveal a coherent philosophy of composition. It must live on, and because of, its glorious ricketiness.

After reminding us of the uncertain circumstances under which Casablanca was written and filmed, Eco then suggests: “When you don’t know how to deal with a story, you put stereotyped situations in it because you know that they, at least, have already worked elsewhere…My guess is that…[director Michael Curtiz] was simply quoting, unconsciously, similar situations in other movies and trying to provide a reasonably complete repetition of them.”

What interests me the most is Eco’s conclusion: “What Casablanca does unconsciously, other movies will do with extreme intertextual awareness, assuming also that the addressee is equally aware of their purposes.” He cites Raiders of the Lost Ark and E.T. as two examples, and he easily could have named Star Wars as well, which is explicitly made up of such references. (In fact, George Lucas was putting together story reels before there was even a word for it: “Every time there was a war movie on television, like The Bridges at Toko-Ri, I would watch it—and if there was a dogfight sequence, I would videotape it. Then we would transfer that to 16mm film, and I’d just edit it according to my story of Star Wars. It was really my way of getting a sense of the movement of the spaceships.”) What Eco doesn’t mention—perhaps because he was writing a generation ago—is how such films can pass through intertextuality and end up on the other side. They create memories for viewers who aren’t familiar with the originals, and they end up being quoted in turn by filmmakers who only know Star Wars. They become texts in themselves. In assembling a story reel from hundreds of other movies, Edwards and Goudie were only doing in a literal fashion what most storytellers do in their heads. They figure out how a story should “look” at its highest level, in a rough sketch of the whole, and fill in the details later. The difference here is that Rogue One had the budget and resources to pay someone to do it for real, in a form that could be timed down to the second and reviewed by others, on the assumption that it would save money and effort down the line. Did it work? I’ll be talking about this more tomorrow.

Written by nevalalee

January 12, 2017 at 9:13 am

The tentpole test

leave a comment »

Rogue One: A Star Wars Story

How do you release blockbusters like clockwork and still make each one seem special? It's an issue that the movie industry is anxious to solve, and there's a lot riding on the outcome. When I saw The Phantom Menace nearly two decades ago, there was an electric sense of excitement in the theater: we were pinching ourselves over the fact that we were about to see the opening crawl for a new Star Wars movie on the big screen. That air of expectancy diminished for the two prequels that followed, and not only because they weren't very good. There's a big difference, after all, between the accumulated anticipation of sixteen years and one in which the installments are only a few years apart. The decade that elapsed between Revenge of the Sith and The Force Awakens was enough to ramp it up again, as if fan excitement were a battery that recovers some of its charge after it's allowed to rest for a while. In the past, when we've watched a new chapter in a beloved franchise, our experience hasn't just been shaped by the movie itself, but by the sudden release of energy that has been bottled up for so long. That kind of prolonged wait can prevent us from honestly evaluating the result—I wasn't the only one who initially thought that The Phantom Menace had lived up to my expectations—but that isn't necessarily a mistake. A tentpole picture is named for the support that it offers to the rest of the studio, but it also plays a central role in the lives of fans, which have been going on long before the film starts and will continue after it ends. As Robert Frost once wrote about a different tent, it's "loosely bound / By countless silken ties of love and thought / to every thing on earth the compass round."

When you have too many tentpoles coming out in rapid succession, however, the outcome—if I can switch metaphors yet again—is a kind of wave interference that can lead to a weakening of the overall system. On Christmas Eve, I went to see Rogue One, which was preceded by what felt like a dozen trailers. One was for Spider-Man: Homecoming, which left me with a perplexing feeling of indifference. I'm not the only one to observe that the constant onslaught of Marvel movies makes each installment feel less interesting, but in the case of Spider-Man, we actually have a baseline for comparison. Two baselines, really. I can't defend every moment of the three Sam Raimi films, but there's no question that each of those movies felt like an event. There was even enough residual excitement lingering after the franchise was rebooted to make me see The Amazing Spider-Man in the theater, and even its sequel felt, for better or worse, like a major movie. (I wonder sometimes if audiences can sense the pressure when a studio has a lot riding on a particular film: even a mediocre movie can seem significant if a company has tethered all its hopes to it.) Spider-Man: Homecoming, by contrast, feels like just one more component in the Marvel machine, and not even a particularly significant one. It has the effect of diminishing a superhero who ought to be at the heart of any universe in which he appears, relegating one of the two or three most successful comic book characters of all time to a supporting role in a larger universe. And because we still remember how central he was to no fewer than two previous franchises, it feels like a demotion, as if Spider-Man were an employee who left the company, came back, and now reports to Iron Man.

Spider-Man in Captain America: Civil War

It isn’t that I’m all that emotionally invested in the future of Spider-Man, but it’s a useful case study for what it tells us about the pitfalls of these films, which can take something that once felt like a milestone and reduce it to a midseason episode of an ongoing television series. What’s funny, of course, is that the attitude we’re now being asked to take toward these movies is actually closer to the way in which they were originally conceived. The word “episode” is right there in the title of every Star Wars movie, which George Lucas saw as an homage to classic serials, with one installment following another on a weekly basis. Superhero films, obviously, are based on comic books, which are cranked out by the month. The fact that audiences once had to wait for years between movies may turn out to have been a historical artifact caused by technological limitations and corporate inertia. Maybe the logical way to view these films is, in fact, in semiannual installments, as younger viewers are no doubt growing up to expect. In years to come, the extended gaps between these movies in prior decades will seem like a structural quirk, rather than an inherent feature of how we relate to them. This transition may not be as meaningful as, say, the shift from silent films to the talkies, but they imply a similar change in the way we relate to the film onscreen. Blockbusters used to be released with years of anticipation baked into the response from moviegoers, which is no longer something that can be taken for granted. It’s a loss, in its way, to fan culture, which had to learn how to sustain itself during the dry periods between films, but it also implies that the movies themselves face a new set of challenges.

To be fair, Disney, which controls both the Marvel and Star Wars franchises, has clearly thought a lot about this problem, and they’ve hit on approaches that seem to work pretty well. With the Marvel Universe, this means pitching most of the films at a level at which they’re just good enough, but no more, while investing real energy every few years into a movie that is first among equals. This leads to a lot of fairly mediocre installments, but also to the occasional Captain America: Civil War, which I think is the best Marvel movie yet—it pulls off the impossible task of updating us on a dozen important characters while also creating real emotional stakes in the process, which is even more difficult than it looks. Rogue One, which I also liked a lot, takes a slightly different tack. For most of the first half, I was skeptical of how heavily it was leaning on its predecessors, but by the end, I was on board, and for exactly the same reason. This is a movie that depends on our knowledge of the prior films for its full impact, but it does so with intelligence and ingenuity, and there’s a real satisfaction in how neatly it aligns with and enhances the original Star Wars, while also having the consideration to close itself off at the end. (A lot of the credit for this may be due to Tony Gilroy, the screenwriter and unbilled co-director, who pulled off much of the same feat when he structured much of The Bourne Ultimatum to take place during gaps in The Bourne Supremacy.) Relying on nostalgia is a clever way to compensate for the reduced buildup between movies, as if Rogue One were drawing on the goodwill that Star Wars built up and hasn’t dissipated, like a flywheel that serves as an uninterruptible power supply. Star Wars isn’t just a tentpole, but a source of energy. And it might just be powerful enough to keep the whole machine running forever.

The steady hand

with 2 comments

Danny Lloyd in The Shining

Forty years ago, the cinematographer Garrett Brown invented the Steadicam. It was a stabilizer attached to a harness that allowed a camera operator, walking on foot or riding in a vehicle, to shoot the kind of smooth footage that had previously only been possible using a dolly. Before long, it had revolutionized the way in which both movies and television were shot, and not always in the most obvious ways. When we think of the Steadicam, we’re likely to remember virtuoso extended takes like the Copacabana sequence in Goodfellas, but it can also be a valuable tool even when we aren’t supposed to notice it. As the legendary Robert Elswit said recently to the New York Times:

“To me, it’s not a specialty item,” he said. “It’s usually there all the time.” The results, he added, are sometimes “not even necessarily recognizable as a Steadicam shot. You just use it to get something done in a simple way.”

Like digital video, the Steadicam has had a leveling influence on the movies. Scenes that might have been too expensive, complicated, or time-consuming to set up in the conventional manner can be done on the fly, which has opened up possibilities both for innovative stylists and for filmmakers who are struggling to get their stories made at all.

Not surprisingly, there are skeptics. In On Directing Film, which I think is the best book on storytelling I’ve ever read, David Mamet argues that it’s a mistake to think of a movie as a documentary record of what the protagonist does, and he continues:

The Steadicam (a hand-held camera), like many another technological miracle, has done injury; it has injured American movies, because it makes it so easy to follow the protagonist around, one no longer has to think, “What is the shot?” or “Where should I put the camera?” One thinks, instead, “I can shoot the whole thing in the morning.”

This conflicts with Mamet’s approach to structuring a plot, which hinges on dividing each scene into individual beats that can be expressed in purely visual terms. It’s a method that emerges naturally from the discipline of selecting shots and cutting them together, and it’s the kind of hard work that we’re often tempted to avoid. As Mamet adds in a footnote: “The Steadicam is no more capable of aiding in the creation of a good movie than the computer is in the writing of a good novel—both are labor-saving devices, which simplify and so make more attractive the mindless aspects of creative endeavor.” The casual use of the Steadicam seduces directors into conceiving of the action in terms of “little plays,” rather than in fundamental narrative units, and it removes some of the necessity of disciplined thinking beforehand.

Michael Keaton and Edward Norton in Birdman

But it isn’t until toward the end of the book that Mamet delivers his most ringing condemnation of what the Steadicam represents:

“Wouldn’t it be nice,” one might say, “if we could get this hall here, really around the corner from that door there; or to get that door here to really be the door that opens on the staircase to that door there? So we could just move the camera from one to the next?”

It took me a great deal of effort and still takes me a great deal and will continue to take me a great deal of effort to answer the question thusly: no, not only is it not important to have those objects literally contiguous; it is important to fight against this desire, because fighting it reinforces an understanding of the essential nature of film, which is that it is made of disparate shots, cut together. It’s a door, it’s a hall, it’s a blah-blah. Put the camera “there” and photograph, as simply as possible, that object. If we don’t understand that we both can and must cut the shots together, we are sneakily falling victim to the mistaken theory of the Steadicam.

This might all sound grumpy and abstract, but it isn’t. Take Birdman. You might well love Birdman—plenty of viewers evidently did—but I think it provides a devastating confirmation of Mamet’s point. By playing as a single, seemingly continuous shot, it robs itself of the ability to tell the story with cuts, and it inadvertently serves as an advertisement of how most good movies come together in the editing room. It’s an audacious experiment that never needs to be tried again. And it wouldn’t exist at all if it weren’t for the Steadicam.

But the Steadicam can also be a thing of beauty. I don’t want to discourage its use by filmmakers for whom it means the difference between making a movie under budget and never making it at all, as long as they don’t forget to think hard about all of the constituent parts of the story. There’s also a place for the bravura long take, especially when it depends on our awareness of the unfaked passage of time, as in the opening of Touch of Evil—a long take, made without benefit of a Steadicam, that runs the risk of looking less astonishing today because technology has made this sort of thing so much easier. And there’s even room for the occasional long take that exists only to wow us. De Palma has a fantastic one in Raising Cain, which I watched again recently, that deserves to be ranked among the greats. At its best, it can make the filmmaker’s audacity inseparable from the emotional core of the scene, as David Thomson observes of Goodfellas: “The terrific, serpentine, Steadicam tracking shot by which Henry Hill and his girl enter the Copacabana by the back exit is not just his attempt to impress her but Scorsese’s urge to stagger us and himself with bravura cinema.” The best example of all is The Shining, with its tracking shots of Danny pedaling his Big Wheel down the deserted corridors of the Overlook. It’s showy, but it also expresses the movie’s basic horror, as Danny is inexorably drawn to the revelation of his father’s true nature. (And it’s worth noting that much of its effectiveness is due to the sound design, with the alternation of the wheels against the carpet and floor, which is one of those artistic insights that never grows dated.) The Steadicam is a tool like any other, which means that it can be misused. It can be wonderful, too. But it requires a steady hand behind the camera.

The decline of the west

leave a comment »

Evan Rachel Wood on Westworld

Note: Spoilers follow for the season finale of Westworld.

Over time, as a society, we’ve more or less figured out how we’re all supposed to deal with spoilers. When a movie first comes out, there’s a grace period in which most of us agree not to discuss certain aspects of the story, especially the ending. Usually, reviewers will confine their detailed observations to the first half of the film, which can be difficult for a critic who sees his or her obligation as that of a thoughtful commentator, rather than of a consumer advisor who simply points audiences in the right direction on opening weekend. If there’s a particularly striking development before the halfway mark, we usually avoid talking about that, too. (Over time, the definition of what constitutes a spoiler has expanded to the point where some fans apply it to any information about a film whatsoever, particularly for big franchise installments.) For six months or so, we remain discreet—and most movies, it’s worth noting, are forgotten long before we even get to that point. A movie with a major twist at the end may see that tacit agreement extended for years. Eventually, however, it becomes fair game. Sometimes it’s because a surprise has seeped gradually into the culture, so that a film like Citizen Kane or Psycho becomes all but defined by its secrets. In other cases, as with The Sixth Sense or Fight Club, it feels more like we’ve collectively decided that anyone who wants to see it has already gotten a chance, and now we can talk about it openly. And up until now, it’s a system that has worked pretty well.

But this approach no longer makes sense for a television show that is still on the air, at least if the case of Westworld is any indication. We’re not talking about spoilers, exactly, but about a certain kind of informed speculation. The idea that one of the plotlines on Westworld was actually an extended flashback first surfaced in discussions on communities like Reddit, was picked up by the commenters on the reviews on mainstream websites, led theorists to put together elaborate chronologies and videos to organize the evidence, and finally made its way into think pieces. Long before last night’s finale, it was clear that the theory had to be correct. The result didn’t exactly ruin my enjoyment, since it turned out to be just one thread in a satisfying piece of storytelling, but I’ll never know what it would have been like to have learned the truth along with Dolores, and I suspect that a lot of other viewers felt the same twinge of regret. (To be fair, the percentage of people who keep up with this sort of theorizing online probably amounts to a fraction of the show’s total viewership, and the majority of the audience experienced the reveal pretty much as the creators envisioned it.) There’s clearly no point in discouraging this kind of speculation entirely. But when a show plays fair, as Westworld did, it’s only a matter of time before somebody solves the mystery in advance. And because a plausible theory can spread so quickly through the hive mind, it makes us feel smarter, as individuals, than we really are, which compromises our reactions to what was a legitimately clever and resonant surprise.

The Westworld episode "The Bicameral Mind"

Westworld isn’t the first show to be vulnerable to this kind of collective sleuthing: Game of Thrones has been subjected to it for years, especially when it comes to the parentage, status, and ultimate fate of a certain character who otherwise wouldn’t seem interesting enough to survive. In both cases, it’s because the show—or the underlying novels—provided logical clues along the way to prepare us, in the honorable fashion of all good storytelling. The trouble is that these rules were established at a time when most works of narrative were experienced in solitude. Even if one out of three viewers figured out the twist in The Usual Suspects before the movie was halfway done, it didn’t really affect the experience of the others in the theater, since we don’t tend to discuss the story in progress out loud. That was true of television, too, for most of the medium’s history. These days, however, many of us are essentially talking about these stories online while they’re still happening, so it isn’t surprising if the solutions can spread like a virus. I don’t blame the theorists, because this kind of speculation can be an absorbing game in its own right. But it’s so powerful that it needs to be separated from the general population. It requires a kind of self-policing, or quarantine, that has to become second nature to every viewer of this kind of show. Reviewers need to figure out how to deal with it, too. Otherwise, shows will lose the incentive to play fair, relying instead on blunter, more mechanical kinds of surprise. And this would be a real shame, because Westworld has assembled the pieces so effectively that I don’t doubt it will continue to do so in the future.

Watching the finale, I was curious to see how it would manage to explain the chronology of Dolores’s story without becoming hopelessly confusing, and it did a beautiful job, mostly by subordinating it to the larger questions of William’s fate, Dolores’s journey, and Ford’s master plan, which has taken thirty-five years to come to fruition. (In itself, this is a useful insight into storytelling: it’s easier for the audience to make a big conceptual leap when it feeds into an emotional arc that is already in progress, and if it’s treated as a means, not an end.) If anything, the reveal of the identity of Wyatt was even more powerful—although, oddly, the fact that everything has unfolded according to Ford’s design undermines the agency of the very robots that it was supposed to defend. It’s an emblem for why this excellent season remains one notch down from the level of a masterpiece, thanks to the need of its creators, like Ford, to maintain a tight level of control. Still, if it lasts for as long as I think it will, it may not even matter how much of it the Internet figured out on first viewing. For a television show, the lifespan of a spoiler seems to play in reverse: instead of a grace period followed by free discussion after enough time has passed, we get intense speculation while the show airs, giving way to silence once we’ve all moved on to the next big thing. If Westworld endures as a work of art, it will be seen just as it was intended by those who discover it much later, after the flurry of speculation has faded. I don’t know how long it will take before it can be seen again with fresh eyes. But thirty-five years seems about right.

Written by nevalalee

December 5, 2016 at 9:24 am

Posted in Television


The analytical laboratory

leave a comment »

The Martian

Over the last few months, there’s been a surprising flurry of film and television activity involving the writers featured in my upcoming book Astounding. SyFy has announced plans to adapt Robert A. Heinlein’s Stranger in a Strange Land as a miniseries, with an imposing creative team that includes Hollywood power broker Scott Rudin and Zodiac screenwriter James Vanderbilt. Columbia is aiming to reboot Starship Troopers with producer Neal H. Moritz of The Fast and the Furious, prompting Paul Verhoeven, the director of the original, to comment: “Going back to the novel would fit very much in a Trump presidency.” The production company Legendary has bought the film and television rights to Dune, which first appeared as a serial edited by John W. Campbell in Analog. Meanwhile, Jonathan Nolan is apparently still attached to an adaptation of Isaac Asimov’s Foundation, although he seems rather busy at the moment. (L. Ron Hubbard remains relatively neglected, unless you want to count Leah Remini’s new show, which the Church of Scientology would probably hope you wouldn’t.) The fact that rights have been purchased and press releases issued doesn’t necessarily mean that anything will happen, of course, although the prospects for Stranger in a Strange Land seem strong. And while it’s possible that I’m simply paying more attention to these announcements now that I’m thinking about these writers all the time, I suspect that there’s something real going on.

So why the sudden surge of interest? The most likely, and also the most heartening, explanation is that we’re experiencing a revival of hard science fiction. Movies like Gravity, Interstellar, The Martian, and Arrival—which I haven’t seen yet—have demonstrated that there’s an audience for films that draw more inspiration from Clarke and Kubrick than from Star Wars. Westworld, whatever else you might think of it, has done much the same on television. And there’s no question that the environment for this kind of story is far more attractive now than it was even ten years ago. For my money, the most encouraging development is the movie Life, a horror thriller set on the International Space Station, which is scheduled to come out next summer. I’m tickled by it because, frankly, it doesn’t look like anything special: the trailer starts promisingly enough, but it ends by feeling very familiar. It might turn out to be better than it looks, but I almost hope that it doesn’t. The best sign that a genre is reaching maturity isn’t a series of singular achievements, but the appearance of works that are content to color inside the lines, consciously evoking the trappings of more visionary movies while remaining squarely focused on the mainstream. A film like Interstellar is always going to be an outlier. What we need are movies like what Life promises to be: a science fiction film of minimal ambition, but a certain amount of skill, and a willingness to copy the most obvious features of its predecessors. That’s when you’ve got a trend.

Jake Gyllenhaal in Life

The other key development is the growing market for prestige dramas on television, which is the logical home for Stranger in a Strange Land and, I think, Dune. It may be the case, as we’ve been told in connection with Star Trek: Discovery, that there isn’t a place for science fiction on a broadcast network, but there’s certainly room for it on cable. Combine this with the increased appetite for hard science fiction on film, and you’ve got precisely the conditions in which smart production companies should be snatching up the rights to Asimov, Heinlein, and the rest. Given the historically rapid rise and fall of such trends, they shouldn’t expect this window to remain open for long. (In a letter to Asimov on February 3, 1939, Frederik Pohl noted the flood of new science fiction magazines on newsstands, and he concluded: “Time is indeed of the essence…Such a condition can’t possibly last forever, and the time to capitalize on it is now; next month may be too late.”) What they’re likely to find, in the end, is that many of these stories are resistant to adaptation, and that they’re better off seeking out original material. There’s a reason that there have been so few movies derived from Heinlein and Asimov, despite the temptation that they’ve always presented. Heinlein, in particular, seems superficially amenable to the movies: he certainly knew how to write action in a way that Asimov couldn’t. But he also liked to spend the second half of a story picking apart the assumptions of the first, after sucking in the reader with an exciting beginning, and if you aren’t going to include the deconstruction, you might as well write something from scratch.

As it happens, the recent spike of action on the adaptation front has coincided with another announcement. Analog, the laboratory in which all these authors were born, is cutting back its production schedule to six double issues every year. This is obviously intended to manage costs, and it’s a reminder of how close to the edge the science fiction digests have always been. (To be fair, the change also coincides with a long overdue update of the magazine’s website, which is very encouraging. If this reflects a true shift from print to online, it’s less a retreat than a necessary recalibration.) It’s easy to contrast the game of pennies being played at the bottom with the expenditure of millions of dollars at the top, but that’s arguably how it has to be. Analog, like Astounding before it, was a machine for generating variations, which needs to be done on the cheap. Most stories are forgotten almost at once, and the few that survive the test of time are the ones that get the lion’s share of resources. All the while, the magazine persists as an indispensable form of research and development—a sort of skunk works that keeps the entire enterprise going. That’s been true since the beginning, and you can see this clearly in the lives of the writers involved. Asimov, Heinlein, Herbert, and their estates became wealthy from their work. Campbell, who more than any other individual was responsible for the rise of modern science fiction, did not. Instead, he remained in his little office, lugging manuscripts in a heavy briefcase twice a week on the train. He was reasonably well off, but not in a way that creates an empire of valuable intellectual property. Instead, he ran the lab. And we can see the results all around us.

The Westworld variations

leave a comment »

Jeffrey Wright on Westworld

Note: Spoilers follow for the most recent episode of Westworld.

I’ve written a lot on this blog about the power of ensembles, which allow television shows to experiment with different combinations of characters. Usually, it takes a season or two for the most fruitful pairings to emerge, and they can take even the writers by surprise. When a series begins, characters tend to interact based on where the plot puts them, and those initial groupings are based on little more than the creator’s best guess. Later, when the strengths of the actors have become apparent and the story has wandered in unanticipated directions, you end up with wonderful pairings that you didn’t even know you wanted. Last night’s installment of Westworld features at least two of these. The first is an opening encounter between Bernard and Maeve that gets the episode off to an emotional high that it never quite manages to top: it hurries Bernard to the next—and maybe last—stage of his journey too quickly to allow him to fully process what Maeve tells him. But it’s still nice to see them onscreen together. (They’re also the show’s two most prominent characters of color, but its treatment of race is so deeply buried that it barely even qualifies as subtext.) The second nifty scene comes when Charlotte, the duplicitous representative from the board, shows up in the Man in Black’s storyline. It’s more plot-driven, and it exists mostly to feed us some useful pieces of backstory. But there’s an undeniable frisson whenever two previously unrelated storylines reveal a hidden connection.

I hope that the show gives us more moments like this, but I’m also a little worried that it can’t. The scenes that I liked most in “The Well-Tempered Clavier” were surprising and satisfying precisely because the series has been so meticulous about keeping its plot threads separated. This may well be because at least one subplot is occurring in a different timeline, but more often, it’s a way of keeping things orderly: there’s so much happening in various places that the show is obliged to let each story go its own way. I don’t fault it for this, because this is such a superbly organized series, and although there are occasional lulls, they’ve been far fewer than you’d expect from a show with this level of complexity. But very little of it seems organic or unanticipated. This might seem like a quibble. Yet I desperately want this show to be as great as it shows promise of being. And if there’s one thing that the best shows of the last decade—from Mad Men to Breaking Bad to Fargo—have in common, it’s that they enjoy placing a few characters in a room and simply seeing what happens. You could say that Westworld is an inherently different sort of series, and that’s fine. But it’s such an effective narrative machine that it leaves me a little starved for those unpredictable moments that television, of all media, is the most likely to produce. (Its other great weakness is its general air of humorlessness, which arises from the same cause.) This is one of the most plot-heavy shows I’ve ever seen, but it’s possible to tell a tightly structured story while still leaving room for the unexpected. In fact, that’s one sign of mastery.

Evan Rachel Wood on Westworld

And you don’t need to look far for proof. In a pivotal passage in The Films of Akira Kurosawa, one of my favorite books on the movies, Donald Richie writes of “the irrational rightness of an apparently gratuitous image in its proper place,” and he goes on to say:

Part of the beauty of such scenes…is just that they are “thrown away” as it were, that they have no place, that they do not ostensibly contribute, that they even constitute what has been called bad filmmaking. It is not the beauty of these unexpected images, however, that captivates…but their mystery. They must remain unexplained. It has been said that after a film is over all that remains are a few scattered images, and if they remain then the film was memorable…Further, if one remembers carefully one finds that it is only the uneconomical, mysterious images which remain…Kurosawa’s films are so rigorous and, at the same time, so closely reasoned, that little scenes such as this appeal with the direct simplicity of water in the desert.

“Rigorous” and “closely reasoned” are two words that I’m sure the creators of Westworld would love to hear used to describe their show. But when you look at a movie like Seven Samurai—which on some level is the greatest western ever made—you have to agree with Richie: “What one remembers best from this superbly economical film then are those scenes which seem most uneconomical—that is, those which apparently add nothing to it.”

I don’t know if Westworld will ever become confident enough to offer viewers more water in the desert, but I’m hopeful that it will, because the precedent exists for a television series giving us a rigorous first season that it blows up down the line. I’m thinking, in particular, of Community, a show that might otherwise seem to have little in common with Westworld. It’s hard to remember now, after six increasingly nutty seasons, but Community began as an intensely focused sitcom: for its debut season, it didn’t even leave campus. The result gave the show what I’ve called a narrative home base, and even though I’m rarely inclined to revisit that first season, the groundwork that it laid was indispensable. It turned Greendale into a real place, and it provided a foundation for even the wildest moments to follow. Westworld seems to be doing much the same thing. Every scene so far has taken place in the park, and we’ve only received a few scattered hints of what the world beyond might be like—and whatever it is, it doesn’t sound good. The escape of the hosts from the park feels like an inevitable development, and the withholding of any information about what they’ll find is obviously a deliberate choice. This makes me suspect that this season is restricting itself on purpose, to prepare us for something even stranger, and in retrospect, it will seem cautious, compared to whatever else Westworld has up its sleeve. It’s the baseline from which crazier, more unexpected moments will later arise. Or, to take a page from the composer of “The Well-Tempered Clavier,” this season is the aria, and the variations are yet to come.

Written by nevalalee

November 28, 2016 at 8:35 am

Cain rose up

with 2 comments

John Lithgow in Raising Cain

I first saw Brian De Palma’s Raising Cain when I was fourteen years old. In a weird way, it amounted to a peak moment of my early adolescence: I was on a school trip to our nation’s capital, sharing a hotel room with my friends from middle school, and we were just tickled to get away with watching an R-rated movie on cable. The fact that we ended up with Raising Cain doesn’t quite compare with the kids on The Simpsons cheering at the chance to see Barton Fink, but it isn’t too far off. I think that we liked it, and while I won’t claim that we understood it, that doesn’t mean much of anything—it’s hard for me to imagine anybody, of any age, entirely understanding this movie, which includes both me and De Palma himself. A few years later, I caught it again on television, and while I can’t say I’ve thought about it much since, I never forgot it. Gradually, I began to catch up on my De Palma, going mostly by whatever movies made Pauline Kael the most ecstatic at the time, which in itself was an education in the gap between a great critic’s pet enthusiasms and what exists on the screen. (In her review of The Fury, Kael wrote: “No Hitchcock thriller was ever so intense, went so far, or had so many ‘classic’ sequences.” I love Kael, but there are at least three things wrong with that sentence.) And ultimately De Palma came to mean a lot to me, as he does to just about anyone who responds to the movies in a certain way.

When I heard about the recut version of Raising Cain—in an interview with John Lithgow on The A.V. Club, no less, in which he was promoting his somewhat different role on The Crown—I was intrigued. And its backstory is particularly interesting. Shortly before the movie was first released, De Palma moved a crucial sequence from the beginning to the middle, eliminating an extended flashback and allowing the film to play more or less chronologically. He came to regret the change, but it was too late to do anything about it. Years later, a freelance director and editor named Peet Gelderblom read about the original cut and decided to restore it, performing a judicious edit on a digital copy. He put it online, where, unbelievably, it was seen by De Palma himself, who not only loved it but asked that it be included as a special feature on the new Blu-ray release. If nothing else, it’s a reminder of the true possibilities of fan edits, which have mostly been devoted to competing visions of the ideal version of Star Wars. With modern software, a fan can do for a movie what Walter Murch did for Touch of Evil, restoring it to the director’s original version based on a script or a verbal description. In the case of Raising Cain, this mostly just involved rearranging the pieces in the theatrical cut, but other fans have tackled such challenges as restoring all the deleted scenes in Twin Peaks: Fire Walk With Me, and there are countless other candidates.

Raising Cain

Yet Raising Cain might be the most instructive case study of all, because simply restoring the original opening to its intended place results in a radical transformation. It isn’t for everyone, and it’s necessary to grant De Palma his usual passes for clunky dialogue and characterization, but if you’re ready to meet it halfway, you’re rewarded with a thriller that twists back on itself like a Möbius strip. De Palma plunders his earlier movies so blatantly that it isn’t clear if he’s somehow paying loving homage to himself—bypassing Hitchcock entirely—or recycling good ideas that he feels like using again. The recut opens with a long mislead that recalls Dressed to Kill, which means that Lithgow barely even appears for the first twenty minutes. You can almost see why De Palma chickened out for the theatrical version: Lithgow’s performance as the meek Carter and his psychotic imaginary brother Cain feels too juicy to withhold. But the logic of the script was destroyed. For a film that tests an audience’s suspension of disbelief in so many other ways, it’s unclear why De Palma thought that a flashback would be too much for the viewer to handle. The theatrical release preserves all the great shock effects that are the movie’s primary reason for existing, but they don’t build to anything, and you’re left with a film that plays like a series of sketches. With the original order restored, it becomes what it was meant to be all along: a great shaggy dog story with a killer punchline.

Raising Cain is gleefully about nothing but itself, and I wouldn’t force anybody to watch it who wasn’t already interested. But the recut also serves as an excellent introduction to its director, just as the older version did for me: when I first encountered it, I doubt I’d seen anything by De Palma, except maybe The Untouchables, and Mission: Impossible was still a year away. It’s safe to say that if you like Raising Cain, you’ll like De Palma in general, and if you can’t get past its archness, campiness, and indifference to basic plausibility—well, I can hardly blame you. Watching it again, I was reminded of Blue Velvet, a far greater movie that presents the viewer with a similar test. It has the same mixture of naïveté and incredible technical virtuosity, with scenes that barely seem to have been written alternating with ones that push against the boundaries of the medium itself. You’re never quite sure if the director is in on the gag, and maybe it doesn’t matter. There isn’t much beauty in Raising Cain, and De Palma is a hackier and more mechanical director than Lynch, but both are so strongly visual that the nonsensory aspects of their films, like the obligatory scenes with the cops, seem to wither before our eyes. (It’s an approach that requires a kind of raw, intuitive trust from the cast, and as much as I enjoy what Lithgow does here, he may be too clever and resourceful an actor to really disappear into the role.) Both are rooted, crucially, in Hitchcock, who was equally obsessive, but was careful to never work from his own script. Hitchcock kept his secret self hidden, while De Palma puts it in plain sight. And if it turns out to be nothing at all, that’s probably part of the joke.
