Alec Nevala-Lee

Thoughts on art, creativity, and the writing life.


Peak television and the future of stardom


Kevin Costner in The Postman

Earlier this week, I devoured the long, excellent article by Josef Adalian and Maria Elena Fernandez of Vulture on the business of peak television. It’s full of useful insights and even better gossip—and it names plenty of names—but there’s one passage that really caught my eye, in a section about the huge salaries that movie stars are being paid to make the switch to the small screen:

A top agent defends the sums his clients are commanding, explaining that, in the overall scheme of things, the extra money isn’t all that significant. “Look at it this way,” he says. “If you’re Amazon and you’re going to launch a David E. Kelley show, that’s gonna cost $4 million an episode [to produce], right? That’s $40 million. You can have Bradley Whitford starring in it, [who is] gonna cost you $150,000 an episode. That’s $1.5 million of your $40 million. Or you could spend another $3.5 million [to get Costner] on what will end up being a $60 million investment by the time you market and promote it. You can either spend $60 [million] and have the Bradley Whitford show, or $63.5 [million] and have the Kevin Costner show. It makes a lot of sense when you look at it that way.”

With all due apologies to Bradley Whitford, I found this thought experiment fascinating, and not just for the reasons that the agent presumably shared it. It implies, for one thing, that television—which is often said to be overtaking Hollywood in terms of quality—is becoming more like feature filmmaking in another respect: it’s the last refuge of the traditional star. We frequently hear that movie stardom is dead and that audiences are drawn more to franchises than to recognizable faces, so the fact that cable and streaming networks seem intensely interested in signing film stars, in a post-True Detective world, implies that their model is different. Some of it may be due to the fact, as William Goldman once said, that no studio executive ever got fired for hiring a movie star: as the new platforms fight to establish themselves, it makes sense that they’d fall back on the idea of star power, which is one of the few things that corporate storytelling has ever been able to quantify or understand. It may also be because the marketing strategy for television inherently differs from that for film: an online series is unusually dependent on media coverage to stand out from the pack, and signing a star always generates headlines. Or at least it once did. (The Vulture article notes that Woody Allen’s new series for Amazon “may end up marking peak Peak TV,” and it seems a lot like a deal that was made for the sake of the coverage it would produce.)

Kevin Costner in JFK

But the most plausible explanation lies in simple economics. As the article explains, Netflix and the other streaming companies operate according to a “cost-plus” model: “Rather than holding out the promise of syndication gold, the company instead pays its studio and showrunner talent a guaranteed up-front profit—typically twenty or thirty percent above what it takes to make a show. In exchange, it owns all or most of the rights to distribute the show, domestically and internationally.” This limits the initial risk to the studio, but also the potential upside: nobody involved in producing the show itself will see any money on the back end. In addition, it means that even the lead actors of the series are paid a flat dollar amount, which makes them a more attractive investment than they might be for a movie. Most of the major stars in Hollywood earn gross points, which means that they get a cut of the box office receipts before the film turns a profit—a “first dollar” deal that makes the mathematics of breaking even much more complicated. The thought experiment about Bradley Whitford and Kevin Costner only makes sense if you can get Costner at a fixed salary per episode. In other words, movie stars are being actively courted by television because its model is a throwback to an earlier era, when actors were held under contract by a studio without any profit participation, and before stars and their agents negotiated better deals that ended up undermining the economic basis of the star system entirely.

And it’s revealing that Costner, of all actors, appears in this example. His name came up mostly because multiple sources told Vulture that he was offered $500,000 per episode to star in a streaming series: “He passed,” the article says, “but industry insiders predict he’ll eventually say ‘yes’ to the right offer.” But he also resonates because he stands for a kind of movie stardom that was already on the wane when he first became famous. It has something to do with the quintessentially American roles that he liked to play—even JFK is starting to seem like the last great national epic—and an aura that somehow kept him in leading parts two decades after his career as a major star was essentially over. That’s weirdly impressive in itself, and it testifies to how intriguing a figure he remains, even if audiences aren’t likely to pay to see him in a movie. Whenever I think of Costner, I remember what the studio executive Mike Medavoy once claimed to have told him right at the beginning of his career:

“You know,” I said to him over lunch, “I have this sense that I’m sitting here with someone who is going to become a great big star. You’re going to want to direct your own movies, produce your own movies, and you’re going to end up leaving your wife and going through the whole Hollywood movie-star cycle.”

Costner did, in fact, end up leaving his first wife. And if he also leaves film for television, even temporarily, it may reveal that “the whole Hollywood movie-star cycle” has a surprising final act that few of us could have anticipated.

Written by nevalalee

May 27, 2016 at 9:03 am

“Asthana glanced over at the television…”


"A woman was standing just over his shoulder..."

Note: This post is the eighteenth installment in my author’s commentary for Eternal Empire, covering Chapter 19. You can read the previous installments here.

A quarter of a century ago, I read a story about the actor Art Carney, possibly apocryphal, that I’ve never forgotten. Here’s the version told by the stage and television actress Patricia Wilson:

During a live performance of the original Honeymooners, before millions of viewers, Jackie [Gleason] was late making an entrance into a scene. He left Art Carney onstage alone, in the familiar seedy apartment set of Alice and Ralph Kramden. Unflappable, Carney improvised action for Ed Norton. He looked around, scratched himself, then went to the Kramden refrigerator and peered in. He pulled out an orange, shuffled to the table, and sat down and peeled it. Meanwhile frantic stage managers raced to find Jackie. Art Carney sat onstage peeling and eating an orange, and the audience convulsed with laughter.

According to some accounts, Carney stretched the bit of business out for a full two minutes before Gleason finally appeared. And while it certainly speaks to Carney’s ingenuity and resourcefulness, we should also take a moment to tip our hats to that humble orange, as well as the prop master who thought to stick it in the fridge—unseen and unremarked—in the first place.

Theatrical props, as all actors and directors know, can be a source of unexpected ideas, just as the physical limitations or possibilities of the set itself can provide a canvas on which the action is conceived in real time. I’ve spoken elsewhere of the ability of vaudeville comedians to improvise routines on the spot using whatever was available on a standing set, and there’s a sense in which the richness of the physical environment in which a scene takes place is a battery from which the performances can draw energy. When a director makes sure that each actor’s pockets are full of the litter that a character might actually carry, it isn’t just a mark of obsessiveness or self-indulgence, or even a nod toward authenticity, but a matter of storing up potential tools. A prop by itself can’t make a scene work, but it can provide the seed around which a memorable moment or notion can grow, like a crystal. In more situations than you might expect, creativity lies less in the ability to invent from scratch than in the ability to make effective use of whatever happens to lie at hand. Invention is a precious resource, and most artists have a finite amount of it; it’s better, whenever possible, to utilize what the world provides. And much of the time, when you’re faced with a hard problem to solve, you’ll find that the answer is right there in the background.

"Asthana glanced over at the television..."

This is as true of writing fiction as of any of the performing arts. In the past, I’ve suggested that this is the true purpose of research or location work: it isn’t about accuracy, but about providing raw material for dreams, and any writer faced with the difficult task of inventing a scene would be wise to exploit what already exists. It’s infinitely easier to write a chase scene, for example, if you’re tailoring it to the geography of a particular street. As usual, it comes back to the problem of making choices: the more tangible or physical the constraints, the more likely they’ll generate something interesting when they collide with the fundamentally abstract process of plotting. Even if the scene I’m writing takes place somewhere wholly imaginary, I’ll treat it as if it were being shot on location: I’ll pick a real building or locale that has the qualities I need for the story, pore over blueprints and maps, and depart from the real plan only when I don’t have any alternative. In most cases, the cost of that departure, in terms of the confusion it creates, is far greater than the time and energy required to make the story fit within an existing structure. For much the same reason, I try to utilize the props and furniture you’d naturally find there. And that’s all the more true when a scene occurs in a verifiable place.

Sometimes, this kind of attention to detail can result in surprising resonances. There’s a small example that I like in Chapter 19 of Eternal Empire. Rogozin, my accused intelligence agent, is being held without charges at a detention center in Paddington Green. This is a real location, and its physical setup becomes very important: Rogozin is going to be killed, in an apparent suicide, under conditions of heavy security. To prepare these scenes, I collected reference photographs, studied published descriptions, and shaped the action as much as possible to unfold logically under the constraints the location imposed. And one fact caught my eye, purely as a matter of atmosphere: the cells at Paddington Green are equipped with televisions, usually set to play something innocuous, like a nature video. This had obvious potential as a counterpoint to the action, so I went to work looking for a real video that might play there. And after a bit of searching, I hit on a segment from the BBC series Life in the Undergrowth, narrated by David Attenborough, about the curious life cycle of the gall wasp. The phenomenon it described, as an invading wasp burrows into the gall created by another, happened to coincide well—perhaps too well—with the story itself. As far as I’m concerned, it’s what makes Rogozin’s death scene work. And while I could have made up my own video to suit the situation, it seemed better, and easier, to poke around the stage first to see what I could find…

Written by nevalalee

May 7, 2015 at 9:11 am

The unbreakable television formula


Ellie Kemper in Unbreakable Kimmy Schmidt

Watching the sixth season premiere of Community last night on Yahoo—which is a statement that would have once seemed like a joke in itself—I was struck by the range of television comedy we have at our disposal these days. We’ve said goodbye to Parks and Recreation, we’re following Community into what is presumably its final stretch, and we’re about to greet Unbreakable Kimmy Schmidt as it starts what looks to be a powerhouse run on Netflix. These shows are superficially in the same genre: they’re single-camera sitcoms that freely grant themselves elaborate sight gags and excursions into surrealism, with a cutaway style that owes as much to The Simpsons as to Arrested Development. Yet they’re palpably different in tone. Parks and Rec was the ultimate refinement of the mockumentary style, with talking heads and reality show techniques used to flesh out a narrative of underlying sweetness; Community, as always, alternates between obsessively detailed fantasy and a comic strip version of emotions to which we can all relate; and Kimmy Schmidt takes place in what I can only call Tina Fey territory, with a barrage of throwaway jokes and non sequiturs designed to be referenced and quoted forever.

And the diversity of approach we see in these three comedies makes the dramatic genre seem impoverished. Most television dramas are still basically linear; they’re told using the same familiar grammar of establishing shots, medium shots, and closeups; and they’re paced in similar ways. If you were to break down an episode by shot length and type, or chart the transitions between scenes, an installment of Game of Thrones would look a lot on paper like one of Mad Men. There’s room for individual quirks of style, of course: the handheld cinematography favored by procedurals has a different feel from the clinical, detached camera movements of House of Cards. And every now and then, we get a scene—like the epic tracking shot during the raid in True Detective—that awakens us to the medium’s potential. But the fact that such moments are striking enough to inspire think pieces the next day only points to how rare they are. Dramas are just less inclined to take big risks of structure and tone, and when they do, they’re likely to be hybrids. Shows like Fargo or Breaking Bad are able to push the envelope precisely because they have a touch of black comedy in their blood, as if that were the secret ingredient that allowed for greater formal daring.

Jon Hamm on Mad Men

It isn’t hard to pin down the reason for this. A cutaway scene or extended homage naturally takes us out of the story for a second, and comedy, which is inherently more anarchic, has trained us to roll with it. We’re better at accepting artifice in comic settings, since we aren’t taking the story quite as seriously: whatever plot exists is tacitly understood to be a medium for the delivery of jokes. Which isn’t to say that we can’t care deeply about these characters; if anything, our feelings for them are strengthened because they take place in a stylized world that allows free play for the emotions. Yet this is also something that comedy had to teach us. It can be fun to watch a sitcom push the limits of plausibility to the breaking point, but if a drama deliberately undermines its own illusion of reality, we can feel cheated. Dramas that constantly draw attention to their own artifice, as Twin Peaks did, are more likely to become cult favorites than popular successes, since most of us just want to sit back and watch a story that presents itself using the narrative language we know. (Which, to be fair, is true of comedies as well: the three sitcoms I’ve mentioned above, taken together, have a fraction of the audience of something like The Big Bang Theory.)

In part, it’s a problem of definition. When a drama pushes against its constraints, we feel more comfortable referring to it as something else: Orange is the New Black, which tests its structure as adventurously as any series on the air today, has suffered at awards season from its resistance to easy categorization. But what’s really funny is that comedy escaped from its old formulas by appropriating the tools that dramas had been using for years. The three-camera sitcom—which has been responsible for countless masterpieces of its own—made radical shifts of tone and location hard to achieve, and once comedies liberated themselves from the obligation to unfold as if for a live audience, they could indulge in extended riffs and flights of imagination that were impossible before. It’s the kind of freedom that dramas, in theory, have always had, even if they utilize it only rarely. This isn’t to say that a uniformity of approach is a bad thing: the standard narrative grammar evolved for a reason, and if it gives us compelling characters with a maximum of transparency, that’s all for the better. Telling good stories is hard enough as it is, and formal experimentation for its own sake can be a trap in itself. Yet we’re still living in a world with countless ways of being funny, and only one way, within a narrow range of variations, of being serious. And that’s no laughing matter.

The crowded circle of television


The cast of Mad Men

Note: Every Friday, The A.V. Club, my favorite pop cultural site on the Internet, throws out a question to its staff members for discussion, and I’ve decided that I want to join in on the fun. This week’s question: “What’s your favorite TV show of the year so far?”

There are times when watching television can start to feel like a second job—a pleasurable one, to be sure, but one that demands a lot of work nevertheless. Over the last year, I’ve followed more shows than ever, including Mad Men, Game of Thrones, Orange is the New Black, Hannibal, Community, Parks and Recreation, House of Cards, The Vampire Diaries, and True Detective. For the most part, they’ve all had strong runs, and I’d have trouble picking a favorite. (If pressed, I’d probably go with Mad Men, if only for old times’ sake, with Hannibal as a very close second.) They’re all strikingly different in emphasis, tone, and setting, but they also have a lot in common. With one exception, which I’ll get to in a moment, these are dense shows with large casts and intricate storylines. Many seem devoted to pushing the limits of how much complexity can be accommodated within the constraints of the television format, which may be why the majority run for just ten to thirteen episodes: it’s hard to imagine that level of energy sustained over twenty or more installments.

And while I’m thrilled by the level of ambition visible here, it comes at a price. There’s a sort of arms race taking place between media of all kinds, as they compete to stand out in an increasingly crowded space with countless claims on our attention. Books, even literary novels, are expected to be page-turners; movies offer up massive spectacle to the point where miraculous visual effects are taken for granted; and television has taken to packing every minute of narrative time to the bursting point. (This isn’t true of all shows, of course—a lot of television series are still designed to play comfortably in the background of a hotel room—but it’s generally the case with prestige shows that end up on critics’ lists and get honored at award ceremonies.) This trend toward complexity arises from a confluence of factors I’ve tried to unpack here before: just as The Simpsons was the first freeze-frame sitcom, modern television takes advantage of our streaming and binge-watching habits to deliver storytelling that rewards, and even demands, close attention.

Matthew McConaughey on True Detective

For the most part, this is a positive development. Yet there’s also a case to be made that television, which is so good at managing extended narratives and enormous casts of characters, is also uniquely suited for the opposite: silence, emptiness, and contemplation. In a film, time is a precious commodity, and when you’re introducing characters while also setting in motion the machinery of a complicated story, there often isn’t time to pause. Television, in theory, should be able to stretch out a little, interspersing relentless forward momentum with moments of quiet, which are often necessary for viewers to consolidate and process what they’ve seen. Twin Peaks was as crowded and plotty as any show on the air today, but it also found time for stretches of weird, inexplicable inaction, and it’s those scenes that I remember best. Even in the series finale, with so many threads to address and only forty minutes to cover them all, it devotes endless minutes to Cooper’s hallucinatory—and almost entirely static—ordeal in the Black Lodge, and even to a gag involving a decrepit bank manager rising from his desk and crossing the floor of his branch very, very slowly.

So while there’s a lot of fun to be had with shows that constantly accelerate the narrative pace, it can also be a limitation, especially when it’s handled less than fluently. (For every show, like Orange is the New Black, that manages to cut expertly between subplots, there’s another, like Game of Thrones, that can’t quite seem to handle its enormous scope, and even The Vampire Diaries is showing signs of strain.) Both Hannibal and Mad Men know when to linger on an image or revelation—roughly half of Hannibal is devoted to contemplating its other half—and True Detective, in particular, seemed to consist almost entirely of such pauses. We remember such high points as the final chase with the killer or the raid in “Who Goes There,” but what made the show special were the scenes in which nothing much seemed to be happening. It was aided in this by its limited cast and its tight focus on its two leads, so it’s possible that what shows really need in order to slow things down are a couple of movie stars to hold the eye. But it’s a step in the right direction. If time is a flat circle, as Rust says, so is television, and it’s good to see it coming back around.

The dreamlife of television


Aaron Paul on Breaking Bad

I’ve been dreaming a lot about Breaking Bad. On Wednesday, my wife and I returned from a trip to Barcelona, where we’d spent a beautiful week: my baby daughter was perfectly happy to be toted around various restaurants, cultural sites, and the Sagrada Familia, and it came as a welcome break from my own work. Unfortunately, it also meant that we were going to miss the Breaking Bad finale, which aired the Sunday before we came home. For a while, I seriously considered bringing my laptop and downloading it while we were out of the country, both because I was enormously anxious to see how the show turned out and because I dreaded the spoilers I’d have to avoid for the three days before we returned. In the end, I gritted my teeth and decided to wait until we got home. This meant avoiding most of my favorite news and pop cultural sites—I was afraid to even glance past the top few headlines on the New York Times—and staying off Twitter entirely, which I suppose wasn’t such a great loss. And even as we toured the Picasso Museum and walked for miles along the marina with a baby in tow, my thoughts were rarely very far from Walter White.

This must have done quite a number on my psyche, because I started dreaming about the show with alarming frequency. My dreams included two separate, highly elaborated versions of the finale, one of which was a straightforward bloodbath with a quiet epilogue, the other a weird metafictional conclusion in which the events of the series were played out on a movie screen with the cast and crew watching them unfold—which led me to exclaim, while still dreaming: “Of course that’s how they would end it!” Now that I’ve finally seen the real finale, the details of these dreams are fading, and only a few scraps of imagery remain. Yet the memories are still emotionally charged, and they undoubtedly affected how I approached the last episode itself, which I was afraid would never live up to the versions I’d dreamed for myself. I suspect that a lot of fans, even those who didn’t actually hallucinate alternate endings, probably felt the same way. (For the record, I liked the finale a lot, even if it ranks a notch below the best episodes of the show, which was always best at creating chaos, not resolving it. And I think about its closing moments almost every day.)

Jon Hamm on Mad Men

And it made me reflect on the ways in which television, especially in its modern, highly serialized form, is so conducive to dreaming. Dreams are a way of assembling and processing fragments of the day’s experience, or recollections from the distant past, and a great television series is nothing less than a vast storehouse of memories from another life. When a show is as intensely serialized as Breaking Bad was, it can be hard to remember individual episodes, aside from the occasional formal standout like “Fly”: I can’t always recall what scenes took place when, or in what order, and an especially charged sequence of installments—like the last half of this final season—tends to blend together into a blur of vivid impressions. What I remember are facial expressions, images, bits of dialogue: “Stay out of my territory.” “Run.” “Tread lightly.” And the result is a mine of moments that end up naturally incorporated into my own subconscious. A good movie or novel exists as a piece, and I rarely find myself dreaming alternate lives for, say, Rick and Ilsa or Charles Foster Kane. With Walter White, it’s easy to imagine different paths that the action could have taken, and those byways play themselves out in the deepest parts of my brain.

Which may explain why television is so naturally drawn to dream sequences and fantasies, which are only one step removed from the supposedly factual events of the shows themselves. Don Draper’s dreams have become a huge part of Mad Men, almost to the point of parody, and this has always been an art form that attracts surreal temperaments, from David Lynch to Bryan Fuller, even if they tend to be destroyed by it. As I’ve often said before, it’s the strangest medium I know, and at its best, it’s the outcome of many unresolved tensions. Television can feel maddeningly real, a hidden part of your own life, which is why it can be so hard to say goodbye to a great show. It’s also impossible to get a lasting grip on it or to hold it all in your mind at once, especially if it runs for more than a few seasons, which hints at an even deeper meaning. I’ve always been struck by how poorly we integrate the different chapters in our own past: there are entire decades of my life that I don’t think about for months on end. When they return, it’s usually in the hours just before waking. And by teaching us to process narratives that can last for years, it’s possible that television subtly trains us to better understand the shapes of our own lives, even if it’s only in dreams.

Written by nevalalee

October 7, 2013 at 8:27 am

Posted in Television


Critical television studies


The cast of Community

Television is such a pervasive medium that it’s easy to forget how deeply strange it is. Most works of art are designed to be consumed all at once, or at least in a fixed period of time—it’s physically possible, if not entirely advisable, to read War and Peace in one sitting. Television, by contrast, is defined by the fact of its indefinite duration. House of Cards aside, it seems likely that most of us will continue to watch shows week by week, year after year, until they become a part of our lives. This kind of extended narrative can be delightful, but it’s also subject to risk. A beloved show can change for reasons beyond anyone’s control. Sooner or later, we find out who killed Laura Palmer. An actor’s contract expires, so Mulder is abducted by aliens, and even if he comes back, by that point, we’ve lost interest. For every show like Breaking Bad that has its dark evolution mapped out for seasons to come, there’s a series like Glee, which disappoints, or Parks and Recreation, which gradually reveals a richness and warmth that you’d never guess from the first season alone. And sometimes a show breaks your heart.

It’s clear at this point that the firing of Dan Harmon from Community was the most dramatic creative upheaval for any show in recent memory. This isn’t the first time that a show’s guiding force has departed under less than amicable terms—just ask Frank Darabont—but it’s unusual in a series so intimately linked to one man’s particular vision. Before I discovered Community, I’d never heard of Dan Harmon, but now I care deeply about what this guy feels and thinks. (Luckily, he’s never been shy about sharing this with the rest of us.) And although it’s obvious from the opening minutes of last night’s season premiere that the show’s new creative team takes its legacy seriously, there’s no escaping the sense that they’re a cover band doing a great job with somebody else’s music. Showrunners David Guarascio and Moses Port do their best to convince us out of the gate that they know how much this show means to us, and that’s part of the problem. Community was never a show about reassuring us that things won’t change, but about unsettling us with its endless transformations, even as it delighted us with its new tricks.

The Community episode "Remedial Chaos Theory"

Don’t get me wrong: I laughed a lot at last night’s episode, and I was overjoyed to see these characters again. By faulting the new staff for repeating the same beats I loved before, when I might have been outraged by any major alterations, I’m setting it up so they just can’t win. But the show seems familiar now in a way that would have seemed unthinkable for most of its first three seasons. Part of the pleasure of watching the series came from the fact that you never knew what the hell might happen next, and it wasn’t clear if Harmon knew either. Not all of his experiments worked: there were even some clunkers, like “Messianic Myths and Ancient Peoples,” in the glorious second season, which is one of my favorite runs of any modern sitcom. But as strange as this might have once seemed, it feels like we finally know what Community is about. It’s a show that takes big formal risks, finds the emotional core in a flurry of pop culture references, and has no idea how to use Chevy Chase. And although I’m grateful that this version of the show has survived, I don’t think I’m going to tune in every week wondering where in the world it will take me.

And the strange thing is that Community might have gone down this path with or without Harmon. When a show needs only two seasons to establish that anything is possible, even the most outlandish developments can seem like variations on a theme. Even at the end of the third season, there was the sense that the series was repeating itself. I loved “Digital Estate Planning,” for instance, but it felt like the latest attempt to do one of the formally ambitious episodes that crop up at regular intervals each season, rather than an idea that forced itself onto television because the writers couldn’t help themselves. In my review of The Master, I noted that Paul Thomas Anderson has perfected his brand of hermetic filmmaking to the point where it would be more surprising if he made a movie that wasn’t ambiguous, frustrating, and deeply weird. Community has ended up in much the same place, so maybe it’s best that Harmon got out when he did. It’s doubtful that the series will ever be able to fake us out with a “Critical Film Studies” again, because it’s already schooled us, like all great shows, in how it needs to be watched. And although its characters haven’t graduated from Greendale yet, its viewers, to their everlasting benefit, already have.

Written by nevalalee

February 8, 2013 at 9:50 am

Wouldn’t it be easier to write for television?


Last week, I had dinner with a college friend I hadn’t seen in years, who is thinking about giving up a PhD in psychology to write for television in Los Angeles. We spent a long time commiserating about the challenges of the medium, at least from a writer’s point of view, hitting many of the points that I’ve discussed here before. With the prospects of a fledgling television show so uncertain, I said, especially when the show might be canceled after four episodes, or fourteen, or forty, it’s all but impossible for the creator to tell effective stories over time. Running a television show is one of the hardest jobs in the world, with countless obstacles along the way, even for critical darlings. Knowing all this, I asked my friend, why did he want to do this in the first place?

My friend’s response was an enlightening one. The trouble with writing novels or short stories, he said, is the fact that the author is expected to spend a great deal of time on description, style, and other tedious elements that a television writer can cheerfully ignore. Teleplays, like feature scripts, are nothing but structure and dialogue (or maybe just structure, as William Goldman says), and there’s something liberating in how they strip storytelling down to its core. The writer takes care of the bones of the narrative, which is where his primary interest presumably lies, then outsources the work of casting, staging, and art direction to qualified professionals who are happy to do the work. And while I didn’t agree with everything my friend said, I could certainly see his point.

Yet that’s only half of the story. It’s true that a screenwriter gets to outsource much of the conventional apparatus of fiction to other departments, but only at the price of creative control. You may have an idea about how a character should look, or what kind of home he should have, or how a moment of dialogue, a scene, or an overall story should unfold, but as a writer, you don’t have much control over the matter. Scripts are easier to write than novels for a reason: they’re only one piece of a larger enterprise, which is reflected in the writer’s relative powerlessness. The closest equivalent to a novelist in television isn’t the writer, but the executive producer. Gene Roddenberry, in The Making of Star Trek, neatly sums up the similarity between the two roles:

Producing in television is like storytelling. The choice of the actor, picking the right costumes, getting the right flavor, the right pace—these are as much a part of storytelling as writing out that same description of a character in a novel.

And the crucial point about producing a television series, like directing a feature film, is that it’s insanely hard. As Thomas Lennon and Robert Ben Garant point out in their surprisingly useful Writing Movies for Fun and Profit, as far as directing is concerned, “If you’re doing it right, it’s not that fun.” As a feature director or television producer, you’re responsible for a thousand small but critical decisions that need to be made very quickly, and while you’re working on the story, you’re also casting parts, scouting for locations, dealing with the studio and the heads of various departments, and surviving on only a few hours of sleep a night, for a year or more of your life. In short, the amount of effort required to keep control of the project is greater, not less, than what is required to write a novel—except with more money on the line, in public, and with greater risk that control will eventually be taken away from you.

So is it easier to write for television? Yes, if that’s all you want to do. But if you want control of your work, if you want your stories to be experienced in a form close to what you originally envisioned, it isn’t easier. It’s much harder. Which is why, to my mind, John Irving still puts it best: “When I feel like being a director, I write a novel.”

Lessons from great (and not-so-great) television


It can be hard for a writer to admit being influenced by television. In On Becoming a Novelist, John Gardner struck a disdainful note that hasn’t changed much since:

Much of the dialogue one encounters in student fiction, as well as plot, gesture, even setting, comes not from life but from life filtered through TV. Many student writers seem unable to tell their own most important stories—the death of a father, the first disillusionment in love—except in the molds and formulas of TV. One can spot the difference at once because TV is of necessity—given its commercial pressures—false to life.

In the nearly thirty years since Gardner wrote these words, the television landscape has changed dramatically, but it’s worth pointing out that much of what he says here is still true. The basic elements of fiction—emotion, character, theme, even plot—need to come from close observation of life, or even the most skillful novel will eventually ring false. That said, the structure of fiction, and the author’s understanding of the possibilities of the form, doesn’t need to come from life alone, and probably shouldn’t. To develop a sense of what fiction can do, a writer needs to pay close attention to all types of art, even the nonliterary kind. And over the past few decades, television has expanded the possibilities of narrative in ways that no writer can afford to ignore.

If you think I’m exaggerating, consider a show like The Wire, which tells complex stories involving a vast range of characters, locations, and social issues in ways that aren’t possible in any other medium. The Simpsons, at least in its classic seasons, acquired a richness and velocity that continued to build for years, until it had populated a world that rivaled the real one for density and immediacy. (Like the rest of the Internet, I respond to most situations with a Simpsons quote.) And Mad Men continues to furnish a fictional world of astonishing detail and charm. World-building, it seems, is where television shines: in creating a long-form narrative that begins with a core group of characters and explores them for years, until they can come to seem as real as one’s own family and friends.

Which is why Glee can seem like such a disappointment. Perhaps because the musical is already the archest of genres, the show has always regarded its own medium with an air of detachment, as if the conventions of the after-school special or the high school sitcom were merely a sandbox in which the producers could play. On some level, this is fine: The Simpsons, among many other great shows, has fruitfully treated television as a place for narrative experimentation. But by turning its back on character continuity and refusing to follow any plot for more than a few episodes, Glee is abandoning many of the pleasures that narrative television can provide. Watching the show run out of ideas for its lead characters in less than two seasons simply serves as a reminder of how challenging this kind of storytelling can be.

Mad Men, by contrast, not only gives us characters who take on lives of their own, but consistently lives up to those characters in its acting, writing, and direction. (This is in stark contrast to Glee, where I sense that a lot of the real action is taking place in fanfic.) And its example has changed the way I write. My first novel tells a complicated story with a fairly controlled cast of characters, but Mad Men—in particular, the spellbinding convergence of plots in “Shut the Door, Have a Seat”—reminded me of the possibilities of expansive casts, which allow characters to pair off and develop in unexpected ways. (The evolution of Christina Hendricks’s Joan from eye candy to second lead is only the most obvious example.) As a result, I’ve tried to cast a wider net with my second novel, using more characters and settings in the hopes that something unusual will arise. Television, strangely, has made me more ambitious. I’d like to think that even John Gardner would approve.

Written by nevalalee

March 17, 2011 at 8:41 am

Out of the past


You shouldn’t have been that sentimental.

Vertigo

About halfway through the beautiful, devastating finale of Twin Peaks—which I’ll be discussing here in detail—I began to reflect on what the figure of Dale Cooper really means. When we encounter him for the first time in the pilot, with his black suit, fastidious habits, and clipped diction, he’s the embodiment of what we’ve been taught to expect of a special agent of the Federal Bureau of Investigation. The FBI occupies a role in movies and television far out of proportion to its actual powers and jurisdiction, in part because it seems to exist on a level intriguingly beyond that of ordinary law enforcement, and it’s often been used to symbolize the sinister, the remote, or the impersonal. Yet when Cooper reveals himself to be a man of real empathy, quirkiness, and faith in the extraordinary, it comes almost as a relief. We want to believe that a person like this exists. Cooper carries a badge, he wears a tie, and he’s comfortable with a gun, but he’s here to enforce human reason in the face of a bewildering universe. The Black Lodge might be out there, but the Blue Rose task force is on it, and there’s something oddly consoling about the notion that it’s a part of the federal government. A few years later, Chris Carter took this premise and refined it into The X-Files, which, despite its paranoia, reassured us that somebody in a position of authority had noticed the weirdness in the world and was trying to make sense of it. They might rarely succeed, but it was comforting to think that their efforts had been institutionalized, complete with a basement office, a place in the org chart, and a budget. And for a lot of viewers, Mulder and Scully, like Cooper, came to symbolize law and order in stories that laugh at our attempts to impose it.

Even if you don’t believe in the paranormal, the image of the lone FBI agent—or two of them—arriving in a small town to solve a supernatural mystery is enormously seductive. It appeals to our hopes that someone in power cares enough about us to investigate problems that can’t be rationally addressed, which all stand, in one way or another, for the mystery of death. This may be why both Twin Peaks and The X-Files, despite their flaws, have sustained so much enthusiasm among fans. (No other television dramas have ever meant more to me.) But it’s also a myth. This isn’t really how the world works, and the second half of the Twin Peaks finale is devoted to tearing down, with remarkable cruelty and control, the very idea of such solutions. It can only do this by initially giving us what we think we want, and the first of last night’s two episodes misleads us with a satisfying dose of wish fulfillment. Not only is Cooper back, but he’s in complete command of the situation, and he seems to know exactly what to do at every given moment. He somehow knows all about Freddie and his magical green glove, which he utilizes to finally send Bob into oblivion. After rescuing Diane, he uses his room key from the Great Northern, like a magical item in a video game, to unlock the door that leads him to Mike and the disembodied Phillip Jeffries. He goes back in time, enters the events of Fire Walk With Me, and saves Laura on the night of her murder. The next day, Pete Martell simply goes fishing. Viewers at home even get the appearance by Julee Cruise that I’ve been awaiting since the premiere. After the credits ran, I told my wife that if it had ended there, I would have been totally satisfied.

But that was exactly what I was supposed to think, and even during the first half, there are signs of trouble. When Cooper first sees the eyeless Naido, who is later revealed to be the real Diane, his face freezes in a huge closeup that is superimposed for several minutes over the ensuing action. It’s a striking device that has the effect of putting us, for the first time, in Cooper’s head, rather than watching him with bemusement from the outside. We identify with him, and at the very end, when his efforts seemingly come to nothing, despite the fact that he did everything right, it’s more than heartbreaking—it’s like an existential crisis. It’s the side of the show that was embodied by Sheryl Lee’s performance as Laura Palmer, whose tragic life and horrifying death, when seen in its full dimension, put the lie to all the cozy, comforting stories that the series told us about the town of Twin Peaks. Nothing good could ever come out of a world in which Laura died in the way that she did, which was the message that Fire Walk With Me delivered so insistently. And seeing Laura share the screen at length with Cooper presents us with both halves of the show’s identity within a single frame. (It also gives us a second entry, after Blue Velvet, in the short list of great scenes in which Kyle MacLachlan enters a room to find a man sitting down with his brains blown out.) For a while, as Cooper drives Laura to the appointment with her mother, it seems almost possible that the series could pull off one last, unfathomable trick. Even if it means erasing the show’s entire timeline, it would be worth it to save Laura. Or so we think. In the end, they return to a Twin Peaks that neither of them recognize, in which the events of the series presumably never took place, and Cooper’s only reward is Laura’s scream of agony.

As I tossed and turned last night, thinking about Cooper’s final, shattering moment of comprehension, a line of dialogue from another movie drifted into my head: “It’s too late. There’s no bringing her back.” It’s from Vertigo, of course, which is a movie that David Lynch and Mark Frost have been quietly urging us to revisit all along. (Madeline Ferguson, Laura’s identical cousin, who was played by Lee, is named after the film’s two main characters, and both works of art pivot on a necklace and a dream sequence.) Along with so much else, Vertigo is about the futility of trying to recapture or change the past, and its ending, which might be the most unforgettable of any film I’ve ever seen, destroys Scotty’s delusions, which embody the assumptions of so many American movies: “One final thing I have to do, and then I’ll be rid of the past forever.” I think that Lynch and Frost are consciously harking back to Vertigo here—in the framing of the doomed couple on their long drive, as well as in Cooper’s insistence that Laura revisit the scene of the crime—and it doesn’t end well in either case. The difference is that Vertigo prepares us for it over the course of two hours, while Twin Peaks had more than a quarter of a century. Both works offer a conclusion that feels simultaneously like a profound statement of our helplessness in the face of an unfair universe and like the punchline to a shaggy dog story, and perhaps that’s the only way to express it. I’ve quoted Frost’s statement on this revival more than once: “It’s an exercise in engaging with one of the most powerful themes in all of art, which is the ruthless passage of time…We’re all trapped in time and we’re all going to die. We’re all traveling along this conveyor belt that is relentlessly moving us toward this very certain outcome.” Thirty seconds before the end, I didn’t know what he meant. But I sure do now. And I know at last why this show’s theme is called “Falling.”

Written by nevalalee

September 4, 2017 at 9:40 am

The number nine


Note: This post reveals plot details from last night’s episode of Twin Peaks.

One of the central insights of my life as a reader is that certain kinds of narrative are infinitely expansible or contractible. I first started thinking about this in college, when I was struggling to read Homer in Greek. Oral poetry, I discovered, wasn’t memorized, but composed on the fly, aided by the poet’s repertoire of stock lines, formulas, and images that happened to fit the meter. This meant that the overall length of the composition was highly variable. A scene that takes up just a few lines in the Iliad that survives could be expanded into an entire night’s recital, based on what the audience wanted to hear. (For instance, the characters of Crethon and Orsilochus, who appear for only twenty lines in the existing version before being killed by Aeneas, might have been the stars of the evening if the poet happened to be working in Pherae.) That kind of flexibility originated as a practical consequence of the oral form, but it came to affect the aesthetics of the poem itself, which could grow or shrink to accommodate anything that the poet wanted to talk about. Homer uses his metaphors to introduce miniature narratives of human life that don’t otherwise fit into a poem of war, and some amount to self-contained short stories in themselves. Proust operates in much the same way. One observation leads naturally to another, and an emotion or analogy evoked in passing can unfold like a paper flower into three dense pages of reflections. In theory, any novel could be expanded like this, like a hypertext that opens into increasingly deeper levels. In Search of Lost Time happens to be the one book in existence in which all of these flowerings have been preserved, with a plot that could fit into a novella of two hundred unhurried pages.

Something similar appears to have happened with the current season of Twin Peaks, and when you start to think of it in those terms, its structure, which otherwise seems almost perversely shapeless, begins to make more sense. In the initial announcement by Showtime, the revival was said to consist of nine episodes, and Mark Frost even said to Buzzfeed:

If you think back about the first season, if you put the pilot together with the seven that we did, you get nine hours. It just felt like the right number. I’ve always felt the story should take as long as the story takes to tell. That’s what felt right to us.

It was doubled to eighteen after a curious interlude in which David Lynch dropped out of the project, citing budget constraints: “I left because not enough money was offered to do the script the way I felt it needed to be done.” He came back, of course, and shortly thereafter, it was revealed that the length of the season had increased. Yet there was never any indication that either Lynch or Frost had done any additional writing. My personal hunch is that they always had nine episodes of material, and this never changed. What happened is that the second act of the show expanded in the fashion that I’ve described above, creating a long central section that was free to explore countless byways without much concern for the plot. The beginning, and presumably the end, remained more or less as conceived—it was the middle that grew. And a quick look at the structure of the season so far seems to confirm this. The first three episodes, which take Cooper from inside the Black Lodge to slightly before his meeting with his new family in Las Vegas, seemed weird at the time, but now they look positively conventional in terms of how much story they covered. They were followed by three episodes, the Dougie Jones arc, that were expanded beyond recognition. And now that we’ve reached the final three, which account for the third act of the original outline, it makes sense for Cooper to return at last.

If the season had consisted of just those nine episodes, I suspect that more viewers would have been able to get behind it. Even if the second act had doubled in length—giving us a total of twelve installments, of which three would have been devoted to detours and loose ends—I doubt that most fans would have minded. It’s expanding that middle section to four times its size, without any explanation, that lost a lot of people. But it’s clearly the only way that Lynch would have returned. For most of the last decade, Lynch has been contentedly pottering around with odd personal projects, concentrating on painting, music, digital video, and other media that don’t require him to be answerable to anyone but himself. The Twin Peaks revival, after the revised terms had been negotiated with Showtime, allowed him to do this with a larger budget and for a vastly greater audience. Much of this season has felt like Lynch’s private sketchbook or paintbox, allowing him to indulge himself within each episode as long as the invisible scaffolding of the original nine scripts remained. The fact that so much of the strangeness of this season has been visual and nonverbal points to Lynch, rather than Frost, as the driving force on this end. And at its best, it represents something like a reinvention of television, which is the most expandable or compressible medium we have, but which has rarely utilized this quality to its full extent. (There’s an opening here, obviously, for a fan edit that condenses the season down to nine episodes, leaving the first and last three intact while shrinking the middle twelve. It would be an interesting experiment, although I’m not sure I’d want to watch it.)

Of course, this kind of aggressive attack on the structure of the narrative doesn’t come without a cost. In the case of Twin Peaks, the primary casualty has been the Dougie Jones storyline, which has been criticized for three related reasons. The first, and most understandable, is that we’re naturally impatient to get the old Cooper back. Another is that this material was never meant to go on for this long, and it starts to feel a little thin when spread over twelve episodes. And the third is that it prevents Kyle MacLachlan, the ostensible star of the show, from doing what he does best. This last criticism feels like the most valid. MacLachlan has played an enormous role in my life as a moviegoer and television viewer, but he operates within a very narrow range, with what I might inadequately describe as a combination of rectitude, earnestness, and barely concealed eccentricity. (In other words, it’s all but indistinguishable from the public persona of David Lynch himself.) It’s what made his work as Jeffrey in Blue Velvet so moving, and a huge part of the appeal of Twin Peaks lay in placing this character at the center of what looked like a procedural. MacLachlan can also convey innocence and darkness, but by bringing these two traits to the forefront, and separating them completely in Dougie and Dark Cooper, it robs us of the amalgam that makes MacLachlan interesting in the first place. Like many stars, he’s chafed under the constraints of his image, and perhaps he even welcomed the challenges that this season presented—although he may not have known how his performance would look when extended past its original dimensions and cut together with the rest. When Cooper returned last night, it reminded me of how much I’ve missed him. And the fact that we’ll get him for two more episodes, along with everything else that this season has offered us, feels more than ever like a gift.

Written by nevalalee

August 28, 2017 at 9:17 am

Amplifying the dream


In the book Nobody Turn Me Around, Charles Euchner shares a story about Bayard Rustin, a neglected but pivotal figure in the civil rights movement who played a crucial role in the March on Washington in 1963:

Bayard Rustin had insisted on renting the best sound system money could buy. To ensure order at the march, Rustin insisted, people needed to hear the program clearly. He told engineers what he wanted. “Very simple,” he said, pointing at a map. “The Lincoln Memorial is here, the Washington Monument is there. I want one square mile where anyone can hear.” Most big events rented systems for $1,000 or $2,000, but Rustin wanted to spend ten times that. Other members of the march committee were skeptical about the need for a deluxe system. “We cannot maintain order where people cannot hear,” Rustin said. If the Mall was jammed with people baking in the sun, waiting in long lines for portable toilets, anything could happen. Rustin’s job was to control the crowd. “In my view it was a classic resolution of the problem of how can you keep a crowd from becoming something else,” he said. “Transform it into an audience.”

Ultimately, Rustin was able to convince the United Auto Workers and International Ladies’ Garment Workers’ Unions to raise twenty thousand dollars for the sound system, and when he was informed that it ought to be possible to do it for less, he replied: “Not for what I want.” The company American Amplifier and Television landed the contract, and after the system was sabotaged by persons unknown the night before the march, Walter Fauntroy, who was in charge of operations on the ground, called Attorney General Robert Kennedy and said: “We have a serious problem. We have a couple hundred thousand people coming. Do you want a fight here tomorrow after all we’ve done?”

The system was fixed just in time, and its importance to the march is hard to overstate. As Zeynep Tufekci writes in her recent book Twitter and Tear Gas: “Rustin knew that without a focused way to communicate with the massive crowd and to keep things orderly, much could go wrong…The sound system worked without a hitch during the day of the march, playing just the role Rustin had imagined: all the participants could hear exactly what was going on, hear instructions needed to keep things orderly, and feel connected to the whole march.” And its impact on our collective memory of the event may have been even more profound. In an article in last week’s issue of The New Yorker, which is where I first encountered the story, Nathan Heller notes in a discussion of Tufekci’s work:

Before the march, Martin Luther King, Jr., had delivered variations on his “I Have a Dream” speech twice in public. He had given a longer version to a group of two thousand people in North Carolina. And he had presented a second variation, earlier in the summer, before a vast crowd of a hundred thousand at a march in Detroit. The reason we remember only the Washington, D.C., version, Tufekci argues, has to do with the strategic vision and attentive detail work of people like Rustin. Framed by the Lincoln Memorial, amplified by a fancy sound system, delivered before a thousand-person press bay with good camera sight lines, King’s performance came across as something more than what it had been in Detroit—it was the announcement of a shift in national mood, the fulcrum of a movement’s story line and power. It became, in other words, the rarest of protest performances: the kind through which American history can change.

Heller concludes that successful protest movements hinge on the existence of organized, flexible, practical structures with access to elites, noting that the sound system was repaired, on Kennedy’s orders, by the Army Corps of Engineers: “You can’t get much cozier with the Man than that.”

There’s another side to the story, however, which neither Tufekci nor Heller mentions. In his memoir Behind the Dream, the activist Clarence B. Jones recalls:

The Justice Department and the police had worked hand in hand with the March Committee to design a public address system powerful enough to get the speakers’ voices across the Mall; what march coordinators wouldn’t learn until after the event had ended was that the government had built in a bypass to the system so that they could instantly take over control if they deemed it necessary…Ted [Brown] and Bayard [Rustin] told us that right after the march ended those officers approached them, eager to relieve their consciences and reveal the truth about the sound system. There was a kill switch and an administration official’s thumb had been on it the entire time.

The journalist Gary Younge—whose primary source seems to be Jones—expands on this claim in his book The Speech: “Fearing incitement from the podium, the Justice Department secretly inserted a cutoff switch into the sound system so they could turn off the speakers if an insurgent group hijacked the microphone. In such an eventuality, the plan was to play a recording of Mahalia Jackson singing ‘He’s Got the Whole World in His Hands’ in order to calm down the crowd.” And in Pillar of Fire, Taylor Branch identifies the official in question as Jerry Bruno, President Kennedy’s “advance man,” who “positioned himself to cut the power to the public address system if rally speeches proved incendiary.” Regardless of the truth of the matter, it speaks to the extent to which Rustin’s sound system was central to the question of who controlled the march and its message. If nothing else, the people who sabotaged it understood this intuitively. (I should also mention the curious rumor, shared by Dave Chappelle in a recent comedy special on Netflix: “I heard when Martin Luther King stood on the steps of the Lincoln Memorial and said he had a dream, he was speaking into a PA system that Bill Cosby paid for.” It’s demonstrably untrue, but it also speaks to the hold that the sound system has on the stories that we tell about the march.)

But what strikes me the most is the sheer practicality of the ends that Rustin, Fauntroy, and the others on the ground were trying to achieve. Listen to how they describe it: “We cannot maintain order where people cannot hear.” “How can you keep a crowd from becoming something else?” “Do you want a fight here tomorrow after all we’ve done?” They weren’t worried about history, but about making it safely to the end of the day. Rustin had been thinking about this march for two decades, and he spent years actively planning for it, conscious that it presented massive organizational challenges that could only be addressed by careful preparation in advance. He had specifically envisioned it as ending at the Lincoln Memorial, with a crowd filling the National Mall, a huge space that imposed enormous logistical problems of its own. The primary purpose of the sound system was to allow a quarter of a million people to assemble and disperse in a peaceful fashion, and its properties were chosen with that end in mind. (As Euchner notes: “To get one square mile of clear sound, you need to spend upwards of twenty thousand dollars.”) A system of unusual power, expense, and complexity was the minimum required to ensure the orderly conclusion of an event on this scale. But when the audacity to envision the National Mall as a backdrop was combined with the attention to detail to make it work, the result was an electrically charged platform that would amplify any message, figuratively and literally, which made it both powerful and potentially dangerous. Everyone understood this. The saboteurs did. So did the Justice Department. The march’s organizers were keenly aware of it, which was why potentially controversial speakers—including James Baldwin—were excluded from the program. In the end, it became a stage for King, and at least one lesson is clear. When you aim high, and then devote everything you can to the practical side, the result might be more than you could have dreamed.

The world spins

with one comment

Note: This post discusses plot points from Sunday’s episode of Twin Peaks.

“Did you call me five days ago?” Dark Cooper asks the shadowy shape in the darkness in the most recent episode of Twin Peaks. It’s a memorable moment for a number of reasons, not the least of which is that he’s addressing the disembodied Philip Jeffries, who was played by David Bowie in Fire Walk With Me, and is now portrayed by a different voice actor and what looks to be a sentient tea kettle. But that didn’t even strike me as the weirdest part. What hit me hardest was the implication that everything that we’ve seen so far this season has played out over less than a week in real time—the phone call to which Dark Cooper is referring occurred during the second episode. Admittedly, there are indications that the events onscreen have unfolded in a nonlinear fashion, not to draw attention to the device itself, but to allow David Lynch and Mark Frost to cut between storylines according to their own rhythms, rather than being tied down to chronology. (The text message that Dark Cooper sends at the end of the scene was received by Diane a few episodes ago, while Audrey’s painful interactions with Charlie apparently consist of a single conversation parceled out over multiple weeks. And the Dougie Jones material certainly feels as if it occurs over a longer period than five days, although it’s probably possible to squeeze it into that timeline if necessary.) And if viewers are brought up short by the contrast between the show’s internal calendar and its emotional duration, it’s happened before. When I look back at the first two seasons of the show, I’m still startled to realize that every event from Laura’s murder to Cooper’s possession unfolds over just one month.

Why does this feel so strange? The obvious answer is that we get to know these characters over a period of years, while we really only see them in action for a few weeks, and their interactions with one another end up carrying more weight than you might expect for people who, in some cases, met only recently. And television is the one medium that routinely creates that kind of disparity. It’s inherently impossible for a movie to take longer to watch than the events that it depicts—apart from a handful, like Run Lola Run or Vantage Point, that present scrambled timelines or stage the same action from multiple perspectives—and it usually compresses days or weeks of action within a couple of hours. With books, the length of the act of reading varies from one reader to the next, and we’re unlikely to find it particularly strange that it can take months to finish Ulysses, which recounts the events of a single day. It’s only television, particularly when experienced in its original run, that presents such a sharp contrast between narrative and emotional time, even if we don’t tend to worry about this with sitcoms, procedurals, and other nonserialized shows. (One interesting exception consists of shows set in high school or college, in which it’s awfully tempting to associate each season with an academic year, although there’s no reason why a series like Community couldn’t take place over a single semester.) Shows featuring children or teenagers have a built-in clock that reminds us of how time is passing in the real world, as Urkel or the Olsen twins progress inexorably toward puberty. And occasionally there’s an outlier like The Simpsons, in which a quarter of a century’s worth of storylines theoretically takes place within the same year or so.

But the way in which a serialized show can tell a story that occurs over a short stretch of narrative time while simultaneously drawing on the emotional energy that builds up over years is one of the unsung strengths of the entire medium. Our engagement with a favorite show that airs on a weekly basis isn’t just limited to the hour that we spend watching it every Sunday, but expands to fill much of the time in between. If a series really matters to us, it gets into our dreams. (I happened to miss the initial airing of this week’s episode because I was on vacation with my family, and I’ve been so conditioned to get my fix of Twin Peaks on a regular basis that I had a detailed dream about an imaginary episode that night—which hasn’t happened to me since I had to wait a week to watch the series finale of Breaking Bad. As far as I can remember, my dream involved the reappearance of Sheriff Harry Truman, who has been institutionalized for years, with his family and friends describing him euphemistically as “ill.” And I wouldn’t mention it here at all if this weren’t a show that has taught me to pay close attention to my dreamlife.) Many of us also spend time between episodes in reading reviews, discussing plot points online, and catching up with various theories about where it might go next. In a few cases, as with Westworld, this sort of active analysis can be detrimental to the experience of watching the show itself, if you see it as a mystery with clues that the individual viewer is supposed to crack on his or her own. For the most part, though, it’s an advantage, with time conferring an emotional weight that the show might not have otherwise had. As the world spins, the series stays where it was, and we’ve all changed in the meantime.

The revival of Twin Peaks takes this tendency and magnifies it beyond anything else we’ve seen before, with its fans investing it with twenty-five years of accumulated energy—and this doesn’t even account for the hundreds of hours that I spent listening to the show’s original soundtrack, which carries an unquantifiable duration of its own. And one of the charming things about this season is how Lynch and Frost seem to have gone through much the same experience themselves, mulling over their own work until stray lines and details take on a greater significance. When Dark Cooper goes to his shadowy meeting above a convenience store, it’s paying off on a line that Mike, the one-armed man, uttered in passing during a monologue from the first Bush administration. The same applies to the show’s references to a mysterious “Judy,” whom Jeffries mentioned briefly just before disappearing forever. I don’t think that these callbacks reflect a coherent plan that Lynch and Frost have been keeping in their back pockets for decades, but a process of going back to tease out meanings that even they didn’t know were there. Smart writers of serialized narratives learn to drop vague references into their work that might pay off later on. (Two of my favorite examples are Spock’s “Remember” at the end of Star Trek II: The Wrath of Khan, and the Second Foundation, which Isaac Asimov introduced in case he needed it in a subsequent installment.) What Twin Peaks is doing now is analogous to what the writers of Breaking Bad did when they set up problems that they didn’t know how to solve, trusting that they would figure it out eventually. The only difference is that Lynch and Frost, like the rest of us, have had more time to think about it. And it might take us another twenty-five years before we—or they—figure out what they were actually doing.

Written by nevalalee

August 22, 2017 at 9:08 am

The sense of an ending

leave a comment »

Note: This post discusses details from last night’s episode of Twin Peaks.

When I was working as a film critic in college, one of my first investments was a wristwatch that could glow in the dark. If you’re sitting through an interminable slog of a movie, sometimes you simply want to know how much longer the pain will last, and, assuming that you have a sense of the runtime, a watch puts a piece of narrative information at your disposal that has nothing to do with the events of the story itself. Even if you’re enjoying yourself, the knowledge that a film has twenty minutes left to run—which often happens if you’re watching it at home and staring right at the numbers on the display of your DVD player—affects the way you think about certain scenes. A climax plays differently near the end, as opposed to somewhere in the middle. The length of a work of art is a form of metadata that influences the way we watch movies and read books, as Douglas Hofstadter points out in Gödel, Escher, Bach:

You have undoubtedly noticed how some authors go to so much trouble to build up great tension a few pages before the end of their stories—but a reader who is holding the book physically in his hands can feel that the story is about to end. Hence, he has some extra information which acts as an advance warning, in a way. The tension is a bit spoiled by the physicality of the book. It would be so much better if, for instance, there were a lot of padding at the end of novels…A lot of extra printed pages which are not part of the story proper, but which serve to conceal the exact location of the end from a cursory glance, or from the feel of the book.

Not surprisingly, I tend to think about the passage of time the most when I’m not enjoying the story. When I’m invested in the experience, I’ll do the opposite: I’ll actively resist glancing at the clock or looking to see how much time has elapsed. When I know that the credits are going to roll no matter what within the next five minutes, it amounts to a spoiler. With Twin Peaks, which has a narrative that can seemingly be cut anywhere, like yard goods, I try not to think about how long I’ve been watching. Almost inevitably, the episode ends before I’m ready for it, in part because it provides so few of the usual cues that we’ve come to expect from television. There aren’t any commercial breaks, obviously, but the stories also don’t divide neatly into three or four acts. In the past, most shows, even those that aired without interruption on cable networks, followed certain structural conventions that allow us to guess when the story is coming to an end. (This is even more true of Hollywood movies, which, with their mandated beat sheets—the inciting incident, the midpoint, the false dawn, the crisis—practically tell the audience how much longer they need to pay attention, which may be the reason why such rules exist in the first place.) Now that streaming services allow serialized stories to run for hours without worrying about the narrative shape of individual episodes, this is less of an issue, and it can be a mixed blessing. But at its best, on a show like Twin Peaks, it creates a feeling of narrative suspension, cutting us off from any sense of the borders of the episode until the words Starring Kyle MacLachlan appear suddenly onscreen.

Yet there’s also another type of length of which we can’t help but be conscious, at least if we’re the kind of viewers likely to be watching Twin Peaks in the first place. We know that there are eighteen episodes in this season, the fourteenth of which aired last night, and the fact that we only have four hours left to go adds a degree of tension to the narrative that wouldn’t be there if we weren’t aware of it. This external pressure also depends on the knowledge that this is the only new season of the show that we’re probably going to get, which, given how hard it is to avoid this sort of news these days, is reasonable to expect of most fans. Maybe we’ve read the Rolling Stone interview in which David Lynch declared, in response to the question of whether there would be additional episodes: “I have no idea. It depends on how it goes over. You’re going to have to wait and see.” Or we’ve seen that David Nevins of Showtime said to Deadline: “It was always intended to be one season. A lot of people are speculating but there’s been zero contemplation, zero discussions other than fans asking me about it.” Slightly more promisingly, Kyle MacLachlan told the Hollywood Reporter: “I don’t know. David has said: ‘Everything is Twin Peaks.’ It leads me to believe that there are other stories to tell. I think it’s just a question of whether David and Mark want to tell them. I don’t know.” And Lynch even said to USA Today: “You never say never.” Still, it’s fair to say that the current season was conceived, written, and filmed to stand on its own, and until we know otherwise, we have to proceed under the assumption that this is the last time we’ll ever see these characters.

This has important implications for how we watch it from one week to the next. For one thing, it means that episodes near the end will play differently than they would have earlier in the season. Last night’s installment was relatively packed with incident—the revelation of the identity of Diane’s estranged half sister, Andy’s trip into the void, the green gardening glove, Monica Bellucci—but we’re also aware of how little time remains for the show to pay off any of these developments. Most series would have put an episode like this in the fourth slot, rather than the fourteenth, and given the show’s tendency to drop entire subplots for months, it leaves us keenly aware that many of these storylines may never be resolved. Every glimpse of a character, old or new, feels like a potential farewell. And with each episode that passes without the return of Agent Cooper, every minute in which we don’t see him increases our sense of urgency. (If this were the beginning of an open-ended run, rather than the presumptive final season, the response to the whole Dougie Jones thread would have been very different.) This information has nothing to do with the contents of the show itself, which, with one big exception, haven’t changed much since the premiere. But it’s hard not to think about it. In some ways, this may be the greatest difference between this season and the initial run, since there was always hope that the series would be renewed by ABC, or that Fire Walk With Me would tie off any loose ends. Unlike the first generation of fans, we know that this is it, and it can hardly fail to affect our impressions, even if Lynch still whispers in our heads: “You never say never.”

Written by nevalalee

August 14, 2017 at 8:48 am

Bester of both worlds

with one comment

In 1963, the editor Robert P. Mills put together an anthology titled The Worlds of Science Fiction, for which fifteen writers—including Isaac Asimov, Robert A. Heinlein, and Ray Bradbury—were invited to contribute one of their favorite stories. Mills also approached Alfred Bester, the author of the classic novels The Demolished Man and The Stars My Destination, who declined to provide a selection, explaining: “I don’t like any of [my stories]. They’re all disappointments to me. This is why I rarely reread my old manuscripts; they make me sick. And when, occasionally, I come across a touch that pleases me, I’m convinced that I never wrote it—I believe that an editor added it.” When Mills asked if he could pick a story that at least gave him pleasure in the act of writing it, Bester responded:

No. A writer is extremely schizophrenic; he is both author and critic. As an author he may have moments of happiness while he’s creating, but as a critic he is indifferent to his happiness. It cannot influence his merciless appraisal of his work. But there’s an even more important reason. The joy you derive from creating a piece of work has no relationship to the intrinsic value of the work. It’s a truism on Broadway that when an actor particularly enjoys the performance he gives, it’s usually his worst. It’s also true that the story which gives the author the most pain is often his best.

Bester finally obliged with the essay “My Private World of Science Fiction,” which Mills printed as an epilogue. Its centerpiece is a collection of two dozen ideas that Bester plucked from his commonplace book, which he describes as “the heavy leather-bound journal that I’ve been keeping for twenty years.” These scraps and fragments, Bester explains, are his best works, and they inevitably disappoint him when they’re turned into stories. And the bits and pieces that he provides are often dazzling in their suggestiveness: “A circulating brain library in a Womrath’s of the future, where you can rent a brain for any purpose.” “A story about weather smugglers.” “There must be a place where you can go to remember all the things that never happened to you.” And my personal favorite:

The Lefthanded Killer: a tour de force about a murder which (we tell the reader immediately) was committed by a lefthanded killer. But we show, directly or indirectly, that every character is righthanded. The story starts with, “I am the murderer,” and then goes on to relate the mystery, never revealing who the narrator is…The final twist; killer-narrator turns out to be an unborn baby, the survivor of an original pair of twins. The lefthand member killed his righthand brother in the womb. The entire motivation for the strange events that follow is the desire to conceal the crime. The killer is a fantastic and brilliant monster who does not realize that the murder would have gone unnoticed.

Every writer has a collection of story fragments like this—mine takes up a page in a notebook of my own—but few ever publish theirs, and it’s fascinating to wonder at Bester’s motivations for making his unused ideas public. I can think of three possible reasons. The first, and perhaps the most plausible, is that he knew that many of these premises were more interesting in capsule form than when written out as full stories, and so, in acknowledgement of what I’ve called the Borges test, he simply delivered them that way. (He also notes that ideas are cheap: “The idea itself is relatively unimportant; it’s the writer who develops it that makes the big difference…It is only the amateur who worries about ‘his idea being stolen.'”) Another possibility is that he wanted to convey how stray thoughts in a journal like this can mingle and combine in surprising ways, which is one of the high points of any writer’s life:

That’s the wonder of the Commonplace Book; the curious way an incomprehensible note made in 1950 can combine with a vague entry made in 1960 to produce a story in 1970. In A Life in the Day of a Writer, perhaps the most brilliant portrait of an author in action ever painted, Tess Slesinger wrote: “He rediscovered the miracle of something on page twelve tying up with something on page seven which he had not understood when he wrote it…”

Bester concludes of his ideas: “They’ll cross-pollinate, something totally unforeseen will emerge, and then, alas, I’ll have to write the story and destroy it. This is why your best is always what you haven’t written yet.”

Yet the real explanation, I suspect, lies in that line “I’ll have to write the story,” which gets at the heart of Bester’s remarkable career. In reality, Bester is all but unique among major science fiction writers in that he never seemed to “have to write” anything. He contributed short stories to Astounding for a few heady years before World War II, then disappeared for the next decade to do notable work in comic books, radio, and television. Even after he returned, there was a sense that science fiction only occupied part of his attention. He published a mainstream novel, wrote television scripts, and worked as a travel writer and senior editor for the magazine Holiday, and the sheer number of ideas that he never used seems to reflect the fact that he only turned to science fiction when he really felt like it. (Bester should have been an ideal writer for John W. Campbell, who, if he could have managed it, would have loved a circle of writers that consisted solely of professional men in other fields who wrote on the side—they were more likely to take his ideas and rewrite to order than either full-time pulp authors or hardcore science fiction fans. And the story of how Campbell alienated Bester over the course of a single meeting is one of the most striking anecdotes from the whole history of the genre.) Most professional writers couldn’t afford to allow their good ideas to go to waste, but Bester was willing to let them go, both because he had other sources of income and because he knew that there was plenty more where that came from. I still think of Heinlein as the genre’s indispensable writer, but Bester might be a better role model, if only because he seemed to understand, rightly, that there were realms to explore beyond the worlds of science fiction.

Written by nevalalee

August 11, 2017 at 9:33 am

The conveyor belt

leave a comment »

For all the endless discussion of various aspects of Twin Peaks, one quality that sometimes feels neglected is the incongruous fact that it had one of the most attractive casts in television history. In that respect—and maybe in that one alone—it was like just about every other series that ever existed. From prestige dramas to reality shows to local newscasts, the story of television has inescapably been that of beautiful men and women on camera. A show like The Hills, which was one of my guilty pleasures, seemed to be consciously trying to see how long it could coast on surface beauty alone, and nearly every series, ambitious or otherwise, has used the attractiveness of its actors as a commercial or artistic strategy. (In one of the commentary tracks on The Simpsons, a producer describes how a network executive might ask indirectly about the looks of the cast of a sitcom: “So how are we doing aesthetically?”) If this seemed even more pronounced on Twin Peaks, it was partially because, like Mad Men, it took its conventionally glamorous actors into dark, unpredictable places, and also because David Lynch had an eye for a certain kind of beauty, both male and female, that was more distinctive than that of the usual soap opera star. He’s continued this trend in the third season, which has been populated so far by such striking presences as Chrysta Bell, Ben Rosenfield, and Madeline Zima, and last night’s episode features an extended, very funny scene between a delighted Gordon Cole and a character played by Bérénice Marlohe, who, with her red lipstick and “très chic” spike heels, might be the platonic ideal of his type.

Lynch isn’t the first director to display a preference for actors, particularly women, with a very specific look—although he’s thankfully never taken it as far as his precursor Alfred Hitchcock did. And the notion that a film or television series can consist of little more than following around two beautiful people with a camera has a long and honorable history. My two favorite movies of my lifetime, Blue Velvet and Chungking Express, both understand this implicitly. It’s fair to say that the second half of the latter film would be far less watchable if it didn’t involve Tony Leung and Faye Wong, two of the most attractive people in the world, and Wong Kar-Wai, like so many filmmakers before him, uses it as a psychological hook to take us into strange, funny, romantic places. Blue Velvet is a much darker work, but it employs a similar lure, with the actors made up to look like illustrations of themselves. In a Time cover story on Lynch from the early nineties, Richard Corliss writes of Kyle MacLachlan’s face: “It is a startling visage, as pure of line as an art deco vase, with soft, all-American features and a comic-book hero’s jutting chin—you could park a Packard on it.” It echoes what Pauline Kael says of Isabella Rossellini in Blue Velvet: “She even has the kind of nostrils that cover artists can represent accurately with two dots.” MacLachlan’s chin and Rossellini’s nose would have caught our attention in any case, but it’s also a matter of lighting and makeup, and Lynch shoots them to emphasize their roots in the pulp tradition, or, more accurately, in the subconscious store of images that we take from those sources. And the casting gets him halfway there.

This leaves us in a peculiar position when it comes to the third season of Twin Peaks, which, both by nature and by design, is about aging. Mark Frost said in an interview: “It’s an exercise in engaging with one of the most powerful themes in all of art, which is the ruthless passage of time…We’re all trapped in time and we’re all going to die. We’re all traveling along this conveyor belt that is relentlessly moving us toward this very certain outcome.” One of the first, unforgettable images from the show’s promotional materials was Kyle MacLachlan’s face, a quarter of a century older, emerging from the darkness into light, and our feelings toward these characters when they were younger inevitably shape the way we regard them now. I felt this strongly in two contrasting scenes from last night’s episode. It offers us our first extended look at Sarah Palmer, played by Grace Zabriskie, who delivers a freakout in a grocery store that reminds us of how much we’ve missed and needed her—it’s one of the most electrifying moments of the season. And we also finally see Audrey Horne again, in a brutally frustrating sequence that feels to me like the first time that the show’s alienating style comes off as a miscalculation, rather than as a considered choice. Audrey isn’t just in a bad place, which we might have expected, but a sad, unpleasant one, with a sham marriage and a monster of a son, and she doesn’t even know the worst of it yet. It would be a hard scene to watch with anyone, but it’s particularly painful when we set it against our first glimpse of Audrey in the original series, when we might have said, along with the Norwegian businessman at the Great Northern Hotel: “Excuse me, is there something wrong, young pretty girl?”

Yet the two scenes aren’t all that dissimilar. Both Sarah and Audrey are deeply damaged characters who could fairly say: “Things can happen. Something happened to me.” And I can only explain away the difference by confessing that I was a little in love in my early teens with Audrey. Using those feelings against us—much as the show resists giving us Dale Cooper again, even as it extravagantly develops everything around him—must have been what Lynch and Frost had in mind. And it isn’t the first time that this series has toyed with our emotions about beauty and death. The original dream girl of Twin Peaks, after all, was Laura Palmer herself, as captured in two of its most indelible images: Laura’s prom photo, and her body wrapped in plastic. (Sheryl Lee, like January Jones in Mad Men, was originally cast for her look, and only later did anyone try to find out whether or not she could act.) The contrast between Laura’s lovely features and her horrifying fate, in death and in the afterlife, was practically the motor on which the show ran. Her face still opens every episode of the revival, dimly visible in the title sequence, but it also ended each installment of the original run, gazing out from behind the prison bars of the closing credits to the strains of “Laura Palmer’s Theme.” In the new season, the episodes generally conclude with whatever dream pop band Lynch feels like showcasing, usually with a few cool women, and I wouldn’t want to give that up. But I also wonder whether we’re missing something when we take away Laura at the end. This season began with Cooper being asked to find her, but she often seems like the last thing on anyone’s mind. Twin Peaks never allowed us to forget her before, because it left us staring at her photograph each week, which was the only time that one of its beautiful faces seemed to be looking back at us.

The driver and the signalman

leave a comment »

In his landmark book Design With Nature, the architect Ian L. McHarg shares an anecdote from the work of an English biologist named George Scott Williamson. McHarg, who describes Williamson as “a remarkable man,” mentions him in passing in a discussion of the social aspects of health: “He believed that physical, mental, and social health were unified attributes and that there were aspects of the physical and social environment that were their corollaries.” Before diving more deeply into the subject, however, McHarg offers up an apparently unrelated story that was evidently too interesting to resist:

One of the most endearing stories of this man concerns a discovery made when he was undertaking a study of the signalmen who maintain lonely vigils while operating the switches on British railroads. The question to be studied was whether these lonely custodians were subject to boredom, which would diminish their dependability. It transpired that lonely or not, underpaid or not, these men had a strong sense of responsibility and were entirely dependable. But this was not the major perception. Williamson learned that every single signalman, from London to Glasgow, could identify infallibly the drivers of the great express trains which flashed past their vision at one hundred miles per hour. The drivers were able to express their unique personalities through the unlikely and intractable medium of some thousand tons of moving train, passing in a fraction of a second. The signalmen were perceptive to this momentary expression of the individual, and Williamson perceived the power of the personality.

I hadn’t heard of Williamson before reading this wonderful passage, and all that I know about him is that he was the founder of the Peckham Experiment, an attempt to provide inexpensive health and recreation services to a neighborhood in Southeast London. The story of the signalmen seems to make its first appearance in his book Science, Synthesis, and Sanity: An Inquiry Into the Nature of Living, which he cowrote with his wife and collaborator Innes Hope Pearse. They relate:

Or again, sitting in a railway signal box on a dark night, in the far distance from several miles away came the rumble of the express train from London. “Hallo,” said my friend the signalman. “Forsyth’s driving her—wonder what’s happened to Courtney?” Next morning, on inquiry of the stationmaster at the junction, I found it was true. Courtney had been taken ill suddenly and Forsyth had deputized for him—all unknown, of course, to the signalman who in any case had met neither Forsyth nor Courtney. He knew them only as names on paper and by their “action-pattern” impressed on a dynamic medium—a unique action-pattern transmitted through the rumble of an unseen train. Or, in a listening post with nothing visible in the sky, said the listener: “That’s ‘Lizzie,’ and Crompton’s flying her.” “Lizzie” an airplane, and her pilot imprinting his action-pattern on her course.

And while Williamson and Pearse are mostly interested in the idea of an individual’s “action-pattern” being visible in an unlikely medium, it’s hard not to come away more struck, like McHarg, by the image of the lone signalman, the passing machine, and the transient moment of connection between them.

As I read over this, it occurred to me that it perfectly encapsulated our relationship with a certain kind of pop culture. We’re the signalmen, and the movie or television show is the train. As we sit in our living rooms, lonely and relatively isolated, something passes across our field of vision—an episode of Game of Thrones, say, which often feels like a locomotive to the face. This is the first time that we’ve seen it, but it represents the end result of a process that has unfolded for months or years, as the episode was written, shot, edited, scored, and mixed, with the contributions of hundreds of men and women we wouldn’t be able to name. As we experience it, however, we see the glimmer of another human being’s personality, as expressed through the narrative machine. It isn’t just a matter of the visible choices made on the screen, but of something less definable, a “style” or “voice” or “attitude,” behind which, we think, we can make out the amorphous factors of influence and intent. We identify an artist’s obsessions, hangups, and favorite tricks, and we believe that we can recognize the mark of a distinctive style even when it goes uncredited. Sometimes we have a hunch about what happened on the set that day, or the confluence of studio politics that led to a particular decision, even if we have no way of knowing it firsthand. (This was one of the tics of Pauline Kael’s movie reviews that irritated Renata Adler: “There was also, in relation to filmmaking itself, an increasingly strident knowingness: whatever else you may think about her work, each column seemed more hectoringly to claim, she certainly does know about movies. And often, when the point appeared most knowing, it was factually false.”) We may never know the truth, but it’s enough if a theory seems plausible. And the primary difference between us and the railway signalman is that we can share our observations with everyone in sight.

I’m not saying that these inferences are necessarily incorrect, any more than the signalmen were wrong when they recognized the personal styles of particular drivers. If Williamson’s account is accurate, they were often right. But it’s worth emphasizing that the idea that you can recognize a driver from the passage of a train is no less strange than the notion that we can know something about, say, Christopher Nolan’s personality from Dunkirk. Both are “unlikely and intractable” mediums that serve as force multipliers for individual ability, and in the case of a television show or movie, there are countless unseen variables that complicate our efforts to attribute anything to anyone, much less pick apart the motivations behind specific details. The auteur theory in film represents an attempt to read movies like novels, but as Thomas Schatz pointed out decades ago in his book The Genius of the System, trying to read Casablanca as the handiwork of Michael Curtiz, rather than that of all of its collaborators taken together, is inherently problematic. And this is easy to forget. (I was reminded of this by the recent controversy over David Benioff and D.B. Weiss’s pitch for their Civil War alternate history series Confederate. I agree with the case against it that the critic Roxane Gay presents in her opinion piece for the New York Times, but the fact that we’re closely scrutinizing a few paragraphs for clues about the merits of a show that doesn’t even exist only hints at how fraught the conversation will be after it actually premieres.) There’s a place for informed critical discussion about any work of art, but we’re often drawing conclusions based on the momentary passage of a huge machine before our eyes, and we don’t know much about how it got there or what might be happening inside. Most of us aren’t even signalmen, who are a part of the system itself. We’re trainspotters.

Off the hook

leave a comment »

In his wonderful interview in John Brady’s The Craft of the Screenwriter, Robert Towne—who might best be described as the Christopher McQuarrie of his time—tosses off a statement that is typically dense with insight:

One of the things that people say when they first start writing movies is, “Jeez, I have this idea for a movie. This is the way it opens. It’s a really great opening.” And of course they don’t know where to go from there. That’s true not only of people who are just thinking of writing movies, but very often of people who write them. They’re anxious for a splashy beginning to hook an audience, but then you end up paying for it with an almost mathematical certainty. If you have a lot of action and excitement at the beginning of a picture, there’s going to have to be some explanation, some character development somewhere along the line, and there will be a big sag about twenty minutes after you get into a film with a splashy opening. It’s something you learn. I don’t know if you’d call it technique. It’s made me prefer soft openings for films. It’s been my experience that an audience will forgive you almost anything at the beginning of the picture, but almost nothing at the end. If they’re not satisfied with the end, nothing that led up to it is going to help.

There’s a lot to absorb and remember here, particularly the implication, which I love, that a narrative has a finite amount of energy, and that if you use up too much of it at the start, you end up paying for it later.

For now, though, I’d like to focus on what Towne says about openings. He’s right in cautioning screenwriters against trying to start at a high point, which may not even be possible: I’ve noted elsewhere that few of the great scenes that we remember from movies come at the very beginning, since they require a degree of setup to really pay off. Yet at this very moment, legions of aspiring writers are undoubtedly sweating over a perfect grabber opening for their screenplay. In his interview with Brady, which was published in 1981, Towne blames this on television:

Unlike television, you don’t have to keep people from turning the channel to another network when they’re in the theater. They’ve paid three-fifty or five dollars and if the opening ten or fifteen minutes of a film are a little slow, they are still going to sit a few minutes, as long as it eventually catches hold. I believe in soft openings…Why bother to capture [the audience’s] interest at the expense of the whole film? They’re there. They’re not going anywhere.

William Goldman draws a similar contrast between the two forms in Adventures in the Screen Trade, writing a clumsy opening hook for a screenplay—about a girl being chased through the woods by a “disfigured giant”—and explaining why it’s bad: “Well, among other things, it’s television.” He continues:

This paragraph contains all that I know about writing for television. They need a hook. And they need it fast. Because they’re panicked you’ll switch to ABC. So TV stuff tends to begin with some kind of grabber. But in a movie, and only at the beginning of a movie, we have time. Not a lot, but some.

And while a lot has changed since Towne and Goldman made these statements, including the “three-fifty” that used to be the price of a ticket, the underlying point remains sound. Television calls for a different kind of structure and pacing than a movie, and screenwriters shouldn’t confuse the two. Yet I don’t think that the average writer who is fretting about the opening of his script is necessarily making that mistake, or thinking in terms of what viewers will see in a theater. I suspect that he or she is worrying about a very different audience—the script reader at an agency or production company. A moviegoer probably won’t walk out if the opening doesn’t grab them, but the first reader of a screenplay will probably toss it aside if the first few pages don’t work. (This isn’t just the case with screenwriters, either. Writers of short stories are repeatedly told that they need to hook the reader in the first paragraph, and the result is often a kind of palpable desperation that can actively turn off editors.) One reason, of course, why Towne and Goldman can get away with “soft” openings is that they’ve been successful enough to be taken seriously, both in person and in print. As Towne says:

There have been some shifts in attitudes toward me. If I’m in a meeting with some people, and if I say, “Look, fellas, I don’t think it’s gonna work this way,” there is a tendency to listen to me more. Before, they tended to dismiss a little more quickly than now.

Which, when you think about it, is exactly the same phenomenon as giving the script the benefit of the doubt—it buys Towne another minute or two to make his point, which is all a screenwriter can ask.

The sad truth is that a script trying to stand out from the slush pile and a filmed narrative have fundamentally different needs. In some cases, they’re diametrically opposed. Writers trying to break into the business can easily find themselves caught between the need to hype the movie on the page and their instincts about how the story deserves to be told, and that tension can be fatal. A smart screenwriter will often draw a distinction between the selling version, which is written with an eye to the reader, and the shooting script, which provides the blueprint for the movie, but most aspiring writers don’t have that luxury. And if we think of television as a model for dealing with distracted viewers or readers, it’s only going to get worse. In a recent essay for Uproxx titled “Does Anyone Still Have Time to Wait For Shows to Get Good?”, the legendary critic Alan Sepinwall notes that the abundance of great shows makes it hard to justify waiting for a series to improve, concluding:

We all have a lot going on, in both our TV and non-TV lives, and if you don’t leave enough breadcrumbs in the early going, your viewers will just wander off to watch, or do, something else. While outlining this post, I tweeted a few things about the phenomenon, phrasing it as “It Gets Better After Six Episodes”—to which many people replied with incredulous variations on, “Six? If it’s not good after two, or even one, I’m out, pal.”

With hundreds of shows instantly at our disposal—as opposed to the handful of channels that existed when Towne and Goldman were speaking—we’ve effectively been put into the position of a studio reader with a stack of scripts. If we don’t like what we see, we can move on. The result has been the emotionally punishing nature of so much peak television, which isn’t about storytelling so much as heading off distraction. And if it sometimes seems that many writers can’t do anything else, it’s because it’s all they were ever taught to do.

Frogs for snakes

with one comment

If you’re the sort of person who can’t turn away from a show business scandal with leaked memos, insider anecdotes, and accusations of bad conduct on both sides, the last two weeks have offered a pair of weirdly similar cases. The first involves Frank Darabont, the former showrunner of The Walking Dead, who was fired during the show’s second season and is now suing the network for a share of profits from the most popular series in the history of cable television. In response, AMC released a selection of Darabont’s emails intended to demonstrate that his firing was justified, and it makes for queasily riveting reading. Some are so profane that I don’t feel comfortable quoting them here, but this one gives you a sense of the tone:

If it were up to me, I’d have not only fired [these two writers] when they handed me the worst episode three script imaginable, I’d have hunted them down and f—ing killed them with a brick, then gone and burned down their homes. I haven’t even spoken to those worthless talentless hack sons-of-bitches since their third draft was phoned in after five months of all their big talk and promises that they’d dig deep and have my back covered…Calling their [script] “phoned-in” would be vastly overstating, because they were too busy wasting my time and your money to bother picking the damn phone up. Those f—ing overpaid con artists.

In an affidavit, Darabont attempted to justify his words: “Each of these emails must be considered in context. They were sent during an intense and stressful two-year period of work during which I was fighting like a mother lion to protect the show from harm…Each of these emails was sent because a ‘professional’ showed up whose laziness, indifference, or incompetence threatened to sink the ship. My tone was the result of the stress and magnitude of this extraordinary crisis. The language and hyperbole of my emails were harsh, but so were the circumstances.”

Frankly, I don’t find this quite as convincing as the third act of The Shawshank Redemption. As it happened, the Darabont emails were released a few days before a similar dispute engulfed Steve Whitmire, the puppeteer who had been performing Kermit the Frog since the death of Jim Henson. After the news broke last week that Whitmire had been fired, accounts soon emerged of his behavior that strikingly echoed the situation with Darabont: “He’d send emails and letters attacking everyone, attacking the writing and attacking the director,” Brian Henson told the New York Times. Whitmire has disputed the characterization: “Nobody was yelling and screaming or using inappropriate language or typing in capitals. It was strictly that I was sending detailed notes. I don’t feel that I was, in any way, disrespectful by doing that.” And his defense, like Darabont’s, stems from what he frames as a sense of protectiveness toward the show and its characters. Of a plot point involving Kermit and his nephew Robin on the defunct series The Muppets, Whitmire said to the Hollywood Reporter:

I don’t think Kermit would lie to him. I think that as Robin came to Kermit, he would say “Things happen, people go their separate ways, but that doesn’t mean we don’t care about you.” Kermit is too compassionate to lie to him to spare his feelings…We have been doing these characters for a long, long time and we know them better than anybody. I thought I was aiding to keep it on track, and I think a big reason why the show was canceled…was because that didn’t happen. I am not saying my notes would have saved it, but I think had they listened more to all of the performers, it would have made a really big difference.

Unfortunately, the case of Whitmire, like that of Darabont, is more complicated than it might seem. Henson’s children have come out in support of the firing, with Brian Henson, the public face of the company, saying that he had reservations about Whitmire’s behavior for over a decade:

I have to say, in hindsight, I feel pretty guilty that I burdened Disney by not having recast Kermit at that point because I knew that it was going to be a real problem. And I have always offered that if they wanted to recast Kermit, I was all for it, and I would absolutely help. I am very glad we have done this now. I think the character is better served to remove this destructive energy around it.

Elsewhere, Lisa Henson told the Times that Whitmire had become increasingly controlling, refusing to hire an understudy and blackballing aspiring puppeteers after the studio tried to cast alternate performers, as a source said to Gizmodo: “[Steve] told Disney that the people who were in the audition room are never allowed to work with the Muppets again.” For a Muppet fan, this is all very painful, so I’ll stop here, except to venture two comments. One is that Darabont and Whitmire may well have been right to be concerned. The second is that in expressing their thoughts, they alienated a lot of the people around them, and their protectiveness toward the material ended in them being removed from the creative process altogether. If they were simply bad at giving notes—and the evidence suggests that at least Darabont was—they weren’t alone. No one gives or takes notes well. You could even argue that the whole infrastructure of movie and television production exists to make the exchange of notes, which usually goes in just one direction, incrementally less miserable. And it doesn’t work.

Both men responded by trying to absorb more creative control into themselves, which is a natural response. Brian Henson recalls Whitmire saying: “I am now Kermit, and if you want the Muppets, you better make me happy, because the Muppets are Kermit.” And the most fascinating passage in Darabont’s correspondence is his proposal for how the show ought to be run in the future:

The crew goes away or stands there silently without milling or chattering about bullshit that doesn’t apply to the job at hand…The director [and crew]…stand there and carefully read the scene out loud word for word. Especially and including all description…The important beats are identified and discussed in terms of how they are to be shot. In other words, sole creative authority is being taken out of the director’s hands. It doesn’t matter that our actors are doing good work if the cameras fail to capture it. Any questions come straight to me by phone or text. If necessary I will shoot the coverage on my iPhone and text it to the set. The staging follows the script to the letter and is no longer willy-nilly horseshit with cameras just hosing it down from whatever angle…If the director tries to not shoot what is written, the director is beaten to death on the spot. A trained monkey is brought in to complete the job.

Reading this, I found myself thinking of an analogous situation that arose when David Mamet was running The Unit. (I’m aware that The Unit wasn’t exactly a great show—I don’t think I got through more than two episodes—but my point remains the same.) Mamet, like Darabont, was unhappy with the scripts that he was getting, but instead of writing everything himself, he wrote a memo on plot structure so lucid and logical that it has been widely shared online as a model of how to tell a story. Instead of raging at those around him, he did what he could to elevate them to his level. It strikes me as the best possible response. But as Kermit might say, that’s none of my business.

Written by nevalalee

July 19, 2017 at 9:02 am

The genius naïf

leave a comment »

Last night, after watching the latest episode of Twin Peaks, I turned off the television before the premiere of the seventh season of Game of Thrones. This is mostly because I only feel like subscribing to one premium channel at a time, but even if I still had HBO, I doubt that I would have tuned in. I gave up on Game of Thrones a while back, both because I was uncomfortable with its sexual violence and because I felt that the average episode had degenerated into a holding pattern—it cut between storylines simply to remind us that they still existed, and it relied on unexpected character deaths and bursts of bloodshed to keep the audience awake. The funny thing, of course, is that you could level pretty much the same charges against the third season of Twin Peaks, which I’m slowly starting to feel may be the television event of the decade. Its images of violence against women are just as unsettling now as they were a quarter of a century ago, when Madeline Ferguson met her undeserved end; it cuts from one subplot to another so inscrutably that I’ve compared its structure to that of a sketch comedy show; and it has already delivered a few scenes that rank among the goriest in recent memory. So what’s the difference? If you’re feeling generous, you can say that one is an opportunistic display of popular craftsmanship, while the other is a singular, if sometimes incomprehensible, artistic vision. And if you’re less forgiving, you can argue that I’m being hard on one show that I concluded was jerking me around, while indulging another that I wanted badly to love.

It’s a fair point, although I don’t think it’s necessarily true, based solely on my experience of each show in the moment. I’ve often found my attention wandering during even solid episodes of Game of Thrones, while I’m rarely less than absorbed for the full hour of Twin Peaks, even though I’d have trouble explaining why. But there’s no denying the fact that I approach each show in a different state of mind. One of the most obvious criticisms of Twin Peaks, then and now, is that its pedigree prompts viewers to overlook or forgive scenes that might seem questionable in a more conventional series. (There have been times, I’ll confess, when I’ve felt like Homer Simpson chuckling “Brilliant!” and then confessing: “I have absolutely no idea what’s going on.”) Yet I don’t think we need to apologize for this. The history of the series, the track record of its creators, and everything implied by its brand mean that most viewers are willing to give it the benefit of the doubt. David Lynch and Mark Frost are clearly aware of their position, and they’ve leveraged it to the utmost, resulting in a show in which they’re free to do just about anything they like. It’s hard to imagine any other series getting away with this, but it’s also hard to imagine another show persuading a million viewers each week to meet it halfway. The implicit contract between Game of Thrones and its audience is very different, which makes the show’s lapses harder to forgive. One of the great fascinations of Lynch’s career is whether he even knows what he’s doing half the time, and it’s much less interesting to ask this question of David Benioff and D.B. Weiss, any more than it is of Chris Carter.

By now, I don’t think there’s any doubt that Lynch knows exactly what he’s doing, but that confusion is still central to his appeal. Pauline Kael’s review of Blue Velvet might have been written of last night’s Twin Peaks:

You wouldn’t mistake frames from Blue Velvet for frames from any other movie. It’s an anomaly—the work of a genius naïf. If you feel that there’s very little art between you and the filmmaker’s psyche, it may be because there’s less than the usual amount of inhibition…It’s easy to forget about the plot, because that’s where Lynch’s naïve approach has its disadvantages: Lumberton’s subterranean criminal life needs to be as organic as the scrambling insects, and it isn’t. Lynch doesn’t show us how the criminals operate or how they’re bound to each other. So the story isn’t grounded in anything and has to be explained in little driblets of dialogue. But Blue Velvet has so much aural-visual humor and poetry that it’s sustained despite the wobbly plot and the bland functional dialogue (that’s sometimes a deliberate spoof of small-town conventionality and sometimes maybe not)…Lynch skimps on these commercial-movie basics and fouls up on them, too, but it’s as if he were reinventing movies.

David Thomson, in turn, called the experience of seeing Blue Velvet a moment of transcendence: “A kind of passionate involvement with both the story and the making of a film, so that I was simultaneously moved by the enactment on screen and by discovering that a new director had made the medium alive and dangerous again.”

Twin Peaks feels more alive and dangerous than Game of Thrones ever did, and the difference, I think, lies in our awareness of the effects that the latter is trying to achieve. Even at its most shocking, there was never any question about what kind of impact it wanted to have, as embodied by the countless reaction videos that it inspired. (When you try to imagine videos of viewers reacting to Twin Peaks, you get a sense of the aesthetic abyss that lies between these two shows.) There was rarely a scene in which the intended emotion wasn’t clear, and even when it deliberately sought to subvert our expectations, it was by substituting one stimulus and response for another—which doesn’t mean that it wasn’t effective, or that there weren’t moments, at its best, that affected me as powerfully as any I’ve ever seen. Even the endless succession of “Meanwhile, back at the Wall” scenes had a comprehensible structural purpose. On Twin Peaks, by contrast, there’s rarely any sense of how we’re supposed to be feeling about any of it. Its violence is shocking because it doesn’t seem to serve anything, certainly not anyone’s character arc, and our laughter is often uncomfortable, so that we don’t know if we’re laughing at the situation onscreen, at the show, or at ourselves. It may not be an experiment that needs to be repeated ever again, any more than Blue Velvet truly “reinvented” anything over the long run, except my own inner life. But at a time when so many prestige dramas seem content to push our buttons in ever more expert and ruthless ways, I’m grateful for a show that resists easy labels. Lynch may or may not be a genius naïf, but no ordinary professional could have done what he does here.

Written by nevalalee

July 17, 2017 at 7:54 am

Children of the Lens

with 3 comments

During World War II, as the use of radar became widespread in battle, the U.S. Navy introduced the Combat Information Center, a shipboard tactical room with maps, consoles, and screens of the kind that we’ve all seen in television and the movies. At the time, though, it was like something out of science fiction, and in fact, back in 1939, E.E. “Doc” Smith had described a very similar display in the serial Gray Lensman:

Red lights are fleets already in motion…Greens are fleets still at their bases. Ambers are the planets the greens took off from…The white star is us, the Directrix. That violet cross way over there is Jalte’s planet, our first objective. The pink comets are our free planets, their tails showing their intrinsic velocities.

After the war, in a letter dated June 11, 1947, the editor John W. Campbell told Smith that the similarity was more than just a coincidence. Claiming to have been approached “unofficially, and in confidence” by a naval officer who played an important role in developing the C.I.C., Campbell said:

The entire setup was taken specifically, directly, and consciously from the Directrix. In your story, you reached the situation the Navy was in—more communications channels than integration techniques to handle it. You proposed such an integrating technique, and proved how advantageous it could be…Sitting in Michigan, some years before Pearl Harbor, you played a large share in the greatest and most decisive naval action of the recent war!

Unfortunately, this wasn’t true. The naval officer in question, Cal Laning, was indeed a science fiction fan—he was close friends with Robert A. Heinlein—but any resemblance to the Directrix was coincidental, or, at best, an instance of convergence as fiction and reality addressed the same set of problems. (An excellent analysis of the situation can be found in Ed Wysocki’s very useful book An Astounding War.)

If Campbell was tempted to overstate Smith’s influence, this isn’t surprising—the editor was disappointed that science fiction hadn’t played the role that he had envisioned for it in the war, and this wasn’t the first or last time that he would gently exaggerate it. Fifteen years later, however, Smith’s fiction had a profound impact on a very different field. In 1962, Steve Russell of M.I.T. developed Spacewar, the first video game to be played on more than one computer, with two spaceships dueling with torpedoes in the gravity well of a star. In an article for Rolling Stone written by my hero Stewart Brand, Russell recalled:

We had this brand new PDP-1…It was the first minicomputer, ridiculously inexpensive for its time. And it was just sitting there. It had a console typewriter that worked right, which was rare, and a paper tape reader and a cathode ray tube display…Somebody had built some little pattern-generating programs which made interesting patterns like a kaleidoscope. Not a very good demonstration. Here was this display that could do all sorts of good things! So we started talking about it, figuring what would be interesting displays. We decided that probably you could make a two-dimensional maneuvering sort of thing, and decided that naturally the obvious thing to do was spaceships…

I had just finished reading Doc Smith’s Lensman series. He was some sort of scientist but he wrote this really dashing brand of science fiction. The details were very good and it had an excellent pace. His heroes had a strong tendency to get pursued by the villain across the galaxy and have to invent their way out of their problem while they were being pursued. That sort of action was the thing that suggested Spacewar. He had some very glowing descriptions of spaceship encounters and space fleet maneuvers.

The “somebody” whom he mentions was Marvin Minsky, another science fiction fan, and Russell’s collaborator Martin Graetz elsewhere cited Smith’s earlier Skylark series as an influence on the game.

But the really strange thing is that Campbell, who had been eager to claim credit for Smith when it came to the C.I.C., never made this connection in print, at least not as far as I know, although he was hugely interested in Spacewar. In the July 1971 issue of Analog, he published an article on the game by Albert W. Kuhfeld, who had developed a variation of it at the University of Minnesota. Campbell wrote in his introductory note:

For nearly a dozen years I’ve been trying to get an article on the remarkable educational game invented at M.I.T. It’s a great game, involving genuine skill in solving velocity and angular relation problems—but I’m afraid it will never be widely popular. The playing “board” costs about a quarter of a megabuck!

Taken literally, the statement “nearly a dozen years” implies that the editor heard about Spacewar before it existed, but the evidence suggests that he learned of it almost at once. Kuhfeld writes: “Although it uses a computer to handle orbital mechanics, physicists and mathematicians have no great playing advantage—John Campbell’s seventeen-year-old daughter beat her M.I.T. student-instructor on her third try—and thereafter.” Campbell’s daughter was born in 1945, which squares nicely with a visit around the time of the game’s first appearance. It isn’t implausible that Campbell would have seen and heard about it immediately—he had been close to the computer labs at Harvard and M.I.T. since the early fifties, and he made a point of dropping by once a year. If the Lensman series, the last three installments of which he published, had really been an influence on Spacewar, it seems inconceivable that nobody would have told him. For some reason, however, Campbell, who cheerfully promoted the genre’s impact on everything from the atomic bomb to the moon landing, didn’t seize the opportunity to do the same for video games, in an article that he badly wanted to publish. (In a letter to the manufacturers of the PDP-1, whom he had approached unsuccessfully for a writeup, he wrote: “I’ve tried for years to get a story on Spacewar, and I’ve repeatedly had people promise one…and not deliver.”)

So why didn’t he talk about it? The obvious answer is that he didn’t realize that Spacewar, which he thought would “never be widely popular,” was anything more than a curiosity, and if he had lived for another decade—he died just a few months after the article came out—he would have pushed the genre’s connection to video games as insistently as he did anything else. But there might have been another factor at play. For clues, we can turn to the article in Rolling Stone, in which Brand visited the Stanford Artificial Intelligence Laboratory with Annie Leibovitz, which is something that I wish I could have seen. Brand opens with the statement that computers are coming to the people, and he adds: “That’s good news, maybe the best since psychedelics.” It’s a revealing comparison, and it indicates the extent to which the computing movement was drifting away from everything that Campbell represented. A description of the group’s offices at Stanford includes a detail that, if Campbell had read it, would only have added insult to injury:

Posters and announcements against the Vietnam War and Richard Nixon, computer printout photos of girlfriends…and signs on every door in Tolkien’s elvish Fëanorian script—the director’s office is Imladris, the coffee room The Prancing Pony, the computer room Mordor. There’s a lot of hair on those technicians, and nobody seems to be telling them where to scurry.

In the decade since the editor first encountered Spacewar, a lot had changed, and Campbell might have been reluctant to take much credit for it. The Analog article, which Brand mentions, saw the game as a way to teach people about orbital mechanics; Rolling Stone recognized it as a leading indicator of a development that was about to change the world. And even if he had lived, there might not have been room for Campbell. As Brand concludes:

Spacewar as a parable is almost too pat. It was the illegitimate child of the marrying of computers and graphic displays. It was part of no one’s grand scheme. It served no grand theory. It was the enthusiasm of irresponsible youngsters. It was disreputably competitive…It was an administrative headache. It was merely delightful.
