Alec Nevala-Lee

Thoughts on art, creativity, and the writing life.

Posts Tagged ‘Fargo’

Too far to go


Note: Spoilers follow for the third season of Fargo.

A year and a half ago, I wrote on this blog: “Fargo may turn out to be the most important television series on the air today.” Now that the third season has ended, it feels like a good time to revisit that prediction, which turned out to be sort of right, but not for the reasons that I expected. When I typed those words, cracking the problem of the anthology series felt like the puzzle on which the future of television itself depended. We were witnessing an epochal shift of talent, which is still happening, from movies to the small screen, as big names on both sides of the camera began to realize that the creative opportunities it afforded were in many ways greater than what the studios were prepared to offer. I remain convinced that we’re entering an entertainment landscape in which Hollywood focuses almost exclusively on blockbusters, while dramas and quirky smaller films migrate to cable, or even the networks. The anthology series was the obvious crossover point. It could draw big names for a limited run, it allowed stories to be told over the course of ten tight episodes rather than stretched over twenty or more, it lent itself well to being watched in one huge binge, and it offered viewers the prospect of a definitive conclusion. At its best, it felt like an independent movie given the breathing room and resources of an epic. Fargo, its exemplar, became one of the most fascinating dramas on television in large part because it drew its inspiration from one of the most virtuoso experiments with tone in movie history—a triangulation, established by the original film, between politeness, quiet desperation, and sudden violence. It was a technical trick, but a very good one, and it seemed like a machine that could generate stories forever.

After three seasons, I haven’t changed my mind, even if the show’s underlying formula feels more obvious than before. What I’ve begun to realize about Fargo is that it’s an anthology series that treats each season as a kind of miniature anthology, too, with scenes and entire episodes that stand for nothing but themselves. In the first season, the extended sequence about Oliver Platt’s supermarket magnate was a shaggy dog story that didn’t go anywhere, but now, it’s a matter of strategy. The current season was ostensibly focused on the misfortunes of the Stussy brothers, both played with showy brilliance by Ewan McGregor, but it allowed itself so many digressions that the plot became more like a framing device. It opened with a long interrogation scene set in East Germany that was never referenced again, aside from serving as a thematic overture to the whole—although it can’t be an accident that “Stussy” sounds so much like “Stasi.” Later, there was a self-contained flashback episode set in the science fiction and movie worlds of the seventies, including an animated cartoon to dramatize a story by one of the characters, which turned the series into a set of nesting dolls. It often paused to stage the parables told by the loathsome Varga, which were evidently supposed to cast light on the situation, but rarely had anything to do with it. After the icy control of the first season and the visual nervousness of the second, the third season threaded the needle by simultaneously disciplining its look and indulging its penchant for odd byways. Each episode was like a film festival of short subjects, some more successful than others, and unified mostly by creator Noah Hawley’s confidence that we would follow him wherever he went.

Mostly, he was right, although his success rate wasn’t perfect, as it hardly could have been expected to be. There’s no question that between Fargo and Legion, Hawley has been responsible for some of the most arresting television of the last few years, but the strain occasionally shows. The storytelling and character development on Legion were never as interesting as its visual experiments, possibly because a show can innovate along only so many parameters at once. And Fargo has been so good at its quirky components—it rarely gives us a scene that isn’t riveting in itself—that it sometimes loses track of the overall effect. Like its inspiration, it positions itself as based on true events, even though it’s totally fictional, and in theory, this frees it up to indulge in loose ends, coincidences, and a lack of conventional climaxes, just like real life. But I’ve never entirely bought this. The show is obsessively stylized and designed, and it never feels like a story that could take place anywhere but in the fictional Coenverse. At times, Hawley seems to want it both ways. The character of Nikki Swango, played by Mary Elizabeth Winstead, is endlessly intriguing, and I give the show credit for carrying her story through to what feels like a real conclusion, rather than using her suffering as an excuse to motivate a male protagonist. But when she’s gratuitously targeted by the show’s villains, only to survive and turn into an avenging angel, the result is exactly what I wanted, even if I couldn’t really believe a second of it. It’s just as contrived as any number of storylines on more conventional shows, and although the execution is often spellbinding, it has a way of eliding reasonable objections. When it dispatches Nikki at the end with a standard trick of fate, it feels less like a subversion than the kind of narrative beat that the show has taught us to expect, and by now, it’s dangerously close to a cliché.

This is where the anthology format becomes both a blessing and a curse. By tying off each story after ten episodes, Fargo can allow itself to be wilder and more intense than a show that has to focus on the long game, but it also gets to indulge in problematic storytelling devices that wouldn’t stand up to scrutiny if we had to live with these characters for multiple seasons. Even in its current form, there are troubling patterns. Back in the first season, one of my few complaints revolved around the character of Bill Oswalt, who existed largely to foil the resourceful Molly as she got closer to solving the case. Bill wasn’t a bad guy, and the show took pains to explain the reasons for his skepticism, but their scenes together quickly grew monotonous. They occurred like clockwork, once every episode, and instead of building to something, they were theme and variations, constantly retarding the story rather than advancing it. In the third season, incredibly, Fargo does the same thing, but worse, in the form of Chief Moe Dammik, who exists solely to doubt, undermine, and make fun of our hero, Gloria Burgle, and without the benefit of Bill’s underlying sweetness. Maybe the show avoided humanizing Dammik because it didn’t want to present the same character twice—which doesn’t explain why he had to exist at all. He brought the show to a halt every time he appeared, and his dynamic with Gloria would have seemed lazy even on a network procedural. (And it’s a foil, significantly, that the original Fargo didn’t think was necessary.) Hawley and his collaborators are only human, but so are all writers. And if the anthology format allows them to indulge their strengths while keeping their weaknesses from going too far, that may be the strongest argument for it of all.

Written by nevalalee

June 22, 2017 at 8:45 am

Posted in Television


Moving through time


Note: Spoilers follow for last night’s episode of Twin Peaks.

For all the debate over how best to watch a television show these days, which you see argued with various degrees of seriousness, the options that you’re offered are fairly predictable. If it’s a show on a streaming platform, you’re presented with all of it at once; if it’s on a cable or broadcast channel, you’re not. Between those two extremes, you’re allowed to structure your viewing experience pretty much however you like, and it isn’t just a matter of binging the whole season or parceling out each episode one week at a time. Few of us past the age of thirty have the ability or desire to watch ten hours of anything in one sitting, and the days of slavish faithfulness to an appointment show are ending, too—even if you aren’t recording it on DVR, you can usually watch it online the next day. Viewers are customizing their engagement with a series in ways that would have been unthinkable just fifteen years ago, and networks are experimenting with having it both ways, by airing shows on a weekly basis while simultaneously making the whole season available online. If there’s pushback, it tends to be from creators who are used to having their shows come out sequentially, like Dan Harmon, who managed to get Yahoo to release the sixth season of Community one episode at a time, as if it were still airing on Thursdays at eight. (Yahoo also buried the show on its site so that even fans had trouble figuring out that it was there, but that’s another story, as well as a reminder, in case we needed one, that such decisions aren’t always logical or considered.)

Twin Peaks, for reasons that I’ll discuss in a moment, doesn’t clearly lend itself to one approach or another, which may be why its launch was so muddled. Showtime premiered the first two hours on a Sunday evening, then quietly made the next two episodes available online, although this was so indifferently publicized that it took me a while to hear about it. It then ran episodes three and four again the following week, despite the fact that many of the show’s hardcore fans—and there’s hardly anyone else watching—would have seen them already, only to finally settle into the weekly delivery schedule that David Lynch had wanted in the first place. As a result, it stumbled a bit out of the gate, at least as far as shaping a wider conversation was concerned. You weren’t really sure who was watching those episodes or when. (To be fair, in the absence of blockbuster ratings, viewers watching at different times are what justify this show’s existence.) As I’ve argued elsewhere, this isn’t a series that necessarily benefits from collective analysis, but there’s a real, if less tangible, emotional benefit to be had from collective puzzlement. It’s the understanding that a lot of other people are feeling the same things that you are, at roughly the same time, and that you have more in common with them than you will with anybody else in the world. I’m overstating it, but only a little. Whenever I meet someone who bought Julee Cruise’s first album or knows why Lil was wearing a sour face, I feel like I’ve found a kindred spirit. Twin Peaks started out as a huge cultural phenomenon, dwindling only gradually into a cult show that provided its adherents with their own set of passwords. And I think that it would have had a better chance of happening again now if Showtime had just aired all the episodes once a week from the beginning.

Yet I understand the network’s confusion, because this is both a show that needs to be seen over a period of time and one that can’t be analyzed until we’ve seen the full picture. Reviewing it must be frustrating. Writing about it here, I don’t need to go into much detail, and I’m free to let my thoughts wander wherever they will, but a site like the New York Times or The A.V. Club carries its own burden of expectations, which may not make sense for a show like this. A “recap” of an episode of Twin Peaks is almost a contradiction in terms. You can’t do much more than catalog the disconnected scenes, indulge in some desultory theorizing, and remind readers that they shouldn’t jump to any conclusions until they’ve seen more. It’s like reviewing Mulholland Drive ten minutes at a time—which is ridiculous, but it’s also exactly the position in which countless critics have found themselves. For ordinary viewers, there’s something alluring about the constant suspension of judgment that it requires: I’ve found it as absorbing as any television series I’ve seen in years. Despite its meditative pacing, an episode seems to go by more quickly than most installments of a more conventional show, even the likes of Fargo or Legion, which are clearly drawing from the same pool of ideas. (Noah Hawley is only the latest creator and showrunner to try to deploy the tone of Twin Peaks in more recognizable stories, and while he’s better at it than most, it doesn’t make the effort any less thankless.) But it also hamstrings the online critic, who has no choice but to publish a weekly first draft on the way to a more reasoned evaluation. Everything you write about Twin Peaks, even, or especially, if you love it, is bound to be provisional until you can look at it as a whole.

Still, there probably is a best way to watch Twin Peaks, which happens to be the way in which I first saw it. You stumble across it years after it originally aired, in bits and pieces, and with a sense that you’re the only person you know who is encountering it in quite this way. A decade from now, my daughter, or someone like her, will discover this show in whatever format happens to be dominant, and she’ll watch it alone. (I also suspect that she’ll view it after having internalized the soundtrack, which doesn’t even exist yet in this timeline.) It will deprive her, inevitably, of a few instants of shared bewilderment or revelation that can only occur when you’re watching a show on its first airing. When Albert Rosenfield addresses the woman in the bar as Diane, and she turns around to reveal Laura Dern in a blonde wig, it’s as thrilling a moment as I’ve felt watching television in a long time—and by the way Lynch stages it, it’s clear that he knows it, too. My daughter won’t experience this. But there’s also something to be said for catching up with a show that meant a lot to people a long time ago, with your excitement tinged with a melancholy that you’re too late to have been a part of it. I frankly don’t know how often I’ll go back to watch this season again, any more than I’m inclined to sit through Inland Empire, which I loved, a second time. But I’m oddly consoled by the knowledge that it will continue to exist and mean a lot to future viewers after the finale airs, which isn’t something that you could take for granted if you were watching the first two seasons in the early nineties. And it makes this particular moment seem all the more precious, since it’s the last time that we’ll be able to watch Twin Peaks without any idea of where it might be going.

Written by nevalalee

June 12, 2017 at 9:07 am

The test of tone


Brendan Gleeson and Colin Farrell in In Bruges

Note: I’m on vacation this week, so I’ll be republishing a few of my favorite posts from earlier in this blog’s run. This post originally appeared, in a slightly different form, on April 22, 2014.

Tone, as I’ve mentioned before, can be a tricky thing. On the subject of plot, David Mamet writes: “Turn the thing around in the last two minutes, and you can live quite nicely. Turn it around in the last ten seconds and you can buy a house in Bel Air.” And if you can radically shift tones within a single story and still keep the audience on board, you can end up with even more. If you look at the short list of the most exciting directors around—Paul Thomas Anderson, David O. Russell, Quentin Tarantino, David Fincher, the Coen Brothers—you find that what most of them have in common is the ability to alter tones drastically from scene to scene, with comedy giving way unexpectedly to violence or pathos. (A big exception here is Christopher Nolan, who seems happiest when operating within a fundamentally serious tonal range. It’s a limitation, but one we’re willing to accept because Nolan is so good at so many other things. Take away those gifts, and you end up with Transcendence.) Tonal variation may be the last thing a director masters, and it often only happens after a few films that keep a consistent tone most of the way through, however idiosyncratic it may be. The Coens started with Blood Simple, then Raising Arizona, and once they made Miller’s Crossing, they never had to look back.

The trouble with tone is that it imposes tremendous switching costs on the audience. As Tony Gilroy points out, during the first ten minutes of a movie, a viewer is making a lot of decisions about how seriously to take the material. Each time the level of seriousness changes gears, whether upward or downward, it demands a corresponding moment of consolidation, which can be exhausting. For a story that runs two hours or so, more than a few shifts in tone can alienate viewers to no end. You never really know where you stand, or whether you’ll be watching the same movie ten minutes from now, so your reaction is often how Roger Ebert felt upon watching Pulp Fiction for the first time: “Seeing this movie last May at the Cannes Film Festival, I knew it was either one of the year’s best films, or one of the worst.” (The outcome is also extremely subjective. I happen to think that Vanilla Sky is one of the most criminally underrated movies of the last two decades—few other mainstream films have accommodated so many tones and moods—but I’m not surprised that so many people hate it.) It also annoys marketing departments, who can’t easily explain what the movie is about; it’s no accident that one of the worst trailers I can recall was for In Bruges, which plays with tone as dexterously as any movie in recent memory.

Hugh Dancy on Hannibal

As a result, tone is another element in which television has considerable advantages. Instead of two hours, a show ideally has at least one season, maybe more, to play around with tone, and the number of potential switching points is accordingly increased. A television series is already more loosely organized than a movie, which allows it to digress and go off on promising tangents, and we’re used to being asked to stop and start from week to week, so we’re more forgiving of departures. That said, this rarely happens all at once; like a director’s filmography, a show often needs a season or two to establish its strengths before it can go exploring. When we think back to a show’s pivotal episodes—the ones in which the future of the series seemed to lock into place—they’re often installments that discovered a new tone that worked within the rules that the show had laid down. Community was never the same after “Modern Warfare,” followed by “Abed’s Uncontrollable Christmas,” demonstrated how much it could push its own reality while still remaining true to its characters, and The X-Files was altered forever by Darin Morgan’s “Humbug,” which taught the show how far it could kid itself while probing into ever darker places.

At its best, this isn’t just a matter of having a “funny” episode of a dramatic series, or a very special episode of a sitcom, but of building a body of narrative that can accommodate surprise. One of the great pleasures of watching Hannibal lay in how it learned to acknowledge its own absurdity while drawing the noose ever tighter, which only happens after a show has enough history for it to engage in a dialogue with itself. Much the same happened to Breaking Bad, which had the broadest tonal range imaginable: it was able to move between borderline slapstick and the blackest of narrative developments because it could look back and reassure itself that it had already done a good job with both. (Occasionally, a show will emerge with that kind of tone in mind from the beginning. Fargo remains the most fascinating drama on television in large part because it draws its inspiration from one of the most virtuoso experiments with tone in movie history.) If it works, the result starts to feel like life itself, which can’t be confined easily within any one genre. Maybe that’s because learning to master tone is like putting together the pieces of one’s own life: first you try one thing, then something else, and if you’re lucky, you’ll find that they work well side by side.

Written by nevalalee

April 26, 2016 at 9:00 am

Going for the kill


David Duchovny and Gillian Anderson on The X-Files

Note: Spoilers follow for the X-Files episode “Home Again.”

One of the unexpected but undeniable pleasures of the tenth season of The X-Files is the chance it provides to reflect on how television itself has changed over the last twenty years. The original series was so influential in terms of storytelling and tone that it’s easy to forget how compelling its visuals were, too: it managed to tell brooding, cinematic stories on a tiny budget, with the setting and supporting cast changing entirely from one episode to the next, and it mined a tremendous amount of atmosphere from those Vancouver locations. When it pushed itself, it could come up with installments like “Triangle”—one of the first television episodes ever to air in widescreen—or “The Post-Modern Prometheus,” neither of which looked like anything you’d ever seen before, but it could be equally impressive in its moody procedural mode. Yet after a couple of decades, even the most innovative shows start to look a little dated. The show’s blocking and camera style can seem static compared to many contemporary dramas, and one of the most intriguing qualities of the ongoing reboot has been its commitment to maintaining the feel of the initial run of the series while upgrading its technical aspects when necessary. (Sometimes the best choice is to do nothing at all: the decision to keep the classic title sequence bought it tremendous amounts of goodwill, at least with me, and the slightly chintzy digital transformation effects in “Mulder and Scully Meet the Were-Monster” come off as just right.)

This week’s episode, Glen Morgan’s “Home Again,” is interesting mostly as an illustration of the revival’s strengths and limitations. It’s basically a supernatural slasher movie, with a ghostly killer called the Band-Aid Nose Man stalking and tearing apart a string of unsympathetic victims who have exploited the homeless in Philadelphia. And the casefile element here is even more perfunctory than usual. All we get in the way of an explanation is some handwaving about the Tibetan tulpa, which the show undermines at once, and the killer turns out to be hilariously ineffective: he slaughters a bunch of people without doing anything to change the underlying situation. But there’s also a clear implication that the case isn’t meant to be taken seriously, except as a counterpoint to the real story about the death of Scully’s mother. Even there, though, the parallels are strained, and if the implicit point is that the case could have been about anything, literally anything would have been more interesting than this. (There’s another point to be made, which I don’t feel like exploring at length here, about how the show constantly falls back on using Scully’s family—when it isn’t using her body—to put her through the wringer. Scully has lost her father, her sister, and now her mother, and it feels even lazier here than usual, as if the writers thought she’d had too much fun last week, which meant that she had to suffer.)

Gillian Anderson and David Duchovny on The X-Files

What we have, then, is a series of scenes—four, to be exact—in which an unstoppable killer goes after his quarry. There’s nothing wrong with this, and if the resulting sequences were genuinely scary, the episode wouldn’t need to work so hard to justify its existence. Yet none of it is particularly memorable or frightening. As I watched it, I was struck by the extent to which the bar has been raised for this kind of televised suspense, particularly in shows like Breaking Bad and Fargo, which expertly blend the comedic and the terrifying. Fargo isn’t even billed as a suspense show, but it has given us scenes and whole episodes over the last two seasons that built the pressure so expertly that they were almost painful to watch: I’ve rarely had a show keep me in a state of dread for so long. And this doesn’t require graphic violence, or even any violence at all. Despite its title, Fargo takes its most important stylistic cue from another Coen brothers movie entirely, and particularly from the sequence in No Country for Old Men in which Llewelyn Moss awaits Anton Chigurh in his motel room. It’s the most brilliantly sustained sequence of tension in recent memory, and it’s built from little more than our knowledge of the two characters, the physical layout of the space, and a shadow under the door. Fargo has given us a version of this scene in every season, and it does it so well that it makes it all the less forgivable when an episode like “Home Again” falls short.

And the funny thing, of course, is that both Fargo and Breaking Bad lie in a direct line of descent from The X-Files. Breaking Bad, obviously, is the handiwork of Vince Gilligan, who learned much of what he knows in his stint on the earlier show, and who revealed himself in “Pusher” to be a master of constructing a tight suspense sequence from a handful of well-chosen elements. And Fargo constantly winks at The X-Files, most notably in the spaceship that darted in and out of sight during the second season, but also in its range and juxtaposition of tones and its sense of stoicism in the face of an incomprehensible universe. If an episode like “Home Again” starts to look a little lame, it’s only because the show’s descendants have done such a good job of expanding upon the basic set of tools that the original series provided. (It also points to a flaw in the show’s decision to allow all the writers to direct their own episodes. It’s a nice gesture, but it also makes me wonder how an episode like this would have played in the hands of a director like, say, Michelle MacLaren, who is an expert at extending tension to the breaking point.) Not every Monster of the Week needs to be a masterpiece, but when we’re talking about six episodes after so many years, there’s greater pressure on each installment to give us something special—aside from killing off another member of the Scully family. Because if the show were just a little smarter about dispatching its other victims, it might have decided to let Margaret Scully live.

Written by nevalalee

February 11, 2016 at 9:30 am

The time factor


Concept art for Toy Story 3

Earlier this week, my daughter saw Toy Story for the first time. Not surprisingly, she loved it—she’s asked to watch it three more times in two days—and we’ve already moved on to Toy Story 2. Seeing the two movies back to back, I was struck most of all by the contrast between them. The first installment, as lovely as it is, comes off as a sketch of things to come: the supporting cast of toys gets maybe ten minutes total of screen time, and the script still has vestiges of the villainous version of Woody who appeared in the earlier drafts. It’s a relatively limited film, compared to the sequels. Yet if you were to watch it today without any knowledge of the glories that followed, you’d come away with a sense that Pixar had done everything imaginable with the idea of toys who come to life. The original Toy Story feels like an exhaustive list of scenes and situations that emerge organically from its premise, as smartly developed by Joss Whedon and his fellow screenwriters, and in classic Pixar fashion, it exploits that core gimmick for all it’s worth. Like Finding Nemo, it amounts to an anthology of all the jokes and set pieces that its setting implies: you can practically hear the writers pitching out ideas. And taken on its own, it seems like it does everything it possibly can with that fantastic concept.

Except, of course, it doesn’t, as two incredible sequels and a series of shorts would demonstrate. Toy Story 2 may be the best example I know of a movie that takes what made its predecessor special and elevates it to a level of storytelling that you never imagined could exist. And it does this, crucially, by introducing a new element: time. If Toy Story is about toys and children, Toy Story 2 and its successor are about what happens when those kids become adults. It’s a complication that was inherent to its premise from the beginning, but the first movie wasn’t equipped to explore it—we had to get to know and care about these characters before we could worry about what would happen after Andy grew up. It’s a part of the story that had to be told, if its assumptions were to be treated honestly, and it shows that the original movie, which seemed so complete in itself, only gave us a fraction of the full picture. Toy Story 3 is an astonishing achievement on its own terms, but there’s a sense in which it only extends and trades on the previous film’s moment of insight, which turned it into a franchise of almost painful emotional resonance. If comedy is tragedy plus time, the Toy Story series knows that when you add time to comedy, you end up with something startlingly close to tragedy again.

Robert De Niro in The Godfather Part II

And thinking about the passage of time is an indispensable trick for creators of series fiction, or for those looking to expand a story’s premise beyond the obvious. Writers of all kinds tend to think in terms of unity of time and place, which means that time itself isn’t a factor in most stories: the action is confined within a safe, manageable scope. Adding more time to the story in either direction has a way of exploding the story’s assumptions, or of exposing fissures that lead to promising conflicts. If The Godfather Part II is more powerful and complex than its predecessor, it’s largely because of its double timeline, which naturally introduces elements of irony and regret that weren’t present in the first movie: the outside world seems to break into the hermetically sealed existence of the Corleones just as the movie itself breaks out of its linear chronology. And the abrupt time jump, which television series from Fargo to Parks and Recreation have cleverly employed, is such a useful way of advancing a story and upending the status quo that it’s become a cliché in itself. Even if you don’t plan on writing more than one story or incorporating the passage of time explicitly into the plot, asking yourself how the characters would change after five or ten years allows you to see whether the story depends on a static, unchanging timeframe. And those insights can only be good for the work.

This also applies to series in which time itself has become a factor for reasons outside anyone’s control. The Force Awakens gains much of its emotional impact from our recognition, even if it’s unconscious, that Mark Hamill is older now than Alec Guinness was in the original, and the fact that decades have gone by both within the story’s universe and in our own world only increases its power. The Star Trek series became nothing less than a meditation on the aging of its own cast. And this goes a long way toward explaining why Toy Story 3 was able to close the narrative circle so beautifully: eleven years had passed since the last movie, and both Andy and his voice actor had grown to adulthood, as had so many of the original film’s fans. (It’s also worth noting that the time element seems to have all but disappeared from the current incarnation of the Toy Story franchise: Bonnie, who owns the toys now, is in no danger of growing up soon, and even if she does, it would feel as if the films were repeating themselves. I’m still optimistic about Toy Story 4, but it seems unlikely to have the same resonance as its predecessors—the time factor has already been fully exploited. Of course, I’d also be glad to be proven wrong.) For a meaningful story, time isn’t a liability, but an asset. And it can lead to discoveries that you didn’t know were possible, but only if you’re willing to play with it.

“A kind of symbolic shorthand…”



Note: This post is the thirty-seventh installment in my author’s commentary for Eternal Empire, covering Chapter 36. You can read the previous installments here.

When we remember a story after the fact, our minds have a way of producing juxtapositions and connections that weren’t there before. Most fans, for instance, are aware that Kirk and Khan are never in the same place at the same time in Star Trek II, and their only real face-to-face confrontation, courtesy of a viewscreen, consists of a single scene. Still, they’re indelibly associated in our imaginations, certainly more so than the modern incarnations of the same two characters who shared so much screen time in a far less memorable movie. Similarly, in the movie version of L.A. Confidential, Jack Vincennes says just one line to Bud White—”White, you better put a leash on your partner before he kills somebody”—and Bud doesn’t even bother responding. Yet we rightly think of Vincennes and White as two points in the movie’s central triangle, even if they interact largely through the contrasting shapes that they assume in our heads. As I wrote in a post on Legolas and Frodo, who also interact only once over the course of three Lord of the Rings movies: “We think of a novel or movie as a linear work of art that moves from one event to the next, but when we remember the books or films we love the most, even those that follow a strict line of action, we have a way of seeing everything simultaneously, with each piece commenting on every other.” When the book is closed and put back on the shelf, all the pages overlap, and links appear between characters that aren’t really there when the story is experienced as a sequence.

You could also make the case that separating characters can paradoxically result in a closer relationship than if they were physically together. When two characters share a scene, they can’t help but be themselves; when they’re further apart, each one begins to seem like a commentary on the other. Closeness tends to emphasize dissimilarity, while distance stresses the qualities they share. Some movies do this deliberately—like Heat, which keeps Pacino and De Niro separated and invites us to draw the parallels—while others do it by accident. (In L.A. Confidential, it seems to have been a little of both: Vincennes and White simply wouldn’t have much to talk about, and trying to force them into a conversation would have subtly diminished both men.) Movies and books benefit from the way we’ve been taught to read them, in which we assume that two lines of action will eventually converge. It’s a narrative technique as old as the Odyssey, and it can be used to create anticipation and lend structure to the story even if it never quite pays off. The first season of Fargo devoted a lot of time to foreshadowing a confrontation between two characters, played by Allison Tolman and Billy Bob Thornton, that it ultimately didn’t feel like providing. This worked well enough as a strategy to unite a lot of disconnected action, but the second season, which has consisted of a series of immensely entertaining collisions between disparate characters, reminds us of how satisfying this kind of convergence can be if it’s allowed to play out for real.


And one of the unsung arts of storytelling lies in drawing out that distance as much as possible without losing the connection. One of the basic rules of visual design is that two elements in a composition, like two dots on a canvas, create a tension in the space between them that didn’t exist before. Elsewhere, I’ve written:

Two dots imply a line…No matter how far apart on the page the dots are placed, as long as they’re within the viewer’s visual field, they’re perceived in relation to one another, as well as to such larger elements as the edge of the paper. An impression of order or disorder—or stillness or dynamism—can be created by how close together they are, whether or not the implicit line runs parallel to the edges, or whether one dot is larger than the other. What was absolute becomes relative, and that shift carries our first big hint of design, or even story…In fiction, any kind of pairing or juxtaposition, whether it’s of two words, images, characters, or scenes, implies a logical relation, like a dream where two disconnected symbols occur together. We naturally look for affinity or causality, and for every line, we see a vector.

The tricky part is the placement. Put your dots too far apart, and they no longer seem related; too close together, and we tend to see them as a single unit. Much the same goes for characters, and it’s no accident that many of the fictional pairings we remember so vividly—like Clarice Starling and Hannibal Lecter, or Holmes and Moriarty—consist of two figures who spend most of their time apart, which only adds to the intensity when they meet at last.

I thought about this constantly when I was cracking the plot for Eternal Empire, in which Wolfe mostly keeps her distance from Maddy and Ilya, the two other points in the narrative triangle. Maddy and Ilya eventually converge in a satisfying way, but Wolfe isn’t brought into their story until the very end, and even then, their interactions are minimal. In the case of Maddy, they consist of a voice message and a long conversation in the last chapter of the book; with Ilya, Wolfe has little more than a charged exchange of glances. Yet I think that Wolfe still feels integrated into their stories, and if she does, it’s because I devoted a fair amount of energy to maintaining that connection where I could. Wolfe spends a lot of time thinking about Maddy and following her movements, and even more so with Ilya—who also gets to send her a message in return. In Chapter 36, I introduce the concept of the “throw,” a symbolic shorthand used by thieves to send messages. An apple cut in half means that it’s time to divide the loot; a piece of bread wrapped in cloth means that the police are closing in. And when Wolfe finds a knot tied in a dishtowel at the crime scene in Hackney Wick, she realizes that Ilya is saying: I’m not responsible for what you’ve heard. As a narrative device that allows them to communicate under the eyes of Ilya’s enemies, it works nicely. But I also love the idea of a visual symbol that allows two people to speak over a distance, which is exactly what happens in many novels, if not always so explicitly. As Nabokov puts it so beautifully in his notes to Eugene Onegin, which I read while plotting out this trilogy: “There is a conspiracy of words signaling to one another, throughout the novel, from one part to another…”

The blood in the milkshake



Note: Spoilers follow for the first two episodes of the current season of Fargo.

The most striking aspect of the second season of Fargo—which, two episodes in, already ranks among the most exciting television I’ve seen in months—is its nervous visual style. If the first season had an icy, languid look openly inspired by its cinematic source, the current installment is looser, jazzier, and not particularly Coenesque: there are split screens, montages, dramatic chyrons and captions, and a lot of showy camerawork. (It’s so visually rich that the image of a murder victim’s blood mingling with a spilled vanilla milkshake, on which another show might have lingered, is only allowed to register for a fraction of a second.) The busy look of the season so far seems designed to mirror its plot, which is similarly overstuffed: an early scene involving a confrontation at a waffle joint piles on the complications until I almost wished that it had followed Coco Chanel’s advice and removed one accessory before leaving the house. But that’s part of the point. Fargo started off as a series that seemed so unlikely to succeed that it devoted much of its initial run to assuring us that it knew what it was doing. Now that its qualifications have been established, it’s free to spiral off into weirder directions without feeling the need to abide by any precedent, aside, of course, from the high bar it sets for itself.

And while it might seem premature to declare victory on its behalf, it’s already starting to feel like the best of what the anthology format has to offer. A few months ago, after the premiere of the second season of another ambitious show in much the same vein, I wrote: “Maintaining any kind of continuity for an anthology show is challenging enough, and True Detective has made it as hard on itself as possible: its cast, its period, its setting, its structure, even its overall tone have changed, leaving only the whisper of a conceit embedded in the title.” Like a lot of other viewers, I ended up bailing before the season was even halfway over: it not only failed to meet the difficult task it set for itself, but it fell short in most other respects as well. And I had really wanted it to work, if only because cracking the problem of the anthology series feels like a puzzle on which the future of television depends. We’re witnessing an epochal shift of talent from movies to the small screen, as big names on both sides of the camera begin to realize that the creative opportunities it affords are in many ways greater than what the studios are prepared to offer. And what we’re likely to see within the next ten years—to the extent that it hasn’t already happened—is an entertainment landscape in which Hollywood focuses exclusively on blockbusters while dramas and quirky smaller films migrate to cable or, in rare cases, even the networks.

Ted Danson and Patrick Wilson on Fargo

It isn’t hard to imagine this scenario: in many ways, we’re halfway there. But the current situation leaves a lot of actors, writers, and directors stranded somewhere in the middle: unable to finance the projects they want in the movies, but equally unwilling to roll the dice on the uncertainties of conventional episodic television. The anthology format works best when it strikes a balance between those two extremes. It can be packaged as conveniently as a movie, with a finite beginning and ending, and it allows a single creative personality to exert control throughout the process. By now, its production values are more than comparable to those of many feature films. And instead of such a story being treated as a poor relation of the tentpole franchises that make up a studio’s bottom line, on television, it’s seen as an event. As a result, at a time when original screenplays are so undervalued in Hollywood that it’s newsworthy when one gets produced at all, it’s not surprising that television is attracting talent that would otherwise be stuck in turnaround. But brands are as important in television as they are anywhere else—it’s no accident that Fargo draws its name from a familiar title, however tenuous that connection turned out to be in practice—and for the experiment to work, it needs a few flagship properties to which such resources can be reliably channelled. If the anthology format didn’t already exist, it would be necessary to invent it.

That’s why True Detective once seemed so important, and why its slide into irrelevance was so alarming. And it’s why I also suspect that Fargo may turn out to be the most important television series on the air today. Its first season wasn’t perfect: the lengthy subplot devoted to Oliver Platt’s character was basically a shaggy dog story without an ending, and the finale didn’t quite succeed in living up to everything that had come before. Yet it remains one of the most viscerally riveting shows I’ve ever seen—you have to go back to the best years of Breaking Bad to find a series that sustains the tension in every scene so beautifully, and that mingles humor and horror until it’s hard to tell where one leaves off and the other begins. (But will Jesse Plemons ever get a television role that doesn’t force him to dispose of a corpse?) If the opening act of the second season is any indication, the show will continue to draw talent intrigued by the opportunities that it affords, which translate, in practical terms, into scene after scene that any actor would kill to play. And the fact that it can do this while departing strategically from its own template is especially heartening. If True Detective is defined, in theory, by the genre associations evoked by its title, Fargo is about a triangulation between the contrasts established by the movie that inspired it: politeness, quiet desperation, and sudden violence. It’s a technical trick, but it’s a very good one, and it’s a machine that can generate stories forever, with good and evil mixed together like blood in vanilla ice cream.

Written by nevalalee

October 26, 2015 at 9:36 am

Posted in Television


American horror stories


Colin Farrell on True Detective

As a devoted viewer of the current golden age of television, I sometimes wake up at night haunted by the question: What if the most influential series of the decade turns out to be American Horror Story? I’ve never seen even a single episode of this show, and I’m not exactly a fan of Ryan Murphy. Yet there’s no denying that it provided the catalyst for our growing fascination with the anthology format, in which television shows are treated less as ongoing narratives with no defined conclusion than as self-contained stories, told over the course of a single season, with a clear beginning, middle, and end. And American Horror Story deserves enormous credit for initially keeping this fact under wraps. Until its first season finale aired, it looked for all the world like a conventional series, and Murphy never tipped his hand. As a result, when the season ended by killing off nearly every lead character, critics and audiences reacted with bewilderment, with many wondering how the show could possibly continue. (It’s especially amusing to read Todd VanDerWerff’s writeup on The A.V. Club, which opens by confessing his early hope that this might be an anthology series—”On one level, I knew this sort of blend between the miniseries and the anthology drama would never happen”—and ends with him resignedly trying to figure out what might happen to the Harmon family next year.)

It was only then that Murphy indicated that he would be tackling a different story each season. Even then, it took critics a while to catch on: I even remember some grumbling about the show’s decision to compete in the Best Miniseries category at the Emmys, as if it were some kind of weird strategic choice, when in fact it’s the logical place for a series like this. And at a time when networks seem inclined to spoil everything and anything for the sake of grabbing more viewers, the fact that this was actually kept a secret is a genuine achievement. It allowed the series to take the one big leap—killing off just about everybody—that nobody could have seen coming, but which was utterly consistent with the rules of its game. (It wouldn’t be the first or last time that horror, which has always been a sandbox for quick and dirty experimentation, pointed the way for more reputable genres, but that’s a topic for another post.) The result cleared a path for critical favorites from True Detective to Fargo to operate in a format that offers major advantages: it can draw big names for a limited run, it allows stories to be told over the course of ten tightly structured episodes rather than stretched over twenty or more, it lends itself well to being watched in one huge binge, and it offers viewers the chance for a definitive conclusion.

Woody Harrelson and Matthew McConaughey on True Detective

Yet the element of surprise that made the first season of American Horror Story so striking no longer exists. When we’re watching a standard television series, we go into it with a few baseline assumptions: the show may kill off important characters, but it isn’t likely to wipe out most of its cast at once, and it certainly won’t blow up its entire premise. American Horror Story worked because it walked all over those conventions, and it fooled its viewers because it shrewdly kept its big structural conceit a secret. But it reminds me a little of what Daffy Duck said after performing an incredible novelty act that involved blowing himself up with nitroglycerin: “I can only do it once.” With all the anthology series that follow, we know that everything is on the table: there’s no reason for the show to preserve anything at all. And it affects the way we watch these shows, not always to their benefit. During the first season of True Detective, fan speculation spiraled off in increasingly wild directions because we knew that there was no long game to keep the show from being exactly as crazy as it liked. There wasn’t any reason why Cohle or Hart couldn’t be the killer, or why they couldn’t both die, and I spent half the season convinced that Hart’s wife might be the Yellow King, if only because she otherwise seemed like just another thankless female character—and that couldn’t be what the show had in mind, could it?

And if viewers seem to have turned slightly against True Detective in retrospect, it’s in part because nothing could have lived up to the more outlandish speculations. It was simply an excellent genre show, without a closing mindblower of a twist, and I liked it just fine. And it’s possible that the second season will benefit from those adjusted expectations, although it has plenty of other obstacles to overcome. Maintaining any kind of continuity for an anthology show is challenging enough, and True Detective has made it as hard on itself as possible: its cast, its period, its setting, its structure, even its overall tone have changed, leaving only the whisper of a conceit embedded in the title. Instead of Southern Gothic, its new season feels like an homage to those Los Angeles noirs in which messy human drama plays out against a backdrop of urban development, which encompasses everything from Chinatown to L.A. Confidential to Who Framed Roger Rabbit. I’m a little mixed on last night’s premiere: these stories gain much of their power from contrasts between characters, and all the leads here share a common dourness. The episode ends with three haunted cops meeting each other for the first time, but they haven’t been made distinctive enough for that collision to seem particularly exciting. Still, despite some rote storytelling—Colin Farrell’s character is a divorced dad first seen dropping off his son at school, because of course he is—I really, really want it to work. There are countless stories, horror and otherwise, that the anthology format can tell. And this may turn out to be its greatest test yet.

A clash of timelines



Note: Spoilers follow for Game of Thrones.

When we find ourselves on Westeros again, not much time has passed. Tywin Lannister’s body still lies in state. Tyrion has just crossed the Narrow Sea, sealed in a crate with air holes punched in the side, like Kermit in The Great Muppet Caper. Brienne, Sansa, and Jon Snow are still brooding over their recent losses, while Daenerys, as usual, isn’t doing much of anything. Nothing, in fact, has happened in the meantime, and not much will happen tonight. And we expect this. Each season of Game of Thrones follows a familiar rhythm, with the first and last episodes serving as bookends for more spectacular developments. If we’ve learned to brace ourselves for the penultimate episode of every run, in which all hell tends to break loose, we’ve also gotten used to the breathing space provided by the premiere and finale. Other shows use their opening and closing installments to propel the narrative forward, or at least to tell us what the next stretch of the story will be about, but Game of Thrones has a way of ramping up and ramping down again, as if it feels obliged to reintroduce us to its imaginary world, then ease us back into everyday life once enough innocent blood has been shed.

I’ve always thought of Game of Thrones as a deeply flawed but fascinating show, with unforgettable moments alternating with lengthy subplots that go nowhere. (Remember all that time we spent with Theon Greyjoy, aka Reek? I hope not.) It’s a show that seems constantly in dialogue with time, which I’ve noted elsewhere is the secret protagonist of every great television series. If a show like Mad Men uses time as an ally or collaborator, Game of Thrones regards it as an unwanted variable, one that constantly spoils, or at least complicates, its plans. The real collision—which will occur as soon as the series catches up with the novels—has yet to come, although we’re already seeing hints of it: Bran’s material is already used up, so he won’t be appearing at all this season, off at warg school, or whatever, until the show figures out what to do with him. And when we see him again, he’ll look very different. A series shot over a period of years inevitably runs into challenges with child actors, and Game of Thrones seems less inclined to turn this into an asset, as Mad Men did with Sally Draper, than to treat it as an inconvenient complication.

Jon Hamm and Kiernan Shipka on Mad Men

For serialized shows, the tension between production schedules and the internal chronology can create real problems. It’s tempting to treat a season as a calendar year, as in most shows set in high school or college, even if there isn’t a pressing reason. Community, for instance, had to scramble to figure out what to do when its characters started to graduate, but there’s no reason why the entire run of the show couldn’t have taken place, say, between junior and senior years. And M*A*S*H didn’t seem particularly concerned that it spent eleven years fighting a three-year war. Occasionally, a show will try to compress multiple years within a single season, either with an explicit time jump—which is turning into a cliché of its own, although Fargo handled it beautifully—or with more subtle nods to the passage of time. This can create its own kind of dissonance, as on Downton Abbey, where months or years can go by without any corresponding advance in the story. And The Simpsons has turned its longevity into a running joke: Bart, Lisa, and Maggie don’t age, but they’ve celebrated thirteen Christmases. (Unless, as one fan theory has it, we’re actually witnessing a single, eventful Christmas from multiple perspectives, which is a supercut I’d love to see.)

And for showrunners, cracking the problem of time is more urgent than ever before. In the past, most shows were content to ignore it, but the rise of serialization and unconventional viewing habits has made this strategy less workable. The breakdown of the conventional television schedule, which mapped neatly onto the calendar with a break in the summer, has led to increasing confusion. I suspect that Game of Thrones devotes so much time to resetting the stage because of the hiatus between seasons: only a few days have gone by in Westeros, but we’ve been waiting ten months to see these characters again. But I can’t help but wish that it would simply get on with it, as Mad Men does. Nine months have passed between “Waterloo” and “Severance,” but Matthew Weiner jumps right in, trusting us to fill in the gaps with the clues he provides. And it works largely because we know more about the timeline, at least as it relates to the changing world at the edges of the plot, than even the characters do. Ted Chaough’s hair gets us ninety percent of the way there. And it leaves us with the sense, despite the deliberate pace, that there’s more going on at Sterling Cooper than in all the Seven Kingdoms.

Written by nevalalee

April 14, 2015 at 9:14 am

The big piece of cheese


William H. Macy

Yesterday, I picked up a copy of A Practical Handbook for the Actor, the classic guide written by members of the Atlantic Theatre Company. It’s one of those books I should have read years ago, and the fact that I haven’t is due mostly to the fact that it never occurred to me. I’ve said many times before that On Directing Film by David Mamet is arguably the most useful book on storytelling I know, and this slender volume is essentially the same argument conducted from the other side. It’s based on notes from a summer acting workshop conducted by Mamet and William H. Macy in the early eighties, and although their names don’t appear on the cover, it’s as close to a manifesto as exists to the core principles that have guided these two exceptional careers. I’m not an actor, and I can’t judge its usefulness for the performers for whom it was intended, but as a writer, I’ve never found a more refreshing perspective on the problems of plot, characterization, and structure. Elsewhere, I’ve noted some of the limitations of the Mamet approach, which I’ve described as a formula for writing rock-solid first drafts. Without additional development, it can seem thin and mechanical. But I don’t know of a better foundation for telling effective stories in any medium.

What strikes me the most about this book, though, is where it occurred in the timeline of two lives. When the workshop first took place, Mamet was in his late thirties and Macy was exactly the age I am now. At the time, Mamet’s star was on the rise: Glengarry Glen Ross had won, or was about to win, the Pulitzer Prize for Drama, making him the hottest playwright and screenwriter in America, and he was just a few years away from his spectacular directorial debut in House of Games. Macy, by contrast, had established a name for himself on stage, but his film and television credits were sparse, and he was a full decade away from his breakthrough role in Fargo, which established him overnight as nothing less than our indispensable character actor. I never tire of quoting Mamet’s observation that everyone gets a break in show business in twenty-five years, some at the beginning, others at the end, and there’s no question that he was thinking of Macy. One of his stories from early in their friendship gives a sense of those days:

Macy and I were in Chicago one time, and he was living in this wretched hovel—we’d both become screamingly poor—and I came over to talk to him about something, some play equipment. I opened the refrigerator, and there was this big piece of cheese. I hadn’t had anything to eat in a long time, so I picked it up, cut off a big chunk, and started eating. And Macy said, “Hey, help yourself.” I was really hurt. I went away and fumed about that for several days. Then I just started writing, and out of that came this scene, which was the start of [American Buffalo].

David Mamet

Mamet’s career skyrocketed shortly thereafter, but Macy scraped along for years, driving cabs, tending bar, and taking acting jobs wherever he could. And you feel this in an extended passage on one of the book’s first pages, which artists of all backgrounds would do well to memorize:

The best thing you can do for yourself as an actor is to clearly define and list those things that are your responsibilities and separate them from those things that are not. In other words, itemize what is within your control and what is not. If you apply this rather stoic philosophy of working on only those things within your control and not concerning yourself with those things that are not, then every moment you spend will be concretely contributing to your growth as an actor. Why not devote your time and energy to developing measurable skills such as your voice, your ability to analyze a script correctly, your ability to concentrate, and your body? On the other hand, how can it possibly help to concern yourself with the views others choose to take of you, the overall success or failure of the play, the ability (or lack thereof) of the director or other actors, which critics are sitting in the audience, your height, your feelings, and so forth? You cannot and never will be able to do anything about any of those things. Consequently, it makes sense to devote yourself only to those things which you have the capacity to change, and refrain from wasting your time, thought, and energy on these things you can never affect.

And simply by hanging on, Macy grew into the actor he was meant to become. It’s hard to imagine him as a young man: Fargo may have typecast him as the desperate, repressed character he ended up playing so many times, but it’s one of the most striking instances in memory of a film encountering an actor at the precise instant when he was capable of delivering the necessary performance. Macy would have had the technical ability to play Jerry Lundegaard at any point in his career, but it took time for him to grow those eyes and that face, in which you can read everything that brought him to where he had to be. (You see the same quality in the faces of many of Mamet’s favorite supporting actors, few of whom ever became household names: men like Jack Wallace, J.J. Johnston, Mike Nussbaum, or Lionel Mark Smith, who never got their Fargo, but who provided countless small moments of clarity and pleasure to audiences over the years.) In On Becoming a Novelist, John Gardner writes: “Finally, the true novelist is the one who doesn’t quit.” You can say much the same for actors, playwrights, or creative professionals of any kind. Once his hour in the sun came, Macy often seemed content to coast a little, taking on paycheck parts in the manner of all great supporting actors. But you can’t say he hasn’t earned a bite from that big piece of cheese.

Written by nevalalee

April 1, 2015 at 9:16 am

“This case has been a disaster…”



Note: This post is the twelfth installment in my author’s commentary for Eternal Empire, covering Chapter 13. You can read the previous installments here.

Many moviegoers probably maintain a mental list of scenes they’d like to see, as Mad Magazine did for so many years. Here’s one of mine. The maverick cop, doggedly pursuing a serial killer on his own time, is called onto the carpet by the police chief—or, even better, summoned before an administrative hearing. He calmly lays out the case against his primary suspect, ignoring the skeptical looks pointed in his direction. The chief glances at the stony faces to either side, turns back to the cop, and says:

You know what? That’s a very compelling case you’ve got there. You’ve convinced me. Tell us what you need. All the resources of the department are at your disposal.

Of course, that isn’t how it usually happens, but it would be great if it did. Not because it “subverts” a trope or convention, which isn’t always a valid reason in itself, but because it holds more dramatic potential. If it’s interesting to see a lone cop without any backup go up against the villain and fail, it would be all the more riveting to see the antagonist outsmart and outmaneuver the full weight of the police force. As I see it, if it’s a choice between reducing the hero or elevating the villain to keep the scales evenly matched, there’s no doubt as to which alternative would yield better stories.

That said, there’s a reason why even the best cop movies, from L.A. Confidential to The Departed, so often include some variation on the line: “Turn in your badge. You’re off the case.” In a way, it simply restores the protagonist to his proper place. Movies and television like to focus on cops because they’re the last members of our society who can plausibly confront violence directly: the rest of us are more inclined, without a strong reason to the contrary, to call the police. There’s a sense that the buck stops there, at least when it comes to the kinds of active heroes that we like to see. By throwing the officer off the force, we get the best of both worlds: he’s deprived of the system that supports him, while remaining the same driven guy as before—fully motivated and qualified, as most of us aren’t, to take justice into his own hands. His badge and gun get him to exactly the point in the story where he needs to be, after which they can be safely discarded. It doesn’t hurt that the scene also establishes our hero as a man who doesn’t play by the rules, which is rarely a bad thing, and sets up a conflict with a clueless authority figure. Like most good clichés, it survives because it does two or three useful things at once, and writers haven’t figured out anything better.

"She chose her next few words with care..."

As a result, even when we recognize the trope, we’re likely to respond to it as Homer Simpson does while shouting at the television set: “It means he gets results, you stupid chief!” (To which Lisa wearily responds: “Dad, sit down.”) But like any convention, it can grate if it presses our buttons too insistently, especially if it’s written by someone who should know better. One of my few complaints about the Fargo miniseries revolves around the character of Bill Oswalt, played by Bob Odenkirk, who exists largely to foil the resourceful Molly as she gets closer to solving the case. Bill isn’t a bad guy, and the show takes pains to explain the reasons for his skepticism: he went to high school with Lester, the prime suspect, and doesn’t want to live in a world in which such evil exists. But their scenes together quickly start to feel monotonous: they recur like clockwork, once every episode, and instead of building to something, they’re nothing but theme and variations. They retard the story, rather than advancing it, and it’s hard to avoid the impression that they exist solely to keep Molly from moving too quickly. I can understand the rationale here: Molly is too smart to be misled by Lester for long, and once she arrests him, the story is over. But I can’t help feeling that it could have been handled a tad more subtly.

Still, I probably shouldn’t talk, since I include much the same scene in Chapter 13 of Eternal Empire. Here, it’s an administrative hearing at the Serious Organised Crime Agency, in which Wolfe is called to account in much the same fashion as countless heroines before her. (In particular, the scene reads a lot like a similar one in the novel Hannibal, which isn’t entirely an accident.) In my defense, I can say that the sequence is designed to move the story along, rather than slowing it down: I had to convey some necessary exposition about the dead body Wolfe discovers in her previous scene, as well as to remind the reader of a few important events from the last novel, and delivering it in a setting with some inherent conflict is more interesting than a dry summation of the facts. Leaving Wolfe at a low point here also sets up the next big moment, when she has too much to drink and spills a crucial secret to the last person she should have told. And I don’t linger on it more than necessary. If there’s any conclusion to draw, it’s that a hoary scene like this—like most of the familiar tools in a writer’s bag of tricks—can better justify its existence if it’s there to serve a larger purpose, rather than just to rile up the reader. And if it riles up the reader just a little bit, well, I’ll take such moments where I can…

Written by nevalalee

March 19, 2015 at 9:21 am

The unbreakable television formula

leave a comment »

Ellie Kemper in Unbreakable Kimmy Schmidt

Watching the sixth season premiere of Community last night on Yahoo—which is a statement that would have once seemed like a joke in itself—I was struck by the range of television comedy we have at our disposal these days. We’ve said goodbye to Parks and Recreation, we’re following Community into what is presumably its final stretch, and we’re about to greet Unbreakable Kimmy Schmidt as it starts what looks to be a powerhouse run on Netflix. These shows are superficially in the same genre: they’re single-camera sitcoms that freely grant themselves elaborate sight gags and excursions into surrealism, with a cutaway style that owes as much to The Simpsons as to Arrested Development. Yet they’re palpably different in tone. Parks and Rec was the ultimate refinement of the mockumentary style, with talking heads and reality show techniques used to flesh out a narrative of underlying sweetness; Community, as always, alternates between obsessively detailed fantasy and a comic strip version of emotions to which we can all relate; and Kimmy Schmidt takes place in what I can only call Tina Fey territory, with a barrage of throwaway jokes and non sequiturs designed to be referenced and quoted forever.

And the diversity of approach we see in these three comedies makes the dramatic genre seem impoverished. Most television dramas are still basically linear; they’re told using the same familiar grammar of establishing shots, medium shots, and closeups; and they’re paced in similar ways. If you were to break down an episode by shot length and type, or chart the transitions between scenes, an installment of Game of Thrones would look a lot like one of Mad Men on paper. There’s room for individual quirks of style, of course: the handheld cinematography favored by procedurals has a different feel from the clinical, detached camera movements of House of Cards. And every now and then, we get a scene—like the epic tracking shot during the raid in True Detective—that awakens us to the medium’s potential. But the fact that such moments are striking enough to inspire think pieces the next day only points to how rare they are. Dramas are just less inclined to take big risks of structure and tone, and when they do, they’re likely to be hybrids. Shows like Fargo or Breaking Bad are able to push the envelope precisely because they have a touch of black comedy in their blood, as if that were the secret ingredient that allowed for greater formal daring.

Jon Hamm on Mad Men

It isn’t hard to pin down the reason for this. A cutaway scene or extended homage naturally takes us out of the story for a second, and comedy, which is inherently more anarchic, has trained us to roll with it. We’re better at accepting artifice in comic settings, since we aren’t taking the story quite as seriously: whatever plot exists is tacitly understood to be a medium for the delivery of jokes. Which isn’t to say that we can’t care deeply about these characters; if anything, our feelings for them are strengthened because the stories take place in a stylized world that allows free play for the emotions. Yet this is also something that comedy had to teach us. It can be fun to watch a sitcom push the limits of plausibility to the breaking point, but if a drama deliberately undermines its own illusion of reality, we can feel cheated. Dramas that constantly draw attention to their own artifice, as Twin Peaks did, are more likely to become cult favorites than popular successes, since most of us just want to sit back and watch a story that presents itself using the narrative language we know. (Which, to be fair, is true of comedies as well: the three sitcoms I’ve mentioned above, taken together, have a fraction of the audience of something like The Big Bang Theory.)

In part, it’s a problem of definition. When a drama pushes against its constraints, we feel more comfortable referring to it as something else: Orange is the New Black, which tests its structure as adventurously as any series on the air today, has suffered at awards season from its resistance to easy categorization. But what’s really funny is that comedy escaped from its old formulas by appropriating the tools that dramas had been using for years. The three-camera sitcom—which has been responsible for countless masterpieces of its own—made radical shifts of tone and location hard to achieve, and once comedies liberated themselves from the obligation to unfold as if for a live audience, they could indulge in extended riffs and flights of imagination that were impossible before. It’s the kind of freedom that dramas, in theory, have always had, even if they utilize it only rarely. This isn’t to say that a uniformity of approach is a bad thing: the standard narrative grammar evolved for a reason, and if it gives us compelling characters with a maximum of transparency, that’s all for the better. Telling good stories is hard enough as it is, and formal experimentation for its own sake can be a trap in itself. Yet we’re still living in a world with countless ways of being funny, and only one way, within a narrow range of variations, of being serious. And that’s no laughing matter.

The three kinds of surprise

with one comment

Billy Bob Thornton in Fargo

In real life, most of us would be happy to deal with fewer surprises, but in fiction, they’re a delight. Or at least that’s what movies and television would like us to believe. In practice, twist endings and plot developments that arrive out of left field can be exhausting and a little annoying, if they emerge less out of the logic of the story than from a mechanical decision to jerk us around. I’ve noted before that our obsession with big twists can easily backfire: if we’re conditioned to expect a major surprise, it prevents us from engaging with the narrative as it unfolds, since we’re constantly questioning every detail. (In many cases, the mere knowledge that there is a twist counts as a spoiler in itself.) And Hitchcock was smart enough to know that suspense is often preferable to surprise, which is why he restructured the plot of Vertigo to place its big reveal much earlier than it occurs in the original novel. Writers are anxious to prevent the audience from getting ahead of the story for even a second, but you can also generate a lot of tension if viewers can guess what might be coming just slightly before the characters do. Striking that balance requires intelligence and sensitivity, and it’s easier, in general, to just keep throwing curveballs, as shows like 24 did until it became a cliché.

Still, a good surprise can be enormously satisfying. If we start from first principles, building on the concept of the unexpected, we end up with three different categories:

1. When something happens that we don’t expect.
2. When we expect something to happen, but something else happens instead.
3. When we expect something to happen, but nothing happens.

And it’s easy to come up with canonical examples of all three. For the first, you can’t do much better than the shower scene in Psycho; for the second, you can point to something like the famous fake-out in The Silence of the Lambs, in which the intercutting of two scenes misleads us into thinking that an assault team is closing in on Buffalo Bill, when Clarice is really wandering into danger on her own; and for the third, you have the scene in The Cabin in the Woods when one of the characters is dared to make out with the wolf’s head on the wall, causing us to brace ourselves for a shock that never comes. And these examples work so elegantly because they use our knowledge of the medium against us. We “know” that the protagonist won’t be killed halfway through; we “know” that intercutting implies convergence; and we “know” when to be wary of a jump scare. And none of these surprises would be nearly as effective for a viewer—if one even exists—who could approach the story in complete naiveté.

Psycho

But not every surprise is equally rewarding. A totally unexpected plot development can come dangerously close—like the rain of frogs in Magnolia—to feeling like a gimmick. The example I’ve cited from The Silence of the Lambs works beautifully on first viewing, but over time, it starts to seem more like a cheat. And there’s a fine line between deliberately setting up a plot thread without paying it off and simply abandoning it. I got to thinking about this after finishing the miniseries Fargo, which I loved, but which also has a way of picking up and dropping story points almost absentmindedly. In a long interview with The A.V. Club, showrunner Noah Hawley tries to explain his thought process, with a few small spoilers:

Okay, Gus is going to arrest Malvo in episode four, and he’s going to call Molly to tell her to come, but of course, she doesn’t get to go because her boss goes. What you want is the scene of Molly and Malvo, but you’re not getting it…

In episode ten when Gus tells her to stay put, and she just can’t, and she gets her keys and goes to the car and drives toward Lester, we are now expecting a certain event to happen. Therefore, when that doesn’t happen, there’s the unpredictable nature of what’s going to happen, and you’re coming into it with an assumption…

By giving Russell that handcuff key, people were going to expect him to be out there for the last two episodes and play some kind of role in the end game, which is never a bad thing, to set some expectations [that don’t pay off].

Fargo is an interesting test case because it positions itself, like the original movie, as based on true events, when in fact it’s totally fictional. In theory, this frees it up to indulge in loose ends, coincidences, and lack of conventional climaxes, since that’s what real life is like. But as much as I enjoyed Fargo, I’m not sure I really buy it. In many respects, the show is obsessively stylized and designed; it never really feels like a story that could take place anywhere but in the Coenverse. And there are times when Hawley seems to protest too much, pointing to the lack of a payoff as a subversion when it’s really more a matter of not following through. The test, as always, is a practical one. If the scene that the audience is denied is potentially more interesting than what actually happens, it’s worth asking if the writers are being honest with themselves: after all, it’s relatively easy to set up a situation and stop, while avoiding the hard work that comes with its resolution. A surprise can’t just be there to frustrate our expectations; it needs to top them, or to give us a development that we never knew we wanted. It’s hard to do this even once, and even harder to do it consistently. But if the element of surprise is here to stay—and it doesn’t seem to be going anywhere—then it should surprise us, above all else, with how good it is.

Written by nevalalee

March 11, 2015 at 9:21 am

The Achilles heel

with 2 comments

Jon Hamm on Mad Men

Note: Every Friday, The A.V. Club, my favorite pop cultural site on the Internet, throws out a question to its staff members for discussion, and I’ve decided that I want to join in on the fun. This week’s topic: “What fictional character embodies your masculine ideal?”

AMC used to stand for American Movie Classics, but over the last few years, it’s felt more like an acronym for “antiheroic male character.” You’ve met this man before. He’s a direct descendant of Tony Soprano, who owed a great deal in turn to Michael Corleone: a deeply flawed white male who screws up the lives of just about everyone around him, whether out of uncontrollable compulsion, like Don Draper, or icy calculation, like Walter White. Yet he’s also enormously attractive. He’s great at his job, he knows what he wants and how to get it, and he doesn’t play by the rules. It’s a reliable formula for an interesting protagonist, except that his underlying motivations are selfish, and everyone else in his life is a means to an end. And the more ruthless he is, the more we respond to him. I’m only four episodes into the current season of House of Cards, but I’ve already found myself flirting with boredom, because Frank Underwood has lost so much of his evil spark. As much as I enjoy Kevin Spacey’s performance, I’ve never found Frank to be an especially compelling or even coherent character, and without that core of hate and ambition, I’m no longer sure why I’m supposed to be watching him at all.

Ever since Mad Men and Breaking Bad brought the figure of the male antihero to its current heights, we’ve seen a lot of shows, from Low Winter Sun to Ray Donovan, attempting to replicate that recipe without the same critical success. In itself, this isn’t surprising: television has always been about trying to take apart the shows that worked and put the pieces together in a new way. But by fixating on the obvious traits of their antiheroic leads, rather than on deeper qualities of storytelling, the latest round of imitators runs the risk of embodying all the genre’s shortcomings and few of its strengths. There’s the fact, for instance, that even the best of these shows have problems with their female characters. Mad Men foundered with Betty Draper for much of its middle stretch, to the point where it seemed tempted to write her out entirely, and I never much cared for Skyler on Breaking Bad—not, as some would have it, because I resented her for getting in Walt’s way, but because she was shrill and uninteresting. Even True Detective, a minor masterpiece of the form with two unforgettable male leads, couldn’t figure out what to do with its women. (The great exception here is Fargo, which offered us a fantastic heroine, even if she felt a little sidelined toward the end.)

Achilles and Ajax

Of course, the figure of the antihero is as old as literature itself. It’s only a small step from Hamlet to Edmund or Iago, and the Iliad, which inaugurates nothing less than the entire western tradition, opens by invoking the wrath of Achilles. In many ways, Achilles is the prototype for all protagonists of this kind: he’s a figure of superhuman ability on the battlefield, with a single mythic vulnerability, and he’s willing to let others die as he sulks in his tent out of wounded pride, over a woman who is treated as a spoil in a conflict between men. Achilles stands alone, and he’s defined more by his own fate than by any of his human relationships. (To the extent that other characters are important in our understanding of him, it’s as a series of counterexamples: Achilles is opposed at one point or another to Hector, Odysseus, and Agamemnon, and the fact that he’s contrasted against three such different men only points to how complicated he is.) It’s no wonder that readers tend to feel more sympathy for Hector, who is allowed moments of recognizable tenderness: when he tries to embrace his son Astyanax, who bursts into tears at the sight of his father’s armor and plumed helmet, the result is my favorite passage in all of classical poetry, because it feels so much like an instant captured out of real life and transmitted across the centuries.

Yet Achilles is the hero of the Iliad for a reason; Hector, for all his appeal, isn’t cut out for sustaining an entire poem. An antihero, properly written, can be the engine that drives the whole machine, and in epic poetry, or television, you need one heck of a motor. But a motor isn’t a man, or at least it’s a highly incomplete version of what a man can be. And there’s a very real risk that the choices writers make for the sake of the narrative can shape the way the rest of us think and behave. As Joseph Meeker points out, we tend to glamorize the tragic hero, who causes nothing but suffering to those around him, over the comic hero, who simply muddles through. Fortunately, we have a model both for vivid storytelling and meaningful connection in Achilles’ opposite number. Odysseus isn’t perfect: he engages in dalliances of his own while his wife remains faithful, and his bright ideas lead to the deaths of most of his shipmates. But he’s much closer to a comic than a tragic hero, relying on wit and good timing as much as strength to get home, and his story is like a guided tour of all the things a man can be: king, beggar, father, son, husband, lover, and nobody. We’d live in a happier world if our fictional heroes were more like Odysseus. Or, failing that, I’ll settle for Achilles, as long as he’s more than just a heel.

Written by nevalalee

March 6, 2015 at 9:12 am

The test of tone

with 2 comments

Brendan Gleeson and Colin Farrell in In Bruges

Tone, as I’ve mentioned before, can be a tricky thing. On the subject of plot, David Mamet writes: “Turn the thing around in the last two minutes, and you can live quite nicely. Turn it around in the last ten seconds and you can buy a house in Bel Air.” And if you can radically shift tones within a single story and still keep the audience on board, you can end up with even more. If you look at the short list of the most exciting directors around—Paul Thomas Anderson, David O. Russell, Quentin Tarantino, David Fincher, the Coen Brothers—you find that what most of them have in common is the ability to alter tones drastically from scene to scene, with comedy giving way unexpectedly to violence or pathos. (A big exception here is Christopher Nolan, who seems happiest when operating within a fundamentally serious tonal range. It’s a limitation, but one we’re willing to accept because Nolan is so good at so many other things. Take away those gifts, and you end up with Transcendence.) Tonal variation may be the last thing a director masters, and it often only happens after a few films that keep a consistent tone most of the way through, however idiosyncratic it may be. The Coens started with Blood Simple, then Raising Arizona, and once they made Miller’s Crossing, they never had to look back.

The trouble with tone is that it imposes tremendous switching costs on the audience. As Tony Gilroy points out, during the first ten minutes of a movie, a viewer is making a lot of decisions about how seriously to take the material. Each time the level of seriousness changes gears, whether upward or downward, it demands a corresponding moment of consolidation, which can be exhausting. For a story that runs two hours or so, more than a few shifts in tone can alienate viewers to no end. You never really know where you stand, or whether you’ll be watching the same movie ten minutes from now, so your reaction is often how Roger Ebert felt upon watching Pulp Fiction for the first time: “Seeing this movie last May at the Cannes Film Festival, I knew it was either one of the year’s best films, or one of the worst.” (The outcome is also extremely subjective. I happen to think that Vanilla Sky is one of the most criminally underrated movies of the last two decades—few other mainstream films have accommodated so many tones and moods—but I’m not surprised that so many people hate it.) It also annoys marketing departments, who can’t easily explain what the movie is about; it’s no accident that one of the worst trailers I can recall was for In Bruges, which plays with tone as dexterously as any movie in recent memory.

Hugh Dancy on Hannibal

As a result, tone is another element in which television has considerable advantages. Instead of two hours, a show ideally has at least one season, maybe more, to play around with tone, and the number of potential switching points is accordingly increased. A television series is already more loosely organized than a movie, which allows it to digress and go off on promising tangents, and we’re used to being asked to stop and start from week to week, so we’re more forgiving of departures. That said, this rarely happens all at once; like a director’s filmography, a show often needs a season or two to establish its strengths before it can go exploring. When we think back to a show’s pivotal episodes—the ones in which the future of the series seemed to lock into place—they’re often installments that discovered a new tone that worked within the rules that the show had laid down. Community was never the same after “Modern Warfare,” followed by “Abed’s Uncontrollable Christmas,” demonstrated how much it could push its own reality while still remaining true to its characters, and The X-Files was altered forever by Darin Morgan’s “Humbug,” which taught the show how far it could kid itself while probing into ever darker places.

At its best, this isn’t just a matter of having a “funny” episode of a dramatic series, or a very special episode of a sitcom, but of building a body of narrative that can accommodate surprise. One of the pleasures of following Hannibal this season has been watching the show acknowledge its own absurdity while drawing the noose ever tighter, which only happens after a show has enough history for it to engage in a dialogue with itself. Much the same happened to Breaking Bad, which had the broadest tonal range imaginable: it was able to move between borderline slapstick and the blackest of narrative developments because it could look back and reassure itself that it had already done a good job with both. (Occasionally, a show will emerge with that kind of tone in mind from the beginning. I haven’t had a chance to catch Fargo on FX, but I’m curious about it, because it draws its inspiration from one of the most virtuoso experiments with tone in movie history.) If it works, the result starts to feel like life itself, which can’t be confined easily within any one genre. Maybe that’s because learning to master tone is like putting together the pieces of one’s own life: first you try one thing, then something else, and if you’re lucky, you’ll find that they work well side by side.

Written by nevalalee

April 22, 2014 at 9:22 am

True Grit and the Coen Brothers

leave a comment »

I mean, who says exactly what they’re thinking? What kind of game is that?
—Kelly Kapoor, The Office

True Grit, as many critics have already noted, is the first movie that Joel and Ethan Coen have made without irony. I liked it a lot, but spent the entire movie waiting for a Coenesque twist that never came—which left me wondering if the twist was the fact that there was no twist. The truth, I think, is somewhat simpler: a combination of affection for the original source material and a desire by the Coens to show what they could do with a straightforward genre piece. (I also suspect that, after decades of thriving in the margins, the Coens were juiced by the prospect of their first real blockbuster.)

As much as I enjoyed True Grit, I found myself nostalgic for the old Coens, rather to my own surprise. There was a time, not long ago, when I would have argued that the Coen brothers, for all their craft and intelligence, were the most overrated directors in the world. In particular, I felt that the very qualities that made them so exceptional—their craft, their visual elegance, their astonishing control—made them especially unsuited for comedy, which requires more spontaneity and improvisation than they once seemed willing to allow. And even their best movies, like Fargo, never escaped a faint air of condescension toward their own characters.

As a result, with the notable exception of The Hudsucker Proxy, which I’ve always loved, I’ve never found the Coens all that funny—or at least not as funny as their admirers insist. Despite my affection for The Dude, I was never as big a fan of the movie in which he found himself, which reads wonderfully as a script, but never really takes off on the screen. And when the Coens try to work in pure comedic mode—as in The Ladykillers, Intolerable Cruelty, and the inexplicable Burn After Reading (which, I’ll grant, does have its admirers)—I find the results close to unwatchable.

In recent years, though, something happened. No Country For Old Men, though it never quite justifies the narrative confusion of its last twenty minutes, is both incredibly tense on the first viewing and hugely amusing thereafter. A Serious Man struck me as close to perfect—their best since Miller’s Crossing, which is still their masterpiece. The Coens, it seemed, had finally relaxed. Their craft, as flawless as ever, had been internalized, instead of storyboarded. Age and success had made them more humane. True Grit feels like the logical culmination of this trend: it’s a movie made, strangely enough, for the audience.

That said, though, I hope that their next movie finds the Coens back in their usual mode. (The rumor that they might still adapt The Yiddish Policemen’s Union is very promising.) True Grit is dandy, but it’s a movie that any number of other directors (like Steven Spielberg, its producer) might have made. For most filmmakers, this retreat from eccentricity would be a good thing, but the Coens have earned the right to be prickly and distinctive. With True Grit, they’ve proven their point: they can make a mainstream movie with the best of them. Now it’s time to get back to work.
