Posts Tagged ‘True Detective’
Over the last week, I’ve read two stories that shed an unexpected light on the role of money in the artistic process. The first was the excellent Vulture article about the business of peak television, which I’ve already discussed here in detail. It notes that unprecedented amounts of cash are being thrown at prestige television series, with the top one percent of stars benefiting disproportionately, while actors who once might have played leading roles in network procedurals are struggling to get the same parts. After a decade in which pundits constantly predicted the demise of scripted television under an onslaught of cheap reality shows, the industry has expanded to make room for more writers than ever before—which has led to a corresponding shortage of qualified line producers. But a spike in financial resources doesn’t always translate into good storytelling. The difference between the first and second seasons of True Detective is a reminder, if we needed one, that the exact same factors on paper can yield very different results in practice, if that vital spark is missing. And what we’re really seeing is less a golden age than a codification of a new set of conventions. “Prestige television,” like “literary fiction,” is a genre, not a measure of quality, and its usual characteristics include ten episodes per season, a streaming or cable platform, outstanding production values, and a white male antihero. It may not always be great television, but as long as it satisfies the executives investing in new programming, it doesn’t have to be.
The other article that caught my eye was “Sunk,” Mitch Moxley’s memorable account in The Atavist of the Chinese billionaire Jon Jiang’s doomed attempt to bring his dream movie project, Empires of the Deep, to fruition. It defies easy summary, but the short version is that Jiang wrote an original screenplay, originally called Mermaid Island, and enlisted a bewildering array of collaborators—including the French filmmaker Pitof and the starlet Olga Kurylenko—to make it happen, only to blow more than $100 million on a production that chewed up a revolving door of screenwriters and directors and has yet to produce any usable footage. (Of the many strange stories that the article relates, perhaps the weirdest involves Irene Violette, the actress cast as a mermaid who had to slip out a window in the dead of night to get out of her contract.) Many of the cast and crew seem to have consoled themselves with the idea that great movies can emerge from troubled shoots, and it’s heartbreaking to hear how director Jonathan Lawrence hoped to make this unholy mess into something like Raiders of the Lost Ark. But the entire debacle hinges on what seems, at first, like a baffling paradox. Jiang had enormous financial resources to throw at the production, but he also cut corners, used cheap costumes and special effects, and never paid anyone on time. In spite of appearances, it’s possible that he invested very little of his own money in the film: a former production executive told Moxley that he believes that the billionaire relied mostly on outside investors, all of whom lost almost everything.
But I think the real explanation is more nuanced than this, and it ties back to the uneasy relationship between money, media, and creative freedom. The case of Empires of the Deep is only an exaggerated version of the dilemma that arises whenever the writer of the script is also the head of the studio, or at least the man who holds the purse strings: without a higher authority to keep his worst tendencies in check, you end up with a movie that films the first draft of the script and has no incentive to make it any better. The situation becomes even more dire when the mogul in question seems to have no idea of how the medium works. You’d think that Jiang, a real estate tycoon, would at least have some notion of how to turn a blueprint into something real, but he appears to have taken a very different set of lessons from his business ventures. On visiting one of Jiang’s properties in Beijing, Moxley writes: “Although it’s only a decade old, up close the brick homes look cheap and worn, like so many properties hastily erected during China’s boom.” A movie made using the same principles would look pretty much like what we see here. Moxley also notes that the issue of guanxi, or relationships and connections, may have posed problems on the set. He observes:
One’s loyalty depends on who it is one has the strongest relationship with. That might be the director or a cinematographer or a producer—but it’s rarely the audience or the movie’s bottom line, which are generally the two highest priorities for American movies.
This is a remarkably shrewd point, and not just because it implies that what the production lacked, like many television shows, was a good line producer, whose job is to navigate those very networks. It might make us smile, but the plain fact is that such misaligned incentives are at the root of many artistic failures, and China doesn’t have a monopoly on this. A version of guanxi exists, in a less obvious form, at every Hollywood studio: each decision, from the lowest level to the highest, ultimately hinges on an individual executive’s desire not to get fired, which makes otherwise inexplicable choices easier to understand. Office politics, lines of succession, changes of regime, or the desire to maintain a relationship with a star can have a far greater impact on what gets made than “the audience or the movie’s bottom line.” This can be true of television, too: the need for streaming services like Hulu or Amazon to enhance their profiles, in the absence of concrete ratings, can lead to shows being produced that are less about real quality than its simulation, which for many viewers is more than enough. (Witness the success of House of Cards, which started the whole streaming revolution in the first place, despite a consistent lack of good writing.) Money isn’t the root of all evil in art: more worthwhile stories have died because of a lack of money than because of its overabundance. But without the constraints that a real audience provides, making a good movie can be harder than squeezing a mermaid through the eye of a needle.
Note: Spoilers follow for the first two episodes of the current season of Fargo.
The most striking aspect of the second season of Fargo—which, two episodes in, already ranks among the most exciting television I’ve seen in months—is its nervous visual style. If the first season had an icy, languid look openly inspired by its cinematic source, the current installment is looser, jazzier, and not particularly Coenesque: there are split screens, montages, dramatic chyrons and captions, and a lot of showy camerawork. (It’s so visually rich that the image of a murder victim’s blood mingling with a spilled vanilla milkshake, on which another show might have lingered, is only allowed to register for a fraction of a second.) The busy look of the season so far seems designed to mirror its plot, which is similarly overstuffed: an early scene involving a confrontation at a waffle joint piles on the complications until I almost wished that it had followed Coco Chanel’s advice and removed one accessory before leaving the house. But that’s part of the point. Fargo started off as a series that seemed so unlikely to succeed that it devoted much of its initial run to assuring us that it knew what it was doing. Now that its qualifications have been established, it’s free to spiral off into weirder directions without feeling the need to abide by any precedent, aside, of course, from the high bar it sets for itself.
And while it might seem premature to declare victory on its behalf, it’s already starting to feel like the best of what the anthology format has to offer. A few months ago, after the premiere of the second season of another ambitious show in much the same vein, I wrote: “Maintaining any kind of continuity for an anthology show is challenging enough, and True Detective has made it as hard on itself as possible: its cast, its period, its setting, its structure, even its overall tone have changed, leaving only the whisper of a conceit embedded in the title.” Like a lot of other viewers, I ended up bailing before the season was even halfway over: it not only failed at the difficult task it had set for itself, but it fell short in most other respects as well. And I had really wanted it to work, if only because cracking the problem of the anthology series feels like a puzzle on which the future of television depends. We’re witnessing an epochal shift of talent from movies to the small screen, as big names on both sides of the camera begin to realize that the creative opportunities it affords are in many ways greater than what the studios are prepared to offer. And what we’re likely to see within the next ten years—to the extent that it hasn’t already happened—is an entertainment landscape in which Hollywood focuses exclusively on blockbusters while dramas and quirky smaller films migrate to cable or, in rare cases, even the networks.
It isn’t hard to imagine this scenario: in many ways, we’re halfway there. But the current situation leaves a lot of actors, writers, and directors stranded somewhere in the middle: unable to finance the projects they want in the movies, but equally unwilling to roll the dice on the uncertainties of conventional episodic television. The anthology format works best when it strikes a balance between those two extremes. It can be packaged as conveniently as a movie, with a finite beginning and ending, and it allows a single creative personality to exert control throughout the process. By now, its production values are more than comparable to those of many feature films. And instead of such a story being treated as a poor relation of the tentpole franchises that make up a studio’s bottom line, on television, it’s seen as an event. As a result, at a time when original screenplays are so undervalued in Hollywood that it’s newsworthy when one gets produced at all, it’s not surprising that television is attracting talent that would otherwise be stuck in turnaround. But brands are as important in television as they are anywhere else—it’s no accident that Fargo draws its name from a familiar title, however tenuous that connection turned out to be in practice—and for the experiment to work, it needs a few flagship properties to which such resources can be reliably channelled. If the anthology format didn’t already exist, it would be necessary to invent it.
That’s why True Detective once seemed so important, and why its slide into irrelevance was so alarming. And it’s why I also suspect that Fargo may turn out to be the most important television series on the air today. Its first season wasn’t perfect: the lengthy subplot devoted to Oliver Platt’s character was basically a shaggy dog story without an ending, and the finale didn’t quite succeed in living up to everything that had come before. Yet it remains one of the most viscerally riveting shows I’ve ever seen—you have to go back to the best years of Breaking Bad to find a series that sustains the tension in every scene so beautifully, and that mingles humor and horror until it’s hard to tell where one leaves off and the other begins. (But will Jesse Plemons ever get a television role that doesn’t force him to dispose of a corpse?) If the opening act of the second season is any indication, the show will continue to draw talent intrigued by the opportunities that it affords, which translate, in practical terms, into scene after scene that any actor would kill to play. And the fact that it can do this while departing strategically from its own template is especially heartening. If True Detective is defined, in theory, by the genre associations evoked by its title, Fargo is about a triangulation between the contrasts established by the movie that inspired it: politeness, quiet desperation, and sudden violence. It’s a technical trick, but it’s a very good one, and it’s a machine that can generate stories forever, with good and evil mixed together like blood in vanilla ice cream.
A section break in a novel, like the end of a chapter, serves much the same function as the frame around a painting: it’s an artifact of the physical constraints of the medium, but it also becomes an expressive tool in its own right. When books like the Iliad or Odyssey were stored in scroll form, there were inherent limits to how large any one piece could be, and writers—or at least their scribes—began to structure those works accordingly. These days, there’s no particular reason why a novel needs to be cut up into sections or chapters at all, but we retain the convention because it turned out to be useful as a literary strategy. A chapter break gives the reader a bit of breathing space; it can be used as a punchline or statement in itself, like an abrupt cut to black in a movie; and it allows the patterns of the story to emerge more clearly, with the white space between scenes serving as a version of William Blake’s bounding line. This is particularly true of novels made up of many short chapters, which allow a rhythm to be established that creates an urgency of its own. We associate the technique with popular fiction, but even a literary novel like James Salter’s Solo Faces, which I’m reading now, benefits from that kind of momentum. (To be fair, more than one reader has criticized the chapters in my own novels as being too short.)
Elsewhere, I’ve defined a chapter as a unit of narrative that gives the reader something to anticipate. Ideally, every element should inform our expectations about what happens next, and as soon as that anticipation assumes a concrete form, the chapter ends. This is really more a guideline for writers than an empirical observation: in practice, chapters open and close in all kinds of places. But it’s a useful rule to keep in mind, along with the general principle that scenes should start as late and end as early as possible. And it’s often something that can only be achieved in the rewrite. When you’re writing a first draft, a chapter may simply be the maximum amount of narrative that you’re capable of keeping in your head all at once. The lengths of the chapters in my novels are organically connected to how much I can write in one day, which I suspect is also true of many other writers: “It isn’t at all surprising to write a chapter in a day,” John le Carré told The Paris Review, “which for me is about twenty-two pages.” (I love the precision of that number, by the way—that’s the mark of a real novelist.) Later, you go back to see how the rhythms enforced by the writing process can be converted into the ones that the act of reading demands, and if you’re lucky, you’ll find that the two coincide. As Christian Friedrich Hebbel says:
Whoever absorbs a work of art into himself goes through the same process as the artist who produced it—only he reverses the order of the process and increases its speed.
What’s true of a chapter is also true of a larger section, except on a correspondingly grander scale. I’ve said before that when I start working on a novel, I usually know all of the major act breaks in advance, but that’s only half correct: more accurately, I have a handful of big moments in mind, and I know enough about craft to want to structure the act breaks around them. A major turning point that occurs without propelling one section into another feels like a waste of energy. (Any good novel will have more than three or four turning points, of course, but you intuitively sense which ones deserve the most prominent positions, and build the rest of the story around them.) There are times, too, when I know that a section break ought to occur at a certain position, so the scene that leads into it has to be correspondingly built up. A moment of peril, a cliffhanger, a sudden surprise or revelation: these are the kinds of scenes that we’ve been taught to expect just before a section ends. Sometimes they can seem artificial, or like an outright cheat—as many viewers felt about the end of a recent episode of True Detective. But if you learn to honor those conventions, which evolved that way for a reason, while still meeting the demands of the story, you often end up with something better than you would have had otherwise. Which is really the only reason to think in terms of genre at all.
When it came to the end of Part I of Eternal Empire, I knew more or less what had to happen before I began writing a single word. The novel opens with the mystery of why a painting was defaced at the Metropolitan Museum of Art, and this seemed like a good time to resolve it. Until now, my two leads, Maddy and Ilya, had been moving along separate paths, and I had to set things up for them to intersect—which meant placing Maddy in real physical danger for the first time. Either of these story beats could have served as a decent ending for a section, and common sense dictated that I put them both together. Hence the footstep that Maddy hears behind her, and the hood that comes down over her head, a few seconds after she’s figured out the true meaning of the painting. Neither moment is necessarily related, except by the logic of the structure itself. (There’s also a sense in which I made the circumstances of Maddy’s abduction more dramatic because of where it fell in the book: it could have just been a tap on the shoulder, but it was better if it was shocking enough to carry the reader through the next stretch of pages.) Story and structure end up influencing each other in both directions, and if I’m lucky, they should seem inseparable. That’s true of every line of a novel, but it’s particularly clear at these hinge points, which are like the places where a building has to be reinforced to sustain the stresses of the overall design. And those stresses are about to get a lot more intense…
A few days ago, a seemingly innocent question was posted to the Explain Like I’m Five forum on Reddit: “Why do we say someone was ‘in’ a movie, but ‘on’ a TV show?” This may not seem like a mindblower, but it’s something I’ve wanted to write about here for a long time, and I find myself thinking about it at least a couple of times a week. In particular, it occurs to me whenever I type up a descriptive tag for an image on this blog, which is often a screenshot of an actor in a movie or on a television show. When you’re doing that kind of routine housekeeping, your thoughts tend to wander in odd directions, and I’ve consistently found myself wondering why I need to type “Jon Hamm on Mad Men” on the one hand and “Matthew McConaughey in Interstellar” on the other. And while prepositions in any living language are inherently weird and inexplicable—as a few spoilsport commenters on the thread above take pains to point out—I think it’s still worth digging into the problem, since it seems to express something meaningful about the way we experience these two different but entwined forms of storytelling.
As usual, the discussion on Reddit involves a lot of wild guesswork and speculation, but it settles around a number of intriguing points:
- A television set is perceived as an object, while a movie is a collection of information. Saying that someone is “on television” is rooted in our experience of that piece of furniture in our living room on which stories are projected—hence the usage “What’s on?” A theatrical feature, by contrast, has an inherent intangibility, as a series of flickering images appearing at a distance: it’s less a physical thing than an event.
- Conversely, you could also think of a television show as an ongoing process, while a movie has a fixed beginning and end. Thus it seems intuitively correct to think of an actor as “on” a television show, as if he were a passenger on a journey with no obvious destination, while the same actor resides “in” the clearly defined container that a movie provides.
And while there are additional nuances involved here—can we say an actor was “in” a show that is no longer on the air?—it seems that these prepositions hinge on paradoxical properties of physicality and duration. A television show is a physical object with no endpoint; a movie is an intangible idea with definite boundaries.
If we follow this logic further, it sheds light on a number of problems of real practical resonance. There’s the issue, for instance, of why television stars have often encountered trouble finding the same success in film. Critics like to note that there’s a difference between the kind of personality we want to invite into our homes night after night and the kind we want to pay money to see in a theater. A face that resides comfortably in a physical box may not look nearly as appealing on a screen the size of a billboard: television actors tend to have faces, however attractive, that can fade into the background, while actors in feature films demand our attention. Similarly, a television show can—and often does—survive once its original lead has moved on, while nearly every mainstream movie is built explicitly around a star. Saying that an actor is “on” a show implies, rightly or not, that he could disembark while the series as a whole sailed on; try to remove an actor “in” a movie, though, and you’re talking about a fundamental disruption of the narrative fabric. It’s possible to take this kind of analogy too far, of course, and there are plenty of exceptions. But it’s hard not to regard those unassuming prepositions as signaling something deeper about how we relate to the fictional men and women in our lives.
Which raises the unanswerable question of how these linguistic conventions might be different if movies and television had somehow emerged together in their current form. (We can leave aside the related conundrum of why we “see” a movie in theaters but “watch” it everywhere else.) Neither film nor television is particularly tethered to any one device or delivery system these days: if anything, movies have gotten slightly more tangible, television harder to pin down. And while many shows have started to feel more like finite works of art, studio franchises resist tidy endings: it makes about as much sense to say that Matthew McConaughey was “in” True Detective as to say that Vin Diesel is “on” the Fast and Furious series. And while the line between these usages may continue to blur, to the point where our children may use them interchangeably, it seems likely that those prepositions will persist for a while longer, much like the ideas underneath. A fossil word can live on in a language long after its original purpose has been forgotten, and old assumptions about media—like the premise that television is somehow a less reputable or prestigious medium than film, despite huge evidence to the contrary—also have a way of lingering on. And it can take a long time before we learn how to think, or speak, outside the box.
Of all the books on writing I’ve read, the one that fills me with the most mixed feelings is Blake Snyder’s Save the Cat! Everything about it, from its title to its cover art to the fact that its late author’s only two produced scripts were Stop! Or My Mom Will Shoot and Blank Check, seems designed to fill any thinking writer with dread. And the hate it inspires isn’t entirely unjustified. If every film released by a major studio these days seems to follow exactly the same structure, with a false crisis followed by a real crisis and so on down the line, it’s because writers are encouraged to follow Snyder’s beat sheet as closely as possible. It’s hard to see this as anything but bad for those of us who crave more interesting movies. And yet—and this is a third-act twist of its own—the book contains gems of genuinely useful advice. The number of reliable storytelling tricks in any medium can be counted on two hands, and Snyder provides a good four or five of them, even if he gives them insufferable names. The admonition to save the cat, for instance, is really a way of thinking about likability: if you show the protagonist doing something admirable early on, we’re more likely to follow him down the story’s darker paths. Snyder says, without irony: “They don’t put it into movies anymore.” Now, it’s in pretty much every movie, and the book’s most lasting impact may have been to wire this idea into the head of every aspiring screenwriter.
What I find particularly fascinating is that these scenes now pop up even in weird, unclassifiable movies that otherwise don’t seem to have much of an interest in conventional screenplay structure. Blackhat, for example, introduces Chris Hemsworth’s jailed hacker with a scene in which he’s admonished for breaking into the prison network and filling the commissary accounts of his fellow inmates with money. We’re meant to think of him as a technological badass—he carried out the hack using a stolen phone—with a good guy’s heart, and even if it doesn’t totally land, it sustains us ever so slightly throughout the rest of the movie, which turns Hemsworth into the taciturn, emotionally implosive hero that Michael Mann finds hard to resist. Similarly, in the new season of True Detective, we first see Colin Farrell’s character dropping his son off at school with a pep talk, followed by the line: “See you in two weeks.” A divorced cop with a kid he loves is one of the hoariest tropes of all, but again, it keeps us on board, even when Farrell shows some paternal love by beating a bully’s father to a pulp. Without that small moment at the beginning, we wouldn’t have much reason to feel invested in him at all. In other respects, Blackhat and True Detective don’t feel like products of the Snyder school: for all their flaws, neither is just a link from the sausage factory. But both Mann and Nick Pizzolatto know a good trick when they see one.
In fact, as counterintuitive as it might seem, you could say that an unconventional narrative is in greater need of a few good, cheap tricks than a more standard story. A film that makes great demands on its audience’s attention span or tolerance of complexity benefits from a few self-contained anchor points, and the nice thing about Snyder’s tips is that they exist in isolation from the real business at hand. You could think of saving the cat as the minimum effective dose for establishing a character’s likability. Mann has better things to do than to set Hemsworth up as a nice guy, so he slots in one fairly obvious scene and moves on. Whether or not it works—and a lot of viewers would say it doesn’t—is less important than the idea that a movie that resists formula benefits from inserting standard elements whenever they won’t detract from the whole. (For proof, look no further than L.A. Confidential, which I think is one of the best scripts of all time: it’s practically an anthology of tricks that brilliantly get the job done.) Most great artists, from Shakespeare on down, do this intuitively: the distinctive thing about screenwriting, in which writers tend to romanticize themselves as guns for hire, is that it tries to turn it into an industrial process, a readymade part that can be dropped in more or less intact whenever it’s required. And if the result works, that’s all the justification it needs.
I was reminded of this when I revisited Chapter 24 of Eternal Empire. When I wrote it, I don’t think I’d read Snyder’s book, but this chapter is as good an illustration as I can imagine of one of his other tips. Here’s how he puts it:
The problem of making antiheroes likable, or heroes of a comeuppance tale likable enough to root for, can also be finessed…When you have a semi-bad guy as your hero—just make his antagonist worse!
All three of my novels return to this well repeatedly, since their central character, Ilya Severin, is far from a conventionally likable lead: he’s a former hit man who kills in cold blood more than once in the course of the series. Yet he works as an engaging character, mostly because he’s always up against someone even scarier. Sharkovsky in The Icon Thief, Karvonen in City of Exiles, and Vasylenko in Eternal Empire were all conceived as antagonists who would make Ilya look better by comparison, and it’s rarely more explicit than it is here, when Vasylenko kills not one but two people—an innocent hostage and one of his own men. It’s a little excessive, maybe, but when I look back at it, it’s clear that I needed two bodies to get my point across. Nobody is safe, whether you’re a bystander or a member of the inner circle, and the scene propels Ilya, and the reader, into the next phase of the story. Because as bad as his situation looks now, it’s going to get worse very soon…
As a devoted viewer of the current golden age of television, I sometimes wake up at night haunted by the question: What if the most influential series of the decade turns out to be American Horror Story? I’ve never seen even a single episode of this show, and I’m not exactly a fan of Ryan Murphy. Yet there’s no denying that it provided the catalyst for our growing fascination with the anthology format, in which television shows are treated less as ongoing narratives with no defined conclusion than as self-contained stories, told over the course of a single season, with a clear beginning, middle, and end. And American Horror Story deserves enormous credit for initially keeping this fact under wraps. Until its first season finale aired, it looked for all the world like a conventional series, and Murphy never tipped his hand. As a result, when the season ended by killing off nearly every lead character, critics and audiences reacted with bewilderment, with many wondering how the show could possibly continue. (It’s especially amusing to read Todd VanDerWerff’s writeup on The A.V. Club, which opens by confessing his early hope that this might be an anthology series—”On one level, I knew this sort of blend between the miniseries and the anthology drama would never happen”—and ends with him resignedly trying to figure out what might happen to the Harmon family next year.)
It was only then that Murphy indicated that he would be tackling a different story each season. Even then, it took critics a while to catch on: I even remember some grumbling about the show’s decision to compete in the Best Miniseries category at the Emmys, as if it were some kind of weird strategic choice, when in fact it’s the logical place for a series like this. And at a time when networks seem inclined to spoil everything and anything for the sake of grabbing more viewers, the fact that this was actually kept a secret is a genuine achievement. It allowed the series to take the one big leap—killing off just about everybody—that nobody could have seen coming, but which was utterly consistent with the rules of its game. (It wouldn’t be the first or last time that horror, which has always been a sandbox for quick and dirty experimentation, pointed the way for more reputable genres, but that’s a topic for another post.) The result cleared a path for critical favorites from True Detective to Fargo to operate in a format that offers major advantages: it can draw big names for a limited run, it allows stories to be told over the course of ten tightly structured episodes rather than stretched over twenty or more, it lends itself well to being watched in one huge binge, and it offers viewers the chance for a definitive conclusion.
Yet the element of surprise that made the first season of American Horror Story so striking no longer exists. When we’re watching a standard television series, we go into it with a few baseline assumptions: the show may kill off important characters, but it isn’t likely to wipe out most of its cast at once, and it certainly won’t blow up its entire premise. American Horror Story worked because it walked all over those conventions, and it fooled its viewers because it shrewdly kept its big structural conceit a secret. But it reminds me a little of what Daffy Duck said after performing an incredible novelty act that involved blowing himself up with nitroglycerin: “I can only do it once.” With all the anthology series that follow, we know that everything is on the table: there’s no reason for the show to preserve anything at all. And it affects the way we watch these shows, not always to their benefit. During the first season of True Detective, fan speculation spiraled off in increasingly wild directions because we knew that there was no long game to keep the show from being exactly as crazy as it liked. There wasn’t any reason why Cohle or Hart couldn’t be the killer, or why they couldn’t both die, and I spent half the season convinced that Hart’s wife might be the Yellow King, if only because she otherwise seemed like just another thankless female character—and that couldn’t be what the show had in mind, could it?
And if viewers seem to have turned slightly against True Detective in retrospect, it’s in part because nothing could have lived up to the more outlandish speculations. It was simply an excellent genre show, without a closing mindblower of a twist, and I liked it just fine. And it’s possible that the second season will benefit from those adjusted expectations, although it has plenty of other obstacles to overcome. Maintaining any kind of continuity for an anthology show is challenging enough, and True Detective has made it as hard on itself as possible: its cast, its period, its setting, its structure, even its overall tone have changed, leaving only the whisper of a conceit embedded in the title. Instead of Southern Gothic, its new season feels like an homage to those Los Angeles noirs in which messy human drama plays out against a backdrop of urban development, which encompasses everything from Chinatown to L.A. Confidential to Who Framed Roger Rabbit. I’m a little mixed on last night’s premiere: these stories gain much of their power from contrasts between characters, and all the leads here share a common dourness. The episode ends with three haunted cops meeting each other for the first time, but they haven’t been made distinctive enough for that collision to seem particularly exciting. Still, despite some rote storytelling—Colin Farrell’s character is a divorced dad first seen dropping off his son at school, because of course he is—I really, really want it to work. There are countless stories, horror and otherwise, that the anthology format can tell. And this may turn out to be its greatest test yet.
Note: Every Friday, The A.V. Club, my favorite pop cultural site on the Internet, throws out a question to its staff members for discussion, and I’ve decided that I want to join in on the fun. This week’s topic: “What fictional character embodies your masculine ideal?”
AMC used to stand for American Movie Classics, but over the last few years, it's felt more like an acronym for "antiheroic male character." You've met this man before. He's a direct descendant of Tony Soprano, who owed a great deal in turn to Michael Corleone: a deeply flawed white male who screws up the lives of just about everyone around him, whether out of uncontrollable compulsion, like Don Draper, or icy calculation, like Walter White. Yet he's also enormously attractive. He's great at his job, he knows what he wants and how to get it, and he doesn't play by the rules. It's a reliable formula for an interesting protagonist, except that his underlying motivations are selfish, and everyone else in his life is a means to an end. And the more ruthless he is, the more we respond to him. I'm only four episodes into the current season of House of Cards, but I've already found myself flirting with boredom, because Frank Underwood has lost so much of his evil spark. As much as I enjoy Kevin Spacey's performance, I've never found Frank to be an especially compelling or even coherent character, and without that core of hate and ambition, I'm no longer sure why I'm supposed to be watching him at all.
Ever since Mad Men and Breaking Bad brought the figure of the male antihero to its current heights, we've seen a lot of shows, from Low Winter Sun to Ray Donovan, attempting to replicate that recipe without the same critical success. In itself, this isn't surprising: television has always been about trying to take apart the shows that worked and put the pieces together in a new way. But by fixating on the obvious traits of their antiheroic leads, rather than on deeper qualities of storytelling, the latest round of imitators runs the risk of embodying all the genre's shortcomings and few of its strengths. There's the fact, for instance, that even the best of these shows have problems with their female characters. Mad Men foundered with Betty Draper for much of its middle stretch, to the point where it seemed tempted to write her out entirely, and I never much cared for Skyler on Breaking Bad—not, as some would have it, because I resented her for getting in Walt's way, but because she was shrill and uninteresting. Even True Detective, a minor masterpiece of the form with two unforgettable male leads, couldn't figure out what to do with its women. (The great exception here is Fargo, which offered us a fantastic heroine, even if she felt a little sidelined toward the end.)
Of course, the figure of the antihero is as old as literature itself. It’s only a small step from Hamlet to Edmund or Iago, and the Iliad, which inaugurates nothing less than the entire western tradition, opens by invoking the wrath of Achilles. In many ways, Achilles is the prototype for all protagonists of this kind: he’s a figure of superhuman ability on the battlefield, with a single mythic vulnerability, and he’s willing to let others die as he sulks in his tent out of wounded pride, over a woman who is treated as a spoil in a conflict between men. Achilles stands alone, and he’s defined more by his own fate than by any of his human relationships. (To the extent that other characters are important in our understanding of him, it’s as a series of counterexamples: Achilles is opposed at one point or another to Hector, Odysseus, and Agamemnon, and the fact that he’s contrasted against three such different men only points to how complicated he is.) It’s no wonder that readers tend to feel more sympathy for Hector, who is allowed moments of recognizable tenderness: when he tries to embrace his son Astyanax, who bursts into tears at the sight of his father’s armor and plumed helmet, the result is my favorite passage in all of classical poetry, because it feels so much like an instant captured out of real life and transmitted across the centuries.
Yet Achilles is the hero of the Iliad for a reason; Hector, for all his appeal, isn’t cut out for sustaining an entire poem. An antihero, properly written, can be the engine that drives the whole machine, and in epic poetry, or television, you need one heck of a motor. But a motor isn’t a man, or at least it’s a highly incomplete version of what a man can be. And there’s a very real risk that the choices writers make for the sake of the narrative can shape the way the rest of us think and behave. As Joseph Meeker points out, we tend to glamorize the tragic hero, who causes nothing but suffering to those around him, over the comic hero, who simply muddles through. Fortunately, we have a model both for vivid storytelling and meaningful connection in Achilles’ opposite number. Odysseus isn’t perfect: he engages in dalliances of his own while his wife remains faithful, and his bright ideas lead to the deaths of most of his shipmates. But he’s much closer to a comic than a tragic hero, relying on wit and good timing as much as strength to get home, and his story is like a guided tour of all the things a man can be: king, beggar, father, son, husband, lover, and nobody. We’d live in a happier world if our fictional heroes were more like Odysseus. Or, failing that, I’ll settle for Achilles, as long as he’s more than just a heel.