Alec Nevala-Lee

Thoughts on art, creativity, and the writing life.


Too far to go


Note: Spoilers follow for the third season of Fargo.

A year and a half ago, I wrote on this blog: “Fargo may turn out to be the most important television series on the air today.” Now that the third season has ended, it feels like a good time to revisit that prediction, which turned out to be sort of right, but not for the reasons that I expected. When I typed those words, cracking the problem of the anthology series felt like the puzzle on which the future of television itself depended. We were witnessing an epochal shift of talent, which is still happening, from movies to the small screen, as big names on both sides of the camera began to realize that the creative opportunities it afforded were in many ways greater than what the studios were prepared to offer. I remain convinced that we’re entering an entertainment landscape in which Hollywood focuses almost exclusively on blockbusters, while dramas and quirky smaller films migrate to cable, or even the networks. The anthology series was the obvious crossover point. It could draw big names for a limited run, it allowed stories to be told over the course of ten tight episodes rather than stretched over twenty or more, it lent itself well to being watched in one huge binge, and it offered viewers the prospect of a definitive conclusion. At its best, it felt like an independent movie given the breathing room and resources of an epic. Fargo, its exemplar, became one of the most fascinating dramas on television in large part because it drew its inspiration from one of the most virtuoso experiments with tone in movie history—a triangulation, established by the original film, between politeness, quiet desperation, and sudden violence. It was a technical trick, but a very good one, and it seemed like a machine that could generate stories forever.

After three seasons, I haven’t changed my mind, even if the show’s underlying formula feels more obvious than before. What I’ve begun to realize about Fargo is that it’s an anthology series that treats each season as a kind of miniature anthology, too, with scenes and entire episodes that stand for nothing but themselves. In the first season, the extended sequence about Oliver Platt’s supermarket magnate was a shaggy dog story that didn’t go anywhere, but now, it’s a matter of strategy. The current season was ostensibly focused on the misfortunes of the Stussey brothers, both played with showy brilliance by Ewan McGregor, but it allowed itself so many digressions that the plot became more like a framing device. It opened with a long interrogation scene set in East Germany that was never referenced again, aside from serving as a thematic overture to the whole—although it can’t be an accident that “Stussey” sounds so much like “Stasi.” Later, there was a self-contained flashback episode set in the science fiction and movie worlds of the seventies, including an animated cartoon to dramatize a story by one of the characters, which turned the series into a set of nesting dolls. It often paused to stage the parables told by the loathsome Varga, which were evidently supposed to cast light on the situation, but rarely had anything to do with it. After the icy control of the first season and the visual nervousness of the second, the third season threaded the needle by simultaneously disciplining its look and indulging its penchant for odd byways. Each episode was like a film festival of short subjects, some more successful than others, and unified mostly by creator Noah Hawley’s confidence that we would follow him wherever he went.

Mostly, he was right, although his success rate wasn’t perfect, as it hardly could have been expected to be. There’s no question that between Fargo and Legion, Hawley has been responsible for some of the most arresting television of the last few years, but the strain occasionally shows. The storytelling and character development on Legion were never as interesting as its visual experiments, possibly because a show can innovate along only so many parameters at once. And Fargo has been so good at its quirky components—it rarely gives us a scene that isn’t riveting in itself—that it sometimes loses track of the overall effect. Like its inspiration, it positions itself as based on true events, even though it’s totally fictional, and in theory, this frees it up to indulge in loose ends, coincidences, and a lack of conventional climaxes, just like real life. But I’ve never entirely bought this. The show is obsessively stylized and designed, and it never feels like a story that could take place anywhere but in the fictional Coenverse. At times, Hawley seems to want it both ways. The character of Nikki Swango, played by Mary Elizabeth Winstead, is endlessly intriguing, and I give the show credit for carrying her story through to what feels like a real conclusion, rather than using her suffering as an excuse to motivate a male protagonist. But when she’s gratuitously targeted by the show’s villains, only to survive and turn into an avenging angel, it’s exactly what I wanted to see, and yet I couldn’t really believe a second of it. It’s just as contrived as any number of storylines on more conventional shows, and although the execution is often spellbinding, it has a way of eliding reasonable objections. When it dispatches Nikki at the end with a standard trick of fate, it feels less like a subversion than the kind of narrative beat that the show has taught us to expect, and by now, it’s dangerously close to a cliché.

This is where the anthology format becomes both a blessing and a curse. By tying off each story after ten episodes, Fargo can allow itself to be wilder and more intense than a show that has to focus on the long game, but it also gets to indulge in problematic storytelling devices that wouldn’t stand up to scrutiny if we had to live with these characters for multiple seasons. Even in its current form, there are troubling patterns. Back in the first season, one of my few complaints revolved around the character of Bill Oswalt, who existed largely to foil the resourceful Molly as she got closer to solving the case. Bill wasn’t a bad guy, and the show took pains to explain the reasons for his skepticism, but their scenes together quickly grew monotonous. They occurred like clockwork, once every episode, and instead of building to something, they were theme and variations, constantly retarding the story rather than advancing it. In the third season, incredibly, Fargo does the same thing, but worse, in the form of Chief Moe Dammik, who exists solely to doubt, undermine, and make fun of our hero, Gloria Burgle, and without the benefit of Bill’s underlying sweetness. Maybe the show avoided humanizing Dammik because it didn’t want to present the same character twice—which doesn’t explain why he had to exist at all. He brought the show to a halt every time he appeared, and his dynamic with Gloria would have seemed lazy even on a network procedural. (And it’s a foil, significantly, that the original Fargo didn’t think was necessary.) Hawley and his collaborators are only human, but so are all writers. And if the anthology format allows them to indulge their strengths while keeping their weaknesses from going too far, that may be the strongest argument for it of all.

Written by nevalalee

June 22, 2017 at 8:45 am

Posted in Television


Invitation to look


Note: This post discusses plot elements from last night’s episode of Twin Peaks.

In order to understand the current run of Twin Peaks, it helps to think back to the most characteristic scene from the finale of the second season, which was also the last episode of the show to air for decades. I’m not talking about Cooper in the Black Lodge, or any of the messy, unresolved melodrama that swirled around the other characters, or even the notorious cliffhanger. I mean the scene at Twin Peaks Savings and Loan that lingers interminably on the figure of Dell Mibbler, an ancient, doddering bank manager whom we haven’t seen before and will never see again, as he crosses the floor, in a single unbroken shot, to get a glass of water for Audrey. Even at the time, when the hope of a third season was still alive, many viewers must have found the sequence agonizingly pointless. Later, when it seemed like this was the last glimpse of these characters that we would ever have, it felt even less explicable. With only so many minutes in any given episode, each one starts to seem precious, especially in a series finale, and this scene took up at least two of them. (Now that we’ve finally gotten another season, I’m not sure how it will play in the future, but I suspect that it will feel like what it must have been intended to be—a precarious, unnecessary, but still pretty funny gag.) Anecdotally speaking, for a lot of viewers, the third season is starting to feel like that bank scene played over and over again. In theory, we have plenty of room for digressions, with eighteen hours of television to fill. But as the tangents and apparent dead ends continue to pile up, like the scene last night in which the camera spends a full minute lovingly recording an employee sweeping up at the Bang Bang Bar, it sometimes feels like we’ve been tricked into watching Dell Mibbler: The Return.

Yet this has been David Lynch’s style from the beginning. Lynch directed only a few hours of the initial run of Twin Peaks, but his work, particularly on the pilot, laid down a template that other writers and directors did their best to follow. And many of the show’s iconic images—the streetlight at the corner where Laura was last seen, the waterfall, the fir trees blowing in the wind—consist of silent shots that are held for slightly longer than the viewer would expect. One of the oddly endearing things about the original series was how such eerie moments were intercut with scenes that, for all their quirkiness, were staged, shot, and edited more or less like any other network drama. The new season hasn’t offered many such respites, which is part of why it still feels like it’s keeping itself at arm’s length from its own history. For better or worse, Lynch doesn’t have to compromise here. (Last night’s episode was perhaps the season’s most plot-heavy installment to date, and it devoted maybe ten minutes to advancing the story.) Instead, Lynch is continuing to educate us, as he’s done erratically throughout his career, on how to slow down and pay attention. Not all of his movies unfold at the same meditative pace: Blue Velvet moves like a thriller, in part because of the circumstances of its editing, and Wild at Heart seems like an attempt, mostly unsuccessful, to sustain that level of frantic motion for the film’s entire length. But when we think back to the scenes from his work that we remember most vividly, they tend to be static shots that are held so long that they burn themselves into our imagination. And as movies and television shows become more anxious to keep the viewer’s interest from straying for even a second, Twin Peaks remains an invitation to look and contemplate.

It also invites us to listen, and while much of Lynch’s fascination with stillness comes from his background as a painter, it also emerges from his interest in sound. Lynch is credited as a sound designer on Twin Peaks, as he has been for most of his movies, and the show is suffused with what you might call the standard-issue Lynchian noise—a low, barely perceptible hum of static that occasionally rises to an oceanic roar. (In last night’s episode, Benjamin Horne and the character played by Ashley Judd try vainly to pin down the source of a similar hum at the Great Northern, and while it might eventually lead somewhere, it also feels like a subtle joke at Lynch’s own expense.) The sound is often associated with electronic or recording equipment, like the video cameras that are trained on the glass cube in the season premiere. My favorite instance is in Blue Velvet, when Jeffrey stumbles across the tableau of two victims in Dorothy’s apartment, one with his ear cut off, the other still standing with his brains shot out. There’s a hum coming from the shattered television set, and it’s pitched at so low a level that it’s almost subliminal, except to imperceptibly increase our anxiety. You only really become aware of it when it stops, after Jeffrey closes the door behind him and, a little later, when Frank shoots out the television tube. But you can’t hear it at all unless everything else onscreen is deathly quiet. It emerges from stillness, as if it were a form of background noise that surrounds us all the time, but is only audible when the rest of the world fades away. I don’t know whether Lynch’s fascination with this kind of sound effect came out of his interest in stillness or the other way around, and the most plausible explanation is that it all arose from the same place. But you could build a convincing reading of his career around the two meanings of the word “static.”

Taken together, the visual and auditory elements invite us to look on in silence, which may be a reflection of Lynch’s art school background. (I don’t know if Lynch was directly influenced by Marcel Duchamp’s Étant Donnés, a work of art that obsessed me so much that I wrote an entire novel about it, but they both ask us to stand and contemplate the inexplicable without speaking. And when you see the installation in person at the Philadelphia Museum of Art, as I’ve done twice, the memory is inevitably entwined with the low hum of the room’s climate control system.) By extending this state of narrative suspension to the breaking point, Twin Peaks is pushing in a direction that even the most innovative prestige dramas have mostly avoided, and it still fascinates me. The real question is when and how the silence will be broken. Lynch’s great hallmark is his use of juxtaposition, not just of light and dark, which horrified Roger Ebert so much in Blue Velvet, but of silence and sudden, violent action. We’ve already seen hints of this so far in Twin Peaks, particularly in the scenes involving the murderous Ike the Spike, who seems to be playing the same role, at random intervals, that a figure of similarly small stature did at the end of Don’t Look Now. And I have a feeling that the real payoff is yet to come. This might sound like the wishful thinking of a viewer who is waiting for the show’s teasing hints to lead somewhere, but it’s central to Lynch’s method, in which silence and stillness are most effective when framed by noise and movement. The shot of the two bodies in Dorothy’s apartment leads directly into the most dramatically satisfying—and, let it be said, most conventional—climax of Lynch’s career. And remember Dell Mibbler? At the end of the scene, the bank blows up.

Written by nevalalee

June 19, 2017 at 9:06 am

Moving through time


Note: Spoilers follow for last night’s episode of Twin Peaks.

For all the debate over how best to watch a television show these days, which you see argued with various degrees of seriousness, the options that you’re offered are fairly predictable. If it’s a show on a streaming platform, you’re presented with all of it at once; if it’s on a cable or broadcast channel, you’re not. Between those two extremes, you’re allowed to structure your viewing experience pretty much however you like, and it isn’t just a matter of binging the whole season or parceling out each episode one week at a time. Few of us past the age of thirty have the ability or desire to watch ten hours of anything in one sitting, and the days of slavish faithfulness to an appointment show are ending, too—even if you aren’t recording it on DVR, you can usually watch it online the next day. Viewers are customizing their engagement with a series in ways that would have been unthinkable just fifteen years ago, and networks are experimenting with having it both ways, by airing shows on a weekly basis while simultaneously making the whole season available online. If there’s pushback, it tends to be from creators who are used to having their shows come out sequentially, like Dan Harmon, who managed to get Yahoo to release the sixth season of Community one episode at a time, as if it were still airing on Thursdays at eight. (Yahoo also buried the show on its site so that even fans had trouble figuring out that it was there, but that’s another story, as well as a reminder, in case we needed one, that such decisions aren’t always logical or considered.)

Twin Peaks, for reasons that I’ll discuss in a moment, doesn’t clearly lend itself to one approach or another, which may be why its launch was so muddled. Showtime premiered the first two hours on a Sunday evening, then quietly made the next two episodes available online, although this was so indifferently publicized that it took me a while to hear about it. It then ran episodes three and four yet again the following week, despite the fact that many of the show’s hardcore fans—and there’s hardly anyone else watching—would have seen them already, only to finally settle into the weekly delivery schedule that David Lynch had wanted in the first place. As a result, it stumbled a bit out of the gate, at least as far as shaping a wider conversation was concerned. You weren’t really sure who was watching those episodes or when. (To be fair, in the absence of blockbuster ratings, viewers watching at different times are what justify this show’s existence.) As I’ve argued elsewhere, this isn’t a series that necessarily benefits from collective analysis, but there’s a real, if less tangible, emotional benefit to be had from collective puzzlement. It’s the understanding that a lot of other people are feeling the same things that you are, at roughly the same time, and that you have more in common with them than you will with anybody else in the world. I’m overstating it, but only a little. Whenever I meet someone who bought Julee Cruise’s first album or knows why Lil was wearing a sour face, I feel like I’ve found a kindred spirit. Twin Peaks started out as a huge cultural phenomenon, dwindling only gradually into a cult show that provided its adherents with their own set of passwords. And I think that it would have had a better chance of happening again now if Showtime had just aired all the episodes once a week from the beginning.

Yet I understand the network’s confusion, because this is both a show that needs to be seen over a period of time and one that can’t be analyzed until we’ve seen the full picture. Reviewing it must be frustrating. Writing about it here, I don’t need to go into much detail, and I’m free to let my thoughts wander wherever they will, but a site like the New York Times or The A.V. Club carries its own burden of expectations, which may not make sense for a show like this. A “recap” of an episode of Twin Peaks is almost a contradiction in terms. You can’t do much more than catalog the disconnected scenes, indulge in some desultory theorizing, and remind readers that they shouldn’t jump to any conclusions until they’ve seen more. It’s like reviewing Mulholland Drive ten minutes at a time—which is ridiculous, but it’s also exactly the position in which countless critics have found themselves. For ordinary viewers, there’s something alluring about the constant suspension of judgment that it requires: I’ve found it as absorbing as any television series I’ve seen in years. Despite its meditative pacing, an episode seems to go by more quickly than most installments of a more conventional show, even the likes of Fargo or Legion, which are clearly drawing from the same pool of ideas. (Noah Hawley is only the latest creator and showrunner to try to deploy the tone of Twin Peaks in more recognizable stories, and while he’s better at it than most, it doesn’t make the effort any less thankless.) But it also hamstrings the online critic, who has no choice but to publish a weekly first draft on the way to a more reasoned evaluation. Everything you write about Twin Peaks, even, or especially, if you love it, is bound to be provisional until you can look at it as a whole.

Still, there probably is a best way to watch Twin Peaks, which happens to be the way in which I first saw it. You stumble across it years after it originally aired, in bits and pieces, and with a sense that you’re the only person you know who is encountering it in quite this way. A decade from now, my daughter, or someone like her, will discover this show in whatever format happens to be dominant, and she’ll watch it alone. (I also suspect that she’ll view it after having internalized the soundtrack, which doesn’t even exist yet in this timeline.) It will deprive her, inevitably, of a few instants of shared bewilderment or revelation that can only occur when you’re watching a show on its first airing. When Albert Rosenfeld addresses the woman in the bar as Diane, and she turns around to reveal Laura Dern in a blonde wig, it’s as thrilling a moment as I’ve felt watching television in a long time—and by the way Lynch stages it, it’s clear that he knows it, too. My daughter won’t experience this. But there’s also something to be said for catching up with a show that meant a lot to people a long time ago, with your excitement tinged with a melancholy that you’re too late to have been a part of it. I frankly don’t know how often I’ll go back to watch this season again, any more than I’m inclined to sit through Inland Empire, which I loved, a second time. But I’m oddly consoled by the knowledge that it will continue to exist and mean a lot to future viewers after the finale airs, which isn’t something that you could take for granted if you were watching the first two seasons in the early nineties. And it makes this particular moment seem all the more precious, since it’s the last time that we’ll be able to watch Twin Peaks without any idea of where it might be going.

Written by nevalalee

June 12, 2017 at 9:07 am

Live from Twin Peaks


What does Twin Peaks look like without Agent Cooper? It was a problem that David Lynch and his writing team were forced to solve for Fire Walk With Me, when Kyle MacLachlan declined to come back for much more than a token appearance, and now, in the show’s third season, Lynch and Mark Frost seem determined to tackle the question yet again, even though they’ve given their leading man more screen time than anyone could ever want. MacLachlan’s name is the first thing that we see in the closing credits, in large type, to the point where it’s starting to feel like a weekly punchline—it’s the only way that we’d ever know that the episode was over. He’s undoubtedly the star of the show. Yet even as we’re treated to an abundance of Dark Cooper and Dougie Jones, we’re still waiting to see the one character that I, and a lot of other fans, have been anticipating the most impatiently. Dale Cooper, it’s fair to say, is one of the most peculiar protagonists in television history. As the archetypal outsider coming into an isolated town to investigate a murder, he seems at first like a natural surrogate for the audience, but, if anything, he’s quirkier and stranger than many of the locals he encounters. When we first meet Cooper, he comes across as an almost unplayable combination of personal fastidiousness, superhuman deductive skills, and childlike wonder. But if you’re anything like me, you wanted to be like him. I ordered my coffee black for years. And if he stood for the rest of us, it was as a representative of the notion, which crumbles in the face of logic but remains emotionally inescapable, that the town of Twin Peaks would somehow be a wonderful place to live, despite all evidence to the contrary.

In the third season, this version of Cooper, whom I’ve been waiting for a quarter of a century to see again, is nowhere in sight. And the buildup to his return, which I still trust will happen sooner or later, has been so teasingly long that it can hardly be anything but a conscious artistic choice. With every moment of recognition—the taste of coffee, the statue of the gunfighter in the plaza—we hope that the old Cooper will suddenly reappear, but the light in his eyes always fades. On some level, Lynch and Frost are clearly having fun with how long they can get away with this, but by removing the keystone of the original series, they’re also leaving us with some fascinating insights into what kind of show this has been from the very beginning. Let’s tick off its qualities one by one. Over the course of any given episode, it cuts between what seems like about a dozen loosely related plotlines. Most of the scenes last between two and four minutes, with about the same number of characters, and the components are too far removed from one another to provide anything in the way of narrative momentum. They aren’t built around any obligation to advance the plot, but around striking images or odd visual or verbal gags. The payoff, as in the case of Dr. Jacoby’s golden shovels, often doesn’t come for hours, and when it does, it amounts to the end of a shaggy dog story. (The closest thing we’ve had so far to a complete sequence is the sad case of Sam, Tracey, and the glass cube, which didn’t even make it past the premiere.) If there’s a pattern, it isn’t visible, but the result is still strangely absorbing, as long as you don’t approach it as a conventional drama but as something more like Twenty-Two Short Films About Twin Peaks.

You know what this sounds like to me? It sounds like a sketch comedy show. I’ve always seen Twin Peaks as a key element in a series of dramas that stretches from The X-Files through Mad Men, but you could make an equally strong case for it as part of a tradition that runs from SCTV to Portlandia, which went so far as to cast MacLachlan as its mayor. Both are set in a particular location with a consistent cast of characters, but they’re essentially sketch comedies, and when one scene is over, they simply cut to the next. In some ways, the use of a fixed setting is a partial solution to the problem of transitions, which shows from Monty Python onward have struggled to address, but it also creates a beguiling sense of encounters taking place beyond the edges of the frame. (Matt Groening has pointed to SCTV as an inspiration for The Simpsons, with its use of a small town in which the characters were always running into one another. Groening, let’s not forget, was born in Portland, just two hours away from Springfield, which raises the intriguing question of why such shows are so drawn to the atmosphere of the Pacific Northwest.) Without Cooper, the show’s affinities to sketch comedy are far more obvious—and this isn’t the first time this has happened. After Laura’s murderer was revealed in the second season, the show seemed to lose direction, and many of the subplots, like James’s interminable storyline with Evelyn, became proverbial for their pointlessness. But in retrospect, that arid middle stretch starts to look a lot like an unsuccessful sketch comedy series. And it’s worth remembering that Lynch and Frost originally hoped to keep the identity of the killer a secret forever, knowing that it was all that was holding together the rest.

In the absence of a connective thread, it takes a genius to make this kind of thing work, and the lack of a controlling hand is a big part of what made the second season so markedly unsuccessful. Fortunately, the third season has a genius readily available. The sketch format has always been David Lynch’s comfort zone, a fact that has been obscured by contingent factors in his long career. Lynch, who was trained as a painter and conceptual artist, thinks naturally in small narrative units, like the video installations that we glimpse for a second as we wander between rooms in a museum. Eraserhead is basically a bunch of sketches linked by its titular character, and he returned to that structure in Inland Empire, which, thanks to the cheapness of digital video, was the first movie in decades that he was able to make entirely on his own terms. In between, the inclination was present but constrained, sometimes for the better. In its original cut of three hours, Blue Velvet would have played much the same way, but in paring it down to its contractually mandated runtime, Lynch and editor Duwayne Dunham ended up focusing entirely on its backbone as a thriller. (It’s an exact parallel to Annie Hall, which began as a three-hour series of sketches called Anhedonia that assumed its current form after Woody Allen and Ralph Rosenblum threw out everything that wasn’t a romantic comedy.) Most interesting of all is Mulholland Drive, which was originally shot as a television pilot, with fragmented scenes that were clearly supposed to lead to storylines of their own. When Lynch recut it into a movie, they became aspects of Betty’s dream, which may have been closer to what he wanted in the first place. And in the third season of Twin Peaks, it is happening again.

Lotion in two dimensions


A few days ago, Christina Colizza of The Hairpin published a sharp, funny article on the phenomenon of “night lotion,” as documented by the Instagram feed of the same name. It’s the curiously prevalent image in movies and television of a character, invariably a woman, who is shown casually moisturizing over the course of an unrelated scene, usually during a bedroom conversation or argument with her husband. The account, which is maintained by Beth Wawerna of the band Bird of Youth, offers up relevant screenshots of the likes of Piper Laurie from Twin Peaks, Claire Foy from The Crown, and Anna Gunn from Breaking Bad, and although I’d never consciously noticed it before, it now seems like something I’ve been seeing without fully processing it for years. Colizza contends that this convention is entirely fictional: “I don’t know anyone who does this.” Wawerna concurs, saying that she asked multiple women if they ever spoke to their spouses for five minutes while applying lotion at bedtime, and they all said no. To be fair, though, it isn’t entirely nonexistent. My wife has kindly consented to issue the following statement:

I am a skincaretainment enthusiast and take great pleasure in my elaborate skincare routines (one for morning and one for nighttime, naturally). Before going to bed at night, I stand in front of the full-length mirror and talk to Alec while applying a succession of products to my face and neck in a very precise order. It is one of my favorite parts of the day. Wintertime is especially good for this routine because I use more products on my face and also put lotion on my feet after climbing into bed. This gives me even more time to tell Alec inane stories about my day while he tries to read a book.

Maybe this means that I’m living in a prestige cable drama without knowing it—but it also implies that this behavior isn’t entirely without parallels in real life. Still, it seems to appear with disproportionate frequency in film and television, and it’s worth asking why. Colizza and Wawerna make a few cheerful stabs at unpacking it, although they’re the first to admit that they raise more questions than they answer. “It’s obviously some sort of crutch,” Wawerna says. “I don’t know if it’s true and I need to do actual research to figure this out, but are most of those scenes written by men?” Colizza adds:

This is neither death wish on buying Jergens in bulk, nor a critique on moisturizing; we all need a bit of softness in our lives. The problem here is that the lotion, whether sensually applied or rubbed vigorously, is a visual distraction during moments of potential character development and depth. “Is there anything else a woman can do?” Wawerna asks me in giddy exasperation. “Can we just sit with this woman, who’s clearly having a moment with herself, or going through something?”

Elsewhere, Colizza helpfully classifies the two most common contexts for the trope, which tends to occur at a pivot point in the narrative: “This moonlit ritual is either a woman alone having a moment before bed. Or a woman tearing her hubby a new one.” And I think that she gets very close to the solution when she wonders whether television “demands some physicality on screen at all times, especially so if it can help convey a basic emotion.”

This strikes me as right on target, with one slight modification: it isn’t the medium reaching back to impose a physical action on the performer, but the actor introducing a technical device into a scene. Actors are always looking for something to do with their hands. This notion of “stage business” is an established point of craft, but it can also have unpredictable effects on viewers. The great example here, of course, is smoking. If we see so much of it in Hollywood, it isn’t because the studios are determined to glamorize tobacco use or in the pocket of the cigarette companies, but because smoking is a pragmatic performative tool. A cigarette gives actors a wide range of ways to emphasize lines or emotional beats: they can remove it from the pack, light it, peer through the smoke, exhale, study the ember, and grind it out to underline a moment. (One beautiful illustration is the last shot of The Third Man, with what Roger Ebert once described as “the perfect emotional parabola described as [Joseph] Cotten throws away his cigarette.” Revealingly, this was an actor’s choice: Carol Reed kept the camera rolling for an uncomfortably long time after Alida Valli exited the frame, and Cotten lit up just to have something to do.) In terms of providing useful tidbits of actorly behavior, nothing comes close to smoking, and until recently, I would have said that no comparable bit of business has managed to take its place. But that’s just what night lotion is. Its advantages are obvious. It requires nothing but a commonplace prop that isn’t likely to draw attention to itself, and it offers a small anthology of possible motions that can be integrated in countless ways with the delivery of dialogue. Unlike smoking, it’s constrained by the fact that it naturally lives in the bedroom, but this isn’t really a limitation, since this is where many interior scenes take place anyway.

And when you look at the instances that Wawerna has collected, you find that they nearly all occur at narrative moments in which a character of an earlier era would have unthinkingly lit up a cigarette. What’s really funny is how a technical solution to an acting problem can acquire coded meanings and externalities—it’s literally an emergent property. The movies may not have meant to encourage smoking, but they unquestionably did, and it isn’t unreasonable to say that people died because Humphrey Bogart needed something to do while he said his lines. (The fact that Bogart smoked a lot offscreen doesn’t necessarily invalidate this argument. These choices are often informed by an actor’s personality, and I assume that television stars are more likely to take an interest in their skin care than the rest of us.) The message carried by lotion is far more innocuous: as Colizza notes, we could all stand to moisturize more. In practice, though, it’s such a gendered convention that it trails along all kinds of unanticipated baggage involving female beauty and body image, at least when you see so many examples in a row. Taken in isolation, it’s invisible enough, except to exceptionally observant viewers, that it doesn’t seem likely to disappear anytime soon. In the past, I’ve compared stumbling across a useful writing technique to discovering a new industrial process, and a convention like night lotion, which can be incorporated intact into any number of dramatic situations, is a writer and actor’s dream. Not surprisingly, it ends up being used to death until it becomes a cliché. Unlike such conventions as the cinematic baguette, it isn’t designed to save time or convey information, but to serve as what Wawerna accurately describes as a crutch for the performer. Like the cigarette, or the anonymous decanter of brown liquor in the sideboard that played a similar role in so many old movies, it seems destined to remain in the repertoire, even if its meaning is only skin deep.

Written by nevalalee

June 2, 2017 at 9:39 am

The space between us all


In an interview published in the July 12, 1970 issue of Rolling Stone, the rock star David Crosby said: “My time has gotta be devoted to my highest priority projects, which starts with tryin’ to save the human race and then works its way down from there.” The journalist Ben Fong-Torres prompted him gently: “But through your music, if you affect the people you come in contact with in public, that’s your way of saving the human race.” And I’ve never forgotten Crosby’s response:

But somehow operating on that premise for the last couple of years hasn’t done it, see? Somehow Sgt. Pepper’s did not stop the Vietnam War. Somehow it didn’t work. Somebody isn’t listening. I ain’t saying stop trying; I know we’re doing the right thing to live, full on. Get it on and do it good. But the inertia we’re up against, I think everybody’s kind of underestimated it. I would’ve thought Sgt. Pepper’s could’ve stopped the war just by putting too many good vibes in the air for anybody to have a war around.

He was right about one thing—the Beatles didn’t stop the war. And while it might seem as if there’s nothing new left to say about Sgt. Pepper’s Lonely Hearts Club Band, which celebrates its fiftieth anniversary today, it’s worth asking what it tells us about the inability of even our greatest works of art to inspire lasting change. It’s probably ridiculous to ask this of any album. But if a test case exists, it’s here.

It seems fair to say that if any piece of music could have changed the world, it would have been Sgt. Pepper. As the academic Langdon Winner famously wrote:

The closest Western Civilization has come to unity since the Congress of Vienna in 1815 was the week the Sgt. Pepper album was released…At the time I happened to be driving across the country on Interstate 80. In each city where I stopped for gas or food—Laramie, Ogallala, Moline, South Bend—the melodies wafted in from some far-off transistor radio or portable hi-fi. It was the most amazing thing I’ve ever heard. For a brief while, the irreparably fragmented consciousness of the West was unified, at least in the minds of the young.

The crucial qualifier, of course, is “at least in the minds of the young,” which we’ll revisit later. To the critic Michael Bérubé, it was nothing less than the one week in which there was “a common culture of widely shared values and knowledge in the United States at any point between 1956 and 1976,” which seems to undervalue the moon landing, but never mind. Yet even this transient unity is more apparent than real. By the end of the sixties, the album had sold about three million copies in America alone. It’s a huge number, but even if you multiply it by ten to include those who were profoundly affected by it on the radio or on a friend’s record player, you end up with a tiny fraction of the population. To put it another way, three times as many people voted for George Wallace for president as bought a copy of Sgt. Pepper in those years.

But that’s just how it is. Even our most inescapable works of art seem to fade into insignificance when you consider the sheer number of human lives involved, in which even an apparently ubiquitous phenomenon is statistically unable to reach a majority of adults. (Fewer than one in three Americans paid to see The Force Awakens in theaters, which is as close as we’ve come in recent memory to total cultural saturation.) The art that feels axiomatic to us barely touches the lives of others, and it may leave only the faintest of marks on those who listen to it closely. The Beatles undoubtedly changed lives, but they were more likely to catalyze impulses that were already there, providing a shape and direction for what might otherwise have remained unexpressed. As Roger Ebert wrote in his retrospective review of A Hard Day’s Night:

The film was so influential in its androgynous imagery that untold thousands of young men walked into the theater with short haircuts, and their hair started growing during the movie and didn’t get cut again until the 1970s.

We shouldn’t underestimate this. But if you were eighteen when A Hard Day’s Night came out, it also means that you were born the same year as Donald Trump, who decisively won voters who were old enough to buy Sgt. Pepper on its initial release. Even if you took its message to heart, there’s a difference between the kind of change that marshals you the way that you were going and the sort that realigns society as a whole. It just isn’t what art is built to do. As David Thomson writes in Rosebud, alluding to Trump’s favorite movie: “The world is very large and the greatest films so small.”

If Sgt. Pepper failed to get us out of Vietnam, it was partially because those who were most deeply moved by it were more likely to be drafted and shipped overseas than to affect the policies of their own country. As Winner says, it united our consciousness, “at least in the minds of the young,” but all the while, the old men, as George McGovern put it, were dreaming up wars for young men to die in. But it may not have mattered. Wars are the result of forces that care nothing for what art has to say, and their operations are often indistinguishable from random chance. Sgt. Pepper may well have been “a decisive moment in the history of Western civilization,” as Kenneth Tynan hyperbolically claimed, but as Harold Bloom reminds us in The Western Canon:

Reading the very best writers—let us say Homer, Dante, Shakespeare, Tolstoy—is not going to make us better citizens. Art is perfectly useless, according to the sublime Oscar Wilde, who was right about everything.

Great works of art exist despite, not because of, the impersonal machine of history. It’s only fitting that the anniversary of Sgt. Pepper happens to coincide with a day on which our civilization’s response to climate change will be decided in a public ceremony with overtones of reality television—a more authentic reflection of our culture, as well as a more profound moment of global unity, willing or otherwise. If the opinions of rock stars or novelists counted for anything, we’d be in a very different situation right now. In “Within You Without You,” George Harrison laments “the people who gain the world and lose their soul,” which neatly elides the accurate observation that they, not the artists, are the ones who do in fact tend to gain the world. (They’re also “the people who hide themselves behind a wall.”) All that art can provide is private consolation, and joy, and the reminder that there are times when we just have to laugh, even when the news is rather sad.

The faults in our stars


In his wonderful conversational autobiography Cavett, Dick Cavett is asked about his relationship with Johnny Carson, for whom he served as a writer on The Tonight Show. Cavett replies:

I did work for Carson. We didn’t go fishing together on weekends, and I never slept over at his house, the two of us lying awake in our jammies eating the fudge we had made together, talking of our dreams and hopes and fears. But I found him to be cordial and businesslike, and to have himself well in hand as far as the show was concerned…He is not a man who seems to seek close buddies, and, if he were, the staff of his own television show would not be the ideal place to seek them.

It’s a memorable passage, especially the last line, which feels particularly relevant at a time when our talk show hosts are eager to seem accessible to everybody, and to depict their writing staffs as one big happy family. When asked to comment on the widespread notion that Carson was “cold,” Cavett responds:

I know very little about Johnny’s personal relationships. I have heard that he has been manipulated and screwed more than once by trusted associates, to the point where he is defensively wary to what some find an excessive degree. I see this as a perfectly reasonable response. It is, I suppose, the sort of thing that happens to a person in show business that makes his former friends say, with heavy disapprobation, “Boy, has he changed.”

Cavett could easily let the subject rest there, but something in the question seems to stick in his mind, and he continues:

While I’m at it, I’ll do a short cadenza on the subject of changing. If you are going to survive in show business, the chances are you are going to change or be changed. Whatever your reasons for going into the business, it is safe to admit they form a mixture of talent, ambition, and neurosis. If you are going to succeed and remain successful, you are going to do it at the expense of a number of people who are clamoring to climb the same rope you are climbing. When you suddenly acquire money, hangers-on, well-wishers, and ill-wishers; when you need to make baffling decisions quickly, to do too much in too little time, to try to lead a personal and a professional life when you can’t seem to find the time for either; when you have to kick some people’s fannies and kiss others’ to get to the point where you won’t need to do either any more; when you have to sort out conflicting advice, distinguish between the treacherous and the faithful or the competent and the merely aggressive, suffer fools when time is short and incompetents when you are in a pinch; and when you add to this the one thing that you don’t get in other professions—the need to be constantly fresh and presentable and at your best just at the times when you are bone-weary, snappish, and depressed; when all these things apply, it is possible that you are going to be altered, changed, and sometimes for the worse.

This is one of the best things I’ve ever read about show business, and if anything, it feels even more insightful today, when we collectively have so much invested in the idea that stars have inner lives that are more or less like our own.

It’s often been said that the reason that television actors have trouble crossing over to the movies is that we expect different things from our stars in either medium. One requires a personality that is larger than life, which allows it to survive being projected onto an enormous screen in a darkened theater; the other calls for a person whom we’d feel comfortable inviting into our living rooms on a regular basis. If that’s true of scripted television that airs once a week, it’s even more true of the talk shows that we’re expected to watch every night. And now that the online content created by such series has become so central to their success, we’re rapidly approaching this trend’s logical culmination: a talk show host has to be someone whose face we’d be comfortable seeing anywhere, at any time. This doesn’t just apply to television, either. As social media is increasingly called upon to supplement the marketing budgets of big movies, actors are obliged to make themselves accessible—on Twitter, on Instagram, as good sports on Saturday Night Live and in viral videos—to an extent that a star of the old studio system of the forties would have found utterly baffling. Deadline’s writeup of Alien: Covenant is typical:

RelishMix…assessed that Alien: Covenant has a strong social media universe…spread across Twitter, Facebook, YouTube views and Instagram followers…The company also adds that Covenant was challenged by a generally inactive cast, with Empire’s Jussie Smollett being the most popular activated star. Danny McBride across Twitter, Instagram and Facebook counts over 250,000. Michael Fassbender is not socially active.

I love the implication that stars these days need to be “activated,” like cell phones, to be fully functional, as well as the tone of disapproval at the fact that Michael Fassbender isn’t socially active. It’s hard to imagine how that would even look: Fassbender’s appeal as an actor emerges largely from his slight sense of reserve, even in showy parts. But in today’s climate, you could also argue that this has hampered his rise as a star.

And Cavett’s cadenza on change gets at an inherent tension in the way we see our stars, which may not be entirely sustainable. In The Way of the Gun, written and directed by Christopher McQuarrie, who knows more than anyone about survival in Hollywood, the character played by James Caan says: “The only thing you can assume about a broken-down old man is that he’s a survivor.” Similarly, the only thing you can assume about a movie star, talk show host, or any other figure in show business whose face you recognize is that he or she possesses superhuman levels of ambition. Luck obviously plays a large role in success, as does talent, but both require a preternatural drive, which is the matrix in which giftedness and good fortune have a chance to do their work. Ambition may not be sufficient, but it’s certainly necessary. Yet we persist in believing that stars are accessible and ordinary, when, by definition, they can hardly be other than extraordinary. It’s a myth that emerges from the structural assumptions of social media, a potent advertising tool that demands a kind of perceptual leveling to be effective. I was recently delighted to learn that the notorious feature “Stars—They’re Just Like Us!” originated when the editors at Us Magazine had to figure out how to use the cheap paparazzi shots that they could afford to buy on their tiny budget, like a picture of Drew Barrymore picking up a penny. Social media works in much the same way. It creates an illusion of intimacy that is as false as the airbrushed images of the movie stars of Hollywood’s golden age, and it deprives us of some of the distance required for dreams. Whether or not they want to admit it, stars, unlike the rich, truly are different. And I’ll let Cavett have the last word:

Unless you are one of these serene, saintly individuals about whom it can be truly said, “He or she hasn’t changed one bit from the day I knew them in the old house at Elm Street.” This is true mostly of those who have found others to do their dirty work for them. All I’m saying is that your demands and needs change, and if you don’t change with them you don’t survive.

Written by nevalalee

May 24, 2017 at 9:53 am

Quote of the Day


The word “show” suggests that you’re revealing something. It doesn’t suggest finding. And because I do what I do every day, I have to make sure that the showing of things is in itself the seeking for things.

Es Devlin, on the television series Abstract

Written by nevalalee

May 24, 2017 at 7:30 am

Posted in Quote of the Day, Theater


The darkness of future past


Note: Spoilers follow for the first two episodes of the third season of Twin Peaks.

“Is it future, or is it past?” Mike, the one-armed man, asks Cooper in the Black Lodge. During the premiere of the belated third season of Twin Peaks, there are times when it seems to be both at once. We often seem to be in familiar territory, and the twinge of recognition that it provokes has a way of alerting us to aspects of the original that we may have overlooked. When two new characters, played appealingly—and altogether too briefly—by Ben Rosenfield and Madeline Zima, engage in an oddly uninflected conversation, it’s a reminder of the appealingly flat tone that David Lynch likes to elicit from his actors, who sometimes seem to be reading their lines phonetically, like the kids in a Peanuts cartoon. It isn’t bad or amateurish acting, but an indication that even the performers aren’t entirely sure what they’re doing there. In recent years, accomplished imitators from Fargo to Legion have drawn on Lynch’s style, but they’re fully conscious of it, and we’re aware of the technical trickery of such players as Ewan McGregor or Dan Stevens. In Lynch’s best works, there’s never a sense that anyone involved is standing above or apart from the material. (The major exceptions are Dennis Hopper and Dean Stockwell in Blue Velvet, who disrupt the proceedings with their own brand of strangeness, and, eerily, Robert Blake in Lost Highway.) The show’s original cast included a few artful performers, notably Ray Wise and the late Miguel Ferrer, but most of the actors were endearingly unaffected. They were innocents. And innocence is a quality that we haven’t seen on television in a long time.

Yet it doesn’t take long to realize that some things have also changed. There’s the heightened level of sex and gore, which reflects the same kind of liberation from the standards of network television that made parts of Fire Walk With Me so difficult to watch. (I’d be tempted to observe that its violence against women is airing at a moment in which such scenes are likely to be intensely scrutinized, if it weren’t for the fact that Lynch has been making people uncomfortable in that regard for over thirty years.) The show is also premiering in an era in which every aspect of it will inevitably be picked apart in real time on social media, which strikes me as a diminished way of experiencing it. Its initial run obviously prompted plenty of theorizing around the nation’s water coolers, but if there’s anything that Twin Peaks has taught us, it’s that the clues are not what they seem. Lynch is a director who starts with a handful of intuitive images that are potent in themselves—an empty glass cube, a severed head, a talking tree. You could call them dreamlike, or the fruits of the unconscious, or the products, to use a slightly dated term, of the right hemisphere of the brain. Later on, the left hemisphere, which is widely but misleadingly associated with Lynch’s collaborator Mark Frost, circles back and tries to impose meaning on those symbols, but these readings are never entirely convincing. Decades ago, when the show tried to turn Cooper’s dream of the Black Lodge into a rebus for the killer’s identity, you could sense that it was straining. There isn’t always a deeper answer to be found, aside from the power of those pictures, which should be deep enough in itself.

As a result, I expect to avoid reading most reviews or analysis, at least until the season is over. Elements that seem inexplicable now may or may not pay off, but the series deserves the benefit of the doubt. This isn’t to say that what we’ve seen so far has been perfect: Twin Peaks, whatever else it may have been, was never a flawless show. Kyle MacLachlan has been as important to my inner life as any actor, but I’m not sure whether he has the range to convincingly portray Dark Cooper. He’s peerless when it comes to serving as the director’s surrogate, or a guileless ego wandering through the wilderness of the id, but he isn’t Dennis Hopper, and much of this material might have been better left to implication. Similarly, the new sequences in the Black Lodge are striking—and I’ve been waiting for them for what feels like my entire life—but they’re also allowed to run for too long. Those original scenes were so memorable that it’s easy to forget that they accounted for maybe twenty minutes, stretched across two seasons, and that imagination filled in the rest. (A screenshot of Cooper seated with the Man from Another Place was the desktop image on my computer for most of college.) If anything, the show seems almost too eager to give us more of Cooper in those iconic surroundings, and half as much would have gone a long way. In the finale of the second season, when Cooper stepped through those red curtains at last, it felt like the culmination of everything that the series had promised. Now it feels like a set where we have to linger for a while longer before the real story can begin. It’s exactly what the Man from Another Place once called it: the waiting room.

Lynch and Frost seem to be reveling in the breathing space and creative freedom that eighteen full hours on Showtime can afford, and they’ve certainly earned that right. But as I’ve noted elsewhere, Twin Peaks may have benefited from the constraints that a broadcast network imposed, just as Wild at Heart strikes me as one of the few films to have been notably improved by being edited for television. When Lynch made Blue Velvet, he and editor Duwayne Dunham, who is also editing the new season, were forced to cut the original version to the bone to meet their contractually mandated runtime, and the result was the best American movie I’ve ever seen. Lynch’s most memorable work has been forced to work within similar limitations, and I’m curious to see how it turns out when most of those barriers are removed. (I still haven’t seen any of the hours of additional footage that were recently released from Fire Walk With Me, but I wish now that I’d taken the trouble to seek them out. The prospect of viewing those lost scenes is less exciting, now that we’re being given the equivalent of a sequel that will be allowed to run for as long as it likes.) In the end, though, these are minor quibbles. When I look back at the first two seasons of Twin Peaks, I’m startled to realize how little of it I remember: it comes to about three hours of unforgettable images, mostly from the episodes directed by Lynch. If the first two episodes of the new run are any indication, it’s likely to at least double that number, which makes it a good deal by any standard. Twin Peaks played a pivotal role in my own past. And I still can’t entirely believe that it’s going to be part of my future, too.

Written by nevalalee

May 23, 2017 at 10:32 am

The voice of love

leave a comment »

Industrial Symphony No. 1

Note: I can’t wait to write about the return of Twin Peaks, which already feels like the television event of my lifetime, but I won’t be able to get to it until tomorrow. In the meantime, I’m reposting my piece on the show’s indelible score, which originally appeared, in a slightly different form, on August 10, 2016.

At some point, everyone owns a copy of The Album. The title or the artist differs from one person to another, but its impact on the listener is the same: it simply alerts you to the fact that it can be worth devoting every last corner of your inner life to music, rather than treating it as a source of background noise or diversion. It's the first album that leaves a mark on your soul. Usually, it makes an appearance as you're entering your teens, which means that there's as much random chance involved as in any of the other cultural influences that dig in their claws at that age. You don't have a lot of control over what it will be. Maybe it begins with a song on the radio, or a piece of art that catches your eye at a record store, or a stab of familiarity that comes from a passing moment of exposure. (In your early teens, you're likely to love something just because you recognize it.) Whatever it is, unlike every other album you've ever heard, it doesn't let you go. It gets into your dreams. You draw pictures of the cover and pick out a few notes from it on every piano you pass. And it shapes you in ways that you can't fully articulate. The particular album that fills that role is different for everyone, or so it seems, although logic suggests that it's probably the same for a lot of teenagers at any given time. In fact, I think that you can draw a clear line between those whom the Album immersed deeply in the culture of their era and those who wound up estranged from it. I'd be a different person—and maybe a happier one—if mine had been something like Nevermind. But it wasn't. It was the soundtrack from Twin Peaks, followed by Julee Cruise's Floating Into the Night.

If I had been born a few years earlier, this might not have been an issue, but I happened to get seriously into Twin Peaks, or at least its score, shortly after the series itself had ceased to be a cultural phenomenon. The finale had aired two full years beforehand, and it had been followed soon thereafter, with what seems today like startling speed, by Twin Peaks: Fire Walk With Me. After that, it mostly disappeared. There wasn’t even a chance for me to belatedly get into the show itself. I’d watched some of it back when it initially ran, including the pilot and the horrifying episode in which the identity of Laura’s killer is finally revealed. The European cut of the premiere was later released on video, but aside from that, I had to get by with a few grainy episodes that my parents had recorded on VHS. It wasn’t until many years later that the first box set became available, allowing me to fully experience a show that I ultimately ended up loving, even if it was far more uneven—and often routine—than its reputation had led me to believe. But that didn’t really matter. Twin Peaks was just a television show, admittedly an exceptional one, but the score by Angelo Badalamenti was something else: a vision of a world that was complete in itself. I’d have trouble conveying exactly what it represents, except that it takes place in the liminal area where a gorgeous nightmare shades imperceptibly into the everyday. In Blue Velvet, which I still think is David Lynch’s greatest achievement, Jeffrey expresses it as simply as possible: “It’s a strange world.” But you can hear it more clearly in “Laura Palmer’s Theme,” which Badalamenti composed in response to Lynch’s instructions:

Start it off foreboding, like you’re in a dark wood, and then segue into something beautiful to reflect the trouble of a beautiful teenage girl. Then, once you’ve got that, go back and do something that’s sad and go back into that sad, foreboding darkness.

And it wasn’t until years later that they realized that the song had the visual structure of a pair of mountain peaks, arranged side by side. It’s a strange world indeed.

Soundtrack from Twin Peaks

If all forms of art, as the critic Walter Pater famously observed, aspire to the condition of music, then it isn't an exaggeration to say that Twin Peaks aspired to the sublimity of its own soundtrack. Badalamenti's score did everything that the series itself often struggled to accomplish, and there were times when I felt that the music was the primary work, with the show as a kind of visual adjunct. I still feel that way, on some level, about Fire Walk With Me: the movie played an important role in my life, but I don't have a lot of interest in rewatching it, while I know every note of its soundtrack by heart. And even if I grant that a score is never really complete in itself, the music of Twin Peaks pointed toward an even more intriguing artifact. It included three tracks—“The Nightingale,” “Into the Night,” and “Falling”—sung by Julee Cruise, with music by Badalamenti and lyrics by Lynch, who had earlier written her haunting song “Mysteries of Love” for Blue Velvet. I loved them all, and I can still remember the moment when a close reading of the liner notes clued me into the fact that there was an entire album by Cruise, Floating Into the Night, that I could actually own. (In fact, there were two. As it happened, my brainstorm occurred only a few months after the release of The Voice of Love, a less coherent sophomore album that I wouldn't have missed for the world.) Listening to it for the first time, I felt like the narrator of Borges's “Tlön, Uqbar, Orbis Tertius,” who once saw a fragment of an undiscovered country, and now found himself confronted with all of it at once. The next few years of my life were hugely eventful, as they are for every teenager. I read, did, and thought about a lot of things, some of which are paying off only now. But whatever else I was doing, I was probably listening to Floating Into the Night.

Last year, when I heard that the Twin Peaks soundtrack was coming out in a deluxe vinyl release, it filled me with mixed feelings. (Of course, I bought a copy, and so should you.) The plain fact is that toward the end of my teens, I put Badalamenti and Cruise away, and I haven’t listened to them much since. Which isn’t to say that I didn’t give them a lifetime’s worth of listening in the meantime. I became obsessed with Industrial Symphony No. 1: The Dream of the Brokenhearted, the curious performance piece, directed by Lynch, in which Cruise floats on wires high above the stage at the Brooklyn Academy of Music, not far from the neighborhood where I ended up spending most of my twenties. Much later, I saw Cruise perform, somewhat awkwardly, in person. I tracked down her collaborations and guest appearances—including the excellent “If I Survive” with Hybrid—and even bought her third album, The Art of Being a Girl, which I liked a lot. Somehow I never got around to buying the next one, though, and long before I graduated from college, Cruise and Badalamenti had all but disappeared from my personal rotation. And I regret this. I still feel that Floating Into the Night is a perfect album, although it wasn’t until years later, when I heard Cruise’s real, hilariously brassy voice in her interviews, that I realized the extent to which I’d fallen in love with an ironic simulation. There are moments when I believe, with complete seriousness, that I’d be a better person today if I’d kept listening to this music: half of my life has been spent trying to live up to the values of my early adolescence, and I might have had an easier job of integrating all of my past selves if they shared a common soundtrack. Whenever I play it now, it feels like a part of me that has been locked away, ageless and untouched, in the Black Lodge. But life has a way of coming full circle. As Laura says to Cooper: “I’ll see you again in twenty-five years. Meanwhile…” And it feels sometimes as if she were talking to me.

Hollywood in Limbo

with 2 comments

In his essay on the fourth canto of Dante’s Inferno, which describes the circle of Limbo populated by the souls of virtuous pagans, Jorge Luis Borges discusses the notion of the uncanny, which has proven notoriously hard to define:

Toward the beginning of the nineteenth century, or the end of the eighteenth, certain adjectives of Saxon or Scottish origin (eerie, uncanny, weird) came into circulation in the English language, serving to define those places or things that vaguely inspire horror…In German, they are perfectly translated by the word unheimlich; in Spanish, the best word may be siniestro.

I was reminded of this passage while reading, of all things, Benjamin Wallace’s recent article in Vanity Fair on the decline of National Lampoon. It’s a great piece, and it captures the sense of uncanniness that I’ve always associated with a certain part of Hollywood. Writing of the former Lampoon head Dan Laikin, Wallace says:

Poor choice of partners proved a recurring problem. Unable to get traction with the Hollywood establishment, Laikin appeared ready to work with just about anyone. “There were those of us who’d been in the business a long time,” [development executive Randi] Siegel says, “who told him not to do business with certain people. Dan had a tendency to trust people that were probably not the best people to trust. I think he wanted to see the good in it and change things.” He didn’t necessarily have much choice. If you’re not playing in Hollywood’s big leagues, you’re playing in its minors, which teem with marginal characters…“Everyone Danny hung out with was sketchy,” says someone who did business with Laikin. Laikin, for his part, blames the milieu: “I’m telling you, I don’t surround myself with these people. I don’t search them out. They’re all over this town.”

Years ago, I attended a talk by David Mamet in which he said something that I’ve never forgotten. Everybody gets a break in Hollywood after twenty-five years, but some get it at the beginning and others at the end, and the important thing is to be the one who stays after everyone else has gone home. Wallace’s article perfectly encapsulates that quality, which I’ve always found fascinating, perhaps because I’ve never had to live with it. It results in a stratum of players in the movie and television industry who haven’t quite broken through, but also haven’t reached the point where they drop out entirely. They end up, in short, in a kind of limbo, which Borges vividly describes in the same essay:

There is something of the oppressive wax museum about this still enclosure: Caesar, armed and idle; Lavinia, eternally seated next to her father…A much later passage of the Purgatorio adds that the shades of the poets, who are barred from writing, since they are in the Inferno, seek to distract their eternity with literary discussions.

You could say that the inhabitants of Hollywood’s fourth circle of hell, who are barred from actually making movies, seek to distract their eternity by talking about the movies that they wish they could make. It’s easy to mock them, but there’s also something weirdly ennobling about their sheer persistence. They’re survivors in a profession where few of us would have lasted, if we even had the courage to go out there in the first place, and at a time when such people seem more likely to end up at something like the Fyre Festival, it’s nice to see that they still exist in Hollywood.

So what is it about the movie industry that draws and retains such personalities? One of its most emblematic figures is Robert Towne, who, despite his Oscar for Chinatown and his reputation as the dean of American screenwriters, has spent his entire career looking like a man on the verge of his big break. If Hollywood is Limbo, Towne is its Caesar, “armed and idle,” and he’s been there for five decades. Not surprisingly, he has a lot of insight into the nature of that hell. In his interview with John Brady in The Craft of the Screenwriter, Towne says:

You are often involved with a producer who is more interested in making money on the making of the movie than he is on the releasing of the movie. There is a lot of money to be made on the production of a movie, not just in salary, but all sorts of ways that are just not altogether honest. So he’s going to make his money on the making, which is really reprehensible.

“Movies are so difficult that you should really make movies that you feel you absolutely have to make,” Towne continues—and the fact that this happens so rarely implies that the studio ecosystem is set up for something totally different. Towne adds:

It’s easier for a director and an actor to be mediocre and get away with it than it is for a writer. Even a writer who happens to be mediocre has to work pretty hard to get through a script, whereas a cameraman will say to the director, “Where do you think you want to put the camera? You want it here? All right, I’m going to put it here.” In other words, a director can be carried along by the production if he’s mediocre, to some extent; and that’s true of an actor, too.

Towne tosses off these observations without dwelling on them, knowing that there’s plenty more where they came from, but if you put them together, you end up with a pretty good explanation of why Hollywood is the way it is. It’s built to profit from the making of movies, rather than from the movies themselves, which is only logical: if it depended on success at the box office, everybody would be out of a job. The industry also has structures in place that allow people to skate by for years without any particular skills, if they manage to stick to the margins. (In any field where past success is no guarantee of future performance, it’s the tall poppies that get their heads chopped off.) Under such conditions, survival isn’t a matter of talent, but of something much less definable. A brand like National Lampoon, which has been leveled by time but retains some of its old allure, draws such people like a bright light draws fish in the abyss, and it provides a place where they can be studied. The fact that Kato Kaelin makes an appearance in these circles shouldn’t be surprising—he’s the patron saint of those who hang on for decades for no particular reason. And it’s hard not to relate to the hope that sustains them:

“What everyone always does at the company is feel like something big is about to happen, and I want to be here for it,” [creative director] Marty Dundics says. “We’re one hit movie away from, or one big thing away from, being back on top. It’s always this underdog you’re rooting for. And you don’t want to miss it. That big thing that’s about to happen. That was always the mood.”

Extend that mood across a quarter of a century, and you have Hollywood, which also struggles against the realization that Borges perceives in Limbo: “The certainty that tomorrow will be like today, which was like yesterday, which was like every day.”

The critical path

leave a comment »

Renata Adler

Note: I’m taking a few days off, so I’ll be republishing some of my favorite pieces from earlier in this blog’s run. This post originally appeared, in a slightly different form, on February 16, 2016.

Every few years or so, I go back and revisit Renata Adler’s famous attack in the New York Review of Books on the reputation of the film critic Pauline Kael. As a lifelong Kael fan, I don’t agree with Adler—who describes Kael’s output as “not simply, jarringly, piece by piece, line by line, and without interruption, worthless”—but I respect the essay’s fire and eloquence, and it’s still a great read. What is sometimes forgotten is that Adler opens with an assault, not on Kael alone, but on the entire enterprise of professional criticism itself. Here’s what she says:

The job of the regular daily, weekly, or even monthly critic resembles the work of the serious intermittent critic, who writes only when he is asked to or genuinely moved to, in limited ways and for only a limited period of time…Normally, no art can support for long the play of a major intelligence, working flat out, on a quotidian basis. No serious critic can devote himself, frequently, exclusively, and indefinitely, to reviewing works most of which inevitably cannot bear, would even be misrepresented by, review in depth…

The simple truth—this is okay, this is not okay, this is vile, this resembles that, this is good indeed, this is unspeakable—is not a day’s work for a thinking adult. Some critics go shrill. Others go stale. A lot go simultaneously shrill and stale.

Adler concludes: “By far the most common tendency, however, is to stay put and simply to inflate, to pretend that each day’s text is after all a crisis—the most, first, best, worst, finest, meanest, deepest, etc.—to take on, since we are dealing in superlatives, one of the first, most unmistakable marks of the hack.” And I think that she has a point, even if I have to challenge a few of her assumptions. (The statement that most works of art “inevitably cannot bear, would even be misrepresented by, review in depth,” is particularly strange, with its implicit division of all artistic productions into the sheep and the goats. It also implies that it’s the obligation of the artist to provide a worthy subject for the major critic, when in fact it’s the other way around: as a critic, you prove yourself in large part through your ability to mine insight from the unlikeliest of sources.) Writing reviews on a daily or weekly basis, especially when you have a limited amount of time to absorb the work itself, lends itself inevitably to shortcuts, and you often find yourself falling back on the same stock phrases and judgments. And Adler’s warning about “dealing in superlatives” seems altogether prescient. As Keith Phipps and Tasha Robinson of The A.V. Club pointed out a few years back, the need to stand out in an ocean of competing coverage means that every topic under consideration becomes either an epic fail or an epic win: a sensible middle ground doesn’t generate page views.

Pauline Kael

But the situation, at least from Adler’s point of view, is even more dire than when she wrote this essay in the early eighties. When Adler’s takedown of Kael first appeared, the most threatening form of critical dilution lay in weekly movie reviews: today, we’re living in a media environment in which every episode of every television show gets thousands of words of critical analysis from multiple pop culture sites. (Adler writes: “Television, in this respect, is clearly not an art but an appliance, through which reviewable material is sometimes played.” Which is only a measure of how much the way we think and talk about the medium has changed over the intervening three decades.) The conditions that Adler identifies as necessary for the creation of a major critic like Edmund Wilson or Harold Rosenberg—time, the ability to choose one’s subjects, and the freedom to quit when necessary—have all but disappeared for most writers hoping to make a mark, or even just a living. To borrow a trendy phrase, we’ve reached a point of peak content, with a torrent of verbiage being churned out at an unsustainable pace without the advertising dollars to support it, in a situation that can be maintained only by the seemingly endless supply of aspiring writers willing to be chewed up by the machine. And if Adler thought that even a monthly reviewing schedule was deadly for serious criticism, I’d be curious to hear how she feels about the online apprenticeship that all young writers seem expected to undergo these days.

Still, I’d like to think that Adler got it wrong, just as I believe that she was ultimately mistaken about Kael, whose legacy, for all its flaws, still endures. (It’s revealing to note that Adler had a long, distinguished career as a writer and critic herself, and yet she almost certainly remains best known among casual readers for her Kael review.) Not every lengthy writeup of the latest episode of Riverdale is going to stand the test of time, but as a crucible for forming a critic’s judgment, this daily grind feels like a necessary component, even if it isn’t the only one. A critic needs time and leisure to think about major works of art, which is a situation that the current media landscape doesn’t seem prepared to offer. But the ability to form quick judgments about works of widely varying quality and to express them fluently on deadline is an indispensable part of any critic’s toolbox. When taken as an end in itself, it can be deadening, as Adler notes, but it can also be the foundation for something more, even if it has to be undertaken outside of—or despite—the critic’s day job. The critic’s responsibility, now more than ever, isn’t to detach entirely from the relentless pace of pop culture, but to find ways of channeling it into something deeper than the instantaneous think piece or hot take. As a daily blogger who also undertakes projects that can last for months or years, I’m constantly mindful of the relationship between my work on demand and my larger ambitions. And I sure hope that the two halves can work together. Because, like it or not, every critic is walking that path already.

Written by nevalalee

April 18, 2017 at 9:00 am

The illusion of life

leave a comment »

Last week, The A.V. Club ran an entire article devoted to television shows in which the lead is also the best character, which only points to how boring many protagonists tend to be. I’ve learned to chalk this up to two factors, one internal, the other external. The internal problem stems from the reasonable principle that the narrative and the hero’s objectives should be inseparable: the conflict should emerge from something that the protagonist urgently needs to accomplish, and when the goal has been met—or spectacularly thwarted—the story is over. It’s great advice, but in practice, it often results in leads who are boringly singleminded: when every action needs to advance the plot, there isn’t much room for the digressions and quirks that bring characters to life. The supporting cast has room to go off on tangents, but the characters at the center have to constantly triangulate between action, motivation, and relatability, which can drain them of all surprise. A protagonist is under so much narrative pressure that when the story relaxes, he bursts, like a sea creature brought up from its crevasse to the surface. Elsewhere, I’ve compared a main character to a diagram of a pattern of forces, like one of the fish in D’Arcy Wentworth Thompson’s On Growth and Form, in which the animal’s physical shape is determined by the outside stresses to which it has been subjected. And on top of this, there’s an external factor, which is the universal desire of editors, producers, and studio executives to make the protagonist “likable,” which, whether or not you agree with it, tends to smooth out the rough edges that make a character vivid and memorable.

In the classic textbook Disney Animation: The Illusion of Life, we find a useful perspective on this problem. The legendary animators Frank Thomas and Ollie Johnston provide a list of guidelines for evaluating story material before the animation begins, including the following:

Tell your story through the broad cartoon characters rather than the “straight” ones. There is no way to animate strong-enough attitudes, feelings, or expressions on realistic characters to get the communication you should have. The more real, the less latitude for clear communication. This is more easily done with the cartoon characters who can carry the story with more interest and spirit anyway. Snow White was told through the animals, the dwarfs, and the witch—not through the prince or the queen or the huntsman. They had vital roles, but their scenes were essentially situation. The girl herself was a real problem, but she was helped by always working to a sympathetic animal or a broad character. This is the old vaudeville trick of playing the pretty girl against the buffoon; it helps both characters.

Even more than Snow White, the great example here is Sleeping Beauty, which has always fascinated me as an attempt by Disney to recapture past glories by a mechanical application of its old principles raised to dazzling technical heights. Not only do Aurora and Prince Philip fail to drive the story, but they’re all but abandoned by it—Aurora speaks fewer lines than any other Disney main character, and neither of them speaks for the last thirty minutes. Not only does the film acknowledge the dullness of its protagonists, but it practically turns it into an artistic statement in itself.

And it arises from a tension between the nature of animation, which is naturally drawn to caricature, and the notion that sympathetic protagonists need to be basically realistic. With regard to the first point, Thomas and Johnston advise:

Ask yourself, “Can the story point be done in caricature?” Be sure the scenes call for action, or acting that can be caricatured if you are to make a clear statement. Just to imitate nature, illustrate reality, or duplicate live action not only wastes the medium but puts an enormous burden on the animator. It should be believable, but not realistic.

The italics are mine. This is a good rule, but it collides headlong with the principle that the “real” characters should be rendered with greater naturalism:

Of course, there is always a big problem in making the “real” or “straight” characters in our pictures have enough personality to carry their part of the story…The point of this is misinterpreted by many to mean that characters who have to be represented as real should be left out of feature films, that the stories should be told with broad characters who can be handled more easily. This would be a mistake, for spectators need to have someone or something they can believe in, or the picture falls apart.

And while you could make a strong case that viewers relate just as much to the sidekicks, it’s probably also true that a realistic central character serves an important functional role, which allows the audience to take the story seriously. This doesn’t just apply to animation, either, but to all forms of storytelling—including most fiction, film, and television—that work best with broad strokes. In many cases, you can sense the reluctance of animators to tackle characters who don’t lend themselves to such bold gestures:

Early in the story development, these questions will be asked: “Does this character have to be straight?” “What is the role we need here?” If it is a prince or a hero or a sympathetic person who needs acceptance from the audience to make the story work, then the character must be drawn realistically.

Figuring out the protagonists is a thankless job: they have to serve a function within the overall story, but they’re also liable to be taken out and judged on their own merits, in the absence of the narrative pressures that created them in the first place. The best stories, it seems, are the ones in which that pattern of forces results in something fascinating in its own right, or which transform a stock character into something more. (It’s revealing that Thomas and Johnston refer to the queen and the witch in Snow White as separate figures, when they’re really a single person who evolves over the course of the story into her true form.) And their concluding advice is worth bearing in mind by everyone: “Generally speaking, if there is a human character in a story, it is wise to draw the person with as much caricature as the role will permit.”

How to rest

with 4 comments

As a practical matter, there appears to be a limit to how long a novelist can work on any given day while still remaining productive. Anecdotally, the maximum effective period seems to fall somewhere in the range of four to six hours, which leaves some writers with a lot of time to kill. In a recent essay for The New Yorker, Gary Shteyngart writes:

I believe that a novelist should write for no more than four hours a day, after which returns truly diminish; this, of course, leaves many hours for idle play and contemplation. Usually, such a schedule results in alcoholism, but sometimes a hobby comes along, especially in middle age.

In Shteyngart’s case, the hobby took the form of a fascination with fine watches, to the point where he was spending thousands of dollars on his obsession every year. This isn’t a confession designed to elicit much sympathy from others—especially when he observes that spending $4,137.25 on a watch means throwing away “roughly 4.3 writing days”—but I’d like to believe that he chose a deliberately provocative symbol of wasted time. Most novelists have day jobs, with all their writing squeezed into the few spare moments that remain, so to say that writers have hours of idleness at their disposal, complete with that casual “of course,” implies an unthinking acceptance of a privilege that only a handful of authors ever attain. Shteyngart, I think, is smarter than this, and he may simply be using the luxury watch as an emblem of how precious each minute can be for writers for whom time itself hasn’t become devalued.

But let’s assume that you’re lucky enough to write for a living, and that your familial or social obligations are restricted enough to leave you with over half the day to spend as you see fit. What can you do with all those leisure hours? Alcoholism, as Shteyngart notes, is an attractive possibility, but perhaps you want to invest your time in an activity that enhances your professional life. Georg von Békésy, the Hungarian biophysicist, thought along similar lines, as his biographer Floyd Ratliff relates:

His first idea about how to excel as a scientist was simply to work hard and long hours, but he realized that his colleagues were working just as hard and just as long. So he decided instead to follow the old rule: sleep eight hours, work eight hours, and rest eight hours. But Békésy put a “Hungarian twist” on this, too. There are many ways to rest, and he reasoned that perhaps he could work in some way that would improve his judgment, and thus improve his work. The study of art, in which he already had a strong interest, seemed to offer this possibility…By turning his attention daily from science to art, Békésy refreshed his mind and sharpened his faculties.

This determination to turn even one’s free time into a form of self-improvement seems almost inhuman. (His “old rule” reminds me of the similar advice that Ursula K. Le Guin offers in The Left Hand of Darkness: “When action grows unprofitable, gather information; when information grows unprofitable, sleep.”) But I think that Békésy was also onto something when he sought out a hobby that provided a contrast to what he was doing for a living. A change, as the saying goes, is as good as a rest.

In fact, you could say that there are two types of hobbies, although they aren’t mutually exclusive. There are hobbies that are orthogonal to the rest of our lives, activating parts of the mind or personality that otherwise go unused, or providing a soothing mechanical respite from the nervous act of brainwork—think of Churchill and his bricklaying. Alternatively, they can channel our professional urges into a contained, orderly form that provides a kind of release. Ayn Rand, of all people, wrote perceptively about stamp collecting:

Stamp collecting is a hobby for busy, purposeful, ambitious people…because, in pattern, it has the essential elements of a career, but transposed to a clearly delimited, intensely private world…In stamp collecting, one experiences the rare pleasure of independent action without irrelevant burdens or impositions.

In my case, this blog amounts to a sort of hobby, and I keep at it for both reasons. It’s a form of writing, so it provides me with an outlet for those energies, but it also allows me to think about subjects that aren’t directly connected to my work. The process is oddly refreshing—I often feel more awake and alert after I’ve spent an hour writing a post, as if I’ve been practicing my scales on the piano—and it saves an hour from being wasted in unaccountable ways. This may be why many people are drawn to hobbies that leave you with a visible result in the end, whether it’s a blog post, a stamp collection, or a brick wall.

But there’s also something to be said for doing nothing. If you’ve devoted four hours—or whatever amount seems reasonable—to work that you love, you’ve earned the right to spend your remaining time however you like. As Sir Walter Scott wrote in a letter to a friend:

And long ere dinner time, I have
Full eight close pages wrote;
What, duty, hast thou now to crave?
Well done, Sir Walter Scott!

At the end of the day, I often feel like watching television, and the show I pick serves as an index to how tired I am. If I’m relatively energized, I can sit through a prestige drama; if I’m more drained, I’ll suggest a show along the lines of Riverdale; and if I can barely see straight, I’ll put on a special feature from my Lord of the Rings box set, which is my equivalent of comfort food. And you can see this impulse in far more illustrious careers. Ludwig Wittgenstein, who thought harder than anyone else of his century, liked to relax by watching cowboy movies. The degree to which he felt obliged to unplug is a measure of how much he drove himself, and in the absence of other vices, this was as good a way of decompressing as any. It prompted Nicholson Baker to write: “[Wittgenstein] would go every afternoon to watch gunfights and arrows through the chest for hours at a time. Can you take seriously a person’s theory of language when you know that he was delighted by the woodenness and tedium of cowboy movies?” To which I can only respond: “Absolutely.”

Written by nevalalee

April 5, 2017 at 9:36 am

The cliché factory

with one comment

A few days ago, Bob Mankoff, the cartoon editor of The New Yorker, devoted his weekly email newsletter to the subject of “The Great Clichés.” A cliché, as Mankoff defines it, is a restricted comic situation “that would be incomprehensible if the other versions had not first appeared,” and he provides a list of examples that should ring bells for all readers of the magazine, from the ubiquitous “desert island” to “The-End-Is-Nigh Guy.” Here are a few of my favorites:

Atlas holding up the world; big fish eating little fish; burglars in masks; cave paintings; chalk outline at crime scene; crawling through desert; galley slaves; guru on mountain; mobsters and victim with cement shoes; man in stocks; police lineup; two guys in horse costume.

Inevitably, Mankoff’s list includes a few questionable choices, while also omitting what seem like obvious contenders. (Why “metal detector,” but not “Adam and Eve?”) But it’s still something that writers of all kinds will want to clip and save. Mankoff doesn’t make the point explicitly, but most gag artists probably keep a similar list of clichés as a starting point for ideas, as we read in Mort Gerberg’s excellent book Cartooning:

List familiar situations—clichés. You might break them down into categories, like domestic (couple at breakfast, couple watching television); business (boss berating employee, secretary taking dictation); historic (Paul Revere’s ride, Washington crossing the Delaware); even famous cartoon clichés (the desert island, the Indian snake charmer)…Then change something a little bit.

As it happened, when I saw Mankoff’s newsletter, I had already been thinking about a far more harmful kind of comedy cliché. Last week, Kal Penn went on Twitter to post some of the scripts from his years auditioning as a struggling actor, and they amount to an alternative list of clichés kept by bad comedy writers, consciously or otherwise: “Gandhi lookalike,” “snake charmer,” “foreign student.” One character has a “slight Hindi accent,” another is a “Pakistani computer geek who dresses like Beck and is in a perpetual state of perspiration,” while a third delivers dialogue that is “peppered with Indian cultural references…[His] idiomatic conversation is hit and miss.” A typical one-liner: “We are propagating like flies on elephant dung.” One script describes a South Asian character’s “spastic techno pop moves,” with Penn adding that “the big joke was an accent and too much cologne.” (It recalls the Morrissey song “Bengali in Platforms,” which included the notorious line: “Life is hard enough when you belong here.” You could amend it to read: “Being a comedy writer is hard enough when you belong here.”) Penn closes by praising shows with writers “who didn’t have to use external things to mask subpar writing,” which cuts to the real issue here. The real person in “a perpetual state of perspiration” isn’t the character, but the scriptwriter. Reading the teleplay for an awful sitcom is a deadening experience in itself, but it’s even more depressing to realize that in most cases, the writer is falling back on a stereotype to cover up the desperate unfunniness of the writing. When Penn once asked if he could play a role without an accent, in order to “make it funny on the merits,” he was told that he couldn’t, probably because everybody else knew that the merits were nonexistent.

So why is one list harmless and the other one toxic? In part, it’s because we’ve caught them at different stages of evolution. The list of comedy conventions that we find acceptable is constantly being culled and refined, and certain art forms are slightly in advance of the others. Because of its cultural position, The New Yorker is particularly subject to outside pressures, as it learned a decade ago with its Obama terrorist cover—which demonstrated that there are jokes and images that aren’t acceptable even if the magazine’s attitude is clear. Turn back the clock, and Mankoff’s list would include conventions that probably wouldn’t fly today. Gerberg’s list, like Penn’s, includes “snake charmer,” which Mankoff omits, and he leaves out “Cowboys and Indians,” a cartoon perennial that seems to be disappearing. And it can be hard to reconstruct this history, because the offenders tend to be consigned to the memory hole. When you read a lot of old magazine fiction, as I do, you inevitably find racist stereotypes that would be utterly unthinkable today, but most of the stories in which they appear have long since been forgotten. (One exception, unfortunately, is the Sherlock Holmes short story “The Adventure of the Three Gables,” which opens with a horrifying racial caricature that most Holmes fans must wish didn’t exist.) If we don’t see such figures as often today, it isn’t necessarily because we’ve become more enlightened, but because we’ve collectively agreed to remove certain figures from the catalog of stock comedy characters, while papering over their use in the past. A list of clichés is a snapshot of a culture’s inner life, and we don’t always like what it says. The demeaning parts still offered to Penn and actors of similar backgrounds have survived for longer than they should have, but sitcoms that trade in such stereotypes will be unwatchable in a decade or two, if they haven’t already been consigned to oblivion.

Of course, most comedy writers aren’t thinking in terms of decades, but about getting through the next five minutes. And these stereotypes endure precisely because they’re seen as useful, in a shallow, short-term kind of way. There’s a reason why such caricatures are more visible in comedy than in drama: comedy is simply harder to write, but we always want more of it, so it’s inevitable that writers on a deadline will fall back on lazy conventions. The really insidious thing about these clichés is that they sort of work, at least to the extent of being approved by a producer without raising any red flags. Any laughter that they inspire is the equivalent of empty calories, but they persist because they fill a cynical need. As Penn points out, most writers wouldn’t bother with them at all if they could come up with something better. Stereotypes, like all clichés, are a kind of fallback option, a cheap trick that you deploy if you need a laugh and can’t think of another way to get one. Clichés can be a precious commodity, and all writers resort to them occasionally. They’re particularly valuable for gag cartoonists, who can’t rely on a good idea from last week to fill the blank space on the page—they’ve got to produce, and sometimes that means yet another variation on an old theme. But there’s a big difference between “Two guys in a horse costume” and “Gandhi lookalike.” Being able to make that distinction isn’t a matter of political correctness, but of craft. The real solution is to teach people to be better writers, so that they won’t even be tempted to resort to such tired solutions. This might seem like a daunting task, but in fact, it happens all the time. A cliché factory operates on the principle of supply and demand. And it shuts down as soon as people no longer find it funny.

Written by nevalalee

March 20, 2017 at 11:18 am

A series of technical events

with 6 comments

In his book Four Arguments for the Elimination of Television, which was first published in the late seventies, the author Jerry Mander, a former advertising executive, lists a few of the “technical tricks” that television can use to stimulate the viewer’s interest:

Editors make it possible for a scene in one room to be followed instantly by a scene in another room, or at another time, or another place. Words appear over the images. Music rises and falls in the background. Two images or three can appear simultaneously. One image can be superposed on another on the screen. Motion can be slowed down or sped up.

These days, we take most of these effects for granted, as part of the basic grammar of the medium, but to Mander, they’re something more sinister. Technique, he argues, is replacing content, and at its heart, it’s something of a confidence game:

Through these technical events, television images alter the usual, natural imagery possibilities, taking on the quality of a naturally highlighted event. They make it seem that what you are looking at is unique, unusual, and extraordinary…But nothing unusual is going on. All that’s happening is that the viewer is watching television, which is the same thing that happened an hour ago, or yesterday. A trick has been played. The viewer is fixated by a conspiracy of dimmed-out environments combined with an artificial, impossible, fictitious unusualness.

In order to demonstrate “the extent to which television is dependent upon technical tricks to maintain your interest,” Mander invites the reader to conduct what he calls a technical events test:

Put on your television set and simply count the number of times there is a cut, a zoom, a superimposition, a voiceover, the appearance of words on the screen—a technical event of some kind…Each technical event—each alteration of what would be natural imagery—is intended to keep your attention from waning as it might otherwise…Every time you are about to relax your attention, another technical event keeps you attached…

You will probably find that in the average commercial television program, there are eight or ten technical events for every sixty-second period…You may also find that there is rarely a period of twenty seconds without any sort of technical event at all. That may give you an idea of the extent to which producers worry about whether the content itself can carry your interest.

He goes on to list the alleged consequences of exposure to such techniques, from shortened attention span in adults to heightened hyperactivity in children, and concludes: “Advertisers are the high artists of the medium. They have gone further in the technologies of fixation than anyone else.”
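Mander frames the test as something anyone can run with a stopwatch and a tally sheet, and it is simple enough to mock up. Here is a minimal sketch in Python, purely for illustration, that takes a hand-logged list of event timestamps (the numbers below are hypothetical) and reports the two figures he tells us to watch for: events per sixty-second period, and the longest stretch without any technical event at all.

from typing import List

def technical_events_report(timestamps: List[float], duration: float) -> dict:
    # timestamps: seconds at which you logged a cut, zoom, superimposition,
    # voiceover, or on-screen text; duration: total seconds watched.
    events = sorted(timestamps)
    # Events per sixty-second period, Mander's informal benchmark of eight to ten.
    per_minute = 60.0 * len(events) / duration if duration else 0.0
    # Longest stretch with no technical event at all (he suggests it rarely
    # exceeds twenty seconds in commercial programming).
    edges = [0.0] + events + [duration]
    longest_gap = max(b - a for a, b in zip(edges, edges[1:]))
    return {
        "events": len(events),
        "events_per_minute": round(per_minute, 1),
        "longest_gap_seconds": round(longest_gap, 1),
    }

# Hypothetical one-minute commercial with nine logged events:
# reports 9 events, 9.0 per minute, and a 9-second longest gap.
print(technical_events_report(
    [3.0, 9.0, 14.0, 22.0, 28.0, 35.0, 41.0, 50.0, 57.0], duration=60.0))

The logging itself is the hard part, of course; the arithmetic only makes Mander's point visible, which is that the numbers stay remarkably constant from one program to the next.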

Mander’s argument was prophetic in many ways, but in one respect, he was clearly wrong. In the four decades since his book first appeared, it has become obvious that the “high artists” of distraction and fixation aren’t advertisers, but viewers themselves, and their true canvas isn’t television, but the Internet. Instead of passively viewing a series of juxtaposed images, we assemble our online experience for ourselves, and each time we open a new link, we’re effectively acting as our own editors. Every click is a cut. (The anecdotal figure that the reader spends less than fifteen seconds on the average web page is very close to the frequency of technical events on television, which isn’t an accident.) We do a better job of distracting ourselves than any third party ever could, as long as we’re given sufficient raw material and an intuitive interface—which explains much of the evolution of online content. When you look back at web pages from the early nineties, it’s easy to laugh at how noisy and busy they tended to be, with music, animated graphics, and loud colors. This wasn’t just a matter of bad taste, but of a mistaken analogy to television. Web designers thought that they had to grab our attention using the same technical tricks employed by other media, but that wasn’t the case. The hypnotic browsing state that we’ve all experienced isn’t produced by any one page, but by the succession of similar pages as the user moves between them at his or her private rhythm. Ideally, from the point of view of a media company, that movement will take place within the same family of pages, but it also leads to a convergence of style and tone between sites. Most web pages these days look more or less the same because that sameness creates a kind of continuity of experience. Instead of the loud, colorful pages of old, they’re static and full of white space. Mander calls this “the quality of even tone” of television, and the Internet does it one better. It’s uniform and easily aggregated, and you can cut it together however you like, like yard goods.

In fact, it isn’t content that gives us the most pleasure, but the act of clicking, with the sense of control it provides. This implies that bland, interchangeable content is actually preferable to more arresting material. The easier it is to move between basically similar units, the closer the experience is to that of an ideally curated television show—which is why different sources have a way of blurring together into the same voice. When I’m trying to tell my wife about a story I read online, I often have trouble remembering if I read it on Vox, Vulture, or Vice, which isn’t a knock against those sites, but a reflection of the unconscious pressure to create a seamless browsing experience. From there, it’s only a short step to outright content mills and fake news. In the past, I’ve called this AutoContent, after the interchangeable bullet points used to populate slideshow presentations, but it’s only effective if you can cut quickly from one slide to another. If you had to stare at it for longer than fifteen seconds, you wouldn’t be able to stand it. (This may be why we’ve come to associate quality with length, which is more resistant to being reduced to the filler between technical events. The “long read,” as I’ve argued elsewhere, can be a marketing category in itself, but it does need to try a little harder.) The idea that browsing online is a form of addictive behavior isn’t a new one, of course, and it’s often explained in terms of the “random rewards” that the brain receives when we check email or social media. But the notion of online content as a convenient source of technical events is worth remembering. When we spend any period of time online, we’re essentially watching a television show while simultaneously acting as its editor and director, and often as its writer and actors. In the end, to slightly misquote Mander, all that’s happening is that the reader is seated in front of a computer or looking at a phone, “which is the same thing that happened an hour ago, or yesterday.” The Internet is better at this than television ever was. And in a generation or two, it may result in television being eliminated after all.

Written by nevalalee

March 14, 2017 at 9:18 am

Farewell to Mystic Falls

with one comment

Note: Spoilers follow for the series finale of The Vampire Diaries.

On Friday, I said goodbye to The Vampire Diaries, a series that I once thought was one of the best genre shows on television, only to stop watching it for its last two seasons. Despite its flaws, it occupies a special place in my memory, in part because its strengths were inseparable from the reasons that I finally abandoned it. Like Glee, The Vampire Diaries responded to its obvious debt to an earlier franchise—High School Musical for the former, Twilight for the latter—both by subverting its predecessor and by burning through ideas as relentlessly as it could. It’s as if both shows decided to refute any accusations of unoriginality by proving that they could be more ingenious than their inspirations, and amazingly, it sort of worked, at least for a while. There’s a limit to how long any series can repeatedly break down and reassemble itself, however, and both started to lose steam after about three years. In the case of The Vampire Diaries, its problems crystallized around its ostensible lead, Elena Gilbert, as portrayed by the game and talented Nina Dobrev, who left the show two seasons ago before returning for an encore in the finale. Elena spent most of her first sendoff asleep, and she isn’t given much more to do here. There’s a lot about the episode that I liked, and it provides satisfying moments of closure for many of its characters, but Elena isn’t among them. In the end, when she awakens from the magical coma in which she has been slumbering, it’s so anticlimactic that it reminds me of what Pauline Kael wrote of Han’s revival in Return of the Jedi: “It’s as if Han Solo had locked himself in the garage, tapped on the door, and been let out.”

And what happened to Elena provides a striking case study of why the story’s hero is often fated to become the least interesting person in sight. The main character of a serialized drama is under such pressure to advance the plot that he or she becomes reduced to the diagram of a pattern of forces, like one of the fish in D’Arcy Wentworth Thompson’s On Growth and Form, in which the animal’s physical shape is determined by the outside stresses to which it has been subjected. Instead of making her own decisions, Elena was obliged to become whatever the series needed her to be. Every protagonist serves as a kind of motor for the story, which is frequently a thankless role, but it was particularly problematic on a show that defined itself by its willingness to burn through a year of potential storylines each month. Every episode felt like a season finale, and characters were freely killed, resurrected, and brainwashed to keep the wheels turning. It was hardest on Elena, who, at her best, was a compelling, resourceful heroine. After six seasons of personality changes, possessions, memory wipes, and the inexplicable choices that she made just because the story demanded it, she became an empty shell. If you were designing a show in a laboratory to see what would happen if its protagonist was forced to live through plot twists at an accelerated rate, like the stress tests that engineers use to put a component through a lifetime’s worth of wear in a short period of time, you couldn’t do much better than The Vampire Diaries. And while it might have been theoretically interesting to see what happened to the series after that one piece was removed, I didn’t think it was worth sitting through another two seasons of increasingly frustrating television.

After the finale was shot, series creators Kevin Williamson and Julie Plec made the rounds of interviews to discuss the ending, and they shared one particular detail that fascinates me. If you haven’t watched The Vampire Diaries, all you need to know is that its early seasons revolved around a love triangle between Elena and the vampire brothers Stefan and Damon, a nod to Twilight that quickly became one of the show’s least interesting aspects. Elena seemed fated to end up with Stefan, but she spent the back half of the series with Damon, and it ended with the two of them reunited. In a conversation with Deadline, Williamson revealed that this wasn’t always the plan:

Well, I always thought it would be Stefan and Elena. They were sort of the anchor of the show, but because we lost Elena in season six, we couldn’t go back. You know Nina could only come back for one episode—maybe if she had came back for the whole season, we could even have warped back towards that, but you can’t just do it in forty-two minutes.

Dobrev’s departure, in other words, froze that part of the story in place, even as the show around it continued its usual frantic developments, and when she returned, there wasn’t time to do anything but keep Elena and Damon where they had left off. There’s a limit to how much ground you can cover in the course of a single episode, so it seemed easier for the producers to stick with what they had and figure out a way to make it seem inevitable.

The fact that it works at all is a tribute to the skill of the writers and cast, as well as to the fact that the whole love triangle was basically arbitrary in the first place. As James Joyce said in a very different context, it was a bridge across which the characters could walk, and once they were safely on the other side, it could be blown to smithereens. The real challenge was how to make the finale seem like a definitive ending, after the show had killed off and resurrected so many characters that not even death itself felt like a conclusion. It resorted to much the same solution that Lost did when faced with a similar problem: it shut off all possibility of future narrative by reuniting its characters in heaven. This is partly a form of wish fulfillment, as we’ve seen with so many other television series, but it also puts a full stop on the story by leaving us in an afterlife, where, by definition, nothing can ever change. It’s hilariously unlike the various versions of the world to come that the series has presented over the years, from which characters can always be yanked back to life when necessary, but it’s also oddly moving and effective. Watching it, I began to appreciate how the show’s biggest narrative liability—a cast that just can’t be killed—also became its greatest asset. The defining image of The Vampire Diaries was that of a character who has his neck snapped, and then just shakes it off. Williamson and Plec must have realized, consciously or otherwise, that it was a reset button that would allow them to go through more ideas than would be possible on a show where a broken neck was permanent. Every denizen of Mystic Falls got a great death scene, often multiple times per season, and the show exploited that freedom until it exhausted itself. It only really worked for three years out of eight, but it was a great run while it lasted. And now, after life’s fitful fever, the characters can sleep well, as they sail off into the mystic.

From Sputnik to WikiLeaks

with 2 comments

In Toy Story 2, there’s a moment in which Woody discovers that his old television series, Woody’s Roundup, was abruptly yanked off the air toward the end of the fifties. He asks: “That was a great show. Why cancel it?” The Prospector replies bitterly: “Two words: Sput-nik. Once the astronauts went up, children only wanted to play with space toys.” And while I wouldn’t dream of questioning the credibility of a man known as Stinky Pete, I feel obliged to point out that his version of events isn’t entirely accurate. The space craze among kids really began more than half a decade earlier, with the premiere of Tom Corbett, Space Cadet, and the impact of Sputnik on science fiction was far from a positive one. Here’s what John W. Campbell wrote about it in the first issue of Astounding to be printed after the satellite’s launch:

Well, we lost that race; Russian technology achieved an important milestone in human history—one that the United States tried for, talked about a lot, and didn’t make…One of the things Americans have long been proud of—and with sound reason—is our ability to convert theoretical science into practical, working engineering…This time we’re faced with the uncomfortable realization that the Russians have beaten us in our own special field; they solved a problem of engineering technology faster and better than we did.

And while much of the resulting “Sputnik crisis” was founded on legitimate concerns—Sputnik was as much a triumph of ballistic rocketry as it was of satellite technology—it also arose from the notion that the United States had been beaten at its own game. As Arthur C. Clarke is alleged to have said, America had become “a second-rate power.”

Campbell knew right away that he had reason to worry. Lester del Rey writes in The World of Science Fiction:

Sputnik simply convinced John Campbell that he’d better watch his covers and begin cutting back on space scenes. (He never did, but the art director of the magazine and others were involved in that decision.) We agreed in our first conversation after the satellite went up that people were going to react by deciding science had caught up with science fiction, and with a measure of initial fear. They did. Rather than helping science fiction, Sputnik made it seem outmoded.

And that’s more or less exactly what happened. There was a brief spike in sales, followed by a precipitous fall as mainstream readers abandoned the genre. I haven’t been able to find specific numbers for this period, but one source, the Australian fan Wynne Whitford, states that the circulation of Astounding fell by half after Sputnik—which seems high, but probably reflects a real decline. In a letter written decades later, Campbell said of Sputnik: “Far from encouraging the sales of science fiction magazines—half the magazines being published lost circulation so drastically they went out of business!” An unscientific glance at a list of titles appears to support this. In 1958, the magazines Imagination, Imaginative Tales, Infinity Science Fiction, Phantom, Saturn, Science Fiction Adventures, Science Fiction Quarterly, Star Science Fiction, and Vanguard Science Fiction all ceased publication, followed by three more over the next twelve months. The year before, just four magazines had folded. There was a bubble, and after Sputnik, it burst.

At first, this might seem like a sort of psychological self-care, of the same kind that motivated me to scale back my news consumption after the election. Americans were simply depressed, and they didn’t need any reminders of the situation they were in. But it also seems to have affected the public’s appetite for science fiction in particular, rather than science as a whole. In fact, the demand for nonfiction science writing actually increased. As Isaac Asimov writes in his memoir In Joy Still Felt:

The United States went into a dreadful crisis of confidence over the fact that the Soviet Union had gotten there first and berated itself for not being interested enough in science. And I berated myself for spending too much time on science fiction when I had the talent to be a great science writer…Sputnik also served to increase the importance of any known public speaker who could talk on science and, particularly, on space, and that meant me.

What made science fiction painful to read, I think, was its implicit assumption of American superiority, which had been disproven so spectacularly. Campbell later compared it to the reaction after the bomb fell, claiming that it was the moment when people realized that science fiction wasn’t a form of escapism, but a warning:

The reactions to Sputnik have been more rapid, and, therefore, more readily perceptible and correlatable. There was, again, a sudden rise in interest in science fiction…and there is, now, an even more marked dropping of the science-fiction interest. A number of the magazines have been very heavily hit…I think the people of the United States thought we were kidding.

And while Campbell seemed to believe that readers had simply misinterpreted science fiction’s intentions, the conventions of the genre itself clearly bore part of the blame.

In his first editorials after Sputnik, Campbell drew a contrast between the American approach to engineering, which proceeded logically and with vast technological resources, and the quick and dirty Soviet program, which was based on rules of thumb, trial and error, and the ability to bull its way through on one particular point of attack. It reminds me a little of the election. Like the space race, last year’s presidential campaign could be seen as a kind of proxy war between the American and Russian administrations, and regardless of what you believe about the Trump camp’s involvement, which I suspect was a tacit one at most, there’s no question as to which side Putin favored. On one hand, you had a large, well-funded political machine, and on the other, one that often seemed comically inept. Yet it was the quick and dirty approach that triumphed. “The essence of ingenuity is the ability to get precision results without precision equipment,” Campbell wrote, and that’s pretty much what occurred. A few applications of brute force in the right place made all the difference, and they were aided, to some extent, by a similar complacency. The Americans saw the Soviets as bunglers, and they never seriously considered the possibility that they might be beaten by a bunch of amateurs. As Campbell put it: “We earned what we got—fully, and of our own efforts. The ridicule we’ve collected is our just reward for our consistent efforts.” Sometimes I feel the same way. Right now, we’re entering a period in which the prospect of becoming a second-rate power is far more real than it was when Clarke made his comment. It took a few months for the implications of Sputnik to really sink in. And if history is any indication, we haven’t even gotten to the crisis yet.

Who we are in the moment

with 59 comments

Jordan Horowitz and Barry Jenkins

By now, you’re probably sick of hearing about what happened at the Oscars. I’m getting a little tired of it, too, even though it was possibly the strangest and most riveting two minutes I’ve ever seen on live television. It left me feeling sorry for everyone involved, but there are at least three bright spots. The first is that it’s going to make a great case study for somebody like Malcolm Gladwell, who is always looking for a showy anecdote to serve as a grabber opening for a book or article. So many different things had to go wrong for it to happen—on the levels of design, human error, and simple dumb luck—that you can use it to illustrate just about any point you like. A second silver lining is that it highlights the basically arbitrary nature of all such awards. As time passes, the list of Best Picture winners starts to look inevitable, as if Cimarron and Gandhi and Chariots of Fire had all been canonized by a comprehensible historical process. If anything, the cycle of inevitability is accelerating, so that within seconds of any win, the narratives are already locking into place. As soon as La La Land was announced as the winner, a story was emerging about how Hollywood always goes for the safe, predictable choice. The first thing that Dave Itzkoff, a very smart reporter, posted on the New York Times live chat was: “Of course.” Within a couple of minutes, however, that plot line had been yanked away and replaced with one for Moonlight. And the fact that the two versions were all but superimposed onscreen should warn us against reading too much into outcomes that could have gone any number of ways.

But what I want to keep in mind above all else is the example of La La Land producer Jordan Horowitz, who, at a moment of unbelievable pressure, simply said: “I’m going to be really proud to hand this to my friends from Moonlight.” It was the best thing that anybody could have uttered under those circumstances, and it tells us a lot about Horowitz himself. If you were going to design a psychological experiment to test a subject’s reaction under the most extreme conditions imaginable, it’s hard to think of a better one—although it might strike a grant committee as possibly too expensive. It takes what is undoubtedly one of the high points of someone’s life and twists it instantly into what, if perhaps not the worst moment, at least amounts to a savage correction. Everything that the participants onstage did or said, down to the facial expressions of those standing in the background, has been subjected to a level of scrutiny worthy of the Zapruder film. At the end of an event in which very little occurs that hasn’t been scripted or premeditated, a lot of people were called upon to figure out how to act in real time in front of an audience of hundreds of millions. It’s proverbial that nobody tells the truth in Hollywood, an industry that inspires insider accounts with titles like Hello, He Lied and Which Lie Did I Tell? A mixup like the one at the Oscars might have been expressly conceived as a stress test to bring out everyone’s true colors. Yet Horowitz said what he did. And I suspect that it will do more for his career than even an outright win would have accomplished.

Kellyanne Conway

It also reminds me of other instances over the last year in which we’ve learned exactly what someone thinks. When we get in trouble for a remark picked up on a hot mike, we often say that it doesn’t reflect who we really are—which is just another way of stating that it doesn’t live up to the versions of ourselves that we create for public consumption. It’s far crueler, but also more convincing, to argue that it’s exactly in those unguarded, unscripted moments that our true selves emerge. (Freud, whose intuition on such matters was uncanny, was onto something when he focused on verbal mistakes and slips of the tongue.) The justifications that we use are equally revealing. Maybe we dismiss it as “locker room talk,” even if it didn’t take place anywhere near a locker room. Kellyanne Conway excused her reference to the nonexistent Bowling Green Massacre by saying “I misspoke one word,” even though she misspoke it on three separate occasions. It doesn’t even need to be something said on the spur of the moment. At his confirmation hearing for the position of ambassador to Israel, David M. Friedman apologized for an opinion piece he had written before the election: “These were hurtful words, and I deeply regret them. They’re not reflective of my nature or my character.” Friedman also said that “the inflammatory rhetoric that accompanied the presidential campaign is entirely over,” as if it were an impersonal force that briefly took possession of its users and then departed. We ask to be judged on our most composed selves, not the ones that we reveal at our worst.

To some extent, that’s a reasonable request. I’ve said things in public and in private that I’ve regretted, and I wouldn’t want to be judged solely on my worst moments as a writer or parent. At a time when a life can be ruined by a single tweet, it’s often best to err on the side of forgiveness, especially when there’s any chance of misinterpretation. But there’s also a place for common sense. You don’t refer to an event as a “massacre” unless you really think of it that way or want to encourage others to do so. And we judge our public figures by what they say when they think that nobody is listening, or when they let their guard down. It might seem like an impossibly high standard, but it’s also the one that’s effectively applied in practice. You can respond by becoming inhumanly disciplined, like Obama, who in a decade of public life has said maybe five things he has reason to regret. Or you can react like Trump, who says five regrettable things every day and trusts that their sheer volume will reduce them to a kind of background noise—which has awakened us, as Trump has in so many other ways, to a political option that we didn’t even know existed. Both strategies are exhausting, and most of us don’t have the energy to pursue either path. Instead, we’re left with the practical solution of cultivating the inner voice that, as I wrote last week, allows us to act instinctively. Kant writes: “Live your life as though your every act were to become a universal law.” Which is another way of saying that we should strive to be the best version of ourselves at all times. It’s probably impossible. But it’s easier than wearing a mask.

Written by nevalalee

February 28, 2017 at 9:00 am

Swallowing the turkey

with 2 comments

Benjamin Disraeli

Lord Rowton…says that he once asked Disraeli what was the most remarkable, the most self-sustained and powerful sentence he knew. Dizzy paused for a moment, and then said, “Sufficient unto the day is the evil thereof.”

—Augustus J.C. Hare, The Story of My Life

Disraeli was a politician and a novelist, which is an unusual combination, and he knew his business. Politics and writing have less to do with each other than a lot of authors might like to believe, and the fact that you can create a compelling world on paper doesn’t mean that you can do the same thing in real life. (One of the hidden themes of Astounding is that the skills that many science fiction writers acquired in organizing ideas on the page turned out to be notably inadequate when it came to getting anything done during World War II.) Yet both disciplines can be equally daunting and infuriating to novices, in large part because they both involve enormously complicated projects—often requiring years of effort—that need to be approached one day at a time. A single day’s work is rarely very satisfying in itself, and you have to cling to the belief that countless invisible actions and compromises will somehow result in something real. It doesn’t always happen, and even if it does, you may never get credit or praise. The ability to deal with the everyday tedium of politics or writing is what separates professionals from amateurs. And in both cases, the greatest accomplishments are usually achieved by freaks who can combine an overarching vision with a finicky obsession with minute particulars. As Eugène-Melchior de Vogüé, who was both a diplomat and literary critic, said of Tolstoy, it requires “a queer combination of the brain of an English chemist with the soul of an Indian Buddhist.”

And if you go into either field without the necessary degree of patience, the results can be unfortunate. If you’re a writer who can’t subordinate yourself to the routine of writing on a daily basis, the most probable outcome is that you’ll never finish your novel. In politics, you end up with something very much like what we’ve all observed over the last few weeks. Regardless of what you might think about the presidential refugee order, its rollout was clearly botched, thanks mostly to a president and staff that want to skip over all the boring parts of governing and get right to the good stuff. And it’s tempting to draw a contrast between the incumbent, who achieved his greatest success on reality television, and his predecessor, a detail-oriented introvert who once thought about becoming a novelist. (I’m also struck, yet again, by the analogy to L. Ron Hubbard. He spent most of his career fantasizing about a life of adventure, but when he finally got into the Navy, he made a series of stupid mistakes—including attacking two nonexistent submarines off the coast of Oregon—that ultimately caused him to be stripped of his command. The pattern repeated itself so many times that it hints at a fundamental aspect of his personality. He was too impatient to deal with the tedious reality of life during wartime, which failed to live up to the version he had dreamed for himself. And while I don’t want to push this too far, it’s hard not to notice the difference between Hubbard, who cranked out his fiction without much regard for quality, and Heinlein, a far more disciplined writer who was able to consciously channel his own natural impatience into a productive role at the Philadelphia Navy Yard.)

R.H. Blyth

Which brings us back to the sentence that impressed Disraeli. It’s easy to interpret it as an admonition not to think about the future, which isn’t quite right. We can start by observing that it comes at the end of what The Five Gospels notes is possibly “the longest connected discourse that can be directly attributed to Jesus.” It’s the one that asks us to consider the birds of the air and the lilies of the field, which, for a lot of us, prompts an immediate flashback to The Life of Brian. (“Consider the lilies?” “Uh, well, the birds, then.” “What birds?” “Any birds.” “Why?” “Well, have they got jobs?”) But whether or not you agree with the argument, it’s worth noticing that the advice to focus on the evils of each day comes only after an extended attempt at defining a larger set of values—what matters, what doesn’t, and what, if anything, you can change by worrying. You’re only in a position to figure out how best to spend your time after you’ve considered the big questions. As the physician William Osler put it:

[My ideal is] to do the day’s work well and not to bother about tomorrow. You may say that is not a satisfactory ideal. It is; and there is not one which the student can carry with him into practice with greater effect. To it more than anything else I owe whatever success I have had—to this power of settling down to the day’s work and trying to do it well to the best of my ability, and letting the future take care of itself.

This has important implications for both writers and politicians, as well as for progressives who wonder how they’ll be able to get through the next twenty-four hours, much less the next four years. When you’re working on any important project, even the most ambitious agenda comes down to what you’re going to do right now. In On Directing Film, David Mamet expresses it rather differently:

Now, you don’t eat a whole turkey, right? You take off the drumstick and you take a bite of the drumstick. Okay. Eventually you get the whole turkey done. It’ll probably get dry before you do, unless you have an incredibly good refrigerator and a very small turkey, but that is outside the scope of this lecture.

A lot of frustration in art, politics, and life in general comes from attempting to swallow the turkey in one bite. Jesus, I think, was aware of the susceptibility of his followers to grandiose but meaningless gestures, which is why he offered up the advice, so easy to remember and so hard to follow, to focus on the given day while keeping the kingdom of heaven in mind. Nearly every piece of practical wisdom in any field is about maintaining that double awareness. Fortunately, it goes in both directions: small acts of discipline aid us in grasping the whole, and awareness of the whole tells us what to do in the moment. As R.H. Blyth says of Zen: “That is all religion is: eat when you are hungry, sleep when you are tired.” And don’t try to eat the entire turkey at once.
