Alec Nevala-Lee

Thoughts on art, creativity, and the writing life.

The world spins

Note: This post discusses plot points from Sunday’s episode of Twin Peaks.

“Did you call me five days ago?” Dark Cooper asks the shadowy shape in the darkness in the most recent episode of Twin Peaks. It’s a memorable moment for a number of reasons, not the least of which is that he’s addressing the disembodied Philip Jeffries, who was played by David Bowie in Fire Walk With Me, and is now portrayed by a different voice actor and what looks to be a sentient tea kettle. But that didn’t even strike me as the weirdest part. What hit me hardest was the implication that everything we’ve seen so far this season has played out over less than a week in real time—the phone call to which Dark Cooper is referring occurred during the second episode. Admittedly, there are indications that the events onscreen have unfolded in a nonlinear fashion—not in a way that draws attention to itself, but in one that allows David Lynch and Mark Frost to cut between storylines according to their own rhythms, rather than being tied down to chronology. (The text message that Dark Cooper sends at the end of the scene was received by Diane a few episodes ago, while Audrey’s painful interactions with Charlie apparently consist of a single conversation parceled out over multiple weeks. And the Dougie Jones material certainly feels as if it occurs over a longer period than five days, although it’s probably possible to squeeze it into that timeline if necessary.) And if viewers are brought up short by the contrast between the show’s internal calendar and its emotional duration, it’s happened before. When I look back at the first two seasons of the show, I’m still startled to realize that every event from Laura’s murder to Cooper’s possession unfolds over just one month.

Why does this feel so strange? The obvious answer is that we get to know these characters over a period of years, while we really only see them in action for a few weeks, and their interactions with one another end up carrying more weight than you might expect for people who, in some cases, met only recently. And television is the one medium that routinely creates that kind of disparity. It’s inherently impossible for a movie to take longer to watch than the events that it depicts—apart from a handful, like Run Lola Run or Vantage Point, that present scrambled timelines or stage the same action from multiple perspectives—and it usually compresses days or weeks of action into a couple of hours. With books, the length of the act of reading varies from one reader to the next, and we’re unlikely to find it particularly strange that it can take months to finish Ulysses, which recounts the events of a single day. It’s only television, particularly when experienced in its original run, that presents such a sharp contrast between narrative and emotional time, even if we don’t tend to worry about this with sitcoms, procedurals, and other nonserialized shows. (One interesting exception consists of shows set in high school or college, in which it’s awfully tempting to associate each season with an academic year, although there’s no reason why a series like Community couldn’t take place over a single semester.) Shows featuring children or teenagers have a built-in clock that reminds us of how time is passing in the real world, as Urkel or the Olsen twins progress inexorably toward puberty. And occasionally there’s an outlier like The Simpsons, in which a quarter of a century’s worth of storylines theoretically takes place within the same year or so.

But the way in which a serialized show can tell a story that occurs over a short stretch of narrative time while simultaneously drawing on the emotional energy that builds up over years is one of the unsung strengths of the entire medium. Our engagement with a favorite show that airs on a weekly basis isn’t just limited to the hour that we spend watching it every Sunday, but expands to fill much of the time in between. If a series really matters to us, it gets into our dreams. (I happened to miss the initial airing of this week’s episode because I was on vacation with my family, and I’ve been so conditioned to get my fix of Twin Peaks on a regular basis that I had a detailed dream about an imaginary episode that night—which hasn’t happened to me since I had to wait a week to watch the series finale of Breaking Bad. As far as I can remember, my dream involved the reappearance of Sheriff Harry Truman, who has been institutionalized for years, with his family and friends describing him euphemistically as “ill.” And I wouldn’t mention it here at all if this weren’t a show that has taught me to pay close attention to my dreamlife.) Many of us also spend time between episodes in reading reviews, discussing plot points online, and catching up with various theories about where it might go next. In a few cases, as with Westworld, this sort of active analysis can be detrimental to the experience of watching the show itself, if you see it as a mystery with clues that the individual viewer is supposed to crack on his or her own. For the most part, though, it’s an advantage, with time conferring an emotional weight that the show might not have otherwise had. As the world spins, the series stays where it was, and we’ve all changed in the meantime.

The revival of Twin Peaks takes this tendency and magnifies it beyond anything else we’ve seen before, with its fans investing it with twenty-five years of accumulated energy—and this doesn’t even account for the hundreds of hours that I spent listening to the show’s original soundtrack, which carries an unquantifiable duration of its own. And one of the charming things about this season is how Lynch and Frost seem to have gone through much the same experience themselves, mulling over their own work until stray lines and details take on a greater significance. When Dark Cooper goes to his shadowy meeting above a convenience store, it’s paying off on a line that Mike, the one-armed man, uttered in passing during a monologue from the first Bush administration. The same applies to the show’s references to a mysterious “Judy,” whom Jeffries mentioned briefly just before disappearing forever. I don’t think that these callbacks reflect a coherent plan that Lynch and Frost have been keeping in their back pockets for decades so much as a process of going back to tease out meanings that even they didn’t know were there. Smart writers of serialized narratives learn to drop vague references into their work that might pay off later on. (Two of my favorite examples are Spock’s “Remember” at the end of Star Trek II: The Wrath of Khan, and the Second Foundation, which Isaac Asimov introduced in case he needed it in a subsequent installment.) What Twin Peaks is doing now is analogous to what the writers of Breaking Bad did when they set up problems that they didn’t know how to solve, trusting that they would figure it out eventually. The only difference is that Lynch and Frost, like the rest of us, have had more time to think about it. And it might take us another twenty-five years before we—or they—figure out what they were actually doing.

Written by nevalalee

August 22, 2017 at 9:08 am

The sense of an ending

Note: This post discusses details from last night’s episode of Twin Peaks.

When I was working as a film critic in college, one of my first investments was a wristwatch that could glow in the dark. If you’re sitting through an interminable slog of a movie, sometimes you simply want to know how much longer the pain will last, and, assuming that you have a sense of the runtime, a watch puts a piece of narrative information at your disposal that has nothing to do with the events of the story itself. Even if you’re enjoying yourself, the knowledge that a film has twenty minutes left to run—which often happens if you’re watching it at home and staring right at the numbers on the display of your DVD player—affects the way you think about certain scenes. A climax plays differently near the end, as opposed to somewhere in the middle. The length of a work of art is a form of metadata that influences the way we watch movies and read books, as Douglas Hofstadter points out in Gödel, Escher, Bach:

You have undoubtedly noticed how some authors go to so much trouble to build up great tension a few pages before the end of their stories—but a reader who is holding the book physically in his hands can feel that the story is about to end. Hence, he has some extra information which acts as an advance warning, in a way. The tension is a bit spoiled by the physicality of the book. It would be so much better if, for instance, there were a lot of padding at the end of novels…A lot of extra printed pages which are not part of the story proper, but which serve to conceal the exact location of the end from a cursory glance, or from the feel of the book.

Not surprisingly, I tend to think about the passage of time the most when I’m not enjoying the story. When I’m invested in the experience, I’ll do the opposite: I’ll actively resist glancing at the clock or looking to see how much time has elapsed. When I know that the credits are going to roll no matter what within the next five minutes, it amounts to a spoiler. With Twin Peaks, which has a narrative that can seemingly be cut anywhere, like yard goods, I try not to think about how long I’ve been watching. Almost inevitably, the episode ends before I’m ready for it, in part because it provides so few of the usual cues that we’ve come to expect from television. There aren’t any commercial breaks, obviously, but the stories also don’t divide neatly into three or four acts. In the past, most shows, even those that aired without interruption on cable networks, followed certain structural conventions that allowed us to guess when the story was coming to an end. (This is even more true of Hollywood movies, which, with their mandated beat sheets—the inciting incident, the midpoint, the false dawn, the crisis—practically tell the audience how much longer they need to pay attention, which may be the reason why such rules exist in the first place.) Now that streaming services allow serialized stories to run for hours without worrying about the narrative shape of individual episodes, this is less of an issue, and it can be a mixed blessing. But at its best, on a show like Twin Peaks, it creates a feeling of narrative suspension, cutting us off from any sense of the borders of the episode until the words Starring Kyle MacLachlan appear suddenly onscreen.

Yet there’s also another type of length of which we can’t help but be conscious, at least if we’re the kind of viewers likely to be watching Twin Peaks in the first place. We know that there are eighteen episodes in this season, the fourteenth of which aired last night, and the fact that we only have four hours left to go adds a degree of tension to the narrative that wouldn’t be there if we weren’t aware of it. This external pressure also depends on the knowledge that this is the only new season of the show that we’re probably going to get, which, given how hard it is to avoid this sort of news these days, is reasonable to expect of most fans. Maybe we’ve read the Rolling Stone interview in which David Lynch declared, in response to the question of whether there would be additional episodes: “I have no idea. It depends on how it goes over. You’re going to have to wait and see.” Or we’ve seen that David Nevins of Showtime said to Deadline: “It was always intended to be one season. A lot of people are speculating but there’s been zero contemplation, zero discussions other than fans asking me about it.” Slightly more promisingly, Kyle MacLachlan told the Hollywood Reporter: “I don’t know. David has said: ‘Everything is Twin Peaks.’ It leads me to believe that there are other stories to tell. I think it’s just a question of whether David and Mark want to tell them. I don’t know.” And Lynch even said to USA Today: “You never say never.” Still, it’s fair to say that the current season was conceived, written, and filmed to stand on its own, and until we know otherwise, we have to proceed under the assumption that this is the last time we’ll ever see these characters.

This has important implications for how we watch it from one week to the next. For one thing, it means that episodes near the end will play differently than they would have earlier in the season. Last night’s installment was relatively packed with incident—the revelation of the identity of Diane’s estranged half sister, Andy’s trip into the void, the green gardening glove, Monica Bellucci—but we’re also aware of how little time remains for the show to pay off any of these developments. Most series would have put an episode like this in the fourth slot, rather than the fourteenth, and given the show’s tendency to drop entire subplots for months, it leaves us keenly aware that many of these storylines may never be resolved. Every glimpse of a character, old or new, feels like a potential farewell. And every minute that passes without the return of Agent Cooper increases our sense of urgency. (If this were the beginning of an open-ended run, rather than the presumptive final season, the response to the whole Dougie Jones thread would have been very different.) This information has nothing to do with the contents of the show itself, which, with one big exception, haven’t changed much since the premiere. But it’s hard not to think about it. In some ways, this may be the greatest difference between this season and the initial run, since there was always hope that the series would be renewed by ABC, or that Fire Walk With Me would tie off any loose ends. Unlike the first generation of fans, we know that this is it, and it can hardly fail to affect our impressions, even if Lynch still whispers in our heads: “You never say never.”

Written by nevalalee

August 14, 2017 at 8:48 am

Bester of both worlds

In 1963, the editor Robert P. Mills put together an anthology titled The Worlds of Science Fiction, for which fifteen writers—including Isaac Asimov, Robert A. Heinlein, and Ray Bradbury—were invited to contribute one of their favorite stories. Mills also approached Alfred Bester, the author of the classic novels The Demolished Man and The Stars My Destination, who declined to provide a selection, explaining: “I don’t like any of [my stories]. They’re all disappointments to me. This is why I rarely reread my old manuscripts; they make me sick. And when, occasionally, I come across a touch that pleases me, I’m convinced that I never wrote it—I believe that an editor added it.” When Mills asked if he could pick a story that at least gave him pleasure in the act of writing it, Bester responded:

No. A writer is extremely schizophrenic; he is both author and critic. As an author he may have moments of happiness while he’s creating, but as a critic he is indifferent to his happiness. It cannot influence his merciless appraisal of his work. But there’s an even more important reason. The joy you derive from creating a piece of work has no relationship to the intrinsic value of the work. It’s a truism on Broadway that when an actor particularly enjoys the performance he gives, it’s usually his worst. It’s also true that the story which gives the author the most pain is often his best.

Bester finally obliged with the essay “My Private World of Science Fiction,” which Mills printed as an epilogue. Its centerpiece is a collection of two dozen ideas that Bester plucked from his commonplace book, which he describes as “the heavy leather-bound journal that I’ve been keeping for twenty years.” These scraps and fragments, Bester explains, are his best works, and they inevitably disappoint him when they’re turned into stories. And the bits and pieces that he provides are often dazzling in their suggestiveness: “A circulating brain library in a Womrath’s of the future, where you can rent a brain for any purpose.” “A story about weather smugglers.” “There must be a place where you can go to remember all the things that never happened to you.” And my personal favorite:

The Lefthanded Killer: a tour de force about a murder which (we tell the reader immediately) was committed by a lefthanded killer. But we show, directly or indirectly, that every character is righthanded. The story starts with, “I am the murderer,” and then goes on to relate the mystery, never revealing who the narrator is…The final twist; killer-narrator turns out to be an unborn baby, the survivor of an original pair of twins. The lefthand member killed his righthand brother in the womb. The entire motivation for the strange events that follow is the desire to conceal the crime. The killer is a fantastic and brilliant monster who does not realize that the murder would have gone unnoticed.

Every writer has a collection of story fragments like this—mine takes up a page in a notebook of my own—but few ever publish theirs, and it’s fascinating to wonder at Bester’s motivations for making his unused ideas public. I can think of three possible reasons. The first, and perhaps the most plausible, is that he knew that many of these premises were more interesting in capsule form than when written out as full stories, and so, in acknowledgement of what I’ve called the Borges test, he simply delivered them that way. (He also notes that ideas are cheap: “The idea itself is relatively unimportant; it’s the writer who develops it that makes the big difference…It is only the amateur who worries about ‘his idea being stolen.'”) Another possibility is that he wanted to convey how stray thoughts in a journal like this can mingle and combine in surprising ways, which is one of the high points of any writer’s life:

That’s the wonder of the Commonplace Book; the curious way an incomprehensible note made in 1950 can combine with a vague entry made in 1960 to produce a story in 1970. In A Life in the Day of a Writer, perhaps the most brilliant portrait of an author in action ever painted, Tess Slesinger wrote: “He rediscovered the miracle of something on page twelve tying up with something on page seven which he had not understood when he wrote it…”

Bester concludes of his ideas: “They’ll cross-pollinate, something totally unforeseen will emerge, and then, alas, I’ll have to write the story and destroy it. This is why your best is always what you haven’t written yet.”

Yet the real explanation, I suspect, lies in that line “I’ll have to write the story,” which gets at the heart of Bester’s remarkable career. In reality, Bester is all but unique among major science fiction writers in that he never seemed to “have to write” anything. He contributed short stories to Astounding for a few heady years before World War II, then disappeared for the next decade to do notable work in comic books, radio, and television. Even after he returned, there was a sense that science fiction only occupied part of his attention. He published a mainstream novel, wrote television scripts, and worked as a travel writer and senior editor for the magazine Holiday, and the sheer number of ideas that he never used seems to reflect the fact that he only turned to science fiction when he really felt like it. (Bester should have been an ideal writer for John W. Campbell, who, if he could have managed it, would have loved a circle of writers that consisted solely of professional men in other fields who wrote on the side—they were more likely to take his ideas and rewrite to order than either full-time pulp authors or hardcore science fiction fans. And the story of how Campbell alienated Bester over the course of a single meeting is one of the most striking anecdotes from the whole history of the genre.) Most professional writers couldn’t afford to allow their good ideas to go to waste, but Bester was willing to let them go, both because he had other sources of income and because he knew that there was plenty more where that came from. I still think of Heinlein as the genre’s indispensable writer, but Bester might be a better role model, if only because he seemed to understand, rightly, that there were realms to explore beyond the worlds of science fiction.

Written by nevalalee

August 11, 2017 at 9:33 am

The conveyor belt

For all the endless discussion of various aspects of Twin Peaks, one incongruous fact sometimes feels neglected: it had one of the most attractive casts in television history. In that respect—and maybe in that one alone—it was like just about every other series that ever existed. From prestige dramas to reality shows to local newscasts, the story of television has inescapably been that of beautiful men and women on camera. A show like The Hills, which was one of my guilty pleasures, seemed to be consciously trying to see how long it could coast on surface beauty alone, and nearly every series, ambitious or otherwise, has used the attractiveness of its actors as a commercial or artistic strategy. (In one of the commentary tracks on The Simpsons, a producer describes how a network executive might ask indirectly about the looks of the cast of a sitcom: “So how are we doing aesthetically?”) If this seemed even more pronounced on Twin Peaks, it was partially because, like Mad Men, it took its conventionally glamorous actors into dark, unpredictable places, and also because David Lynch had an eye for a certain kind of beauty, both male and female, that was more distinctive than that of the usual soap opera star. He’s continued this trend in the third season, which has been populated so far by such striking presences as Chrysta Bell, Ben Rosenfield, and Madeline Zima, and last night’s episode features an extended, very funny scene between a delighted Gordon Cole and a character played by Bérénice Marlohe, who, with her red lipstick and “très chic” spike heels, might be the platonic ideal of his type.

Lynch isn’t the first director to display a preference for actors, particularly women, with a very specific look—although he’s thankfully never taken it as far as his precursor Alfred Hitchcock did. And the notion that a film or television series can consist of little more than following around two beautiful people with a camera has a long and honorable history. My two favorite movies made in my lifetime, Blue Velvet and Chungking Express, both understand this implicitly. It’s fair to say that the second half of the latter film would be far less watchable if it didn’t involve Tony Leung and Faye Wong, two of the most attractive people in the world, and Wong Kar-Wai, like so many filmmakers before him, uses their allure as a psychological hook to take us into strange, funny, romantic places. Blue Velvet is a much darker work, but it employs a similar lure, with the actors made up to look like illustrations of themselves. In a Time cover story on Lynch from the early nineties, Richard Corliss writes of Kyle MacLachlan’s face: “It is a startling visage, as pure of line as an art deco vase, with soft, all-American features and a comic-book hero’s jutting chin—you could park a Packard on it.” It echoes what Pauline Kael says of Isabella Rossellini in Blue Velvet: “She even has the kind of nostrils that cover artists can represent accurately with two dots.” MacLachlan’s chin and Rossellini’s nose would have caught our attention in any case, but it’s also a matter of lighting and makeup, and Lynch shoots them to emphasize their roots in the pulp tradition, or, more accurately, in the subconscious store of images that we take from those sources. And the casting gets him halfway there.

This leaves us in a peculiar position when it comes to the third season of Twin Peaks, which, both by nature and by design, is about aging. Mark Frost said in an interview: “It’s an exercise in engaging with one of the most powerful themes in all of art, which is the ruthless passage of time…We’re all trapped in time and we’re all going to die. We’re all traveling along this conveyor belt that is relentlessly moving us toward this very certain outcome.” One of the first, unforgettable images from the show’s promotional materials was Kyle MacLachlan’s face, a quarter of a century older, emerging from the darkness into light, and our feelings toward these characters when they were younger inevitably shape the way we regard them now. I felt this strongly in two contrasting scenes from last night’s episode, which offers our first extended look at Sarah Palmer, played by Grace Zabriskie, who delivers a freakout in a grocery store that reminds us of how much we’ve missed and needed her—it’s one of the most electrifying moments of the season. And we also finally see Audrey Horne again, in a brutally frustrating sequence that feels to me like the first time that the show’s alienating style comes off as a miscalculation, rather than as a considered choice. Audrey isn’t just in a bad place, which we might have expected, but a sad, unpleasant one, with a sham marriage and a monster of a son, and she doesn’t even know the worst of it yet. It would be a hard scene to watch with anyone, but it’s particularly painful when we set it against our first glimpse of Audrey in the original series, when we might have said, along with the Norwegian businessman at the Great Northern Hotel: “Excuse me, is there something wrong, young pretty girl?”

Yet the two scenes aren’t all that dissimilar. Both Sarah and Audrey are deeply damaged characters who could fairly say: “Things can happen. Something happened to me.” And I can only explain away the difference by confessing that I was a little in love with Audrey in my early teens. Using those feelings against us—much as the show resists giving us Dale Cooper again, even as it extravagantly develops everything around him—must have been what Lynch and Frost had in mind. And it isn’t the first time that this series has toyed with our emotions about beauty and death. The original dream girl of Twin Peaks, after all, was Laura Palmer herself, as captured in two of its most indelible images: Laura’s prom photo, and her body wrapped in plastic. (Sheryl Lee, like January Jones in Mad Men, was originally cast for her look, and only later did anyone try to find out whether or not she could act.) The contrast between Laura’s lovely features and her horrifying fate, in death and in the afterlife, was practically the motor on which the show ran. Her face still opens every episode of the revival, dimly visible in the title sequence, but it also ended each installment of the original run, gazing out from behind the prison bars of the closing credits to the strains of “Laura Palmer’s Theme.” In the new season, the episodes generally conclude with whatever dream pop band Lynch feels like showcasing, usually with a few cool women, and I wouldn’t want to give that up. But I also wonder whether we’re missing something when we take away Laura at the end. This season began with Cooper being asked to find her, but she often seems like the last thing on anyone’s mind. Twin Peaks never allowed us to forget her before, because it left us staring at her photograph each week, which was the only time that one of its beautiful faces seemed to be looking back at us.

The driver and the signalman

In his landmark book Design With Nature, the landscape architect Ian L. McHarg shares an anecdote from the work of an English biologist named George Scott Williamson. McHarg, who describes Williamson as “a remarkable man,” mentions him in passing in a discussion of the social aspects of health: “He believed that physical, mental, and social health were unified attributes and that there were aspects of the physical and social environment that were their corollaries.” Before diving more deeply into the subject, however, McHarg offers up an apparently unrelated story that was evidently too interesting to resist:

One of the most endearing stories of this man concerns a discovery made when he was undertaking a study of the signalmen who maintain lonely vigils while operating the switches on British railroads. The question to be studied was whether these lonely custodians were subject to boredom, which would diminish their dependability. It transpired that lonely or not, underpaid or not, these men had a strong sense of responsibility and were entirely dependable. But this was not the major perception. Williamson learned that every single signalman, from London to Glasgow, could identify infallibly the drivers of the great express trains which flashed past their vision at one hundred miles per hour. The drivers were able to express their unique personalities through the unlikely and intractable medium of some thousand tons of moving train, passing in a fraction of a second. The signalmen were perceptive to this momentary expression of the individual, and Williamson perceived the power of the personality.

I hadn’t heard of Williamson before reading this wonderful passage, and all that I know about him is that he was the founder of the Peckham Experiment, an attempt to provide inexpensive health and recreation services to a neighborhood in Southeast London. The story of the signalmen seems to make its first appearance in his book Science, Synthesis, and Sanity: An Inquiry Into the Nature of Living, which he cowrote with his wife and collaborator Innes Hope Pearse. They relate:

Or again, sitting in a railway signal box on a dark night, in the far distance from several miles away came the rumble of the express train from London. “Hallo,” said my friend the signalman. “Forsyth’s driving her—wonder what’s happened to Courtney?” Next morning, on inquiry of the stationmaster at the junction, I found it was true. Courtney had been taken ill suddenly and Forsyth had deputized for him—all unknown, of course, to the signalman who in any case had met neither Forsyth nor Courtney. He knew them only as names on paper and by their “action-pattern” impressed on a dynamic medium—a unique action-pattern transmitted through the rumble of an unseen train. Or, in a listening post with nothing visible in the sky, said the listener: “That’s ‘Lizzie,’ and Crompton’s flying her.” “Lizzie” an airplane, and her pilot imprinting his action-pattern on her course.

And while Williamson and Pearse are mostly interested in the idea of an individual’s “action-pattern” being visible in an unlikely medium, it’s hard not to come away more struck, like McHarg, by the image of the lone signalman, the passing machine, and the transient moment of connection between them.

As I read over this, it occurred to me that it perfectly encapsulated our relationship with a certain kind of pop culture. We’re the signalmen, and the movie or television show is the train. As we sit in our living rooms, lonely and relatively isolated, something passes across our field of vision—an episode of Game of Thrones, say, which often feels like a locomotive to the face. This is the first time that we’ve seen it, but it represents the end result of a process that has unfolded for months or years, as the episode was written, shot, edited, scored, and mixed, with the contributions of hundreds of men and women we wouldn’t be able to name. As we experience it, however, we see the glimmer of another human being’s personality, as expressed through the narrative machine. It isn’t just a matter of the visible choices made on the screen, but of something less definable, a “style” or “voice” or “attitude,” behind which, we think, we can make out the amorphous factors of influence and intent. We identify an artist’s obsessions, hangups, and favorite tricks, and we believe that we can recognize the mark of a distinctive style even when it goes uncredited. Sometimes we have a hunch about what happened on the set that day, or the confluence of studio politics that led to a particular decision, even if we have no way of knowing it firsthand. (This was one of the tics of Pauline Kael’s movie reviews that irritated Renata Adler: “There was also, in relation to filmmaking itself, an increasingly strident knowingness: whatever else you may think about her work, each column seemed more hectoringly to claim, she certainly does know about movies. And often, when the point appeared most knowing, it was factually false.”) We may never know the truth, but it’s enough if a theory seems plausible. And the primary difference between us and the railway signalman is that we can share our observations with everyone in sight.

I’m not saying that these inferences are necessarily incorrect, any more than the signalmen were wrong when they recognized the personal styles of particular drivers. If Williamson’s account is accurate, they were often right. But it’s worth emphasizing that the idea that you can recognize a driver from the passage of a train is no less strange than the notion that we can know something about, say, Christopher Nolan’s personality from Dunkirk. Both are “unlikely and intractable” mediums that serve as force multipliers for individual ability, and in the case of a television show or movie, there are countless unseen variables that complicate our efforts to attribute anything to anyone, much less pick apart the motivations behind specific details. The auteur theory in film represents an attempt to read movies like novels, but as Thomas Schatz pointed out decades ago in his book The Genius of the System, trying to read Casablanca as the handiwork of Michael Curtiz, rather than that of all of its collaborators taken together, is inherently problematic. And this is easy to forget. (I was reminded of this by the recent controversy over David Benioff and D.B. Weiss’s pitch for their Civil War alternate history series Confederate. I agree with the case against it that the critic Roxane Gay presents in her opinion piece for the New York Times, but the fact that we’re closely scrutinizing a few paragraphs for clues about the merits of a show that doesn’t even exist only hints at how fraught the conversation will be after it actually premieres.) There’s a place for informed critical discussion about any work of art, but we’re often drawing conclusions based on the momentary passage of a huge machine before our eyes, and we don’t know much about how it got there or what might be happening inside. Most of us aren’t even signalmen, who are a part of the system itself. We’re trainspotters.

Off the hook

In his wonderful interview in John Brady’s The Craft of the Screenwriter, Robert Towne—who might best be described as the Christopher McQuarrie of his time—tosses off a statement that is typically dense with insight:

One of the things that people say when they first start writing movies is, “Jeez, I have this idea for a movie. This is the way it opens. It’s a really great opening.” And of course they don’t know where to go from there. That’s true not only of people who are just thinking of writing movies, but very often of people who write them. They’re anxious for a splashy beginning to hook an audience, but then you end up paying for it with an almost mathematical certainty. If you have a lot of action and excitement at the beginning of a picture, there’s going to have to be some explanation, some character development somewhere along the line, and there will be a big sag about twenty minutes after you get into a film with a splashy opening. It’s something you learn. I don’t know if you’d call it technique. It’s made me prefer soft openings for films. It’s been my experience that an audience will forgive you almost anything at the beginning of the picture, but almost nothing at the end. If they’re not satisfied with the end, nothing that led up to it is going to help.

There’s a lot to absorb and remember here, particularly the implication, which I love, that a narrative has a finite amount of energy, and that if you use up too much of it at the start, you end up paying for it later.

For now, though, I’d like to focus on what Towne says about openings. He’s right in cautioning screenwriters against trying to start at a high point, which may not even be possible: I’ve noted elsewhere that few of the great scenes that we remember from movies come at the very beginning, since they require a degree of setup to really pay off. Yet at this very moment, legions of aspiring writers are undoubtedly sweating over a perfect grabber opening for their screenplay. In his interview with Brady, which was published in 1981, Towne blames this on television:

Unlike television, you don’t have to keep people from turning the channel to another network when they’re in the theater. They’ve paid three-fifty or five dollars and if the opening ten or fifteen minutes of a film are a little slow, they are still going to sit a few minutes, as long as it eventually catches hold. I believe in soft openings…Why bother to capture [the audience’s] interest at the expense of the whole film? They’re there. They’re not going anywhere.

William Goldman draws a similar contrast between the two forms in Adventures in the Screen Trade, writing a clumsy opening hook for a screenplay—about a girl being chased through the woods by a “disfigured giant”—and explaining why it’s bad: “Well, among other things, it’s television.” He continues:

This paragraph contains all that I know about writing for television. They need a hook. And they need it fast. Because they’re panicked you’ll switch to ABC. So TV stuff tends to begin with some kind of grabber. But in a movie, and only at the beginning of a movie, we have time. Not a lot, but some.

And while a lot has changed since Towne and Goldman made these statements, including the “three-fifty” that used to be the price of a ticket, the underlying point remains sound. Television calls for a different kind of structure and pacing than a movie, and screenwriters shouldn’t confuse the two. Yet I don’t think that the average writer who is fretting about the opening of his script is necessarily making that mistake, or thinking in terms of what viewers will see in a theater. I suspect that he or she is worrying about a very different audience—the script reader at an agency or production company. A moviegoer probably won’t walk out if the opening doesn’t grab them, but the first reader of a screenplay will probably toss it aside if the first few pages don’t work. (This isn’t just the case with screenwriters, either. Writers of short stories are repeatedly told that they need to hook the reader in the first paragraph, and the result is often a kind of palpable desperation that can actively turn off editors.) One reason, of course, why Towne and Goldman can get away with “soft” openings is that they’ve been successful enough to be taken seriously, both in person and in print. As Towne says:

There have been some shifts in attitudes toward me. If I’m in a meeting with some people, and if I say, “Look, fellas, I don’t think it’s gonna work this way,” there is a tendency to listen to me more. Before, they tended to dismiss a little more quickly than now.

Which, when you think about it, is exactly the same phenomenon as giving the script the benefit of the doubt—it buys Towne another minute or two to make his point, which is all a screenwriter can ask.

The sad truth is that a script trying to stand out from the slush pile and a filmed narrative have fundamentally different needs. In some cases, they’re diametrically opposed. Writers trying to break into the business can easily find themselves caught between the need to hype the movie on the page and their instincts about how the story deserves to be told, and that tension can be fatal. A smart screenwriter will often draw a distinction between the selling version, which is written with an eye to the reader, and the shooting script, which provides the blueprint for the movie, but most aspiring writers don’t have that luxury. And if we think of television as a model for dealing with distracted viewers or readers, it’s only going to get worse. In a recent essay for Uproxx titled “Does Anyone Still Have Time to Wait For Shows to Get Good?”, the legendary critic Alan Sepinwall notes that the abundance of great shows makes it hard to justify waiting for a series to improve, concluding:

We all have a lot going on, in both our TV and non-TV lives, and if you don’t leave enough breadcrumbs in the early going, your viewers will just wander off to watch, or do, something else. While outlining this post, I tweeted a few things about the phenomenon, phrasing it as “It Gets Better After Six Episodes”—to which many people replied with incredulous variations on, “Six? If it’s not good after two, or even one, I’m out, pal.”

With hundreds of shows instantly at our disposal—as opposed to the handful of channels that existed when Towne and Goldman were speaking—we’ve effectively been put into the position of a studio reader with a stack of scripts. If we don’t like what we see, we can move on. The result has been the emotionally punishing nature of so much peak television, which isn’t about storytelling so much as heading off distraction. And if it sometimes seems that many writers can’t do anything else, it’s because it’s all they were ever taught to do.

Frogs for snakes

If you’re the sort of person who can’t turn away from a show business scandal with leaked memos, insider anecdotes, and accusations of bad conduct on both sides, the last two weeks have offered a pair of weirdly similar cases. The first involves Frank Darabont, the former showrunner of The Walking Dead, who was fired during the show’s second season and is now suing the network for a share of profits from the most popular series in the history of cable television. In response, AMC released a selection of Darabont’s emails intended to demonstrate that his firing was justified, and it makes for queasily riveting reading. Some are so profane that I don’t feel comfortable quoting them here, but this one gives you a sense of the tone:

If it were up to me, I’d have not only fired [these two writers] when they handed me the worst episode three script imaginable, I’d have hunted them down and f—ing killed them with a brick, then gone and burned down their homes. I haven’t even spoken to those worthless talentless hack sons-of-bitches since their third draft was phoned in after five months of all their big talk and promises that they’d dig deep and have my back covered…Calling their [script] “phoned-in” would be vastly overstating, because they were too busy wasting my time and your money to bother picking the damn phone up. Those f—ing overpaid con artists.

In an affidavit, Darabont attempted to justify his words: “Each of these emails must be considered in context. They were sent during an intense and stressful two-year period of work during which I was fighting like a mother lion to protect the show from harm…Each of these emails was sent because a ‘professional’ showed up whose laziness, indifference, or incompetence threatened to sink the ship. My tone was the result of the stress and magnitude of this extraordinary crisis. The language and hyperbole of my emails were harsh, but so were the circumstances.”

Frankly, I don’t find this quite as convincing as the third act of The Shawshank Redemption. As it happened, the Darabont emails were released a few days before a similar dispute engulfed Steve Whitmire, the puppeteer who had been performing Kermit the Frog since the death of Jim Henson. After the news broke last week that Whitmire had been fired, accounts soon emerged of his behavior that strikingly echoed the situation with Darabont: “He’d send emails and letters attacking everyone, attacking the writing and attacking the director,” Brian Henson told the New York Times. Whitmire has disputed the characterization: “Nobody was yelling and screaming or using inappropriate language or typing in capitals. It was strictly that I was sending detailed notes. I don’t feel that I was, in any way, disrespectful by doing that.” And his defense, like Darabont’s, stems from what he frames as a sense of protectiveness toward the show and its characters. Of a plot point involving Kermit and his nephew Robin on the defunct series The Muppets, Whitmire said to the Hollywood Reporter:

I don’t think Kermit would lie to him. I think that as Robin came to Kermit, he would say “Things happen, people go their separate ways, but that doesn’t mean we don’t care about you.” Kermit is too compassionate to lie to him to spare his feelings…We have been doing these characters for a long, long time and we know them better than anybody. I thought I was aiding to keep it on track, and I think a big reason why the show was canceled…was because that didn’t happen. I am not saying my notes would have saved it, but I think had they listened more to all of the performers, it would have made a really big difference.

Unfortunately, the case of Whitmire, like that of Darabont, is more complicated than it might seem. Henson’s children have come out in support of the firing, with Brian Henson, the public face of the company, saying that he had reservations about Whitmire’s behavior for over a decade:

I have to say, in hindsight, I feel pretty guilty that I burdened Disney by not having recast Kermit at that point because I knew that it was going to be a real problem. And I have always offered that if they wanted to recast Kermit, I was all for it, and I would absolutely help. I am very glad we have done this now. I think the character is better served to remove this destructive energy around it.

Elsewhere, Lisa Henson told the Times that Whitmire had become increasingly controlling, refusing to hire an understudy and blackballing aspiring puppeteers after the studio tried to cast alternate performers, as a source said to Gizmodo: “[Steve] told Disney that the people who were in the audition room are never allowed to work with the Muppets again.” For a Muppet fan, this is all very painful, so I’ll stop here, except to venture two comments. One is that Darabont and Whitmire may well have been right to be concerned. The second is that in expressing their thoughts, they alienated a lot of the people around them, and their protectiveness toward the material ended in them being removed from the creative process altogether. If they were simply bad at giving notes—and the evidence suggests that at least Darabont was—they weren’t alone. No one gives or takes notes well. You could even argue that the whole infrastructure of movie and television production exists to make the exchange of notes, which usually goes in just one direction, incrementally less miserable. And it doesn’t work.

Both men responded by trying to absorb more creative control into themselves, which is a natural response. Brian Henson recalls Whitmire saying: “I am now Kermit, and if you want the Muppets, you better make me happy, because the Muppets are Kermit.” And the most fascinating passage in Darabont’s correspondence is his proposal for how the show ought to be run in the future:

The crew goes away or stands there silently without milling or chattering about bullshit that doesn’t apply to the job at hand…The director [and crew]…stand there and carefully read the scene out loud word for word. Especially and including all description…The important beats are identified and discussed in terms of how they are to be shot. In other words, sole creative authority is being taken out of the director’s hands. It doesn’t matter that our actors are doing good work if the cameras fail to capture it. Any questions come straight to me by phone or text. If necessary I will shoot the coverage on my iPhone and text it to the set. The staging follows the script to the letter and is no longer willy-nilly horseshit with cameras just hosing it down from whatever angle…If the director tries to not shoot what is written, the director is beaten to death on the spot. A trained monkey is brought in to complete the job.

Reading this, I found myself thinking of an analogous situation that arose when David Mamet was running The Unit. (I’m aware that The Unit wasn’t exactly a great show—I don’t think I got through more than two episodes—but my point remains the same.) Mamet, like Darabont, was unhappy with the scripts that he was getting, but instead of writing everything himself, he wrote a memo on plot structure so lucid and logical that it has been widely shared online as a model of how to tell a story. Instead of raging at those around him, he did what he could to elevate them to his level. It strikes me as the best possible response. But as Kermit might say, that’s none of my business.

Written by nevalalee

July 19, 2017 at 9:02 am

The genius naïf

Last night, after watching the latest episode of Twin Peaks, I turned off the television before the premiere of the seventh season of Game of Thrones. This is mostly because I only feel like subscribing to one premium channel at a time, but even if I still had HBO, I doubt that I would have tuned in. I gave up on Game of Thrones a while back, both because I was uncomfortable with its sexual violence and because I felt that the average episode had degenerated into a holding pattern—it cut between storylines simply to remind us that they still existed, and it relied on unexpected character deaths and bursts of bloodshed to keep the audience awake. The funny thing, of course, is that you could level pretty much the same charges against the third season of Twin Peaks, which I’m slowly starting to feel may be the television event of the decade. Its images of violence against women are just as unsettling now as they were a quarter of a century ago, when Madeline Ferguson met her undeserved end; it cuts from one subplot to another so inscrutably that I’ve compared its structure to that of a sketch comedy show; and it has already delivered a few scenes that rank among the goriest in recent memory. So what’s the difference? If you’re feeling generous, you can say that one is an opportunistic display of popular craftsmanship, while the other is a singular, if sometimes incomprehensible, artistic vision. And if you’re less forgiving, you can argue that I’m being hard on one show that I concluded was jerking me around, while indulging another that I wanted badly to love.

It’s a fair point, although I don’t think it’s necessarily true, based solely on my experience of each show in the moment. I’ve often found my attention wandering during even solid episodes of Game of Thrones, while I’m rarely less than absorbed for the full hour of Twin Peaks, even though I’d have trouble explaining why. But there’s no denying the fact that I approach each show in a different state of mind. One of the most obvious criticisms of Twin Peaks, then and now, is that its pedigree prompts viewers to overlook or forgive scenes that might seem questionable in a more conventional series. (There have been times, I’ll confess, when I’ve felt like Homer Simpson chuckling “Brilliant!” and then admitting: “I have absolutely no idea what’s going on.”) Yet I don’t think we need to apologize for this. The history of the series, the track record of its creators, and everything implied by its brand mean that most viewers are willing to give it the benefit of the doubt. David Lynch and Mark Frost are clearly aware of their position, and they’ve leveraged it to the utmost, resulting in a show in which they’re free to do just about anything they like. It’s hard to imagine any other series getting away with this, but it’s also hard to imagine another show persuading a million viewers each week to meet it halfway. The implicit contract between Game of Thrones and its audience is very different, which makes the show’s lapses harder to forgive. One of the great fascinations of Lynch’s career is whether he even knows what he’s doing half the time, and it’s much less interesting to ask this question of David Benioff and D.B. Weiss, or of Chris Carter.

By now, I don’t think there’s any doubt that Lynch knows exactly what he’s doing, but that confusion is still central to his appeal. Pauline Kael’s review of Blue Velvet might have been written of last night’s Twin Peaks:

You wouldn’t mistake frames from Blue Velvet for frames from any other movie. It’s an anomaly—the work of a genius naïf. If you feel that there’s very little art between you and the filmmaker’s psyche, it may be because there’s less than the usual amount of inhibition…It’s easy to forget about the plot, because that’s where Lynch’s naïve approach has its disadvantages: Lumberton’s subterranean criminal life needs to be as organic as the scrambling insects, and it isn’t. Lynch doesn’t show us how the criminals operate or how they’re bound to each other. So the story isn’t grounded in anything and has to be explained in little driblets of dialogue. But Blue Velvet has so much aural-visual humor and poetry that it’s sustained despite the wobbly plot and the bland functional dialogue (that’s sometimes a deliberate spoof of small-town conventionality and sometimes maybe not)…Lynch skimps on these commercial-movie basics and fouls up on them, too, but it’s as if he were reinventing movies.

David Thomson, in turn, called the experience of seeing Blue Velvet a moment of transcendence: “A kind of passionate involvement with both the story and the making of a film, so that I was simultaneously moved by the enactment on screen and by discovering that a new director had made the medium alive and dangerous again.”

Twin Peaks feels more alive and dangerous than Game of Thrones ever did, and the difference, I think, lies in our awareness of the effects that the latter is trying to achieve. Even at its most shocking, there was never any question about what kind of impact it wanted to have, as embodied by the countless reaction videos that it inspired. (When you try to imagine videos of viewers reacting to Twin Peaks, you get a sense of the aesthetic abyss that lies between these two shows.) There was rarely a scene in which the intended emotion wasn’t clear, and even when it deliberately sought to subvert our expectations, it was by substituting one stimulus and response for another—which doesn’t mean that it wasn’t effective, or that there weren’t moments, at its best, that affected me as powerfully as any I’ve ever seen. Even the endless succession of “Meanwhile, back at the Wall” scenes had a comprehensible structural purpose. On Twin Peaks, by contrast, there’s rarely any sense of how we’re supposed to be feeling about any of it. Its violence is shocking because it doesn’t seem to serve anything, certainly not anyone’s character arc, and our laughter is often uncomfortable, so that we don’t know if we’re laughing at the situation onscreen, at the show, or at ourselves. It may not be an experiment that needs to be repeated ever again, any more than Blue Velvet truly “reinvented” anything over the long run, except my own inner life. But at a time when so many prestige dramas seem content to push our buttons in ever more expert and ruthless ways, I’m grateful for a show that resists easy labels. Lynch may or may not be a genius naïf, but no ordinary professional could have done what he does here.

Written by nevalalee

July 17, 2017 at 7:54 am

Children of the Lens

During World War II, as the use of radar became widespread in battle, the U.S. Navy introduced the Combat Information Center, a shipboard tactical room with maps, consoles, and screens of the kind that we’ve all seen in television and the movies. At the time, though, it was like something out of science fiction, and in fact, back in 1939, E.E. “Doc” Smith had described a very similar display in the serial Gray Lensman:

Red lights are fleets already in motion…Greens are fleets still at their bases. Ambers are the planets the greens took off from…The white star is us, the Directrix. That violet cross way over there is Jalte’s planet, our first objective. The pink comets are our free planets, their tails showing their intrinsic velocities.

After the war, in a letter dated June 11, 1947, the editor John W. Campbell told Smith that the similarity was more than just a coincidence. Claiming to have been approached “unofficially, and in confidence” by a naval officer who played an important role in developing the C.I.C., Campbell said:

The entire setup was taken specifically, directly, and consciously from the Directrix. In your story, you reached the situation the Navy was in—more communications channels than integration techniques to handle it. You proposed such an integrating technique, and proved how advantageous it could be…Sitting in Michigan, some years before Pearl Harbor, you played a large share in the greatest and most decisive naval action of the recent war!

Unfortunately, this wasn’t true. The naval officer in question, Cal Laning, was indeed a science fiction fan—he was close friends with Robert A. Heinlein—but any resemblance to the Directrix was coincidental, or, at best, an instance of convergence as fiction and reality addressed the same set of problems. (An excellent analysis of the situation can be found in Ed Wysocki’s very useful book An Astounding War.)

If Campbell was tempted to overstate Smith’s influence, this isn’t surprising—the editor was disappointed that science fiction hadn’t played the role that he had envisioned for it in the war, and this wasn’t the first or last time that he would gently exaggerate it. Fifteen years later, however, Smith’s fiction had a profound impact on a very different field. In 1962, Steve Russell of M.I.T. developed Spacewar, the first video game to be played on more than one computer, with two spaceships dueling with torpedoes in the gravity well of a star. In an article for Rolling Stone written by my hero Stewart Brand, Russell recalled:

We had this brand new PDP-1…It was the first minicomputer, ridiculously inexpensive for its time. And it was just sitting there. It had a console typewriter that worked right, which was rare, and a paper tape reader and a cathode ray tube display…Somebody had built some little pattern-generating programs which made interesting patterns like a kaleidoscope. Not a very good demonstration. Here was this display that could do all sorts of good things! So we started talking about it, figuring what would be interesting displays. We decided that probably you could make a two-dimensional maneuvering sort of thing, and decided that naturally the obvious thing to do was spaceships…

I had just finished reading Doc Smith’s Lensman series. He was some sort of scientist but he wrote this really dashing brand of science fiction. The details were very good and it had an excellent pace. His heroes had a strong tendency to get pursued by the villain across the galaxy and have to invent their way out of their problem while they were being pursued. That sort of action was the thing that suggested Spacewar. He had some very glowing descriptions of spaceship encounters and space fleet maneuvers.

The “somebody” whom he mentions was Marvin Minsky, another science fiction fan, and Russell’s collaborator Martin Graetz elsewhere cited Smith’s earlier Skylark series as an influence on the game.

But the really strange thing is that Campbell, who had been eager to claim credit for Smith when it came to the C.I.C., never made this connection in print, at least not as far as I know, although he was hugely interested in Spacewar. In the July 1971 issue of Analog, he published an article on the game by Albert W. Kuhfeld, who had developed a variation of it at the University of Minnesota. Campbell wrote in his introductory note:

For nearly a dozen years I’ve been trying to get an article on the remarkable educational game invented at M.I.T. It’s a great game, involving genuine skill in solving velocity and angular relation problems—but I’m afraid it will never be widely popular. The playing “board” costs about a quarter of a megabuck!

Taken literally, the statement “nearly a dozen years” implies that the editor heard about Spacewar before it existed, but the evidence suggests that he learned of it almost at once. Kuhfeld writes: “Although it uses a computer to handle orbital mechanics, physicists and mathematicians have no great playing advantage—John Campbell’s seventeen-year-old daughter beat her M.I.T. student-instructor on her third try—and thereafter.” Campbell’s daughter was born in 1945, which squares nicely with a visit around the time of the game’s first appearance. It isn’t implausible that Campbell would have seen and heard about it immediately—he had been close to the computer labs at Harvard and M.I.T. since the early fifties, and he made a point of dropping by once a year. If the Lensman series, the last three installments of which he published, had really been an influence on Spacewar, it seems inconceivable that nobody would have told him. For some reason, however, Campbell, who cheerfully promoted the genre’s impact on everything from the atomic bomb to the moon landing, didn’t seize the opportunity to do the same for video games, in an article that he badly wanted to publish. (In a letter to the manufacturers of the PDP-1, whom he had approached unsuccessfully for a writeup, he wrote: “I’ve tried for years to get a story on Spacewar, and I’ve repeatedly had people promise one…and not deliver.”)

So why didn’t he talk about it? The obvious answer is that he didn’t realize that Spacewar, which he thought would “never be widely popular,” was anything more than a curiosity, and if he had lived for another decade—he died just a few months after the article came out—he would have pushed the genre’s connection to video games as insistently as he did anything else. But there might have been another factor at play. For clues, we can turn to the article in Rolling Stone, in which Brand visited the Stanford Artificial Intelligence Laboratory with Annie Leibovitz, a visit that I wish I could have seen. Brand opens with the statement that computers are coming to the people, and he adds: “That’s good news, maybe the best since psychedelics.” It’s a revealing comparison, and it indicates the extent to which the computing culture was moving away from everything that Campbell represented. A description of the group’s offices at Stanford includes a detail that, if Campbell had read it, would only have added insult to injury:

Posters and announcements against the Vietnam War and Richard Nixon, computer printout photos of girlfriends…and signs on every door in Tolkien’s elvish Fëanorian script—the director’s office is Imladris, the coffee room The Prancing Pony, the computer room Mordor. There’s a lot of hair on those technicians, and nobody seems to be telling them where to scurry.

In the decade since the editor first encountered Spacewar, a lot had changed, and Campbell might have been reluctant to take much credit for it. The Analog article, which Brand mentions, saw the game as a way to teach people about orbital mechanics; Rolling Stone recognized it as a leading indicator of a development that was about to change the world. And even if he had lived, there might not have been room for Campbell. As Brand concludes:

Spacewar as a parable is almost too pat. It was the illegitimate child of the marrying of computers and graphic displays. It was part of no one’s grand scheme. It served no grand theory. It was the enthusiasm of irresponsible youngsters. It was disreputably competitive…It was an administrative headache. It was merely delightful.

The pop culture of computing

with one comment

We all thought that the next level of programming language would be much more strategic and even policy-oriented and would have much more knowledge about what it was trying to do. But a variety of different things conspired together, and that next generation actually didn’t show up. One could actually argue—as I sometimes do—that the success of commercial personal computing and operating systems has actually led to a considerable retrogression in many, many respects.

You could think of it as putting a low-pass filter on some of the good ideas from the sixties and seventies, as computing spread out much, much faster than educating unsophisticated people can happen. In the last twenty-five years or so, we actually got something like a pop culture, similar to what happened when television came on the scene and some of its inventors thought it would be a way of getting Shakespeare to the masses. But they forgot that you have to be more sophisticated and have more perspective to understand Shakespeare. What television was able to do was to capture people as they were.

So I think the lack of a real computer science today, and the lack of real software engineering today, is partly due to this pop culture…If you look at software today, through the lens of the history of engineering, it’s certainly engineering of a sort—but it’s the kind of engineering that people without the concept of the arch did. Most software today is very much like an Egyptian pyramid with millions of bricks piled on top of each other, with no structural integrity, but just done by brute force and thousands of slaves.

Alan Kay, to ACM Queue

Written by nevalalee

July 13, 2017 at 7:18 am

Posted in Quote of the Day


The X factor

leave a comment »

On Wednesday, the Washington Post published an article on the absence of women on the writing staff of The X-Files. Its author, Sonia Rao, pointed out that all of the writers for the upcoming eleventh season—including creator Chris Carter, Darin Morgan, Glen Morgan, James Wong, and three newcomers who had worked on the series as assistants—are men, adding: “It’s an industry tradition for television writers to rise through the ranks in this manner, so Carter’s choices were to be expected. But in 2017, it’s worth asking: How is there a major network drama that’s so dominated by male voices?” It’s a good question. The network didn’t comment, but Gillian Anderson responded on Twitter: “I too look forward to the day when the numbers are different.” In the same tweet, she noted that out of over two hundred episodes, only two were directed by women, one of whom was Anderson herself. (The other was Michelle MacLaren, who has since gone on to great things in partnership with Vince Gilligan.) Not surprisingly, there was also a distinct lack of female writers on the show’s original run, with just a few episodes written by women, including Anderson, Sara B. Cooper, and Kim Newton, the latter of whom, along with Darin Morgan, was responsible for one of my favorite installments, “Quagmire.” And you could argue that their continued scarcity is due to a kind of category selection, in which we tend to hire people who look like those who have filled similar roles in the past. It’s largely unconscious, but no less harmful, and I say this as a fan of a show that means more to me than just about any other television series in history.

I’ve often said elsewhere that Dana Scully might be my favorite fictional character in any medium, but I’m also operating from a skewed sample set. If you’re a lifelong fan of a show like The X-Files, you tend to repeatedly revisit your favorite episodes, but you probably never rewatch the ones that were mediocre or worse, which leads to an inevitable distortion. My picture of Scully is constructed out of four great Darin Morgan episodes, a tiny slice of the mytharc, and a dozen standout casefiles like “Pusher” and even “Triangle.” I’ve watched each of these episodes countless times, so that’s the version of the series that I remember—but it isn’t necessarily the show that actually exists. A viewer who randomly tunes into a rerun in syndication is much more likely to see Scully on an average week than in “War of the Coprophages,” and in many episodes, unfortunately, she’s little more than a foil for her partner or a convenient victim to be rescued. (Darin Morgan, who understood Scully better than anyone, seems to have gravitated toward her in part out of his barely hidden contempt for Mulder.) Despite these flaws, Scully still came to mean the world to thousands of viewers, including young women whom she inspired to go into medicine and the sciences. Gillian Anderson herself is deeply conscious of this, which seems to have contributed to her refreshing candor here, as well as on such related issues as the fact that she was initially offered half of David Duchovny’s salary to return. Anderson understands exactly how much she means to us, and she’s conducted herself accordingly.

The fact that the vast majority of the show’s episodes were written by men also seems to have fed into one of its least appealing qualities, which was how Scully’s body—and particularly her reproductive system—was repeatedly used as a plot point. Part of this was accidental: Anderson’s pregnancy had to be written into the second season, and the writers ended up with an abduction arc with a medical subtext that became hopelessly messy later on. It may not have been planned that way, any more than anything else on this show ever was, but it had the additional misfortune of being tethered to a conspiracy storyline for which it was expected to provide narrative clarity. After the third season, nobody could keep track of the players and their motivations, so Scully’s cancer and fertility issues were pressed into service as a kind of emotional index to the rest. These were pragmatic choices, but they were also oddly callous, especially as their dramatic returns continued to diminish. And in its use of a female character’s suffering to motivate a male protagonist, it was unfortunately ahead of the curve. When you imagine flipping the narrative so that Mulder, not Scully, was the one whose body was under discussion, you see how unthinkable this would have been. It’s exactly the kind of unexamined notion that comes out of a roomful of writers who are all operating with the same assumptions. It isn’t just a matter of taste or respect, but of storytelling, and in retrospect, the show’s steady decline seems inseparable from the monotony of its creative voices.

And this might be the most damning argument of all. Even before the return of Twin Peaks reminded us of how good this sort of revival could be, the tenth season of The X-Files was a crushing disappointment. It had exactly one good episode, written, not coincidentally, by Darin Morgan, and featuring Scully at her sharpest and most joyous. Its one attempt at a new female character, despite the best efforts of Lauren Ambrose, was a frustrating misfire. Almost from the start, it was clear that Chris Carter didn’t have a secret plan for saving the show, and that he’d already used up all his ideas over the course of nine increasingly tenuous seasons. It’s tempting to say that the show had burned through all of its possible plotlines, but that’s ridiculous. This was a series that had all of science fiction, fantasy, and horror at its disposal, combined with the conspiracy thriller and the procedural, and it should have been inexhaustible. It wasn’t the show that got tired, but its writers. Opening up the staff to a more diverse set of talents would have gone a long way toward addressing this. (The history of science fiction is as good an illustration as any of the fact that diversity is good for everyone, not simply its obvious beneficiaries. Editors and showrunners who don’t promote it end up paying a creative price in the long run.) For a show about extreme possibilities, it settled for formula distressingly often, and it would have benefited from adding a wider range of perspectives—particularly from writers with backgrounds that have historically offered insight into such matters as dealing with oppressive, impersonal institutions, which is what the show was allegedly about. It isn’t too late. But we might have to wait for the twelfth season.

Written by nevalalee

June 30, 2017 at 8:56 am

The bed of the future

leave a comment »

Earlier this week, I noticed a post on the front page of Reddit with the headline: “After a 1946 plane crash, Howard Hughes decided he did not like the design of the hospital bed he was laying in [sic]. He called in his engineers and had them design a new bed that would allow someone with severe burns to move freely. It became the prototype for the modern hospital bed.” This wasn’t the first time that this particular fact, with a link to the Wikipedia article on Hughes, had been posted there—in fact, it was copied and pasted from an identical submission from last year, which in itself duplicated at least two earlier posts—but it happened to catch my eye for reasons that I’ll explain later. Surprisingly enough, there appears to be a germ of truth to it. After Hughes crashed his XF-11 test plane on July 7, 1946, he did indeed ask his staff to build an improved hospital bed. As far as I can tell, it was first reported the following month in an article by the Associated Press, “Hughes Designs Hospital Bed,” which read in its entirety as follows:

Plane-maker Howard Hughes, critically injured July 7 in an airplane crash, didn’t like his hospital bed so he called in plant engineers to design a “tailor-made,” equipped with hot and cold running water. The motorized bed, on which he now is resting at the home of a friend, is built in six sections and is operated by thirty electric motors. Push-button adjustments helped him ease his pain considerably during the thirty-seven days he spent in the hospital suffering from eleven broken ribs and severe burns. Hughes took the bed, tailored to the contours of his spine, with him when he left the hospital Saturday. “I think he left in an ambulance,” said a nurse, “but I’d believe it if someone told me he flew home in that bed.”

After that, the story reappears sporadically in treatments of Hughes’s life, with elaborations that reflect either additional sources, apocryphal expansion, or some combination of the two. In Hughes: The Private Diaries, Memos, and Letters, for instance, we read:

Hughes had ordered his aviation engineers to devise a mattress that could be adjusted mechanically with his body’s movement as he continued the healing process. Working through the night, the factory created foam bedding that was divided into thirty-two sections, each controlled by a pneumatic piston and its own motor. When the mattress was rolled into Hughes’ room, he took one look at the complicated controls and sent it into storage, while leaking news of its invention and taking credit for its creation.

Note that the “six sections…operated by thirty electric motors” has somehow become “thirty-two sections.” But the detail that Hughes leaked the story to the press seems credible, while a footnote adds: “The mattress was discovered, unused, in a storage locker at Hughes Aircraft in 1976.” Other sources plausibly claim that it was Hughes’s associate Glenn Odekirk who oversaw the project. Over time, however, obvious exaggerations and distortions begin to creep in. One biography states: “[The bed] was quickly built and worked admirably, helping speed his recovery.” And then there’s this version:

Hughes’s bed was self-propelled, powered by thirty electric motors and controlled from an elaborate aircraft-style cockpit. From the comfort of this mobile sleeping machine, Hughes could tour the hospital wards, position his bed wherever he fancied, and summon up creature comforts such as music and hot and cold running water, all at the touch of a button.

What’s missing from all of these sources is the assertion that Hughes’s design was the basis of the modern hospital bed—and as a matter of fact, it wasn’t. In the November 12, 1945 issue of Life, which was published more than seven months before Hughes’s accident, an article titled “Push-Button Hospital Bed” presents a bed that includes all of the features mentioned above, using remarkably similar language. The wonderfully named Dr. Marvel Darlington Beem, it states, has built “a streamlined, electrically powered hospital bed which has a full-sized toilet built in,” and it goes on to describe it in detail:

Dr. Beem’s bed also includes other features which almost make it possible for patients to take care of themselves without any help at all. Piloting the bed like an airplane [italics mine] from a panel of switches…a patient may raise his head and feet, swing in front of a washbasin with hot and cold running water, open and shut windows, draw blinds, heat the bed, turn on lights anywhere in the room, or call a nurse. Also built into the bed are a collapsible table, an ultraviolet lamp, and an overhead trapeze bar for the patient to move himself around.

At the time of the XF-11 crash, Beem’s bed was still in the prototype stage, and it isn’t clear if anyone on the Hughes team ever saw it. (As the Life article notes, Beem practiced in Los Angeles, and Hughes was taken to Good Samaritan Hospital on Wilshire Boulevard, so it isn’t impossible that one was the inspiration for the other. Beem’s design was also written up in the August 1946 issue of Popular Mechanics, which would have been on newsstands when Hughes had his accident.) Judging from the few scraps of information that I’ve been able to find about Beem, he continued to show his bed at trade shows and to promote it in magazines well into the fifties, which indicates that it wasn’t in wide use for years. The modern hospital bed may well have developed along independent lines. But you can make a much better case for Beem than you can for Hughes.

Of course, this isn’t as good a story, which may be why it emerged in the first place. Although Wikipedia includes the line “Hughes’s bed served as a prototype for the modern hospital bed,” the source to which it links, Donald L. Bartlett and James B. Steele’s Howard Hughes: His Life and Madness, makes no such claim. But it’s more fun to credit it to Hughes—even if he never did anything with it—than to the doctor who actually developed it and spent a decade shopping it around. (Amusingly, after the article about the bed appeared in Life, the magazine published a letter from the legendary science fiction editor Hugo Gernsback, founder of Amazing Stories, who noted that he had recently published a diagram of an “electronic bed,” pictured above, in his annual Christmas issue for subscribers. Life thanked him and informed its readers: “Years before they came true, [Gernsback] also predicted radio loudspeakers, television, radio-controlled vehicles and almost every other mechanical invention.” But that doesn’t mean he invented the modern hospital bed, either.) The Hughes factoid only caught my attention because it reminded me of the story that Robert A. Heinlein designed an early version of a water bed, as he recounts in Expanded Universe:

I designed the waterbed during years as a bed patient in the middle thirties; a pump to control water level, side supports to permit one to float rather than simply lying on a not very soft water filled mattress. Thermostatic control of temperature, safety interfaces to avoid all possibility of electric shock, waterproof box to make a leak no more important than a leaky hot water bottle rather than a domestic disaster…[It was] an attempt to design the perfect hospital bed by one who had spent too damn much time in hospital beds.

You see this anecdote repeated a lot, and, with some caveats, it’s basically correct. But it’s also one of the least interesting things about Heinlein. Similarly, if you were to list all of the most fascinating facts about Howard Hughes, the notion that he designed the modern hospital bed, even if it were true, wouldn’t rank in the top ten. Yet it’s one of the only items about Hughes that makes it consistently onto Reddit, which implies that there’s something about it that appeals to us. It’s a cute story. But it’s time to put it to bed.

Written by nevalalee

June 27, 2017 at 9:06 am

White Sands, Black Lodge

leave a comment »

Note: This post discusses details of last night’s episode of Twin Peaks.

Before the premiere of the third season of Twin Peaks, I occasionally found myself wondering how it could possibly stand out in an entertainment landscape that it had helped to create. After all, we’re living in an era of peak television—it’s right there in the name—and it seemed possible that its “quirkiness,” which now seems like a comically inadequate word, would come across as more of the same. By now, it’s clear that my fears were groundless. Last night’s episode was one of the strangest things I’ve ever witnessed in any medium, and it confirms that this series is still capable of heady stylistic and conceptual surprises. The first ten minutes are as close as it gets to business as usual, with Dark Cooper ensnared by a surprisingly routine double cross. Then it gets deeply weird, with an extended musical guest performance by Nine Inch Nails and a Kubrickian star gate sequence emerging from the atomic bomb at Trinity, rendered with some atypically good special effects. (David Lynch is usually happier with camera tricks that he seems to have cooked up in his basement.) The rest is alternately bewildering, lovely, horrifying, slow, incomprehensible, and hypnotic, and it just keeps going. Any one element wouldn’t have been totally out of place, but taken together, it’s the longest sequence of its kind in Lynch’s entire body of work, and it aired on Showtime. We aren’t even halfway through this season, but it feels like a hinge moment, the dividing line at which all the ways we thought we were learning how to watch this show literally blew up in our faces. A girl also swallows a giant bug.

Yet if this was possibly the weirdest hour of television I’ve ever seen, it’s also the most conventional episode of the season so far. This observation deeply annoyed my wife when I came up with it last night, but hear me out. Instead of a collection of sketches and dead ends, this was a hugely eventful episode in terms of how it affected its viewers, and it was full of information—weird information, but information nonetheless. Without trying to parse or interpret the images themselves, I feel comfortable in saying that they’re the equivalent of an origin story, however vague the details might be. In the extended scene between the Giant and the new character identified in the closing credits as Señorita Dido, there’s even the implication that the whole series is about restoring balance to the Force, with Laura Palmer’s spirit migrating earthward, decades before her birth, in response to the rise of evil. Even if we end our speculations here, this is more data than we’ve ever been given about the show’s backstory. The very idea of a “mythology” seems uncharacteristically prosaic for a series that has always stubbornly resisted being pinned down, but in its period setting, it feels kind of like one of those episodes of The X-Files in which unexplained events unfold decades ago in the New Mexico desert. (Between Alamogordo and Roswell, that state has come to play a very specific role in the American collective unconscious, and I almost wish that Twin Peaks had gone elsewhere for inspiration.)

In other words, if you’re approaching Twin Peaks as a code or a series of clues, this episode gave you more material than any previous installment. In its particulars, it was as crazy as hell, but its functional role was curiously straightforward. And while it’s always a fool’s game to pick apart the contributions of the show’s creators, the impulse to ground the story in the past feels less like Lynch than like Mark Frost, who published an entire book last year, The Secret History of Twin Peaks, that went over similar ground. (I haven’t read it, but a quick browse reveals that it mentions L. Ron Hubbard and his sojourn with the rocket scientist and occultist Jack Parsons, which means that I’ll probably need to take a closer look.) Frost has an odd résumé, with a body of work since the show’s initial run that includes conspiracy thrillers, nonfiction books about golf, and the scripts to the first two Fantastic Four movies. He has an unusual interest in the past, but it feels literal-minded in comparison to Lynch, who uses the iconography of previous eras as a backdrop for dreams. Their interplay, like that of Lynch and his other longtime collaborator Barry Gifford, yields results that are strikingly different from what you get when the director is off working on his own. Among other things, their cultural reference points—like The Wizard of Oz in Wild at Heart—are more transparent. And we seem to be reaching a point in the series in which that shape is becoming incrementally more visible.

That’s why I’m slightly wary of what comes next, as much as I loved what I saw here. I don’t want Twin Peaks to become a crossword puzzle, or to have a coherent mythology that can be picked apart online. It was always most fascinating when it hinted at the existence of a pattern that lay behind the surface of the series, and even the viewer’s own life—a dream world, overheard in the soundtrack, that grows more elusive the older we get, and then revisits us in old age. At its best, it was a show that seemed to know something that we didn’t. If anything, it may have just shown us too much, although that depends on what happens next. The episode ends without returning us to the present, but I’d be very happy if, when it picks up next week, we moved on without referring to any of it, as if it were an extended footnote or appendix that didn’t need to be read to appreciate the text. It’s information for the audience, not the characters, and the nice thing about this revival is that it allows for the kind of massive structural digression that wouldn’t have been feasible twenty-five years ago. (Some of the least successful scenes in the original run of Twin Peaks involve Cooper and Sheriff Truman speculating about the true nature of the Black Lodge. The fact that we just got so much backstory in visual form hopefully removes the need to spell it out in the dialogue. And I only wish that The X-Files had taken the same approach.) It was brilliant and unforgettable. And I hope that we never have to talk about it again.

Written by nevalalee

June 26, 2017 at 8:54 am

Too far to go

leave a comment »

Note: Spoilers follow for the third season of Fargo.

A year and a half ago, I wrote on this blog: “Fargo may turn out to be the most important television series on the air today.” Now that the third season has ended, it feels like a good time to revisit that prediction, which turned out to be sort of right, but not for the reasons that I expected. When I typed those words, cracking the problem of the anthology series felt like the puzzle on which the future of television itself depended. We were witnessing an epochal shift of talent, which is still happening, from movies to the small screen, as big names on both sides of the camera began to realize that the creative opportunities it afforded were in many ways greater than what the studios were prepared to offer. I remain convinced that we’re entering an entertainment landscape in which Hollywood focuses almost exclusively on blockbusters, while dramas and quirky smaller films migrate to cable, or even the networks. The anthology series was the obvious crossover point. It could draw big names for a limited run, it allowed stories to be told over the course of ten tight episodes rather than stretched over twenty or more, it lent itself well to being watched in one huge binge, and it offered viewers the prospect of a definitive conclusion. At its best, it felt like an independent movie given the breathing room and resources of an epic. Fargo, its exemplar, became one of the most fascinating dramas on television in large part because it drew its inspiration from one of the most virtuoso experiments with tone in movie history—a triangulation, established by the original film, between politeness, quiet desperation, and sudden violence. It was a technical trick, but a very good one, and it seemed like a machine that could generate stories forever.

After three seasons, I haven’t changed my mind, even if the show’s underlying formula feels more obvious than before. What I’ve begun to realize about Fargo is that it’s an anthology series that treats each season as a kind of miniature anthology, too, with scenes and entire episodes that stand for nothing but themselves. In the first season, the extended sequence about Oliver Platt’s supermarket magnate was a shaggy dog story that didn’t go anywhere, but now, it’s a matter of strategy. The current season was ostensibly focused on the misfortunes of the Stussy brothers, played with showy brilliance by Ewan McGregor, but it allowed itself so many digressions that the plot became more like a framing device. It opened with a long interrogation scene set in East Germany that was never referenced again, aside from serving as a thematic overture to the whole—although it can’t be an accident that “Stussy” sounds so much like “Stasi.” Later, there was a self-contained flashback episode set in the science fiction and movie worlds of the seventies, including an animated cartoon to dramatize a story by one of the characters, which turned the series into a set of nesting dolls. It often paused to stage the parables told by the loathsome Varga, which were evidently supposed to cast light on the situation, but rarely had anything to do with it. After the icy control of the first season and the visual nervousness of the second, the third season threaded the needle by simultaneously disciplining its look and indulging its penchant for odd byways. Each episode was like a film festival of short subjects, some more successful than others, and unified mostly by creator Noah Hawley’s confidence that we would follow him wherever he went.

Mostly, he was right, although his success rate wasn’t perfect, as it hardly could have been expected to be. There’s no question that between Fargo and Legion, Hawley has been responsible for some of the most arresting television of the last few years, but the strain occasionally shows. The storytelling and character development on Legion were never as interesting as its visual experiments, possibly because a show can innovate along only so many parameters at once. And Fargo has been so good at its quirky components—it rarely gives us a scene that isn’t riveting in itself—that it sometimes loses track of the overall effect. Like its inspiration, it positions itself as based on true events, even though it’s totally fictional, and in theory, this frees it up to indulge in loose ends, coincidences, and a lack of conventional climaxes, just like real life. But I’ve never entirely bought this. The show is obsessively stylized and designed, and it never feels like a story that could take place anywhere but in the fictional Coenverse. At times, Hawley seems to want it both ways. The character of Nikki Swango, played by Mary Elizabeth Winstead, is endlessly intriguing, and I give the show credit for carrying her story through to what feels like a real conclusion, rather than using her suffering as an excuse to motivate a male protagonist. But when she’s gratuitously targeted by the show’s villains, only to survive and turn into an avenging angel, it was exactly what I wanted, yet I couldn’t really believe a second of it. It’s just as contrived as any number of storylines on more conventional shows, and although the execution is often spellbinding, it has a way of eliding reasonable objections. When it dispatches Nikki at the end with a standard trick of fate, it feels less like a subversion than the kind of narrative beat that the show has taught us to expect, and by now, it’s dangerously close to a cliché.

This is where the anthology format becomes both a blessing and a curse. By tying off each story after ten episodes, Fargo can allow itself to be wilder and more intense than a show that has to focus on the long game, but it also gets to indulge in problematic storytelling devices that wouldn’t stand up to scrutiny if we had to live with these characters for multiple seasons. Even in its current form, there are troubling patterns. Back in the first season, one of my few complaints revolved around the character of Bill Oswalt, who existed largely to foil the resourceful Molly as she got closer to solving the case. Bill wasn’t a bad guy, and the show took pains to explain the reasons for his skepticism, but their scenes together quickly grew monotonous. They occurred like clockwork, once every episode, and instead of building to something, they were theme and variations, constantly retarding the story rather than advancing it. In the third season, incredibly, Fargo does the same thing, but worse, in the form of Chief Moe Dammik, who exists solely to doubt, undermine, and make fun of our hero, Gloria Burgle, and without the benefit of Bill’s underlying sweetness. Maybe the show avoided humanizing Dammik because it didn’t want to present the same character twice—which doesn’t explain why he had to exist at all. He brought the show to a halt every time he appeared, and his dynamic with Gloria would have seemed lazy even on a network procedural. (And it’s a foil, significantly, that the original Fargo didn’t think was necessary.) Hawley and his collaborators are only human, but so are all writers. And if the anthology format allows them to indulge their strengths while keeping their weaknesses from going too far, that may be the strongest argument for it of all.

Written by nevalalee

June 22, 2017 at 8:45 am

Posted in Television


Invitation to look

leave a comment »

Note: This post discusses plot elements from last night’s episode of Twin Peaks.

In order to understand the current run of Twin Peaks, it helps to think back to the most characteristic scene from the finale of the second season, which was also the last episode of the show to air for decades. I’m not talking about Cooper in the Black Lodge, or any of the messy, unresolved melodrama that swirled around the other characters, or even the notorious cliffhanger. I mean the scene at Twin Peaks Savings and Loan that lingers interminably on the figure of Dell Mibbler, an ancient, doddering bank manager whom we haven’t seen before and will never see again, as he crosses the floor, in a single unbroken shot, to get a glass of water for Audrey. Even at the time, when the hope of a third season was still alive, many viewers must have found the sequence agonizingly pointless. Later, when it seemed like this was the last glimpse of these characters that we would ever have, it felt even less explicable. With only so many minutes in any given episode, each one starts to seem precious, especially in a series finale, and this scene took up at least two of them. (Now that we’ve finally gotten another season, I’m not sure how it will play in the future, but I suspect that it will feel like what it must have been intended to be—a precarious, unnecessary, but still pretty funny gag.) Anecdotally speaking, for a lot of viewers, the third season is starting to feel like that bank scene played over and over again. In theory, we have plenty of room for digressions, with eighteen hours of television to fill. But as the tangents and apparent dead ends continue to pile up, like the scene last night in which the camera spends a full minute lovingly recording an employee sweeping up at the Roadhouse, it sometimes feels like we’ve been tricked into watching Dell Mibbler: The Return.

Yet this has been David Lynch’s style from the beginning. Lynch directed only a few hours of the initial run of Twin Peaks, but his work, particularly on the pilot, laid down a template that other writers and directors did their best to follow. And many of the show’s iconic images—the streetlight at the corner where Laura was last seen, the waterfall, the fir trees blowing in the wind—consist of silent shots that are held for slightly longer than the viewer would expect. One of the oddly endearing things about the original series was how such eerie moments were intercut with scenes that, for all their quirkiness, were staged, shot, and edited more or less like any other network drama. The new season hasn’t offered many such respites, which is part of why it still feels like it’s keeping itself at arm’s length from its own history. For better or worse, Lynch doesn’t have to compromise here. (Last night’s episode was perhaps the season’s most plot-heavy installment to date, and it devoted maybe ten minutes to advancing the story.) Instead, Lynch is continuing to educate us, as he’s done erratically throughout his career, on how to slow down and pay attention. Not all of his movies unfold at the same meditative pace: Blue Velvet moves like a thriller, in part because of the circumstances of its editing, and Wild at Heart seems like an attempt, mostly unsuccessful, to sustain that level of frantic motion for the film’s entire length. But when we think back to the scenes from his work that we remember most vividly, they tend to be static shots that are held so long that they burn themselves into our imagination. And as movies and television shows become more anxious to keep the viewer’s interest from straying for even a second, Twin Peaks remains an invitation to look and contemplate.

It also invites us to listen, and while much of Lynch’s fascination with stillness comes from his background as a painter, it also emerges from his interest in sound. Lynch is credited as a sound designer on Twin Peaks, as he has been for most of his movies, and the show is suffused with what you might call the standard-issue Lynchian noise—a low, barely perceptible hum of static that occasionally rises to an oceanic roar. (In last night’s episode, Benjamin Horne and the character played by Ashley Judd try vainly to pin down the source of a similar hum at the Great Northern, and while it might eventually lead somewhere, it also feels like a subtle joke at Lynch’s own expense.) The sound is often associated with electronic or recording equipment, like the video cameras that are trained on the glass cube in the season premiere. My favorite instance is in Blue Velvet, when Jeffrey stumbles across the tableau of two victims in Dorothy’s apartment, one with his ear cut off, the other still standing with his brains shot out. There’s a hum coming from the shattered television set, and it’s pitched at so low a level that it’s almost subliminal, except to imperceptibly increase our anxiety. You only really become aware of it when it stops, after Jeffrey closes the door behind him and, a little later, when Frank shoots out the television tube. But you can’t hear it at all unless everything else onscreen is deathly quiet. It emerges from stillness, as if it were a form of background noise that surrounds us all the time, but is only audible when the rest of the world fades away. I don’t know whether Lynch’s fascination with this kind of sound effect came out of his interest in stillness or the other way around, and the most plausible explanation is that it all arose from the same place. But you could build a convincing reading of his career around the two meanings of the word “static.”

Taken together, the visual and auditory elements invite us to look on in silence, which may be a reflection of Lynch’s art school background. (I don’t know if Lynch was directly influenced by Marcel Duchamp’s Étant Donnés, a work of art that obsessed me so much that I wrote an entire novel about it, but they both ask us to stand and contemplate the inexplicable without speaking. And when you see the installation in person at the Philadelphia Museum of Art, as I’ve done twice, the memory is inevitably entwined with the low hum of the room’s climate control system.) By extending this state of narrative suspension to the breaking point, Twin Peaks is pushing in a direction that even the most innovative prestige dramas have mostly avoided, and it still fascinates me. The real question is when and how the silence will be broken. Lynch’s great hallmark is his use of juxtaposition, not just of light and dark, which horrified Roger Ebert so much in Blue Velvet, but of silence and sudden, violent action. We’ve already seen hints of this so far in Twin Peaks, particularly in the scenes involving the murderous Ike the Spike, who seems to be playing the same role, at random intervals, that a figure of similarly small stature did at the end of Don’t Look Now. And I have a feeling that the real payoff is yet to come. This might sound like the wishful thinking of a viewer who is waiting for the show’s teasing hints to lead somewhere, but it’s central to Lynch’s method, in which silence and stillness are most effective when framed by noise and movement. The shot of the two bodies in Dorothy’s apartment leads directly into the most dramatically satisfying—and, let it be said, most conventional—climax of Lynch’s career. And remember Dell Mibbler? At the end of the scene, the bank blows up.

Written by nevalalee

June 19, 2017 at 9:06 am

Moving through time

leave a comment »

Note: Spoilers follow for last night’s episode of Twin Peaks.

For all the debate over how best to watch a television show these days, which you see argued with various degrees of seriousness, the options that you’re offered are fairly predictable. If it’s a show on a streaming platform, you’re presented with all of it at once; if it’s on a cable or broadcast channel, you’re not. Between those two extremes, you’re allowed to structure your viewing experience pretty much however you like, and it isn’t just a matter of binging the whole season or parceling out each episode one week at a time. Few of us past the age of thirty have the ability or desire to watch ten hours of anything in one sitting, and the days of slavish faithfulness to an appointment show are ending, too—even if you aren’t recording it on DVR, you can usually watch it online the next day. Viewers are customizing their engagement with a series in ways that would have been unthinkable just fifteen years ago, and networks are experimenting with having it both ways, by airing shows on a weekly basis while simultaneously making the whole season available online. If there’s pushback, it tends to be from creators who are used to having their shows come out sequentially, like Dan Harmon, who managed to get Yahoo to release the sixth season of Community one episode at a time, as if it were still airing on Thursdays at eight. (Yahoo also buried the show on its site so that even fans had trouble figuring out that it was there, but that’s another story, as well as a reminder, in case we needed one, that such decisions aren’t always logical or considered.)

Twin Peaks, for reasons that I’ll discuss in a moment, doesn’t clearly lend itself to one approach or another, which may be why its launch was so muddled. Showtime premiered the first two hours on a Sunday evening, then quietly made the next two episodes available online, although this was so indifferently publicized that it took me a while to hear about it. It then aired episodes three and four on television the following week, despite the fact that many of the show’s hardcore fans—and there’s hardly anyone else watching—would have seen them already, only to finally settle into the weekly delivery schedule that David Lynch had wanted in the first place. As a result, it stumbled a bit out of the gate, at least as far as shaping a wider conversation was concerned. You weren’t really sure who was watching those episodes or when. (To be fair, in the absence of blockbuster ratings, viewers watching at different times are what justify this show’s existence.) As I’ve argued elsewhere, this isn’t a series that necessarily benefits from collective analysis, but there’s a real, if less tangible, emotional benefit to be had from collective puzzlement. It’s the understanding that a lot of other people are feeling the same things that you are, at roughly the same time, and that you have more in common with them than you will with anybody else in the world. I’m overstating it, but only a little. Whenever I meet someone who bought Julee Cruise’s first album or knows why Lil was wearing a sour face, I feel like I’ve found a kindred spirit. Twin Peaks started out as a huge cultural phenomenon, dwindling only gradually into a cult show that provided its adherents with their own set of passwords. And I think that it would have had a better chance of happening again now if Showtime had just aired all the episodes once a week from the beginning.

Yet I understand the network’s confusion, because this is both a show that needs to be seen over a period of time and one that can’t be analyzed until we’ve seen the full picture. Reviewing it must be frustrating. Writing about it here, I don’t need to go into much detail, and I’m free to let my thoughts wander wherever they will, but a site like the New York Times or The A.V. Club carries its own burden of expectations, which may not make sense for a show like this. A “recap” of an episode of Twin Peaks is almost a contradiction in terms. You can’t do much more than catalog the disconnected scenes, indulge in some desultory theorizing, and remind readers that they shouldn’t jump to any conclusions until they’ve seen more. It’s like reviewing Mulholland Drive ten minutes at a time—which is ridiculous, but it’s also exactly the position in which countless critics have found themselves. For ordinary viewers, there’s something alluring about the constant suspension of judgment that it requires: I’ve found it as absorbing as any television series I’ve seen in years. Despite its meditative pacing, an episode seems to go by more quickly than most installments of a more conventional show, even the likes of Fargo or Legion, which are clearly drawing from the same pool of ideas. (Noah Hawley is only the latest creator and showrunner to try to deploy the tone of Twin Peaks in more recognizable stories, and while he’s better at it than most, it doesn’t make the effort any less thankless.) But it also hamstrings the online critic, who has no choice but to publish a weekly first draft on the way to a more reasoned evaluation. Everything you write about Twin Peaks, even, or especially, if you love it, is bound to be provisional until you can look at it as a whole.

Still, there probably is a best way to watch Twin Peaks, which happens to be the way in which I first saw it. You stumble across it years after it originally aired, in bits and pieces, and with a sense that you’re the only person you know who is encountering it in quite this way. A decade from now, my daughter, or someone like her, will discover this show in whatever format happens to be dominant, and she’ll watch it alone. (I also suspect that she’ll view it after having internalized the soundtrack, which doesn’t even exist yet in this timeline.) It will deprive her, inevitably, of a few instants of shared bewilderment or revelation that can only occur when you’re watching a show on its first airing. When Albert Rosenfield addresses the woman in the bar as Diane, and she turns around to reveal Laura Dern in a blonde wig, it’s as thrilling a moment as I’ve felt watching television in a long time—and by the way Lynch stages it, it’s clear that he knows it, too. My daughter won’t experience this. But there’s also something to be said for catching up with a show that meant a lot to people a long time ago, with your excitement tinged with a melancholy that you’re too late to have been a part of it. I frankly don’t know how often I’ll go back to watch this season again, any more than I’m inclined to sit through Inland Empire, which I loved, a second time. But I’m oddly consoled by the knowledge that it will continue to exist and mean a lot to future viewers after the finale airs, which isn’t something that you could take for granted if you were watching the first two seasons in the early nineties. And it makes this particular moment seem all the more precious, since it’s the last time that we’ll be able to watch Twin Peaks without any idea of where it might be going.

Written by nevalalee

June 12, 2017 at 9:07 am

Live from Twin Peaks

with one comment

What does Twin Peaks look like without Agent Cooper? It was a problem that David Lynch and his writing team were forced to solve for Fire Walk With Me, when Kyle MacLachlan declined to come back for much more than a token appearance, and now, in the show’s third season, Lynch and Mark Frost seem determined to tackle the question yet again, even though they’ve given their leading man more screen time than anyone could ever want. MacLachlan’s name is the first thing that we see in the closing credits, in large type, to the point where it’s starting to feel like a weekly punchline—it’s the only way that we’d ever know that the episode was over. He’s undoubtedly the star of the show. Yet even as we’re treated to an abundance of Dark Cooper and Dougie Jones, we’re still waiting to see the one character that I, and a lot of other fans, have been awaiting the most impatiently. Dale Cooper, it’s fair to say, is one of the most peculiar protagonists in television history. As the archetypal outsider coming into an isolated town to investigate a murder, he seems at first like a natural surrogate for the audience, but, if anything, he’s quirkier and stranger than many of the locals he encounters. When we first meet Cooper, he comes across as an almost unplayable combination of personal fastidiousness, superhuman deductive skills, and childlike wonder. But if you’re anything like me, you wanted to be like him. I ordered my coffee black for years. And if he stood for the rest of us, it was as a representative of the notion, which crumbles in the face of logic but remains emotionally inescapable, that the town of Twin Peaks would somehow be a wonderful place to live, despite all evidence to the contrary.

In the third season, this version of Cooper, whom I’ve been waiting for a quarter of a century to see again, is nowhere in sight. And the buildup to his return, which I still trust will happen sooner or later, has been so teasingly long that it can hardly be anything but a conscious artistic choice. With every moment of recognition—the taste of coffee, the statue of the gunfighter in the plaza—we hope that the old Cooper will suddenly reappear, but the light in his eyes always fades. On some level, Lynch and Frost are clearly having fun with how long they can get away with this, but by removing the keystone of the original series, they’re also leaving us with some fascinating insights into what kind of show this has been from the very beginning. Let’s tick off its qualities one by one. Over the course of any given episode, it cuts between what seems like about a dozen loosely related plotlines. Most of the scenes last between two and four minutes, with about the same number of characters, and the components are too far removed from one another to provide anything in the way of narrative momentum. They aren’t built around any obligation to advance the plot, but around striking images or odd visual or verbal gags. The payoff, as in the case of Dr. Jacoby’s golden shovels, often doesn’t come for hours, and when it does, it amounts to the end of a shaggy dog story. (The closest thing we’ve had so far to a complete sequence is the sad case of Sam, Tracey, and the glass cube, which didn’t even make it past the premiere.) If there’s a pattern, it isn’t visible, but the result is still strangely absorbing, as long as you don’t approach it as a conventional drama but as something more like Twenty-Two Short Films About Twin Peaks.

You know what this sounds like to me? It sounds like a sketch comedy show. I’ve always seen Twin Peaks as a key element in a series of dramas that stretches from The X-Files through Mad Men, but you could make an equally strong case for it as part of a tradition that runs from SCTV to Portlandia, which went so far as to cast MacLachlan as its mayor. They’re set in a particular location with a consistent cast of characters, but they’re essentially sketch comedies, and when one scene is over, they simply cut to the next. In some ways, the use of a fixed setting is a partial solution to the problem of transitions, which shows from Monty Python onward have struggled to address, but it also creates a beguiling sense of encounters taking place beyond the edges of the frame. (Matt Groening has pointed to SCTV as an inspiration for The Simpsons, with its use of a small town in which the characters were always running into one another. Groening, let’s not forget, was born in Portland, just two hours away from Springfield, which raises the intriguing question of why such shows are so drawn to the atmosphere of the Pacific Northwest.) Without Cooper, the show’s affinities to sketch comedy are far more obvious—and this isn’t the first time this has happened. After Laura’s murderer was revealed in the second season, the show seemed to lose direction, and many of the subplots, like James’s interminable storyline with Evelyn, became proverbial for their pointlessness. But in retrospect, that arid middle stretch starts to look a lot like an unsuccessful sketch comedy series. And it’s worth remembering that Lynch and Frost originally hoped to keep the identity of the killer a secret forever, knowing that it was all that was holding together the rest.

In the absence of a connective thread, it takes a genius to make this kind of thing work, and the lack of a controlling hand is a big part of what made the second season so markedly unsuccessful. Fortunately, the third season has a genius readily available. The sketch format has always been David Lynch's comfort zone, a fact that has been obscured by contingent factors in his long career. Lynch, who was trained as a painter and conceptual artist, thinks naturally in small narrative units, like the video installations that we glimpse for a second as we wander between rooms in a museum. Eraserhead is basically a bunch of sketches linked by its titular character, and he returned to that structure in Inland Empire, which, thanks to the cheapness of digital video, was the first movie in decades that he was able to make entirely on his own terms. In between, the inclination was present but constrained, sometimes for the better. In its original cut of three hours, Blue Velvet would have played much the same way, but in paring it down to its contractually mandated runtime, Lynch and editor Duwayne Dunham ended up focusing entirely on its backbone as a thriller. (It's an exact parallel to Annie Hall, which began as a three-hour series of sketches called Anhedonia that assumed its current form after Woody Allen and Ralph Rosenblum threw out everything that wasn't a romantic comedy.) Most interesting of all is Mulholland Drive, which was originally shot as a television pilot, with fragmented scenes that were clearly supposed to lead to storylines of their own. When Lynch recut it into a movie, they became aspects of Betty's dream, which may have been closer to what he wanted in the first place. And in the third season of Twin Peaks, it is happening again.

Lotion in two dimensions


A few days ago, Christina Colizza of The Hairpin published a sharp, funny article on the phenomenon of “night lotion,” as documented by the Instagram feed of the same name. It’s the curiously prevalent image in movies and television of a character, invariably a woman, who is shown casually moisturizing over the course of an unrelated scene, usually during a bedroom conversation or argument with her husband. The account, which is maintained by Beth Wawerna of the band Bird of Youth, offers up relevant screenshots of the likes of Piper Laurie from Twin Peaks, Claire Foy from The Crown, and Anna Gunn from Breaking Bad, and although I’d never consciously noticed it before, it now seems like something I’ve been seeing without fully processing it for years. Colizza contends that this convention is entirely fictional: “I don’t know anyone who does this.” Wawerna concurs, saying that she asked multiple women if they ever spoke to their spouses for five minutes while applying lotion at bedtime, and they all said no. To be fair, though, it isn’t entirely nonexistent. My wife has kindly consented to issue the following statement:

I am a skincaretainment enthusiast and take great pleasure in my elaborate skincare routines (one for morning and one for nighttime, naturally). Before going to bed at night, I stand in front of the full-length mirror and talk to Alec while applying a succession of products to my face and neck in a very precise order. It is one of my favorite parts of the day. Wintertime is especially good for this routine because I use more products on my face and also put lotion on my feet after climbing into bed. This gives me even more time to tell Alec inane stories about my day while he tries to read a book.

Maybe this means that I’m living in a prestige cable drama without knowing it—but it also implies that this behavior isn’t entirely without parallels in real life. Still, it seems to appear with disproportionate frequency in film and television, and it’s worth asking why. Colizza and Wawerna make a few cheerful stabs at unpacking it, although they’re the first to admit that they raise more questions than they answer. “It’s obviously some sort of crutch,” Wawerna says. “I don’t know if it’s true and I need to do actual research to figure this out, but are most of those scenes written by men?” Colizza adds:

This is neither death wish on buying Jergens in bulk, nor a critique on moisturizing; we all need a bit of softness in our lives. The problem here is that the lotion, whether sensually applied or rubbed vigorously, is a visual distraction during moments of potential character development and depth. “Is there anything else a woman can do?” Wawerna asks me in giddy exasperation. “Can we just sit with this woman, who’s clearly having a moment with herself, or going through something?”

Elsewhere, Colizza helpfully classifies the two most common contexts for the trope, which tends to occur at a pivot point in the narrative: “This moonlit ritual is either a woman alone having a moment before bed. Or a woman tearing her hubby a new one.” And I think that she gets very close to the solution when she wonders whether television “demands some physicality on screen at all times, especially so if it can help convey a basic emotion.”

This strikes me as right on target, with one slight modification: it isn't the medium reaching back to impose a physical action on the performer, but the actor introducing a technical device into a scene. Actors are always looking for something to do with their hands. This notion of "stage business" is an established point of craft, but it can also have unpredictable effects on viewers. The great example here, of course, is smoking. If we see so much of it in Hollywood, it isn't because the studios are determined to glamorize tobacco use or because they're in the pocket of the cigarette companies, but because smoking is a pragmatic performative tool. A cigarette gives actors a wide range of ways to emphasize lines or emotional beats: they can remove it from the pack, light it, peer through the smoke, exhale, study the ember, and grind it out to underline a moment. (One beautiful illustration is the last shot of The Third Man, with what Roger Ebert once described as "the perfect emotional parabola described as [Joseph] Cotten throws away his cigarette." Revealingly, this was an actor's choice: Carol Reed kept the camera rolling for an uncomfortably long time after Alida Valli exited the frame, and Cotten lit up just to have something to do.) In terms of providing useful tidbits of actorly behavior, nothing comes close to smoking, and until recently, I would have said that no comparable bit of business has managed to take its place. But that's just what night lotion is. Its advantages are obvious. It requires nothing but a commonplace prop that isn't likely to draw attention to itself, and it offers a small anthology of possible motions that can be integrated in countless ways with the delivery of dialogue. Unlike smoking, it's constrained by the fact that it naturally lives in the bedroom, but this isn't really a limitation, since this is where many interior scenes take place anyway.

And when you look at the instances that Wawerna has collected, you find that they nearly all occur at narrative moments in which a character of an earlier era would have unthinkingly lit up a cigarette. What's really funny is how a technical solution to an acting problem can acquire coded meanings and externalities—it's literally an emergent property. The movies may not have meant to encourage smoking, but they unquestionably did, and it isn't unreasonable to say that people died because Humphrey Bogart needed something to do while he said his lines. (The fact that Bogart smoked a lot offscreen doesn't necessarily invalidate this argument. These choices are often informed by an actor's personality, and I assume that television stars are more likely to take an interest in their skin care than the rest of us.) The message carried by lotion is far more innocuous: as Colizza notes, we could all stand to moisturize more. In practice, though, it's such a gendered convention that it trails along all kinds of unanticipated baggage involving female beauty and body image, at least when you see so many examples in a row. Taken in isolation, it's invisible enough, except to exceptionally observant viewers, that it doesn't seem likely to disappear anytime soon. In the past, I've compared stumbling across a useful writing technique to discovering a new industrial process, and a convention like night lotion, which can be incorporated intact into any number of dramatic situations, is a writer and actor's dream. Not surprisingly, it ends up being used to death until it becomes a cliché. Unlike such conventions as the cinematic baguette, it isn't designed to save time or convey information, but to serve as what Wawerna accurately describes as a crutch for the performer. Like the cigarette, or the anonymous decanter of brown liquor in the sideboard that played a similar role in so many old movies, it seems destined to remain in the repertoire, even if its meaning is only skin deep.

Written by nevalalee

June 2, 2017 at 9:39 am

The space between us all


In an interview published in the July 12, 1970 issue of Rolling Stone, the rock star David Crosby said: “My time has gotta be devoted to my highest priority projects, which starts with tryin’ to save the human race and then works its way down from there.” The journalist Ben Fong-Torres prompted him gently: “But through your music, if you affect the people you come in contact with in public, that’s your way of saving the human race.” And I’ve never forgotten Crosby’s response:

But somehow operating on that premise for the last couple of years hasn’t done it, see? Somehow Sgt. Pepper’s did not stop the Vietnam War. Somehow it didn’t work. Somebody isn’t listening. I ain’t saying stop trying; I know we’re doing the right thing to live, full on. Get it on and do it good. But the inertia we’re up against, I think everybody’s kind of underestimated it. I would’ve thought Sgt. Pepper’s could’ve stopped the war just by putting too many good vibes in the air for anybody to have a war around.

He was right about one thing—the Beatles didn’t stop the war. And while it might seem as if there’s nothing new left to say about Sgt. Pepper’s Lonely Hearts Club Band, which celebrates its fiftieth anniversary today, it’s worth asking what it tells us about the inability of even our greatest works of art to inspire lasting change. It’s probably ridiculous to ask this of any album. But if a test case exists, it’s here.

It seems fair to say that if any piece of music could have changed the world, it would have been Sgt. Pepper. As the academic Langdon Winner famously wrote:

The closest Western Civilization has come to unity since the Congress of Vienna in 1815 was the week the Sgt. Pepper album was released…At the time I happened to be driving across the country on Interstate 80. In each city where I stopped for gas or food—Laramie, Ogallala, Moline, South Bend—the melodies wafted in from some far-off transistor radio or portable hi-fi. It was the most amazing thing I’ve ever heard. For a brief while, the irreparably fragmented consciousness of the West was unified, at least in the minds of the young.

The crucial qualifier, of course, is “at least in the minds of the young,” which we’ll revisit later. To the critic Michael Bérubé, it was nothing less than the one week in which there was “a common culture of widely shared values and knowledge in the United States at any point between 1956 and 1976,” which seems to undervalue the moon landing, but never mind. Yet even this transient unity is more apparent than real. By the end of the sixties, the album had sold about three million copies in America alone. It’s a huge number, but even if you multiply it by ten to include those who were profoundly affected by it on the radio or on a friend’s record player, you end up with a tiny fraction of the population. To put it another way, three times as many people voted for George Wallace for president as bought a copy of Sgt. Pepper in those years.

But that’s just how it is. Even our most inescapable works of art seem to fade into insignificance when you consider the sheer number of human lives involved, a scale at which even an apparently ubiquitous phenomenon is statistically unable to reach a majority of adults. (Fewer than one in three Americans paid to see The Force Awakens in theaters, which is as close as we’ve come in recent memory to total cultural saturation.) The art that feels axiomatic to us barely touches the lives of others, and it may leave only the faintest of marks on those who listen to it closely. The Beatles undoubtedly changed lives, but they were more likely to catalyze impulses that were already there, providing a shape and direction for what might otherwise have remained unexpressed. As Roger Ebert wrote in his retrospective review of A Hard Day’s Night:

The film was so influential in its androgynous imagery that untold thousands of young men walked into the theater with short haircuts, and their hair started growing during the movie and didn’t get cut again until the 1970s.

We shouldn’t underestimate this. But if you were eighteen when A Hard Day’s Night came out, it also means that you were born the same year as Donald Trump, who decisively won voters who were old enough to buy Sgt. Pepper on its initial release. Even if you took its message to heart, there’s a difference between the kind of change that marshals you the way that you were going and the sort that realigns society as a whole. It just isn’t what art is built to do. As David Thomson writes in Rosebud, alluding to Trump’s favorite movie: “The world is very large and the greatest films so small.”

If Sgt. Pepper failed to get us out of Vietnam, it was partially because those who were most deeply moved by it were more likely to be drafted and shipped overseas than to affect the policies of their own country. As Winner says, it united our consciousness, "at least in the minds of the young," but all the while, the old men, as George McGovern put it, were dreaming up wars for young men to die in. But it may not have mattered. Wars are the result of forces that care nothing for what art has to say, and their operations are often indistinguishable from random chance. Sgt. Pepper may well have been "a decisive moment in the history of Western civilization," as Kenneth Tynan hyperbolically claimed, but as Harold Bloom reminds us in The Western Canon:

Reading the very best writers—let us say Homer, Dante, Shakespeare, Tolstoy—is not going to make us better citizens. Art is perfectly useless, according to the sublime Oscar Wilde, who was right about everything.

Great works of art exist despite, not because of, the impersonal machine of history. It’s only fitting that the anniversary of Sgt. Pepper happens to coincide with a day on which our civilization’s response to climate change will be decided in a public ceremony with overtones of reality television—a more authentic reflection of our culture, as well as a more profound moment of global unity, willing or otherwise. If the opinions of rock stars or novelists counted for anything, we’d be in a very different situation right now. In “Within You Without You,” George Harrison laments “the people who gain the world and lose their soul,” which neatly elides the accurate observation that they, not the artists, are the ones who do in fact tend to gain the world. (They’re also “the people who hide themselves behind a wall.”) All that art can provide is private consolation, and joy, and the reminder that there are times when we just have to laugh, even when the news is rather sad.

The faults in our stars


In his wonderful conversational autobiography Cavett, Dick Cavett is asked about his relationship with Johnny Carson, for whom he served as a writer on The Tonight Show. Cavett replies:

I did work for Carson. We didn’t go fishing together on weekends, and I never slept over at his house, the two of us lying awake in our jammies eating the fudge we had made together, talking of our dreams and hopes and fears. But I found him to be cordial and businesslike, and to have himself well in hand as far as the show was concerned…He is not a man who seems to seek close buddies, and, if he were, the staff of his own television show would not be the ideal place to seek them.

It’s a memorable passage, especially the last line, which seems particularly relevant at a time when our talk show hosts are eager to appear accessible to everybody, and to depict their writing staffs as one big happy family. When asked to comment on the widespread notion that Carson was "cold," Cavett responds:

I know very little about Johnny’s personal relationships. I have heard that he has been manipulated and screwed more than once by trusted associates, to the point where he is defensively wary to what some find an excessive degree. I see this as a perfectly reasonable response. It is, I suppose, the sort of thing that happens to a person in show business that makes his former friends say, with heavy disapprobation, “Boy, has he changed.”

Cavett could easily let the subject rest there, but something in the question seems to stick in his mind, and he continues:

While I’m at it, I’ll do a short cadenza on the subject of changing. If you are going to survive in show business, the chances are you are going to change or be changed. Whatever your reasons for going into the business, it is safe to admit they form a mixture of talent, ambition, and neurosis. If you are going to succeed and remain successful, you are going to do it at the expense of a number of people who are clamoring to climb the same rope you are climbing. When you suddenly acquire money, hangers-on, well-wishers, and ill-wishers; when you need to make baffling decisions quickly, to do too much in too little time, to try to lead a personal and a professional life when you can’t seem to find the time for either; when you have to kick some people’s fannies and kiss others’ to get to the point where you won’t need to do either any more; when you have to sort out conflicting advice, distinguish between the treacherous and the faithful or the competent and the merely aggressive, suffer fools when time is short and incompetents when you are in a pinch; and when you add to this the one thing that you don’t get in other professions—the need to be constantly fresh and presentable and at your best just at the times when you are bone-weary, snappish, and depressed; when all these things apply, it is possible that you are going to be altered, changed, and sometimes for the worse.

This is one of the best things I’ve ever read about show business, and if anything, it feels even more insightful today, when we collectively have so much invested in the idea that stars have inner lives that are more or less like our own.

It’s often been said that the reason that television actors have trouble crossing over to the movies is that we expect different things from our stars in either medium. One requires a personality that is larger than life, which allows it to survive being projected onto an enormous screen in a darkened theater; the other calls for a person whom we’d feel comfortable inviting into our living rooms on a regular basis. If that’s true of scripted television that airs once a week, it’s even more true of the talk shows that we’re expected to watch every night. And now that the online content created by such series has become so central to their success, we’re rapidly approaching this trend’s logical culmination: a talk show host has to be someone whose face we’d be comfortable seeing anywhere, at any time. This doesn’t just apply to television, either. As social media is increasingly called upon to supplement the marketing budgets of big movies, actors are obliged to make themselves accessible—on Twitter, on Instagram, as good sports on Saturday Night Live and in viral videos—to an extent that a star of the old studio system of the forties would have found utterly baffling. Deadline’s writeup of Alien: Covenant is typical:

RelishMix…assessed that Alien: Covenant has a strong social media universe…spread across Twitter, Facebook, YouTube views and Instagram followers…The company also adds that Covenant was challenged by a generally inactive cast, with Empire’s Jussie Smollett being the most popular activated star. Danny McBride across Twitter, Instagram and Facebook counts over 250,000. Michael Fassbender is not socially active.

I love the implication that stars these days need to be "activated," like cell phones, to be fully functional, as well as the tone of disapproval at the fact that Michael Fassbender isn’t socially active. It’s hard to imagine what that would even look like: Fassbender’s appeal as an actor emerges largely from his slight sense of reserve, even in showy parts. But in today’s climate, you could also argue that this has hampered his rise as a star.

And Cavett’s cadenza on change gets at an inherent tension in the way we see our stars, which may not be entirely sustainable. In The Way of the Gun, written and directed by Christopher McQuarrie, who knows more than anyone about survival in Hollywood, the character played by James Caan says: “The only thing you can assume about a broken-down old man is that he’s a survivor.” Similarly, the only thing you can assume about a movie star, talk show host, or any other figure in show business whose face you recognize is that he or she possesses superhuman levels of ambition. Luck obviously plays a large role in success, as does talent, but both require a preternatural drive, which is the matrix in which giftedness and good fortune have a chance to do their work. Ambition may not be sufficient, but it’s certainly necessary. Yet we persist in believing that stars are accessible and ordinary, when, by definition, they can hardly be other than extraordinary. It’s a myth that emerges from the structural assumptions of social media, a potent advertising tool that demands a kind of perceptual leveling to be effective. I was recently delighted to learn that the notorious feature “Stars—They’re Just Like Us!” originated when the editors at Us Magazine had to figure out how to use the cheap paparazzi shots that they could afford to buy on their tiny budget, like a picture of Drew Barrymore picking up a penny. Social media works in much the same way. It creates an illusion of intimacy that is as false as the airbrushed images of the movie stars of Hollywood’s golden age, and it deprives us of some of the distance required for dreams. Whether or not they want to admit it, stars, unlike the rich, truly are different. And I’ll let Cavett have the last word:

Unless you are one of these serene, saintly individuals about whom it can be truly said, “He or she hasn’t changed one bit from the day I knew them in the old house at Elm Street.” This is true mostly of those who have found others to do their dirty work for them. All I’m saying is that your demands and needs change, and if you don’t change with them you don’t survive.

Written by nevalalee

May 24, 2017 at 9:53 am
