Alec Nevala-Lee

Thoughts on art, creativity, and the writing life.

American Stories #4: A Wrinkle in Time

Note: As we enter what Joe Scarborough justifiably expects to be “the most consequential political year of our lives,” I’m looking back at ten works of art—books, film, television, and music—that deserve to be reexamined in light of where America stands today. You can find the earlier installments here.

These days, it’s hard to read Madeleine L’Engle’s A Wrinkle in Time without being struck by its description of the planet Camazotz, with its picture of perfect suburban conformity: “The doors of all the houses opened simultaneously, and out came women like a row of paper dolls. The print of their dresses was different, but they all gave the appearance of being the same.” (In the trailer for the upcoming movie, in which the Murry children are brilliantly reimagined as being of mixed race, the sequence has shades of Get Out.) Camazotz has often been interpreted as an allegory for a totalitarian society, as Anna Quindlen writes in her introduction to a recent paperback edition: “The identical houses outside which identical children bounce balls and jump rope in mindless unison evoke the fear so many Americans had of Communist regimes that enshrined the interests of state-mandated order over the rights of the individual.” In fact, L’Engle’s true inspiration was much closer to home. As she says in a fascinating interview with Justin Wintle in The Pied Pipers:

I think it sprang mostly from seeing Camazotz round the country. When you leave New York tonight you’ll be flying over Camazotz—house after house after house, the people in them all watching the same television programs, and all eating the same things for dinner, and the kids in their mandatory uniforms of blue jeans and satchels or whatever. I keep getting asked whether Camazotz is a protest against Communism. I suppose it is, but really it’s against forced conformity of any kind.

And L’Engle is far too elusive and interesting a writer to be easily categorized. When Wintle casually refers to “Christian piety” as an element in her books, L’Engle devastatingly responds: “I wrote A Wrinkle in Time as a violent rebellion against Christian piety.” She elaborates:

New England is Congregational. It’s been Congregational ever since this country was born. Life in a little tiny village tends to revolve around the church. If there’s any reading done the minister does it. Not many others read books, so if you want to know something you have to consult the minister. I got to know several Congregational ministers when I lived in the country simply from the hunger of having somebody to talk to who didn’t discount words…I think that in all fairness I could be anti-church. I’m not sure why, and I know it’s a contradiction. I still go to church.

In explaining why the book’s antagonist, IT, is a gigantic brain, L’Engle says that “the brain tends to be vicious when it’s not informed by the heart”—which implies that IT might just as easily have been a naked heart. And Meg’s confrontation with IT culminates in what I think is the most moving passage in all of children’s literature:

If she could give love to IT perhaps it would shrivel up and die, for she was sure that IT could not withstand love. But she, in all her weakness and foolishness and baseness and nothingness, was incapable of loving IT. Perhaps it was not too much to ask of her, but she could not do it.

The italics are mine. A Wrinkle in Time asks us to love our enemies, but it also knows how difficult this is, and L’Engle’s final message is one of hope for those of us who fall short of our own high ideals: “I was looking for…something that would tumble over the world’s idea of what is successful and what is powerful. Therefore Meg succeeds through all her weaknesses and all her faults.”

Written by nevalalee

January 4, 2018 at 9:01 am

American Stories #3: Vertigo

Note: As we enter what Joe Scarborough justifiably expects to be “the most consequential political year of our lives,” I’m looking back at ten works of art—books, film, television, and music—that deserve to be reexamined in light of where America stands today. You can find the earlier installments here.

Vertigo, which may well be the most beautiful art object ever made in America, was based on a French novel, D’entre les morts, by Pierre Boileau and Pierre Ayraud—the team better known as Boileau-Narcejac—who wrote it in the express hope that Alfred Hitchcock would adapt it into a movie. I don’t know if Hitchcock ever explained why he transferred the setting to San Francisco, but I suspect that he was reasoning backward from its proximity to the Spanish missions, which would provide a bell tower tall enough for a woman to leap to her death, but not so high that a man couldn’t plausibly run up the stairs. Once the decision was made, Hitchcock indulged in his customary preference for utilizing his locations to their fullest. It gave us Madeleine’s plunge into the bay near the Golden Gate Bridge and her haunting speech by the rings of the redwood tree: “Here I was born, and there I died. It was only a moment for you; you took no notice.” Above all else, it allowed Hitchcock to give Judy a room at the Empire Hotel, lit from outside by its green neon sign, which enabled the single greatest shot in all of cinema. And the resulting film is inseparable from the state of which Joan Didion wrote:

Rationality, reasonableness bewilder me. I think it comes out of being a “daughter of the Golden West.” A lot of the stories I was brought up on had to do with extreme actions—leaving everything behind, crossing the trackless wastes, and in those stories the people who stayed behind and had their settled ways—those people were not the people who got the prize. The prize was California.

Vertigo, like many of the best movies to come out of Hollywood, is about how the prize is won and then lost because of greed, jealousy, or nostalgia. As Scotty says despairingly to Judy at the end: “You shouldn’t have been that sentimental.”

Like many great works of American art, Vertigo lingers in the imagination because it oscillates so nervously between its surface pleasures and its darkest depths. It’s both the ultimate Hitchcock entertainment, with its flawless cinematography, iconic Edith Head costumes, and romantic Bernard Herrmann score, and the most psychologically complex film I’ve ever seen. It’s as mysterious as a movie can be, but it’s also grounded in its evocative but realistic San Francisco settings. Early on, it can come off as routine, even banal, which leaves us even less prepared for its climax, which is a sick joke that also breaks the heart. There’s no greater ending in film, and it works because it’s so cruel, arbitrary, and unfair. I’ve noted before how the original novel keeps its crucial revelation for the very end, while the film puts it almost forty minutes earlier, shifting points of view and dividing the viewer’s loyalties in the process. It’s a brilliant change—arguably no other creative decision in any cinematic adaptation has been more significant—and it turns the movie from an elegant curiosity into something indescribably beautiful and painful. When Judy turns to the camera and the image is flooded with red, we’re as close to the heart of movies as we’ll ever get. The more we learn about Hitchcock’s treatment of women, the more confessional it all seems, and it implicates us as well: Scotty desires, attains, and finally destroys Judy in his efforts to turn her into Madeleine, and it ends up feeling like the most honest story that Hollywood has ever told about itself.

Written by nevalalee

January 3, 2018 at 9:00 am

American Stories #2: Citizen Kane

Note: As we enter what Joe Scarborough justifiably expects to be “the most consequential political year of our lives,” I’m looking back at ten works of art—books, film, television, and music—that deserve to be reexamined in light of where America stands today. You can find the earlier installments here.

In his essay collection America in the Dark, the film critic David Thomson writes of Citizen Kane, which briefly went under the portentous working title American:

Citizen Kane grows with every year as America comes to resemble it. Kane is the willful success who tries to transcend external standards, and many plain Americans know his pent-up fury at lonely liberty. The film absorbs praise and criticism, unabashed by being voted the best ever made or by Pauline Kael’s skillful reassessment of its rather nasty cleverness. Perhaps both those claims are valid. The greatest film may be cunning, slick, and meretricious.

It might be even more accurate to say that the greatest American movie ever made needs to be cunning, slick, and meretricious, at least if it’s going to be true to the values of its country. Kane is “a shallow masterpiece,” as Kael famously put it, but it could hardly be anything else. (Just a few years later, Kael expressed a similar sentiment about Norman Mailer: “I think he’s our greatest writer. And what is unfortunate is that our greatest writer should be a bum.”) It’s a masterwork of genial fakery by and about a genial faker—Susan Alexander asks Kane at their first meeting if he’s a professional magician—and its ability to spin blatant artifice and sleight of hand into something unbearably moving goes a long way toward explaining why it was a favorite movie of men as different as Charles Schulz, L. Ron Hubbard, and Donald Trump.

And the most instructive aspect of Kane in these troubled times is how completely it deceives even its fans, including me. Its portrait of a man modeled on William Randolph Hearst is far more ambiguous than it was ever intended to be, because we’re distracted throughout by our fondness for the young Welles. He’s visible all too briefly in the early sequences at the Inquirer, and he winks at us through his makeup as an older man. As a result, the film that Hearst wanted to destroy turned out to be the best thing that could have happened to his legacy—it makes him far more interesting and likable than he ever was. The same factor tends to obscure the movie’s politics, as Kael wrote in the early seventies:

When Welles was young—he was twenty-five when the film opened—he used to be accused of “excessive showmanship,” but the same young audiences who now reject “theatre” respond innocently and wholeheartedly to the most unabashed tricks of theatre—and of early radio plays—in Citizen Kane. At some campus showings, they react so gullibly that when Kane makes a demagogic speech about “the underprivileged,” stray students will applaud enthusiastically, and a shout of “Right on!” may be heard.

Kane is a master manipulator, but so was Welles, and our love for all that this film represents shouldn’t blind us to how the same tricks can be turned to more insidious ends. As Kane says to poor Mr. Carter, shortly after taking over a New York newspaper at the age of twenty-five, just as Jared Kushner once did: “If the headline is big enough, it makes the news big enough.” Hearst understood this. And so does Steve Bannon.

Written by nevalalee

January 2, 2018 at 9:00 am

American Stories #1: The Postman Always Rings Twice

Note: As we enter what Joe Scarborough justifiably expects to be “the most consequential political year of our lives,” I’m looking back at ten works of art—books, film, television, and music—that deserve to be reexamined in light of where America stands today.

The opening sentence of James M. Cain’s The Postman Always Rings Twice—“They threw me off the hay truck about noon”—is my favorite first line of any novel, and I’ve written about it here before. Yet when you look more closely at the paragraph in which it appears, you find that what Tom Wolfe praised as the “momentum” of Cain’s style is carrying you past some significant material. Here’s how it reads in full:

They threw me off the hay truck about noon. I had swung on the night before, down at the border, and as soon as I got up there under the canvas, I went to sleep. I needed plenty of that, after three weeks in Tia Juana, and I was still getting it when they pulled off to one side to let the engine cool. Then they saw a foot sticking out and threw me off. I tried some comical stuff, but all I got was a dead pan, so that gag was out. They gave me a cigarette, though, and I hiked down the road to find something to eat.

Cain described his narrator, Frank, as “a hobo with good grammar,” but he’s also a white man who passes easily back and forth across the border between Mexico and southern California. When he meets Cora, the wife of the doomed gas station owner Nick Papadakis, he drops a casual reference to “you people,” prompting her to shoot back: “You think I’m Mex…Well, get this. I’m just as white as you are, see? I may have dark hair and look a little that way, but I’m just as white as you are.” But Frank sees to the bottom of her indignation at once: “It was being married to that Greek that made her feel she wasn’t white.”

Yet it’s Nick Papadakis, whom Frank always calls “the Greek,” who somehow emerges as the book’s most memorable creation—he may be the most vivid murder victim in all of crime fiction—and Cain’s ability to make him real while channeling everything that we know about him through the narrator’s contempt is an act of immense technical skill. Nick is also the figure in whom the story’s secret theme comes most clearly into view. In order to be alone with Cora, Frank tricks Nick into going into town to buy a new neon sign, and he comes back with a resplendent declaration of love for his adoptive land: “It had a Greek flag and an American flag, and a hand shaking hands…It was all in red, white and blue.” Later, after Nick has unknowingly survived a botched attempt on his life, he proudly shows Frank his scrapbook: “He had inked in the curlicues, and then colored it with red, white and blue. Over the naturalization certificate, he had a couple of American flags, and an eagle.” It isn’t the murderous couple’s shared lust, but Cora’s resentment toward her immigrant husband, that really drives the story, and it spills out in her bitter words to Frank: “Do you think I’m going to let you wear a smock, with Service Auto Parts printed on the back…while he has four suits and a dozen silk shirts?” It still rings uncomfortably true today, and it echoed in the imagination of Cain’s most unlikely imitator. As Alice Kaplan writes in Looking for The Stranger:

When [Albert] Camus said The Postman Always Rings Twice inspired The Stranger, he didn’t go into detail. It is easy to imagine that when he observed the effect Cain got by using “the Greek” in place of a proper name, he realized he could create a similar effect by calling the murder victim in his own novel “the Arab.”

The Ballad of Jack and Rose

Note: To commemorate the twentieth anniversary of the release of Titanic, I’m republishing a post that originally appeared, in a slightly different form, on April 16, 2012.

Is it possible to watch Titanic again with fresh eyes? Was it ever possible? When I caught Titanic 3D five years ago in Schaumburg, Illinois, it had been a decade and a half since I last saw it. (I’ve since watched it several more times, mostly while writing an homage in my novel Eternal Empire.) On its initial release, I liked it a lot, although I wouldn’t have called it the best movie of a year that gave us L.A. Confidential. Since then, I’d revisited bits and pieces of it on television, but I had never gone back and watched the whole thing. All the same, my memories of it remained positive, if somewhat muted, so I was curious to see what my reaction would be, and what I found is that this is a really good, sometimes even great movie that looks even better with time. Once we set aside our preconceived notions, we’re left with a spectacularly well-made film that takes a lot of risks and seems motivated by a genuine, if vaguely adolescent, fascination with the past—an unlikely labor of love from a prodigiously talented director who willed himself into a genre that no one would have expected him to understand, the romantic epic, and emerged with both his own best work and a model of large-scale popular storytelling.

So why is this so hard for some of us to admit? The trouble, I think, is that the factors that worked so strongly in the film’s favor—its cinematography, special effects, and art direction; its beautifully choreographed action; its incredible scale—are radically diminished on television, which was the only way that it could be seen for a long time. On the small screen, we lose all sense of scope, leaving us mostly with the charisma of its two leads and conventional dramatic elements that James Cameron has never quite been able to master. Seeing Titanic in theaters again reminds us of why we responded to it in the first place. It’s also easier to appreciate that it was made at precisely the right moment in movie history, an accident of timing that allowed it to take full advantage of digital technology while still deriving much of its power from stunts, gigantic sets, and practical effects. If it were made again today, even by Cameron himself, it’s likely that much of this spectacle would be rendered on computers, which would be a major aesthetic loss. A huge amount of this film’s appeal lies in its physicality, in those real crowds and flooded stages, all of which can only be appreciated in the largest venue possible. Titanic is still big; it’s the screens that got small.

It’s also time to retire the notion that James Cameron is a bad screenwriter. It’s true that he doesn’t have any ear for human conversation, and that he tends to freeze up when it comes to showing two people simply talking—I’m morbidly curious to see what he’d do with a conventional drama, but I’m not sure that I want to see the result. Yet when it comes to structuring exciting stories on the largest possible scale, and setting up and delivering climactic set pieces and payoffs, he’s without equal. I’m a big fan of Christopher Nolan, for instance—I think he’s the most interesting mainstream filmmaker alive—but his films can seem fussy and needlessly intricate compared to the clean, powerful narrative lines that Cameron sets up here. (The decision, for instance, to show us a simulation of the Titanic’s sinking before the disaster itself is a masterstroke: it keeps us oriented throughout an hour of complex action that otherwise would be hard to understand.) Once the movie gets going, it never lets up. It moves toward its foregone conclusion with an efficiency, confidence, and clarity that Peter Jackson, or even Spielberg, would have reason to envy. And its production was one of the last great adventures—apart from The Lord of the Rings—that Hollywood ever allowed itself.

Despite James Cameron’s reputation as a terror on the set, I met him once, and he was very nice to me. In 1998, as an overachieving high school senior, I was a delegate at the American Academy of Achievement’s annual Banquet of the Golden Plate in Jackson Hole, Wyoming, an extraordinarily surreal event that I hope to discuss in more detail one of these days. The high point of the weekend was the banquet itself, a black-tie affair in a lavish indoor auditorium with the night’s honorees—a range of luminaries from science, politics, and the arts—seated in alphabetical order at the periphery of the room. One of them was James Cameron, who had swept the Oscars just a few months earlier. Halfway through the evening, leaving my own seat, I went up to his table to say hello, only to find him surrounded by a flock of teenage girls anxious to know what it was like to work with Leonardo DiCaprio. Seeing that there was no way of approaching him yet, I chatted for a bit with a man seated nearby, who hadn’t attracted much, if any, attention. We made small talk for a minute or two, but when I saw an opening with Cameron, I quickly said goodbye, leaving the other guest on his own. It was Dick Cheney.

Written by nevalalee

December 20, 2017 at 9:00 am

How to be useful

In his recent review in The New Yorker of a new collection of short stories by Susan Sontag, the critic Tobi Haslett quotes its author’s explanation of why she wrote her classic book Illness as Metaphor: “I wanted to be useful.” I was struck enough by this statement to look up the full version, in which Sontag explains how she approached the literary challenge of addressing her own experience with cancer:

I didn’t think it would be useful—and I wanted to be useful—to tell yet one more story in the first person of how someone learned that she or he had cancer, wept, struggled, was comforted, suffered, took courage…though mine was also that story. A narrative, it seemed to me, would be less useful than an idea…And so I wrote my book, wrote it very quickly, spurred by evangelical zeal as well as anxiety about how much time I had left to do any living or writing in. My aim was to alleviate unnecessary suffering…My purpose was, above all, practical.

This is a remarkable way to look at any book, and it emerged both from Sontag’s own illness and from her awareness of her peculiar position in the culture of her time, as Haslett notes: “Slung between aesthetics and politics, beauty and justice, sensuous extravagance and leftist commitment, Sontag sometimes found herself contemplating the obliteration of her role as public advocate-cum-arbiter of taste. To be serious was to stake a belief in attention—but, in a world that demands action, could attention be enough?”

Sontag’s situation may seem remote from that of most authors, but it’s a problem that every author faces when he or she decides to tackle a book, which usually amounts to a call for attention over action. We write for all kinds of reasons, some more admirable than others, and selecting one idea or project over another comes down to prioritizing such factors as our personal interests, commercial potential, and what we want to think about for a year or more of our lives. But as time goes by, I’ve found that Sontag’s test—that the work be useful—is about as sensible a criterion as any. I’ve had good and bad luck in both cases, but as a rule, whenever I’ve tried to be useful to others, I’ve done well, and whenever I haven’t, I’ve failed. Being useful doesn’t necessarily mean providing practical information or advice, although that’s a fine reason to write a book, but rather writing something that would have value even if you weren’t the one whose name was on the cover, simply because it deserves to exist. You often don’t know until long after you start if a project meets that standard, and it might even be a mistake to consciously pursue it. The best approach, in the end, might simply be to develop a lot of ideas in the hope that some small fraction will survive. I still frequently write just for my own pleasure, out of personal vanity, or for the desire to see something in print, but it only lasts if the result is also useful, so it’s worth at least keeping it in mind as a kind of sieve for deciding between alternatives. As Lin-Manuel Miranda once put it to Grantland, in words that have never ceased to resound in my head: “What’s the thing that’s not in the world that should be in the world?”

One of my favorite examples is the writer Euell Gibbons, who otherwise might seem less like Susan Sontag than any human being imaginable. As John McPhee writes in a short reminiscence, “The Forager,” in the New York Times:

Euell had begun learning about wild and edible vegetation when he was a small boy in the Red River Valley. Later, in the dust‐bowl era, his family moved to central New Mexico. They lived in a semi‐dugout and almost starved there. His father left in a desperate search for work. The food supply diminished until all that was left were a few pinto beans and a single egg, which no one would eat. Euell, then teen‐aged and one of four children, took a knapsack one morning and left for the horizon mountains. He came back with puffball mushrooms, piñon nuts, and fruits of the yellow prickly pear. For nearly a month, the family lived wholly on what he provided, and he saved their lives. “Wild food has meant different things to me at different times,” he said to me once. “Right then it was a means of salvation, a way to keep from dying.”

In years that followed, Euell worked as a cowboy. He pulled cotton. He was for a long time a hobo. He worked in a shipyard. He combed beaches. The longest period during which he lived almost exclusively on wild food was five years. All the while, across decades, he wished to be a writer. He produced long pieces of fiction and he had no luck…He passed the age of fifty with virtually nothing published. He saw himself as a total failure, and he had no difficulty discerning that others tended to agree.

What happened next defied all expectation. McPhee writes: “Finally, after listening to the advice of a literary agent, he sat down to try to combine his interests. He knew his subject first- and second-hand; he knew it backward to the botanies of the tribes. And now he told everybody else how to gather and prepare wild food.” The result was the book Stalking the Wild Asparagus, which became the first in a bestselling series. At times, Gibbons didn’t seem to know how to handle his own success, as McPhee recalls:

He would live to be widely misassessed. His books gave him all the money he would ever need. The deep poverty of his other years was not forgotten, though, and he took to going around with a minimum of $1,500 in his pocket, because with any less there, he said, he felt insecure. Whatever he felt, it was enough to cause him, in his last years, to appear on television munching Grape-Nuts—hard crumbs ground from tough bread—and, in doing so, he obscured his accomplishments behind a veil of commercial personality. He became a household figure of a cartoon sort. People laughed when they heard his name. All too suddenly, he stood for what he did not stand for.

But Gibbons also deserves to be remembered as a man who finally understood and embraced the admonition that a writer be useful. McPhee concludes: “He was a man who knew the wild in a way that no one else in this time has even marginally approached. Having brought his knowledge to print, he died the writer he wished to be.” Gibbons and Sontag might not have had much in common—it’s difficult to imagine them even having a conversation—but they both confronted the same question: “What book should I write?” And we all might have better luck with the answer if we ask ourselves instead: “What have I done to survive?”

Written by nevalalee

December 19, 2017 at 8:14 am

The Eye of the Skeksis

Every now and then, you’re able to date the precise moment when your life incrementally changed. For me, one of those turning points was January 9, 1983, when the documentary The World of the Dark Crystal aired on public television, a few weeks after the movie itself debuted in theaters. (This weekend marks the thirty-fifth anniversary of its initial release.) It seems implausible now that I would have watched it at the time, but fortunately, my dad taped it, and it must have lived in our house for years afterward, like a tiny imaginative bomb waiting for its chance to detonate. As I’ll mention in a second, our copy cut off the first four minutes of the documentary—it must have taken my dad that long to get the videocassette recorder set up—and I didn’t see it in its entirety until decades later. It was preserved for me by chance, and when I look at it today, it feels doubly precious. We’re living in an era when a series like The Lord of the Rings can offer dozens of hours of production footage, much of it beautifully presented, while even the most mediocre blockbusters usually provide a bonus disc packed with special features. The World of the Dark Crystal isn’t even an hour long, but it was enough to fuel my imagination for a lifetime. And it wasn’t just an element of what would eventually come to be known as an electronic press kit, or even an anomaly like Les Blank’s Burden of Dreams, but a labor of love in its own right, a document made by creative artists who were convinced that what they were doing was worth recording because it had the potential to change movies forever.

That isn’t how it worked out, but at least it changed me, and the moment in particular that I never forgot comes near the beginning of the documentary. Our copy of the tape abruptly opened with a shot of the artist Brian Froud, who provided the movie’s conceptual designs, wandering across the moor near his home in Devon. Shortly afterward, it cut to a sequence of Froud seated at his drafting table, working on a sketch of a Skeksis and musing on the soundtrack:

Jim [Henson] had feelings about what the major creatures were, and some of their characteristics, and it was my job to show how they looked. I always start with the eye—the eye is the focal point of all these characters. And for the Skeksis, they needed to have a penetrating stare….They are part reptile, part predatory bird, part dragon.

He drew rapidly for the camera, filling in the details around the eye before extending the illustration—with what struck me at the time as a startling flourish—into the downward curve of the mouth. Watching the movement of the pencil, I experienced what I can only describe as a moment of revelation. If nothing else, it was probably the first time that I’d ever seen an artist actually drawing, and it kindled something in me that has never entirely gone away.

I must have been about six years old when it really took hold, and I reacted much like any other kid when presented with this sort of stimulus: I imitated it. To be specific, I slavishly copied that one drawing, not just in its final shape, but in the process that Froud took to get there. I started with the eye, like he did, and then ritualistically added in the rest. It never would have occurred to me to do otherwise, and I suspect that I drew it hundreds of times, sometimes as a doodle in the margin of a notepad, occasionally more systematically, which doesn’t even include the countless other drawings that I made of creatures that were “part reptile, part predatory bird, part dragon.” It wasn’t so much a reaction to The Dark Crystal itself—which I liked, although not as much as Labyrinth—as to that brief glimpse of a creative mind expressed in the pencil on the page. Combined with a few technical tricks that I picked up from the show The Secret City, which is worth a blog post of its own, it was enough to turn me into a pretty good artist, at least by the standards of the second grade. (It’s worth noting that both The World of the Dark Crystal and The Secret City aired on public television, which is also where Jim Henson made his most lasting impact, and an argument in itself for defending it as a proving ground for the imaginations of the young.) I haven’t done a lot of art in recent years, except when sketching with my daughter, and I knew by the end of college that I didn’t have it in me to be a painter. But I’m grateful to have even a little of it, and I owe it largely to that chance encounter with a Skeksis.

I don’t doubt that there are kids who experienced the same kind of epiphany while watching the lovingly detailed profiles of conceptual designers John Howe and Alan Lee—Froud’s old collaborator—in the special features for The Lord of the Rings. The Hobbit provides hours more, and those featurettes, unlike so much else in those bloated box sets, remain fascinating and magical. (The life of a fantasy illustrator must not be a particularly lucrative one under most circumstances, and one of the small pleasures of watching the behind-the-scenes footage from these two trilogies is seeing Howe and Lee growing visibly more prosperous.) But something in the fragmentary nature of The World of the Dark Crystal was stimulating in itself. It wasn’t a textbook, but a series of hints, and it left me to fill in the gaps on my own. You can draw a straight line from that pencil drawing to my interest in science fiction and fantasy, not just as fan, but as someone with an interest in the practicalities of how it all gets done. The forms have changed, but the underlying impulse remains the same. And what really haunts me is the fact that the scene at the drawing table occurs just a minute and a half after our tape started, and my dad could easily have missed it. If it had taken him a few minutes longer to cue up the recorder that night, he might have skipped it entirely, and opened instead with the sequence in which the creatures that Froud designed were coming to life in Jim Henson’s workshop. And maybe I would have become a puppeteer.

Written by nevalalee

December 15, 2017 at 8:28 am

Science fiction studies


Astounding: John W. Campbell, Isaac Asimov, Robert A. Heinlein, L. Ron Hubbard, and the Golden Age of Science Fiction. Forthcoming from Dey Street Books, an imprint of HarperCollins, on August 14, 2018.

Selected Nonfiction

“Karl Rove’s Labyrinth.” The Daily Beast. November 20, 2012. Essay on Karl Rove’s surprising love of the Argentine writer Jorge Luis Borges.

“Lessons from The X-Files.” Salon. September 17, 2013. The twentieth anniversary of The X-Files and its lessons for modern television.

“Xenu’s Paradox: The Fiction of L. Ron Hubbard and the Making of Scientology.” Longreads. February 1, 2017. An overview of the science fiction and fantasy stories of the controversial founder of dianetics and the Church of Scientology. Featured on The A.V. Club on March 12, 2017.

Reviews of Classic Stories

Astounding Stories #1: Galactic Patrol
Astounding Stories #2: For Us, the Living
Astounding Stories #3: “The Legion of Time”
Astounding Stories #4: Sinister Barrier
Astounding Stories #5: Death’s Deputy and Final Blackout
Astounding Stories #6: “Microcosmic God” and “E for Effort”
Astounding Stories #7: “Mimsy Were the Borogoves”
Astounding Stories #8: The World of Null-A
Astounding Stories #9: “The Mule”
Astounding Stories #10: “Way in the Middle of the Air”
Astounding Stories #11: The Moon is Hell
Astounding Stories #12: “Izzard and the Membrane”
Astounding Stories #13: “The Cold Equations”
Astounding Stories #14: The Heinlein Juveniles
Astounding Stories #15: The Space Merchants
Astounding Stories #16: “Witches Must Burn”
Astounding Stories #17: The Thiotimoline Papers
Astounding Stories #18: “Noise Level”
Astounding Stories #19: They’d Rather Be Right

Blog Posts

“Asimov’s ABCs.” Isaac Asimov on the secret of group creativity. October 28, 2014.
“Pohl and the pulpsters.” Frederik Pohl and the world of the pulp writer. July 28, 2015.
“Who went there?” John W. Campbell and The Thing. March 2, 2016.
“Smoking on spaceships.” A short history of smoking in science fiction. March 15, 2016.
“The myth of the competent man.” Science fiction’s most persistent delusion. April 12, 2016.
“Back to the Futurians.” Science fiction fandom in the thirties as a social network. July 26, 2016.
“Days of Futurians Past.” New Fandom and the Futurians. July 27, 2016.
“Return to Dimension X.” The golden age of radio science fiction. August 9, 2016.
“Advertising the future.” A history of advertising in Astounding. September 8, 2016.
“Beyond cyberspace.” John W. Campbell, Norbert Wiener, and cybernetics. October 7, 2016.
“To be or not to be.” Alfred Korzybski’s ideas and their influence on science fiction. October 11, 2016.
“Fear of a female planet.” The absence of women in science fiction. December 7, 2016.
“The Slan solution.” The supermen of Slan, “Solution Unsatisfactory,” and dianetics. December 12, 2016.
“From Xenu to Xanadu.” L. Ron Hubbard and Donald Trump. February 2, 2017.
“A Hawk from a Handsaw.” Uri Geller, Robert Anton Wilson, and a few sinister hawks. February 15-17, 2017.
“The Imaginary Dr. Kutzman.” A lost refutation of dianetics by L. Ron Hubbard. February 23, 2017.
“The moon is a harsh fortress.” Hubbard’s “Fortress in the Sky” and its influence on Heinlein’s The Moon is a Harsh Mistress. February 27, 2017.
“The dianetics epidemic.” Dianetics as a viral phenomenon. March 2, 2017.
“The innumerable ways of being a man.” Sir Richard Francis Burton’s influence on Hubbard. March 8, 2017.
“Falls the Shadow.” John W. Campbell’s parallels to Orson Welles. March 17, 2017.
“The Mule and the Beaver.” The sources of Isaac Asimov’s remarkable productivity. March 22, 2017.
“The dark side of the moon.” Charles Manson and science fiction. March 24, 2017.
“The vision thing.” Two cinematic versions of “Who Goes There?” April 21, 2017.
“The acid test.” Attitudes toward LSD and other drugs in science fiction. April 27, 2017.
“Of a Fyre on the Moon.” The Fyre Festival and Voyage Beyond Apollo. May 1, 2017.
“Hubbard in the Wild.” L. Ron Hubbard’s sojourn in Alaska. May 25, 2017.
“The bed of the future.” Howard Hughes, Hugo Gernsback, Heinlein, and the ultimate bed. June 27, 2017.
“The science fiction sieve.” John W. Campbell and the boundaries of science fiction. June 28, 2017.
“The saucer people.” Flying saucers in Astounding. July 7, 2017.
“The search for the zone.” Twin Peaks and Heinlein’s “Universe.” July 10, 2017.
“Children of the Lens.” Science fiction and the early video game Spacewar. July 14, 2017.
“Bester of Both Worlds.” The genius of Alfred Bester. August 11, 2017.
“The creeps of the cosmos.” William S. Burroughs and Scientology. August 16, 2017.
“Handbook for morals.” The mass-buying tactics of the Church of Scientology. August 25, 2017.
“Asimov’s close encounter.” Asimov’s vendetta against Close Encounters of the Third Kind. August 30, 2017.
“The First Foundation.” Campbell, Asimov, Jack Williamson, and psychohistory. September 5-7, 2017.
“The passion of the pulps.” More on the absence of women in science fiction. September 12, 2017.
“Sci-Fi and Si.” Si Newhouse, Condé Nast, and Analog. October 2, 2017.
“Two against the gods.” Hubbard and William Bolitho’s Twelve Against the Gods. October 4, 2017.
“The Heirs of Sputnik.” Sputnik, science fiction, and the Cold War. October 6, 2017.
“The flicker effect.” W. Grey Walter, John W. Campbell, and the Dream Machine. October 10, 2017.
“When Del met Elron.” An encounter between Hubbard and comedy legend Del Close. October 20, 2017.
“The Strange Land.” More on Charles Manson and science fiction. November 20, 2017.

Written by nevalalee

December 8, 2017 at 8:25 am

The art of preemptive ingenuity

Yesterday, my wife drew my attention to the latest episode of the podcast 99% Invisible, which irresistibly combines two of my favorite topics—film and graphic design. Its subject is Annie Atkins, who has designed props and visual materials for such works as The Tudors and The Grand Budapest Hotel. (Her account of how a misspelled word nearly made it onto a crucial prop in the latter film is both hilarious and horrifying.) But my favorite story that she shares is about a movie that isn’t exactly known for its flashy art direction:

The next job I went onto—it would have been Spielberg’s Bridge of Spies, which was a true story. We made a lot of newspapers for that film, and I remember us beginning to check the dates against the days, because I wanted to get it right. And then eventually the prop master said to me, “Do you know what, I think we’re just going to leave the dates off.” Because it wasn’t clear [what] sequence…these things were going to be shown in. And he said, you know, if you leave the dates off altogether, nobody will look for it. But if you put something there that’s wrong, then it might jump out. So we went with no dates in the end for those newspapers.

As far as filmmaking advice is concerned, this is cold, hard cash, even if I’ll never have the chance to put it into practice for myself. And I especially like the fact that it comes out of Bridge of Spies, a writerly movie with a screenplay by none other than the Coen Brothers, but which was still subject to decisions about its structure as late in the process as the editing stage.

Every movie, I expect, requires some degree of editorial reshuffling, and experienced directors will prepare for this during the production itself. The absence of dates on newspapers is one good example, and there’s an even better one in the book The Conversations, which the editor Walter Murch relates to the novelist Michael Ondaatje:

One thing that made it possible to [rearrange the order of scenes] in The Conversation was Francis [Coppola]’s belief that people should wear the same clothes most of the time. Harry is almost always wearing that transparent raincoat and his funny little crepe-soled shoes. This method of using costumes is something Francis had developed on other films, quite an accurate observation. He recognized that, first of all, people don’t change clothes in real life as often as they do in film. In film there’s a costume department interested in showing what it can do—which is only natural—so, on the smallest pretext, characters will change clothes. The problem is, that locks filmmakers into a more rigid scene structure. But if a character keeps the same clothes, you can put a scene in a different place and it doesn’t stand out.

Murch observes: “There’s a delicate balance between the timeline of a film’s story—which might take place over a series of days or weeks or months—and the fact that the film is only two hours long. You can stretch the amount of time somebody is in the same costume because the audience is subconsciously thinking, Well, I’ve only been here for two hours, so it’s not strange that he hasn’t changed clothes.”

The editor concludes: “It’s amazing how consistent you can make somebody’s costume and have it not stand out.” (Occasionally, a change of clothes will draw attention to editorial manipulation, as one scene is lifted out from its original place and slotted in elsewhere. One nice example is in Bullitt, where we see Steve McQueen in one scene at a grocery store in his iconic tweed coat and blue turtleneck, just before he goes home, showers, and changes into those clothes, which he wears for the rest of the movie.) The director Judd Apatow achieves the same result in another way, as his longtime editor Brent White notes: “[He’ll] have something he wants to say, but he doesn’t know exactly where it goes in the movie. Does it service the end? Does it go early? So he’ll shoot the same exact scene, the same exchange, with the actors in different wardrobes, so that I can slot it in at different points.” Like the newspapers in Bridge of Spies, this all assumes that changes to the plan will be necessary later on, and it prepares for them in advance. Presumably, you always hope to keep the order of scenes from the script when you cut the movie together, but the odds are that something won’t quite work when you sit down to watch the first assembly, so you build in safeguards to allow you to fix these issues when the time comes. If your budget is high enough, you can include reshoots in your shooting schedule, as Peter Jackson does, while the recent films of David Fincher indicate the range of problems that can be solved with digital tools in postproduction. But when you lack the resources for such expensive solutions, your only recourse is to be preemptively ingenious on the set, which forces you to think in terms of what you’ll want to see when you sit down to edit the footage many months from now.

This is the principle behind one of my favorite pieces of directorial advice ever, which David Mamet provides in the otherwise flawed Bambi vs. Godzilla:

Always get an exit and an entrance. More wisdom for the director in the cutting room. The scene involves the hero sitting in a café. Dialogue scene, blah blah blah. Well and good, but when you shoot it, shoot the hero coming in and sitting down. And then, at the end, shoot him getting up and leaving. Why? Because the film is going to tell you various things about itself, and many of your most cherished preconceptions will prove false. The scene that works great on paper will prove a disaster. An interchange of twenty perfect lines will be found to require only two, the scene will go too long, you will discover another scene is needed, and you can’t get the hero there if he doesn’t get up from the table, et cetera. Shoot an entrance and an exit. It’s free.

I learned a corollary from John Sayles: at the end of the take, in a close-up or one-shot, have the speaker look left, right, up, and down. Why? Because you might just find you can get out of the scene if you can have the speaker throw the focus. To what? To an actor or insert to be shot later, or to be found in (stolen from) another scene. It’s free. Shoot it, ’cause you just might need it.

This kind of preemptive ingenuity, in matters both large and small, is what really separates professionals from amateurs. Something always goes wrong, and the plan that we had in mind never quite matches what we have in the end. Professionals don’t always get it right the first time, either—but they know this, and they’re ready for it.

The Wrath of Cohn, Part 2

In the June 8, 1992 issue of The New Republic, the journalist Carl Bernstein published a long essay titled “The Idiot Culture.” Twenty years had passed since Watergate, which had been followed by what Bernstein called “a strange frenzy of self-congratulation and defensiveness” on the part of the press about how it had handled the story. Bernstein felt that the latter was more justified than the former, and he spent four pages decrying what he saw as an increasing obsession within the media with celebrity, gossip, and the “sewer” of political discourse. He began by noting that the investigation by the Washington Post was based on the unglamorous work of knocking on doors and tracking down witnesses, far from the obvious centers of power, and that the Nixon administration’s response was “to make the conduct of the press the issue in Watergate, instead of the conduct of the president and his men” and to dismiss the Post as “a fountain of misinformation.” Bernstein observed that both Ronald Reagan and George H.W. Bush had displayed a Nixonian contempt for the press, but the media itself hadn’t gone out of its way to redeem itself, either. And he reserved his harshest words for what he saw as the nadir of celebrity culture:

Last month Ivana Trump, perhaps the single greatest creation of the idiot culture, a tabloid artifact if ever there was one, appeared on the cover of Vanity Fair. On the cover, that is, of Condé Nast’s flagship magazine, the same Condé Nast/Newhouse/Random House whose executives will yield to nobody in their solemnity about their profession, who will tell you long into the night how seriously in touch with American culture they are, how serious they are about the truth.

By calling Ivana Trump “the single greatest creation of the idiot culture,” Bernstein pulled off the rare trick of managing to seem both eerily prescient and oddly shortsighted at the exact same time. In fact, his article, which was published a quarter of a century ago, returned repeatedly to the figure of Donald Trump. As an example of the media’s increasing emphasis on titillation, he cited the question that Diane Sawyer asked Marla Maples, Trump’s girlfriend at the time, on ABC News: “All right, was it really the best sex you ever had?” He also lamented: “On the day that Nelson Mandela returned to Soweto and the allies of World War II agreed to the unification of Germany, the front pages of many ‘responsible’ newspapers were devoted to the divorce of Donald and Ivana Trump.” To be fair, though, he did sound an important warning:

Now the apotheosis of this talk-show culture is before us…A candidate created and sustained by television…whose willingness to bluster and pose is far less in tune with the workings of liberal democracy than with the sumo-pundits of The McLaughlin Group, a candidate whose only substantive proposal is to replace representative democracy with a live TV talk show for the entire nation. And this candidate, who has dismissively deflected all media scrutiny with shameless assertions of his own ignorance, now leads both parties’ candidates in the polls in several major states.

He was speaking, of course, of Ross Perot. And while it’s easy to smile at a time when the worst of political television was The McLaughlin Group, it’s also a reminder of how little has changed, on the anniversary of the election of the man whom Bernstein has called “dangerous beyond any modern presidency.” (I also can’t resist pointing out that the Ivana Trump cover of Vanity Fair included this headline in the lower right corner: “Hillary Clinton: Will She Get to the White House With or Without Him?” And this was half a year before Bill Clinton was even elected president.)

Yet it’s the “Condé Nast/Newhouse/Random House” nexus that fascinates and troubles me the most. In the biography Newhouse, Thomas Maier quotes an unnamed source who worked on The Art of the Deal, which Si Newhouse aggressively packaged for the protégé of his friend Roy Cohn: “It’s obvious that this book was like Vanity Fair, the preeminent example of a certain instinct that Si has for a kind of glamour and power and public presence. It’s like Trump was a kind of shadow for him, in the sense that Si is so shy and so bumbling with words and so uncomfortable in social situations. I think his attraction to Trump was that he was so much his opposite. So out there, so aggressive, so full of himself.” More pragmatically, Trump was also a major advertiser. Maier quotes the editor Tina Brown, speaking way back in 1986: “If you were producing a funny magazine, you’d have to go for people like Trump…[But] there is also that awful commercial fact that you can’t make fun of Calvin Klein, Donald Trump, and Tiffany.” And this wasn’t just theoretical. Maier writes:

Those who were truly powerful in its world were granted immunity from any real journalistic scrutiny. When Donald Trump was a high-flying entrepreneur, he learned that Vanity Fair was preparing a short item about how the doorknobs were falling off in Trump Tower. Shortly after this journalistic enterprise was launched, however, Brown received a call from Si Newhouse, who had gotten a call from Trump himself…Newhouse was not going to let Trump’s advertising cease because of some silly little item. (Only after he suffered a huge financial loss in the 1990s did the magazine dare to examine Trump in any critical way.)

Given the vast reach of Newhouse’s media empire, this is truly frightening. And it’s hard not to see the hand of Roy Cohn, whose fifty-second birthday in 1979 seems to have been the moment when Newhouse and Trump first found themselves in the same room. “More than anyone else outside the direct kinship of blood,” Maier writes, “Cohn seemed to hold the keys to Si Newhouse’s world.” Cohn prided himself on being a power broker, and he eagerly used Newhouse’s publications to reward his patrons and punish his enemies. (There were also more tangible compensations. According to Maier, Sam Newhouse, Sr. once wrote Cohn a check for $250,000 to get him out of a financial jam, much as Si would later do, at Cohn’s request, for Norman Mailer.) And this intimacy was expressed in public in ways that must have seemed inexplicable to ordinary readers. On April 3, 1983, Cohn appeared on the cover of Newhouse’s Parade, which had the highest circulation of any magazine in the world, with a story titled “You Can Beat the IRS.” Cohn spent much of the article mocking the accusations of tax evasion that had been filed against him, and he offered tips about keeping your financial information private that were dubious even at the time:

Keep one step ahead of them: If there is a problem, change bank accounts so they can’t grab your funds by knowing from your records where you bank. If they get canceled checks and information from your bank, they will be in a position to know much more about your life than is acceptable.

And this was just a dry run. Cohn was serving as a placeholder, first for his patron, then for his ultimate pupil. Tomorrow, I’ll be looking at how Cohn and Newhouse are part of a direct line that connects Reagan to Trump, and what this means for us today.

Written by nevalalee

November 8, 2017 at 8:29 am

The sound of the teletypes

A few days ago, after a string of horrifying sexual harassment accusations were leveled against the political journalist Mark Halperin, HBO announced that it was canceling a planned miniseries based on an upcoming book by Halperin and John Heilemann about last year’s presidential election. (Penguin, their publisher, pulled the plug on the book itself later that day.) It’s hard to argue with this decision, which also raises the question of why anyone thought that there would be demand for a television series on this subject at all. We’re still in the middle of this story, which shows no sign of ending, and the notion that viewers would voluntarily submit themselves to a fictionalized version of it—on top of everything else—is hard to believe. But it isn’t the first time that this issue has come up. Over four decades ago, while working on the adaptation of Woodward and Bernstein’s All the President’s Men, the screenwriter William Goldman ran up against the same skepticism, as he recounts in his great book Adventures in the Screen Trade:

When I began researching the Woodward-Bernstein book, before it was published, it seemed, at best, a dubious project. Politics were anathema at the box office, the material was talky, there was no action, etc., etc. Most of all, though, people were sick to fucking death of Watergate. For months, whenever anyone asked me what I was working on, and I answered, there was invariably the same reply: “Gee, don’t you think we’ve heard enough about Watergate?” Repeated often enough, that can make you lose confidence.

He concludes: “Because, of course, we had. Had enough and more than enough. Years of headlines, claims and disclaimers, lies, and occasional clarifying truths.”

This certainly sounds familiar. And even if that Trump miniseries never happens, we can still learn a lot from the effort by one of America’s smartest writers to come to terms with the most complicated political story of his time. When Goldman was brought on board by Robert Redford, he knew that he could hardly turn down the assignment, but he was uncomfortably aware of the challenges that it would present: “There were all those goddam names that no one could keep straight: Stans and Sturgis and Barker and Segretti and McCord and Kalmbach and Magruder and Kleindienst and Strachan and Abplanalp and Rebozo and backward reeled the mind.” (If we’re lucky, there will come a day when Manafort and Gates and Goldstone and Veselnitskaya and Page and even Kushner will blur together, too.) As he dug into the story, he was encouraged to find a lot of interesting information that nobody else seemed to know. There had actually been an earlier attempt to break into the Democratic National Committee offices at the Watergate, for instance, but the burglars had to turn back because they had brought the wrong set of keys. Goldman was so taken by this story that it became the opening scene in his first draft, as a way of alerting viewers that they had to pay attention, although he later admitted that it was perhaps for the best that it was cut: “If the original opening had been incorporated, and you looked at it today, I think you would wonder what the hell it was doing there.” Despite such wrong turns, he continued to work on the structure, and as he was trying to make sense of it, he asked Bob Woodward to list what he thought were the thirteen most important events in the Watergate story. Checking what he had written so far, he saw that he had included all of them already: “So even if the screenplay stunk, at least the structure would be sound.”

As it turned out, the structure would be his primary contribution to the movie that eventually won him an Academy Award. After laboring over the screenplay, Goldman was infamously ambushed at a meeting by Redford, who informed him that Carl Bernstein and Nora Ephron had secretly written their own version of the script, and that he should read it. (Goldman’s account of the situation, which he calls “a gutless betrayal” by Redford, throws a bit of shade that I’ve always loved: “One other thing to note about [Bernstein and Ephron’s] screenplay: I don’t know about real life, but in what they wrote, Bernstein was sure catnip to the ladies.”) From his perspective, matters got even worse after the hiring of director Alan Pakula, who asked him for multiple versions of every scene and kept him busy with rewrites for months. A subplot about Woodward’s love life, which Goldman knew would never make it into the film, turned out to be a huge waste of everyone’s time. Finally, he says, the phone stopped ringing, and he didn’t have any involvement with the film’s production. Goldman recalls in his book:

I saw it at my local neighborhood theater and it seemed very much to resemble what I’d done; of course there were changes but there are always changes. There was a lot of ad-libbing, scenes were placed in different locations, that kind of thing. But the structure of the piece remained unchanged. And it also seemed, with what objectivity I could bring to it, to be well directed and acted, especially by the stars.

In the end, however, Goldman says that if he could live his entire movie career over again, “I’d have written exactly the screenplays I’ve written. Only I wouldn’t have come near All the President’s Men.”

But the thing that sticks in my head the most about the screenplay is the ending. Goldman writes: “My wife remembers my telling her that my biggest problem would be somehow to make the ending work, since the public already knew the outcome.” Here’s how he solved it:

Bernstein and Woodward had made one crucial mistake dealing with the knowledge of one of Nixon’s top aides. It was a goof that, for a while, cost them momentum. I decided to end the story on their mistake, because the public already knew they had eventually been vindicated, and one mistake didn’t stop them. The notion behind it was to go out with them down and let the audience supply their eventual triumph.

In practice, this meant that the movie doesn’t even cover the book’s second half, which is something that most viewers don’t realize. (In his later memoir Which Lie Did I Tell?, Goldman writes: “In All the President’s Men, we got great credit for our faithfulness to the Woodward-Bernstein book. Total horseshit: the movie ended halfway through the book.”) Instead, it gives us the unforgettable shot of the reporters working in the background as Nixon’s inauguration plays on television, followed by the rattle of the teletype machines covering the events of the next two years. The movie trusts us to fill in the blanks because we know what happened next, and it works brilliantly. If I bring this up now, it’s because the first charges have just been filed in the Mueller investigation. This is only the beginning. But when the Trump movie gets made, and it probably will, today might be the very last scene.

Bringing up the bodies

with one comment

For the last few weeks, my wife and I have been slowly working our way through Ken Burns and Lynn Novick's devastating documentary series Vietnam. The other night, we finished the episode "Resolve," which includes an extraordinary sequence—you can find it here around the twenty-five-minute mark—about the war's use of questionable metrics. As narrator Peter Coyote intones: "Since there was no front in Vietnam, as there had been in the first and second World Wars, since no ground was ever permanently won or lost, the American military command in Vietnam—MACV—fell back more and more on a single grisly measure of supposed success: counting corpses. Body count." The historian and retired Army officer James Willbanks observes:

The problem with the war, as it often is, are the metrics. It is a situation where if you can’t count what’s important, you make what you can count important. So, in this particular case, what you could count was dead enemy bodies.

And as the horrifying images of stacked bodies fill the screen, we hear the quiet, reasonable voice of Robert Gard, a retired lieutenant general and former chairman of the board of the Center for Arms Control and Non-Proliferation: “If body count is the measure of success, then there’s the tendency to count every body as an enemy soldier. There’s a tendency to want to pile up dead bodies and perhaps to use less discriminate firepower than you otherwise might in order to achieve the result that you’re charged with trying to obtain.”

These days, we casually use the phrase "body count" to describe violence in movies and video games, and I was startled to realize how recent the term really is—the earliest reported instance is from 1962, and the oldest results that I can find in a Google Books search are from the early seventies. (Its first use as a book's title, as far as I can determine, is for the memoir of William Calley, the officer convicted of murder for his involvement in the My Lai massacre.) Military metaphors have a way of seeping into everyday use, in part because of their vividness and, perhaps, because we all like to think of ourselves as fighting in one war or another, but after watching Vietnam, I think that "body count" ought to be forcibly restored to its original connotations. It doesn't take a lot of introspection to see that it was a statistic that was only possible in a war in which the enemy could be easily dehumanized, and that it encouraged a lack of distinction between combatants and civilians. Like most faulty metrics, it created a toxic set of incentives from the highest levels of command to the soldiers on the ground. As the full extent of the war's miscalculations grew clearer, these facts became hard to ignore, and the term itself came to encapsulate the mistakes and deceptions of the conflict as a whole. Writing in Playboy in 1982, Philip Caputo called it "one of the most hideous, morally corrupting ideas ever conceived by the military mind." Yet most of its emotional charge has since been lost. Words matter, and as the phrase's significance is obscured, the metric itself starts to creep back. And the temptation to fall back on it increases in response to a confluence of specific factors, as a country engages in military action in which the goals are unclear and victory is poorly defined.

As a result, it’s no surprise that we’re seeing a return to body count. As far back as 2005, Bradley Graham of the Washington Post reported: “The revival of body counts, a practice discredited during the Vietnam War, has apparently come without formal guidance from the Pentagon’s leadership.” More recently, Reed Richardson wrote on FAIR:

In the past few years, official body count estimates have made a notable comeback, as U.S. military and administration officials have tried to talk up the U.S. coalition’s war against ISIS in Syria and Iraq…For example, last August, the U.S. commander of the Syrian-Iraq war garnered a flurry of favorable coverage of the war when he announced that the coalition had killed 45,000 ISIS militants in the past two years. By December, the official ISIS body count number, according to an anonymous “senior U.S. official,” had risen to 50,000 and led headlines on cable news. Reading through that media coverage, though, one finds little skepticism about the figures or historical context about how these killed in action numbers line up with the official estimates of ISIS’s overall size, which have stayed stubbornly consistent year after year. In fact, the official estimated size of ISIS in 2015 and 2016 averaged 25,000 fighters, which means the U.S. coalition had supposedly wiped out the equivalent of its entire force over both years without making a dent in its overall size.

Richardson sums up: “As our not-too-distant past has clearly shown, enemy body counts are a handy, hard-to-resist tool that administrations of both parties often use for war propaganda to promote the idea we are ‘winning’ and to stave off dissent about why we’re fighting in the first place.”

It’s worth pointing out, as Richardson does, that such language isn’t confined to any one party, and it was equally prevalent during the Obama administration. But we should be even more wary of it now. (Richardson writes: “In February, Gen. Tony Thomas, the commander of US Special Operations Command, told a public symposium that 60,000 ISIS fighters had been killed. Thomas added this disingenuous qualifier to his evidence-free number: ‘I’m not that into morbid body count, but that matters.’”) Trump has spent his entire career inflating his numbers, from his net worth to the size of his inauguration crowds, and because he lacks a clear grasp of policy, he’s more inclined to gauge his success—and the lack thereof by his enemies—in terms that lend themselves to the most mindless ways of keeping score, like television ratings. He’s also fundamentally disposed to claim that everything that he does is the biggest and the best, in the face of all evidence to the contrary. This extends to areas that can’t be easily quantified, like international relations, so that every negotiation becomes a zero-sum game in which, as Joe Nocera put it a few years ago: “In every deal, he has to win and you have to lose.” It encourages Trump and his surrogates to see everything as a war, even if it leads them to inflict just as much damage on themselves, and the incentives that he imposes on those around him, in which no admission of error is possible, drag down even the best of his subordinates. And we’ve seen this pattern before. As the journalist Joe Galloway says in Vietnam: “You don’t get details with a body count. You get numbers. And the numbers are lies, most of ‘em. If body count is your success mark, then you’re pushing otherwise honorable men, warriors, to become liars.”

Written by nevalalee

October 24, 2017 at 8:15 am

The screenwriter paradox

leave a comment »

A few weeks ago, I had occasion to discuss “Time Risk,” a huge blog post—it’s the length of a short book—by the screenwriter Terry Rossio. It’s endlessly quotable, and I encourage you to skim it yourself, although you might come away with the impression that the greatest form of time risk is trying to write movies at all. Rossio spends much of the piece encouraging you to write a novel or make an animated short instead, and his most convincing argument is basically unanswerable:

Let’s examine the careers of several brand-name feature screenwriters, to see how they did it. In the same way we can speak of a Stephen King novel, or a Neil Simon play, we can talk about the unique qualities of a Woody Allen screenplay—Whoops, wait. Allen is best known as a director. Okay, how about a Lawrence Kasdan script—Whoops, same thing. Kasdan gained fame, even for his screenwriting, through directing his own work. Let’s see, James Cameron, George Lucas, Christopher Nolan, Nora Ephron, Coen Brothers, John Milius, Cameron Crowe, hmn—

Wait! A Charlie Kaufman screenplay. Thank goodness for Charlie Kaufman, or I wouldn’t be able to think of a single brand-name screenwriter working today, who didn’t make their name primarily through directing. Okay, perhaps Aaron Sorkin, but he made his main fame in plays and television. Why so few? Because—screenwriters do the bulk of their work prior to the green light. Cameras not rolling. Trying to get films made. They toil at the wrong end of the time risk curve, taking on time risk in a myriad of forms.

As Rossio memorably explains a little later on: “It’s only when cameras are rolling that power accumulates, and brands are established.” I found myself thinking about this while reading Vulture’s recent list of the hundred best screenwriters of all time, as determined by forty of their fellow writers, including Diablo Cody, Zak Penn, Wesley Strick, Terence Winter, and a bunch of others who have achieved critical acclaim and name recognition without being known predominantly for directing. And who did they pick? The top ten are Billy Wilder, Joel and Ethan Coen, Robert Towne, Quentin Tarantino, Francis Ford Coppola, William Goldman, Charlie Kaufman, Woody Allen, Nora Ephron, and Ernest Lehman. Of the ten, only Goldman has never directed a movie, and of the others, only Kaufman, Towne, and Lehman are primarily known for their screenwriting. That’s forty percent. And the rest of the list consists mostly of directors who write. Glancing over it, I find the following who are renowned mostly as writers: Aaron Sorkin, Paddy Chayefsky, Frances Marion, Buck Henry, Ruth Prawer Jhabvala, Bo Goldman, Eric Roth, Steven Zaillian, Callie Khouri, Richard Curtis, Dalton Trumbo, Frank Pierson, Cesare Zavattini, Norman Wexler, Waldo Salt, Melissa Mathison, Herman J. Mankiewicz, Alvin Sargent, Ben Hecht, Scott Frank, Jay Presson Allen, John Logan, Guillermo Arriaga, Horton Foote, Leigh Brackett, Lowell Ganz, Babaloo Mandel, David Webb Peoples, Burt Kennedy, Charles Lederer, John Ridley, Diablo Cody, and Mike White. Borderline cases include Paul Schrader, David Mamet, Elaine May, Robert Benton, Christopher McQuarrie, and Shane Black. Even when you throw these names back into the hopper, the “pure” screenwriters number maybe four in ten. And this is a list compiled from the votes of writers who have every reason to highlight the work of their underappreciated colleagues.

So why do directors dominate? I can think of three possible reasons. The first, and perhaps the most likely, is that in a poll like this, a voter’s mind is more likely to turn to a more famous name at the expense of equally deserving candidates. Hence the otherwise inexplicable presence on the list of Steven Spielberg, whose only two credits as a screenwriter, Close Encounters and A.I., owe a lot more, respectively, to Paul Schrader and Stanley Kubrick. Another possibility is that Hollywood is structured to reward writers by turning them into directors, which implies that many of the names here are just screenwriters who ascended. This would be a tempting theory, if it weren’t for the presence of so many auteurs—Welles, Tarantino, the Coens—who started out directing their own screenplays and never looked back. And the third explanation is the one that Rossio offers: “[Screenwriters] toil at the wrong end of the time risk curve.” Invisibility, fungibility, and the ability to do competent work while keeping one’s head down are qualities that the system encourages, and it’s only in exceptional cases, after a screenwriter directs a movie or wins an Oscar, that he or she is given permission to be noticed. (Which doesn’t mean that there weren’t simply some glaring omissions. I’m a little stunned by the absence of Emeric Pressburger, who I think can be plausibly set forth as the finest screenwriter of all time. It’s possible that his contributions have been obscured by the fact that he and Michael Powell were credited as writer, producer, and director of the movies that they made as the Archers, but the division of labor seems fairly clear. And I don’t think any other writer on this list has three scripts as good as those for The Red Shoes, The Life and Death of Colonel Blimp, and A Canterbury Tale, along with your choice of A Matter of Life and Death, Black Narcissus, The Small Back Room, and I Know Where I’m Going!)

The one glaring exception is Joe Eszterhas, who became a household name, along with his rival Shane Black, as the two men traded records throughout the nineties for the highest price ever paid for a script. As he tells it in his weirdly riveting book The Devil’s Guide to Hollywood:

I read about Shane's sale [for The Last Boy Scout]—and my record being broken—on the front page of the Los Angeles Times while I was vacationing at the Kahala Hilton in Hawaii. Shane's sale pissed me off. I wanted my record back. I wanted to see an article on the front page of the Los Angeles Times about me setting a new record. I flew home from Hawaii and sat down immediately and started writing the most commercial script I could think of. Twelve days later, I had my record back. I had the article on the front page of the Los Angeles Times about my new record. And I had my $3 million.

The script was Basic Instinct. Would it have been enough to make Eszterhas famous if he hadn’t been paid so much for it? I don’t know—although it’s worth noting that he had previously held the record for City Hall, which was never made, and Big Shots, which nobody remembers, and he sold millions of dollars’ worth of other screenplays that never got produced. And the moment that made it all possible has passed. Eszterhas didn’t make the Vulture list; studios are no longer throwing money at untested properties; and even a monster sale doesn’t guarantee anything. The current record is still held by the script for Déjà Vu, which sold for $3 million against $5 million over a decade ago, and it serves as a sort of A/B test to remind us how much of success in Hollywood is out of anyone’s hands. There were two writers on Déjà Vu. One was Bill Marsilii, who hasn’t been credited on a movie since. The other was Terry Rossio.

The playboy and the playwright

with 2 comments

In 1948, the playwright and screenwriter Samson Raphaelson spent four months teaching a legendary writing course at the University of Illinois. His lectures were published as The Human Nature of Playwriting, a book that until recently was remarkably difficult to find—I ended up photographing every page of it in the reading room of the Newberry Library. (A digital edition is now available for eight dollars on Kindle, which is a real bargain.) It’s as much about living a meaningful life as it is about becoming a good writer, and my favorite passage is Raphaelson’s praise of those who live by their wits:

I intend to gamble to my dying day on my capacity to provide bread and butter, a roof and an overcoat. That kind of gambling, where you pit yourself against the primary hazards of life, is something I believe in. Not merely for writers, but for everyone. I think security tends to make us timid. You do well at something, you know you can continue doing well at it, and you hesitate about trying anything else. Then you begin to put all your energies into protecting and reinforcing what you have. You become conservative and face all the dangers of conservatism in an age when revolutions, seen and unseen, are occurring every day.

One of the students in his class was the young Hugh Hefner, who was twenty-two years old. And the more I think about Hefner’s implausible career, which ended yesterday, the more I suspect that he listened intently to Raphaelson, even if his inner life was shaped less by the stage than by the movies. In Thy Neighbor’s Wife, Gay Talese writes of Hefner’s teenage days working as an usher at the Rockne Theater in Chicago: “As he stood watching in the darkened theater, he often wished that the lights would never turn on, that the story on the screen would continue indefinitely.”

And Hefner's improbable existence starts to make more sense if we see him as the star of a movie that he was furiously writing in real time. These impulses were central to his personality, as Talese notes:

Not content with merely presenting fantasy, [Hefner] wished to experience it, connect with it, to synthesize his strong visual sense with his physical drives, and to manufacture a mood, a love scene, that he could both feel and observe…He was, and had always been, visually aware of whatever he did as he did it. He was a voyeur of himself. He acted at times in order to watch. Once he allowed himself to be picked up by a homosexual in a bar, more to see than to enjoy sex with a man. During Hefner’s first extramarital affair, he made a film of himself making love to his girlfriend, a 16mm home movie that he keeps with cartons of other personal documents and mementos, photo albums, and notebooks that depict and describe his entire personal life.

Talese observes elsewhere that as Playboy grew in popularity, Hefner dressed the set with the obsessiveness of an experienced stage manager:

The reclusive Hefner was now beginning to reveal himself in his own pages…by inserting evidence of his existence in the backgrounds of nude photographs that were shot exclusively for Playboy. In a picture of a young woman taking a shower, Hefner’s shaving brush and comb appeared on the bathroom sink. His tie was hung near the mirror. Although Hefner was now presenting only the illusion of himself as the lover of the women in the pictures, he foresaw the day when, with the increasing power of his magazine, he would truly possess these women sexually and emotionally; he would be realizing his readers’ dreams, as well as his own, by touching, wooing, and finally penetrating the desirable Playmate of the Month.

“[Hefner] saw himself as a fantasy matchmaker between his male readers and the females who adorned his pages,” Talese writes, and the logical conclusion was to assume this role in reality, as a kind of Prospero composing encounters for real men and women. In The Human Nature of Playwriting, Raphaelson advises:

If you start writing and suddenly it isn’t going where you want it to go, what you expected to happen can’t happen, and you are within five pages of your second-act curtain and you’re stuck, there is a procedure which I have found invaluable. I make a list of my principal characters and check to see if each character has had a major scene with every other character, and by “major” I mean a scene in which they are in conflict and explore each other…I would say a good play, all other things being equal, should have thorough exploration of each other by all the major characters.

Hefner clearly conceived of the Playboy Mansion as a stage where such "thorough exploration" could take place, and its habitués included everyone from Gene Siskel to Shel Silverstein. The Playboy offices also attracted a curious number of science fiction writers, including Ray Russell and my hero, Robert Anton Wilson, who answered the letters in the Playboy Forum as an associate editor for five years. (Wilson writes in Cosmic Trigger: "You all want to know, of course, does Hef really fuck all the Playmates, and is he really homosexual…We have no real inside information—but our impression is that Hef has made love to a lot of the Playmates, though by no means all of them, and that he is not homosexual.") On September 2, 1962, after participating in a symposium on the future, Robert A. Heinlein attended a party at the mansion, which he later recalled:

This fabulous house illustrated a couple of times in Playboy—and it really is fabulous, with a freeform swimming pool in the basement, a bar under that with a view window into the pool, and all sorts of weird and wonderful fancies…I saw my chum Shel Silverstein…I got into a long, drunken, solemn discussion with Hefner in the bar and stayed until 7:30am—much too late or early, both from health and from standpoint of proper behavior of a guest. I like Hefner very much—my kind of son of a bitch. No swank at all and enjoying his remarkable success.

But it can be dangerous when a man creates a dream, walks into it, and invites the rest of us to follow. Hefner sometimes reminds me of John Updike—another aspiring cartoonist who took the exploration of extramarital sex as his artistic territory—but he's also uncomfortably reminiscent of another famous figure. Talese writes: "Although there were numerous men who were far wealthier than Hefner, the public was either unaware or unenvious of them since they rarely appeared on television and never called attention to the fact that they were enjoying themselves." It's hard to read these words now without thinking at once of Donald Trump, whose victory over Ted Cruz in the primaries Hefner hailed as "a sexual revolution in the Republican Party." Like Trump, Hefner became a caricature of himself over time, perhaps failing to heed Raphaelson's warning: "When you make money and are known as being a competent and well-heeled fellow, it's natural to accept yourself at that value and to be horrified at the thought that you should ever again be broke—that is, that anyone should know of it." And Talese's description of Hefner in the sixties carries a new resonance today:

Hugh Hefner saw himself as the embodiment of the masculine dream, the creator of a corporate utopia, the focal point of a big-budget home movie that continuously enlarged upon its narcissistic theme month after month in his mind—a film of unfolding romance and drama in which he was simultaneously the producer, the director, the writer, the casting agent, the set designer, and the matinee idol and lover of each desirable new starlet who appeared on cue to enhance, but never upstage, his preferred position on the edge of satiation.

This sounds a lot like our current president. Trump had a long association with Playboy, and while we may never know how much of his personality was shaped in some way by Hefner, I suspect that it was just as profound as it was for countless other American males of his generation. It might seem a stretch to draw the line from Raphaelson to Hefner to Trump—but we’re all part of the play now. And the curtain hasn’t fallen yet.

The desolation of slog

with 2 comments

Over the last few months, I've developed a hobby that I'd have trouble justifying even to myself—I've spent countless hours watching the special features for Peter Jackson's Hobbit trilogy, a series that I don't even like. (It would be nice to pretend that I've been celebrating the eightieth anniversary of the publication of J.R.R. Tolkien's original novel, which took place last week, but I frankly wasn't even aware of it until the other day.) My deep dive into Hobbit featurettes came out of a confluence of circumstances that I doubt will ever recur. I've always loved the production videos for The Lord of the Rings, which I've compared elsewhere to a film school in a box set, and for years, they've served as my evening comfort food of choice, especially on days when I'm too tired from work and parenting to do anything but stare blankly at a television screen. During a period when I was exceptionally busy with the book, I worked through most of them yet again, proceeding backward from The Return of the King to Fellowship. Before long, though, I'd burned through them all, and it occurred to me that I might be able to get a similar fix from that other series of movies about Middle-earth. A glance at Amazon and some good timing revealed that I could buy the extended editions of all three Hobbit films for about ten dollars apiece. I'd been meaning to check out the special features ever since seeing the extraordinary authorized video that highlighted Jackson's exhaustion during the filming of The Battle of the Five Armies, and shelling out thirty bucks for fifteen DVDs seemed like it would provide a decent return on investment.

As it turned out, it did. Not because of the featurettes themselves, which for the most part are a step down from their equivalents for The Lord of the Rings, but because of the light that they inadvertently shed on what went wrong with The Hobbit. Viewers hoping for Peter Jackson's equivalent of Burden of Dreams or Hearts of Darkness are likely to be disappointed—the tone of the bonus features is relentlessly upbeat, and there are only occasional admissions of the possibility that anything might be going sideways. (Jackson's graying hair, fluctuating weight, and visible tiredness tell us more than anything that he says aloud.) What sticks with you, unfortunately, is the length and tediousness of most of these videos, which seem like an expression of the same misconceptions that went into the movies themselves. The Hobbit trilogy reunited much of the original cast and crew for a project that, on paper, had no excuse for not reproducing at least some of the magic of its predecessor. Yet it feels for all the world like an attempt at reverse engineering, based only on the qualities of the first trilogy that could be most efficiently replicated. The Lord of the Rings consisted of three movies that came close to three hours each; therefore, so does The Hobbit. Viewers loved the epic battle scenes of the earlier films, so The Hobbit gives them lots of the same. A badass action sequence in which Legolas defies gravity? Check. A love triangle? Why not? Fan service reappearances from Elrond, Saruman, Galadriel, and other characters we liked the first time around? Of course. And when the characters couldn't return, The Hobbit finds their non-union equivalents. Bard the Bowman is called "the Aragorn of The Hobbit" so often in the bonus features that I lost count.

By now, many viewers have come to see The Hobbit as a kind of simulation of the original, recreating it in broad, quantitative strokes while missing most of the qualitative factors that made The Lord of the Rings special. What surprised me, at this late date, was the discovery that the bonus features did exactly the same thing. The Lord of the Rings featurettes expanded to epic length because there was simply so much to explore, from conceptual design to training the horses to the workers at Weta who made so many suits of chain mail that they literally rubbed away their fingerprints. With The Hobbit, the special features seem to be just as long, if not longer, and they seem to have been driven by the same logic that went into the movies. Viewers love having multiple discs of bonus material, the reasoning goes, so we’ll give it to them—and if you’re simply weighing the physical size of these editions against the Lord of the Rings box sets that you already own, you’ll be happy. (It’s the opposite of the metric preferred by Apple, which uses thinness as a proxy for quality.) But it’s hard to convey how bloated these videos are. To give just one example, there’s a scene in The Desolation of Smaug in which the Master of Laketown, played gamely by Stephen Fry, eats a plate of goat testicles for breakfast. As the bonus features take pains to inform us, they aren’t real testicles, but carefully molded meatballs, although Fry still had to gulp them down in vast quantities. In a Lord of the Rings featurette, this detail might have merited a cutaway shot, a funny outtake, and a dry witticism during Fry’s talking head. With The Hobbit, it goes on for minutes on end. I had my laptop out while I was watching it, and when I glanced up after what seemed like an inordinate amount of time, they were still talking about testicles.

It isn't hard to guess what happened. The creators of the bonus features—who, it must be said, know how to put together an attractive, professional product—were expected to produce a certain volume of footage, on the assumption that fans would be happy with hours of anything. As a result, the most trivial byways of the production, like the fake testicles, get the same loving treatment as the hallway fight in Inception. I don't blame the makers of the featurettes, who were just doing their best, but rather the mindset of the producers who gave them a brief that measured the quality of the outcome by how many discs it managed to fill. (Some of it, I hasten to add, is worth watching. Aside from the weirdly candid postmortem of The Battle of the Five Armies that I mentioned above, there's a fascinating treatment of the orchestrations for The Desolation of Smaug, and my attention perks up whenever Richard Taylor, Alan Lee, or John Howe appears onscreen.) But I keep going back to the fatal flaw of The Hobbit movies themselves. After a certain point, you lose track of why you're here, so you fall back on benchmarks and targets that worked the first time around. You forget that people didn't love The Lord of the Rings because each movie was three hours long, but the movies were long because there was so much there that people would love. The tale grew in the telling, as Tolkien famously said, but it's a mistake to confuse that growth for the imaginative impulse that nurtured it. Bonus features might seem like a modest form of art, but the Lord of the Rings featurettes were a masterpiece of their kind, and those for The Hobbit bear exactly the same relationship to their predecessors as the films did. What was lacking in both cases was a basic clarity of thought. As John Fowles wrote in his great novel Daniel Martin, about an English screenwriter in Hollywood: "Whole sight; or all the rest is desolation."

Out of the past

leave a comment »

You shouldn’t have been that sentimental.


About halfway through the beautiful, devastating finale of Twin Peaks—which I’ll be discussing here in detail—I began to reflect on what the figure of Dale Cooper really means. When we encounter him for the first time in the pilot, with his black suit, fastidious habits, and clipped diction, he’s the embodiment of what we’ve been taught to expect of a special agent of the Federal Bureau of Investigation. The FBI occupies a role in movies and television far out of proportion to its actual powers and jurisdiction, in part because it seems to exist on a level intriguingly beyond that of ordinary law enforcement, and it’s often been used to symbolize the sinister, the remote, or the impersonal. Yet when Cooper reveals himself to be a man of real empathy, quirkiness, and faith in the extraordinary, it comes almost as a relief. We want to believe that a person like this exists. Cooper carries a badge, he wears a tie, and he’s comfortable with a gun, but he’s here to enforce human reason in the face of a bewildering universe. The Black Lodge might be out there, but the Blue Rose task force is on it, and there’s something oddly consoling about the notion that it’s a part of the federal government. A few years later, Chris Carter took this premise and refined it into The X-Files, which, despite its paranoia, reassured us that somebody in a position of authority had noticed the weirdness in the world and was trying to make sense of it. They might rarely succeed, but it was comforting to think that their efforts had been institutionalized, complete with a basement office, a place in the org chart, and a budget. And for a lot of viewers, Mulder and Scully, like Cooper, came to symbolize law and order in stories that laugh at our attempts to impose it.

Even if you don’t believe in the paranormal, the image of the lone FBI agent—or two of them—arriving in a small town to solve a supernatural mystery is enormously seductive. It appeals to our hopes that someone in power cares enough about us to investigate problems that can’t be rationally addressed, which all stand, in one way or another, for the mystery of death. This may be why both Twin Peaks and The X-Files, despite their flaws, have sustained so much enthusiasm among fans. (No other television dramas have ever meant more to me.) But it’s also a myth. This isn’t really how the world works, and the second half of the Twin Peaks finale is devoted to tearing down, with remarkable cruelty and control, the very idea of such solutions. It can only do this by initially giving us what we think we want, and the first of last night’s two episodes misleads us with a satisfying dose of wish fulfillment. Not only is Cooper back, but he’s in complete command of the situation, and he seems to know exactly what to do at every given moment. He somehow knows all about Freddie and his magical green glove, which he utilizes to finally send Bob into oblivion. After rescuing Diane, he uses his room key from the Great Northern, like a magical item in a video game, to unlock the door that leads him to Mike and the disembodied Phillip Jeffries. He goes back in time, enters the events of Fire Walk With Me, and saves Laura on the night of her murder. The next day, Pete Martell simply goes fishing. Viewers at home even get the appearance by Julee Cruise that I’ve been awaiting since the premiere. After the credits ran, I told my wife that if it had ended there, I would have been totally satisfied.

But that was exactly what I was supposed to think, and even during the first half, there are signs of trouble. When Cooper first sees the eyeless Naido, who is later revealed to be the real Diane, his face freezes in a huge closeup that is superimposed for several minutes over the ensuing action. It’s a striking device that has the effect of putting us, for the first time, in Cooper’s head, rather than watching him with bemusement from the outside. We identify with him, and at the very end, when his efforts seemingly come to nothing, despite the fact that he did everything right, it’s more than heartbreaking—it’s like an existential crisis. It’s the side of the show that was embodied by Sheryl Lee’s performance as Laura Palmer, whose tragic life and horrifying death, when seen in its full dimension, put the lie to all the cozy, comforting stories that the series told us about the town of Twin Peaks. Nothing good could ever come out of a world in which Laura died in the way that she did, which was the message that Fire Walk With Me delivered so insistently. And seeing Laura share the screen at length with Cooper presents us with both halves of the show’s identity within a single frame. (It also gives us a second entry, after Blue Velvet, in the short list of great scenes in which Kyle MacLachlan enters a room to find a man sitting down with his brains blown out.) For a while, as Cooper drives Laura to the appointment with her mother, it seems almost possible that the series could pull off one last, unfathomable trick. Even if it means erasing the show’s entire timeline, it would be worth it to save Laura. Or so we think. In the end, they return to a Twin Peaks that neither of them recognize, in which the events of the series presumably never took place, and Cooper’s only reward is Laura’s scream of agony.

As I tossed and turned last night, thinking about Cooper's final, shattering moment of comprehension, a line of dialogue from another movie drifted into my head: "It's too late. There's no bringing her back." It's from Vertigo, of course, which is a movie that David Lynch and Mark Frost have been quietly urging us to revisit all along. (Madeleine Ferguson, Laura's identical cousin, who was played by Lee, is named after the film's two main characters, and both works of art pivot on a necklace and a dream sequence.) Along with so much else, Vertigo is about the futility of trying to recapture or change the past, and its ending, which might be the most unforgettable of any film I've ever seen, destroys Scottie's delusions, which embody the assumptions of so many American movies: "One final thing I have to do, and then I'll be rid of the past forever." I think that Lynch and Frost are consciously harking back to Vertigo here—in the framing of the doomed couple on their long drive, as well as in Cooper's insistence that Laura revisit the scene of the crime—and it doesn't end well in either case. The difference is that Vertigo prepares us for it over the course of two hours, while Twin Peaks had more than a quarter of a century. Both works offer a conclusion that feels simultaneously like a profound statement of our helplessness in the face of an unfair universe and like the punchline to a shaggy dog story, and perhaps that's the only way to express it. I've quoted Frost's statement on this revival more than once: "It's an exercise in engaging with one of the most powerful themes in all of art, which is the ruthless passage of time…We're all trapped in time and we're all going to die. We're all traveling along this conveyor belt that is relentlessly moving us toward this very certain outcome." Thirty seconds before the end, I didn't know what he meant. But I sure do now. And I know at last why this show's theme is called "Falling."

Written by nevalalee

September 4, 2017 at 9:40 am

The number nine

leave a comment »

Note: This post reveals plot details from last night’s episode of Twin Peaks.

One of the central insights of my life as a reader is that certain kinds of narrative are infinitely expansible or contractible. I first started thinking about this in college, when I was struggling to read Homer in Greek. Oral poetry, I discovered, wasn't memorized, but composed on the fly, aided by the poet's repertoire of stock lines, formulas, and images that happened to fit the meter. This meant that the overall length of the composition was highly variable. A scene that takes up just a few lines in the Iliad that survives could be expanded into an entire night's recital, based on what the audience wanted to hear. (For instance, the characters of Crethon and Orsilochus, who appear for only twenty lines in the existing version before being killed by Aeneas, might have been the stars of the evening if the poet happened to be working in Pherae.) That kind of flexibility originated as a practical consequence of the oral form, but it came to affect the aesthetics of the poem itself, which could grow or shrink to accommodate anything that the poet wanted to talk about. Homer uses his metaphors to introduce miniature narratives of human life that don't otherwise fit into a poem of war, and some amount to self-contained short stories in themselves. Proust operates in much the same way. One observation leads naturally to another, and an emotion or analogy evoked in passing can unfold like a paper flower into three dense pages of reflections. In theory, any novel could be expanded like this, like a hypertext that opens into increasingly deeper levels. In Search of Lost Time happens to be the one book in existence in which all of these flowerings have been preserved, with a plot that could fit into a novella of two hundred unhurried pages.

Something similar appears to have happened with the current season of Twin Peaks, and when you start to think of it in those terms, its structure, which otherwise seems almost perversely shapeless, begins to make more sense. In the initial announcement by Showtime, the revival was said to consist of nine episodes, and Mark Frost even said to Buzzfeed:

If you think back about the first season, if you put the pilot together with the seven that we did, you get nine hours. It just felt like the right number. I’ve always felt the story should take as long as the story takes to tell. That’s what felt right to us.

It was doubled to eighteen after a curious interlude in which David Lynch dropped out of the project, citing budget constraints: “I left because not enough money was offered to do the script the way I felt it needed to be done.” He came back, of course, and shortly thereafter, it was revealed that the length of the season had increased. Yet there was never any indication that either Lynch or Frost had done any additional writing. My personal hunch is that they always had nine episodes of material, and this never changed. What happened is that the second act of the show expanded in the fashion that I’ve described above, creating a long central section that was free to explore countless byways without much concern for the plot. The beginning, and presumably the end, remained more or less as conceived—it was the middle that grew. And a quick look at the structure of the season so far seems to confirm this. The first three episodes, which take Cooper from inside the Black Lodge to slightly before his meeting with his new family in Las Vegas, seemed weird at the time, but now they look positively conventional in terms of how much story they covered. They were followed by three episodes, the Dougie Jones arc, that were expanded beyond recognition. And now that we’ve reached the final three, which account for the third act of the original outline, it makes sense for Cooper to return at last.

If the season had consisted of just those nine episodes, I suspect that more viewers would have been able to get behind it. Even if the second act had doubled in length—giving us a total of twelve installments, of which three would have been devoted to detours and loose ends—I doubt that most fans would have minded. It’s expanding that middle section to four times its size, without any explanation, that lost a lot of people. But it’s clearly the only way that Lynch would have returned. For most of the last decade, Lynch has been contentedly pottering around with odd personal projects, concentrating on painting, music, digital video, and other media that don’t require him to be answerable to anyone but himself. The Twin Peaks revival, after the revised terms had been negotiated with Showtime, allowed him to do this with a larger budget and for a vastly greater audience. Much of this season has felt like Lynch’s private sketchbook or paintbox, allowing him to indulge himself within each episode as long as the invisible scaffolding of the original nine scripts remained. The fact that so much of the strangeness of this season has been visual and nonverbal points to Lynch, rather than Frost, as the driving force on this end. And at its best, it represents something like a reinvention of television, which is the most expandable or compressible medium we have, but which has rarely utilized this quality to its full extent. (There’s an opening here, obviously, for a fan edit that condenses the season down to nine episodes, leaving the first and last three intact while shrinking the middle twelve. It would be an interesting experiment, although I’m not sure I’d want to watch it.)

Of course, this kind of aggressive attack on the structure of the narrative doesn’t come without a cost. In the case of Twin Peaks, the primary casualty has been the Dougie Jones storyline, which has been criticized for three related reasons. The first, and most understandable, is that we’re naturally impatient to get the old Cooper back. Another is that this material was never meant to go on for this long, and it starts to feel a little thin when spread over twelve episodes. And the third is that it prevents Kyle MacLachlan, the ostensible star of the show, from doing what he does best. This last criticism feels like the most valid. MacLachlan has played an enormous role in my life as a moviegoer and television viewer, but he operates within a very narrow range, with what I might inadequately describe as a combination of rectitude, earnestness, and barely concealed eccentricity. (In other words, it’s all but indistinguishable from the public persona of David Lynch himself.) It’s what made his work as Jeffrey in Blue Velvet so moving, and a huge part of the appeal of Twin Peaks lay in placing this character at the center of what looked like a procedural. MacLachlan can also convey innocence and darkness, but by bringing these two traits to the forefront, and separating them completely in Dougie and Dark Cooper, it robs us of the amalgam that makes MacLachlan interesting in the first place. Like many stars, he’s chafed under the constraints of his image, and perhaps he even welcomed the challenges that this season presented—although he may not have known how his performance would look when extended past its original dimensions and cut together with the rest. When Cooper returned last night, it reminded me of how much I’ve missed him. And the fact that we’ll get him for two more episodes, along with everything else that this season has offered us, feels more than ever like a gift.

Written by nevalalee

August 28, 2017 at 9:17 am

Amplifying the dream

leave a comment »

In the book Nobody Turn Me Around, Charles Euchner shares a story about Bayard Rustin, a neglected but pivotal figure in the civil rights movement who played a crucial role in the March on Washington in 1963:

Bayard Rustin had insisted on renting the best sound system money could buy. To ensure order at the march, Rustin insisted, people needed to hear the program clearly. He told engineers what he wanted. “Very simple,” he said, pointing at a map. “The Lincoln Memorial is here, the Washington Monument is there. I want one square mile where anyone can hear.” Most big events rented systems for $1,000 or $2,000, but Rustin wanted to spend ten times that. Other members of the march committee were skeptical about the need for a deluxe system. “We cannot maintain order where people cannot hear,” Rustin said. If the Mall was jammed with people baking in the sun, waiting in long lines for portable toilets, anything could happen. Rustin’s job was to control the crowd. “In my view it was a classic resolution of the problem of how can you keep a crowd from becoming something else,” he said. “Transform it into an audience.”

Ultimately, Rustin was able to convince the United Auto Workers and International Ladies’ Garment Workers’ Unions to raise twenty thousand dollars for the sound system, and when he was informed that it ought to be possible to do it for less, he replied: “Not for what I want.” The company American Amplifier and Television landed the contract, and after the system was sabotaged by persons unknown the night before the march, Walter Fauntroy, who was in charge of operations on the ground, called Attorney General Robert Kennedy and said: “We have a serious problem. We have a couple hundred thousand people coming. Do you want a fight here tomorrow after all we’ve done?”

The system was fixed just in time, and its importance to the march is hard to overstate. As Zeynep Tufekci writes in her recent book Twitter and Tear Gas: “Rustin knew that without a focused way to communicate with the massive crowd and to keep things orderly, much could go wrong…The sound system worked without a hitch during the day of the march, playing just the role Rustin had imagined: all the participants could hear exactly what was going on, hear instructions needed to keep things orderly, and feel connected to the whole march.” And its impact on our collective memory of the event may have been even more profound. In an article in last week’s issue of The New Yorker, which is where I first encountered the story, Nathan Heller notes in a discussion of Tufekci’s work:

Before the march, Martin Luther King, Jr., had delivered variations on his “I Have a Dream” speech twice in public. He had given a longer version to a group of two thousand people in North Carolina. And he had presented a second variation, earlier in the summer, before a vast crowd of a hundred thousand at a march in Detroit. The reason we remember only the Washington, D.C., version, Tufekci argues, has to do with the strategic vision and attentive detail work of people like Rustin. Framed by the Lincoln Memorial, amplified by a fancy sound system, delivered before a thousand-person press bay with good camera sight lines, King’s performance came across as something more than what it had been in Detroit—it was the announcement of a shift in national mood, the fulcrum of a movement’s story line and power. It became, in other words, the rarest of protest performances: the kind through which American history can change.

Heller concludes that successful protest movements hinge on the existence of organized, flexible, practical structures with access to elites, noting that the sound system was repaired, on Kennedy’s orders, by the Army Corps of Engineers: “You can’t get much cozier with the Man than that.”

There's another side to the story, however, which neither Tufekci nor Heller mentions. In his memoir Behind the Dream, the activist Clarence B. Jones recalls:

The Justice Department and the police had worked hand in hand with the March Committee to design a public address system powerful enough to get the speakers’ voices across the Mall; what march coordinators wouldn’t learn until after the event had ended was that the government had built in a bypass to the system so that they could instantly take over control if they deemed it necessary…Ted [Brown] and Bayard [Rustin] told us that right after the march ended those officers approached them, eager to relieve their consciences and reveal the truth about the sound system. There was a kill switch and an administration official’s thumb had been on it the entire time.

The journalist Gary Younge—whose primary source seems to be Jones—expands on this claim in his book The Speech: "Fearing incitement from the podium, the Justice Department secretly inserted a cutoff switch into the sound system so they could turn off the speakers if an insurgent group hijacked the microphone. In such an eventuality, the plan was to play a recording of Mahalia Jackson singing 'He's Got the Whole World in His Hands' in order to calm down the crowd." And in Pillar of Fire, Taylor Branch identifies the official in question as Jerry Bruno, President Kennedy's "advance man," who "positioned himself to cut the power to the public address system if rally speeches proved incendiary." Regardless of the truth of the matter, it speaks to the extent to which Rustin's sound system was central to the question of who controlled the march and its message. If nothing else, the people who sabotaged it understood this intuitively. (I should also mention the curious rumor, shared by Dave Chappelle in a recent comedy special on Netflix: "I heard when Martin Luther King stood on the steps of the Lincoln Memorial and said he had a dream, he was speaking into a PA system that Bill Cosby paid for." It's demonstrably untrue, but it also speaks to the hold that the sound system has on the stories that we tell about the march.)

But what strikes me the most is the sheer practicality of the ends that Rustin, Fauntroy, and the others on the ground were trying to achieve. Listen to how they describe it: “We cannot maintain order where people cannot hear.” “How can you keep a crowd from becoming something else?” “Do you want a fight here tomorrow after all we’ve done?” They weren’t worried about history, but about making it safely to the end of the day. Rustin had been thinking about this march for two decades, and he spent years actively planning for it, conscious that it presented massive organizational challenges that could only be addressed by careful preparation in advance. He had specifically envisioned it as ending at the Lincoln Memorial, with a crowd filling the National Mall, a huge space that imposed enormous logistical problems of its own. The primary purpose of the sound system was to allow a quarter of a million people to assemble and disperse in a peaceful fashion, and its properties were chosen with that end in mind. (As Euchner notes: “To get one square mile of clear sound, you need to spend upwards of twenty thousand dollars.”) A system of unusual power, expense, and complexity was the minimum required to ensure the orderly conclusion of an event on this scale. But when the audacity to envision the National Mall as a backdrop was combined with the attention to detail to make it work, the result was an electrically charged platform that would amplify any message, figuratively and literally, which made it both powerful and potentially dangerous. Everyone understood this. The saboteurs did. So did the Justice Department. The march’s organizers were keenly aware of it, which was why potentially controversial speakers—including James Baldwin—were excluded from the program. In the end, it became a stage for King, and at least one lesson is clear. When you aim high, and then devote everything you can to the practical side, the result might be more than you could have dreamed.

The world spins

with one comment

Note: This post discusses plot points from Sunday’s episode of Twin Peaks.

“Did you call me five days ago?” Dark Cooper asks the shadowy shape in the darkness in the most recent episode of Twin Peaks. It's a memorable moment for a number of reasons, not the least of which is that he's addressing the disembodied Phillip Jeffries, who was played by David Bowie in Fire Walk With Me, and is now portrayed by a different voice actor and what looks to be a sentient tea kettle. But that didn't even strike me as the weirdest part. What hit me hardest is the implication that everything that we've seen so far this season has played out over less than a week in real time—the phone call to which Dark Cooper is referring occurred during the second episode. Admittedly, there are indications that the events onscreen have unfolded in a nonlinear fashion, not to draw attention to the device itself, but to allow David Lynch and Mark Frost to cut between storylines according to their own rhythms, rather than being tied down to chronology. (The text message that Dark Cooper sends at the end of the scene was received by Diane a few episodes ago, while Audrey's painful interactions with Charlie apparently consist of a single conversation parceled out over multiple weeks. And the Dougie Jones material certainly feels as if it occurs over a longer period than five days, although it's probably possible to squeeze it into that timeline if necessary.) And if viewers are brought up short by the contrast between the show's internal calendar and its emotional duration, it's happened before. When I look back at the first two seasons of the show, I'm still startled to realize that every event from Laura's murder to Cooper's possession unfolds over just one month.

Why does this feel so strange? The obvious answer is that we get to know these characters over a period of years, while we really only see them in action for a few weeks, and their interactions with one another end up carrying more weight than you might expect for people who, in some cases, met only recently. And television is the one medium that routinely creates that kind of disparity. It’s inherently impossible for a movie to take longer to watch than the events that it depicts—apart from a handful, like Run Lola Run or Vantage Point, that present scrambled timelines or stage the same action from multiple perspectives—and it usually compresses days or weeks of action within a couple of hours. With books, the length of the act of reading varies from one reader to the next, and we’re unlikely to find it particularly strange that it can take months to finish Ulysses, which recounts the events of a single day. It’s only television, particularly when experienced in its original run, that presents such a sharp contrast between narrative and emotional time, even if we don’t tend to worry about this with sitcoms, procedurals, and other nonserialized shows. (One interesting exception consists of shows set in high school or college, in which it’s awfully tempting to associate each season with an academic year, although there’s no reason why a series like Community couldn’t take place over a single semester.) Shows featuring children or teenagers have a built-in clock that reminds us of how time is passing in the real world, as Urkel or the Olsen twins progress inexorably toward puberty. And occasionally there’s an outlier like The Simpsons, in which a quarter of a century’s worth of storylines theoretically takes place within the same year or so.

But the way in which a serialized show can tell a story that occurs over a short stretch of narrative time while simultaneously drawing on the emotional energy that builds up over years is one of the unsung strengths of the entire medium. Our engagement with a favorite show that airs on a weekly basis isn’t just limited to the hour that we spend watching it every Sunday, but expands to fill much of the time in between. If a series really matters to us, it gets into our dreams. (I happened to miss the initial airing of this week’s episode because I was on vacation with my family, and I’ve been so conditioned to get my fix of Twin Peaks on a regular basis that I had a detailed dream about an imaginary episode that night—which hasn’t happened to me since I had to wait a week to watch the series finale of Breaking Bad. As far as I can remember, my dream involved the reappearance of Sheriff Harry Truman, who has been institutionalized for years, with his family and friends describing him euphemistically as “ill.” And I wouldn’t mention it here at all if this weren’t a show that has taught me to pay close attention to my dream life.) Many of us also spend the time between episodes reading reviews, discussing plot points online, and catching up with various theories about where the story might go next. In a few cases, as with Westworld, this sort of active analysis can be detrimental to the experience of watching the show itself, if you see it as a mystery with clues that the individual viewer is supposed to crack on his or her own. For the most part, though, it’s an advantage, with time conferring an emotional weight that the show might not otherwise have had. As the world spins, the series stays where it was, and we’ve all changed in the meantime.

The revival of Twin Peaks takes this tendency and magnifies it beyond anything we’ve seen before, with its fans investing it with twenty-five years of accumulated energy—and this doesn’t even account for the hundreds of hours that I spent listening to the show’s original soundtrack, which carries an unquantifiable duration of its own. And one of the charming things about this season is how Lynch and Frost seem to have gone through much the same experience themselves, mulling over their own work until stray lines and details take on a greater significance. When Dark Cooper goes to his shadowy meeting above a convenience store, it pays off a line that Mike, the one-armed man, uttered in passing during a monologue from the first Bush administration. The same applies to the show’s references to a mysterious “Judy,” whom Jeffries mentioned briefly just before disappearing forever. I don’t think that these callbacks reflect a coherent plan that Lynch and Frost have been keeping in their back pockets for decades so much as a process of going back to tease out meanings that even they didn’t know were there. Smart writers of serialized narratives learn to drop vague references into their work that might pay off later on. (Two of my favorite examples are Spock’s “Remember” at the end of Star Trek II: The Wrath of Khan, and the Second Foundation, which Isaac Asimov introduced in case he needed it in a subsequent installment.) What Twin Peaks is doing now is analogous to what the writers of Breaking Bad did when they set up problems that they didn’t know how to solve, trusting that they would figure it out eventually. The only difference is that Lynch and Frost, like the rest of us, have had more time to think about it. And it might take us another twenty-five years before we—or they—figure out what they were actually doing.

Written by nevalalee

August 22, 2017 at 9:08 am

The sense of an ending

leave a comment »

Note: This post discusses details from last night’s episode of Twin Peaks.

When I was working as a film critic in college, one of my first investments was a wristwatch that could glow in the dark. If you’re sitting through an interminable slog of a movie, sometimes you simply want to know how much longer the pain will last, and, assuming that you have a sense of the runtime, a watch puts a piece of narrative information at your disposal that has nothing to do with the events of the story itself. Even if you’re enjoying yourself, the knowledge that a film has twenty minutes left to run—which often happens if you’re watching it at home and staring right at the numbers on the display of your DVD player—affects the way you think about certain scenes. A climax plays differently near the end, as opposed to somewhere in the middle. The length of a work of art is a form of metadata that influences the way we watch movies and read books, as Douglas Hofstadter points out in Gödel, Escher, Bach:

You have undoubtedly noticed how some authors go to so much trouble to build up great tension a few pages before the end of their stories—but a reader who is holding the book physically in his hands can feel that the story is about to end. Hence, he has some extra information which acts as an advance warning, in a way. The tension is a bit spoiled by the physicality of the book. It would be so much better if, for instance, there were a lot of padding at the end of novels…A lot of extra printed pages which are not part of the story proper, but which serve to conceal the exact location of the end from a cursory glance, or from the feel of the book.

Not surprisingly, I tend to think about the passage of time the most when I’m not enjoying the story. When I’m invested in the experience, I’ll do the opposite: I’ll actively resist glancing at the clock or looking to see how much time has elapsed. When I know that the credits are going to roll no matter what within the next five minutes, it amounts to a spoiler. With Twin Peaks, which has a narrative that can seemingly be cut anywhere, like yard goods, I try not to think about how long I’ve been watching. Almost inevitably, the episode ends before I’m ready for it, in part because it provides so few of the usual cues that we’ve come to expect from television. There aren’t any commercial breaks, obviously, but the stories also don’t divide neatly into three or four acts. In the past, most shows, even those that aired without interruption on cable networks, followed certain structural conventions that allowed us to guess when the story was coming to an end. (This is even more true of Hollywood movies, which, with their mandated beat sheets—the inciting incident, the midpoint, the false dawn, the crisis—practically tell the audience how much longer they need to pay attention, which may be the reason why such rules exist in the first place.) Now that streaming services allow serialized stories to run for hours without worrying about the narrative shape of individual episodes, this is less of an issue, and it can be a mixed blessing. But at its best, on a show like Twin Peaks, it creates a feeling of narrative suspension, cutting us off from any sense of the borders of the episode until the words Starring Kyle MacLachlan appear suddenly onscreen.

Yet there’s also another type of length of which we can’t help but be conscious, at least if we’re the kind of viewers likely to be watching Twin Peaks in the first place. We know that there are eighteen episodes in this season, the fourteenth of which aired last night, and the fact that we only have four hours left to go adds a degree of tension to the narrative that wouldn’t be there if we weren’t aware of it. This external pressure also depends on the knowledge that this is probably the only new season of the show that we’re going to get, a fact that most fans can be assumed to know, given how hard it is to avoid this sort of news these days. Maybe we’ve read the Rolling Stone interview in which David Lynch declared, in response to the question of whether there would be additional episodes: “I have no idea. It depends on how it goes over. You’re going to have to wait and see.” Or we’ve seen that David Nevins of Showtime said to Deadline: “It was always intended to be one season. A lot of people are speculating but there’s been zero contemplation, zero discussions other than fans asking me about it.” Slightly more promisingly, Kyle MacLachlan told the Hollywood Reporter: “I don’t know. David has said: ‘Everything is Twin Peaks.’ It leads me to believe that there are other stories to tell. I think it’s just a question of whether David and Mark want to tell them. I don’t know.” And Lynch even said to USA Today: “You never say never.” Still, it’s fair to say that the current season was conceived, written, and filmed to stand on its own, and until we know otherwise, we have to proceed under the assumption that this is the last time we’ll ever see these characters.

This has important implications for how we watch it from one week to the next. For one thing, it means that episodes near the end will play differently than they would have earlier in the season. Last night’s installment was relatively packed with incident—the revelation of the identity of Diane’s estranged half sister, Andy’s trip into the void, the green gardening glove, Monica Bellucci—but we’re also aware of how little time remains for the show to pay off any of these developments. Most series would have put an episode like this in the fourth slot, rather than the fourteenth, and given the show’s tendency to drop entire subplots for months, it leaves us keenly aware that many of these storylines may never be resolved. Every glimpse of a character, old or new, feels like a potential farewell. And every minute that passes without the return of Agent Cooper increases our sense of urgency. (If this were the beginning of an open-ended run, rather than the presumptive final season, the response to the whole Dougie Jones thread would have been very different.) This information has nothing to do with the contents of the show itself, which, with one big exception, haven’t changed much since the premiere. But it’s hard not to think about it. In some ways, this may be the greatest difference between this season and the initial run, since there was always hope that the series would be renewed by ABC, or that Fire Walk With Me would tie off any loose ends. Unlike the first generation of fans, we know that this is it, and it can hardly fail to affect our impressions, even if Lynch still whispers in our heads: “You never say never.”

Written by nevalalee

August 14, 2017 at 8:48 am
