Alec Nevala-Lee

Thoughts on art, creativity, and the writing life.

The illusion of life

Last week, The A.V. Club ran an entire article devoted to television shows in which the lead is also the best character, which only points to how boring many protagonists tend to be. I’ve learned to chalk this up to two factors, one internal, the other external. The internal problem stems from the reasonable principle that the narrative and the hero’s objectives should be inseparable: the conflict should emerge from something that the protagonist urgently needs to accomplish, and when the goal has been met—or spectacularly thwarted—the story is over. It’s great advice, but in practice, it often results in leads who are boringly single-minded: when every action needs to advance the plot, there isn’t much room for the digressions and quirks that bring characters to life. The supporting cast has room to go off on tangents, but the characters at the center have to constantly triangulate between action, motivation, and relatability, which can drain them of all surprise. A protagonist is under so much narrative pressure that when the story relaxes, he bursts, like a sea creature brought up from the depths to the surface. Elsewhere, I’ve compared a main character to a diagram of a pattern of forces, like one of the fish in D’Arcy Wentworth Thompson’s On Growth and Form, in which the animal’s physical shape is determined by the outside stresses to which it has been subjected. And on top of this, there’s an external factor, which is the universal desire of editors, producers, and studio executives to make the protagonist “likable,” which, whether or not you agree with it, tends to smooth out the rough edges that make a character vivid and memorable.

In the classic textbook Disney Animation: The Illusion of Life, we find a useful perspective on this problem. The legendary animators Frank Thomas and Ollie Johnston provide a list of guidelines for evaluating story material before the animation begins, including the following:

Tell your story through the broad cartoon characters rather than the “straight” ones. There is no way to animate strong-enough attitudes, feelings, or expressions on realistic characters to get the communication you should have. The more real, the less latitude for clear communication. This is more easily done with the cartoon characters who can carry the story with more interest and spirit anyway. Snow White was told through the animals, the dwarfs, and the witch—not through the prince or the queen or the huntsman. They had vital roles, but their scenes were essentially situation. The girl herself was a real problem, but she was helped by always working to a sympathetic animal or a broad character. This is the old vaudeville trick of playing the pretty girl against the buffoon; it helps both characters.

Even more than Snow White, the great example here is Sleeping Beauty, which has always fascinated me as an attempt by Disney to recapture past glories by a mechanical application of its old principles raised to dazzling technical heights. Not only do Aurora and Prince Philip fail to drive the story, but they’re all but abandoned by it—Aurora speaks fewer lines than any other Disney main character, and neither of them talks for the last thirty minutes. Not only does the film acknowledge the dullness of its protagonists, but it practically turns it into an artistic statement in itself.

And this problem arises from a tension between the nature of animation, which is naturally drawn to caricature, and the notion that sympathetic protagonists need to be basically realistic. With regard to the first point, Thomas and Johnston advise:

Ask yourself, “Can the story point be done in caricature?” Be sure the scenes call for action, or acting that can be caricatured if you are to make a clear statement. Just to imitate nature, illustrate reality, or duplicate live action not only wastes the medium but puts an enormous burden on the animator. It should be believable, but not realistic.

The italics in that last sentence are mine. This is a good rule, but it collides headlong with the principle that the “real” characters should be rendered with greater naturalism:

Of course, there is always a big problem in making the “real” or “straight” characters in our pictures have enough personality to carry their part of the story…The point of this is misinterpreted by many to mean that characters who have to be represented as real should be left out of feature films, that the stories should be told with broad characters who can be handled more easily. This would be a mistake, for spectators need to have someone or something they can believe in, or the picture falls apart.

And while you could make a strong case that viewers relate just as much to the sidekicks, it’s probably also true that a realistic central character serves an important functional role, which allows the audience to take the story seriously. This doesn’t just apply to animation, either, but to all forms of storytelling—including most fiction, film, and television—that work best with broad strokes. In many cases, you can sense the reluctance of animators to tackle characters who don’t lend themselves to such bold gestures:

Early in the story development, these questions will be asked: “Does this character have to be straight?” “What is the role we need here?” If it is a prince or a hero or a sympathetic person who needs acceptance from the audience to make the story work, then the character must be drawn realistically.

Figuring out the protagonists is a thankless job: they have to serve a function within the overall story, but they’re also liable to be taken out and judged on their own merits, in the absence of the narrative pressures that created them in the first place. The best stories, it seems, are the ones in which that pattern of forces results in something fascinating in its own right, or which transform a stock character into something more. (It’s revealing that Thomas and Johnston refer to the queen and the witch in Snow White as separate figures, when they’re really a single person who evolves over the course of the story into her true form.) And their concluding advice is worth bearing in mind by everyone: “Generally speaking, if there is a human character in a story, it is wise to draw the person with as much caricature as the role will permit.”

How to rest

As a practical matter, there appears to be a limit to how long a novelist can work on any given day while still remaining productive. Anecdotally, the maximum effective period seems to fall somewhere in the range of four to six hours, which leaves some writers with a lot of time to kill. In a recent essay for The New Yorker, Gary Shteyngart writes:

I believe that a novelist should write for no more than four hours a day, after which returns truly diminish; this, of course, leaves many hours for idle play and contemplation. Usually, such a schedule results in alcoholism, but sometimes a hobby comes along, especially in middle age.

In Shteyngart’s case, the hobby took the form of a fascination with fine watches, to the point where he was spending thousands of dollars on his obsession every year. This isn’t a confession designed to elicit much sympathy from others—especially when he observes that spending $4,137.25 on a watch means throwing away “roughly 4.3 writing days”—but I’d like to believe that he chose a deliberately provocative symbol of wasted time. Most novelists have day jobs, with all their writing squeezed into the few spare moments that remain, so to say that writers have hours of idleness at their disposal, complete with that casual “of course,” implies an unthinking acceptance of a privilege that only a handful of authors ever attain. Shteyngart, I think, is smarter than this, and he may simply be using the luxury watch as an emblem of how precious each minute can be for writers for whom time itself hasn’t become devalued.

But let’s assume that you’re lucky enough to write for a living, and that your familial or social obligations are restricted enough to leave you with over half the day to spend as you see fit. What can you do with all those leisure hours? Alcoholism, as Shteyngart notes, is an attractive possibility, but perhaps you want to invest your time in an activity that enhances your professional life. Georg von Békésy, the Hungarian biophysicist, thought along similar lines, as his biographer Floyd Ratliff relates:

His first idea about how to excel as a scientist was simply to work hard and long hours, but he realized that his colleagues were working just as hard and just as long. So he decided instead to follow the old rule: sleep eight hours, work eight hours, and rest eight hours. But Békésy put a “Hungarian twist” on this, too. There are many ways to rest, and he reasoned that perhaps he could work in some way that would improve his judgment, and thus improve his work. The study of art, in which he already had a strong interest, seemed to offer this possibility…By turning his attention daily from science to art, Békésy refreshed his mind and sharpened his faculties.

This determination to turn even one’s free time into a form of self-improvement seems almost inhuman. (His “old rule” reminds me of the similar advice that Ursula K. Le Guin offers in The Left Hand of Darkness: “When action grows unprofitable, gather information; when information grows unprofitable, sleep.”) But I think that Békésy was also onto something when he sought out a hobby that provided a contrast to what he was doing for a living. A change, as the saying goes, is as good as a rest.

In fact, you could say that there are two types of hobbies, although they aren’t mutually exclusive. There are hobbies that are orthogonal to the rest of our lives, activating parts of the mind or personality that otherwise go unused, or providing a soothing mechanical respite from the nervous act of brainwork—think of Churchill and his bricklaying. Alternatively, they can channel our professional urges into a contained, orderly form that provides a kind of release. Ayn Rand, of all people, wrote perceptively about stamp collecting:

Stamp collecting is a hobby for busy, purposeful, ambitious people…because, in pattern, it has the essential elements of a career, but transposed to a clearly delimited, intensely private world…In stamp collecting, one experiences the rare pleasure of independent action without irrelevant burdens or impositions.

In my case, this blog amounts to a sort of hobby, and I keep at it for both reasons. It’s a form of writing, so it provides me with an outlet for those energies, but it also allows me to think about subjects that aren’t directly connected to my work. The process is oddly refreshing—I often feel more awake and alert after I’ve spent an hour writing a post, as if I’ve been practicing my scales on the piano—and it saves an hour from being wasted in unaccountable ways. This may be why many people are drawn to hobbies that leave you with a visible result in the end, whether it’s a blog post, a stamp collection, or a brick wall.

But there’s also something to be said for doing nothing. If you’ve devoted four hours—or whatever amount seems reasonable—to work that you love, you’ve earned the right to spend your remaining time however you like. As Sir Walter Scott wrote in a letter to a friend:

And long ere dinner time, I have
Full eight close pages wrote;
What, duty, hast thou now to crave?
Well done, Sir Walter Scott!

At the end of the day, I often feel like watching television, and the show I pick serves as an index to how tired I am. If I’m relatively energized, I can sit through a prestige drama; if I’m more drained, I’ll suggest a show along the lines of Riverdale; and if I can barely see straight, I’ll put on a special feature from my Lord of the Rings box set, which is my equivalent of comfort food. And you can see this impulse in far more illustrious careers. Ludwig Wittgenstein, who thought harder than anyone else of his century, liked to relax by watching cowboy movies. The degree to which he felt obliged to unplug is a measure of how much he drove himself, and in the absence of other vices, this was as good a way of decompressing as any. It prompted Nicholson Baker to write: “[Wittgenstein] would go every afternoon to watch gunfights and arrows through the chest for hours at a time. Can you take seriously a person’s theory of language when you know that he was delighted by the woodenness and tedium of cowboy movies?” To which I can only respond: “Absolutely.”

Written by nevalalee

April 5, 2017 at 9:36 am

The cliché factory

A few days ago, Bob Mankoff, the cartoon editor of The New Yorker, devoted his weekly email newsletter to the subject of “The Great Clichés.” A cliché, as Mankoff defines it, is a restricted comic situation “that would be incomprehensible if the other versions had not first appeared,” and he provides a list of examples that should ring bells for all readers of the magazine, from the ubiquitous “desert island” to “The-End-Is-Nigh Guy.” Here are a few of my favorites:

Atlas holding up the world; big fish eating little fish; burglars in masks; cave paintings; chalk outline at crime scene; crawling through desert; galley slaves; guru on mountain; mobsters and victim with cement shoes; man in stocks; police lineup; two guys in horse costume.

Inevitably, Mankoff’s list includes a few questionable choices, while also omitting what seem like obvious contenders. (Why “metal detector,” but not “Adam and Eve?”) But it’s still something that writers of all kinds will want to clip and save. Mankoff doesn’t make the point explicitly, but most gag artists probably keep a similar list of clichés as a starting point for ideas, as we read in Mort Gerberg’s excellent book Cartooning:

List familiar situations—clichés. You might break them down into categories, like domestic (couple at breakfast, couple watching television); business (boss berating employee, secretary taking dictation); historic (Paul Revere’s ride, Washington crossing the Delaware); even famous cartoon clichés (the desert island, the Indian snake charmer)…Then change something a little bit.

As it happened, when I saw Mankoff’s newsletter, I had already been thinking about a far more harmful kind of comedy cliché. Last week, Kal Penn went on Twitter to post some of the scripts from his years auditioning as a struggling actor, and they amount to an alternative list of clichés kept by bad comedy writers, consciously or otherwise: “Gandhi lookalike,” “snake charmer,” “foreign student.” One character has a “slight Hindi accent,” another is a “Pakistani computer geek who dresses like Beck and is in a perpetual state of perspiration,” while a third delivers dialogue that is “peppered with Indian cultural references…[His] idiomatic conversation is hit and miss.” A typical one-liner: “We are propagating like flies on elephant dung.” One script describes a South Asian character’s “spastic techno pop moves,” with Penn adding that “the big joke was an accent and too much cologne.” (It recalls the Morrissey song “Bengali in Platforms,” which included the notorious line: “Life is hard enough when you belong here.” You could amend it to read: “Being a comedy writer is hard enough when you belong here.”) Penn closes by praising shows with writers “who didn’t have to use external things to mask subpar writing,” which cuts to the real issue here. The real person in “a perpetual state of perspiration” isn’t the character, but the scriptwriter. Reading the teleplay for an awful sitcom is a deadening experience in itself, but it’s even more depressing to realize that in most cases, the writer is falling back on a stereotype to cover up the desperate unfunniness of the writing. When Penn once asked if he could play a role without an accent, in order to “make it funny on the merits,” he was told that he couldn’t, probably because everybody else knew that the merits were nonexistent.

So why is one list harmless and the other one toxic? In part, it’s because we’ve caught them at different stages of evolution. The list of comedy conventions that we find acceptable is constantly being culled and refined, and certain art forms are slightly in advance of the others. Because of its cultural position, The New Yorker is particularly subject to outside pressures, as it learned a decade ago with its Obama terrorist cover—which demonstrated that there are jokes and images that aren’t acceptable even if the magazine’s attitude is clear. Turn back the clock, and Mankoff’s list would include conventions that probably wouldn’t fly today. Gerberg’s list, like Penn’s, includes “snake charmer,” which Mankoff omits, and he leaves out “Cowboys and Indians,” a cartoon perennial that seems to be disappearing. And it can be hard to reconstruct this history, because the offenders tend to be consigned to the memory hole. When you read a lot of old magazine fiction, as I do, you inevitably find racist stereotypes that would be utterly unthinkable today, but most of the stories in which they appear have long since been forgotten. (One exception, unfortunately, is the Sherlock Holmes short story “The Adventure of the Three Gables,” which opens with a horrifying racial caricature that most Holmes fans must wish didn’t exist.) If we don’t see such figures as often today, it isn’t necessarily because we’ve become more enlightened, but because we’ve collectively agreed to remove certain figures from the catalog of stock comedy characters, while papering over their use in the past. A list of clichés is a snapshot of a culture’s inner life, and we don’t always like what it says. The demeaning parts still offered to Penn and actors of similar backgrounds have survived for longer than they should have, but sitcoms that trade in such stereotypes will be unwatchable in a decade or two, if they haven’t already been consigned to oblivion.

Of course, most comedy writers aren’t thinking in terms of decades, but about getting through the next five minutes. And these stereotypes endure precisely because they’re seen as useful, in a shallow, short-term kind of way. There’s a reason why such caricatures are more visible in comedy than in drama: comedy is simply harder to write, but we always want more of it, so it’s inevitable that writers on a deadline will fall back on lazy conventions. The really insidious thing about these clichés is that they sort of work, at least to the extent of being approved by a producer without raising any red flags. Any laughter that they inspire is the equivalent of empty calories, but they persist because they fill a cynical need. As Penn points out, most writers wouldn’t bother with them at all if they could come up with something better. Stereotypes, like all clichés, are a kind of fallback option, a cheap trick that you deploy if you need a laugh and can’t think of another way to get one. Clichés can be a precious commodity, and all writers resort to them occasionally. They’re particularly valuable for gag cartoonists, who can’t rely on a good idea from last week to fill the blank space on the page—they’ve got to produce, and sometimes that means yet another variation on an old theme. But there’s a big difference between “Two guys in a horse costume” and “Gandhi lookalike.” Being able to make that distinction isn’t a matter of political correctness, but of craft. The real solution is to teach people to be better writers, so that they won’t even be tempted to resort to such tired solutions. This might seem like a daunting task, but in fact, it happens all the time. A cliché factory operates on the principle of supply and demand. And it shuts down as soon as people no longer find it funny.

Written by nevalalee

March 20, 2017 at 11:18 am

A series of technical events

In his book Four Arguments for the Elimination of Television, which was first published in the late seventies, the author Jerry Mander, a former advertising executive, lists a few of the “technical tricks” that television can use to stimulate the viewer’s interest:

Editors make it possible for a scene in one room to be followed instantly by a scene in another room, or at another time, or another place. Words appear over the images. Music rises and falls in the background. Two images or three can appear simultaneously. One image can be superposed on another on the screen. Motion can be slowed down or sped up.

These days, we take most of these effects for granted, as part of the basic grammar of the medium, but to Mander, they’re something more sinister. Technique, he argues, is replacing content, and at its heart, it’s something of a confidence game:

Through these technical events, television images alter the usual, natural imagery possibilities, taking on the quality of a naturally highlighted event. They make it seem that what you are looking at is unique, unusual, and extraordinary…But nothing unusual is going on. All that’s happening is that the viewer is watching television, which is the same thing that happened an hour ago, or yesterday. A trick has been played. The viewer is fixated by a conspiracy of dimmed-out environments combined with an artificial, impossible, fictitious unusualness.

In order to demonstrate “the extent to which television is dependent upon technical tricks to maintain your interest,” Mander invites the reader to conduct what he calls a technical events test:

Put on your television set and simply count the number of times there is a cut, a zoom, a superimposition, a voiceover, the appearance of words on the screen—a technical event of some kind…Each technical event—each alteration of what would be natural imagery—is intended to keep your attention from waning as it might otherwise…Every time you are about to relax your attention, another technical event keeps you attached…

You will probably find that in the average commercial television program, there are eight or ten technical events for every sixty-second period…You may also find that there is rarely a period of twenty seconds without any sort of technical event at all. That may give you an idea of the extent to which producers worry about whether the content itself can carry your interest.

He goes on to list the alleged consequences of exposure to such techniques, from shortened attention span in adults to heightened hyperactivity in children, and concludes: “Advertisers are the high artists of the medium. They have gone further in the technologies of fixation than anyone else.”

Mander’s argument was prophetic in many ways, but in one respect, he was clearly wrong. In the four decades since his book first appeared, it has become obvious that the “high artists” of distraction and fixation aren’t advertisers, but viewers themselves, and the true canvas isn’t television, but the Internet. Instead of passively viewing a series of juxtaposed images, we assemble our online experience for ourselves, and each time we open a new link, we’re effectively acting as our own editors. Every click is a cut. (The anecdotal figure that the reader spends less than fifteen seconds on the average web page is very close to the frequency of technical events on television, which isn’t an accident.) We do a better job of distracting ourselves than any third party ever could, as long as we’re given sufficient raw material and an intuitive interface—which explains much of the evolution of online content. When you look back at web pages from the early nineties, it’s easy to laugh at how noisy and busy they tended to be, with music, animated graphics, and loud colors. This wasn’t just a matter of bad taste, but of a mistaken analogy to television. Web designers thought that they had to grab our attention using the same technical tricks employed by other media, but that wasn’t the case. The hypnotic browsing state that we’ve all experienced isn’t produced by any one page, but by the succession of similar pages as the user moves between them at his or her private rhythm. Ideally, from the point of view of a media company, that movement will take place within the same family of pages, but it also leads to a convergence of style and tone between sites. Most web pages these days look more or less the same because that sameness creates a kind of continuity of experience. Instead of the loud, colorful pages of old, they’re static and full of white space. Mander calls this “the quality of even tone” of television, and the Internet does it one better. It’s uniform and easily aggregated, and you can cut it together however you like, like yard goods.

In fact, it isn’t content that gives us the most pleasure, but the act of clicking, with the sense of control it provides. This implies that bland, interchangeable content is actually preferable to more arresting material. The easier it is to move between basically similar units, the closer the experience is to that of an ideally curated television show—which is why different sources have a way of blurring together into the same voice. When I’m trying to tell my wife about a story I read online, I often have trouble remembering if I read it on Vox, Vulture, or Vice, which isn’t a knock against those sites, but a reflection of the unconscious pressure to create a seamless browsing experience. From there, it’s only a short step to outright content mills and fake news. In the past, I’ve called this AutoContent, after the interchangeable bullet points used to populate slideshow presentations, but it’s only effective if you can cut quickly from one slide to another. If you had to stare at it for longer than fifteen seconds, you wouldn’t be able to stand it. (This may be why we’ve come to associate quality with length, which is more resistant to being reduced to the filler between technical events. The “long read,” as I’ve argued elsewhere, can be a marketing category in itself, but it does need to try a little harder.) The idea that browsing online is a form of addictive behavior isn’t a new one, of course, and it’s often explained in terms of the “random rewards” that the brain receives when we check email or social media. But the notion of online content as a convenient source of technical events is worth remembering. When we spend any period of time online, we’re essentially watching a television show while simultaneously acting as its editor and director, and often as its writer and actors. In the end, to slightly misquote Mander, all that’s happening is that the reader is seated in front of a computer or looking at a phone, “which is the same thing that happened an hour ago, or yesterday.” The Internet is better at this than television ever was. And in a generation or two, it may result in television being eliminated after all.

Written by nevalalee

March 14, 2017 at 9:18 am

Farewell to Mystic Falls

Note: Spoilers follow for the series finale of The Vampire Diaries.

On Friday, I said goodbye to The Vampire Diaries, a series that I once thought was one of the best genre shows on television, only to stop watching it for its last two seasons. Despite its flaws, it occupies a special place in my memory, in part because its strengths were inseparable from the reasons that I finally abandoned it. Like Glee, The Vampire Diaries responded to its obvious debt to an earlier franchise—High School Musical for the former, Twilight for the latter—both by subverting its predecessor and by burning through ideas as relentlessly as it could. It’s as if both shows decided to refute any accusations of unoriginality by proving that they could be more ingenious than their inspirations, and amazingly, it sort of worked, at least for a while. There’s a limit to how long any series can repeatedly break down and reassemble itself, however, and both started to lose steam after about three years. In the case of The Vampire Diaries, its problems crystallized around its ostensible lead, Elena Gilbert, as portrayed by the game and talented Nina Dobrev, who left the show two seasons ago before returning for an encore in the finale. Elena spent most of her first sendoff asleep, and she isn’t given much more to do here. There’s a lot about the episode that I liked, and it provides satisfying moments of closure for many of its characters, but Elena isn’t among them. In the end, when she awakens from the magical coma in which she has been slumbering, it’s so anticlimactic that it reminds me of what Pauline Kael wrote of Han’s revival in Return of the Jedi: “It’s as if Han Solo had locked himself in the garage, tapped on the door, and been let out.”

And what happened to Elena provides a striking case study of why the story’s hero is often fated to become the least interesting person in sight. The main character of a serialized drama is under such pressure to advance the plot that he or she becomes reduced to the diagram of a pattern of forces, like one of the fish in D’Arcy Wentworth Thompson’s On Growth and Form, in which the animal’s physical shape is determined by the outside stresses to which it has been subjected. Instead of making her own decisions, Elena was obliged to become whatever the series needed her to be. Every protagonist serves as a kind of motor for the story, which is frequently a thankless role, but it was particularly problematic on a show that defined itself by its willingness to burn through a year of potential storylines each month. Every episode felt like a season finale, and characters were freely killed, resurrected, and brainwashed to keep the wheels turning. It was hardest on Elena, who, at her best, was a compelling, resourceful heroine. After six seasons of personality changes, possessions, memory wipes, and the inexplicable choices that she made just because the story demanded it, she became an empty shell. If you were designing a show in a laboratory to see what would happen if its protagonist was forced to live through plot twists at an accelerated rate, like the stress tests that engineers use to put a component through a lifetime’s worth of wear in a short period of time, you couldn’t do much better than The Vampire Diaries. And while it might have been theoretically interesting to see what happened to the series after that one piece was removed, I didn’t think it was worth sitting through another two seasons of increasingly frustrating television.

After the finale was shot, series creators Kevin Williamson and Julie Plec made the rounds of interviews to discuss the ending, and they shared one particular detail that fascinates me. If you haven’t watched The Vampire Diaries, all you need to know is that its early seasons revolved around a love triangle between Elena and the vampire brothers Stefan and Damon, a nod to Twilight that quickly became one of the show’s least interesting aspects. Elena seemed fated to end up with Stefan, but she spent the back half of the series with Damon, and it ended with the two of them reunited. In a conversation with Deadline, Williamson revealed that this wasn’t always the plan:

Well, I always thought it would be Stefan and Elena. They were sort of the anchor of the show, but because we lost Elena in season six, we couldn’t go back. You know Nina could only come back for one episode—maybe if she had came back for the whole season, we could even have warped back towards that, but you can’t just do it in forty-two minutes.

Dobrev’s departure, in other words, froze that part of the story in place, even as the show around it continued its usual frantic developments, and when she returned, there wasn’t time to do anything but keep Elena and Damon where they had left off. There’s a limit to how much ground you can cover in the course of a single episode, so it seemed easier for the producers to stick with what they had and figure out a way to make it seem inevitable.

The fact that it works at all is a tribute to the skill of the writers and cast, as well as to the fact that the whole love triangle was basically arbitrary in the first place. As James Joyce said in a very different context, it was a bridge across which the characters could walk, and once they were safely on the other side, it could be blown to smithereens. The real challenge was how to make the finale seem like a definitive ending, after the show had killed off and resurrected so many characters that not even death itself felt like a conclusion. It resorted to much the same solution that Lost did when faced with a similar problem: it shut off all possibility of future narrative by reuniting its characters in heaven. This is partially a form of wish fulfillment, as we’ve seen with so many other television series, but it also puts a full stop on the story by leaving us in an afterlife, where, by definition, nothing can ever change. It’s hilariously unlike the various versions of the world to come that the series has presented over the years, from which characters can always be yanked back to life when necessary, but it’s also oddly moving and effective. Watching it, I began to appreciate how the show’s biggest narrative liability—a cast that just can’t be killed—also became its greatest asset. The defining image of The Vampire Diaries was that of a character who has his neck snapped, and then just shakes it off. Williamson and Plec must have realized, consciously or otherwise, that it was a reset button that would allow them to go through more ideas than would be possible on a show on which a broken neck was permanent. Every denizen of Mystic Falls got a great death scene, often multiple times per season, and the show exploited that freedom until it exhausted itself. It only really worked for three years out of eight, but it was a great run while it lasted. And now, after life’s fitful fever, the characters can sleep well, as they sail off into the mystic.

From Sputnik to WikiLeaks

In Toy Story 2, there’s a moment in which Woody discovers that his old television series, Woody’s Roundup, was abruptly yanked off the air toward the end of the fifties. He asks: “That was a great show. Why cancel it?” The Prospector replies bitterly: “Two words: Sput-nik. Once the astronauts went up, children only wanted to play with space toys.” And while I wouldn’t dream of questioning the credibility of a man known as Stinky Pete, I feel obliged to point out that his version of events isn’t entirely accurate. The space craze among kids really began more than half a decade earlier, with the premiere of Tom Corbett, Space Cadet, and the impact of Sputnik on science fiction was far from a positive one. Here’s what John W. Campbell wrote about it in the first issue of Astounding to be printed after the satellite’s launch:

Well, we lost that race; Russian technology achieved an important milestone in human history—one that the United States tried for, talked about a lot, and didn’t make…One of the things Americans have long been proud of—and with sound reason—is our ability to convert theoretical science into practical, working engineering…This time we’re faced with the uncomfortable realization that the Russians have beaten us in our own special field; they solved a problem of engineering technology faster and better than we did.

And while much of the resulting “Sputnik crisis” was founded on legitimate concerns—Sputnik was as much a triumph of ballistic rocketry as it was of satellite technology—it also arose from the notion that the United States had been beaten at its own game. As Arthur C. Clarke is alleged to have said, America had become “a second-rate power.”

Campbell knew right away that he had reason to worry. Lester del Rey writes in The World of Science Fiction:

Sputnik simply convinced John Campbell that he’d better watch his covers and begin cutting back on space scenes. (He never did, but the art director of the magazine and others were involved in that decision.) We agreed in our first conversation after the satellite went up that people were going to react by deciding science had caught up with science fiction, and with a measure of initial fear. They did. Rather than helping science fiction, Sputnik made it seem outmoded.

And that’s more or less exactly what happened. There was a brief spike in sales, followed by a precipitous fall as mainstream readers abandoned the genre. I haven’t been able to find specific numbers for this period, but one source, the Australian fan Wynne Whiteford, states that the circulation of Astounding fell by half after Sputnik—which seems high, but probably reflects a real decline. In a letter written decades later, Campbell said of Sputnik: “Far from encouraging the sales of science fiction magazines—half the magazines being published lost circulation so drastically they went out of business!” An unscientific glance at a list of titles appears to support this. In 1958, the magazines Imagination, Imaginative Tales, Infinity Science Fiction, Phantom, Saturn, Science Fiction Adventures, Science Fiction Quarterly, Star Science Fiction, and Vanguard Science Fiction all ceased publication, followed by three more over the next twelve months. The year before, just four magazines had folded. There was a bubble, and after Sputnik, it burst.

At first, this might seem like a sort of psychological self-care, of the same kind that motivated me to scale back my news consumption after the election. Americans were simply depressed, and they didn’t need any reminders of the situation they were in. But it also seems to have affected the public’s appetite for science fiction in particular, rather than science as a whole. In fact, the demand for nonfiction science writing actually increased. As Isaac Asimov writes in his memoir In Joy Still Felt:

The United States went into a dreadful crisis of confidence over the fact that the Soviet Union had gotten there first and berated itself for not being interested enough in science. And I berated myself for spending too much time on science fiction when I had the talent to be a great science writer…Sputnik also served to increase the importance of any known public speaker who could talk on science and, particularly, on space, and that meant me.

What made science fiction painful to read, I think, was its implicit assumption of American superiority, which had been disproven so spectacularly. Campbell later compared it to the reaction after the bomb fell, claiming that it was the moment when people realized that science fiction wasn’t a form of escapism, but a warning:

The reactions to Sputnik have been more rapid, and, therefore, more readily perceptible and correlatable. There was, again, a sudden rise in interest in science fiction…and there is, now, an even more marked dropping of the science-fiction interest. A number of the magazines have been very heavily hit…I think the people of the United States thought we were kidding.

And while Campbell seemed to believe that readers had simply misinterpreted science fiction’s intentions, the conventions of the genre itself clearly bore part of the blame.

In his first editorials after Sputnik, Campbell drew a contrast between the American approach to engineering, which proceeded logically and with vast technological resources, and the quick and dirty Soviet program, which was based on rules of thumb, trial and error, and the ability to bull its way through on one particular point of attack. It reminds me a little of the election. Like the space race, last year’s presidential campaign could be seen as a kind of proxy war between the American and Russian administrations, and regardless of what you believe about the Trump camp’s involvement, which I suspect was probably a tacit one, there’s no question as to which side Putin favored. On one hand, you had a large, well-funded political machine, and on the other, one that often seemed comically inept. Yet it was the quick and dirty approach that triumphed. “The essence of ingenuity is the ability to get precision results without precision equipment,” Campbell wrote, and that’s pretty much what occurred. A few applications of brute force in the right place made all the difference, and they were aided, to some extent, by a similar complacency. The Americans saw the Soviets as bunglers, and they never seriously considered the possibility that they might be beaten by a bunch of amateurs. As Campbell put it: “We earned what we got—fully, and of our own efforts. The ridicule we’ve collected is our just reward for our consistent efforts.” Sometimes I feel the same way. Right now, we’re entering a period in which the prospect of becoming a second-rate power is far more real than it was when Clarke made his comment. It took a few months for the implications of Sputnik to really sink in. And if history is any indication, we haven’t even gotten to the crisis yet.

Who we are in the moment

By now, you’re probably sick of hearing about what happened at the Oscars. I’m getting a little tired of it, too, even though it was possibly the strangest and most riveting two minutes I’ve ever seen on live television. It left me feeling sorry for everyone involved, but there are at least three bright spots. The first is that it’s going to make a great case study for somebody like Malcolm Gladwell, who is always looking for a showy anecdote to serve as a grabber opening for a book or article. So many different things had to go wrong for it to happen—on the levels of design, human error, and simple dumb luck—that you can use it to illustrate just about any point you like. A second silver lining is that it highlights the basically arbitrary nature of all such awards. As time passes, the list of Best Picture winners starts to look inevitable, as if Cimarron and Gandhi and Chariots of Fire had all been canonized by a comprehensible historical process. If anything, the cycle of inevitability is accelerating, so that within seconds of any win, the narratives are already locking into place. As soon as La La Land was announced as the winner, a story was emerging about how Hollywood always goes for the safe, predictable choice. The first thing that Dave Itzkoff, a very smart reporter, posted on the New York Times live chat was: “Of course.” Within a couple of minutes, however, that plot line had been yanked away and replaced with one for Moonlight. And the fact that the two versions were all but superimposed onscreen should warn us against reading too much into outcomes that could have gone any number of ways.

But what I want to keep in mind above all else is the example of La La Land producer Jordan Horowitz, who, at a moment of unbelievable pressure, simply said: “I’m going to be really proud to hand this to my friends from Moonlight.” It was the best thing that anybody could have uttered under those circumstances, and it tells us a lot about Horowitz himself. If you were going to design a psychological experiment to test a subject’s reaction under the most extreme conditions imaginable, it’s hard to think of a better one—although it might strike a grant committee as possibly too expensive. It takes what is undoubtedly one of the high points of someone’s life and twists it instantly into what, if perhaps not the worst moment, at least amounts to a savage correction. Everything that the participants onstage did or said, down to the facial expressions of those standing in the background, has been subjected to a level of scrutiny worthy of the Zapruder film. At the end of an event in which very little occurs that hasn’t been scripted or premeditated, a lot of people were called upon to figure out how to act in real time in front of an audience of hundreds of millions. It’s proverbial that nobody tells the truth in Hollywood, an industry that inspires insider accounts with titles like Hello, He Lied and Which Lie Did I Tell? A mixup like the one at the Oscars might have been expressly conceived as a stress test to bring out everyone’s true colors. Yet Horowitz said what he did. And I suspect that it will do more for his career than even an outright win would have accomplished.

It also reminds me of other instances over the last year in which we’ve learned exactly what someone thinks. When we get in trouble for a remark picked up on a hot mike, we often say that it doesn’t reflect who we really are—which is just another way of stating that it doesn’t live up to the versions of ourselves that we create for public consumption. It’s far crueler, but also more convincing, to argue that it’s exactly in those unguarded, unscripted moments that our true selves emerge. (Freud, whose intuition on such matters was uncanny, was onto something when he focused on verbal mistakes and slips of the tongue.) The justifications that we use are equally revealing. Maybe we dismiss it as “locker room talk,” even if it didn’t take place anywhere near a locker room. Kellyanne Conway excused her reference to the nonexistent Bowling Green Massacre by saying “I misspoke one word,” even though she misspoke it on three separate occasions. It doesn’t even need to be something said on the spur of the moment. At his confirmation hearing for the position of ambassador to Israel, David M. Friedman apologized for an opinion piece he had written before the election: “These were hurtful words, and I deeply regret them. They’re not reflective of my nature or my character.” Friedman also said that “the inflammatory rhetoric that accompanied the presidential campaign is entirely over,” as if it were an impersonal force that briefly took possession of its users and then departed. We ask to be judged on our most composed selves, not the ones that we reveal at our worst.

To some extent, that’s a reasonable request. I’ve said things in public and in private that I’ve regretted, and I wouldn’t want to be judged solely on my worst moments as a writer or parent. At a time when a life can be ruined by a single tweet, it’s often best to err on the side of forgiveness, especially when there’s any chance of misinterpretation. But there’s also a place for common sense. You don’t refer to an event as a “massacre” unless you really think of it that way or want to encourage others to do so. And we judge our public figures by what they say when they think that nobody is listening, or when they let their guard down. It might seem like an impossibly high standard, but it’s also the one that’s effectively applied in practice. You can respond by becoming inhumanly disciplined, like Obama, who in a decade of public life has said maybe five things he has reason to regret. Or you can react like Trump, who says five regrettable things every day and trusts that their sheer volume will reduce them to a kind of background noise—which has awakened us, as Trump has in so many other ways, to a political option that we didn’t even know existed. Both strategies are exhausting, and most of us don’t have the energy to pursue either path. Instead, we’re left with the practical solution of cultivating the inner voice that, as I wrote last week, allows us to act instinctively. Kant writes: “Live your life as though your every act were to become a universal law.” Which is another way of saying that we should strive to be the best version of ourselves at all times. It’s probably impossible. But it’s easier than wearing a mask.

Written by nevalalee

February 28, 2017 at 9:00 am

Swallowing the turkey

Lord Rowton…says that he once asked Disraeli what was the most remarkable, the most self-sustained and powerful sentence he knew. Dizzy paused for a moment, and then said, “Sufficient unto the day is the evil thereof.”

—Augustus J.C. Hare, The Story of My Life

Disraeli was a politician and a novelist, which is an unusual combination, and he knew his business. Politics and writing have less to do with each other than a lot of authors might like to believe, and the fact that you can create a compelling world on paper doesn’t mean that you can do the same thing in real life. (One of the hidden themes of Astounding is that the skills that many science fiction writers acquired in organizing ideas on the page turned out to be notably inadequate when it came to getting anything done during World War II.) Yet both disciplines can be equally daunting and infuriating to novices, in large part because they both involve enormously complicated projects—often requiring years of effort—that need to be approached one day at a time. A single day’s work is rarely very satisfying in itself, and you have to cling to the belief that countless invisible actions and compromises will somehow result in something real. It doesn’t always happen, and even if it does, you may never get credit or praise. The ability to deal with the everyday tedium of politics or writing is what separates professionals from amateurs. And in both cases, the greatest accomplishments are usually achieved by freaks who can combine an overarching vision with a finicky obsession with minute particulars. As Eugène-Melchior de Vogüé, who was both a diplomat and literary critic, said of Tolstoy, it requires “a queer combination of the brain of an English chemist with the soul of an Indian Buddhist.”

And if you go into either field without the necessary degree of patience, the results can be unfortunate. If you’re a writer who can’t subordinate yourself to the routine of writing on a daily basis, the most probable outcome is that you’ll never finish your novel. In politics, you end up with something very much like what we’ve all observed over the last few weeks. Regardless of what you might think about the presidential refugee order, its rollout was clearly botched, thanks mostly to a president and staff that want to skip over all the boring parts of governing and get right to the good stuff. And it’s tempting to draw a contrast between the incumbent, who achieved his greatest success on reality television, and his predecessor, a detail-oriented introvert who once thought about becoming a novelist. (I’m also struck, yet again, by the analogy to L. Ron Hubbard. He spent most of his career fantasizing about a life of adventure, but when he finally got into the Navy, he made a series of stupid mistakes—including attacking two nonexistent submarines off the coast of Oregon—that ultimately caused him to be stripped of his command. The pattern repeated itself so many times that it hints at a fundamental aspect of his personality. He was too impatient to deal with the tedious reality of life during wartime, which failed to live up to the version he had dreamed of himself. And while I don’t want to push this too far, it’s hard not to notice the difference between Hubbard, who cranked out his fiction without much regard for quality, and Heinlein, a far more disciplined writer who was able to consciously tame his own natural impatience into a productive role at the Philadelphia Navy Yard.)

Which brings us back to the sentence that impressed Disraeli. It’s easy to interpret it as an admonition not to think about the future, which isn’t quite right. We can start by observing that it comes at the end of what The Five Gospels notes is possibly “the longest connected discourse that can be directly attributed to Jesus.” It’s the one that asks us to consider the birds of the air and the lilies of the field, which, for a lot of us, prompts an immediate flashback to The Life of Brian. (“Consider the lilies?” “Uh, well, the birds, then.” “What birds?” “Any birds.” “Why?” “Well, have they got jobs?”) But whether or not you agree with the argument, it’s worth noticing that the advice to focus on the evils of each day comes only after an extended attempt at defining a larger set of values—what matters, what doesn’t, and what, if anything, you can change by worrying. You’re only in a position to figure out how best to spend your time after you’ve considered the big questions. As the physician William Osler put it:

[My ideal is] to do the day’s work well and not to bother about tomorrow. You may say that is not a satisfactory ideal. It is; and there is not one which the student can carry with him into practice with greater effect. To it more than anything else I owe whatever success I have had—to this power of settling down to the day’s work and trying to do it well to the best of my ability, and letting the future take care of itself.

This has important implications for both writers and politicians, as well as for progressives who wonder how they’ll be able to get through the next twenty-four hours, much less the next four years. When you’re working on any important project, even the most ambitious agenda comes down to what you’re going to do right now. In On Directing Film, David Mamet expresses it rather differently:

Now, you don’t eat a whole turkey, right? You take off the drumstick and you take a bite of the drumstick. Okay. Eventually you get the whole turkey done. It’ll probably get dry before you do, unless you have an incredibly good refrigerator and a very small turkey, but that is outside the scope of this lecture.

A lot of frustration in art, politics, and life in general comes from attempting to swallow the turkey in one bite. Jesus, I think, was aware of the susceptibility of his followers to grandiose but meaningless gestures, which is why he offered up the advice, so easy to remember and so hard to follow, to simultaneously focus on the given day while keeping the kingdom of heaven in mind. Nearly every piece of practical wisdom in any field is about maintaining that double awareness. Fortunately, it goes in both directions: small acts of discipline aid us in grasping the whole, and awareness of the whole tells us what to do in the moment. As R.H. Blyth says of Zen: “That is all religion is: eat when you are hungry, sleep when you are tired.” And don’t try to eat the entire turkey at once.

From Xenu to Xanadu

I do know that I could form a political platform, for instance, which would encompass the support of the unemployed, the industrialist and the clerk and day laborer all at one and the same time. And enthusiastic support it would be.

L. Ron Hubbard, in a letter to his wife Polly, October 1938

Yesterday, my article “Xenu’s Paradox: The Fiction of L. Ron Hubbard and the Making of Scientology” was published on Longreads. I’d been working on this piece, off and on, for the better part of a year, almost from the moment I knew that I was going to be writing the book Astounding. As part of my research, I had to read just about everything Hubbard ever wrote in the genres of science fiction and fantasy, and I ended up working my way through well over a million words of his prose. The essay that emerged from this process was inspired by a simple question. Hubbard clearly didn’t much care for science fiction, and he wrote it primarily for the money. Yet when the time came to invent a founding myth for Scientology, he turned to the conventions of space opera, which had previously played a minimal role in his work. Both his critics and his followers have looked hard at his published stories to find hints of the ideas to come, and there are a few that seem to point toward later developments. (One that frequently gets mentioned is “One Was Stubborn,” in which a fake religious messiah convinces people to believe in the nonexistence of matter so that he can rule the universe. There’s circumstantial evidence, however, that the premise came mostly from John W. Campbell, and that Hubbard wrote it up on the train ride home from New York to Puget Sound.) Still, it’s a tiny fraction of the whole. And such stories by other writers as “The Double Minds” by Campbell, “Lost Legacy” by Robert A. Heinlein, and The World of Null-A by A.E. van Vogt make for more compelling precursors to dianetics than anything Hubbard ever wrote.

The solution to the mystery, as I discuss at length in the article, is that Hubbard tailored his teachings to the small circle of followers he had available after his blowup with Campbell, many of whom were science fiction fans who owed their first exposure to his ideas to magazines like Astounding. And this was only the most dramatic and decisive instance of a pattern that is visible throughout his life. Hubbard is often called a fabulist who compulsively embellished his own accomplishments and turned himself into something more than he really was. But it would be even more accurate to say that Hubbard transformed himself into whatever he thought the people around him wanted him to be. When he was hanging out with members of the Explorers Club, he became a barnstormer, world traveler, and intrepid explorer of the Caribbean and Alaska. Around his fellow authors, he presented himself as the most productive pulp writer of all time, inflating his already impressive word count to a ridiculous extent. During the war, he spun stories about his exploits in battle, claiming to have been repeatedly sunk and wounded, and even a former naval officer as intelligent and experienced as Heinlein evidently took him at his word. Hubbard simply became whatever seemed necessary at the time—as long as he was the most impressive man in the room. It wasn’t until he found himself surrounded by science fiction fans, whom he had mostly avoided until then, that he assumed the form that he would take for the rest of his career. He had never been interested in past lives, but many of his followers were, and the memories that they were “recovering” in their auditing sessions were often colored by the imagery of the stories they had read. And Hubbard responded by coming up with the grandest, most unbelievable space opera saga of them all.

Donald Trump

This leaves us with a few important takeaways. The first is that Hubbard, in the early days, was basically harmless. He had invented a colorful background for himself, but he wasn’t alone: Lester del Rey, among others, seems to have engaged in the same kind of self-mythologizing. His first marriage wasn’t a happy one, and he was always something of a blowhard, determined to outshine everyone he met. Yet he also genuinely impressed John and Doña Campbell, Heinlein, Asimov, and many other perceptive men and women. It wasn’t until after the unexpected success of dianetics that he grew convinced of his own infallibility, casting off such inconvenient collaborators as Campbell and Joseph Winter as obstacles to his power. Even after he went off to Wichita with his remaining disciples, he might have become little more than a harmless crank. As he began to feel persecuted by the government and professional organizations, however, his mood curdled into something poisonous, and it happened at a time in which he had undisputed authority over the people around him. It wasn’t a huge kingdom, but because of its isolation—particularly when he was at sea—he was able to exercise a terrifying amount of control over his closest followers. Hubbard didn’t even enjoy it. He had wealth, fame, and the adulation of a handful of true believers, but he grew increasingly paranoid and miserable. At the time of his death, his wrath was restricted to his critics and to anyone within arm’s reach, but he created a culture of oppression that his successor cheerfully extended against current and former members in faraway places, until no one inside or outside the Church of Scientology was safe.

I wrote the first draft of this essay in May of last year, but it's hard to read it now without thinking of Donald Trump. Like Hubbard, Trump spent much of his life as an annoying but harmless windbag: a relentless self-promoter who constantly inflated his own achievements. As with Hubbard, everything that he did had to be the biggest and best, and until recently, he was too conscious of the value of his own brand to risk alienating too many people at once. After a lifetime of random grabs for attention, however, he latched onto a cause—the birther movement—that was more powerful than anything he had encountered before, and, like Hubbard, he began to focus on the small number of passionate followers he had attracted. His presidential campaign seems to have been conceived as yet another form of brand extension, one apparently meant to culminate in the launch of a Trump Television network. He shaped his message in response to the crowds who came to his rallies, and before long, he was caught in the same kind of cycle: a man who had once believed in nothing but himself gradually came to believe his own words. (Hubbard and Trump have both been described as con men, but the former spent countless hours auditing himself, and the latter no longer seems conscious of his own lies.) Both fell upward into positions of power that exceeded their wildest expectations, and it's frightening to think of what might come next, when we consider how Hubbard was transformed. During his lifetime, Hubbard had a small handful of active followers; the Church of Scientology has perhaps 30,000, although, like Trump, they're prone to exaggerate such numbers; Trump has millions. It's especially telling that both Hubbard and Trump loved Citizen Kane. I love it, too. But both men ended up in their own personal Xanadu. And as I've noted before, the only problem with that movie is that our affection for Orson Welles distracts us from the fact that Kane ultimately went crazy.

Don’t stay out of Riverdale

leave a comment »

Riverdale

In the opening seconds of the series premiere of Riverdale, a young man speaks quietly in voiceover, his words playing over idyllic shots of American life:

Our story is about a town, a small town, and the people who live in the town. From a distance, it presents itself like so many other small towns all over the world. Safe. Decent. Innocent. Get closer, though, and you start seeing the shadows underneath. The name of our town is Riverdale.

Much later, we realize that the speaker is Jughead of Archie Comics fame, played by former Disney child star Cole Sprouse, which might seem peculiar enough in itself. But what I noticed first about this monologue is that it basically summarizes the prologue of Blue Velvet, which begins with images of roses and picket fences and then dives into the grass, revealing the insects ravening like feral animals in the darkness. It’s one of the greatest declarations of intent in all of cinema, and initially, there’s something a little disappointing in the way that Riverdale feels obliged to blandly state what Lynch put into a series of unforgettable images. Yet I have the feeling that series creator Roberto Aguirre-Sacasa, who says that Blue Velvet is one of his favorite movies, knows exactly what he’s doing. And the result promises to be more interesting than even he can anticipate.

Riverdale has been described as The O.C. meets Twin Peaks, which is how it first came to my attention. But it's also a series on the CW, with all the good, the bad, and the lack of ugly that this implies. This is the network that produced The Vampire Diaries, the first three seasons of which unexpectedly generated some of my favorite television from the last few years, and it takes its genre shows very seriously. There's a fascinating pattern at work within systems that produce such narratives on a regular basis, whether in pulp magazines or comic books or exploitation pictures: as long as you hit all the obligatory notes and come in under budget, you're granted a surprising amount of freedom. The CW, like its predecessors, has become an unlikely haven for auteurs, and it's the sort of place where a showrunner like Aguirre-Sacasa—who has an intriguing background in playwriting, comics, and television—can explore a sandbox like this for years. Yet it also requires certain heavy, obvious beats, like structural supports, to prop up the rest of the edifice. A lot of the first episode of Riverdale, like most pilots, is devoted to setting up its premise and characters for even the most distracted viewers, and it can be almost insultingly on the nose. It's why it feels obliged to spell out its theme of dark shadows beneath its sunlit surfaces, which isn't exactly hard to grasp. As Roger Ebert wrote decades ago in his notoriously indignant review of Blue Velvet: "What are we being told? That beneath the surface of Small Town, U.S.A., passions run dark and dangerous? Don't stop the presses."

Blue Velvet

As a result, if you want to watch Riverdale at all, you need to get used to being treated occasionally as if you were twelve years old. But Aguirre-Sacasa seems determined to have it both ways. Like Glee before it, it feels as if it's being pulled in three different directions even before it begins, but in this case, it comes off less as an unwanted side effect than as a strategy. It's worth noting that not only did Aguirre-Sacasa write for Glee itself, but he's also the guy who stepped in to rewrite Spider-Man: Turn Off the Dark, which means that he knows something about wrangling intractable material for a mass audience under enormous scrutiny. (He's also the chief creative officer of Archie Comics, which feels like a dream job in the best sort of way: one of his projects at the Yale School of Drama was a play about Archie encountering the murderers Leopold and Loeb, and he later received a cease and desist order from his future employer over Archie's Weird Fantasy, which depicted its lead character as coming out of the closet.) Riverdale often plays like the work of a prodigiously talented writer trying to put his ideas into a form that could plausibly air on Thursdays after Supernatural. Like most shows at this stage, it's also openly trying to decide what it's supposed to be about. And I want to believe, on the basis of almost zero evidence, that Aguirre-Sacasa is deliberately attempting something almost unworkable, in hopes that he'll be able to stick with it long enough—on a network that seems fairly indulgent of shows on the margins—to make it something special.

Most great television results from this sort of evolutionary process, and I’ve noted before—most explicitly in my Salon piece on The X-Files—that the best genre shows emerge when a jumble of inconsistent elements is given the chance to find its ideal form, usually because it lucks into a position where it can play under the radar for years. The pressures of weekly airings, fan response, critical reviews, and ratings, along with the unpredictable inputs of the cast and writing staff, lead to far more rewarding results than even the most visionary showrunner could produce in isolation. Writers of serialized narratives like comic books know this intuitively, and consciously or not, Aguirre-Sacasa seems to be trying something similar on television. It’s not an approach that would make sense for a series like Westworld, which was produced for so much money and with such high expectations that its creators had no choice but to start with a plan. But it might just work on the CW. I’m hopeful that Aguirre-Sacasa and his collaborators will use the mystery at the heart of the series much as Twin Peaks did, as a kind of clothesline on which they can hang a lot of wild experiments, only a certain percentage of which can be expected to work. Twin Peaks itself provides a measure of this method’s limitations: it mutated into something extraordinary, but it didn’t survive the departure of its original creative team. Riverdale feels like an attempt to recreate those conditions, and if it utilizes the Archie characters as its available raw material, well, why not? If Lynch had been able to get the rights, he might have used them, too.

Rob and Betty and Don and Laura

with 3 comments

Laura Petrie and Betty Draper

Mary Tyler Moore was the loveliest woman ever to appear on television, but you can only fully appreciate her charms if you also believe that Dick Van Dyke was maybe the most attractive man. I spent much of my youth obsessed with Rob and Laura Petrie on The Dick Van Dyke Show, which I think is the best three-camera sitcom of all time, and the one that secretly had the greatest impact on my inner life. Along with so much else, it was the first show that seemed to mine comedic and narrative material out of the act of its own creation. Rob was a comedy writer, and thanks to his scenes at the office with Sally and Buddy, I thought for a while I might want to do the same thing. I know now that this wouldn’t be a great job for someone like me, but the image of it is still enticing. What made it so appealing, I’ve come to realize, is that when Rob came home, the show never ended—he was married to a woman who was just as smart, funny, and talented as he was. (Looking at Moore, who was only twenty-four when the series premiered, I’m reminded a little of Debbie Reynolds in Singin’ in the Rain, who effortlessly kept up with her older costars under conditions of enormous pressure.) It was my first and best picture of a life that seemed complete both at work and at home. And the fact that both Moore and Van Dyke seem to have been drinking heavily during the show’s production only points at how difficult it must have been to sustain that dream on camera.

What strikes me the most now about The Dick Van Dyke Show is the uncanny way in which it anticipates the early seasons of Mad Men. In both shows, a husband leaves his idyllic home in Westchester each morning to commute to a creative job in Manhattan, where he brainstorms ideas with his wisecracking colleagues. (Don and Betty lived in Ossining, but the house that was used for exterior shots was in New Rochelle, with Rob and Laura presumably just up the road.) His wife is a much younger knockout—Laura was a former dancer, Betty a model—who seems as if she ought to be doing something else besides watching a precocious kindergartener. The storylines are about evenly divided between the home and the office, and between the two, they give us a fuller portrait of the protagonist than most shows ever do. The influence, I can only assume, was unconscious. We know that Matthew Weiner watched the earlier series, as he revealed in a GQ interview when asked about life in the writers' room:

We all came up in this system…When I watch The Dick Van Dyke Show, I’m like, Wow, this is the same job. There’s the twelve-year-old kid on the staff. There’s the guy who delivers lunch. I guarantee you I can walk into [another writer’s office] and, except for where the snack room is, it’s gonna be similar on some level.

And I don’t think it’s farfetched to guess that The Dick Van Dyke Show was Weiner’s introduction, as it was for so many of us, to the idea of writing for television in the first place.

Rob Petrie and Don Draper

The more I think about it, the more these two shows feel like mirror images of each other, just as “Don and Betty Draper” and “Rob and Laura Petrie” share the same rhythm. I’m not the first one to draw this connection, but instead of highlighting the obvious contrast between the sunniness of the former and the darkness of the latter, I’d prefer to focus on what they have in common. Both are hugely romantic visions of what it means to be a man who can afford a nice house in Westchester based solely on his ability to pitch better ideas than anybody else. Mad Men succeeds in large part because it manages to have it both ways. The series implicitly rebukes Don’s personal behavior, but it never questions his intelligence or talent. It doesn’t really sour us on advertising, any more than it does on drinking or smoking, and I don’t have any doubt that there are people who will build entire careers around its example. Both shows are the work of auteurs—Carl Reiner and Matt Weiner, whose names actually rhyme—who can’t help but let their joy in their own technical facility seep into the narrative. Rob and Don are veiled portraits of their creators. One is a lot better and the other a whole lot worse, but both amount to alternate lives, enacted for an audience, that reflect the restless activity behind the scenes.

And the real difference between Mad Men and The Dick Van Dyke Show doesn’t have anything to do with the decades in which they first aired, or even with the light and dark halves of the Camelot era that they both evoke. It comes down to the contrast between Laura and Betty—who, on some weird level, seem to represent opposing sides of the public image of Jacqueline Kennedy, and not just because the hairstyles are so similar. Betty was never a match for Don at home, and the only way in which she could win the game, which she did so emphatically, was to leave him altogether. Laura was Rob’s equal, intellectually and comedically, and she fit so well into the craziness at The Alan Brady Show that it wasn’t hard to envision her working there. In some ways, she was limited by her role as a housewife, and she would find her fullest realization in her second life as Mary Richards. But the enormous gap between Rob and Don boils down to the fact that one was married to a full partner and teammate, while the other had to make do with a glacial symbol of his success. When I think of them, I remember two songs. One is “Song of India,” which plays as Betty descends the hotel steps in “For Those Who Think Young,” as Don gazes at her so longingly that he seems to be seeing the ghost of his own marriage. The other is “Mountain Greenery,” which Rob and Laura sing at a party at their house, in a scene that struck me as contrived even at the time. Were there ever parties like this? It doesn’t really matter. Because I can’t imagine Don and Betty doing anything like it.

Written by nevalalee

January 26, 2017 at 9:05 am

Listening to “Retention,” Part 3

leave a comment »

Retention

Note: I’m discussing the origins of “Retention,” the episode that I wrote for the audio science fiction anthology series The Outer Reach. It’s available for streaming here on the Howl podcast network, and you can get a free month of access by using the promotional code REACH.

One of the unsung benefits of writing for film, television, or radio is that it requires the writer to conform to a fixed format on the printed page. The stylistic conventions of the screenplay originally evolved for the sake of everyone but the screenwriter: it's full of small courtesies for the director, actors, sound editor, production designer, and line producer, and in theory, it's supposed to result in one minute of running time per page—although, in practice, the differences between filmmakers and genres make even this rule of thumb almost meaningless. But it offers certain advantages for writers, too, even if it's mostly by accident. It can be helpful for authors to force themselves to work within the typeface, margins, and arbitrary formatting rules that the script imposes: it leaves them with minimal freedom except in the choice of the words themselves. Because all the dialogue is indented, you can see the balance between talk and action at a glance, and you eventually develop an intuition about how a good script should look when you flip rapidly through the pages. (The average studio executive, I suspect, rarely does much more than this.) Its typographical constraints amount to a kind of poetic form, and you find yourself thinking in terms of the logic of that space. As the screenwriter Terry Rossio put it:

In retrospect, my dedication—or my obsession—toward getting the script to look exactly the way it should, no matter how long it took—that’s an example of the sort of focus one needs to make it in this industry…If you find yourself with this sort of obsessive behavior—like coming up with inventive ways to cheat the page count!—then, I think, you’ve got the right kind of attitude to make it in Hollywood.

When it came time to write “Retention,” I was looking forward to working within a new template: the radio play. I studied other radio scripts and did my best to make the final result look right. This was more for my own sake than for anybody else’s, and I’m pretty sure that my producer would have been happy to get a readable script in any form. But I had a feeling that it would be helpful to adapt my habitual style to the standard format, and it was. In many ways, this story was a more straightforward piece of writing than most: it’s just two actors talking with minimal sound effects. Yet the stark look of the radio script, which consists of nothing but numbered lines of dialogue alternating between characters, had a way of clarifying the backbone of the narrative. Once I had an outline, I began by typing the dialogue as quickly as I could, almost in real time, without even indicating the identities of the speakers. Then I copied and pasted the transcript—which is how I came to think of it—into the radio play template. For the second draft, I found myself making small changes, as I always do, so that the result would look good on the page, rewriting lines to make for an even right margin and tightening speeches so that they wouldn’t fall across a page break. My goal was to come up with a document that would be readable and compelling in itself. And what distinguished it from my other projects was that I knew that it would ultimately be translated into performance, which was how its intended audience would experience it.

A page from the radio script of "Retention"

I delivered a draft of the script to Nick White, my producer, on January 8, 2016, which should give you a sense of how long it takes for something like this to come to fruition. Nick made a few edits, and I did one more pass on the whole thing, but we essentially had a finished version by the end of the month. After that, there was a long stretch of waiting, as we ran the script past the Howl network and began the process of casting. It went out to a number of potential actors, and it wasn’t until September that Aparna Nancherla and Echo Kellum came on board. (I also finally got paid for the script, which was noteworthy in itself—not many similar projects can afford to pay their writers. The amount was fairly modest, but it was more than reasonable for what amounted to a week of work.) In November, I got a rough cut of the episode, and I was able to make a few small suggestions. Finally, on December 21, it premiered online. All told, it took about a year to transform my initial idea into fifteen minutes of audio, so I was able to listen to the result with a decent amount of detachment. I’m relieved to say that I’m pleased with how it turned out. Casting Aparna Nancherla as Lisa, in particular, was an inspired touch. And although I hadn’t anticipated the decision to process her voice to make it more obvious from the beginning that she was a chatbot, on balance, I think that it was a valid choice. It’s probably the most predictable of the story’s twists, and by tipping it in advance, it serves as a kind of mislead for listeners, who might catch onto it quickly and conclude, incorrectly, that it was the only surprise in store.

What I found most interesting about the whole process was how it felt to deliver what amounted to a blueprint of a story for others to execute. Playwrights and screenwriters do it all the time, but for me, it was a novel experience: I may not be entirely happy with every story I’ve published, but they’re all mine, and I bear full responsibility for the outcome. “Retention” gave me a taste, in a modest way, of how it feels to hand an idea over to someone else, and of the peculiar relationship between a script and the dramatic work itself. Many aspiring screenwriters like to think that their vision on the page is complete, but it isn’t, and it has to pass through many intermediaries—the actors, the producer, the editor, the technical team—before it comes out on the other side. On balance, I prefer writing my own stuff, but I came away from “Retention” with valuable lessons that I expect to put into practice, whether or not I write for audio again. (I’m hopeful that there will be a second season of The Outer Reach, and I’d love to be a part of it, but its future is still up in the air.) I’ve spent most of my career worrying about issues of clarity, and in the case of a script, this isn’t an abstract goal, but a strategic element that can determine how faithfully the story is translated into its final form. Any fuzzy thinking early on will only be magnified in subsequent stages, so there’s a huge incentive for the writer to make the pieces as transparent and logical as possible. This is especially true when you’re providing a sketch for someone else to finish, but it also applies when you’re writing for ordinary readers, who are doing nothing else, after all, but turning the story into a movie in their heads.

Written by nevalalee

January 25, 2017 at 10:30 am

Rogue One and the logic of the story reel

leave a comment »

Gareth Edwards and Felicity Jones on the set of Rogue One

Last week, I came across a conversation on Yahoo Movies UK with John Gilroy and Colin Goudie, two of the editors who worked on Rogue One. I’ve never read an interview with a movie editor that wasn’t loaded with insights into storytelling, and this one is no exception. Here’s my favorite tidbit, in which Goudie describes cutting together a story reel early in the production process:

There was no screenplay, there was just a story breakdown at that point, scene by scene. [Director Gareth Edwards] got me to rip hundreds of movies and basically make Rogue One using other films so that they could work out how much dialogue they actually needed in the film.

It’s very simple to have a line [in the script] that reads “Krennic’s shuttle descends to the planet.” Now that takes maybe two to three seconds in other films, but if you look at any other Star Wars film you realize that takes forty-five seconds or a minute of screen time. So by making the whole film that way—I used a lot of the Star Wars films—but also hundreds of other films, too, it gave us a good idea of the timing.

This is a striking observation in itself. If Rogue One does an excellent job of recreating the feel of its source material, and I think it does, it’s because it honors its rhythms—which differ in subtle respects from those of other films—to an extent that the recent Star Trek movies mostly don’t. Goudie continues:

For example, the sequence of them breaking into the vault, I was ripping the big door closing in WarGames to work out how long does a vault door take to close.

So that’s what I did, and that was three months work to do that, and that had captions at the bottom which explained the action that was going to be taking place, and two thirds of the screen was filled with the concept art that had already been done and one quarter, the bottom corner, was the little movie clip to give you how long that scene would actually take.

Then I used dialogue from other movies to give you a sense of how long it would take in other films for someone to be interrogated. So for instance, when Jyn gets interrogated at the beginning of the film by the Rebel council, I used the scene where Ripley gets interrogated in Aliens.

Rogue One

This might seem like little more than interesting trivia, but there’s actually a lot to unpack. You could argue that the ability to construct an entire Star Wars movie out of analogous scenes from other films only points to how derivative the series has always been: it’s hard to imagine doing this for, say, Manchester By the Sea, or even Inception. But that’s also a big part of the franchise’s appeal. Umberto Eco famously said that Casablanca was made up of the memories of other movies, and he suggested that a cult movie—which we can revisit in our imagination from different angles, rather than recalling it as a seamless whole—is necessarily “unhinged”:

Only an unhinged movie survives as a disconnected series of images, of peaks, of visual icebergs. It should display not one central idea but many. It should not reveal a coherent philosophy of composition. It must live on, and because of, its glorious ricketiness.

After reminding us of the uncertain circumstances under which Casablanca was written and filmed, Eco then suggests: “When you don’t know how to deal with a story, you put stereotyped situations in it because you know that they, at least, have already worked elsewhere…My guess is that…[director Michael Curtiz] was simply quoting, unconsciously, similar situations in other movies and trying to provide a reasonably complete repetition of them.”

What interests me the most is Eco’s conclusion: “What Casablanca does unconsciously, other movies will do with extreme intertextual awareness, assuming also that the addressee is equally aware of their purposes.” He cites Raiders of the Lost Ark and E.T. as two examples, and he easily could have named Star Wars as well, which is explicitly made up of such references. (In fact, George Lucas was putting together story reels before there was even a word for it: “Every time there was a war movie on television, like The Bridges at Toko-Ri, I would watch it—and if there was a dogfight sequence, I would videotape it. Then we would transfer that to 16mm film, and I’d just edit it according to my story of Star Wars. It was really my way of getting a sense of the movement of the spaceships.”) What Eco doesn’t mention—perhaps because he was writing a generation ago—is how such films can pass through intertextuality and end up on the other side. They create memories for viewers who aren’t familiar with the originals, and they end up being quoted in turn by filmmakers who only know Star Wars. They become texts in themselves. In assembling a story reel from hundreds of other movies, Edwards and Goudie were only doing in a literal fashion what most storytellers do in their heads. They figure out how a story should “look” at its highest level, in a rough sketch of the whole, and fill in the details later. The difference here is that Rogue One had the budget and resources to pay someone to do it for real, in a form that could be timed down to the second and reviewed by others, on the assumption that it would save money and effort down the line. Did it work? I’ll be talking about this more tomorrow.

Written by nevalalee

January 12, 2017 at 9:13 am

The tentpole test

leave a comment »

Rogue One: A Star Wars Story

How do you release blockbusters like clockwork and still make each one seem special? It's an issue that the movie industry is anxious to solve, and there's a lot riding on the outcome. When I saw The Phantom Menace nearly two decades ago, there was an electric sense of excitement in the theater: we were pinching ourselves over the fact that we were about to see the opening crawl for a new Star Wars movie on the big screen. That air of expectancy diminished for the two prequels that followed, and not only because they weren't very good. There's a big difference, after all, between the accumulated anticipation of sixteen years and the kind that builds up when the installments are only a few years apart. The decade that elapsed between Revenge of the Sith and The Force Awakens was enough to ramp it up again, as if fan excitement were a battery that recovers some of its charge after it's allowed to rest for a while. In the past, when we've watched a new chapter in a beloved franchise, our experience hasn't just been shaped by the movie itself, but by the sudden release of energy that has been bottled up for so long. That kind of prolonged wait can prevent us from honestly evaluating the result—I wasn't the only one who initially thought that The Phantom Menace had lived up to my expectations—but that isn't necessarily a mistake. A tentpole picture is named for the support that it offers to the rest of the studio, but it also plays a central role in the lives of fans, which have been going on long before the film starts and will continue after it ends. As Robert Frost once wrote about a different tent, it's "loosely bound / By countless silken ties of love and thought / to every thing on earth the compass round."

When you have too many tentpoles coming out in rapid succession, however, the outcome—if I can switch metaphors yet again—is a kind of wave interference that can lead to a weakening of the overall system. On Christmas Eve, I went to see Rogue One, which was preceded by what felt like a dozen trailers. One was for Spider-Man: Homecoming, which left me with a perplexing feeling of indifference. I'm not the only one to observe that the constant onslaught of Marvel movies makes each installment feel less interesting, but in the case of Spider-Man, we actually have a baseline for comparison. Two baselines, really. I can't defend every moment of the three Sam Raimi films, but there's no question that each of those movies felt like an event. There was even enough residual excitement lingering after the franchise was rebooted to make me see The Amazing Spider-Man in the theater, and even its sequel felt, for better or worse, like a major movie. (I wonder sometimes if audiences can sense the pressure when a studio has a lot riding on a particular film: even a mediocre movie can seem significant if a company has tethered all its hopes to it.) Spider-Man: Homecoming, by contrast, feels like just one more component in the Marvel machine, and not even a particularly significant one. It has the effect of diminishing a superhero who ought to be at the heart of any universe in which he appears, relegating one of the two or three most successful comic book characters of all time to a supporting role in a larger universe. And because we still remember how central he was to no fewer than two previous franchises, it feels like a demotion, as if Spider-Man were an employee who left the company, came back, and now reports to Iron Man.

Spider-Man in Captain America: Civil War

It isn’t that I’m all that emotionally invested in the future of Spider-Man, but it’s a useful case study for what it tells us about the pitfalls of these films, which can take something that once felt like a milestone and reduce it to a midseason episode of an ongoing television series. What’s funny, of course, is that the attitude we’re now being asked to take toward these movies is actually closer to the way in which they were originally conceived. The word “episode” is right there in the title of every Star Wars movie, which George Lucas saw as an homage to classic serials, with one installment following another on a weekly basis. Superhero films, obviously, are based on comic books, which are cranked out by the month. The fact that audiences once had to wait for years between movies may turn out to have been a historical artifact caused by technological limitations and corporate inertia. Maybe the logical way to view these films is, in fact, in semiannual installments, as younger viewers are no doubt growing up to expect. In years to come, the extended gaps between these movies in prior decades will seem like a structural quirk, rather than an inherent feature of how we relate to them. This transition may not be as meaningful as, say, the shift from silent films to the talkies, but they imply a similar change in the way we relate to the film onscreen. Blockbusters used to be released with years of anticipation baked into the response from moviegoers, which is no longer something that can be taken for granted. It’s a loss, in its way, to fan culture, which had to learn how to sustain itself during the dry periods between films, but it also implies that the movies themselves face a new set of challenges.

To be fair, Disney, which controls both the Marvel and Star Wars franchises, has clearly thought a lot about this problem, and it has hit on approaches that seem to work pretty well. With the Marvel Universe, this means pitching most of the films at a level at which they're just good enough, but no more, while investing real energy every few years into a movie that is first among equals. This leads to a lot of fairly mediocre installments, but also to the occasional Captain America: Civil War, which I think is the best Marvel movie yet—it pulls off the impossible task of updating us on a dozen important characters while also creating real emotional stakes in the process, which is even more difficult than it looks. Rogue One, which I also liked a lot, takes a slightly different tack. For most of the first half, I was skeptical of how heavily it was leaning on its predecessors, but by the end, I was on board, and for exactly the same reason. This is a movie that depends on our knowledge of the prior films for its full impact, but it does so with intelligence and ingenuity, and there's a real satisfaction in how neatly it aligns with and enhances the original Star Wars, while also having the consideration to close itself off at the end. (A lot of the credit for this may be due to Tony Gilroy, the screenwriter and unbilled co-director, who pulled off much the same feat when he structured much of The Bourne Ultimatum to take place during gaps in The Bourne Supremacy.) Relying on nostalgia is a clever way to compensate for the reduced buildup between movies, as if Rogue One were drawing on the goodwill that Star Wars built up and hasn't dissipated, like a flywheel that serves as an uninterruptible power supply. Star Wars isn't just a tentpole, but a source of energy. And it might just be powerful enough to keep the whole machine running forever.

The steady hand

with 2 comments

Danny Lloyd in The Shining

Forty years ago, the cinematographer Garrett Brown invented the Steadicam. It was a stabilizer attached to a harness that allowed a camera operator, walking on foot or riding in a vehicle, to shoot the kind of smooth footage that had previously only been possible using a dolly. Before long, it had revolutionized the way in which both movies and television were shot, and not always in the most obvious ways. When we think of the Steadicam, we’re likely to remember virtuoso extended takes like the Copacabana sequence in Goodfellas, but it can also be a valuable tool even when we aren’t supposed to notice it. As the legendary Robert Elswit said recently to the New York Times:

“To me, it’s not a specialty item,” he said. “It’s usually there all the time.” The results, he added, are sometimes “not even necessarily recognizable as a Steadicam shot. You just use it to get something done in a simple way.”

Like digital video, the Steadicam has had a leveling influence on the movies. Scenes that might have been too expensive, complicated, or time-consuming to set up in the conventional manner can be done on the fly, which has opened up possibilities both for innovative stylists and for filmmakers who are struggling to get their stories made at all.

Not surprisingly, there are skeptics. In On Directing Film, which I think is the best book on storytelling I’ve ever read, David Mamet argues that it’s a mistake to think of a movie as a documentary record of what the protagonist does, and he continues:

The Steadicam (a hand-held camera), like many another technological miracle, has done injury; it has injured American movies, because it makes it so easy to follow the protagonist around, one no longer has to think, “What is the shot?” or “Where should I put the camera?” One thinks, instead, “I can shoot the whole thing in the morning.”

This conflicts with Mamet’s approach to structuring a plot, which hinges on dividing each scene into individual beats that can be expressed in purely visual terms. It’s a method that emerges naturally from the discipline of selecting shots and cutting them together, and it’s the kind of hard work that we’re often tempted to avoid. As Mamet adds in a footnote: “The Steadicam is no more capable of aiding in the creation of a good movie than the computer is in the writing of a good novel—both are labor-saving devices, which simplify and so make more attractive the mindless aspects of creative endeavor.” The casual use of the Steadicam seduces directors into conceiving of the action in terms of “little plays,” rather than in fundamental narrative units, and it removes some of the necessity of disciplined thinking beforehand.

Michael Keaton and Edward Norton in Birdman

But it isn’t until toward the end of the book that Mamet delivers his most ringing condemnation of what the Steadicam represents:

“Wouldn’t it be nice,” one might say, “if we could get this hall here, really around the corner from that door there; or to get that door here to really be the door that opens on the staircase to that door there? So we could just move the camera from one to the next?”

It took me a great deal of effort and still takes me a great deal and will continue to take me a great deal of effort to answer the question thusly: no, not only is it not important to have those objects literally contiguous; it is important to fight against this desire, because fighting it reinforces an understanding of the essential nature of film, which is that it is made of disparate shots, cut together. It’s a door, it’s a hall, it’s a blah-blah. Put the camera “there” and photograph, as simply as possible, that object. If we don’t understand that we both can and must cut the shots together, we are sneakily falling victim to the mistaken theory of the Steadicam.

This might all sound grumpy and abstract, but it isn’t. Take Birdman. You might well love Birdman—plenty of viewers evidently did—but I think it provides a devastating confirmation of Mamet’s point. By playing as a single, seemingly continuous shot, it robs itself of the ability to tell the story with cuts, and it inadvertently serves as an advertisement of how most good movies come together in the editing room. It’s an audacious experiment that never needs to be tried again. And it wouldn’t exist at all if it weren’t for the Steadicam.

But the Steadicam can also be a thing of beauty. I don’t want to discourage its use by filmmakers for whom it means the difference between making a movie under budget and never making it at all, as long as they don’t forget to think hard about all of the constituent parts of the story. There’s also a place for the bravura long take, especially when it depends on our awareness of the unfaked passage of time, as in the opening of Touch of Evil—a long take, made without benefit of a Steadicam, that runs the risk of looking less astonishing today because technology has made this sort of thing so much easier. And there’s even room for the occasional long take that exists only to wow us. De Palma has a fantastic one in Raising Cain, which I watched again recently, that deserves to be ranked among the greats. At its best, it can make the filmmaker’s audacity inseparable from the emotional core of the scene, as David Thomson observes of Goodfellas: “The terrific, serpentine, Steadicam tracking shot by which Henry Hill and his girl enter the Copacabana by the back exit is not just his attempt to impress her but Scorsese’s urge to stagger us and himself with bravura cinema.” The best example of all is The Shining, with its tracking shots of Danny pedaling his Big Wheel down the deserted corridors of the Overlook. It’s showy, but it also expresses the movie’s basic horror, as Danny is inexorably drawn to the revelation of his father’s true nature. (And it’s worth noting that much of its effectiveness is due to the sound design, with the alternation of the wheels against the carpet and floor, which is one of those artistic insights that never grows dated.) The Steadicam is a tool like any other, which means that it can be misused. It can be wonderful, too. But it requires a steady hand behind the camera.

The decline of the west

leave a comment »

Evan Rachel Wood on Westworld

Note: Spoilers follow for the season finale of Westworld.

Over time, as a society, we’ve more or less figured out how we’re all supposed to deal with spoilers. When a movie first comes out, there’s a grace period in which most of us agree not to discuss certain aspects of the story, especially the ending. Usually, reviewers will confine their detailed observations to the first half of the film, which can be difficult for a critic who sees his or her obligation as that of a thoughtful commentator, rather than of a consumer advisor who simply points audiences in the right direction on opening weekend. If there’s a particularly striking development before the halfway mark, we usually avoid talking about that, too. (Over time, the definition of what constitutes a spoiler has expanded to the point where some fans apply it to any information about a film whatsoever, particularly for big franchise installments.) For six months or so, we remain discreet—and most movies, it’s worth noting, are forgotten long before we even get to that point. A movie with a major twist at the end may see that tacit agreement extended for years. Eventually, however, it becomes fair game. Sometimes it’s because a surprise has seeped gradually into the culture, so that a film like Citizen Kane or Psycho becomes all but defined by its secrets. In other cases, as with The Sixth Sense or Fight Club, it feels more like we’ve collectively decided that anyone who wants to see it has already gotten a chance, and now we can talk about it openly. And up until now, it’s a system that has worked pretty well.

But this approach no longer makes sense for a television show that is still on the air, at least if the case of Westworld is any indication. We’re not talking about spoilers, exactly, but about a certain kind of informed speculation. The idea that one of the plotlines on Westworld was actually an extended flashback first surfaced in discussions on communities like Reddit, was picked up by the commenters on the reviews on mainstream websites, led theorists to put together elaborate chronologies and videos to organize the evidence, and finally made its way into think pieces. Long before last night’s finale, it was clear that the theory had to be correct. The result didn’t exactly ruin my enjoyment, since it turned out to be just one thread in a satisfying piece of storytelling, but I’ll never know what it would have been like to have learned the truth along with Dolores, and I suspect that a lot of other viewers felt the same twinge of regret. (To be fair, the percentage of people who keep up with this sort of theorizing online probably amounts to a fraction of the show’s total viewership, and the majority of the audience experienced the reveal pretty much as the creators envisioned it.) There’s clearly no point in discouraging this kind of speculation entirely. But when a show plays fair, as Westworld did, it’s only a matter of time before somebody solves the mystery in advance. And because a plausible theory can spread so quickly through the hive mind, it makes us feel smarter, as individuals, than we really are, which compromises our reactions to what was a legitimately clever and resonant surprise.

The Westworld episode "The Bicameral Mind"

Westworld isn’t the first show to be vulnerable to this kind of collective sleuthing: Game of Thrones has been subjected to it for years, especially when it comes to the parentage, status, and ultimate fate of a certain character who otherwise wouldn’t seem interesting enough to survive. In both cases, it’s because the show—or the underlying novels—provided logical clues along the way to prepare us, in the honorable fashion of all good storytelling. The trouble is that these rules were established at a time when most works of narrative were experienced in solitude. Even if one out of three viewers figured out the twist in The Usual Suspects before the movie was halfway done, it didn’t really affect the experience of the others in the theater, since we don’t tend to discuss the story in progress out loud. That was true of television, too, for most of the medium’s history. These days, however, many of us are essentially talking about these stories online while they’re still happening, so it isn’t surprising if the solutions can spread like a virus. I don’t blame the theorists, because this kind of speculation can be an absorbing game in its own right. But it’s so powerful that it needs to be separated from the general population. It requires a kind of self-policing, or quarantine, that has to become second nature to every viewer of this kind of show. Reviewers need to figure out how to deal with it, too. Otherwise, shows will lose the incentive to play fair, relying instead on blunter, more mechanical kinds of surprise. And this would be a real shame, because Westworld has assembled the pieces so effectively that I don’t doubt it will continue to do so in the future.

Watching the finale, I was curious to see how it would manage to explain the chronology of Dolores’s story without becoming hopelessly confusing, and it did a beautiful job, mostly by subordinating it to the larger questions of William’s fate, Dolores’s journey, and Ford’s master plan, which has taken thirty-five years to come to fruition. (In itself, this is a useful insight into storytelling: it’s easier for the audience to make a big conceptual leap when it feeds into an emotional arc that is already in progress, and if it’s treated as a means, not an end.) If anything, the reveal of the identity of Wyatt was even more powerful—although, oddly, the fact that everything has unfolded according to Ford’s design undermines the agency of the very robots that it was supposed to defend. It’s an emblem for why this excellent season remains one notch down from the level of a masterpiece, thanks to the need of its creators, like Ford, to maintain a tight level of control. Still, if it lasts for as long as I think it will, it may not even matter how much of it the Internet figured out on first viewing. For a television show, the lifespan of a spoiler seems to play in reverse: instead of a grace period followed by free discussion after enough time has passed, we get intense speculation while the show airs, giving way to silence once we’ve all moved on to the next big thing. If Westworld endures as a work of art, it will be seen just as it was intended by those who discover it much later, after the flurry of speculation has faded. I don’t know how long it will take before it can be seen again with fresh eyes. But thirty-five years seems about right.

Written by nevalalee

December 5, 2016 at 9:24 am

Posted in Television


The analytical laboratory

leave a comment »

The Martian

Over the last few months, there's been a surprising flurry of film and television activity involving the writers featured in my upcoming book Astounding. Syfy has announced plans to adapt Robert A. Heinlein's Stranger in a Strange Land as a miniseries, with an imposing creative team that includes Hollywood power broker Scott Rudin and Zodiac screenwriter James Vanderbilt. Columbia is aiming to reboot Starship Troopers with producer Neal H. Moritz of The Fast and the Furious, prompting Paul Verhoeven, the director of the original, to comment: "Going back to the novel would fit very much in a Trump presidency." The production company Legendary has bought the film and television rights to Dune, which first appeared as a serial edited by John W. Campbell in Analog. Meanwhile, Jonathan Nolan is apparently still attached to an adaptation of Isaac Asimov's Foundation, although he seems rather busy at the moment. (L. Ron Hubbard remains relatively neglected, unless you want to count Leah Remini's new show, which the Church of Scientology would probably prefer you didn't.) The fact that rights have been purchased and press releases issued doesn't necessarily mean that anything will happen, of course, although the prospects for Stranger in a Strange Land seem strong. And while it's possible that I'm simply paying more attention to these announcements now that I'm thinking about these writers all the time, I suspect that there's something real going on.

So why the sudden surge of interest? The most likely, and also the most heartening, explanation is that we’re experiencing a revival of hard science fiction. Movies like Gravity, Interstellar, The Martian, and Arrival—which I haven’t seen yet—have demonstrated that there’s an audience for films that draw more inspiration from Clarke and Kubrick than from Star Wars. Westworld, whatever else you might think of it, has done much the same on television. And there’s no question that the environment for this kind of story is far more attractive now than it was even ten years ago. For my money, the most encouraging development is the movie Life, a horror thriller set on the International Space Station, which is scheduled to come out next summer. I’m tickled by it because, frankly, it doesn’t look like anything special: the trailer starts promisingly enough, but it ends by feeling very familiar. It might turn out to be better than it looks, but I almost hope that it doesn’t. The best sign that a genre is reaching maturity isn’t a series of singular achievements, but the appearance of works that are content to color inside the lines, consciously evoking the trappings of more visionary movies while remaining squarely focused on the mainstream. A film like Interstellar is always going to be an outlier. What we need are movies like what Life promises to be: a science fiction film of minimal ambition, but a certain amount of skill, and a willingness to copy the most obvious features of its predecessors. That’s when you’ve got a trend.

Jake Gyllenhaal in Life

The other key development is the growing market for prestige dramas on television, which is the logical home for Stranger in a Strange Land and, I think, Dune. It may be the case, as we’ve been told in connection with Star Trek: Discovery, that there isn’t a place for science fiction on a broadcast network, but there’s certainly room for it on cable. Combine this with the increased appetite for hard science fiction on film, and you’ve got precisely the conditions in which smart production companies should be snatching up the rights to Asimov, Heinlein, and the rest. Given the historically rapid rise and fall of such trends, they shouldn’t expect this window to remain open for long. (In a letter to Asimov on February 3, 1939, Frederik Pohl noted the flood of new science fiction magazines on newsstands, and he concluded: “Time is indeed of the essence…Such a condition can’t possibly last forever, and the time to capitalize on it is now; next month may be too late.”) What they’re likely to find, in the end, is that many of these stories are resistant to adaptation, and that they’re better off seeking out original material. There’s a reason that there have been so few movies derived from Heinlein and Asimov, despite the temptation that they’ve always presented. Heinlein, in particular, seems superficially amenable to the movies: he certainly knew how to write action in a way that Asimov couldn’t. But he also liked to spend the second half of a story picking apart the assumptions of the first, after sucking in the reader with an exciting beginning, and if you aren’t going to include the deconstruction, you might as well write something from scratch.

As it happens, the recent spike of action on the adaptation front has coincided with another announcement. Analog, the laboratory in which all these authors were born, is cutting back its production schedule to six double issues every year. This is obviously intended to manage costs, and it’s a reminder of how close to the edge the science fiction digests have always been. (To be fair, the change also coincides with a long overdue update of the magazine’s website, which is very encouraging. If this reflects a true shift from print to online, it’s less a retreat than a necessary recalibration.) It’s easy to contrast the game of pennies being played at the bottom with the expenditure of millions of dollars at the top, but that’s arguably how it has to be. Analog, like Astounding before it, was a machine for generating variations, which needs to be done on the cheap. Most stories are forgotten almost at once, and the few that survive the test of time are the ones that get the lion’s share of resources. All the while, the magazine persists as an indispensable form of research and development—a sort of skunk works that keeps the entire enterprise going. That’s been true since the beginning, and you can see this clearly in the lives of the writers involved. Asimov, Heinlein, Herbert, and their estates became wealthy from their work. Campbell, who more than any other individual was responsible for the rise of modern science fiction, did not. Instead, he remained in his little office, lugging manuscripts in a heavy briefcase twice a week on the train. He was reasonably well off, but not in a way that creates an empire of valuable intellectual property. Instead, he ran the lab. And we can see the results all around us.

The Westworld variations

leave a comment »

Jeffrey Wright on Westworld

Note: Spoilers follow for the most recent episode of Westworld.

I’ve written a lot on this blog about the power of ensembles, which allow television shows to experiment with different combinations of characters. Usually, it takes a season or two for the most fruitful pairings to emerge, and they can take even the writers by surprise. When a series begins, characters tend to interact based on where the plot puts them, and those initial groupings are based on little more than the creator’s best guess. Later, when the strengths of the actors have become apparent and the story has wandered in unanticipated directions, you end up with wonderful pairings that you didn’t even know you wanted. Last night’s installment of Westworld features at least two of these. The first is an opening encounter between Bernard and Maeve that gets the episode off to an emotional high that it never quite manages to top: it hurries Bernard to the next—and maybe last—stage of his journey too quickly to allow him to fully process what Maeve tells him. But it’s still nice to see them onscreen together. (They’re also the show’s two most prominent characters of color, but its treatment of race is so deeply buried that it barely even qualifies as subtext.) The second nifty scene comes when Charlotte, the duplicitous representative from the board, shows up in the Man in Black’s storyline. It’s more plot-driven, and it exists mostly to feed us some useful pieces of backstory. But there’s an undeniable frisson whenever two previously unrelated storylines reveal a hidden connection.

I hope that the show gives us more moments like this, but I’m also a little worried that it can’t. The scenes that I liked most in “The Well-Tempered Clavier” were surprising and satisfying precisely because the series has been so meticulous about keeping its plot threads separated. This may well be because at least one subplot is occurring in a different timeline, but more often, it’s a way of keeping things orderly: there’s so much happening in various places that the show is obliged to let each story go its own way. I don’t fault it for this, because this is such a superbly organized series, and although there are occasional lulls, they’ve been far fewer than you’d expect from a show with this level of complexity. But very little of it seems organic or unanticipated. This might seem like a quibble. Yet I desperately want this show to be as great as it shows promise of being. And if there’s one thing that the best shows of the last decade—from Mad Men to Breaking Bad to Fargo—have in common, it’s that they enjoy placing a few characters in a room and simply seeing what happens. You could say that Westworld is an inherently different sort of series, and that’s fine. But it’s such an effective narrative machine that it leaves me a little starved for those unpredictable moments that television, of all media, is the most likely to produce. (Its other great weakness is its general air of humorlessness, which arises from the same cause.) This is one of the most plot-heavy shows I’ve ever seen, but it’s possible to tell a tightly structured story while still leaving room for the unexpected. In fact, that’s one sign of mastery.

Evan Rachel Wood on Westworld

And you don’t need to look far for proof. In a pivotal passage in The Films of Akira Kurosawa, one of my favorite books on the movies, Donald Richie writes of “the irrational rightness of an apparently gratuitous image in its proper place,” and he goes on to say:

Part of the beauty of such scenes…is just that they are “thrown away” as it were, that they have no place, that they do not ostensibly contribute, that they even constitute what has been called bad filmmaking. It is not the beauty of these unexpected images, however, that captivates…but their mystery. They must remain unexplained. It has been said that after a film is over all that remains are a few scattered images, and if they remain then the film was memorable…Further, if one remembers carefully one finds that it is only the uneconomical, mysterious images which remain…Kurosawa’s films are so rigorous and, at the same time, so closely reasoned, that little scenes such as this appeal with the direct simplicity of water in the desert.

“Rigorous” and “closely reasoned” are two words that I’m sure the creators of Westworld would love to hear used to describe their show. But when you look at a movie like Seven Samurai—which on some level is the greatest western ever made—you have to agree with Richie: “What one remembers best from this superbly economical film then are those scenes which seem most uneconomical—that is, those which apparently add nothing to it.”

I don’t know if Westworld will ever become confident enough to offer viewers more water in the desert, but I’m hopeful that it will, because the precedent exists for a television series giving us a rigorous first season that it blows up down the line. I’m thinking, in particular, of Community, a show that might otherwise seem to have little in common with Westworld. It’s hard to remember now, after six increasingly nutty seasons, but Community began as an intensely focused sitcom: for its debut season, it didn’t even leave campus. The result gave the show what I’ve called a narrative home base, and even though I’m rarely inclined to revisit that first season, the groundwork that it laid was indispensable. It turned Greendale into a real place, and it provided a foundation for even the wildest moments to follow. Westworld seems to be doing much the same thing. Every scene so far has taken place in the park, and we’ve only received a few scattered hints of what the world beyond might be like—and whatever it is, it doesn’t sound good. The escape of the hosts from the park feels like an inevitable development, and the withholding of any information about what they’ll find is obviously a deliberate choice. This makes me suspect that this season is restricting itself on purpose, to prepare us for something even stranger, and in retrospect, it will seem cautious, compared to whatever else Westworld has up its sleeve. It’s the baseline from which crazier, more unexpected moments will later arise. Or, to take a page from the composer of “The Well-Tempered Clavier,” this season is the aria, and the variations are yet to come.

Written by nevalalee

November 28, 2016 at 8:35 am

Cain rose up


John Lithgow in Raising Cain

I first saw Brian De Palma’s Raising Cain when I was fourteen years old. In a weird way, it amounted to a peak moment of my early adolescence: I was on a school trip to our nation’s capital, sharing a hotel room with my friends from middle school, and we were just tickled to get away with watching an R-rated movie on cable. The fact that we ended up with Raising Cain doesn’t quite compare with the kids on The Simpsons cheering at the chance to see Barton Fink, but it isn’t too far off. I think that we liked it, and while I won’t claim that we understood it, that doesn’t mean much of anything—it’s hard for me to imagine anybody, of any age, entirely understanding this movie, which includes both me and De Palma himself. A few years later, I caught it again on television, and while I can’t say I’ve thought about it much since, I never forgot it. Gradually, I began to catch up on my De Palma, going mostly by whatever movies made Pauline Kael the most ecstatic at the time, which in itself was an education in the gap between a great critic’s pet enthusiasms and what exists on the screen. (In her review of The Fury, Kael wrote: “No Hitchcock thriller was ever so intense, went so far, or had so many ‘classic’ sequences.” I love Kael, but there are at least three things wrong with that sentence.) And ultimately De Palma came to mean a lot to me, as he does to just about anyone who responds to the movies in a certain way.

When I heard about the recut version of Raising Cain—in an interview with John Lithgow on The A.V. Club, no less, in which he was promoting his somewhat different role on The Crown—I was intrigued. And its backstory is particularly interesting. Shortly before the movie was first released, De Palma moved a crucial sequence from the beginning to the middle, eliminating an extended flashback and allowing the film to play more or less chronologically. He came to regret the change, but it was too late to do anything about it. Years later, a freelance director and editor named Peet Gelderblom read about the original cut and decided to restore it, performing a judicious edit on a digital copy. He put it online, where, unbelievably, it was seen by De Palma himself, who not only loved it but asked that it be included as a special feature on the new Blu-ray release. If nothing else, it’s a reminder of the true possibilities of fan edits, which have mostly been devoted to competing visions of the ideal version of Star Wars. With modern software, a fan can do for a movie what Walter Murch did for Touch of Evil, restoring it to the director’s original version based on a script or a verbal description. In the case of Raising Cain, this mostly just involved rearranging the pieces in the theatrical cut, but other fans have tackled such challenges as restoring all the deleted scenes in Twin Peaks: Fire Walk With Me, and there are countless other candidates.


Yet Raising Cain might be the most instructive case study of all, because simply restoring the original opening to its intended place results in a radical transformation. It isn’t for everyone, and it’s necessary to grant De Palma his usual passes for clunky dialogue and characterization, but if you’re ready to meet it halfway, you’re rewarded with a thriller that twists back on itself like a Möbius strip. De Palma plunders his earlier movies so blatantly that it isn’t clear if he’s somehow paying loving homage to himself—bypassing Hitchcock entirely—or recycling good ideas that he feels like using again. The recut opens with a long mislead that recalls Dressed to Kill, which means that Lithgow barely even appears for the first twenty minutes. You can almost see why De Palma chickened out for the theatrical version: Lithgow’s performance as the meek Carter and his psychotic imaginary brother Cain feels too juicy to withhold. But the logic of the script was destroyed. For a film that tests an audience’s suspension of disbelief in so many other ways, it’s unclear why De Palma thought that a flashback would be too much for the viewer to handle. The theatrical release preserves all the great shock effects that are the movie’s primary reason for existing, but they don’t build to anything, and you’re left with a film that plays like a series of sketches. With the original order restored, it becomes what it was meant to be all along: a great shaggy dog story with a killer punchline.

Raising Cain is gleefully about nothing but itself, and I wouldn’t force anybody to watch it who wasn’t already interested. But the recut also serves as an excellent introduction to its director, just as the older version did for me: when I first encountered it, I doubt I’d seen anything by De Palma, except maybe The Untouchables, and Mission: Impossible was still a year away. It’s safe to say that if you like Raising Cain, you’ll like De Palma in general, and if you can’t get past its archness, campiness, and indifference to basic plausibility—well, I can hardly blame you. Watching it again, I was reminded of Blue Velvet, a far greater movie that presents the viewer with a similar test. It has the same mixture of naïveté and incredible technical virtuosity, with scenes that barely seem to have been written alternating with ones that push against the boundaries of the medium itself. You’re never quite sure if the director is in on the gag, and maybe it doesn’t matter. There isn’t much beauty in Raising Cain, and De Palma is a hackier and more mechanical director than Lynch, but both are so strongly visual that the nonsensory aspects of their films, like the obligatory scenes with the cops, seem to wither before our eyes. (It’s an approach that requires a kind of raw, intuitive trust from the cast, and as much as I enjoy what Lithgow does here, he may be too clever and resourceful an actor to really disappear into the role.) Both are rooted, crucially, in Hitchcock, who was equally obsessive, but was careful to never work from his own script. Hitchcock kept his secret self hidden, while De Palma puts it in plain sight. And if it turns out to be nothing at all, that’s probably part of the joke.

Late night thoughts


Lewis Thomas

I cannot listen to Mahler’s Ninth Symphony with anything like the old melancholy mixed with the high pleasure I used to take from this music. There was a time, not long ago, when what I heard, especially in the final movement, was an open acknowledgement of death and at the same time a quiet celebration of the tranquility connected to the process. I took this music as a metaphor for reassurance, confirming my own strong hunch that the dying of every living creature, the most natural of all experiences, has to be a peaceful experience. I rely on nature. The long passages on all the strings at the end, as close as music can come to expressing silence itself, I used to hear as Mahler’s idea of leave-taking at its best. But always, I have heard this music as a solitary, private listener, thinking about death.

Now I hear it differently. I cannot listen to the last movement of the Mahler Ninth without the door-smashing intrusion of a huge new thought: death everywhere, the dying of everything, the end of humanity. The easy sadness expressed with such gentleness and delicacy by that repeated phrase on faded strings, over and over again, no longer comes to me as old, familiar news of the cycle of living and dying…If I were very young, sixteen or seventeen years old, I think I would begin, perhaps very slowly and imperceptibly, to go crazy…If I were sixteen or seventeen years old, I would not feel the cracking of my own brain, but I would know for sure that the whole world was coming unhinged. I can remember with some clarity what it was like to be sixteen. I had discovered the Brahms symphonies. I knew that there was something going on in the late Beethoven quartets that I would have to figure out, and I knew that there was plenty of time ahead for all the figuring I would ever have to do. I had never heard of Mahler. I was in no hurry. I was a college sophomore and had decided that Wallace Stevens and I possessed a comprehensive understanding of everything needed for a life…

The man on television, Sunday midday, middle-aged and solid, nice-looking chap, all the facts at his fingertips, more dependable looking than most high-school principals, is talking about civilian defense, his responsibility in Washington. It can make an enormous difference, he is saying. Instead of the outright death of eighty million American citizens in twenty minutes, he says, we can, by careful planning and practice, get that number down to only forty million, maybe even twenty…If I were sixteen or seventeen years old and had to listen to that, or read things like that, I would want to give up listening and reading. I would begin thinking up new kinds of sounds, different from any music heard before, and I would be twisting and turning to rid myself of human language.

Lewis Thomas, “Late Night Thoughts on Listening to Mahler’s Ninth Symphony”

Written by nevalalee

November 12, 2016 at 7:30 am
