Alec Nevala-Lee

Thoughts on art, creativity, and the writing life.


Peak television and the future of stardom


Kevin Costner in The Postman

Earlier this week, I devoured the long, excellent article by Josef Adalian and Maria Elena Fernandez of Vulture on the business of peak television. It’s full of useful insights and even better gossip—and it names plenty of names—but there’s one passage that really caught my eye, in a section about the huge salaries that movie stars are being paid to make the switch to the small screen:

A top agent defends the sums his clients are commanding, explaining that, in the overall scheme of things, the extra money isn’t all that significant. “Look at it this way,” he says. “If you’re Amazon and you’re going to launch a David E. Kelley show, that’s gonna cost $4 million an episode [to produce], right? That’s $40 million. You can have Bradley Whitford starring in it, [who is] gonna cost you $150,000 an episode. That’s $1.5 million of your $40 million. Or you could spend another $3.5 million [to get Costner] on what will end up being a $60 million investment by the time you market and promote it. You can either spend $60 [million] and have the Bradley Whitford show, or $63.5 [million] and have the Kevin Costner show. It makes a lot of sense when you look at it that way.”

With all due apologies to Bradley Whitford, I found this thought experiment fascinating, and not just for the reasons that the agent presumably shared it. It implies, for one thing, that television—which is often said to be overtaking Hollywood in terms of quality—is becoming more like feature filmmaking in another respect: it’s the last refuge of the traditional star. We frequently hear that movie stardom is dead and that audiences are drawn more to franchises than to recognizable faces, so the fact that cable and streaming networks seem intensely interested in signing film stars, in a post-True Detective world, implies that their model is different. Some of it may be due to the fact, as William Goldman once said, that no studio executive ever got fired for hiring a movie star: as the new platforms fight to establish themselves, it makes sense that they’d fall back on the idea of star power, which is one of the few things that corporate storytelling has ever been able to quantify or understand. It may also be because the marketing strategy for television inherently differs from that for film: an online series is unusually dependent on media coverage to stand out from the pack, and signing a star always generates headlines. Or at least it once did. (The Vulture article notes that Woody Allen’s new series for Amazon “may end up marking peak Peak TV,” and it seems a lot like a deal that was made for the sake of the coverage it would produce.)

Kevin Costner in JFK

But the most plausible explanation lies in simple economics. As the article explains, Netflix and the other streaming companies operate according to a “cost-plus” model: “Rather than holding out the promise of syndication gold, the company instead pays its studio and showrunner talent a guaranteed up-front profit—typically twenty or thirty percent above what it takes to make a show. In exchange, it owns all or most of the rights to distribute the show, domestically and internationally.” This limits the initial risk to the studio, but also the potential upside: nobody involved in producing the show itself will see any money on the back end. In addition, it means that even the lead actors of the series are paid a flat dollar amount, which makes them a more attractive investment than they might be for a movie. Most of the major stars in Hollywood earn gross points, which means that they get a cut of the box office receipts before the film turns a profit—a “first dollar” deal that makes the mathematics of breaking even much more complicated. The thought experiment about Bradley Whitford and Kevin Costner only makes sense if you can get Costner at a fixed salary per episode. In other words, movie stars are being actively courted by television because its model is a throwback to an earlier era, when actors were held under contract by a studio without any profit participation, and before stars and their agents negotiated better deals that ended up undermining the economic basis of the star system entirely.

And it’s revealing that Costner, of all actors, appears in this example. His name came up mostly because multiple sources told Vulture that he was offered $500,000 per episode to star in a streaming series: “He passed,” the article says, “but industry insiders predict he’ll eventually say ‘yes’ to the right offer.” But he also resonates because he stands for a kind of movie stardom that was already on the wane when he first became famous. It has something to do with the quintessentially American roles that he liked to play—even JFK is starting to seem like the last great national epic—and an aura that somehow kept him in leading parts two decades after his career as a major star was essentially over. That’s weirdly impressive in itself, and it testifies to how intriguing a figure he remains, even if audiences aren’t likely to pay to see him in a movie. Whenever I think of Costner, I remember what the studio executive Mike Medavoy once claimed to have told him right at the beginning of his career:

“You know,” I said to him over lunch, “I have this sense that I’m sitting here with someone who is going to become a great big star. You’re going to want to direct your own movies, produce your own movies, and you’re going to end up leaving your wife and going through the whole Hollywood movie-star cycle.”

Costner did, in fact, end up leaving his first wife. And if he also leaves film for television, even temporarily, it may reveal that “the whole Hollywood movie-star cycle” has a surprising final act that few of us could have anticipated.

Written by nevalalee

May 27, 2016 at 9:03 am

“Asthana glanced over at the television…”


"A woman was standing just over his shoulder..."

Note: This post is the eighteenth installment in my author’s commentary for Eternal Empire, covering Chapter 19. You can read the previous installments here.

A quarter of a century ago, I read a possibly apocryphal story about the actor Art Carney that I’ve never forgotten. Here’s the version told by the stage and television actress Patricia Wilson:

During a live performance of the original Honeymooners, before millions of viewers, Jackie [Gleason] was late making an entrance into a scene. He left Art Carney onstage alone, in the familiar seedy apartment set of Alice and Ralph Kramden. Unflappable, Carney improvised action for Ed Norton. He looked around, scratched himself, then went to the Kramden refrigerator and peered in. He pulled out an orange, shuffled to the table, and sat down and peeled it. Meanwhile frantic stage managers raced to find Jackie. Art Carney sat onstage peeling and eating an orange, and the audience convulsed with laughter.

According to some accounts, Carney stretched the bit of business out for a full two minutes before Gleason finally appeared. And while it certainly speaks to Carney’s ingenuity and resourcefulness, we should also take a moment to tip our hats to that humble orange, as well as to the prop master who thought to stick it in the fridge—unseen and unremarked—in the first place.

Theatrical props, as all actors and directors know, can be a source of unexpected ideas, just as the physical limitations or possibilities of the set itself can provide a canvas on which the action is conceived in real time. I’ve spoken elsewhere of the ability of vaudeville comedians to improvise routines on the spot using whatever was available on a standing set, and there’s a sense in which the richness of the physical environment in which a scene takes place is a battery from which the performances can draw energy. When a director makes sure that each actor’s pockets are full of the litter that a character might actually carry, it isn’t just a mark of obsessiveness or self-indulgence, or even a nod toward authenticity, but a matter of storing up potential tools. A prop by itself can’t make a scene work, but it can provide the seed around which a memorable moment or notion can grow, like a crystal. In more situations than you might expect, creativity lies less in the ability to invent from scratch than in the ability to make effective use of whatever happens to lie at hand. Invention is a precious resource, and most artists have a finite amount of it; it’s better, whenever possible, to utilize what the world provides. And much of the time, when you’re faced with a hard problem to solve, you’ll find that the answer is right there in the background.

"Asthana glanced over at the television..."

This is as true of writing fiction as of any of the performing arts. In the past, I’ve suggested that this is the true purpose of research or location work: it isn’t about accuracy, but about providing raw material for dreams, and any writer faced with the difficult task of inventing a scene would be wise to exploit what already exists. It’s infinitely easier to write a chase scene, for example, if you’re tailoring it to the geography of a particular street. As usual, it comes back to the problem of making choices: the more tangible or physical the constraints, the more likely they’ll generate something interesting when they collide with the fundamentally abstract process of plotting. Even if the scene I’m writing takes place somewhere wholly imaginary, I’ll treat it as if it were being shot on location: I’ll pick a real building or locale that has the qualities I need for the story, pore over blueprints and maps, and depart from the real plan only when I don’t have any alternative. In most cases, the cost of that departure, in terms of the confusion it creates, is far greater than the time and energy required to make the story fit within an existing structure. For much the same reason, I try to utilize the props and furniture you’d naturally find there. And that’s all the more true when a scene occurs in a verifiable place.

Sometimes, this kind of attention to detail can result in surprising resonances. There’s a small example that I like in Chapter 19 of Eternal Empire. Rogozin, my accused intelligence agent, is being held without charges at a detention center in Paddington Green. This is a real location, and its physical setup becomes very important: Rogozin is going to be killed, in an apparent suicide, under conditions of heavy security. To prepare these scenes, I collected reference photographs, studied published descriptions, and shaped the action as much as possible to unfold logically under the constraints the location imposed. And one fact caught my eye, purely as a matter of atmosphere: the cells at Paddington Green are equipped with televisions, usually set to play something innocuous, like a nature video. This had obvious potential as a counterpoint to the action, so I went to work looking for a real video that might play there. And after a bit of searching, I hit on a segment from the BBC series Life in the Undergrowth, narrated by David Attenborough, about the curious life cycle of the gall wasp. The phenomenon it described, in which an invading wasp burrows into the gall created by another, happened to coincide well—perhaps too well—with the story itself. As far as I’m concerned, it’s what makes Rogozin’s death scene work. And while I could have made up my own video to suit the situation, it seemed better, and easier, to poke around the stage first to see what I could find…

Written by nevalalee

May 7, 2015 at 9:11 am

The unbreakable television formula


Ellie Kemper in Unbreakable Kimmy Schmidt

Watching the sixth season premiere of Community last night on Yahoo—which is a statement that would have once seemed like a joke in itself—I was struck by the range of television comedy we have at our disposal these days. We’ve said goodbye to Parks and Recreation, we’re following Community into what is presumably its final stretch, and we’re about to greet Unbreakable Kimmy Schmidt as it starts what looks to be a powerhouse run on Netflix. These shows are superficially in the same genre: they’re single-camera sitcoms that freely grant themselves elaborate sight gags and excursions into surrealism, with a cutaway style that owes as much to The Simpsons as to Arrested Development. Yet they’re palpably different in tone. Parks and Rec was the ultimate refinement of the mockumentary style, with talking heads and reality show techniques used to flesh out a narrative of underlying sweetness; Community, as always, alternates between obsessively detailed fantasy and a comic strip version of emotions to which we can all relate; and Kimmy Schmidt takes place in what I can only call Tina Fey territory, with a barrage of throwaway jokes and non sequiturs designed to be referenced and quoted forever.

And the diversity of approach we see in these three comedies makes the dramatic genre seem impoverished. Most television dramas are still basically linear; they’re told using the same familiar grammar of establishing shots, medium shots, and closeups; and they’re paced in similar ways. If you were to break down an episode by shot length and type, or chart the transitions between scenes, an installment of Game of Thrones would look a lot on paper like one of Mad Men. There’s room for individual quirks of style, of course: the handheld cinematography favored by procedurals has a different feel from the clinical, detached camera movements of House of Cards. And every now and then, we get a scene—like the epic tracking shot during the raid in True Detective—that awakens us to the medium’s potential. But the fact that such moments are striking enough to inspire think pieces the next day only points to how rare they are. Dramas are just less inclined to take big risks of structure and tone, and when they do, they’re likely to be hybrids. Shows like Fargo or Breaking Bad are able to push the envelope precisely because they have a touch of black comedy in their blood, as if that were the secret ingredient that allowed for greater formal daring.

Jon Hamm on Mad Men

It isn’t hard to pin down the reason for this. A cutaway scene or extended homage naturally takes us out of the story for a second, and comedy, which is inherently more anarchic, has trained us to roll with it. We’re better at accepting artifice in comic settings, since we aren’t taking the story quite as seriously: whatever plot exists is tacitly understood to be a medium for the delivery of jokes. Which isn’t to say that we can’t care deeply about these characters; if anything, our feelings for them are strengthened because their stories take place in a stylized world that allows free play for the emotions. Yet this is also something that comedy had to teach us. It can be fun to watch a sitcom push the limits of plausibility to the breaking point, but if a drama deliberately undermines its own illusion of reality, we can feel cheated. Dramas that constantly draw attention to their own artifice, as Twin Peaks did, are more likely to become cult favorites than popular successes, since most of us just want to sit back and watch a story that presents itself using the narrative language we know. (Which, to be fair, is true of comedies as well: the three sitcoms I’ve mentioned above, taken together, have a fraction of the audience of something like The Big Bang Theory.)

In part, it’s a problem of definition. When a drama pushes against its constraints, we feel more comfortable referring to it as something else: Orange is the New Black, which tests its structure as adventurously as any series on the air today, has suffered at awards season from its resistance to easy categorization. But what’s really funny is that comedy escaped from its old formulas by appropriating the tools that dramas had been using for years. The three-camera sitcom—which has been responsible for countless masterpieces of its own—made radical shifts of tone and location hard to achieve, and once comedies liberated themselves from the obligation to unfold as if for a live audience, they could indulge in extended riffs and flights of imagination that were impossible before. It’s the kind of freedom that dramas, in theory, have always had, even if they utilize it only rarely. This isn’t to say that a uniformity of approach is a bad thing: the standard narrative grammar evolved for a reason, and if it gives us compelling characters with a maximum of transparency, that’s all for the better. Telling good stories is hard enough as it is, and formal experimentation for its own sake can be a trap in itself. Yet we’re still living in a world with countless ways of being funny, and only one way, within a narrow range of variations, of being serious. And that’s no laughing matter.

The crowded circle of television


The cast of Mad Men

Note: Every Friday, The A.V. Club, my favorite pop cultural site on the Internet, throws out a question to its staff members for discussion, and I’ve decided that I want to join in on the fun. This week’s question: “What’s your favorite TV show of the year so far?”

There are times when watching television can start to feel like a second job—a pleasurable one, to be sure, but one that demands a lot of work nevertheless. Over the last year, I’ve followed more shows than ever, including Mad Men, Game of Thrones, Orange is the New Black, Hannibal, Community, Parks and Recreation, House of Cards, The Vampire Diaries, and True Detective. For the most part, they’ve all had strong runs, and I’d have trouble picking a favorite. (If pressed, I’d probably go with Mad Men, if only for old times’ sake, with Hannibal as a very close second.) They’re all strikingly different in emphasis, tone, and setting, but they also have a lot in common. With one exception, which I’ll get to in a moment, these are dense shows with large casts and intricate storylines. Many seem devoted to pushing the limits of how much complexity can be accommodated within the constraints of the television format, which may be why the majority run for just ten to thirteen episodes: it’s hard to imagine that level of energy sustained over twenty or more installments.

And while I’m thrilled by the level of ambition visible here, it comes at a price. There’s a sort of arms race taking place between media of all kinds, as they fight to stand out in an increasingly crowded space, with so much competing for our attention. Books, even literary novels, are expected to be page-turners; movies offer up massive spectacle to the point where miraculous visual effects are taken for granted; and television has taken to packing every minute of narrative time to the bursting point. (This isn’t true of all shows, of course—a lot of television series are still designed to play comfortably in the background of a hotel room—but it’s generally the case with prestige shows that end up on critics’ lists and are honored at award ceremonies.) This trend toward complexity arises from a confluence of factors I’ve tried to unpack here before: just as The Simpsons was the first freeze-frame sitcom, modern television takes advantage of our streaming and binge-watching habits to deliver storytelling that rewards, and even demands, close attention.

Matthew McConaughey on True Detective

For the most part, this is a positive development. Yet there’s also a case to be made that television, which is so good at managing extended narratives and enormous casts of characters, is also uniquely suited for the opposite: silence, emptiness, and contemplation. In a film, time is a precious commodity, and when you’re introducing characters while also setting in motion the machinery of a complicated story, there often isn’t time to pause. Television, in theory, should be able to stretch out a little, interspersing relentless forward momentum with moments of quiet, which are often necessary for viewers to consolidate and process what they’ve seen. Twin Peaks was as crowded and plotty as any show on the air today, but it also found time for stretches of weird, inexplicable inaction, and it’s those scenes that I remember best. Even in the series finale, with so many threads to address and only forty minutes to cover them all, it devotes endless minutes to Cooper’s hallucinatory—and almost entirely static—ordeal in the Black Lodge, and even to a gag involving a decrepit bank manager rising from his desk and crossing the floor of his branch very, very slowly.

So while there’s a lot of fun to be had with shows that constantly accelerate the narrative pace, it can also be a limitation, especially when it’s handled less than fluently. (For every show, like Orange is the New Black, that manages to cut expertly between subplots, there’s another, like Game of Thrones, that can’t quite seem to handle its enormous scope, and even The Vampire Diaries is showing signs of strain.) Both Hannibal and Mad Men know when to linger on an image or revelation—roughly half of Hannibal is devoted to contemplating its other half—and True Detective, in particular, seemed to consist almost entirely of such pauses. We remember such high points as the final chase with the killer or the raid in “Who Goes There,” but what made the show special were the scenes in which nothing much seemed to be happening. It was aided in this by its limited cast and its tight focus on its two leads, so it’s possible that what shows really need in order to slow things down are a couple of movie stars to hold the eye. But it’s a step in the right direction. If time is a flat circle, as Rust says, so is television, and it’s good to see it coming back around.

The dreamlife of television


Aaron Paul on Breaking Bad

I’ve been dreaming a lot about Breaking Bad. On Wednesday, my wife and I returned from a trip to Barcelona, where we’d spent a beautiful week: my baby daughter was perfectly happy to be toted around various restaurants, cultural sites, and the Sagrada Familia, and it came as a welcome break from my own work. Unfortunately, it also meant that we were going to miss the Breaking Bad finale, which aired the Sunday before we came home. For a while, I seriously considered bringing my laptop and downloading it while we were out of the country, both because I was enormously anxious to see how the show turned out and because I dreaded the spoilers I’d have to avoid for the three days before we returned. In the end, I gritted my teeth and decided to wait until we got home. This meant avoiding most of my favorite news and pop cultural sites—I was afraid to even glance past the top few headlines on the New York Times—and staying off Twitter entirely, which I suppose wasn’t such a great loss. And even as we toured the Picasso Museum and walked for miles along the marina with a baby in tow, my thoughts were rarely very far from Walter White.

This must have done quite a number on my psyche, because I started dreaming about the show with alarming frequency. My dreams included two separate, highly elaborated versions of the finale, one of which was a straightforward bloodbath with a quiet epilogue, the other a weird metafictional conclusion in which the events of the series were played out on a movie screen with the cast and crew watching them unfold—which led me to exclaim, while still dreaming: “Of course that’s how they would end it!” Now that I’ve finally seen the real finale, the details of these dreams are fading, and only a few scraps of imagery remain. Yet the memories are still emotionally charged, and they undoubtedly affected how I approached the last episode itself, which I was afraid would never live up to the versions I’d dreamed for myself. I suspect that a lot of fans, even those who didn’t actually hallucinate alternate endings, probably felt the same way. (For the record, I liked the finale a lot, even if it ranks a notch below the best episodes of the show, which was always best at creating chaos, not resolving it. And I think about its closing moments almost every day.)

Jon Hamm on Mad Men

And it made me reflect on the ways in which television, especially in its modern, highly serialized form, is so conducive to dreaming. Dreams are a way of assembling and processing fragments of the day’s experience, or recollections from the distant past, and a great television series is nothing less than a vast storehouse of memories from another life. When a show is as intensely serialized as Breaking Bad was, it can be hard to remember individual episodes, aside from the occasional formal standout like “Fly”: I can’t always recall what scenes took place when, or in what order, and an especially charged sequence of installments—like the last half of this final season—tends to blend together into a blur of vivid impressions. What I remember are facial expressions, images, bits of dialogue: “Stay out of my territory.” “Run.” “Tread lightly.” And the result is a mine of moments that end up naturally incorporated into my own subconscious. A good movie or novel exists as a piece, and I rarely find myself dreaming alternate lives for, say, Rick and Ilsa or Charles Foster Kane. With Walter White, it’s easy to imagine different paths that the action could have taken, and those byways play themselves out in the deepest parts of my brain.

Which may explain why television is so naturally drawn to dream sequences and fantasies, which are only one step removed from the supposedly factual events of the shows themselves. Don Draper’s dreams have become a huge part of Mad Men, almost to the point of parody, and this has always been an art form that attracts surreal temperaments, from David Lynch to Bryan Fuller, even if they tend to be destroyed by it. As I’ve often said before, it’s the strangest medium I know, and at its best, it’s the outcome of many unresolved tensions. Television can feel maddeningly real, a hidden part of your own life, which is why it can be so hard to say goodbye to a great show. It’s also impossible to get a lasting grip on it or to hold it all in your mind at once, especially if it runs for more than a few seasons, which hints at an even deeper meaning. I’ve always been struck by how poorly we integrate the different chapters in our own past: there are entire decades of my life that I don’t think about for months on end. When they return, it’s usually in the hours just before waking. And by teaching us to process narratives that can last for years, television may subtly train us to better understand the shapes of our own lives, even if it’s only in dreams.

Written by nevalalee

October 7, 2013 at 8:27 am


Critical television studies


The cast of Community

Television is such a pervasive medium that it’s easy to forget how deeply strange it is. Most works of art are designed to be consumed all at once, or at least in a fixed period of time—it’s physically possible, if not entirely advisable, to read War and Peace in one sitting. Television, by contrast, is defined by the fact of its indefinite duration. House of Cards aside, it seems likely that most of us will continue to watch shows week by week, year after year, until they become a part of our lives. This kind of extended narrative can be delightful, but it’s also subject to risk. A beloved show can change for reasons beyond anyone’s control. Sooner or later, we find out who killed Laura Palmer. An actor’s contract expires, so Mulder is abducted by aliens, and even if he comes back, by that point, we’ve lost interest. For every show like Breaking Bad that has its dark evolution mapped out for seasons to come, there’s a series like Glee, which disappoints, or Parks and Recreation, which gradually reveals a richness and warmth that you’d never guess from the first season alone. And sometimes a show breaks your heart.

It’s clear at this point that the firing of Dan Harmon from Community was the most dramatic creative upheaval for any show in recent memory. This isn’t the first time that a show’s guiding force has departed under less than amicable terms—just ask Frank Darabont—but it’s unusual in a series so intimately linked to one man’s particular vision. Before I discovered Community, I’d never heard of Dan Harmon, but now I care deeply about what this guy feels and thinks. (Luckily, he’s never been shy about sharing this with the rest of us.) And although it’s obvious from the opening minutes of last night’s season premiere that the show’s new creative team takes its legacy seriously, there’s no escaping the sense that they’re a cover band doing a great job with somebody else’s music. Showrunners David Guarascio and Moses Port do their best to convince us out of the gate that they know how much this show means to us, and that’s part of the problem. Community was never a show about reassuring us that things won’t change, but about unsettling us with its endless transformations, even as it delighted us with its new tricks.

The Community episode "Remedial Chaos Theory"

Don’t get me wrong: I laughed a lot at last night’s episode, and I was overjoyed to see these characters again. By faulting the new staff for repeating the same beats I loved before, when I might have been outraged by any major alterations, I’m setting it up so they just can’t win. But the show seems familiar now in a way that would have seemed unthinkable for most of its first three seasons. Part of the pleasure of watching the series came from the fact that you never knew what the hell might happen next, and it wasn’t clear if Harmon knew either. Not all of his experiments worked: there were even some clunkers, like “Messianic Myths and Ancient Peoples,” in the glorious second season, which is one of my favorite runs of any modern sitcom. But as strange as this might have once seemed, it feels like we finally know what Community is about. It’s a show that takes big formal risks, finds the emotional core in a flurry of pop culture references, and has no idea how to use Chevy Chase. And although I’m grateful that this version of the show has survived, I don’t think I’m going to tune in every week wondering where in the world it will take me.

And the strange thing is that Community might have gone down this path with or without Harmon. When a show needs only two seasons to establish that anything is possible, even the most outlandish developments can seem like variations on a theme. Even at the end of the third season, there was the sense that the series was repeating itself. I loved “Digital Estate Planning,” for instance, but it felt like the latest attempt to do one of the formally ambitious episodes that crop up at regular intervals each season, rather than an idea that forced itself onto television because the writers couldn’t help themselves. In my review of The Master, I noted that Paul Thomas Anderson has perfected his brand of hermetic filmmaking to the point where it would be more surprising if he made a movie that wasn’t ambiguous, frustrating, and deeply weird. Community has ended up in much the same place, so maybe it’s best that Harmon got out when he did. It’s doubtful that the series will ever be able to fake us out with a “Critical Film Studies” again, because it’s already schooled us, like all great shows, in how it needs to be watched. And although its characters haven’t graduated from Greendale yet, its viewers, to their everlasting benefit, already have.

Written by nevalalee

February 8, 2013 at 9:50 am

Wouldn’t it be easier to write for television?


Last week, I had dinner with a college friend I hadn’t seen in years, who is thinking about giving up a PhD in psychology to write for television in Los Angeles. We spent a long time commiserating about the challenges of the medium, at least from a writer’s point of view, hitting many of the points that I’ve discussed here before. With the prospects of a fledgling television show so uncertain, I said, especially when the show might be canceled after four episodes, or fourteen, or forty, it’s all but impossible for the creator to tell effective stories over time. Running a television show is one of the hardest jobs in the world, with countless obstacles along the way, even for critical darlings. Knowing all this, I asked my friend, why did he want to do this in the first place?

My friend’s response was an enlightening one. The trouble with writing novels or short stories, he said, is the fact that the author is expected to spend a great deal of time on description, style, and other tedious elements that a television writer can cheerfully ignore. Teleplays, like feature scripts, are nothing but structure and dialogue (or maybe just structure, as William Goldman says), and there’s something liberating in how they strip storytelling down to its core. The writer takes care of the bones of the narrative, which is where his primary interest presumably lies, then outsources the work of casting, staging, and art direction to qualified professionals who are happy to do the work. And while I didn’t agree with everything my friend said, I could certainly see his point.

Yet that’s only half of the story. It’s true that a screenwriter gets to outsource much of the conventional apparatus of fiction to other departments, but only at the price of creative control. You may have an idea about how a character should look, or what kind of home he should have, or how a moment of dialogue, a scene, or an overall story should unfold, but as a writer, you don’t have much control over the matter. Scripts are easier to write than novels for a reason: they’re only one piece of a larger enterprise, which is reflected in the writer’s relative powerlessness. The closest equivalent to a novelist in television isn’t the writer, but the executive producer. Gene Roddenberry, in The Making of Star Trek, neatly sums up the similarity between the two roles:

Producing in television is like storytelling. The choice of the actor, picking the right costumes, getting the right flavor, the right pace—these are as much a part of storytelling as writing out that same description of a character in a novel.

And the crucial point about producing a television series, like directing a feature film, is that it’s insanely hard. As Thomas Lennon and Robert Ben Garant point out in their surprisingly useful Writing Movies for Fun and Profit, as far as directing is concerned, “If you’re doing it right, it’s not that fun.” As a feature director or television producer, you’re responsible for a thousand small but critical decisions that need to be made very quickly, and while you’re working on the story, you’re also casting parts, scouting for locations, dealing with the studio and the heads of various departments, and surviving on only a few hours of sleep a night, for a year or more of your life. In short, the amount of effort required to keep control of the project is greater, not less, than what is required to write a novel—except with more money on the line, in public, and with greater risk that control will eventually be taken away from you.

So is it easier to write for television? Yes, if that’s all you want to do. But if you want control of your work, if you want your stories to be experienced in a form close to what you originally envisioned, it isn’t easier. It’s much harder. Which is why, to my mind, John Irving still puts it best: “When I feel like being a director, I write a novel.”

Lessons from great (and not-so-great) television


It can be hard for a writer to admit being influenced by television. In On Becoming a Novelist, John Gardner struck a disdainful note that hasn’t changed much since:

Much of the dialogue one encounters in student fiction, as well as plot, gesture, even setting, comes not from life but from life filtered through TV. Many student writers seem unable to tell their own most important stories—the death of a father, the first disillusionment in love—except in the molds and formulas of TV. One can spot the difference at once because TV is of necessity—given its commercial pressures—false to life.

In the nearly thirty years since Gardner wrote these words, the television landscape has changed dramatically, but it’s worth pointing out that much of what he says here is still true. The basic elements of fiction—emotion, character, theme, even plot—need to come from close observation of life, or even the most skillful novel will eventually ring false. That said, the structure of fiction, and the author’s understanding of the possibilities of the form, doesn’t need to come from life alone, and probably shouldn’t. To develop a sense of what fiction can do, a writer needs to pay close attention to all types of art, even the nonliterary kind. And over the past few decades, television has expanded the possibilities of narrative in ways that no writer can afford to ignore.

If you think I’m exaggerating, consider a show like The Wire, which tells complex stories involving a vast range of characters, locations, and social issues in ways that aren’t possible in any other medium. The Simpsons, at least in its classic seasons, acquired a richness and velocity that continued to build for years, until it had populated a world that rivaled the real one for density and immediacy. (Like the rest of the Internet, I respond to most situations with a Simpsons quote.) And Mad Men continues to furnish a fictional world of astonishing detail and charm. World-building, it seems, is where television shines: in creating a long-form narrative that begins with a core group of characters and explores them for years, until they can come to seem as real as one’s own family and friends.

Which is why Glee can seem like such a disappointment. Perhaps because the musical is already the archest of genres, the show has always regarded its own medium with an air of detachment, as if the conventions of the after-school special or the high school sitcom were merely a sandbox in which the producers could play. On some level, this is fine: The Simpsons, among many other great shows, has fruitfully treated television as a place for narrative experimentation. But by turning its back on character continuity and refusing to follow any plot for more than a few episodes, Glee is abandoning many of the pleasures that narrative television can provide. Watching the show run out of ideas for its lead characters in less than two seasons simply serves as a reminder of how challenging this kind of storytelling can be.

Mad Men, by contrast, not only gives us characters who take on lives of their own, but consistently lives up to those characters in its acting, writing, and direction. (This is in stark contrast to Glee, where I sense that a lot of the real action is taking place in fanfic.) And its example has changed the way I write. My first novel tells a complicated story with a fairly controlled cast of characters, but Mad Men—in particular, the spellbinding convergence of plots in “Shut the Door, Have a Seat”—reminded me of the possibilities of expansive casts, which allow characters to pair off and develop in unexpected ways. (The evolution of Christina Hendricks’s Joan from eye candy to second lead is only the most obvious example.) As a result, I’ve tried to cast a wider net with my second novel, using more characters and settings in the hopes that something unusual will arise. Television, strangely, has made me more ambitious. I’d like to think that even John Gardner would approve.

Written by nevalalee

March 17, 2011 at 8:41 am

Flash Gordon and the time machine


I remember the day my father came home from the neighbors’ in 1949 and said they had a radio with talking pictures. It was his way of explaining television to us in terms of what he knew: radio. Several years later I would sit on the rug with half a dozen neighborhood kids at the house down the block, watching Flash Gordon and advertisements for Buster Brown shoes.

Such early space-travel films may have marked my first encounter with the idea of time machines, those phone booths with the capacity to transpose one into encounters with Napoleon or to propel one ahead into dilemmas on distant planets. I was six or seven years old and already leading a double life as an imagined horse disguised as a young girl…

Flash Gordon never became a horse by stepping into a time machine, but he could choose any one of countless masquerades at crucial moments in history or in the futures he hoped to outsmart. This whole idea of past or future being accessible at the push of a button seemed so natural to me as a child that I have been waiting for science to catch up to the idea ever since.

Tess Gallagher, “The Poem as Time Machine”

Written by nevalalee

December 30, 2018 at 7:30 am

The fairy tale theater


It must have all started with The Princess Switch, although that’s so long ago now that I can barely remember. Netflix was pushing me hard to watch an original movie with Vanessa Hudgens in a dual role as a European royal and a baker from Chicago who trade places and end up romantically entangled with each other’s love interests at Christmas, and I finally gave in. In the weeks since, my wife and I have watched Pride, Prejudice, and Mistletoe; The Nine Lives of Christmas; Crown for Christmas; The Holiday Calendar; Christmas at the Palace; and possibly one or two others that I’ve forgotten. A few were on Netflix, but most were on Hallmark, which has staked out this space so aggressively that it can seem frighteningly singleminded in its pursuit of Yuletide cheer. By now, it airs close to forty original holiday romances between Thanksgiving and New Year’s Eve, and like its paperback predecessors, it knows better than to tinker with a proven formula. As two of its writers anonymously reveal in an interview with Entertainment Weekly:

We have an idea and it maybe takes us a week or so just to break it down into a treatment, a synopsis of the story; it’s like a beat sheet where you pretty much write what’s going to happen in every scene you just don’t write the scene. If we have a solid beat sheet done and it’s approved, then it’s only going to take us about a week and a half to finish a draft. Basically, an act or two a day and there’s nine. They’re kind of simple because there are so many rules so you know what you can and can’t do, and if you have everything worked out it comes together.

And the rules are revealing in themselves. As one writer notes: “The first rule is snow. We really wanted to do one where the basic conflict was a fear that there will not be snow on Christmas. We were told you cannot do that, there must be snow. They can’t be waiting for the snow, there has to be snow. You cannot threaten them with no snow.” And the conventions that make these movies so watchable are built directly into the structure:

There cannot be a single scene that does not acknowledge the theme. Well, maybe a scene, but you can’t have a single act that doesn’t acknowledge it and there are nine of them, so there’s lots of opportunities for Christmas. They have a really rigid nine-act structure that makes writing them a lot of fun because it’s almost like an exercise. You know where you have to get to: People have to be kissing for the first time, probably in some sort of a Christmas setting, probably with snow falling from the sky, probably with a small crowd watching. You have to start with two people who, for whatever reason, don’t like each other and you’re just maneuvering through those nine acts to get them to that kiss in the snow.

The result, as I’ve learned firsthand, is a movie that seems familiar before you’ve even seen it. You can watch with one eye as you’re wrapping presents, or tune in halfway through with no fear of becoming confused. It allows its viewers to give it exactly as much attention as they’re willing to spare, and at a time when the mere act of watching prestige television can be physically exhausting, there’s something to be said for an option that asks nothing of us at all.

After you’ve seen two or three of these movies, of course, the details start to blur, particularly when it comes to the male leads. The writers speak hopefully of making the characters “as unique and interesting as they can be within the confines of Hallmark land,” but while the women are allowed an occasional flash of individuality, the men are unfailingly generic. This is particularly true of the subgenre in which the love interest is a king or prince, who doesn’t get any more personality than his counterpart in fairy tales. Yet this may not be a flaw. In On Directing Film, which is the best work on storytelling that I’ve ever read, David Mamet provides a relevant word of advice:

In The Uses of Enchantment, Bruno Bettelheim says of fairy tales the same thing Alfred Hitchcock said about thrillers: that the less the hero of the play is inflected, identified, and characterized, the more we will endow him with our own internal meaning—the more we will identify with him—which is to say the more we will be assured that we are that hero. “The hero rode up on a white horse.” You don’t say “a short hero rode up on a white horse,” because if the listener isn’t short he isn’t going to identify with that hero. You don’t say “a tall hero rode up on a white horse,” because if the listener isn’t tall, he won’t identify with the hero. You say “a hero,” and the audience subconsciously realize they are that hero.

Yet Mamet also overlooks the fact that the women in fairy tales, like Snow White, are often described with great specificity—it’s the prince who is glimpsed only faintly. Hallmark follows much the same rule, which implies that it’s less important for the audience to identify with the protagonist than to fantasize without constraint about the object of desire.

This also leads to some unfortunate decisions about diversity, which is more or less what you might expect. As one writer says candidly to Entertainment Weekly:

On our end, we just write everybody as white, we don’t even bother to fight that war. If they want to put someone of color in there, that would be wonderful, but we don’t have control of that…I found out Meghan Markle had been in some and she’s biracial, but it almost seems like they’ve tightened those restrictions more recently. Everything’s just such a white, white, white, white world. It’s a white Christmas after all—with the snow and the people.

With more than thirty original movies coming out every year, you might think that Hallmark could make a few exceptions, especially since the demand clearly exists, but this isn’t about marketing at all. It’s a reflection of the fact that nonwhiteness is still seen as a token of difference, or a deviation from an assumed norm, and it’s the logical extension of the rules that I’ve listed above. White characters have the privilege—which is invisible but very real—of seeming culturally uninflected, which is the baseline that allows the formula to unfold. This seems very close to John W. Campbell’s implicit notion that all characters in science fiction should be white males by default, and while other genres have gradually moved past this point, it’s still very much the case with Hallmark. (There can be nonwhite characters, but they have to follow the rules: “Normally there’ll be a black character that’s like a friend or a boss, usually someone benevolent because you don’t want your one person of color to not be positive.”) With diversity, as with everything else, Hallmark is very mindful of how much variation its audience will accept. It thinks that it knows the formula. And it might not even be wrong.

The private eyes of culture


Yesterday, in my post on the late magician Ricky Jay, I neglected to mention one of the most fascinating aspects of his long career. Toward the end of his classic profile in The New Yorker, Mark Singer drops an offhand reference to an intriguing project:

Most afternoons, Jay spends a couple of hours in his office, on Sunset Boulevard, in a building owned by Andrew Solt, a television producer…He decided now to drop by the office, where he had to attend to some business involving a new venture that he has begun with Michael Weber—a consulting company called Deceptive Practices, Ltd., and offering “Arcane Knowledge on a Need to Know Basis.” They are currently working on the new Mike Nichols film, Wolf, starring Jack Nicholson.

When the article was written, Deceptive Practices was just getting off the ground, but it went on to compile an enviable list of projects, including The Illusionist, The Prestige, and most famously Forrest Gump, for which Jay and Weber designed the wheelchair that hid Gary Sinise’s legs. It isn’t clear how lucrative the business ever was, but it made for great publicity, and best of all, it allowed Jay to monetize the service that he had offered for free to the likes of David Mamet—a source of “arcane knowledge,” much of it presumably gleaned from his vast reading in the field, that wasn’t available in any other way.

As I reflected on this, I was reminded of another provider of arcane knowledge who figures prominently in one of my favorite novels. In Umberto Eco’s Foucault’s Pendulum, the narrator, Casaubon, comes home to Milan after a long sojourn abroad feeling like a man without a country. He recalls:

I decided to invent a job for myself. I knew a lot of things, unconnected things, but I wanted to be able to connect them after a few hours at a library. I once thought it was necessary to have a theory, and that my problem was that I didn’t. But nowadays all you needed was information; everybody was greedy for information, especially if it was out of date. I dropped in at the university, to see if I could fit in somewhere. The lecture halls were quiet; the students glided along the corridors like ghosts, lending one another badly made bibliographies. I knew how to make a good bibliography.

In practice, Casaubon finds that he knows a lot of things—like the identities of such obscure figures as Lord Chandos and Anselm of Canterbury—that can’t be found easily in reference books, prompting a student to marvel at him: “In your day you knew everything.” This leads Casaubon to a sudden inspiration: “I had a trade after all. I would set up a cultural investigation agency, be a kind of private eye of learning. Instead of sticking my nose into all-night dives and cathouses, I would skulk around bookshops, libraries, corridors of university departments…I was lucky enough to find two rooms and a little kitchen in an old building in the suburbs…In a pair of bookcases I arranged the atlases, encyclopedias, catalogs I acquired bit by bit.”

This feels a little like the fond daydream of a scholar like Umberto Eco himself, who spent decades acquiring arcane knowledge—not all of it required by his academic work—before becoming a famous novelist. And I suspect that many graduate students, professors, and miscellaneous bibliophiles cherish the hope that the scraps of disconnected information that they’ve accumulated over time will turn out to be useful one day, in the face of all evidence to the contrary. (Casaubon is evidently named after the character from Middlemarch who labors for years over a book titled The Key to All Mythologies, which is already completely out of date.) To illustrate what he does for a living, Casaubon offers the example of a translator who calls him one day out of the blue, desperate to know the meaning of the word “Mutakallimūn.” Casaubon asks him for two days, and then he gets to work:

I go to the library, flip through some card catalogs, give the man in the reference office a cigarette, and pick up a clue. That evening I invite an instructor in Islamic studies out for a drink. I buy him a couple of beers and he drops his guard, gives me the lowdown for nothing. I call the client back. “All right, the Mutakallimūn were radical Moslem theologians at the time of Avicenna. They said the world was a sort of dust cloud of accidents that formed particular shapes only by an instantaneous and temporary act of the divine will. If God was distracted for even a moment, the universe would fall to pieces, into a meaningless anarchy of atoms. That enough for you? The job took me three days. Pay what you think is fair.”

Eco could have picked nearly anything to serve as a case study, of course, but the story that he chooses serves as a metaphor for one of the central themes of the book. If the world of information is a “meaningless anarchy of atoms,” it takes the private eyes of culture to give it shape and meaning.

All the while, however, Eco is busy undermining the pretensions of his protagonists, who pay a terrible price for treating information so lightly. And it might not seem that such brokers of arcane knowledge are even necessary these days, now that an online search generates pages of results for the Mutakallimūn. Yet there’s still a place for this kind of scholarship, which might end up being the last form of brainwork not to be made obsolete by technology. As Ricky Jay knew, by specializing deeply in one particular field, you might be able to make yourself indispensable, especially in areas where the knowledge hasn’t been written down or digitized. (In the course of researching Astounding, I was repeatedly struck by how much of the story wasn’t available in any readily accessible form. It was buried in letters, manuscripts, and other primary sources, and while this happens to be the one area where I’ve actually done some of the legwork, I have a feeling that it’s equally true of every other topic imaginable.) As both Jay and Casaubon realized, it’s a role that rests on arcane knowledge of the kind that can only be acquired by reading the books that nobody else has bothered to read in a long time, even if it doesn’t pay off right away. Casaubon tells us: “In the beginning, I had to turn a deaf ear to my conscience and write theses for desperate students. It wasn’t hard; I just went and copied some from the previous decade. But then my friends in publishing began sending me manuscripts and foreign books to read—naturally, the least appealing and for little money.” But he perseveres, and the rule that he sets for himself might still be enough, if you’re lucky, to fuel an entire career:

Still, I was accumulating experience and information, and I never threw anything away…I had a strict rule, which I think secret services follow, too: No piece of information is superior to any other. Power lies in having them all on file and then finding the connections.

Written by nevalalee

November 27, 2018 at 8:41 am

Amplifying the dream


Note: I’m taking a few days off for Thanksgiving. This post originally appeared, in a slightly different form, on August 23, 2017.

In the book Nobody Turn Me Around, Charles Euchner shares a story about Bayard Rustin, a neglected but pivotal figure in the civil rights movement who played a crucial role in the March on Washington in 1963:

Bayard Rustin had insisted on renting the best sound system money could buy. To ensure order at the march, Rustin insisted, people needed to hear the program clearly. He told engineers what he wanted. “Very simple,” he said, pointing at a map. “The Lincoln Memorial is here, the Washington Monument is there. I want one square mile where anyone can hear.” Most big events rented systems for $1,000 or $2,000, but Rustin wanted to spend ten times that. Other members of the march committee were skeptical about the need for a deluxe system. “We cannot maintain order where people cannot hear,” Rustin said. If the Mall was jammed with people baking in the sun, waiting in long lines for portable toilets, anything could happen. Rustin’s job was to control the crowd. “In my view it was a classic resolution of the problem of how can you keep a crowd from becoming something else,” he said. “Transform it into an audience.”

Ultimately, Rustin was able to convince the United Auto Workers and International Ladies’ Garment Workers’ Unions to raise twenty thousand dollars for the sound system. (When he was informed that it ought to be possible to do it for less, he replied: “Not for what I want.”) The company American Amplifier and Television landed the contract, and after the system was sabotaged by persons unknown the night before the march, Walter Fauntroy, who was in charge of operations on the ground, called Attorney General Robert Kennedy with a warning: “We have a serious problem. We have a couple hundred thousand people coming. Do you want a fight here tomorrow after all we’ve done?”

The system was fixed just in time, and its importance on that day is hard to overstate. As Zeynep Tufekci writes in her recent book Twitter and Tear Gas: “Rustin knew that without a focused way to communicate with the massive crowd and to keep things orderly, much could go wrong…The sound system worked without a hitch during the day of the march, playing just the role Rustin had imagined: all the participants could hear exactly what was going on, hear instructions needed to keep things orderly, and feel connected to the whole march.” But its impact on our collective memory of the event may have been even more profound. In an article last year in The New Yorker, which is where I first encountered the story, Nathan Heller notes in a discussion of Tufekci’s work:

Before the march, Martin Luther King, Jr., had delivered variations on his “I Have a Dream” speech twice in public. He had given a longer version to a group of two thousand people in North Carolina. And he had presented a second variation, earlier in the summer, before a vast crowd of a hundred thousand at a march in Detroit. The reason we remember only the Washington, D.C., version, Tufekci argues, has to do with the strategic vision and attentive detail work of people like Rustin. Framed by the Lincoln Memorial, amplified by a fancy sound system, delivered before a thousand-person press bay with good camera sight lines, King’s performance came across as something more than what it had been in Detroit—it was the announcement of a shift in national mood, the fulcrum of a movement’s story line and power. It became, in other words, the rarest of protest performances: the kind through which American history can change.

Heller concludes that successful protest movements hinge on the existence of organized, flexible, practical structures with access to elites. After noting that the sound system was repaired, on Kennedy’s orders, by the Army Corps of Engineers, he observes: “You can’t get much cozier with the Man than that.”

There’s another side to the story, however, which neither Tufekci nor Heller mentions. In his memoir Behind the Dream, the activist Clarence B. Jones recalls:

The Justice Department and the police had worked hand in hand with the March Committee to design a public address system powerful enough to get the speakers’ voices across the Mall; what march coordinators wouldn’t learn until after the event had ended was that the government had built in a bypass to the system so that they could instantly take over control if they deemed it necessary…Ted [Brown] and Bayard [Rustin] told us that right after the march ended those officers approached them, eager to relieve their consciences and reveal the truth about the sound system. There was a kill switch and an administration official’s thumb had been on it the entire time.

The journalist Gary Younge—whose primary source seems to be Jones—expands on this claim in his book The Speech: “Fearing incitement from the podium, the Justice Department secretly inserted a cutoff switch into the sound system so they could turn off the speakers if an insurgent group hijacked the microphone. In such an eventuality, the plan was to play a recording of Mahalia Jackson singing ‘He’s Got the Whole World in His Hands’ in order to calm down the crowd.” In Pillar of Fire, Taylor Branch identifies the official in question as Jerry Bruno, President Kennedy’s “advance man,” who “positioned himself to cut the power to the public address system if rally speeches proved incendiary.” Regardless of the details, the existence of this cutoff switch speaks to the extent to which Rustin’s sound system was central to the question of who controlled the march and its message. And the people who sabotaged it understood this intuitively. (I should also mention the curious rumor that was shared by Dave Chappelle in a comedy special on Netflix: “I heard when Martin Luther King stood on the steps of the Lincoln Memorial and said he had a dream, he was speaking into a PA system that Bill Cosby paid for.” It’s demonstrably untrue, but it also speaks to the place of the sound system in the stories that we tell about the march.)

But what strikes me the most is the sheer practicality of the ends that Rustin, Fauntroy, and the others on the ground were trying to achieve, as conveyed in their own words: “We cannot maintain order where people cannot hear.” “How can you keep a crowd from becoming something else?” “Do you want a fight here tomorrow after all we’ve done?” They weren’t worried about history, but about making it safely to the end of the day. Rustin had been thinking about this march for two decades, and he spent years actively planning for it, conscious that it presented massive organizational challenges that could only be addressed by careful preparation in advance. He had specifically envisioned that it would conclude at the Lincoln Memorial, with a crowd filling the National Mall, a huge space that imposed enormous logistical problems of its own. The primary purpose of the sound system was to allow a quarter of a million people to assemble and disperse in a peaceful fashion, and its properties were chosen with that end in mind. (As Euchner notes: “To get one square mile of clear sound, you need to spend upwards of twenty thousand dollars.”) A system of unusual power, expense, and complexity was the minimum required to ensure the orderly conclusion of an event on that scale. When the audacity to envision the National Mall as a backdrop was combined with the attention to detail to make it work, the result was an electrically charged platform that would amplify any message, figuratively and literally, which made it both powerful and potentially dangerous. Everyone understood this. The saboteurs did. So did the Justice Department. The march’s organizers were keenly aware of it, which was why potentially controversial speakers—including James Baldwin—were excluded from the program. In the end, it became a stage for King, and at least one lesson is clear. When you aim high, and then devote everything you can to the practical side, the result might be more than you could have dreamed.

The end of an era

with 4 comments

On July 11, 1971, the science fiction editor John W. Campbell passed away quietly at his home in New Jersey. When he died, he was alone in his living room, watching Mexican wrestling on the local Spanish channel, which was his favorite television show. (I should also note in passing that it was a genre with deep affinities to superhero culture and comic books.) Word of his death quickly spread through fandom. Isaac Asimov was heartbroken at the news, writing later of the man whom he had always seen as his intellectual father: “I had never once thought…that death and he had anything in common, could ever intersect. He was the fixed pole star about which all science fiction revolved, unchangeable, eternal.” For the last decade, Analog had been on the decline, and Campbell was no longer the inescapable figure he had been in the thirties and forties, but it was impossible to deny his importance. In The Engines of the Night, Barry N. Malzberg spends several pages chronicling the late editor’s failings, mistakes, and shortcomings, but he concludes unforgettably:

And yet when I heard of Campbell’s sudden death…and informed Larry Janifer, I trembled at Janifer’s response and knew that it was so: “The field has lost its conscience, its center, the man for whom we were all writing. Now there’s no one to get mad at us anymore.”

Tributes appeared in such magazines as Locus, and Campbell’s obituary ran in the New York Times, but the loss was felt most keenly within the close community of science fiction readers and writers—perhaps because they sensed that it marked an end to the era in which the genre could still be regarded as the property of a small circle of fans.

I thought of this earlier this week, when the death of Stan Lee inspired what seemed like a national day of mourning. For much of the afternoon, he all but took over the front page of Reddit, which is an achievement that no other nonagenarian could conceivably have managed. And it’s easy to draw a contrast between Lee and Campbell, both in their cultural impact and in the way in which they were perceived by the public. Here’s how Lee is described in the book Men of Tomorrow:

His great talent, in both writing and life, was to win people’s affection. He was raised to be lovable by a mother who worshipped him. “I used to come home from school,” said Stan, “and she’d grab me and fuss over me and say, ‘You’re home already? I was sure today was the day a movie scout would discover you and take you away from me!’” She told Stan that he was the most handsome, most talented, most remarkable boy who’d ever lived. “And I believed her!” Stan said. “I didn’t know any better!” Stan attacked the world with a crooked grin and a line of killer patter. No one else in comics ever wanted so badly to be liked or became so good at it. He was known as a soft touch on advances, deadlines, and extra assignments. Even people who didn’t take him seriously as an editor or writer had to admit that Stan truly was a nice guy.

This couldn’t be less like Campbell, who also had a famous story about coming home from school to see his mother—only to be confronted by her identical twin, his aunt, who hated him. He claimed that this memory inspired the novella that became The Thing. And while I’m not exactly a Freudian biographer, it isn’t hard to draw a few simple conclusions about how these two boys might have grown up to see the world.

Yet they also had a surprising amount in common, to the point that I often used Lee as a point of comparison when I was pitching Astounding. Lee was over a decade younger than Campbell, which made him nearly the same age as Isaac Asimov and Frederik Pohl—which testifies both to his longevity and to how relatively young Campbell and Asimov were when they died. Lee’s first job in publishing was as an assistant in the comics division of the pulp publisher Martin Goodman, presumably just a few steps away from Uncanny Tales, which suggests that he could just as easily have wound up in one as in the other. He became the interim comics editor at the age of nineteen, or the same age as Pohl when he landed his first editing job. (I’m not aware of Lee crossing paths with any of my book’s major figures during this period, but it wouldn’t surprise me if they moved in the same circles in New York.) Like Campbell’s, Lee’s legacy is conventionally thought to consist of moving the genre toward greater realism, better writing, and more believable characters, although the degree to which each man was responsible for these developments has been disputed. Both also cultivated a distinct voice in their editorials and letters columns, which became a forum for open discussion with fans, although they differed drastically in their tones, political beliefs, and ambitions. Campbell openly wanted to make a discovery that would change the world, while Lee seemed content to make his mark on the entertainment industry, which he did with mixed success for decades. It can be hard to remember now, but there was a long period when Lee seemed lost in the wilderness, with a sketchy production company that filed for bankruptcy and pursued various dubious projects. If he had died in his seventies, or just after his cameo in Mallrats, he might well have been mourned, like Campbell, mostly by diehard fans.

Instead, he lived long enough to see the movie versions of X-Men and Spider-Man, followed by the apotheosis of the Marvel Universe. And it’s easy to see the difference between Campbell and Lee as partially a matter of longevity. If Campbell had lived to be the same age, he would have died in 2005, which is a truly staggering thought. I have trouble imagining what science fiction would have been like if he had stuck around for three more decades, even from the sidelines. (It isn’t hard to believe that he might have remained a fixture at conventions. The writer and scholar James Gunn—not to be confused with the director of Guardians of the Galaxy—is almost exactly Stan Lee’s age, and I sat down to chat with him at Worldcon two years ago.) Of course, Campbell was already estranged from many writers and fans at the time of his death, and unlike Lee, he was more than willing to alienate a lot of his readers. It seems unlikely that he would have been forgiven for his mistakes, as Lee was, simply out of the affection in which he was held. If anything, his death may have postponed the reckoning with his racism, and its impact on the genre, that otherwise might have taken place during his lifetime. But the differences also run deeper. When you look at the world in which we live today, it might seem obvious that Lee’s comics won out over Campbell’s stories, at least when measured by their box office and cultural impact. The final installment in E.E. Smith’s Galactic Patrol was published just a few months before the debut of a character created by the science fiction fans Jerry Siegel and Joe Shuster, but you still see kids dressed up as Superman, not the Gray Lensman. That may seem inevitable now, but it could easily have gone the other way. The story of how this happened is a complicated one, and Lee played a huge part in it, along with many others. His death, like Campbell’s, marks the end of an era. And it may only be now that we can start to figure out what it all really meant.

The Men Who Saw Tomorrow, Part 3

leave a comment »

By now, it might seem obvious that the best way to approach Nostradamus is to see it as a kind of game, as Anthony Boucher describes it in the June 1942 issue of Unknown Worlds: “A fascinating game, to be sure, with a one-in-a-million chance of hitting an astounding bullseye. But still a game, and a game that has to be played according to the rules. And those rules are, above all things else, even above historical knowledge and ingenuity of interpretation, accuracy and impartiality.” Boucher’s work inspired several spirited rebukes in print from L. Sprague de Camp, who granted the rules of the game but disagreed about its harmlessness. In a book review signed “J. Wellington Wells”—and please do keep an eye on that last name—de Camp noted that Nostradamus was “conjured out of his grave” whenever there was a war:

And wonder of wonders, it always transpires that a considerable portion of his several fat volumes of prophetic quatrains refer to the particular war—out of the twenty-odd major conflicts that have occurred since Dr. Nostradamus’s time—or other disturbance now taking place; and moreover that they prophesy inevitable victory for our side—whichever that happens to be. A wonderful man, Nostradamus.

Their affectionate battle culminated in a nonsense limerick that de Camp published in the December 1942 issue of Esquire, claiming that if it were still in print after four hundred years, it would have been proven just as true as any of Nostradamus’s prophecies. Boucher responded in Astounding with the short story “Pelagic Spark,” an early piece of fanfic in which de Camp’s great-grandson uses the “prophecy” to inspire a rebellion in the far future against the sinister Hitler XVI.

This is all just good fun, but not everyone sees it as a game, and Nostradamus—like other forms of vaguely apocalyptic prophecy—tends to return at exactly the point when such impulses become the most dangerous. This was the core of de Camp’s objection, and Boucher himself issued a similar warning:

At this point there enters a sinister economic factor. Books will be published only when there is popular demand for them. The ideal attempt to interpret the as yet unfulfilled quatrains of Nostradamus would be made in an ivory tower when all the world was at peace. But books on Nostradamus sell only in times of terrible crisis, when the public wants no quiet and reasoned analysis, but an impassioned assurance that We are going to lick the blazes out of Them because look, it says so right here. And in times of terrible crisis, rules are apt to get lost.

Boucher observes that one of the best books on the subject, Charles A. Ward’s Oracles of Nostradamus, was reissued with a dust jacket emblazoned with such questions as “Will America Enter the War?” and “Will the British Fleet Be Destroyed?” You still see this sort of thing today, and it isn’t just the books that benefit. In 1981, the producer David L. Wolper released a documentary on the prophecies of Nostradamus, The Man Who Saw Tomorrow, that saw subsequent spikes in interest during the Gulf War—a revised version for television was hosted by Charlton Heston—and after the September 11 attacks, when there was a run on the cassette at Blockbuster. And the attention that it periodically inspires reflects the same emotional factors that led to psychohistory, as the host of the original version said to the audience: “Do we really want to know about the future? Maybe so—if we can change it.”

The speaker, of course, was Orson Welles. I had always known that The Man Who Saw Tomorrow was narrated by Welles, but it wasn’t until I watched it recently that I realized that he hosted it onscreen as well, in one of my favorite incarnations of any human being—bearded, gigantic, cigar in hand, vaguely contemptuous of his surroundings and collaborators, but still willing to infuse the proceedings with something of the velvet and gold braid. Keith Phipps of The A.V. Club once described the documentary as “a brain-damaged sequel” to Welles’s lovely F for Fake, which is very generous. The entire project is manifestly ridiculous and exploitative, with uncut footage from the Zapruder film mingling with a xenophobic fantasy of a war of the West against Islam. Yet there are also moments that are oddly transporting, as when Welles turns to the camera and says:

Before continuing, let me warn you now that the predictions of the future are not at all comforting. I might also add that these predictions of the past, these warnings of the future are not the opinions of the producers of the film. They’re certainly not my opinions. They’re interpretations of the quatrains as made by scores of independent scholars of Nostradamus’ work.

In the sly reading of “my opinions,” you can still hear a trace of Harry Lime, or even of Gregory Arkadin, who invited his guests to drink to the story of the scorpion and the frog. And the entire movie is full of strange echoes of Welles’s career. Footage is repurposed from Waterloo, in which he played Louis XVIII, and it glances at the fall of the Shah of Iran, whose brother-in-law funded Welles’s The Other Side of the Wind, which was impounded by the revolutionary government that Nostradamus allegedly foresaw.

Welles later expressed contempt for the whole affair, allegedly telling Merv Griffin that you could get equally useful prophecies by reading at random out of the phone book. Yet it’s worth remembering, as the critic David Thomson notes, that Welles turned all of his talk show interlocutors into versions of the reporter from Citizen Kane, or even into the Hal to his Falstaff, and it’s never clear where the game ended. His presence infuses The Man Who Saw Tomorrow with an unearned loveliness, despite its many awful aspects, such as the presence of the “psychic” Jeane Dixon. (Dixon’s fame rested on her alleged prediction of the Kennedy assassination, based on a statement—made in Parade magazine in 1960—that the winner of the upcoming presidential election would be “assassinated or die in office though not necessarily in his first term.” Oddly enough, no one seems to remember an equally impressive prediction by the astrologer Joseph F. Goodavage, who wrote in Analog in September 1962: “It is coincidental that each American president in office at the time of these conjunctions [of Jupiter and Saturn in an earth sign] either died or was assassinated before leaving the presidency…John F. Kennedy was elected in 1960 at the time of a Jupiter and Saturn conjunction in Capricorn.”) And it’s hard for me to watch this movie without falling into reveries about Welles, who was like John W. Campbell in so many other ways. Welles may have been the most intriguing cultural figure of the twentieth century, but he never seemed to know what would come next, and his later career was one long improvisation. It might not be too much to hear a certain wistfulness when he speaks of the man who could see tomorrow, much as Campbell’s fascination with psychohistory stood in stark contrast to the confusion of the second half of his life. When The Man Who Saw Tomorrow was released, Welles had finished editing about forty minutes of his unfinished masterpiece The Other Side of the Wind, and for decades after his death, it seemed that it would never be seen. Instead, it’s available today on Netflix. And I don’t think that anybody could have seen that coming.

Wounded Knee and the Achilles heel

with one comment

On February 27, 1973, two hundred Native American activists occupied the town of Wounded Knee in South Dakota. They were protesting against the unpopular tribal president of the Oglala Lakota Sioux, along with the federal government’s failure to negotiate treaties, and the ensuing standoff—which resulted in two deaths, a serious injury, and a disappearance—lasted for over seventy days. It also galvanized many of those who watched it unfold, including the author Paul Chaat Smith, who writes in his excellent book Everything You Know About Indians is Wrong:

Lots occurred over the next two and a half months, including a curious incident in which some of the hungry, blockaded Indians attempted to slaughter a cow. Reporters and photographers gathered to watch. Nothing happened. None of the Indians—some urban activists, some from Sioux reservations—actually knew how to butcher cattle. Fortunately, a few of the journalists did know, and they took over, ensuring dinner for the starving rebels. That was a much discussed event during and after Wounded Knee. The most common reading of this was that basically we were fakes. Indians clueless about butchering livestock were not really Indians.

Smith dryly notes that the protesters “lost points” with observers after this episode, which overshadowed many of the more significant aspects of the occupation, and he concludes: “I myself know nothing about butchering cattle, and would hope that doesn’t invalidate my remarks about the global news media and human rights.”

I got to thinking about this passage in the aftermath of Elizabeth Warren’s very bad week. More specifically, I was reminded of it by a column by the Washington Post opinion writer Dana Milbank, who focuses on Warren’s submissions to the cookbook Pow Wow Chow: A Collection of Recipes from Families of the Five Civilized Tribes, which was edited by her cousin three decades ago. One of the recipes that Warren contributed was “Crab with Tomato Mayonnaise Dressing,” which leads Milbank to crack: “A traditional Cherokee dish with mayonnaise, a nineteenth-century condiment imported by settlers? A crab dish from landlocked Oklahoma? This can mean only one thing: canned crab. Warren is unfit to lead.” He’s speaking with tongue partially in cheek—a point that probably won’t be caught by thousands of people who are just browsing the headlines—but when I read these words, I thought immediately of these lines from Smith’s book:

It presents the unavoidable question: Are Indian people allowed to change? Are we allowed to invent completely new ways of being Indian that have no connection to previous ways we have lived? Authenticity for Indians is a brutal measuring device that says we are only Indian as long as we are authentic. Part of the measurement is about percentage of Indian blood. The more, the better. Fluency in one’s Indian language is always a high card. Spiritual practices, living in one’s ancestral homeland, attending powwows, all are necessary to ace the authenticity test. Yet many of us believe taking the authenticity tests is like drinking the colonizer’s Kool-Aid—a practice designed to strengthen our commitment to our own internally warped minds. In this way, we become our own prison guards.

And while there may be other issues with Warren’s recipe, it’s revealing that we often act as if the Cherokee Nation somehow ceased to evolve—or cook for itself—after the introduction of mayonnaise.

This may seem like a tiny point, but it’s also an early warning of a monstrous cultural reckoning lurking just around the corner, at a time when we might have thought that we had exhausted every possible way to feel miserable and divided. If Warren runs for president, which I hope she does, we’re going to be plunged into what Smith aptly describes as a “snake pit” that terrifies most public figures. As Smith writes in a paragraph that I never tire of quoting:

Generally speaking, smart white people realize early on, probably even as children, that the whole Indian thing is an exhausting, dangerous, and complicated snake pit of lies. And…the really smart ones somehow intuit that these lies are mysteriously and profoundly linked to the basic construction of the reality of daily life, now and into the foreseeable future. And without it ever quite being a conscious thought, these intelligent white people come to understand that there is no percentage, none, in considering the Indian question, and so the acceptable result is to, at least subconsciously, acknowledge that everything they are likely to learn about Indians in school, from books and movies and television programs, from dialogue with Indians, from Indian art and stories, from museum exhibits about Indians, is probably going to be crap, so they should be avoided.

This leads him to an unforgettable conclusion: “Generally speaking, white people who are interested in Indians are not very bright.” But that’s only because most of the others are prudent enough to stay well away—and even Warren, who is undeniably smart, doesn’t seem to have realized that this was a fight that she couldn’t possibly win.

One white person who seems unquestionably interested in Indians, in his own way, is Donald Trump. True to form, he may not be very bright, but he also displays what Newt Gingrich calls a “sixth sense,” in this case for finding a formidable opponent’s Achilles heel and hammering at it relentlessly. Elizabeth Warren is one of the most interesting people to consider a presidential run in a long time, but Trump may have already hamstrung her candidacy by zeroing in on what might look like a trivial vulnerability. And the really important point here is that if Warren’s claims about her Native American heritage turn out to be her downfall, it’s because the rest of us have never come to terms with our guilt. The whole subject is so unsettling that we’ve collectively just agreed not to talk about it, and Warren made the unforgivable mistake, a long time ago, of folding it into her biography. If she’s being punished for it now, it’s because it precipitates something that was invisibly there all along, and this may only be the beginning. Along the way, we’re going to run up against a lot of unexamined assumptions, like Milbank’s amusement at that canned crab. (As Smith reminds us: “Indians are okay, as long as they meet non-Indian expectations about Indian religious and political beliefs. And what it really comes down to is that Indians are okay as long as we don’t change too much. Yes, we can fly planes and listen to hip-hop, but we must do these things in moderation and always in a true Indian way.” And mayonnaise is definitely out.) Depending on your point of view, this issue is either irrelevant or the most important problem imaginable, and like so much else these days, it may take a moronic quip from Trump—call it the Access Hollywood principle—to catalyze a debate that more reasonable minds have postponed. In his discussion of Wounded Knee, Smith concludes: “Yes, the news media always want the most dramatic story. But I would argue there is an overlay with Indian stories that makes it especially difficult.” And we might be about to find out how difficult it really is.

Written by nevalalee

October 19, 2018 at 8:44 am

The Rover Boys in the Air

with 3 comments

On September 3, 1981, a man who had recently turned seventy reminisced in a letter to a librarian about his favorite childhood books, which he had read in his youth in Dixon, Illinois:

I, of course, read all the books that a boy that age would like—The Rover Boys; Frank Merriwell at Yale; Horatio Alger. I discovered Edgar Rice Burroughs and read all the Tarzan books. I am amazed at how few people I meet today know that Burroughs also provided an introduction to science fiction with John Carter of Mars and the other books that he wrote about John Carter and his frequent trips to the strange kingdoms to be found on the planet Mars.

At almost exactly the same time, a boy in Kansas City was working his way through a similar shelf of titles, as described by one of his biographers: “Like all his friends, he read the Rover Boys series and all the Horatio Alger books…[and] Edgar Rice Burroughs’s wonderful and exotic Mars books.” And a slightly younger member of the same generation would read many of the same novels while growing up in Brooklyn, as he recalled in his memoirs: “Most important of all, at least to me, were The Rover Boys. There were three of them—Dick, Tom, and Sam—with Tom, the middle one, always described as ‘fun-loving.’”

The first youngster in question was Ronald Reagan; the second was Robert A. Heinlein; and the third was Isaac Asimov. There’s no question that all three men grew up reading many of the same adventure stories as their contemporaries, and Reagan’s apparent fondness for science fiction has inspired a fair amount of speculation. In a recent article on Slate, Kevin Bankston retells the famous story of how WarGames inspired the president to ask his advisors about the likelihood of such an incident occurring for real, concluding that it was “just one example of how science fiction influenced his administration and his life.” The Day the Earth Stood Still, which was adapted from a story by Harry Bates that originally appeared in Astounding, allegedly influenced Reagan’s interest in the potential effect of extraterrestrial contact on global politics, which he once brought up with Gorbachev. And in the novelistic biography Dutch, Edmund Morris—or his narrative surrogate—ruminates at length on the possible origins of the Strategic Defense Initiative:

Long before that, indeed, [Reagan] could remember the warring empyrean of his favorite boyhood novel, Edgar Rice Burroughs’s Princess of Mars. I keep a copy on my desk: just to flick through it is to encounter five-foot-thick polished glass domes over cities, heaven-filling salvos, impregnable walls of carborundum, forts, and “manufactories” that only one man with a key can enter. The book’s last chapter is particularly imaginative, dominated by the magnificent symbol of a civilization dying for lack of air.

For obvious marketing reasons, I’d love to be able to draw a direct line between science fiction and the Reagan administration. Yet it’s also tempting to read a greater significance into these sorts of connections than they actually deserve. The story of science fiction’s role in the Strategic Defense Initiative has been told countless times, but usually by the writers themselves, and it isn’t clear what impact it truly had. (The definitive book on the subject, Way Out There in the Blue by Frances FitzGerald, doesn’t mention any authors at all by name, and it refers only once, in passing, to a group of advisors that included “a science fiction writer.” And I suspect that the most accurate description of their involvement appears in a speech delivered by Greg Bear: “Science fiction writers helped the rocket scientists elucidate their vision and clarified it.”) Reagan’s interest in science fiction seems less like a fundamental part of his personality than like a single aspect of a vision that was shaped profoundly by the popular culture of his young adulthood. The fact that Reagan, Heinlein, and Asimov devoured many of the same books only tells me that this was what a lot of kids were reading in the twenties and thirties—although perhaps only the exceptionally imaginative would try to live their lives as an extension of those stories. If these influences were genuinely meaningful, we should also be talking about the Rover Boys, a series “for young Americans” about three brothers at boarding school that has now been almost entirely forgotten. And if we’re more inclined to emphasize the science fiction side for Reagan, it’s because this is the only genre that dares to make such grandiose claims for itself.

In fact, the real story here isn’t about science fiction, but about Reagan’s gift for appropriating the language of mainstream culture in general. He was equally happy to quote Dirty Harry or Back to the Future, and he may not even have bothered to distinguish between his sources. In Way Out There in the Blue, FitzGerald brilliantly unpacks a set of unscripted remarks that Reagan made to reporters on March 24, 1983, in which he spoke of the need of rendering nuclear weapons “obsolete”:

There is a part of a line from the movie Torn Curtain about making missiles “obsolete.” What many inferred from the phrase was that Reagan believed what he had once seen in a science fiction movie. But to look at the explanation as a whole is to see that he was following a train of thought—or simply a trail of applause lines—from one reassuring speech to another and then appropriating a dramatic phrase, whose origin he may or may not have remembered, for his peroration.

Take out the word “reassuring,” and we have a frightening approximation of our current president, whose inner life is shaped in real time by what he sees on television. But we might feel differently if those roving imaginations had been channeled by chance along different lines—like a serious engagement with climate change. It might just as well have gone that way, but it didn’t, and we’re still dealing with the consequences. As Greg Bear asks: “Do you want your presidents to be smart? Do you want them to be dreamers? Or do you want them to be lucky?”

A better place

with 2 comments

Note: Spoilers follow for the first and second seasons of The Good Place.

When I began watching The Good Place, I thought that I already knew most of its secrets. I had missed the entire first season, and I got interested in it mostly due to a single review by Emily Nussbaum of The New Yorker, which might be my favorite piece so far from one of our most interesting critics. Nussbaum has done more than anyone else in the last decade to elevate television criticism into an art in itself, and this article—with its mixture of the critical, personal, and political—displays all her strengths at their best. Writing of the sitcom’s first season finale, which aired the evening before Trump’s inauguration, Nussbaum says: “Many fans, including me, were looking forward to a bit of escapist counterprogramming, something frothy and full of silly puns, in line with the first nine episodes. Instead, what we got was the rare season finale that could legitimately be described as a game-changer, vaulting the show from a daffy screwball comedy to something darker, much stranger, and uncomfortably appropriate for our apocalyptic era.” Following that grabber of an opening, she continues with a concise summary of the show’s complicated premise:

The first episode is about a selfish American jerk, Eleanor (the elfin charmer Kristen Bell), who dies and goes to Heaven, owing to a bureaucratic error. There she is given a soul mate, Chidi (William Jackson Harper), a Senegal-raised moral philosopher. When Chidi discovers that Eleanor is an interloper, he makes an ethical leap, agreeing to help her become a better person…Overseeing it all was Michael, an adorably flustered angel-architect played by Ted Danson; like Leslie Knope, he was a small-town bureaucrat who adored humanity and was desperate to make his flawed community perfect.

There’s a lot more involved, of course, and we haven’t even mentioned most of the other key players. It’s an intriguing setup for a television show, and it might have been enough to get me to watch it on its own. Yet what really caught my attention was Nussbaum’s next paragraph, which includes the kind of glimpse into a critic’s writing life that you only see when emotions run high: “After watching nine episodes, I wrote a first draft of this column based on the notion that the show, with its air of flexible optimism, its undercurrent of uplift, was a nifty dialectical exploration of the nature of decency, a comedy that combined fart jokes with moral depth. Then I watched the finale. After the credits rolled, I had to have a drink.” She then gives away the whole game, which I’m obviously going to do here as well. You’ve been warned:

In the final episode, we learn that it was no bureaucratic mistake that sent Eleanor to Heaven. In fact, she’s not in Heaven at all. She’s in Hell—which is something that Eleanor realizes, in a flash of insight, as the characters bicker, having been forced as a group to choose two of them to be banished to the Bad Place. Michael is no angel, either. He’s a low-ranking devil, a corporate Hell architect out on his first big assignment, overseeing a prankish experimental torture cul-de-sac. The malicious chuckle that Danson unfurls when Eleanor figures it out is both terrifying and hilarious, like a clap of thunder on a sunny day. “Oh, God!” he growls, dropping the mask. “You ruin everything, you know that?”

That’s a legitimately great twist, and when I suggested to my wife—who didn’t know anything about it—that we check it out on Netflix, it was partially so that I could enjoy her surprise at that moment, like a fan of A Song of Ice and Fire eagerly watching an unsuspecting friend during the Red Wedding.

Yet I was the one who really got fooled. The Good Place became my favorite sitcom since Community, and for almost none of the usual reasons. It’s very funny, of course, but I find that the jokes land about half the time, and it settles for what Nussbaum describes as “silly puns” more often than it probably should. Many episodes are closer to freeform comedy—the kind in which the riffs have less to do with context than with whatever the best pitch happened to be in the writers room—than to the clockwork farce to which it ought to aspire. But its flaws don’t really matter. I haven’t been so involved with the characters on a series like this in years, which allows it to take risks and get away with formal experiments that would destroy a lesser show. After the big revelation in the first season finale, it repeatedly blew up its continuity, with Michael resetting the memories of the others and starting over whenever they figured out his plan, but somehow, it didn’t leave me feeling jerked around. This is partially thanks to how the show cleverly conflates narrative time with viewing time, which is one of the great unsung strengths of the medium. (When the second season finally gets on track, these “versions” of the characters have only known one another for a couple of weeks, but every moment is enriched by our memories of their earlier incarnations. It’s a good trick, but it’s not so different from the realization, for example, that all of the plot twists and relationships of the first two seasons of Twin Peaks unfolded over less than a month.) It also speaks to the talent of the cast, which consistently rises to every challenge. And it does a better job of telling a serialized story than any sitcom that I can remember. Even while I was catching up with it, I managed to parcel it out over time, but I can also imagine binging an entire season at one sitting. That’s mostly due to the fact that the writers are masters of structure, if not always at filling the spaces between act breaks, but it’s also because the stakes are literally infinite.

And the stakes apply to all of us. It’s hard to come away from The Good Place without revisiting some of your assumptions about ethics, the afterlife, and what it means to be a good person. (The inevitable The Good Place and Philosophy, whenever it appears, might actually be worth reading.) I’m more aware of how much I’ve internalized the concept of “moral desert,” or the notion that good behavior will be rewarded, which we should all know by now isn’t true. In its own unpretentious way, the series asks its viewers to contemplate the problem of how to live when there might not be a prize awaiting us at the end. It’s the oldest question imaginable, but it seems particularly urgent these days, and the show’s answers are more optimistic than we have any right to expect. Writing just a few weeks after the inauguration, Nussbaum seems to project some of her own despair onto creator Michael Schur:

While I don’t like to read the minds of showrunners—or, rather, I love to, but it’s presumptuous—I suspect that Schur is in a very bad mood these days. If [Parks and Recreation] was a liberal fantasia, The Good Place is a dystopian mindfork: it’s a comedy about the quest to be moral even when the truth gets bent, bullies thrive, and sadism triumphs…Now that his experiment has crashed, [the character of] Michael plans to erase the ensemble’s memories and reboot. The second season—presuming the show is renewed (my mouth to God’s ear)—will start the same scheme from scratch. Michael will make his afterlife Sims suffer, no matter how many rounds it takes.

Yet the second season hinges on an unlikely change of heart. Michael comes to care about his charges—he even tries to help them escape to the real Good Place—and his newfound affection doesn’t seem like another mislead. I’m not sure if I believe it, but I’m still grateful. It isn’t a coincidence that Michael shares his name with the show’s creator, and I’d like to think that Schur ended up with a kinder version of the series than he may have initially envisioned. Like Nussbaum, he tore up the first draft and started over. Life is hard enough as it is, and the miracle of The Good Place is that it takes the darkest view imaginable of human nature, and then it gently hints that we might actually be capable of becoming better.

Written by nevalalee

September 27, 2018 at 8:39 am

The surprising skepticism of The X-Files

with one comment

Gillian Anderson in "Jose Chung's From Outer Space"

Note: To celebrate the twenty-fifth anniversary of the premiere of The X-Files, I’m republishing a post that originally appeared, in a somewhat different form, on September 9, 2013.

Believe it or not, this week marks the twenty-fifth anniversary of The X-Files, which aired its first episode on September 10, 1993. As much as I’d like to claim otherwise, I didn’t watch the pilot that night, and I’m not even sure that I caught the second episode, “Deep Throat.” “Squeeze,” which aired the following week, is the first installment that I clearly remember seeing on its original broadcast, and I continued to tune in afterward, although only sporadically. In its early days, I had issues with the show’s lack of continuity: it bugged me to no end that after every weekly encounter with the paranormal—any one of which should have been enough to upend Scully’s understanding of the world forever—the two leads were right back where they were at the start of the next episode, and few, if any, of their cases were ever mentioned again. Looking back now, of course, it’s easy to see that this episodic structure was what allowed the show to survive, and that it was irrevocably damaged once it began to take its backstory more seriously. In the meantime, I learned to accept the show’s narrative logic on its own terms. And I’m very grateful that I did.

It’s no exaggeration to say that The X-Files has had a greater influence on my own writing than any work of narrative art in any medium. That doesn’t mean it’s my favorite work of art, or even my favorite television show—only that Chris Carter’s supernatural procedural came along at the precise moment in my young adulthood that I was most vulnerable to being profoundly influenced by a great genre series. I was thirteen when the show premiered, toward the end of the most pivotal year of my creative life. Take those twelve months away, or replace them with a different network of cultural influences, and I’d be a different person altogether. It was the year I discovered Umberto Eco, Stephen King, and Douglas R. Hofstadter; Oliver Stone’s JFK set me on a short but fruitful detour into the literature of conspiracy; I bought a copy of Very by the Pet Shop Boys, about which I’ll have a lot more to say soon; I acquired copies of Isaac Asimov’s Science Fiction Magazine and 100 Great Science Fiction Short Short Stories; and I took my first deep dive into the work of David Lynch and, later, Jorge Luis Borges. Some of these works have lasted, while others haven’t, but they all shaped who I became, and The X-Files stood at the heart of it all, with imagery drawn in equal part from Twin Peaks and Dealey Plaza and a playful, agnostic spirit that mirrored that of the authors I was reading at the same time.

Gillian Anderson and David Duchovny in The X-Files pilot

And this underlying skepticism—which may seem like a strange word to apply to The X-Files—was a big part of its appeal. What I found enormously attractive about the show was that although it took place in a world of aliens, ghosts, and vampires, it didn’t try to force these individual elements into one overarching pattern. Even in its later seasons, when it attempted, with mixed results, to weave its abduction and conspiracy threads into a larger picture, certain aspects remained incongruously unexplained. The same world shaped by the plans of the Consortium or Syndicate also included lake monsters, clairvoyants, and liver-eating mutants, all of whom would presumably continue to go about their business after the alien invasion occurred. It never tried to convert us to anything, because it didn’t have any answers. And what I love about it now, in retrospect, is the fact that this curiously indifferent attitude toward its own mysteries arose from the structural constraints of network television itself. Every episode had to stand on its own. There was no such thing as binge-watching. The show had to keep moving or die.

Which goes a long way toward explaining why even fundamentally skeptical viewers, like me, could become devoted fans, or why Mulder and Scully could appear on the cover of the Skeptical Inquirer. It’s true that Scully was never right, but it’s remarkable how often it seemed that she could be, which is due as much to the show’s episodic construction as to Gillian Anderson’s wonderful performance. (As I’ve mentioned before, Scully might be my favorite character on any television show.) Every episode changed the terms of the game, complete with a new supporting cast, setting, and premise—and after the advent of Darin Morgan, even the tone could be wildly variable. As a result, it was impossible for viewers to know where they stood, which made a defensive skepticism seem like the healthiest possible attitude. Over time, the mythology grew increasingly unwieldy, and the show’s lack of consistency became deeply frustrating, as reflected in its maddening, only occasionally transcendent reboot. The X-Files eventually lost its way, but not until after a haphazard, often dazzling initial season that established, in spite of what its creators might do in the future, that anything was possible, and no one explanation would ever be enough. And it’s a lesson that I never forgot.

Written by nevalalee

September 14, 2018 at 9:00 am

The writer’s defense

leave a comment »

“This book will be the death of me,” the writer Jose Chung broods to himself halfway through my favorite episode of Millennium. “I just can’t write anymore. What possessed me to be a writer anyway? What kind of a life is this? What else can I do now, with no other skills or ability? My life has fizzled away. Only two options left: suicide, or become a television weatherman.” I’ve loved this internal monologue—written by Darin Morgan and delivered by the great Charles Nelson Reilly—ever since I first heard it more than two decades ago. (As an aside, it’s startling for me to realize that just four short years separated the series premiere of The X-Files from “Jose Chung’s Doomsday Defense,” which was enough time for an entire fictional universe to be born, splinter apart, and reassemble itself into a better, more knowing incarnation.) And I find that I remember Chung’s words every time I sit down to write something new. I’ve been writing for a long time now, and I’m better at it than I am at pretty much anything else, but I still have to endure something like a moment of existential dread whenever I face the blank page for the first time. For the duration of the first draft, I regret all of my decisions, and I wonder whether there’s still a chance to try something else instead. Eventually, it passes. But it always happens. And after spending over a decade doing nothing else but writing, I’ve resigned myself to the fact that it’s always going to be this way.

Which doesn’t mean that there aren’t ways of dealing with it. In fact, I’ve come to realize that most of my life choices are designed to minimize the amount of time that I spend writing first drafts. This means nothing else but the physical act of putting down words for the first time, which is when I tend to hit my psychological bottom. Everything else is fine by comparison. As a result, I’ve shunted aspects of my creative process to one side or the other of the rough draft, which persists as a thin slice of effort between two huge continents of preparation and consolidation. I prefer to do as much research in advance as I can, and I spend an ungodly amount of time on outlines, which I’ve elsewhere described as a stealth first draft that I can trick myself into thinking doesn’t matter. My weird, ritualistic use of mind maps and other forms of random brainstorming is another way to generate as many ideas as possible before I need to really start writing. When I finally start the first draft, I make a point of never going back to read it until I’ve physically typed out the entire thing, with my outline at my elbow, as if I’m just transcribing something that already exists. Ideally, I can crank out that part of the day’s work in an hour or less. Once it’s there on the screen, I can begin revising, taking as many passes as possible without worrying too much about any given version. In the end, I somehow end up with a draft that I can stand to read. It isn’t entirely painless, but it involves less pain than any other method that I can imagine.

And these strategies are all just specific instances of my favorite piece of writing advice, which I owe to the playwright David Mamet. I haven’t quoted it here for a while, so here it is again:

As a writer, I’ve tried to train myself to go one achievable step at a time: to say, for example, “Today I don’t have to be particularly inventive, all I have to be is careful, and make up an outline of the actual physical things the character does in Act One.” And then, the following day to say, “Today I don’t have to be careful. I already have this careful, literal outline, and all I have to do is be a little bit inventive,” et cetera, et cetera.

As I’ve noted before, I badly wish that I could somehow send this paragraph back in time to my younger self, because it would have saved me years of wasted effort. But what Mamet doesn’t mention, perhaps because he thought that it was obvious, is that buried in that list of “achievable steps” is a monster of a task that can’t be eliminated, only reduced. There’s no getting around the time that you spend in front of the blank page, and even the best outline in the world can only take away so much of the pain. (An overly detailed outline may even cause problems later, if it leads to a work that seems lifeless and overdetermined—which leaves us with the uncomfortable fact that a certain amount of pain at the writing stage is necessary to avoid even greater trouble in the future.)

Of course, if you’re just looking to minimize the agony of writing that first draft, there are easier ways to anesthetize yourself. Jose Chung pours himself a glass of whiskey, and I’ve elsewhere characterized the widespread use of mind-altering chemicals by writers—particularly caffeine, nicotine, and alcohol—as a pragmatic survival tactic, like the other clichés that we associate with the bohemian life. And I haven’t been immune. For years, I’d often have a drink while working at night, and it certainly didn’t hurt my productivity. (A ring of discolored wood eventually appeared on the surface of my desk from the condensation on the glass, which said more about my habits than I realized at the time.) After I got married, and especially after I became a father, I had to drastically rethink my writing schedule. I was no longer writing long into the evening, but trying to cram as much work as I could into a few daylight hours, leaving me and my wife with a little time to ourselves after our daughter went to bed. As a result, the drinking stopped, and the more obsessive habits that I’ve developed in the meantime are meant to reduce the pain of writing with a clear head. This approach isn’t for everyone, and it may not work for anyone else at all. But it’s worth remembering that when you look at a reasonably productive writer, you’re really seeing a collection of behaviors that have accrued around the need to survive that daily engagement with the empty page. And if they tend to exhibit such an inexplicable range of strategies, vices, and rituals, ultimately, they’re all just forms of defense.

Written by nevalalee

September 12, 2018 at 8:21 am

The paper of record

leave a comment »

One of my favorite conventions in suspense fiction is the trope known as Authentication by Newspaper. It’s the moment in a movie, novel, or television show—and sometimes even in reality—when the kidnapper sends a picture of the victim holding a copy of a recent paper, with the date and headline clearly visible, as a form of proof of life. (You can also use it with piles of illicit cash, to prove that you’re ready to send payment.) The idea frequently pops up in such movies as Midnight Run and Mission: Impossible 2, and it also inspired a classic headline from The Onion: “Report: Majority Of Newspapers Now Purchased By Kidnappers To Prove Date.” It all depends on the fact that a newspaper is a datable object that is widely available and impossible to fake in advance, which means that it can be used to definitively establish the earliest possible day in which an event could have taken place. And you can also use the paper to verify a past date in subtler ways. A few weeks ago, Motherboard had a fascinating article on a time-stamping service called Surety, which provides the equivalent of a dated seal for digital documents. To make it impossible to change the date on one of these files, every week, for more than twenty years, Surety has generated a public hash value from its internal client database and published it in the classified ad section of the New York Times. As the company notes: “This makes it impossible for anyone—including Surety—to backdate timestamps or validate electronic records that were not exact copies of the original.”
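The underlying scheme is simple enough to sketch in code. What follows is a minimal illustration in Python of the general idea, not Surety’s actual system: each document is hashed, the hashes are folded into the previously published digest, and the resulting value is what gets printed where everyone can see it. The function names, the chaining order, and the placeholder for last week’s published value are assumptions made for the sake of the example.

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Return the SHA-256 digest of some bytes as a hex string."""
    return hashlib.sha256(data).hexdigest()

def weekly_digest(document_hashes: list, previous_digest: str) -> str:
    """Fold this week's document hashes into last week's published digest.

    Changing any document, or dropping one, changes the result, so the
    value printed in the paper commits to everything submitted that week.
    """
    combined = previous_digest + "".join(sorted(document_hashes))
    return sha256_hex(combined.encode("utf-8"))

# The service's side: hash the week's submissions and publish the digest.
docs = [b"contract draft, v3", b"lab notebook, week 12"]
doc_hashes = [sha256_hex(d) for d in docs]
published = weekly_digest(doc_hashes, previous_digest="<value from last week's ad>")
print("Value to place in the classified ads:", published)

# Verification, years later: anyone holding the original file and a copy of
# that week's newspaper can recompute the digest and compare it to print.
assert weekly_digest([sha256_hex(b"contract draft, v3"),
                      sha256_hex(b"lab notebook, week 12")],
                     previous_digest="<value from last week's ad>") == published
```

The point is the same as the kidnapper’s newspaper: once the digest has appeared in print, nobody, including the service itself, can quietly substitute a different document and claim that it existed that week.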

I was reminded of all this yesterday, after the Times posted an anonymous opinion piece titled “I Am Part of the Resistance Inside the Trump Administration.” The essay, which the paper credits to “a senior official,” describes what amounts to a shadow government within the White House devoted to saving the president—and the rest of the country—from his worst impulses. And while the author may prefer to remain nameless, he certainly doesn’t suffer from a lack of humility:

Many of the senior officials in [Trump’s] own administration are working diligently from within to frustrate parts of his agenda and his worst inclinations. I would know. I am one of them…It may be cold comfort in this chaotic era, but Americans should know that there are adults in the room. We fully recognize what is happening. And we are trying to do what’s right even when Donald Trump won’t.

The result, he claims, is “a two-track presidency,” with a group of principled advisors doing their best to counteract Trump’s admiration for autocrats and contempt for international relations: “This isn’t the work of the so-called deep state. It’s the work of the steady state.” He even reveals that there was early discussion among cabinet members of using the Twenty-Fifth Amendment to remove Trump from office, although it was scuttled by concerns about precipitating a crisis somehow worse than the one in which we’ve found ourselves.

Not surprisingly, the piece has generated a firestorm of speculation about the author’s identity, both online and in the White House itself, which I won’t bother covering here. What interests me are the writer’s reasons for publishing it in the first place. Over the short term, it can only destabilize an already volatile situation, and everyone involved will suffer for it. This implies that the author has a long game in mind, and it had better be pretty compelling. On Twitter, Nate Silver proposed one popular theory: “It seems like the person’s goal is to get outed and secure a very generous advance on a book deal.” He may be right—although if that’s the case, the plan has quickly gone sideways. Reaction on both sides has been far more critical than positive, with Erik Wemple of the Washington Post perhaps putting it best:

Like most anonymous quotes and tracts, this one is a PR stunt. Mr. Senior Administration Official gets to use the distributive power of the New York Times to recast an entire class of federal appointees. No longer are they enablers of a foolish and capricious president. They are now the country’s most precious and valued patriots. In an appearance on Wednesday afternoon, the president pronounced it all a “gutless” exercise. No argument here.

Or as the political blogger Charles P. Pierce says even more savagely in his response on Esquire: “Just shut up and quit.”

But Wemple’s offhand reference to “the distributive power” of the Times makes me think that the real motive is staring us right in the face. It’s a form of Authentication by Newspaper. Let’s say that you’re a senior official in the Trump administration who knows that time is running out. You’re afraid to openly defy the president, but you also want to benefit—or at least to survive—after the ship goes down. In the aftermath, everyone will be scrambling to position themselves for some kind of future career, even though the events of the last few years have left most of them irrevocably tainted. By the time it falls apart, it will be too late to claim that you were gravely concerned. But the solution is a stroke of genius. You plant an anonymous piece in the Times, like the founders of Surety publishing its hash value in the classified ads, except that your platform is vastly more prominent. And you place it there precisely so that you can point to it in the future. After Trump is no longer a threat, you can reveal yourself, with full corroboration from the paper of record, to show that you had the best interests of the country in mind all along. You were one of the good ones. The datestamp is right there. That’s your endgame, no matter how much pain it causes in the meantime. It’s brilliant. But it may not work. As nearly everyone has realized by now, the fact that a “steady state” of conservatives is working to minimize the damage of a Trump presidency to achieve “effective deregulation, historic tax reform, a more robust military and more” is a scandal in itself. This isn’t proof of life. It’s the opposite.

Written by nevalalee

September 6, 2018 at 8:59 am
