Alec Nevala-Lee

Thoughts on art, creativity, and the writing life.


Peak television and the future of stardom


Kevin Costner in The Postman

Earlier this week, I devoured the long, excellent article by Josef Adalian and Maria Elena Fernandez of Vulture on the business of peak television. It’s full of useful insights and even better gossip—and it names plenty of names—but there’s one passage that really caught my eye, in a section about the huge salaries that movie stars are being paid to make the switch to the small screen:

A top agent defends the sums his clients are commanding, explaining that, in the overall scheme of things, the extra money isn’t all that significant. “Look at it this way,” he says. “If you’re Amazon and you’re going to launch a David E. Kelley show, that’s gonna cost $4 million an episode [to produce], right? That’s $40 million. You can have Bradley Whitford starring in it, [who is] gonna cost you $150,000 an episode. That’s $1.5 million of your $40 million. Or you could spend another $3.5 million [to get Costner] on what will end up being a $60 million investment by the time you market and promote it. You can either spend $60 [million] and have the Bradley Whitford show, or $63.5 [million] and have the Kevin Costner show. It makes a lot of sense when you look at it that way.”
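(For what it's worth, the agent's arithmetic checks out. Below is a minimal Python sketch of it, assuming the ten-episode season his per-episode figures imply, treating the gap between his $40 million and $60 million totals as marketing and promotion, and using the $500,000-per-episode rate for Costner that the article reports elsewhere; the helper and those specific figures are my own inferences from the quote, not anything the agent spells out.)

    # Back-of-the-envelope version of the agent's math, not a real budgeting model.
    # Assumed: a ten-episode season, $20 million for marketing and promotion, and
    # Costner at the reported $500,000 per episode. Amounts are in millions of dollars.

    EPISODES = 10
    PRODUCTION = 4.0 * EPISODES        # "$4 million an episode... That's $40 million"
    MARKETING = 20.0                   # the gap between the $40 million and $60 million totals
    WHITFORD_PER_EPISODE = 0.15        # "$150,000 an episode... $1.5 million of your $40 million"
    COSTNER_PER_EPISODE = 0.5          # the $500,000-per-episode figure reported in the article

    def total_investment(star_per_episode: float) -> float:
        """Production plus marketing, with the star's flat fee swapped in or out."""
        premium = (star_per_episode - WHITFORD_PER_EPISODE) * EPISODES
        return PRODUCTION + MARKETING + premium

    print(total_investment(WHITFORD_PER_EPISODE))   # 60.0 -> the Bradley Whitford show
    print(total_investment(COSTNER_PER_EPISODE))    # 63.5 -> the Kevin Costner show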

With all due apologies to Bradley Whitford, I found this thought experiment fascinating, and not just for the reasons that the agent presumably shared it. It implies, for one thing, that television—which is often said to be overtaking Hollywood in terms of quality—is becoming more like feature filmmaking in another respect: it’s the last refuge of the traditional star. We frequently hear that movie stardom is dead and that audiences are drawn more to franchises than to recognizable faces, so the fact that cable and streaming networks seem intensely interested in signing film stars, in a post-True Detective world, implies that their model is different. Some of it may be due to the fact, as William Goldman once said, that no studio executive ever got fired for hiring a movie star: as the new platforms fight to establish themselves, it makes sense that they’d fall back on the idea of star power, which is one of the few things that corporate storytelling has ever been able to quantify or understand. It may also be because the marketing strategy for television inherently differs from that for film: an online series is unusually dependent on media coverage to stand out from the pack, and signing a star always generates headlines. Or at least it once did. (The Vulture article notes that Woody Allen’s new series for Amazon “may end up marking peak Peak TV,” and it seems a lot like a deal that was made for the sake of the coverage it would produce.)

Kevin Costner in JFK

But the most plausible explanation lies in simple economics. As the article explains, Netflix and the other streaming companies operate according to a “cost-plus” model: “Rather than holding out the promise of syndication gold, the company instead pays its studio and showrunner talent a guaranteed up-front profit—typically twenty or thirty percent above what it takes to make a show. In exchange, it owns all or most of the rights to distribute the show, domestically and internationally.” This limits the initial risk to the studio, but also the potential upside: nobody involved in producing the show itself will see any money on the back end. In addition, it means that even the lead actors of the series are paid a flat dollar amount, which makes them a more attractive investment than they might be for a movie. Most of the major stars in Hollywood earn gross points, which means that they get a cut of the box office receipts before the film turns a profit—a “first dollar” deal that makes the mathematics of breaking even much more complicated. The thought experiment about Bradley Whitford and Kevin Costner only makes sense if you can get Costner at a fixed salary per episode. In other words, movie stars are being actively courted by television because its model is a throwback to an earlier era, when actors were held under contract by a studio without any profit participation, and before stars and their agents negotiated better deals that ended up undermining the economic basis of the star system entirely.

And it’s revealing that Costner, of all actors, appears in this example. His name came up mostly because multiple sources told Vulture that he was offered $500,000 per episode to star in a streaming series: “He passed,” the article says, “but industry insiders predict he’ll eventually say ‘yes’ to the right offer.” But he also resonates because he stands for a kind of movie stardom that was already on the wane when he first became famous. It has something to do with the quintessentially American roles that he liked to play—even JFK is starting to seem like the last great national epic—and an aura that somehow kept him in leading parts two decades after his career as a major star was essentially over. That’s weirdly impressive in itself, and it testifies to how intriguing a figure he remains, even if audiences aren’t likely to pay to see him in a movie. Whenever I think of Costner, I remember what the studio executive Mike Medavoy once claimed to have told him right at the beginning of his career:

“You know,” I said to him over lunch, “I have this sense that I’m sitting here with someone who is going to become a great big star. You’re going to want to direct your own movies, produce your own movies, and you’re going to end up leaving your wife and going through the whole Hollywood movie-star cycle.”

Costner did, in fact, end up leaving his first wife. And if he also leaves film for television, even temporarily, it may reveal that “the whole Hollywood movie-star cycle” has a surprising final act that few of us could have anticipated.

Written by nevalalee

May 27, 2016 at 9:03 am

“Asthana glanced over at the television…”


"A woman was standing just over his shoulder..."

Note: This post is the eighteenth installment in my author’s commentary for Eternal Empire, covering Chapter 19. You can read the previous installments here.

A quarter of a century ago, I read a story about the actor Art Carney, possibly apocryphal, that I’ve never forgotten. Here’s the version told by the stage and television actress Patricia Wilson:

During a live performance of the original Honeymooners, before millions of viewers, Jackie [Gleason] was late making an entrance into a scene. He left Art Carney onstage alone, in the familiar seedy apartment set of Alice and Ralph Kramden. Unflappable, Carney improvised action for Ed Norton. He looked around, scratched himself, then went to the Kramden refrigerator and peered in. He pulled out an orange, shuffled to the table, and sat down and peeled it. Meanwhile frantic stage managers raced to find Jackie. Art Carney sat onstage peeling and eating an orange, and the audience convulsed with laughter.

According to some accounts, Carney stretched the bit of business out for a full two minutes before Gleason finally appeared. And while it certainly speaks to Carney’s ingenuity and resourcefulness, we should also take a moment to tip our hats to that humble orange, as well as the prop master who thought to stick it in the fridge—unseen and unremarked—in the first place.

Theatrical props, as all actors and directors know, can be a source of unexpected ideas, just as the physical limitations or possibilities of the set itself can provide a canvas on which the action is conceived in real time. I’ve spoken elsewhere of the ability of vaudeville comedians to improvise routines on the spot using whatever was available on a standing set, and there’s a sense in which the richness of the physical environment in which a scene takes place is a battery from which the performances can draw energy. When a director makes sure that each actor’s pockets are full of the litter that a character might actually carry, it isn’t just a mark of obsessiveness or self-indulgence, or even a nod toward authenticity, but a matter of storing up potential tools. A prop by itself can’t make a scene work, but it can provide the seed around which a memorable moment or notion can grow, like a crystal. In more situations than you might expect, creativity lies less in inventing from scratch than in making effective use of whatever happens to lie at hand. Invention is a precious resource, and most artists have a finite amount of it; it’s better, whenever possible, to utilize what the world provides. And much of the time, when you’re faced with a hard problem to solve, you’ll find that the answer is right there in the background.

"Asthana glanced over at the television..."

This is as true of writing fiction as of any of the performing arts. In the past, I’ve suggested that this is the true purpose of research or location work: it isn’t about accuracy, but about providing raw material for dreams, and any writer faced with the difficult task of inventing a scene would be wise to exploit what already exists. It’s infinitely easier to write a chase scene, for example, if you’re tailoring it to the geography of a particular street. As usual, it comes back to the problem of making choices: the more tangible or physical the constraints, the more likely they’ll generate something interesting when they collide with the fundamentally abstract process of plotting. Even if the scene I’m writing takes place somewhere wholly imaginary, I’ll treat it as if it were being shot on location: I’ll pick a real building or locale that has the qualities I need for the story, pore over blueprints and maps, and depart from the real plan only when I don’t have any alternative. In most cases, the cost of that departure, in terms of the confusion it creates, is far greater than the time and energy required to make the story fit within an existing structure. For much the same reason, I try to utilize the props and furniture you’d naturally find there. And that’s all the more true when a scene occurs in a verifiable place.

Sometimes, this kind of attention to detail can result in surprising resonances. There’s a small example that I like in Chapter 19 of Eternal Empire. Rogozin, my accused intelligence agent, is being held without charges at a detention center in Paddington Green. This is a real location, and its physical setup becomes very important: Rogozin is going to be killed, in an apparent suicide, under conditions of heavy security. To prepare these scenes, I collected reference photographs, studied published descriptions, and shaped the action as much as possible to unfold logically under the constraints the location imposed. And one fact caught my eye, purely as a matter of atmosphere: the cells at Paddington Green are equipped with televisions, usually set to play something innocuous, like a nature video. This had obvious potential as a counterpoint to the action, so I went to work looking for a real video that might play there. And after a bit of searching, I hit on a segment from the BBC series Life in the Undergrowth, narrated by David Attenborough, about the curious life cycle of the gall wasp. The phenomenon it described, as an invading wasp burrows into the gall created by another, happened to coincide well—perhaps too well—with the story itself. As far as I’m concerned, it’s what makes Rogozin’s death scene work. And while I could have made up my own video to suit the situation, it seemed better, and easier, to poke around the stage first to see what I could find…

Written by nevalalee

May 7, 2015 at 9:11 am

The unbreakable television formula


Ellie Kemper in Unbreakable Kimmy Schmidt

Watching the sixth season premiere of Community last night on Yahoo—which is a statement that would have once seemed like a joke in itself—I was struck by the range of television comedy we have at our disposal these days. We’ve said goodbye to Parks and Recreation, we’re following Community into what is presumably its final stretch, and we’re about to greet Unbreakable Kimmy Schmidt as it starts what looks to be a powerhouse run on Netflix. These shows are superficially in the same genre: they’re single-camera sitcoms that freely grant themselves elaborate sight gags and excursions into surrealism, with a cutaway style that owes as much to The Simpsons as to Arrested Development. Yet they’re palpably different in tone. Parks and Rec was the ultimate refinement of the mockumentary style, with talking heads and reality show techniques used to flesh out a narrative of underlying sweetness; Community, as always, alternates between obsessively detailed fantasy and a comic strip version of emotions to which we can all relate; and Kimmy Schmidt takes place in what I can only call Tina Fey territory, with a barrage of throwaway jokes and non sequiturs designed to be referenced and quoted forever.

And the diversity of approach we see in these three comedies makes the dramatic genre seem impoverished. Most television dramas are still basically linear; they’re told using the same familiar grammar of establishing shots, medium shots, and closeups; and they’re paced in similar ways. If you were to break down an episode by shot length and type, or chart the transitions between scenes, an installment of Game of Thrones would look a lot on paper like one of Mad Men. There’s room for individual quirks of style, of course: the handheld cinematography favored by procedurals has a different feel from the clinical, detached camera movements of House of Cards. And every now and then, we get a scene—like the epic tracking shot during the raid in True Detective—that awakens us to the medium’s potential. But the fact that such moments are striking enough to inspire think pieces the next day only points to how rare they are. Dramas are just less inclined to take big risks of structure and tone, and when they do, they’re likely to be hybrids. Shows like Fargo or Breaking Bad are able to push the envelope precisely because they have a touch of black comedy in their blood, as if that were the secret ingredient that allowed for greater formal daring.

Jon Hamm on Mad Men

It isn’t hard to pin down the reason for this. A cutaway scene or extended homage naturally takes us out of the story for a second, and comedy, which is inherently more anarchic, has trained us to roll with it. We’re better at accepting artifice in comic settings, since we aren’t taking the story quite as seriously: whatever plot exists is tacitly understood to be a medium for the delivery of jokes. Which isn’t to say that we can’t care deeply about these characters; if anything, our feelings for them are strengthened because they take place in a stylized world that allows free play for the emotions. Yet this is also something that comedy had to teach us. It can be fun to watch a sitcom push the limits of plausibility to the breaking point, but if a drama deliberately undermines its own illusion of reality, we can feel cheated. Dramas that constantly draw attention to their own artifice, as Twin Peaks did, are more likely to become cult favorites than popular successes, since most of us just want to sit back and watch a story that presents itself using the narrative language we know. (Which, to be fair, is true of comedies as well: the three sitcoms I’ve mentioned above, taken together, have a fraction of the audience of something like The Big Bang Theory.)

In part, it’s a problem of definition. When a drama pushes against its constraints, we feel more comfortable referring to it as something else: Orange is the New Black, which tests its structure as adventurously as any series on the air today, has suffered at awards season from its resistance to easy categorization. But what’s really funny is that comedy escaped from its old formulas by appropriating the tools that dramas had been using for years. The three-camera sitcom—which has been responsible for countless masterpieces of its own—made radical shifts of tone and location hard to achieve, and once comedies liberated themselves from the obligation to unfold as if for a live audience, they could indulge in extended riffs and flights of imagination that were impossible before. It’s the kind of freedom that dramas, in theory, have always had, even if they utilize it only rarely. This isn’t to say that a uniformity of approach is a bad thing: the standard narrative grammar evolved for a reason, and if it gives us compelling characters with a maximum of transparency, that’s all for the better. Telling good stories is hard enough as it is, and formal experimentation for its own sake can be a trap in itself. Yet we’re still living in a world with countless ways of being funny, and only one way, within a narrow range of variations, of being serious. And that’s no laughing matter.

The crowded circle of television


The cast of Mad Men

Note: Every Friday, The A.V. Club, my favorite pop cultural site on the Internet, throws out a question to its staff members for discussion, and I’ve decided that I want to join in on the fun. This week’s question: “What’s your favorite TV show of the year so far?”

There are times when watching television can start to feel like a second job—a pleasurable one, to be sure, but one that demands a lot of work nevertheless. Over the last year, I’ve followed more shows than ever, including Mad Men, Game of Thrones, Orange is the New Black, Hannibal, Community, Parks and Recreation, House of Cards, The Vampire Diaries, and True Detective. For the most part, they’ve all had strong runs, and I’d have trouble picking a favorite. (If pressed, I’d probably go with Mad Men, if only for old times’ sake, with Hannibal as a very close second.) They’re all strikingly different in emphasis, tone, and setting, but they also have a lot in common. With one exception, which I’ll get to in a moment, these are dense shows with large casts and intricate storylines. Many seem devoted to pushing the limits of how much complexity can be accommodated within the constraints of the television format, which may be why the majority run for just ten to thirteen episodes: it’s hard to imagine that level of energy sustained over twenty or more installments.

And while I’m thrilled by the level of ambition visible here, it comes at a price. There’s a sort of arms race taking place between media of all kinds as they fight to stand out in an increasingly crowded space, with so much competing for our attention. Books, even literary novels, are expected to be page-turners; movies offer up massive spectacle to the point where miraculous visual effects are taken for granted; and television has taken to packing every minute of narrative time to the bursting point. (This isn’t true of all shows, of course—a lot of television series are still designed to play comfortably in the background of a hotel room—but it’s generally the case with prestige shows that end up on critics’ lists and get honored at award ceremonies.) This trend toward complexity arises from a confluence of factors I’ve tried to unpack here before: just as The Simpsons was the first freeze-frame sitcom, modern television takes advantage of our streaming and binge-watching habits to deliver storytelling that rewards, and even demands, close attention.

Matthew McConaughey on True Detective

For the most part, this is a positive development. Yet there’s also a case to be made that television, which is so good at managing extended narratives and enormous casts of characters, is also uniquely suited for the opposite: silence, emptiness, and contemplation. In a film, time is a precious commodity, and when you’re introducing characters while also setting in motion the machinery of a complicated story, there often isn’t time to pause. Television, in theory, should be able to stretch out a little, interspersing relentless forward momentum with moments of quiet, which are often necessary for viewers to consolidate and process what they’ve seen. Twin Peaks was as crowded and plotty as any show on the air today, but it also found time for stretches of weird, inexplicable inaction, and it’s those scenes that I remember best. Even in the series finale, with so many threads to address and only forty minutes to cover them all, it devotes endless minutes to Cooper’s hallucinatory—and almost entirely static—ordeal in the Black Lodge, and even to a gag involving a decrepit bank manager rising from his desk and crossing the floor of his branch very, very slowly.

So while there’s a lot of fun to be had with shows that constantly accelerate the narrative pace, it can also be a limitation, especially when it’s handled less than fluently. (For every show, like Orange is the New Black, that manages to cut expertly between subplots, there’s another, like Game of Thrones, that can’t quite seem to handle its enormous scope, and even The Vampire Diaries is showing signs of strain.) Both Hannibal and Mad Men know when to linger on an image or revelation—roughly half of Hannibal is devoted to contemplating its other half—and True Detective, in particular, seemed to consist almost entirely of such pauses. We remember such high points as the final chase with the killer or the raid in “Who Goes There,” but what made the show special were the scenes in which nothing much seemed to be happening. It was aided in this by its limited cast and its tight focus on its two leads, so it’s possible that what shows really need to slow things down are a couple of movie stars to hold the eye. But it’s a step in the right direction. If time is a flat circle, as Rust says, so is television, and it’s good to see it coming back around.

The dreamlife of television


Aaron Paul on Breaking Bad

I’ve been dreaming a lot about Breaking Bad. On Wednesday, my wife and I returned from a trip to Barcelona, where we’d spent a beautiful week: my baby daughter was perfectly happy to be toted around various restaurants, cultural sites, and the Sagrada Familia, and it came as a welcome break from my own work. Unfortunately, it also meant that we were going to miss the Breaking Bad finale, which aired the Sunday before we came home. For a while, I seriously considered bringing my laptop and downloading it while we were out of the country, both because I was enormously anxious to see how the show turned out and because I dreaded the spoilers I’d have to avoid for the three days before we returned. In the end, I gritted my teeth and decided to wait until we got home. This meant avoiding most of my favorite news and pop cultural sites—I was afraid to even glance past the top few headlines on the New York Times—and staying off Twitter entirely, which I suppose wasn’t such a great loss. And even as we toured the Picasso Museum and walked for miles along the marina with a baby in tow, my thoughts were rarely very far from Walter White.

This must have done quite a number on my psyche, because I started dreaming about the show with alarming frequency. My dreams included two separate, highly elaborated versions of the finale, one of which was a straightforward bloodbath with a quiet epilogue, the other a weird metafictional conclusion in which the events of the series were played out on a movie screen with the cast and crew watching them unfold—which led me to exclaim, while still dreaming: “Of course that’s how they would end it!” Now that I’ve finally seen the real finale, the details of these dreams are fading, and only a few scraps of imagery remain. Yet the memories are still emotionally charged, and they undoubtedly affected how I approached the last episode itself, which I was afraid would never live up to the versions I’d dreamed for myself. I suspect that a lot of fans, even those who didn’t actually hallucinate alternate endings, probably felt the same way. (For the record, I liked the finale a lot, even if it ranks a notch below the best episodes of the show, which was always best at creating chaos, not resolving it. And I think about its closing moments almost every day.)

Jon Hamm on Mad Men

And it made me reflect on the ways in which television, especially in its modern, highly serialized form, is so conducive to dreaming. Dreams are a way of assembling and processing fragments of the day’s experience, or recollections from the distant past, and a great television series is nothing less than a vast storehouse of memories from another life. When a show is as intensely serialized as Breaking Bad was, it can be hard to remember individual episodes, aside from the occasional formal standout like “Fly”: I can’t always recall what scenes took place when, or in what order, and an especially charged sequence of installments—like the last half of this final season—tends to blend together into a blur of vivid impressions. What I remember are facial expressions, images, bits of dialogue: “Stay out of my territory.” “Run.” “Tread lightly.” And the result is a mine of moments that end up naturally incorporated into my own subconscious. A good movie or novel exists as a piece, and I rarely find myself dreaming alternate lives for, say, Rick and Ilsa or Charles Foster Kane. With Walter White, it’s easy to imagine different paths that the action could have taken, and those byways play themselves out in the deepest parts of my brain.

Which may explain why television is so naturally drawn to dream sequences and fantasies, which are only one step removed from the supposedly factual events of the shows themselves. Don Draper’s dreams have become a huge part of Mad Men, almost to the point of parody, and this has always been an art form that attracts surreal temperaments, from David Lynch to Bryan Fuller, even if they tend to be destroyed by it. As I’ve often said before, it’s the strangest medium I know, and at its best, it’s the outcome of many unresolved tensions. Television can feel maddeningly real, a hidden part of your own life, which is why it can be so hard to say goodbye to a great show. It’s also impossible to get a lasting grip on it or to hold it all in your mind at once, especially if it runs for more than a few seasons, which hints at an even deeper meaning. I’ve always been struck by how poorly we integrate the different chapters in our own past: there are entire decades of my life that I don’t think about for months on end. When they return, it’s usually in the hours just before waking. And by teaching us to process narratives that can last for years, it’s possible that television subtly trains us to better understand the shapes of our own lives, even if it’s only in dreams.

Written by nevalalee

October 7, 2013 at 8:27 am

Posted in Television


Critical television studies


The cast of Community

Television is such a pervasive medium that it’s easy to forget how deeply strange it is. Most works of art are designed to be consumed all at once, or at least in a fixed period of time—it’s physically possible, if not entirely advisable, to read War and Peace in one sitting. Television, by contrast, is defined by the fact of its indefinite duration. House of Cards aside, it seems likely that most of us will continue to watch shows week by week, year after year, until they become a part of our lives. This kind of extended narrative can be delightful, but it’s also subject to risk. A beloved show can change for reasons beyond anyone’s control. Sooner or later, we find out who killed Laura Palmer. An actor’s contract expires, so Mulder is abducted by aliens, and even if he comes back, by that point, we’ve lost interest. For every show like Breaking Bad that has its dark evolution mapped out for seasons to come, there’s a series like Glee, which disappoints, or Parks and Recreation, which gradually reveals a richness and warmth that you’d never guess from the first season alone. And sometimes a show breaks your heart.

It’s clear at this point that the firing of Dan Harmon from Community was the most dramatic creative upheaval for any show in recent memory. This isn’t the first time that a show’s guiding force has departed under less than amicable terms—just ask Frank Darabont—but it’s unusual in a series so intimately linked to one man’s particular vision. Before I discovered Community, I’d never heard of Dan Harmon, but now I care deeply about what this guy feels and thinks. (Luckily, he’s never been shy about sharing this with the rest of us.) And although it’s obvious from the opening minutes of last night’s season premiere that the show’s new creative team takes its legacy seriously, there’s no escaping the sense that they’re a cover band doing a great job with somebody else’s music. Showrunners David Guarascio and Moses Port do their best to convince us out of the gate that they know how much this show means to us, and that’s part of the problem. Community was never a show about reassuring us that things won’t change, but about unsettling us with its endless transformations, even as it delighted us with its new tricks.

The Community episode "Remedial Chaos Theory"

Don’t get me wrong: I laughed a lot at last night’s episode, and I was overjoyed to see these characters again. By faulting the new staff for repeating the same beats I loved before, when I might have been outraged by any major alterations, I’m setting it up so they just can’t win. But the show seems familiar now in a way that would have seemed unthinkable for most of its first three seasons. Part of the pleasure of watching the series came from the fact that you never knew what the hell might happen next, and it wasn’t clear if Harmon knew either. Not all of his experiments worked: there were even some clunkers, like “Messianic Myths and Ancient Peoples,” in the glorious second season, which is one of my favorite runs of any modern sitcom. But as strange as this might have once seemed, it feels like we finally know what Community is about. It’s a show that takes big formal risks, finds the emotional core in a flurry of pop culture references, and has no idea how to use Chevy Chase. And although I’m grateful that this version of the show has survived, I don’t think I’m going to tune in every week wondering where in the world it will take me.

And the strange thing is that Community might have gone down this path with or without Harmon. When a show needs only two seasons to establish that anything is possible, even the most outlandish developments can seem like variations on a theme. Even at the end of the third season, there was the sense that the series was repeating itself. I loved “Digital Estate Planning,” for instance, but it felt like the latest attempt to do one of the formally ambitious episodes that crop up at regular intervals each season, rather than an idea that forced itself onto television because the writers couldn’t help themselves. In my review of The Master, I noted that Paul Thomas Anderson has perfected his brand of hermetic filmmaking to the point where it would be more surprising if he made a movie that wasn’t ambiguous, frustrating, and deeply weird. Community has ended up in much the same place, so maybe it’s best that Harmon got out when he did. It’s doubtful that the series will ever be able to fake us out with a “Critical Film Studies” again, because it’s already schooled us, like all great shows, in how it needs to be watched. And although its characters haven’t graduated from Greendale yet, its viewers, to their everlasting benefit, already have.

Written by nevalalee

February 8, 2013 at 9:50 am

Wouldn’t it be easier to write for television?


Last week, I had dinner with a college friend I hadn’t seen in years, who is thinking about giving up a PhD in psychology to write for television in Los Angeles. We spent a long time commiserating about the challenges of the medium, at least from a writer’s point of view, hitting many of the points that I’ve discussed here before. With the prospects of a fledgling television show so uncertain, I said, especially when the show might be canceled after four episodes, or fourteen, or forty, it’s all but impossible for the creator to tell effective stories over time. Running a television show is one of the hardest jobs in the world, with countless obstacles along the way, even for critical darlings. Knowing all this, I asked my friend, why did he want to do this in the first place?

My friend’s response was an enlightening one. The trouble with writing novels or short stories, he said, is the fact that the author is expected to spend a great deal of time on description, style, and other tedious elements that a television writer can cheerfully ignore. Teleplays, like feature scripts, are nothing but structure and dialogue (or maybe just structure, as William Goldman says), and there’s something liberating in how they strip storytelling down to its core. The writer takes care of the bones of the narrative, which is where his primary interest presumably lies, then outsources the work of casting, staging, and art direction to qualified professionals who are happy to do the work. And while I didn’t agree with everything my friend said, I could certainly see his point.

Yet that’s only half of the story. It’s true that a screenwriter gets to outsource much of the conventional apparatus of fiction to other departments, but only at the price of creative control. You may have an idea about how a character should look, or what kind of home he should have, or how a moment of dialogue, a scene, or an overall story should unfold, but as a writer, you don’t have much control over the matter. Scripts are easier to write than novels for a reason: they’re only one piece of a larger enterprise, which is reflected in the writer’s relative powerlessness. The closest equivalent to a novelist in television isn’t the writer, but the executive producer. Gene Roddenberry, in The Making of Star Trek, neatly sums up the similarity between the two roles:

Producing in television is like storytelling. The choice of the actor, picking the right costumes, getting the right flavor, the right pace—these are as much a part of storytelling as writing out that same description of a character in a novel.

And the crucial point about producing a television series, like directing a feature film, is that it’s insanely hard. As Thomas Lennon and Robert Ben Garant point out in their surprisingly useful Writing Movies for Fun and Profit, as far as directing is concerned, “If you’re doing it right, it’s not that fun.” As a feature director or television producer, you’re responsible for a thousand small but critical decisions that need to be made very quickly, and while you’re working on the story, you’re also casting parts, scouting for locations, dealing with the studio and the heads of various departments, and surviving on only a few hours of sleep a night, for a year or more of your life. In short, the amount of effort required to keep control of the project is greater, not less, than what is required to write a novel—except with more money on the line, in public, and with greater risk that control will eventually be taken away from you.

So is it easier to write for television? Yes, if that’s all you want to do. But if you want control of your work, if you want your stories to be experienced in a form close to what you originally envisioned, it isn’t easier. It’s much harder. Which is why, to my mind, John Irving still puts it best: “When I feel like being a director, I write a novel.”

Lessons from great (and not-so-great) television


It can be hard for a writer to admit being influenced by television. In On Becoming a Novelist, John Gardner struck a disdainful note that hasn’t changed much since:

Much of the dialogue one encounters in student fiction, as well as plot, gesture, even setting, comes not from life but from life filtered through TV. Many student writers seem unable to tell their own most important stories—the death of a father, the first disillusionment in love—except in the molds and formulas of TV. One can spot the difference at once because TV is of necessity—given its commercial pressures—false to life.

In the nearly thirty years since Gardner wrote these words, the television landscape has changed dramatically, but it’s worth pointing out that much of what he says here is still true. The basic elements of fiction—emotion, character, theme, even plot—need to come from close observation of life, or even the most skillful novel will eventually ring false. That said, the structure of fiction, and the author’s understanding of the possibilities of the form, doesn’t need to come from life alone, and probably shouldn’t. To develop a sense of what fiction can do, a writer needs to pay close attention to all types of art, even the nonliterary kind. And over the past few decades, television has expanded the possibilities of narrative in ways that no writer can afford to ignore.

If you think I’m exaggerating, consider a show like The Wire, which tells complex stories involving a vast range of characters, locations, and social issues in ways that aren’t possible in any other medium. The Simpsons, at least in its classic seasons, acquired a richness and velocity that continued to build for years, until it had populated a world that rivaled the real one for density and immediacy. (Like the rest of the Internet, I respond to most situations with a Simpsons quote.) And Mad Men continues to furnish a fictional world of astonishing detail and charm. World-building, it seems, is where television shines: in creating a long-form narrative that begins with a core group of characters and explores them for years, until they can come to seem as real as one’s own family and friends.

Which is why Glee can seem like such a disappointment. Perhaps because the musical is already the archest of genres, the show has always regarded its own medium with an air of detachment, as if the conventions of the after-school special or the high school sitcom were merely a sandbox in which the producers could play. On some level, this is fine: The Simpsons, among many other great shows, has fruitfully treated television as a place for narrative experimentation. But by turning its back on character continuity and refusing to follow any plot for more than a few episodes, Glee is abandoning many of the pleasures that narrative television can provide. Watching the show run out of ideas for its lead characters in less than two seasons simply serves as a reminder of how challenging this kind of storytelling can be.

Mad Men, by contrast, not only gives us characters who take on lives of their own, but consistently lives up to those characters in its acting, writing, and direction. (This is in stark contrast to Glee, where I sense that a lot of the real action is taking place in fanfic.) And its example has changed the way I write. My first novel tells a complicated story with a fairly controlled cast of characters, but Mad Men—in particular, the spellbinding convergence of plots in “Shut the Door, Have a Seat”—reminded me of the possibilities of expansive casts, which allow characters to pair off and develop in unexpected ways. (The evolution of Christina Hendricks’s Joan from eye candy to second lead is only the most obvious example.) As a result, I’ve tried to cast a wider net with my second novel, using more characters and settings in the hopes that something unusual will arise. Television, strangely, has made me more ambitious. I’d like to think that even John Gardner would approve.

Written by nevalalee

March 17, 2011 at 8:41 am

The president is collaborating


Last week, Bill Clinton and James Patterson released their collaborative novel The President is Missing, which has already sold something like a quarter of a million copies. Its publication was heralded by a lavish two-page spread in The New Yorker, with effusive blurbs from just about everyone whom a former president and the world’s bestselling author might be expected to get on the phone. (Lee Child: “The political thriller of the decade.” Ron Chernow: “A fabulously entertaining thriller.”) If you want proof that the magazine’s advertising department is fully insulated from its editorial side, however, you can just point to the fact that the task of reviewing the book itself was given to Anthony Lane, who doesn’t tend to look favorably on much of anything. Lane’s style—he has evidently never met a smug pun or young starlet he didn’t like—can occasionally turn me off from his movie reviews, but I’ve always admired his literary takedowns. I don’t think a month goes by that I don’t remember his writeup of the New York Times bestseller list for May 15, 1994, which allowed him to tackle the likes of The Bridges of Madison County, The Celestine Prophecy, and especially The Day After Tomorrow by Allan Folsom, from which he quoted a sentence that permanently changed my view of such novels: “Two hundred European cities have bus links with Frankfurt.” But he seems to have grudgingly liked The President is Missing. If nothing else, he furnishes a backhanded compliment that has already been posted, hilariously out of context, on Amazon: “If you want to make the most of your late-capitalist leisure-time, hit the couch, crack a Bud, punch the book open, focus your squint, and enjoy.”

The words “hit the couch, crack a Bud, punch the book open, [and] focus your squint” are all callbacks to samples of Patterson’s prose that Lane quotes in the review, but the phrase “late-capitalist leisure-time” might require some additional explanation. It’s a reference to the paper “Structure over Style: Collaborative Authorship and the Revival of Literary Capitalism,” which appeared last year in Digital Humanities Quarterly, and I’m grateful to Lane for bringing it to my attention. The authors, Simon Fuller and James O’Sullivan, focus on the factory model of novelists who employ ghostwriters to boost their productivity, and their star exhibit is Patterson, to whom they devote the same kind of computational scrutiny that has previously uncovered traces of collaboration in Shakespeare. Not surprisingly, it turns out that Patterson doesn’t write most of the books that he ostensibly coauthors. (He may not even have done much of the writing on First to Die, which credits him as the sole writer.) But the paper is less interesting for its quantitative analysis than for its qualitative evaluation of what Patterson tells us about how we consume and enjoy fiction. For instance:

The form of [Patterson’s] novels also appears to be molded by contemporary experience. In particular, his work is perhaps best described as “commuter fiction.” Nicholas Paumgarten describes how the average time for a commute has significantly increased. As a result, reading has increasingly become one of those pursuits that can pass the time of a commute. For example, a truck driver describes how “he had never read any of Patterson’s books but that he had listened to every single one of them on the road.” A number of online reader reviews also describe Patterson’s writing in terms of their commutes…With large print, and chapters of two or three pages, Patterson’s works are constructed to fit between the stops on a metro line.

Of course, you could say much the same of many thrillers, particularly the kind known as the airport novel, which wasn’t just a book that you read on planes—at its peak, it was one in which many scenes took place in airports, which were still associated with glamor and escape. What sets Patterson apart from his peers is his ability to maintain a viable brand while publishing a dozen books every year. His productivity is inseparable from his use of coauthors, but he wasn’t the first. Fuller and O’Sullivan cite the case of Alexandre Dumas, who allegedly boasted of having written four hundred novels and thirty-five plays that had created jobs for over eight thousand people. And they dig up a remarkable quote from The German Ideology by Karl Marx and Friedrich Engels, who “favorably compare French popular fiction to the German, paying particular attention to the latter’s appropriation of the division of labor”:

In proclaiming the uniqueness of work in science and art, [Max] Stirner adopts a position far inferior to that of the bourgeoisie. At the present time it has already been found necessary to organize this “unique” activity. Horace Vernet would not have had time to paint even a tenth of his pictures if he regarded them as works which “only this Unique person is capable of producing.” In Paris, the great demand for vaudevilles and novels brought about the organization of work for their production, organization which at any rate yields something better than its “unique” competitors in Germany.

These days, you could easily imagine Marx and Engels making a similar case about film, by arguing that the products of collaboration in Hollywood have often been more interesting, or at least more entertaining, than movies made by artists working outside the system. And they might be right.

The analogy to movies and television seems especially appropriate in the case of Patterson, who has often drawn such comparisons himself, as he once did to The Guardian: “There is a lot to be said for collaboration, and it should be seen as just another way to do things, as it is in other forms of writing, such as for television, where it is standard practice.” Fuller and O’Sullivan compare Patterson’s brand to that of Alfred Hitchcock, whose name was attached to everything from Dell anthologies to The Three Investigators to Alfred Hitchcock’s Mystery Magazine. It’s a good parallel, but an even better one might be hiding in plain sight. In her recent profile of the television producer Ryan Murphy, Emily Nussbaum evokes an ability to repackage the ideas of others that puts even Patterson to shame:

Murphy is also a collector, with an eye for the timeliest idea, the best story to option. Many of his shows originate as a spec script or as some other source material. (Murphy owned the rights to the memoir Orange Is the New Black before Jenji Kohan did, if you want to imagine an alternative history of television.) Glee grew out of a script by Ian Brennan; Feud began as a screenplay by Jaffe Cohen and Michael Zam. These scripts then get their DNA radically altered and replicated in Murphy’s lab, retooled with his themes and his knack for idiosyncratic casting.

Murphy’s approach of retooling existing material in his own image might be even smarter than Patterson’s method of writing outlines for others to expand, and he’s going to need it. Two months ago, he signed an unprecedented $300 million contract with Netflix to produce content of all kinds: television shows, movies, documentaries. And another former president was watching. While Bill Clinton was working with Patterson, Barack Obama was finalizing a Netflix deal of his own—and if he needs a collaborator, he doesn’t have far to look.

The magic window


Last week, the magazine Nautilus published a conversation on “the science and art of time” between the composer Philip Glass and the painter Fredericka Foster. The entire article is worth a look, but my favorite detail is one that Glass shares at the very beginning:

There are many strange things about music and time. When I’m on a tour with the dance company we work in a different-sized theater every night. The first thing the dance company does when we arrive is to measure the stage. They have to reset the dance to fit that stage. So you also have to reset the time of the music: in a larger theater, you must play slower. In a smaller theater, you have to play faster. The relation of time and space in music is dynamic. I have a range of speed in mind. If the players don’t pay attention to that, it will look really funny. You can see the stage fill up with dancers because they are playing at the wrong speed.

And a few lines afterward, in a more contemplative mood, Glass continues: “I was reflecting on the universe expanding. We know that it is and can measure it, by the way time is operating, or by the way we see a star exploding far away. For various reasons, when a physicist tells me that the universe is expanding, I say ‘Okay, let’s go back to the dance floor.’ The dance floor is getting bigger, what does that mean? It means that time has to slow down.”

The relationship between the pacing of a work of art and the physical space in which it occurs is an intriguing one, and it reminds me of a trick employed by one of my heroes, the film editor Walter Murch. In his excellent book Behind the Seen, Charles Koppelman describes the “little people,” a pair of tiny paper silhouettes—one male, one female—that Murch attaches to the screening monitor in his editing room. Koppelman explains:

They are his way of dealing with the problem of scale…As an editor, Murch must remember that images in the edit room are only 1/240 the square footage of what the audience will eventually see on a thirty-foot-wide screen…It’s still easy to forget the size of a projected film, which can trick an editor into pacing a film too quickly, or using too many close-ups—styles more akin to television. The eye rapidly apprehends the relatively small, low-detail images on a TV. Large-scale faces help hold the attention of the audience sitting in a living room with lots of distractions or ambient light. But in movies, images are larger than life and more detailed, so the opposite is true. The eye needs time to peruse the movie screen and take it all in…The solution for Murch is to have these two human cutouts stand sentry on his monitor, reminding him of the film’s eventual huge proportions.

And Murch writes in his book In the Blink of an Eye: “Why don’t we just edit in large rooms with big screens? Well, with digital editing and video projection, we could, very easily, be editing with a thirty-foot screen. The real estate for the room would be expensive, however.”

And while the problems presented by a live performance and a projected image on film might seem rather different, the underlying issue, in both cases, is the audience’s ability to receive and process information. On a purely practical level, a big stage may require the tempo of the choreography to subtly change, because the dancers are moving in a larger physical space, and the music has to be adjusted accordingly. But the viewer’s relationship to the work is also affected—the eye is more likely to take in the action in pieces, rather than as a whole, and the pacing may need to be modified. A similar phenomenon occurs in the movies, as Murch writes:

I have heard directors say that they were disappointed when they finally saw their digitally edited films projected on a big screen. They felt that the editing now seemed “choppy,” though it had seemed fine on the television monitor…With a small screen, your eye can easily take in everything at once, whereas on a big screen it can only take in sections at a time. You tend to look at a small screen, but into a big screen. If you are looking at an image, taking it all in at once, your tendency will be to cut away to the next shot sooner. With a theatrical film, particularly one in which the audience is fully engaged, the screen is not a surface, it is a magic window, sort of a looking glass through which your whole body passes and becomes engaged in the action with the characters on the screen.

Murch notes that the lack of detail on a small screen—or a compressed video file—can mislead the editor as well: “There may be so little detail that the eye can absorb all of it very quickly, leading the careless editor to cut sooner than if he had been looking at the fully detailed film image…Image detail and pace are intimately related.”

And the risk of editing on a smaller screen isn’t anything new. Over thirty years ago, the director and editor Edward Dmytryk wrote in On Film Editing:

Many editors shape their editing concepts on the Moviola, a technique I consider decidedly inferior. One does not see the same things on a small Moviola screen, or even on the somewhat larger, though fuzzier, flatbed screen, that one sees in a theater. The audience sees its films only on the “big screen,” and since every cut should be made with the audience in mind, the cutter must try to see each bit of film as the viewer in the theater will eventually see it. (Even a moderate-sized television screen offers far more scope than a Moviola; therefore, it too presents a somewhat different “picture” for the viewer’s inspection.)

Today, of course, viewers can experience stories on a range of screen sizes that Dmytryk might never have anticipated, and which no editor can possibly control. And it’s unclear how editors—who, unlike Philip Glass, don’t have the luxury of measuring the space in which the film will unfold—are supposed to deal with this problem. Taken as a whole, it seems likely that the trend of editorial pacing reflects the smallest screen on which the results can be viewed, which is part of the reason why the average number of cuts per minute has steadily increased for years. And it’s not unreasonable for editors to prioritize the format in which movies will be seen for most of their lifetimes. Yet we also give up something when we no longer consider the largest possible stage. After the editor Anne V. Coates passed away last month, many obituaries paid tribute to the moment in Lawrence of Arabia that has justifiably been called the greatest cut in movie history. But it wouldn’t have nearly the same impact if it weren’t for the fact that the next shot is held for an astonishing thirty-five seconds, which might never have occurred to someone who was cutting it for a smaller screen. Even viewed on YouTube, it’s unforgettable. But in a theater, it’s a magic window.

The Prime of Miss Elizabeth Hoover


Yesterday, as I was working on my post for this blog, I found myself thinking about the first time that I ever heard of Lyme disease, which, naturally, was on The Simpsons. In the episode “Lisa’s Substitute,” which first aired on April 25, 1991, Lisa’s teacher, Miss Hoover, tells the class: “Children, I won’t be staying long. I just came from the doctor, and I have Lyme disease.” As Principal Skinner cheerfully explains: “Lyme disease is spread by small parasites called ‘ticks.’ When a diseased tick attaches itself to you, it begins sucking your blood. Malignant spirochetes infect your bloodstream, eventually spreading to your spinal fluid and on into the brain.” At the end of the second act, however, Miss Hoover unexpectedly returns, and I’ve never forgotten her explanation for her sudden recovery:

Miss Hoover: You see, class, my Lyme disease turned out to be psychosomatic.
Ralph: Does that mean you’re crazy?
Janie: It means she was faking it.
Miss Hoover: No, actually, it was a little of both. Sometimes, when a disease is in all the magazines and on all the news shows, it’s only natural that you think you have it.

And while it might seem excessive to criticize a television episode that first aired over a quarter of a century ago, it’s hard to read these lines after Porochista Khakpour’s memoir Sick without wishing that this particular joke didn’t exist.

In its chronic form, Lyme disease remains controversial, but like chronic fatigue syndrome and fibromyalgia, it’s an important element in the long, complicated history of women having trouble finding doctors who will take their pain seriously. As Lidija Haas writes in The New Yorker:

There’s a class of illnesses—multi-symptomatic, chronic, hard to diagnose—that remain associated with suffering women and disbelieving experts. Lyme disease, symptoms of which can afflict patients years after the initial tick bite, appears to be one…[The musician Kathleen Hanna] describes an experience common to many sufferers from chronic illness—that of being dismissed as an unreliable witness to what is happening inside her. Since no single medical condition, a doctor once told her, could plausibly affect so many different systems—neurological, respiratory, gastrointestinal—she must be having a panic attack…As in so many other areas of American life, women of color often endure the most extreme versions of this problem.

It goes without saying that when “Lisa’s Substitute” was written, there weren’t any women on the writing staff of The Simpsons, although even if there were, it might not have made a difference. In her recent memoir Just the Funny Parts, Nell Scovell, who worked as a freelance television writer in the early nineties, memorably describes the feeling of walking into the “all-male” Simpsons writing room, which was “welcoming, but also intimidating.” It’s hard to imagine these writers, so many of them undeniably brilliant, thinking twice about making a joke like this—and it’s frankly hard to see them rejecting it now, when it might only lead to attacks from people who, in Matt Groening’s words, “love to pretend they’re offended.”

I’m not saying that there are any subjects that should be excluded from comedic consideration, or that The Simpsons can’t joke about Lyme disease. But as I look back at the classic years of my favorite television show of all time, I’m starting to see a pattern that troubles me, and it goes far beyond Apu. I’m tempted to call it “punching down,” but it’s worse. It’s a tendency to pick what seem at the time like safe targets, and to focus with uncanny precision on comic gray areas that allow for certain forms of transgression. I know that I quoted this statement just a couple of months ago, but I can’t resist repeating what producer Bill Oakley says of Greg Daniels’s pitch about an episode on racism in Springfield:

Do you remember this? Something about Homer and Dr. Hibbert? Well, you pitched it several times and I think we were just…It was some exploration of the concept of race in Springfield, and we just said, you know, we don’t think this is the forum. The Simpsons can’t be the right forum to deal with racism.

He was probably right. But when you look at the few racially charged jokes that the show actually made, the characters involved weren’t black, but quite specifically “brown,” or members of groups that occupy a liminal space in our cultural understanding of race: Apu, Akira, Bumblebee Man. (I know that Akira was technically whiter than anybody else, but you get my drift.) By contrast, the show was very cautious when it came to its black characters. Apart from Dr. Hibbert, who was derived from Bill Cosby, the show’s only recurring black faces were Carl and Officer Lou, the latter of whom is so unmemorable that I had to look him up to make sure that he wasn’t Officer Eddie. And both Carl and Lou were given effectively the same voice by Hank Azaria, the defining feature of which was that it was as nondescript as humanly possible.

I’m not necessarily criticizing the show’s treatment of race, but the unconscious conservatism that carefully avoided potentially controversial areas while lavishing attention on targets that seemed unobjectionable. It’s hard to imagine a version of the show that would have dared to employ such stereotypes, even ironically, on Carl, Lou, or even Judge Snyder, who was so racially undefined that he was occasionally colored as white. (The show’s most transgressive black figures, Drederick Tatum and Lucius Sweet, were so transparently modeled on real people that they barely even qualified as characters. As Homer once said: “You know Lucius Sweet? He’s one of the biggest names in boxing! He’s exactly as rich and as famous as Don King, and he looks just like him, too!” And I’m not even going to talk about “Bleeding Gums” Murphy.) That joke about Miss Hoover is starting to feel much the same way, and if it took two decades for my own sensibilities to catch up with that fact, it’s for the same reasons that we’re finally taking a harder look at Apu. And if I speak as a fan, it isn’t to qualify these thoughts, but to get at the heart of why I feel obliged to write about them at all. We’re all shaped by popular culture, and I can honestly say of The Simpsons, as Jack Kerouac writes in On the Road: “All my actions since then have been dictated automatically to my subconscious by this horrible osmotic experience.” The show’s later seasons are reflexively dismissed as lazy, derivative, and reliant on easy jokes, but we still venerate its golden years. Yet if The Simpsons has gradually degraded under the watch of many of its original writers and producers, this implies that we’re only seeing the logical culmination—or eruption—of something that was there all along, afflicting its viewers years after the original bite. We all believed that The Simpsons, in its prime, was making us smarter. But what if it was just psychosomatic?

Designing the future

leave a comment »

Over the last half century or so, our culture has increasingly turned to film and television, rather than to the written word, as its primary reference point when we talk about the future. This is partially because more people are likely to have seen a blockbuster movie than to have read even the most successful novel, but the visual arts might also be more useful when it comes to certain kinds of speculation. As I browsed recently through the book Speculative Everything, I was repeatedly struck by the thought that dealing with physical materials can lead to insights that can’t be reached through words alone. In his classic New Yorker profile of Stanley Kubrick, the science writer Jeremy Bernstein provided a portrait of one such master at work:

In the film [2001], the astronauts will wear space suits when they are working outside their ships, and Kubrick was very anxious that they should look like the space suits of thirty-five years from now…They were studying a vast array of samples of cloth to find one that would look right and photograph well. While this was going on, people were constantly dropping into the office with drawings, models, letters, cables, and various props, such as a model of a lens for one of the telescopes in a spaceship. (Kubrick rejected it because it looked too crude.) At the end of the day, when my head was beginning to spin, someone came by with a wristwatch that the astronauts were going to use on their Jupiter voyage (which Kubrick rejected) and a plastic drinking glass for the moon hotel (which Kubrick thought looked fine).

This is a level of detail that most writers would lack the patience or ability to develop, and even if it were possible, there’s a huge difference between describing such objects at length on the page, which is rightly discouraged, and showing them to the viewer without comment. It can also lead to new ideas or discoveries that can feed into the story itself. I never tire of quoting a piece of advice from Shamus Culhane’s Animation: From Script to Screen, in which he recommends using a list of props to generate plot points and bits of business for a short cartoon:

One good method of developing a story is to make a list of details. For example [for a cartoon about elves as clock cleaners in a cathedral], what architectural features come to mind—steeples, bells, windows, gargoyles? What props would the elves use—brushes, pails, mops, sponges…what else? Keep on compiling lists without stopping to think about them. Let your mind flow effortlessly, and don’t try to be neat or orderly. Scribble as fast as you can until you run out of ideas.

In animation—or in a medium like comics or the graphic novel—this kind of brainstorming requires nothing more than a pencil and piece of paper. Kubrick’s great achievement in 2001 was to spend the same amount of time and attention, as well as considerably more money, on solving design problems in tangible form, and in the process, he set a standard for this kind of speculation that both filmmakers and other artists have done their best to meet ever since.

In Speculative Everything, Anthony Dunne and Fiona Raby suggest that the function of a prop in a movie might limit the range of possibilities that it can explore, since it has “to be legible and support plot development.” But this might also be a hidden strength. I don’t think it’s an accident that Minority Report is both the most influential piece of futurology in recent memory and one of the few science fiction films that manages to construct a truly ingenious mystery. And in another masterpiece from the same period, Children of Men, you can clearly see the prop maker’s pragmatism at work. Dunne and Raby quote the director Alfonso Cuarón, who says in one of the special features on the DVD:

Rule number one in the film was recognizability. We didn’t want to do Blade Runner. Actually, we thought about being the anti-Blade Runner in the sense of how we were approaching reality, and that was kind of difficult for the art department, because I would say, “I don’t want inventiveness. I want reference. Don’t show me the great idea, show me the reference in real life. And more importantly, I would like—as much as possible—references of contemporary iconography that is already engraved in human consciousness.”

Consciously or otherwise, Cuarón is echoing one of my favorite pieces of writing advice from David Mamet, who had exactly one rule when it came to designing props: “You’ve got to be able to recognize it.” And the need to emphasize clarity and readability in unfamiliar contexts can push production designers in directions that they never would have taken otherwise.

Yet there’s also a case to be made for engaging in visual or sculptural thinking for its own sake, which is what makes speculative design such an interesting avenue of exploration. Dunne and Raby focus on more recent examples, but there’s a surprisingly long history of futurology in pictures. (For instance, a series of French postcards dating from the late nineteenth century imagined life a hundred years in the future, which Isaac Asimov discusses in his book Futuredays, and the book and exhibition Yesterday’s Tomorrows collects many other vintage examples of artwork about the future of America.) Some of these efforts lack the discipline that a narrative imposes, but the physical constraints of the materials can lead to a similar kind of ingenuity, and the result is a distinct tradition that draws on a different set of skills than the ones that writers tend to use. But the best solution might be one that combines both words and images at a reasonable cost. The science fiction of the golden age can sometimes seem curiously lacking in visual description—it can be hard to figure out how anything is supposed to look in Asimov’s stories—and such magazines as Astounding leaned hard on their artists to fill in the blanks. And this might have been a reasonable division of labor. The fans don’t seem to have made any distinction between the stories and their illustrations, and both played a crucial role in defining the genre. Movies and television may be our current touchstones for the future, but the literary and visual arts have been conspiring to imagine the world of tomorrow for longer than we tend to remember. As Speculative Everything demonstrates, each medium can come up with remarkable things when allowed to work on its own. But they have even more power when they join forces.

A season of disenchantment

leave a comment »

A few days ago, Matt Groening announced that his new animated series, Disenchantment, will premiere in August on Netflix. Under other circumstances, I might have been pleased by the prospect of another show from the creator of The Simpsons and Futurama—not to mention producers Bill Oakley and Josh Weinstein—and I expect that I’ll probably watch it. At the moment, however, it’s hard for me to think about Groening at all without recalling his recent reaction to the long overdue conversation around the character of Apu. When Bill Keveney of USA Today asked earlier this month if he had any thoughts on the subject, Groening replied: “Not really. I’m proud of what we do on the show. And I think it’s a time in our culture where people love to pretend they’re offended.” It was a profoundly disappointing statement, particularly after Hank Azaria himself had expressed his willingness to step aside from the role, and it was all the more disillusioning coming from a man whose work has been a part of my life for as long as I can remember. As I noted in my earlier post, the show’s unfeeling response to this issue is painful because it contradicts everything that The Simpsons was once supposed to represent. It was the smartest show on television; it was simply right about everything; it offered its fans an entire metaphorical language. And as the passage of time reveals that it suffered from its own set of blinders, it doesn’t just cast doubt on the series and its creators, but on the viewers, like me, who used it for so long as an intellectual benchmark.

And it’s still an inescapable part of my personal lexicon. Last year, for instance, when Elon Musk defended his decision to serve on Trump’s economic advisory council, I thought immediately of what Homer says to Marge in “Whacking Day”: “Maybe if I’m part of that mob, I can help steer it in wise directions.” Yet it turns out that I might have been too quick to give Musk—who, revealingly, was the subject of an adulatory episode of The Simpsons—the benefit of the doubt. A few months later, in response to reports of discrimination at Tesla, he wrote an email to employees that included this remarkable paragraph:

If someone is a jerk to you, but sincerely apologizes, it is important to be thick-skinned and accept that apology. If you are part of a lesser represented group, you don’t get a free pass on being a jerk yourself. We have had a few cases at Tesla were someone in a less represented group was actually given a job or promoted over more qualified highly represented candidates and then decided to sue Tesla for millions of dollars because they felt they weren’t promoted enough. That is obviously not cool.

The last two lines, which were a clear reference to the case of A.J. Vandermeyden, tell us more about Musk’s idea of a “sincere apology” than he probably intended. And when Musk responded this week to criticism of Tesla’s safety and labor practices by accusing the nonprofit Center for Investigative Reporting of bias and proposing a site where users could provide a “credibility score” for individual journalists, he sounded a lot like the president whose circle of advisers he only reluctantly left.

Musk, who benefited from years of uncritical coverage from people who will forgive anything as long as you talk about space travel, seems genuinely wounded by any form of criticism or scrutiny, and he lashes out just as Trump does—by questioning the motives of ordinary reporters or sources, whom he accuses of being in the pocket of unions or oil companies. Yet he’s also right to be worried. We’re living in a time when public figures and institutions are going to be judged by their responses to questions that they would rather avoid, which isn’t likely to change. And the media itself is hardly exempt. For the last two weeks, I’ve been waiting for The New Yorker to respond to stories about the actions of two of its most prominent contributors, Junot Díaz and the late David Foster Wallace. I’m not even sure what I want the magazine to do, exactly, except make an honest effort to grapple with the situation, and maybe even offer a valuable perspective, which is why I read it in the first place. (In all honesty, it fills much the same role in my life these days as The Simpsons did in my teens. As Norman Mailer wrote back in the sixties: “Hundreds of thousands, perhaps millions of people in the most established parts of the middle class kill their quickest impulses before they dare to act in such a way as to look ridiculous to the private eye of their taste whose style has been keyed by the eye of The New Yorker.”) As the days passed without any comment, I assumed that it was figuring out how to tackle an admittedly uncomfortable topic, and I didn’t expect it to rush. Now that we’ve reached the end of the month without any public engagement at all, however, I can only conclude that it’s deliberately ignoring the matter in hopes that it will go away. I hope that I’m wrong. But so far, it’s a discouraging omission from a magazine whose stories on Harvey Weinstein and Eric Schneiderman implicitly put it at the head of an entire movement.

The New Yorker has evidently discovered that it’s harder to take such stands when they affect people whom we know or care about—which only means that it can get in line. Our historical moment has forced some of our smartest individuals and organizations to learn how to take criticism as well as to give it, and it’s often those whose observations about others have been the sharpest who turn out to be singularly incapable, as Clarice Starling once put it, when it comes to pointing that high-powered perception at themselves. (In this list, which is constantly being updated, I include Groening, Musk, The New Yorker, and about half the cast of Arrested Development.) But I can sympathize with their predicament, because I feel it nearly every day. My opinion of Musk has always been rather mixed, but nothing can dislodge the affection and gratitude that I feel toward the first eight seasons of The Simpsons, and I expect to approvingly link to an article in The New Yorker this time next week. But if our disenchantment forces us to question the icons whose influence is fundamental to our conception of ourselves, then perhaps it will have been worth the pain. Separating our affection for the product from those who produced it is a problem that we all have to confront, and it isn’t going to get any easier. As I was thinking about this post yesterday, the news broke that Morgan Freeman had been accused by multiple women of inappropriate behavior. In response, he issued a statement that read in part: “I apologize to anyone who felt uncomfortable or disrespected.” It reminded me a little of another man who once grudgingly said of some remarks that were caught on tape: “I apologize if anyone was offended.” But it sounds a lot better when you imagine it in Morgan Freeman’s voice.

Written by nevalalee

May 25, 2018 at 9:21 am

The bedtime story

leave a comment »

Earlier this morning, I finally got my hands on the companion book to James Cameron’s Story of Science Fiction, which is airing this month on AMC. Naturally, I immediately looked for references to the four main subjects of Astounding, and the passage that caught my eye first was an exchange between Cameron and Steven Spielberg:

Spielberg: The working title of E.T. was Watch the Skies. Which is sort of the last line from The Thing. I just remember looking at the sky because of the influence of my father, and saying, only good should come from that. If it ain’t an ICBM coming from the Soviet Union, only good should come from beyond our gravitational hold…He was a visionary about that, yet he read all the Analog. Those paperbacks? And Amazing Stories, the paperbacks of that. I used to read that along with him. Sometimes, he’d read those books to me, those little tabloids to me at night.

Cameron: Asimov, Heinlein, all those guys were all published in those pulp magazines.

Spielberg: They were all published in those magazines, and a lot of them were optimists. They weren’t always calculating our doom. They were finding ways to open up our imagination and get us to dream and get us to discover and get us to contribute to the greater good.

The discussion quickly moves on to other subjects, but not before hinting at the solution to a mystery that I’ve been trying to figure out for years, which is why the influence of Astounding and its authors can be so hard to discern in the work of someone like Spielberg. In part, it’s a matter of timing. Spielberg was born in 1946, which means that he would have been thirteen when John W. Campbell announced that his magazine was changing its title to Analog. As a result, at a point at which he should have been primed to devour science fiction, Spielberg doesn’t seem to have found its current incarnation all that interesting, for which you can hardly blame him. Instead, his emotional associations with the pulps were evidently passed down through his father, Arnold Spielberg, an electrical engineer who worked for General Electric and RCA. The elder Spielberg, remarkably, is still active at the age of 101, and just two months ago, he said in an interview with GE Reports:

I was also influenced by science fiction. There were twins in our neighborhood who read one of the first sci-fi magazines, called Astounding Stories of Science and Fact. They gave me one copy, and when I brought it home, I was hooked. The magazine is now called Analog Science Fiction and Fact, and I still get it.

And while I don’t think that there’s any way of verifying it, if Arnold Spielberg—the father of Steven Spielberg—isn’t the oldest living subscriber to Analog, he must be close.

This sheds light on his son’s career, although perhaps not in the way that you might think. Spielberg is such a massively important figure that his very existence realigns the history of the genre, and when he speaks of his influences, we need to be wary of the shadow cast by his inescapable personality. But there’s no denying the power—and truth—of the image of Arnold Spielberg reading from the pulps aloud to his son. It feels like an image from one of Spielberg’s own movies, which has been shaped from the beginning by the tradition of oral storytelling. (It’s worth noting, though, that the father might recall things differently than the son. In his biography of the director, Joseph McBride quotes Arnold Spielberg: “I’ve been reading science fiction since I was seven years old, all the way back to the earliest Amazing Stories. Amazing, Astounding, Analog—I still subscribe. I still read ’em. My kids used to complain, ‘Dad’s in the bathroom with a science-fiction magazine. We can’t get in.'”) For Spielberg, the stories seem inextricably linked with the memory of being taken outside by his father to look at the stars:

My father was the one that introduced me to the cosmos. He’s the one who built—from a big cardboard roll that you roll rugs on—a two-inch reflecting telescope with an Edmund Scientific kit that he had sent away for. [He] put this telescope together, and then I saw the moons of Jupiter. It was the first thing he pointed out to me. I saw the rings of Saturn around Saturn. I’m six, seven years old when this all happened.

Spielberg concludes: “Those were the stories, and just looking up at the sky, that got me to realize, if I ever get a chance to make a science fiction movie, I want those guys to come in peace.”

But it also testifies to the ways in which a strong personality will take exactly what it needs from its source material. Elsewhere in the interview, there’s another intriguing reference:

Spielberg: I always go for the heart first. Of course, sometimes I go for the heart so much I get a little bit accused of sentimentality, which I’m fine [with] because…sometimes I need to push it a little further to reach a little deeper into a society that is a little less sentimental than they were when I was a young filmmaker.

Cameron: You pushed it in the same way that John W. Campbell pushed science fiction [forward] from the hard-tech nerdy guys who had to put PhD after their name to write science fiction. It was all just about the equations and the math and the physics [and evolved to become much more] human stories [about] the human heart.

I see what Cameron is trying to say here, but if you’ve read enough of the magazine that turned into Analog, this isn’t exactly the impression that it leaves. It’s true that Campbell put a greater emphasis than most of his predecessors on characterization, at least in theory, but the number of stories that were about “the human heart” can be counted on two hands, and none were exactly Spielbergian—although they might seem that way when filtered through the memory of his father’s voice. And toward the end, the nerds took over again. In Dangerous Visions, which was published in 1967, Harlan Ellison wrote of “John W. Campbell, Jr., who used to edit a magazine that ran science fiction, called Astounding, and who now edits a magazine that runs a lot of schematic drawings, called Analog.” It was the latter version of the magazine that Spielberg would have seen as a boy—which may be why, when the time came, he made a television show called Amazing Stories.

The multiverse theory

leave a comment »

Yesterday, I flew back from the Grappling with the Futures symposium, which was held over the course of two days at Harvard and Boston University. I’d heard about the conference from my friend Emanuelle Burton, a scholar at the University of Illinois at Chicago, whom I met two years ago through the academic track at the World Science Fiction Convention in Kansas City. Mandy proposed that we collaborate on a presentation at this event, which was centered on the discipline of futures studies, a subject about which I knew nothing. For reasons of my own, though, I was interested in making the trip, and we put together a talk titled Fictional Futures, which included a short history of the concept of psychohistory. The session went fine, even if we ended up with more material than we could reasonably cover in twenty minutes. But I was equally interested in studying the people around me, who were uniformly smart, intense, quirky, and a little mysterious. Futures studies is an established academic field that draws on many of the tools and concepts of science fiction, but it uses a markedly different vocabulary. (One of the scheduled keynote speakers has written and published a climate change novella, just like me, except that she describes it as a “non-numerical simulation model.”) It left me with the sense of a closed world that evolved in response to the same problems and pressures that shaped science fiction, but along divergent lines, and I still wonder what might come of a closer relationship between the two communities.

As it happened, I had to duck out after the first day, because I had something else to do in Boston. Ever since I started work on Astounding, I’ve been meaning to pay a visit to the Isaac Asimov collection at the Howard Gotlieb Archival Research Center at Boston University, which houses the majority of Asimov’s surviving papers, but which can only be viewed in person. Since I was going to be in town anyway, I left the symposium early and headed over to the library, where I spent five hours yesterday going through what I could. When you arrive at the reading room, you sign in, check your bag and cell phone, and are handed a massive finding aid, an inventory of the Asimov collection that runs to more than three hundred pages. (The entire archive, which consists mostly of work that dates from after the early sixties, fills four hundred boxes.) After marking off the items that you want, you’re rewarded with a cart loaded with archival cartons and a pair of white gloves. At the back of my mind, I wasn’t expecting to find much—I’ve been gathering material for this book for years. As it turned out, there were well over a hundred letters between Asimov, Campbell, and Heinlein alone that I hadn’t seen before. You aren’t allowed to take pictures or make photocopies, so I typed up as many notes as I could before I had to run to catch my plane. For the most part, they fill out parts of the story that I already have, and they won’t fundamentally change the book. But in an age of digital research, I was struck by the fact that all this paper, of which I just scratched the surface, is only accessible to scholars who can physically set foot in the reading room at the Mugar Library.

After two frantic days, I finally made it home, where my wife and I watched last night’s premiere of James Cameron’s Story of Science Fiction on AMC. At first glance, this series might seem like the opposite of my experiences in Boston. Instead of being set apart from the wider world, it’s an ambitious attempt to appeal to the largest audience possible, with interviews with the likes of Steven Spielberg and Christopher Nolan and discussions of such works as Close Encounters and Alien. I’ve been looking forward to this show for a long time, not least because I was hoping that it would lead to a spike in interest in science fiction that would benefit my book, and the results were more or less what I expected. In the opening sequence, you briefly glimpse Heinlein and Asimov, and there’s even a nod to The Thing From Another World, although no mention of John W. Campbell himself. For the most part, though, the series treats the literary side as a precursor to its incarnations in the movies and television, which is absolutely the right call. You want to tell this story as much as possible through images, and the medium lends itself better to H.R. Giger than to H.P. Lovecraft. But when I saw a brief clip of archival footage of Ray Bradbury, in his role in the late seventies as an ambassador for the genre, I found myself thinking of the Bradbury whom I know best—the eager, unpublished teenager in the Great Depression who wrote fan letters to the pulps, clung to the edges of the Heinlein circle, and never quite managed to break into Astounding. It’s a story that this series can’t tell, and I can’t blame it, because I didn’t really do it justice, either.

Over the last few days, I’ve been left with a greater sense than ever before of the vast scope and apparently irreconcilable aspects of science fiction, which consists of many worlds that only occasionally intersect. It’s a realization, or a recollection, that might seem to come at a particularly inopportune time. The day before I left for the symposium, I received the page proofs for Astounding, which normally marks the point at which a book can truly be said to be finished. I still have time to make a few corrections and additions, and I plan to fix as much of it as I can without driving my publisher up the wall. (There are a few misplaced commas that have been haunting my dreams.) I’m proud of the result, but when I look at the proofs, which present the text as an elegant and self-contained unit, it seems like an optical illusion. Even if I don’t take into account what I learned when it was too late, I’m keenly aware of everything and everyone that this book had to omit. I’d love to talk more about futures studies, or the letters that I dug up in the Asimov archives, or the practical effects in John Carpenter’s remake of The Thing, but there just wasn’t room or time. As it stands, the book tries to strike a balance between speaking to obsessive fans and appealing to a wide audience, which meant excluding a lot of fascinating material that might have survived if it were being published by a university press. It can’t possibly do everything, and the events of the weekend have only reminded me that there are worlds that I’ve barely even explored. But if that isn’t the whole point of science fiction—well, what is?

Into the West

leave a comment »

A few months ago, I was on the phone with a trusted adviser to discuss some revisions to Astounding. We were focusing on the prologue, which I had recently rewritten from scratch to make it more accessible to readers who weren’t already fans of science fiction. Among other things, I’d been asked to come up with ways in which the impact of my book’s four subjects was visible in modern pop culture, and after throwing some ideas back and forth, my adviser asked me plaintively: “Couldn’t you just say that without John W. Campbell, we wouldn’t have Game of Thrones?” I was tempted to give in, but I ultimately didn’t—it just felt like too much of a stretch. (Which isn’t to say that the influence isn’t there. When a commenter on his blog asked whether his work had been inspired by the mythographer Joseph Campbell, George R.R. Martin replied: “The Campbell that influenced me was John W., not Joseph.” And that offhand comment was enough of a selling point that I put it in the very first sentence of my book proposal.) Still, I understood the need to frame the story in ways that would resonate with a mainstream readership, and I thought hard about what other reference points I could honestly provide. Star Trek was an easy one, along with such recent movies as Interstellar and The Martian, but the uncomfortable reality is that much of what we call science fiction in film and television has more to do with Star Wars. But I wanted to squeeze in one last example, and I finally settled on this line about Campbell: “For more than three decades, an unparalleled series of visions of the future passed through his tiny office in New York, where he inaugurated the main sequence of science fiction that runs through works from 2001 to Westworld.”

As the book is being set in type, I’m still comfortable with this sentence as it stands, although there are a few obvious qualifications that ought to be made. Westworld, of course, is based on a movie written and directed by Michael Crichton, whose position in the history of the genre is a curious one. As I’ve written elsewhere, Crichton was an unusually enterprising author of paperback thrillers who found himself with an unexpected blockbuster in the form of The Andromeda Strain. It was his sixth novel, and his first in hardcover, and it seems to have benefited enormously from the input of editor Robert Gottlieb, who wrote in his memoir Avid Reader:

The Andromeda Strain was a terrific concept, but it was a mess—sloppily plotted, underwritten, and worst of all, with no characterization whatsoever. [Crichton’s] scientists were beyond generic—they lacked all human specificity; the only thing that distinguished some of them from the others was that some died and some didn’t. I realized right away that with his quick mind, swift embrace of editorial input, and extraordinary work habits he could patch the plot, sharpen the suspense, clarify the science—in fact, do everything necessary except create convincing human beings. (He never did manage to; eventually I concluded that he couldn’t write about people because they just didn’t interest him.) It occurred to me that instead of trying to help him strengthen the human element, we could make a virtue of necessity by stripping it away entirely; by turning The Andromeda Strain from a documentary novel into a fictionalized documentary. Michael was all for it—I think he felt relieved.

The result, to put it mildly, did quite well, and Crichton quickly put its lessons to work. But it’s revealing that the flaws that Gottlieb cites—indifferent plotting, flat writing, and a lack of real characterization—are also typical of even some of the best works of science fiction that came out of Campbell’s circle. Crichton’s great achievement was to focus relentlessly on everything else, especially readability, and it’s fair to say that he did a better job of it than most of the writers who came up through Astounding and Analog. He was left with the reputation of a carpetbagger, and his works may have been too square and fixated on technology to ever be truly fashionable. Yet a lot of it can be traced back to his name on the cover. In his story “Pierre Menard, Author of the Quixote,” Jorge Luis Borges speaks of enriching “the slow and rudimentary act of reading by means of a new technique—the technique of deliberate anachronism and fallacious attribution.” In this case, it’s pretty useful. I have a hunch that if The Terminal Man, Congo, and Sphere had been attributed on their first release to Robert A. Heinlein, they would be regarded as minor classics. They’re certainly better than many of the books that Heinlein was actually writing around the same time. And if I’m being honest, I should probably confess that I’d rather read Jurassic Park again than any of Asimov’s novels. (As part of my research for this book, I dutifully made my way through Asimov’s novelization of Fantastic Voyage, which came out just three years before The Andromeda Strain, and his fumbling of that very Crichtonesque premise only reminded me of how good at this sort of thing Crichton really was.) If Crichton had been born thirty years earlier, John W. Campbell would have embraced him like a lost son, and he might well have written a better movie than Destination Moon.

At its best, the television version of Westworld represents an attempt to reconcile Crichton’s gifts for striking premises and suspense with the more introspective mode of the genre to which he secretly belongs. (It’s no accident that Jonathan Nolan had been developing it in parallel with Foundation.) This balance hasn’t always been easy to manage, and last night’s premiere suggests that it can only become more difficult going forward. Westworld has always seemed defined by the pattern of forces that were acting on it—its source material, its speculative and philosophical ambitions, and the pressure of being a flagship drama on HBO. It also has to deal now with the legacy of its own first season, which set a precedent for playing with time, as well as the scrutiny of viewers who figured it out prematurely. The stakes here are established early on, with Bernard awakening on a beach in a sequence that seems like a nod to the best film by Nolan’s brother, and this time around, the parallel timelines are put front and center. Yet the strain occasionally shows. The series is still finding itself, with characters, like Dolores, who seem to be thinking through their story arcs out loud. It’s overly insistent on its violence and nudity, but it’s also cerebral and detached, with little of the real emotional pain that the third season of Twin Peaks was able to inflict. I don’t know if the center will hold. Yet it’s also possible that these challenges were there from the beginning, as the series tried to reconcile Crichton’s tricks with the tradition of science fiction that it clearly honors. I still believe that this show is in the main line of the genre’s development. Its efforts to weave together its disparate influences strike me as worthwhile and important. And I hope that it finds its way home.

The men who sold the movies

with one comment

Yesterday, I noted that although Isaac Asimov achieved worldwide fame as a science fiction writer, his stories have inspired surprisingly few cinematic adaptations, despite the endless attempts to do something with the Foundation series. But there’s a more general point to be made here, which is the relative dearth of movies based on the works of the four writers whom I discuss in Astounding. Asimov has a cheap version of Nightfall, Bicentennial Man, and I, Robot. John W. Campbell has three versions of The Thing and nothing else. L. Ron Hubbard, who admittedly is a special case, just has Battlefield Earth, while Robert A. Heinlein has The Puppet Masters, Starship Troopers and its sequels, and the recent Predestination. Obviously, this isn’t bad, and most writers, even successful ones, never see their work onscreen at all. But when you look at someone like Philip K. Dick, whose stories have been adapted into something like three television series and ten feature films, this scarcity starts to seem odd, even when you account for other factors. Hubbard is presumably off the table, and the value of Campbell’s estate, to be honest, consists entirely of “Who Goes There?” It’s also possible that much of Asimov’s work just isn’t very cinematic. But if you’re a Heinlein fan, it’s easy to imagine an alternate reality in which we can watch adaptations of “If This Goes On—,” “The Roads Must Roll,” “Universe,” “Gulf,” Tunnel in the Sky, Have Space Suit—Will Travel, Glory Road, The Moon is a Harsh Mistress, and three different versions of Stranger in a Strange Land—the corny one from the seventies, the slick but empty remake from the late nineties, and the prestige television adaptation that at least looked great on Netflix.

That isn’t how it turned out, but it wasn’t for lack of trying. Various works by Heinlein and Asimov have been continuously under option for decades, and three out of these four authors made repeated efforts to break into movies or television. Hubbard, notably, was the first, with a sale to Columbia Pictures of an unpublished story that he adapted into the serial The Secret of Treasure Island in 1938. He spent ten weeks on the studio lot, doing uncredited rewrites on the likes of The Adventures of the Mysterious Pilot and The Great Adventures of Wild Bill Hickok, and he would later claim, without any evidence, that he had worked on the scripts for Stagecoach, Dive Bomber, and The Plainsman. Decades later, Hubbard actively shopped around the screenplay for Revolt in the Stars, an obvious Star Wars knockoff, and among his last works were the scripts Ai! Pedrito! and A Very Strange Trip. Campbell, in turn, hosted the radio series Exploring Tomorrow; corresponded with the producer Clement Fuller about the television series The Unknown, with an eye to adapting his own stories or writing originals; and worked briefly as a freelance story editor for the syndicated radio series The Planet Man. Heinlein had by far the most success—he wrote Rocket Ship Galileo with one eye toward the movies, and he developed a related project with Fritz Lang before partnering with George Pal on Destination Moon. As I mentioned last week, he worked on the film Project Moon Base and an unproduced teleplay for a television show called Century XXII, and he even had the dubious privilege of suing Roger Corman for plagiarism over The Brain Eaters. And Asimov seethed with jealousy:

[Destination Moon] was the first motion picture involving one of us, and while I said not a word, I was secretly unhappy. Bob had left our group and become famous in the land of the infidels…I don’t know whether I simply mourned his loss, because I thought that now he would never come back to us; or whether I was simply and greenly envious. All I knew was that I felt more and more uncomfortable. It was like having a stomachache in the mind, and it seemed to spoil all my fun in being a science fiction writer.

But Asimov remained outwardly uninterested in the movies, writing of one mildly unpleasant experience: “It showed me again what Hollywood was like and how fortunate I was to steer as clear of it as possible.” It’s also hard to imagine him moving to Los Angeles. Yet he was at least open to the possibility of writing a story for Paul McCartney, and his work was often in development. In Nat Segaloff’s recent biography A Lit Fuse: The Provocative Life of Harlan Ellison, we learn that the television producer John Mantley had held an option on I, Robot “for some twenty years” when Ellison was brought on board in 1978. (This isn’t exactly right—Asimov states in his memoirs that Mantley first contacted him on August 11, 1967, and it took a while for a contract to be signed. But it was still a long time.) Asimov expressed hope that the adaptation would be “the first really adult, complex, worthwhile science fiction movie ever made,” which incidentally sheds light on his opinion of 2001, but it wasn’t meant to be. As Segaloff writes:

For a year from December 1977 Ellison was, as he has put it, “consumed with the project.” He used Asimov’s framework of a reporter, Robert Bratenahl, doing a story about Susan Calvin’s former lover, Stephen Byerly, and presented four of Calvin’s stories as flashbacks, making her the central figure, even in events that she could not have witnessed. It was a bold and admittedly expensive adaptation…When no response was forthcoming, Ellison arranged an in-person meeting with [Warner executive Bob] Shapiro on October 25, 1978, during which he realized that the executive had not read the script.

Ellison allegedly told Shapiro: “You’ve got the intellectual capacity of an artichoke.” He was fired from the project a few months later.

And the case of I, Robot hints at why these authors have had only limited success in Hollywood. As Segaloff notes, the burst of interest in such properties was due mostly to the success of Star Wars, and after Ellison left, a few familiar names showed up:

Around June 1980, director Irvin Kershner, who had made a success with The Empire Strikes Back, expressed interest, but when he was told that Ellison would not be rehired to make changes, according to Ellison his interest vanished…In 1985, Gary Kurtz, who produced the Star Wars films, made inquiries but was told that the project would cost too much to shoot, both because of its actual budget and the past expenses that had been charged against it.

At various points, in other words, many of the same pieces were lined up for I, Robot that had been there just a few years earlier for Star Wars. (It’s worth noting that less time separates Star Wars from these abortive attempts than lies between us and Inception, which testifies to how vivid its impact still was.) But it didn’t happen a second time, and I can think of at least one good reason. In conceiving his masterpiece, George Lucas effectively skipped the golden age entirely to go back to an earlier pulp era, which spoke more urgently to him and his contemporaries—which may be why we had a television show called Amazing Stories and not Astounding. Science fiction in the movies often comes down to an attempt to recreate Star Wars, and if that’s your objective, these writers might as well not exist.

Foundation and Hollywood

with 2 comments

Yesterday, the news broke that Isaac Asimov’s Foundation trilogy will finally be adapted for television. I’ve learned to be skeptical of such announcements, but the package that they’ve assembled sounds undeniably exciting. As we learn from an article in The Wrap:

HBO and Warner Bros. TV are teaming to produce a series based on Isaac Asimov’s Foundation trilogy that will be written and produced by Interstellar writer Jonathan Nolan…Nolan, who is already working with HBO on Westworld, has been quietly developing the project for the last several months. He recently tipped his hand to Indiewire, which asked him: “What’s the one piece of science fiction you truly love that people don’t know enough about?” [Nolan replied:] “Well, I fucking love the Foundation novels by Isaac Asimov…That’s a set of books I think everyone would benefit from reading.”

Whoops, my mistake—that’s a story from two years ago. The latest attempt will be developed by David S. Goyer and Josh Friedman for Apple, which acquired it from Skydance Television in what Deadline describes as “a competitive situation.” And when you turn back the clock even further, you find that efforts to adapt the trilogy were made in the nineties by New Line Cinema, which went with The Lord of the Rings instead, and even by Roland Emmerich, who might be the last director whom you’d entrust with this material. There were probably other projects that have been long since forgotten. And it doesn’t take a psychohistorian to realize that the odds are stacked against this new version ever seeing the light of day.

Why has the Foundation series remained so alluring to Hollywood, yet so resistant to adaptation? For a clue, we can turn to Asimov himself. In the early eighties, he was approached by Doubleday to write his first new novel in years, and an editor laid out the situation in no uncertain terms: “Listen, Isaac, let me make it clear. When [editor Betty Prashker] said ‘a novel,’ she meant ‘a science fiction novel,’ and when we say ‘a science fiction novel,’ we mean ‘a Foundation novel.’ That’s what we want.” Asimov was daunted, but the offer was too generous to refuse, so he decided to give it a try. As he recounts in his memoir I. Asimov:

Before I got started, I would have to reread the Foundation trilogy. This I approached with a certain horror…I couldn’t help noticing, of course, that there was not very much action in it. The problems and resolutions thereof were expressed primarily in dialogue, in competing rational discussions from different points of view, with no clear indication to the reader which view was right and which was wrong.

This didn’t mean that the trilogy wasn’t engaging—Asimov thought that “it was a page-turner,” and when he was done, he was surprised by his personal reaction: “I experienced exactly what readers had been telling me for decades—a sense of fury that it was over and there was no more.” But if you’re looking to adapt it into another medium, you quickly find that there isn’t a lot there in terms of conventional drama or excitement. As Omar Sharif once said about Lawrence of Arabia: “If you are the man with the money and somebody comes to you and says he wants to make a film…with no stars, and no women, and no love story, and not much action either…what would you say?”

In fact, it’s hard to pin down exactly what the Foundation series—or at least the first book—has to offer the movies or television. Speaking as a fan, I can safely state that it doesn’t have memorable characters, iconic scenes, or even much in the way of background. If I were hired to adapt it, I might react in much the same way that William Goldman did when he worked on the movie version of Maverick. Goldman confesses in Which Lie Did I Tell? that his reasons for taking the assignment were simple: “I knew it would be easy…The last thing in life I wanted was to try another original. This adaptation had to be a breeze—all I needed to do was pick out one of the old [episodes] that had too much plot, expand it, and there would be the movie.” He continues:

One of the shocks of my life happened in my living room, where I spent many hours looking at the old Maverick shows I’d been sent. Because, and this was the crusher, television storytelling has changed…Not only was the [James] Garner character generally passive, there was almost no plot at all. Nothing for me to steal. I essentially had to write, sob, another original.

Similarly, the Foundation series gives a writer almost nothing to steal. Once you get to “The Mule,” the action picks up considerably, but that’s obviously your second—or even your third—season, not your first. In the meantime, you’re left with the concept of psychohistory and nothing else. You have to write another original. Which is essentially what happened with I, Robot.

And even psychohistory can be more trouble than it might be worth. It works most convincingly over the course of years or decades, which isn’t a timeframe that lends itself to movies or television, and it naturally restricts the ability of the characters to take control of the story. Which isn’t to say that it’s impossible. (In fact, I have some decent ideas of my own, but I’ll keep them to myself, in case Goyer and Friedman ever want to take a meeting. My schedule is pretty packed at the moment, but it frees up considerably in a few months.) But it’s worth asking why the Foundation series has been such a tempting target for so long. It’s clearly a recognizable property, which is valuable in itself, and its highbrow reputation makes it seem like a promising candidate for a prestige adaptation, although even a glance at the originals shows how deeply they remain rooted in the pulp tradition from which they emerged. If I were a producer looking to move into science fiction with a big acquisition, this would be one of the first places that I’d look, even if these stories aren’t exactly what they seem to be—the Deadline article says that they “informed” the Star Wars movies, which is true only in the loosest possible sense. When you combine the apparent value of the material with the practical difficulty of adapting it, you end up with the cycle that we’ve seen for decades. Asimov was the most famous name in science fiction for thirty years, and his works were almost perpetually under option, but apart from a quickie adaptation of Nightfall, he died before seeing any of it on the screen. He was glad to take the money, but he knew that his particular brand of fiction wouldn’t translate well to other media, and he concluded with what he once called Asimov’s First Law of Hollywood: “Whatever happens, nothing happens.”

Who Needs the Kwik-E-Mart?

leave a comment »

Who needs the Kwik-E-Mart?
Now here’s the tricky part…

“Homer and Apu”

On October 8, 1995, The Simpsons aired the episode “Bart Sells His Soul,” which still hasn’t stopped rattling around in my brain. (A few days ago, my daughter asked: “Daddy, what’s the soul?” I may have responded with some variation on Lisa’s words: “Whether or not the soul is physically real, it’s the symbol of everything fine inside us.” On a more typical morning, though, I’m likely to mutter to myself: “Remember Alf? He’s back—in pog form!”) It’s one of the show’s finest installments, but it came close to being about something else entirely. On the commentary track for the episode, the producer Bill Oakley recalls:

There’s a few long-lived ideas that never made it. One of which is David Cohen’s “Homer the Narcoleptic,” which we’ve mentioned on other tracks. The other one was [Greg Daniels’s] one about racism in Springfield. Do you remember this? Something about Homer and Dr. Hibbert? Well, you pitched it several times and I think we were just…It was some exploration of the concept of race in Springfield, and we just said, you know, we don’t think this is the forum. The Simpsons can’t be the right forum to deal with racism.

Daniels—who went on to create Parks and Recreation and the American version of The Office—went with the pitch for “Bart Sells His Soul” instead, and the other premise evidently disappeared forever, including from his own memory. When Oakley brings it up, Daniels only asks: “What was it?”

Two decades later, The Simpsons has yet to deal with race in any satisfying way, even when the issue seems unavoidable. Last year, the comedian Hari Kondabolu released the documentary The Problem With Apu, which explores the complicated legacy of one of the show’s most prominent supporting characters. On Sunday, the show finally saw fit to respond to these concerns directly, and the results weren’t what anyone—apart perhaps from longtime showrunner Al Jean—might have wanted. As Sopan Deb of the New York Times describes it:

The episode, titled “No Good Read Goes Unpunished,” featured a scene with Marge Simpson sitting in bed with her daughter Lisa, reading a book called “The Princess in the Garden,” and attempting to make it inoffensive for 2018. At one point, Lisa turns to directly address the TV audience and says, “Something that started decades ago and was applauded and inoffensive is now politically incorrect. What can you do?” The shot then pans to a framed picture of Apu at the bedside with the line, “Don’t have a cow!” inscribed on it. Marge responds: “Some things will be dealt with at a later date.” Followed by Lisa saying, “If at all.”

Kondabolu responded on Twitter: “This is sad.” And it was. As Linda Holmes of NPR aptly notes: “Apu is not appearing in a fifty-year-old book by a now-dead author. Apu is a going concern. Someone draws him, over and over again.” And the fact that the show decided to put these words into the mouth of Lisa Simpson, whose importance to viewers everywhere was recently underlined, makes it doubly disappointing.

But there’s one obvious change that The Simpsons could make, and while it wouldn’t be perfect, it would be a step in the right direction. If the role of Apu were recast with an actor of South Asian descent, it might not be enough in itself, but I honestly can’t see a downside. Hank Azaria would still be allowed to voice dozens of characters. Even if Apu sounded slightly different than before, this wouldn’t be unprecedented—Homer’s voice changed dramatically after the first season, and Julie Kavner’s work as Marge is noticeably more gravelly than it used to be. Most viewers who are still watching probably wouldn’t even notice, and the purists who might object undoubtedly left a long time ago. It would allow the show to feel newsworthy again, and not just on account of another gimmick. And even if we take this argument to its logical conclusion and ask that Carl, Officer Lou, Akira, Bumblebee Man, and all the rest be voiced by actors of the appropriate background, well, why not? (The show’s other most prominent minority character, Dr. Hibbert, seems to be on his way out for other reasons, and he evidently hasn’t appeared in almost two years.) For a series that has systematically undermined its own legacy in every conceivable way out of little more than boredom, it seems shortsighted to cling to the idea that Azaria is the only possible Apu. And even if it leaves many issues unresolved on the writing level, it also seems like a necessary precondition for change. At this late date, there isn’t much left to lose.

Of course, if The Simpsons were serious about this kind of effort, we wouldn’t be talking about its most recent episode at all. And the discussion is rightly complicated by the fact that Apu—like everything else from the show’s golden age—was swept up in the greatness of those five or six incomparable seasons. Before that unsuccessful pitch on race in Springfield, Greg Daniels was credited for “Homer and Apu,” which deserves to be ranked among the show’s twenty best episodes, and the week after “Bart Sells His Soul,” we got “Lisa the Vegetarian,” which gave Apu perhaps his finest moment, as he ushered Lisa to the rooftop garden to meet Paul and Linda McCartney. But the fact that Apu was a compelling character shouldn’t argue against further change, but in its favor. And what saddens me the most about the show’s response is that it undermines what The Simpsons, at its best, was supposed to be. It was the cartoon that dared to be richer and more complex than any other series on the air; it had the smartest writers in the world and a network that would leave them alone; it was just plain right about everything; and it gave us a metaphorical language for every conceivable situation. The Simpsons wasn’t just a sitcom, but a vocabulary, and it taught me how to think—or it shaped the way that I do think so deeply that there’s no real distinction to be made. As a work of art, it has quietly fallen short in ways both small and large for over fifteen years, but I was able to overlook it because I was no longer paying attention. It had done what it had to do, and I would be forever grateful. But this week, when the show was given the chance to rise again to everything that was fine inside of it, it faltered. Which only tells me that it lost its soul a long time ago.

When Clarke Met Kubrick

with 3 comments

Note: To celebrate the fiftieth anniversary of the release of 2001: A Space Odyssey, which held its premiere on April 2, 1968, I’ll be spending the week looking at various aspects of what remains the greatest science fiction movie ever made.

“I’m reading everything by everybody,” Stanley Kubrick said one day over lunch in New York. It was early 1964, and he was eating at Trader Vic’s with Roger A. Caras, a wildlife photographer and studio publicist who was working at the time for Columbia Pictures. Dr. Strangelove had just been released, and after making small talk about their favorite brand of telescope, Caras asked the director what he had in mind for his next project. Kubrick replied that he was thinking about “something on extraterrestrials,” but he didn’t have a writer yet, and in the meantime, he was consuming as much science fiction as humanly possible. Unfortunately, we don’t know much about what he was reading, which is a frustrating omission in the career of a filmmaker whose archives have been the subject of so many exhaustive studies. In his biography of Kubrick, Vincent LoBrutto writes tantalizingly of this period: “Every day now boxes of science fiction and fact books were being delivered to his apartment. Kubrick was immersing himself in a subject he would soon know better than most experts. His capacity to grasp and disseminate information stunned many who worked with him.” LoBrutto notes that Kubrick took much the same approach a decade later on the project that became The Shining, holing up in his office with “stacks of horror books,” and the man with whom he would eventually collaborate on 2001 recalled of their first meeting: “[Kubrick] had already absorbed an immense amount of science fact and science fiction, and was in some danger of believing in flying saucers.” At their lunch that day at Trader Vic’s, however, Caras seemed to think that all of this work was unnecessary, and he told this to Kubrick in no uncertain terms: “Why waste your time? Why not just start with the best?”

Let’s pause the tape here for a moment to consider what other names Caras might plausibly have said. A year earlier, in his essay “The Sword of Achilles,” Isaac Asimov provided what we can take as a fairly representative summary of the state of the genre:

Robert A. Heinlein is usually considered the leading light among good science fiction writers. Others with a fine grasp of science and a fascinatingly imaginative view of its future possibilities are Arthur C. Clarke, Frederik Pohl, Damon Knight, James Blish, Clifford D. Simak, Poul Anderson, L. Sprague de Camp, Theodore Sturgeon, Walter Miller, A.J. Budrys…These are by no means all.

Even accounting for the writer and the time period, there are a few noticeable omissions—it’s surprising not to see Lester del Rey, for instance, while A.E. van Vogt, whose work might not have qualified as what Asimov saw as “good science fiction,” had been voted one of the top four writers in the field in a pair of polls a few years earlier. It’s also necessary to add Asimov himself, who at the time was arguably the science fiction writer best known to general readers. (In 1964, he would even be mentioned briefly in Saul Bellow’s novel Herzog, which was the perfect intersection of the highbrow and the mainstream.) Arthur C. Clarke’s high ranking wasn’t just a matter of personal affection, either—he and Asimov later became good friends, but when the article was published, they had only met a handful of times. Clarke, in other words, was clearly a major figure. But it seems fair to say that anyone claiming to name “the best” science fiction writer in the field might very well have gone with Asimov or Heinlein instead.

Caras, of course, recommended Clarke, whom he had first met five years earlier at a weekend in Boston with Jacques Cousteau. Kubrick was under the impression that Clarke was a recluse, “a nut who lives in a tree in India someplace,” and after being reassured that he wasn’t, the director became excited: “Jesus, get in touch with him, will you?” Caras sent Clarke a telegram to ask about his availability, and when the author said that he was “frightfully interested,” Kubrick wrote him a fateful letter:

It’s a very interesting coincidence that our mutual friend Caras mentioned you in a conversation we were having about a Questar telescope. I had been a great admirer of your books for quite a time and had always wanted to discuss with you the possibility of doing the proverbial “really good” science-fiction movie…Roger tells me you are planning to come to New York this summer. Do you have an inflexible schedule? If not, would you consider coming sooner with a view to a meeting, the purpose of which would be to determine whether an idea might exist or arise which could sufficiently interest both of us enough to want to collaborate on a screenplay?

This account of the conversation differs slightly from Caras’s recollection—Kubrick doesn’t say that they were actively discussing potential writers for a film project, and he may have been flattering Clarke a bit with the statement that he had “always wanted” to talk about a movie with him. But it worked. Clarke wrote back to confirm his interest, and the two men finally met in New York on April 22, where the author did his best to talk Kubrick out of his newfound interest in flying saucers.

But why Clarke? At the time, Kubrick was living on the Upper East Side, which placed him within walking distance of many science fiction authors who were considerably closer than Ceylon, and it’s tempting to wonder what might have happened if he had approached Heinlein or Asimov, both of whom would have been perfectly sensible choices. A decade earlier, Heinlein made a concerted effort to break into Hollywood with the screenplays for Destination Moon and Project Moon Base, and the year before, he had written an unproduced teleplay for a proposed television show called Century XXII. (Kubrick studied Destination Moon for its special effects, if not for its story, as we learn from the correspondence of none other than Roger Caras, who had gone to work for Kubrick’s production company.) Asimov, for his part, was more than willing to explore such projects—in years to come, he would meet to discuss movies with Woody Allen and Paul McCartney, and I’ve written elsewhere about his close encounter with Steven Spielberg. But if Kubrick went with Clarke instead, it wasn’t just because they had a friend in common. At that point, Clarke was a highly respected writer, but not yet a celebrity outside the genre, and the idea of a “Big Three” consisting of Asimov, Clarke, and Heinlein was still a decade away. His talent was undeniable, but he was also a more promising candidate for the kind of working relationship that the director had in mind, which Kubrick later estimated as “four hours a day, six days a week” for more than three years. I suspect that Kubrick recognized what might best be described as a structural inefficiency in the science fiction market. The time and talents of one of the most qualified writers imaginable happened to be undervalued and available at just the right moment. When the opportunity came, Kubrick seized it. And it turned out to be one hell of a bargain.
