Alec Nevala-Lee

Thoughts on art, creativity, and the writing life.

Archive for the ‘Television’ Category

The fairy tale theater

leave a comment »

It must have all started with The Princess Switch, although that’s so long ago now that I can barely remember. Netflix was pushing me hard to watch an original movie with Vanessa Hudgens in a dual role as a European royal and a baker from Chicago who trade places and end up romantically entangled with each other’s love interests at Christmas, and I finally gave in. In the weeks since, my wife and I have watched Pride, Prejudice, and Mistletoe; The Nine Lives of Christmas; Crown for Christmas; The Holiday Calendar; Christmas at the Palace; and possibly one or two others that I’ve forgotten. A few were on Netflix, but most were on Hallmark, which has staked out this space so aggressively that it can seem frighteningly single-minded in its pursuit of Yuletide cheer. By now, it airs close to forty original holiday romances between Thanksgiving and New Year’s Eve, and like its paperback predecessors, it knows better than to tinker with a proven formula. As two of its writers anonymously reveal in an interview with Entertainment Weekly:

We have an idea and it maybe takes us a week or so just to break it down into a treatment, a synopsis of the story; it’s like a beat sheet where you pretty much write what’s going to happen in every scene you just don’t write the scene. If we have a solid beat sheet done and it’s approved, then it’s only going to take us about a week and a half to finish a draft. Basically, an act or two a day and there’s nine. They’re kind of simple because there are so many rules so you know what you can and can’t do, and if you have everything worked out it comes together.

And the rules are revealing in themselves. As one writer notes: “The first rule is snow. We really wanted to do one where the basic conflict was a fear that there will not be snow on Christmas. We were told you cannot do that, there must be snow. They can’t be waiting for the snow, there has to be snow. You cannot threaten them with no snow.” And the conventions that make these movies so watchable are built directly into the structure:

There cannot be a single scene that does not acknowledge the theme. Well, maybe a scene, but you can’t have a single act that doesn’t acknowledge it and there are nine of them, so there’s lots of opportunities for Christmas. They have a really rigid nine-act structure that makes writing them a lot of fun because it’s almost like an exercise. You know where you have to get to: People have to be kissing for the first time, probably in some sort of a Christmas setting, probably with snow falling from the sky, probably with a small crowd watching. You have to start with two people who, for whatever reason, don’t like each other and you’re just maneuvering through those nine acts to get them to that kiss in the snow.

The result, as I’ve learned firsthand, is a movie that seems familiar before you’ve even seen it. You can watch with one eye as you’re wrapping presents, or tune in halfway through with no fear of becoming confused. It allows its viewers to give it exactly as much attention as they’re willing to spare, and at a time when the mere act of watching prestige television can be physically exhausting, there’s something to be said for an option that asks nothing of us at all.

After you’ve seen two or three of these movies, of course, the details start to blur, particularly when it comes to the male leads. The writers speak hopefully of making the characters “as unique and interesting as they can be within the confines of Hallmark land,” but while the women are allowed an occasional flash of individuality, the men are unfailingly generic. This is particularly true of the subgenre in which the love interest is a king or prince, who doesn’t get any more personality than his counterpart in fairy tales. Yet this may not be a flaw. In On Directing Film, which is the best work on storytelling that I’ve ever read, David Mamet provides a relevant word of advice:

In The Uses of Enchantment, Bruno Bettelheim says of fairy tales the same thing Alfred Hitchcock said about thrillers: that the less the hero of the play is inflected, identified, and characterized, the more we will endow him with our own internal meaning—the more we will identify with him—which is to say the more we will be assured that we are that hero. “The hero rode up on a white horse.” You don’t say “a short hero rode up on a white horse,” because if the listener isn’t short he isn’t going to identify with that hero. You don’t say “a tall hero rode up on a white horse,” because if the listener isn’t tall, he won’t identify with the hero. You say “a hero,” and the audience subconsciously realize they are that hero.

Yet Mamet also overlooks the fact that the women in fairy tales, like Snow White, are often described with great specificity—it’s the prince who is glimpsed only faintly. Hallmark follows much the same rule, which implies that it’s less important for the audience to identify with the protagonist than to fantasize without constraint about the object of desire.

This also leads to some unfortunate decisions about diversity, which is more or less what you might expect. As one writer says candidly to Entertainment Weekly:

On our end, we just write everybody as white, we don’t even bother to fight that war. If they want to put someone of color in there, that would be wonderful, but we don’t have control of that…I found out Meghan Markle had been in some and she’s biracial, but it almost seems like they’ve tightened those restrictions more recently. Everything’s just such a white, white, white, white world. It’s a white Christmas after all—with the snow and the people.

With more than thirty original movies coming out every year, you might think that Hallmark could make a few exceptions, especially since the demand clearly exists, but this isn’t about marketing at all. It’s a reflection of the fact that nonwhiteness is still seen as a token of difference, or a deviation from an assumed norm, and it’s the logical extension of the rules that I’ve listed above. White characters have the privilege—which is invisible but very real—of seeming culturally uninflected, which is the baseline that allows the formula to unfold. This seems very close to John W. Campbell’s implicit notion that all characters in science fiction should be white males by default, and while other genres have gradually moved past this point, it’s still very much the case with Hallmark. (There can be nonwhite characters, but they have to follow the rules: “Normally there’ll be a black character that’s like a friend or a boss, usually someone benevolent because you don’t want your one person of color to not be positive.”) With diversity, as with everything else, Hallmark is very mindful of how much variation its audience will accept. It thinks that it knows the formula. And it might not even be wrong.

The Men Who Saw Tomorrow, Part 3

leave a comment »

By now, it might seem obvious that the best way to approach Nostradamus is to see it as a kind of game, as Anthony Boucher describes it in the June 1942 issue of Unknown Worlds: “A fascinating game, to be sure, with a one-in-a-million chance of hitting an astounding bullseye. But still a game, and a game that has to be played according to the rules. And those rules are, above all things else, even above historical knowledge and ingenuity of interpretation, accuracy and impartiality.” Boucher’s work inspired several spirited rebukes in print from L. Sprague de Camp, who granted the rules of the game but disagreed about its harmlessness. In a book review signed “J. Wellington Wells”—and please do keep an eye on that last name—de Camp noted that Nostradamus was “conjured out of his grave” whenever there was a war:

And wonder of wonders, it always transpires that a considerable portion of his several fat volumes of prophetic quatrains refer to the particular war—out of the twenty-odd major conflicts that have occurred since Dr. Nostradamus’s time—or other disturbance now taking place; and moreover that they prophesy inevitable victory for our side—whichever that happens to be. A wonderful man, Nostradamus.

Their affectionate battle culminated in a nonsense limerick that de Camp published in the December 1942 issue of Esquire, claiming that if it was still in print after four hundred years, it would have been proven just as true as any of Nostradamus’s prophecies. Boucher responded in Astounding with the short story “Pelagic Spark,” an early piece of fanfic in which de Camp’s great-grandson uses the “prophecy” to inspire a rebellion in the far future against the sinister Hitler XVI.

This is all just good fun, but not everyone sees it as a game, and Nostradamus—like other forms of vaguely apocalyptic prophecy—tends to return at exactly the point when such impulses become the most dangerous. This was the core of de Camp’s objection, and Boucher himself issued a similar warning:

At this point there enters a sinister economic factor. Books will be published only when there is popular demand for them. The ideal attempt to interpret the as yet unfulfilled quatrains of Nostradamus would be made in an ivory tower when all the world was at peace. But books on Nostradamus sell only in times of terrible crisis, when the public wants no quiet and reasoned analysis, but an impassioned assurance that We are going to lick the blazes out of Them because look, it says so right here. And in times of terrible crisis, rules are apt to get lost.

Boucher observes that one of the best books on the subject, Charles A. Ward’s Oracles of Nostradamus, was reissued with a dust jacket emblazoned with such questions as “Will America Enter the War?” and “Will the British Fleet Be Destroyed?” You still see this sort of thing today, and it isn’t just the books that benefit. In 1981, the producer David L. Wolper released a documentary on the prophecies of Nostradamus, The Man Who Saw Tomorrow, that saw subsequent spikes in interest during the Gulf War—a revised version for television was hosted by Charlton Heston—and after the September 11 attacks, when there was a run on the cassette at Blockbuster. And the attention that it periodically inspires reflects the same emotional factors that led to psychohistory, as the host of the original version said to the audience: “Do we really want to know about the future? Maybe so—if we can change it.”

The speaker, of course, was Orson Welles. I had always known that The Man Who Saw Tomorrow was narrated by Welles, but it wasn’t until I watched it recently that I realized that he hosted it onscreen as well, in one of my favorite incarnations of any human being—bearded, gigantic, cigar in hand, vaguely contemptuous of his surroundings and collaborators, but still willing to infuse the proceedings with something of the velvet and gold braid. Keith Phipps of The A.V. Club once described the documentary as “a brain-damaged sequel” to Welles’s lovely F for Fake, which is very generous. The entire project is manifestly ridiculous and exploitative, with uncut footage from the Zapruder film mingling with a xenophobic fantasy of a war of the West against Islam. Yet there are also moments that are oddly transporting, as when Welles turns to the camera and says:

Before continuing, let me warn you now that the predictions of the future are not at all comforting. I might also add that these predictions of the past, these warnings of the future are not the opinions of the producers of the film. They’re certainly not my opinions. They’re interpretations of the quatrains as made by scores of independent scholars of Nostradamus’ work.

In the sly reading of “my opinions,” you can still hear a trace of Harry Lime, or even of Gregory Arkadin, who invited his guests to drink to the story of the scorpion and the frog. And the entire movie is full of strange echoes of Welles’s career. Footage is repurposed from Waterloo, in which he played Louis XVIII, and it glances at the fall of the Shah of Iran, whose brother-in-law funded Welles’s The Other Side of the Wind, which was impounded by the revolutionary government that Nostradamus allegedly foresaw.

Welles later expressed contempt for the whole affair, allegedly telling Merv Griffin that you could get equally useful prophecies by reading at random out of the phone book. Yet it’s worth remembering, as the critic David Thomson notes, that Welles turned all of his talk show interlocutors into versions of the reporter from Citizen Kane, or even into the Hal to his Falstaff, and it’s never clear where the game ended. His presence infuses The Man Who Saw Tomorrow with an unearned loveliness, despite its many awful aspects, such as the presence of the “psychic” Jeane Dixon. (Dixon’s fame rested on her alleged prediction of the Kennedy assassination, based on a statement—made in Parade magazine in 1960—that the winner of the upcoming presidential election would be “assassinated or die in office though not necessarily in his first term.” Oddly enough, no one seems to remember an equally impressive prediction by the astrologer Joseph F. Goodavage, who wrote in Analog in September 1962: “It is coincidental that each American president in office at the time of these conjunctions [of Jupiter and Saturn in an earth sign] either died or was assassinated before leaving the presidency…John F. Kennedy was elected in 1960 at the time of a Jupiter and Saturn conjunction in Capricorn.”) And it’s hard for me to watch this movie without falling into reveries about Welles, who was like John W. Campbell in so many other ways. Welles may have been the most intriguing cultural figure of the twentieth century, but he never seemed to know what would come next, and his later career was one long improvisation. It might not be too much to hear a certain wistfulness when he speaks of the man who could see tomorrow, much as Campbell’s fascination with psychohistory stood in stark contrast to the confusion of the second half of his life.

When The Man Who Saw Tomorrow was released, Welles had finished editing about forty minutes of his unfinished masterpiece The Other Side of the Wind, and for decades after his death, it seemed that it would never be seen. Instead, it’s available today on Netflix. And I don’t think that anybody could have seen that coming.

A better place

with 2 comments

Note: Spoilers follow for the first and second seasons of The Good Place.

When I began watching The Good Place, I thought that I already knew most of its secrets. I had missed the entire first season, and I got interested in it mostly due to a single review by Emily Nussbaum of The New Yorker, which might be my favorite piece so far from one of our most interesting critics. Nussbaum has done more than anyone else in the last decade to elevate television criticism into an art in itself, and this article—with its mixture of the critical, personal, and political—displays all her strengths at their best. Writing of the sitcom’s first season finale, which aired the evening before Trump’s inauguration, Nussbaum says: “Many fans, including me, were looking forward to a bit of escapist counterprogramming, something frothy and full of silly puns, in line with the first nine episodes. Instead, what we got was the rare season finale that could legitimately be described as a game-changer, vaulting the show from a daffy screwball comedy to something darker, much stranger, and uncomfortably appropriate for our apocalyptic era.” Following that grabber of an opening, she continues with a concise summary of the show’s complicated premise:

The first episode is about a selfish American jerk, Eleanor (the elfin charmer Kristen Bell), who dies and goes to Heaven, owing to a bureaucratic error. There she is given a soul mate, Chidi (William Jackson Harper), a Senegal-raised moral philosopher. When Chidi discovers that Eleanor is an interloper, he makes an ethical leap, agreeing to help her become a better person…Overseeing it all was Michael, an adorably flustered angel-architect played by Ted Danson; like Leslie Knope, he was a small-town bureaucrat who adored humanity and was desperate to make his flawed community perfect.

There’s a lot more involved, of course, and we haven’t even mentioned most of the other key players. It’s an intriguing setup for a television show, and it might have been enough to get me to watch it on its own. Yet what really caught my attention was Nussbaum’s next paragraph, which includes the kind of glimpse into a critic’s writing life that you only see when emotions run high: “After watching nine episodes, I wrote a first draft of this column based on the notion that the show, with its air of flexible optimism, its undercurrent of uplift, was a nifty dialectical exploration of the nature of decency, a comedy that combined fart jokes with moral depth. Then I watched the finale. After the credits rolled, I had to have a drink.” She then gives away the whole game, which I’m obviously going to do here as well. You’ve been warned:

In the final episode, we learn that it was no bureaucratic mistake that sent Eleanor to Heaven. In fact, she’s not in Heaven at all. She’s in Hell—which is something that Eleanor realizes, in a flash of insight, as the characters bicker, having been forced as a group to choose two of them to be banished to the Bad Place. Michael is no angel, either. He’s a low-ranking devil, a corporate Hell architect out on his first big assignment, overseeing a prankish experimental torture cul-de-sac. The malicious chuckle that Danson unfurls when Eleanor figures it out is both terrifying and hilarious, like a clap of thunder on a sunny day. “Oh, God!” he growls, dropping the mask. “You ruin everything, you know that?”

That’s a legitimately great twist, and when I suggested to my wife—who didn’t know anything about it—that we check it out on Netflix, it was partially so that I could enjoy her surprise at that moment, like a fan of A Song of Ice and Fire eagerly watching an unsuspecting friend during the Red Wedding.

Yet I was the one who really got fooled. The Good Place became my favorite sitcom since Community, and for almost none of the usual reasons. It’s very funny, of course, but I find that the jokes land about half the time, and it settles for what Nussbaum describes as “silly puns” more often than it probably should. Many episodes are closer to freeform comedy—the kind in which the riffs have less to do with context than with whatever the best pitch happened to be in the writers room—than to the clockwork farce to which it ought to aspire. But its flaws don’t really matter. I haven’t been so involved with the characters on a series like this in years, which allows it to take risks and get away with formal experiments that would destroy a lesser show. After the big revelation in the first season finale, it repeatedly blew up its continuity, with Michael resetting the memories of the others and starting over whenever they figured out his plan, but somehow, it didn’t leave me feeling jerked around. This is partially thanks to how the show cleverly conflates narrative time with viewing time, which is one of the great unsung strengths of the medium. (When the second season finally gets on track, these “versions” of the characters have only known one another for a couple of weeks, but every moment is enriched by our memories of their earlier incarnations. It’s a good trick, but it’s not so different from the realization, for example, that all of the plot twists and relationships of the first two seasons of Twin Peaks unfolded over less than a month.) It also speaks to the talent of the cast, which consistently rises to every challenge. And it does a better job of telling a serialized story than any sitcom that I can remember. Even while I was catching up with it, I managed to parcel it out over time, but I can also imagine binging an entire season at one sitting. 
That’s mostly due to the fact that the writers are masters of structure, if not always at filling the spaces between act breaks, but it’s also because the stakes are literally infinite.

And the stakes apply to all of us. It’s hard to come away from The Good Place without revisiting some of your assumptions about ethics, the afterlife, and what it means to be a good person. (The inevitable release of The Good Place and Philosophy might actually be worth reading.) I’m more aware of how much I’ve internalized the concept of “moral desert,” or the notion that good behavior will be rewarded, which we should all know by now isn’t true. In its own unpretentious way, the series asks its viewers to contemplate the problem of how to live when there might not be a prize awaiting us at the end. It’s the oldest question imaginable, but it seems particularly urgent these days, and the show’s answers are more optimistic than we have any right to expect. Writing just a few weeks after the inauguration, Nussbaum seems to project some of her own despair onto creator Michael Schur:

While I don’t like to read the minds of showrunners—or, rather, I love to, but it’s presumptuous—I suspect that Schur is in a very bad mood these days. If [Parks and Recreation] was a liberal fantasia, The Good Place is a dystopian mindfork: it’s a comedy about the quest to be moral even when the truth gets bent, bullies thrive, and sadism triumphs…Now that his experiment has crashed, [the character of] Michael plans to erase the ensemble’s memories and reboot. The second season—presuming the show is renewed (my mouth to God’s ear)—will start the same scheme from scratch. Michael will make his afterlife Sims suffer, no matter how many rounds it takes.

Yet the second season hinges on an unlikely change of heart. Michael comes to care about his charges—he even tries to help them escape to the real Good Place—and his newfound affection doesn’t seem like another mislead. I’m not sure if I believe it, but I’m still grateful. It isn’t a coincidence that Michael shares his name with the show’s creator, and I’d like to think that Schur ended up with a kinder version of the series than he may have initially envisioned. Like Nussbaum, he tore up the first draft and started over. Life is hard enough as it is, and the miracle of The Good Place is that it takes the darkest view imaginable of human nature, and then it gently hints that we might actually be capable of becoming better.

Written by nevalalee

September 27, 2018 at 8:39 am

The surprising skepticism of The X-Files

with one comment

Gillian Anderson in "Jose Chung's From Outer Space"

Note: To celebrate the twenty-fifth anniversary of the premiere of The X-Files, I’m republishing a post that originally appeared, in a somewhat different form, on September 9, 2013.

Believe it or not, this week marks the twenty-fifth anniversary of The X-Files, which aired its first episode on September 10, 1993. As much as I’d like to claim otherwise, I didn’t watch the pilot that night, and I’m not even sure that I caught the second episode, “Deep Throat.” “Squeeze,” which aired the following week, is the first installment that I clearly remember seeing on its original broadcast, and I continued to tune in afterward, although only sporadically. In its early days, I had issues with the show’s lack of continuity: it bugged me to no end that after every weekly encounter with the paranormal—any one of which should have been enough to upend Scully’s understanding of the world forever—the two leads were right back where they were at the start of the next episode, and few, if any, of their cases were ever mentioned again. Looking back now, of course, it’s easy to see that this episodic structure was what allowed the show to survive, and that it was irrevocably damaged once it began to take its backstory more seriously. In the meantime, I learned to accept the show’s narrative logic on its own terms. And I’m very grateful that I did.

It’s no exaggeration to say that The X-Files has had a greater influence on my own writing than any work of narrative art in any medium. That doesn’t mean it’s my favorite work of art, or even my favorite television show—only that Chris Carter’s supernatural procedural came along at the precise moment in my young adulthood that I was most vulnerable to being profoundly influenced by a great genre series. I was thirteen when the show premiered, toward the end of the most pivotal year of my creative life. Take those twelve months away, or replace them with a different network of cultural influences, and I’d be a different person altogether. It was the year I discovered Umberto Eco, Stephen King, and Douglas R. Hofstadter; Oliver Stone’s JFK set me on a short but fruitful detour into the literature of conspiracy; I bought a copy of Very by the Pet Shop Boys, about which I’ll have a lot more to say soon; I acquired copies of Isaac Asimov’s Science Fiction Magazine and 100 Great Science Fiction Short Short Stories; and I took my first deep dive into the work of David Lynch and, later, Jorge Luis Borges. Some of these works have lasted, while others haven’t, but they all shaped who I became, and The X-Files stood at the heart of it all, with imagery drawn in equal part from Twin Peaks and Dealey Plaza and a playful, agnostic spirit that mirrored that of the authors I was reading at the same time.

Gillian Anderson and David Duchovny in The X-Files pilot

And this underlying skepticism—which may seem like a strange word to apply to The X-Files—was a big part of its appeal. What I found enormously attractive about the show was that although it took place in a world of aliens, ghosts, and vampires, it didn’t try to force these individual elements into one overarching pattern. Even in its later seasons, when it attempted, with mixed results, to weave its abduction and conspiracy threads into a larger picture, certain aspects remained incongruously unexplained. The same world shaped by the plans of the Consortium or Syndicate also included lake monsters, clairvoyants, and liver-eating mutants, all of whom would presumably continue to go about their business after the alien invasion occurred. It never tried to convert us to anything, because it didn’t have any answers. And what I love about it now, in retrospect, is the fact that this curiously indifferent attitude toward its own mysteries arose from the structural constraints of network television itself. Every episode had to stand on its own. There was no such thing as binge-watching. The show had to keep moving or die.

Which goes a long way toward explaining why even fundamentally skeptical viewers, like me, could become devoted fans, or why Mulder and Scully could appear on the cover of the Skeptical Inquirer. It’s true that Scully was never right, but it’s remarkable how often it seemed that she could be, which is due as much to the show’s episodic construction as to Gillian Anderson’s wonderful performance. (As I’ve mentioned before, Scully might be my favorite character on any television show.) Every episode changed the terms of the game, complete with a new supporting cast, setting, and premise—and after the advent of Darin Morgan, even the tone could be wildly variable. As a result, it was impossible for viewers to know where they stood, which made a defensive skepticism seem like the healthiest possible attitude. Over time, the mythology grew increasingly unwieldy, and the show’s lack of consistency became deeply frustrating, as reflected in its maddening, only occasionally transcendent reboot. The X-Files eventually lost its way, but not until after a haphazard, often dazzling initial season that established, in spite of what its creators might do in the future, that anything was possible, and no one explanation would ever be enough. And it’s a lesson that I never forgot.

Written by nevalalee

September 14, 2018 at 9:00 am

The writer’s defense

leave a comment »

“This book will be the death of me,” the writer Jose Chung broods to himself halfway through my favorite episode of Millennium. “I just can’t write anymore. What possessed me to be a writer anyway? What kind of a life is this? What else can I do now, with no other skills or ability? My life has fizzled away. Only two options left: suicide, or become a television weatherman.” I’ve loved this internal monologue—written by Darin Morgan and delivered by the great Charles Nelson Reilly—ever since I first heard it more than two decades ago. (As an aside, it’s startling for me to realize that just four short years separated the series premiere of The X-Files from “Jose Chung’s Doomsday Defense,” which was enough time for an entire fictional universe to be born, splinter apart, and reassemble itself into a better, more knowing incarnation.) And I find that I remember Chung’s words every time I sit down to write something new. I’ve been writing for a long time now, and I’m better at it than I am at pretty much anything else, but I still have to endure something like a moment of existential dread whenever I face the blank page for the first time. For the duration of the first draft, I regret all of my decisions, and I wonder whether there’s still a chance to try something else instead. Eventually, it passes. But it always happens. And after spending over a decade doing nothing else but writing, I’ve resigned myself to the fact that it’s always going to be this way.

Which doesn’t mean that there aren’t ways of dealing with it. In fact, I’ve come to realize that most of my life choices are designed to minimize the amount of time that I spend writing first drafts. This means nothing else but the physical act of putting down words for the first time, which is when I tend to hit my psychological bottom. Everything else is fine by comparison. As a result, I’ve shunted aspects of my creative process to one side or the other of the rough draft, which persists as a thin slice of effort between two huge continents of preparation and consolidation. I prefer to do as much research in advance as I can, and I spend an ungodly amount of time on outlines, which I’ve elsewhere described as a stealth first draft that I can trick myself into thinking doesn’t matter. My weird, ritualistic use of mind maps and other forms of random brainstorming is another way to generate as many ideas as possible before I need to really start writing. When I finally start the first draft, I make a point of never going back to read it until I’ve physically typed out the entire thing, with my outline at my elbow, as if I’m just transcribing something that already exists. Ideally, I can crank out that part of the day’s work in an hour or less. Once it’s there on the screen, I can begin revising, taking as many passes as possible without worrying too much about any given version. In the end, I somehow end up with a draft that I can stand to read. It isn’t entirely painless, but it involves less pain than any other method that I can imagine.

And these strategies are all just specific instances of my favorite piece of writing advice, which I owe to the playwright David Mamet. I haven’t quoted it here for a while, so here it is again:

As a writer, I’ve tried to train myself to go one achievable step at a time: to say, for example, “Today I don’t have to be particularly inventive, all I have to be is careful, and make up an outline of the actual physical things the character does in Act One.” And then, the following day to say, “Today I don’t have to be careful. I already have this careful, literal outline, and all I have to do is be a little bit inventive,” et cetera, et cetera.

As I’ve noted before, I badly wish that I could somehow send this paragraph back in time to my younger self, because it would have saved me years of wasted effort. But what Mamet doesn’t mention, perhaps because he thought that it was obvious, is that buried in that list of “achievable steps” is a monster of a task that can’t be eliminated, only reduced. There’s no getting around the time that you spend in front of the blank page, and even the best outline in the world can only take away so much of the pain. (An overly detailed outline may even cause problems later, if it leads to a work that seems lifeless and overdetermined—which leaves us with the uncomfortable fact that a certain amount of pain at the writing stage is necessary to avoid even greater trouble in the future.)

Of course, if you’re just looking to minimize the agony of writing that first draft, there are easier ways to anesthetize yourself. Jose Chung pours himself a glass of whiskey, and I’ve elsewhere characterized the widespread use of mind-altering chemicals by writers—particularly caffeine, nicotine, and alcohol—as a pragmatic survival tactic, like the other clichés that we associate with the bohemian life. And I haven’t been immune. For years, I’d often have a drink while working at night, and it certainly didn’t hurt my productivity. (A ring of discolored wood eventually appeared on the surface of my desk from the condensation on the glass, which said more about my habits than I realized at the time.) After I got married, and especially after I became a father, I had to drastically rethink my writing schedule. I was no longer writing long into the evening, but trying to cram as much work as I could into a few daylight hours, leaving me and my wife with a little time to ourselves after our daughter went to bed. As a result, the drinking stopped, and the more obsessive habits that I’ve developed in the meantime are meant to reduce the pain of writing with a clear head. This approach isn’t for everyone, and it may not work for anyone else at all. But it’s worth remembering that when you look at a reasonably productive writer, you’re really seeing a collection of behaviors that have accrued around the need to survive that daily engagement with the empty page. And if they tend to exhibit such an inexplicable range of strategies, vices, and rituals, ultimately, they’re all just forms of defense.

Written by nevalalee

September 12, 2018 at 8:21 am

Don’t look now

leave a comment »

Note: This post discusses elements of the series finale of HBO’s Sharp Objects.

It’s been almost twenty years since I first saw Don’t Look Now at a revival screening at the Brattle Theatre in Cambridge, Massachusetts. I haven’t seen it again, but I’ve never gotten over it, and it remains one of my personal cinematic touchstones. (My novelette “Kawataro,” which is largely an homage to Japanese horror, includes a nod to its most famous visual conceit.) And it’s impossible to convey its power without revealing its ending, which I’m afraid I’ll need to do here. For most of its length, Nicholas Roeg’s movie is an evocative supernatural mystery set in Venice, less about conventional scares than about what the film critic Pauline Kael describes as its “unnerving cold ominousness,” with Donald Sutherland and Julie Christie as a husband and wife mourning the recent drowning death of their daughter. Throughout the movie, Sutherland glimpses a childlike figure in a red raincoat, which his daughter was wearing when she died. Finally, in the film’s closing minutes, he catches up with what he thinks is her ghost, only to find what Kael calls “a hideous joke of nature—their own child become a dwarf monstrosity.” A wrinkled crone in red, who is evidently just a serial killer, slashes him to death, in one of the great shock endings in the history of movies. Kael wasn’t convinced by it, but it clearly affected her as much as it did me:

The final kicker is predictable, and strangely flat, because it hasn’t been made to matter to us; fear is decorative, and there’s nothing to care about in this worldly, artificial movie. Yet at a mystery level the movie can still affect the viewer; even the silliest ghost stories can. It’s not that I’m not impressionable; I’m just not as proud of it as some people are.

I had much the same reaction to the final scene of Sharp Objects, a prestige miniseries that I’ve been watching for two months now with growing impatience, only to have my feelings turn at the very end into a grudging respect. It’s a strange, frustrating, sometimes confusing show that establishes Jean-Marc Vallée, coming off the huge success of Big Little Lies, as one of our major directors—he’s got more pure craft at his disposal than just about anyone else working in television. (I don’t remember much about The Young Victoria, but it was clear even then that he was the real thing.) The series is endlessly clever in its production design, costuming, and music, and the actors do the best that they can with the material at hand. The first trailer led me to expect something heightened and Gothic, with a duel of wills between daughter Camille (Amy Adams) and mother Adora (Patricia Clarkson), but the show itself spends most of its length going for something sadder and more wounded, and I don’t think it entirely succeeds. Like Big Little Lies, it exploits the structure of a mystery, but it isn’t particularly interested in furnishing clues or even basic information, and there are long stretches when it seems to forget about the two teenage girls who have been murdered in Camille’s haunted hometown. Camille is a bad reporter and a lousier investigator, which wouldn’t matter if this were really a psychological study. Yet the series isn’t all that interested in delving into its characters, either, apart from their gorgeously lit surfaces. For most of its eight episodes, it hits the same handful of notes, and by the end, we don’t have much more insight into Camille, Adora, or anybody else than we did after the pilot. It has a few brilliant visual notions, but very little in the way of real ideas.

Then we come to the end, or the last minute of the finale, which I think is objectively staggering. (I’m not going to name the killer, but if you haven’t seen the show yet, you might want to skip this entire paragraph.) After an extended fake denouement that should serve as a warning sign in itself, Camille stumbles across the truth, in the form of a few gruesome puzzle pieces that have been hiding in plain sight, followed by a smash cut to black. Aside from an unsettling montage of images during the closing credits, that’s it. It turns the entire series into the equivalent of a shaggy dog story, or an elephant joke, and I loved it—it’s a gigantic “screw you” to the audience that rises to Hitchcockian levels of bad taste. Yet I’m not entirely sure whether it redeems the rest of the series. When I replay Sharp Objects in my head, it seems to start out as a mystery, transition into a simulacrum of a character study, and then reveal at the last second that it was only messing with us. If it had been two hours long, it would have been very effective. But I don’t know if it works for a television series, even with a limited run, in which the episodes in the protracted second act can only deliver one tone at a time. If this were a movie, I’d want to see it again, but I don’t think I’ll ever revisit the dusty middle innings of Sharp Objects, much of which was only marking time. As a confidence game, it works all too well, to the point that many critics thought that it was onto something profound. For some viewers, maybe it was. But I’d be curious to hear how they come to terms with that ending, which cuts so savagely away from anything like human resolution that it makes a mockery of the notion that this was ever about the characters at all.

And it works, at least to a point. If nothing else, I’ve been thinking about it ever since—as Kael says, I’m no less impressionable than anyone else, even if I’m not proud of it. But I’d also argue that the conventions of the prestige drama, which made this project possible in the first place, interfere with its ultimate impact. There’s no particular reason why Sharp Objects had to be eight episodes long, and you could make a strong case that it would work better if the entire experience, like Don’t Look Now, were experienced in a single sitting. In the end, I liked it enough to want to see a shorter version, which might feel like a masterpiece. In a funny way, Sharp Objects represents the opposite problem from Gone Girl, another fascinating project that Gillian Flynn adapted from her own novel. That movie was a superb Hitchcockian toy that stumbled when it asked us to take it seriously at the end, while Sharp Objects is a superficially serious show that exposes itself in its final seconds as a twisted game. I prefer the latter, and that final shock is delivered with a newfound professionalism that promises great things from both Flynn and Vallée. (It certainly ends on a higher note than the first season of Big Little Lies, which also closed with an inexplicable ending that made much of the show seem meaningless, except not in a good way.) But the interminable central section makes me suspect that the creators were so seduced by Amy Adams—“so extraordinarily beautiful yet not adding up right for ordinary beauty,” as Kael said of Julie Christie—that they forgot what they were supposed to be doing. Kael ends her review on a typically inscrutable note: “It’s like an entertainment for bomb victims: nobody expects any real pleasure from it.” But what I might remember most about Sharp Objects is that sick tingle of pleasure that it offered me at the very end, just after I’d given up looking for it.

Written by nevalalee

August 27, 2018 at 8:47 am

The fault in our stars

with one comment

Earlier this week, the writer Eric Vilas-Boas wrote an emotional essay for TV Guide about a personal crisis that was recently catalyzed by an episode of Star Trek: Deep Space Nine. In “Far Beyond the Stars,” Commander Benjamin Sisko—played, as always, by Avery Brooks, who also directs—hallucinates that he’s really Benny Russell, a black science fiction writer living in New York in the fifties. The story aired for the first time on February 11, 1998, but even after over twenty years, its themes are still uncomfortably close to home, as Vilas-Boas observes: “The halls of magazines and newspapers remain difficult to break into without (white, often male) contacts or mentors. Just from my experience alone, that’s often meant policing my own behavior to appear more ‘white’ and less threatening: straightening my hair, cutting my hair, or holding my tongue in meetings when I’ve heard something unquestionably offensive.” And after quoting the extraordinary speech that Russell delivers toward the end, in a single unbroken take that amounts to some of the best work of Brooks’s career, Vilas-Boas writes:

I can’t think about that last line [“You can pulp a story but you cannot destroy an idea. Don’t you understand? That’s ancient knowledge.”] without crying. I can’t think about it without thinking about what ancient knowledge has been destroyed in the systemic abuse of marginalized peoples. I can’t watch that episode without thinking about the times I felt most worthless and undeserving of my jobs as a writer and editor in white-dominant workplaces. I can’t watch Russell’s plaintive bargaining with his editor over his stories without thinking about times I’ve policed myself in the process to appear less aggressive, less brown, less assertive, and less likely to cause problems, because of what I perceived as a clear power imbalance…That’s not an uncommon story, if you care to pay attention, but for people of color or other marginalized groups, it’s unavoidable.

Until yesterday, I had never seen “Far Beyond the Stars.” What prompted me to check it out last night, apart from the power of Vilas-Boas’s article, was a screen shot of René Auberjonois as Douglas Pabst, the editor of the fictional magazine Incredible Tales. The episode’s supporting characters are played by members of the show’s regular cast, many of whom are allowed to wear their real faces on camera for the first time—but Auberjonois is clearly made up and costumed to resemble John W. Campbell, down to the browline glasses. And the teleplay by Ira Steven Behr and Hans Beimler, based on a story by Marc Scott Zicree, is filled with affectionate nods to the pulps, along with a few forgivable inaccuracies. (Incredible is implausibly depicted as occupying a spacious, beautiful newsroom, with writers on salary typing up stories at their own desks. In reality, Campbell spent most of his career sharing a single tiny office with his assistant editor, Kay Tarrant, and there was little more than a spare chair for visitors. But it’s Sisko’s dream, after all, and it certainly looks great on television.) But Pabst’s response to Russell’s desire to write a story with a black protagonist rings all too true:

Look, Benny, I’m a magazine editor, I am not a crusader. I am not here to change the world, I’m here to put out a magazine. Now, that’s my job. That means I have to answer to the publisher, the national distributors, the wholesalers and none of them are going to want to put this story on the newsstand. For all we know, it could cause a race riot…The way I see it, you can either burn it or you can stick it in a drawer for fifty years or however long it takes the human race to become color-blind.

Earlier in the episode, Pabst expresses himself even more bluntly: “The average reader’s not going to spend his hard-earned cash on stories written by Negroes.”

And unfortunately, this isn’t much of an exaggeration. Russell inevitably reminds many viewers of Samuel R. Delany, and remarkably enough, the episode aired six months before the publication of Delany’s landmark essay “Racism and Science Fiction,” in which he shared a very similar anecdote from 1967:

I submitted Nova for serialization to the famous SF editor of Analog magazine, John W. Campbell, Jr. Campbell rejected it, with a note and phone call to my agent explaining that he didn’t feel his readership would be able to relate to a black main character. That was one of my first direct encounters, as a professional writer, with the slippery and always commercialized form of liberal American prejudice: Campbell had nothing against my being black, you understand…In the phone call Campbell made it fairly clear that this was his only reason for rejecting the book. Otherwise, he rather liked it.

In fact, Campbell was willing to print stories with black protagonists, notably Mack Reynolds’s “Black Man’s Burden” and its sequels—as long as all of its characters sounded just like John W. Campbell. Otherwise, he had minimal interest in diversifying the magazine. On May 1, 1969, he wrote to the fan Ron Stoloff: “If Negro authors are extremely few—it’s solely because extremely few Negroes both wish to, and can, write in open competition.” In the same letter, Campbell extended his views to the characters as well: “Think about it a bit, and you’ll realize why there is so little mention of blacks in science fiction; we see no reason to go saying ‘Lookee lookee lookee! We’re using blacks in our stories! See the Black Man! See him in a spaceship!’ It is my strongly held opinion that any Black should be thrown out of any story, spaceship, or any other place—unless he’s a black man. That he’s got no business there just because he’s black, but every right there if he’s a man.”

As I’ve noted here before, there are two implications here. The first is that all protagonists should be white males by default, a stance that Campbell might not even have seen as problematic—and even if race wasn’t made explicit, the magazine’s illustrations overwhelmingly depicted its characters as white. There’s also a clear sense that black heroes have to “earn” their presence in the magazine, which, given the hundreds of cardboard “competent men” that Campbell cheerfully featured over the years, is laughable in itself. In fiction, as in life, if you’re black, you’ve evidently got to be twice as good to justify yourself. Science fiction has come a long way in the last half century, but it still has room to grow, and you could even argue that the discussion about race within fan culture has degenerated since the first airing of “Far Beyond the Stars.” (The ongoing debate over programming at the upcoming World Science Fiction Convention only points to how fraught such issues remain.) Sisko’s closing monologue, in which he wonders if his entire world might exist only in Benny Russell’s imagination, is a little on the nose, but it’s a reminder that all of these stories emerged in response to similar hopes and fears. And at its best, science fiction can provide solace—or outrage—that we can put to use in our own lives. As Vilas-Boas concludes:

Six months later, I see a therapist regularly, largely to talk about my feelings, something that sounds like a cliché but is really a product of how much I’ve bottled up and held in every day of my life. I’ve had panic attacks since then, but I handle them better. In its own way, “Far Beyond the Stars” helped me set a rubric for them, to know that they have a prior trigger in my life, to recognize the world’s problems are not inextricably linked to my reactions to them. No matter how big or small, it would be Captain Sisko’s job to keep his cool and get his crew out of danger…And in the end, there’s no hiding. There’s only one thing I can do, in the words of Sisko: “Stay here and finish the job I started.”

A comedian reads the newspaper

leave a comment »

A few days ago, I was leafing through Ladies and Gentlemen—Lenny Bruce, the monumental biography of the legendary standup comic by Albert Goldman and Lawrence Schiller. My eye was caught by a description of a typical performance by Bruce, who died in 1966:

When Lenny starts to spritz, interspersed with the hip jargon, riding along the bops and beats of his Broadway-Brooklyn tachycardic speech pattern, are allusions to big sounds like Stravinsky, Picasso, Charlie Parker, José Limon and James Joyce. Jazz, existentialism, analysis, peyote cults, and California. He’s concerned about the racial scene and the man in the White House and the economy, the way the country is changing. Speaks from experience, done an awful lot of reading.

These days, we may not expect our comedians to drop allusions to Stravinsky or José Limon, but we’re still interested in what they have to say about “the racial scene and the man in the White House and the economy, the way the country is changing.” It’s part of a tradition of turning to standup comics for wisdom—or truth—that can largely be traced back to Bruce himself. And here’s the punchline, as Goldman delivers it: “The image is a bitch to sustain. Lenny isn’t that knowledgable about jazz. He’s never been to Europe since the Navy. Most everything he knows, he picks up from the movies.”

This pressure to seem informed about current events is one to which most of us can relate, and it must be particularly challenging to those figures who find themselves at the forefront of the culture, where we expect them to be inhumanly knowledgeable about everything while making the result seem effortless. As Goldman points out, though, there are ways of getting around it: “Mort Sahl found the solution before Lenny. It’s called osmosis.” He continues:

The way Sahl worked? Wherever he was, at home or on the road, he would have his room lined with magazines and books. He never read anything. A voracious skimmer. By flipping through this and staring at that, reading a sentence here and picking up a word there, he got a very good idea of where everything was. When he went into his monologue, you would swear that he had digested the whole world for that week. Charles de Gaulle, Dwight Eisenhower, segregation, Shelley Berman, trade unions, Marty, Dave Brubeck, New York, Berkeley, Beckett, newspapers, coffeehouses, sandals, J.D. Salinger, filter-tip cigarettes, the State Department, Dick Clark, German radios, birth control, Charles Van Doren, Adlai Stevenson, natural-shoulder suits, Cuba, Israel, Dave Garroway, the Diners’ Club, Billy Graham, sports cars, the Strategic Air Command—wow! A barrage!

And if you replace that catalog of topics with one that seems more current—Red Hen, zero tolerance, “This is America,” Harley-Davidson, and that’s just this week—it still captures something of what we expect from our late night hosts and talking heads on a daily basis.

The ability to skim a newspaper and turn it into a monologue for an audience every night is a valuable skill, and it can earn millions for those who possess it. But there’s no particular reason that comedians or pundits need to do the skimming themselves. In the period about which Goldman is writing, Bruce’s solution centered on the unlikely figure of Terry Lane, his assistant and a former burlesque drummer:

Lenny doesn’t need all this crap. He has an imagination and he’s really funny, not just nervous, like Sahl. But the trick is the same. Neither a reader nor a skimmer, what’s he supposed to do? Just accept it? Be a schmuck? Oh, no! There are always people who can help you. You don’t have to take a lot of shit from them either. Just sit a guy like Terry down and say: “Now look man, here’s the gig. I need an intellectual seeing-eye dog. Somebody who can check out the papers every day, read Time and Newsweek, do a little research for me, and just set me up nice so when I go out on the floor tonight, I’m the best-informed person in the city. Dig?”

What Goldman is describing here is basically the relationship between a star comic and his head writer, as enacted in a seedy hotel room in Times Square instead of backstage at The Tonight Show. And while Terry Lane’s résumé may no longer be typical—his equivalent today would be more likely to have gone to Harvard—his personal qualifications are much the same: “What grabbed Lenny was the fact that Terry was a reader…Lenny hadn’t got the patience, the concentration, the sitzfleisch. When pushed too hard he got terrible headaches. But Terry there, at the table between shows, would sit, riddling off titles like a college English professor…Lenny was impressed.”

But the real takeaway here is how this approach to current events has expanded outward from the nightclubs to radio and cable news, which is where Bruce’s true successors can be found. Goldman nicely describes the skill in question:

And the system works fine. Terry or Richey or Benny or whoever is traveling with Lenny is always a smart, studious sort of cat, who can feed him facts and help him learn big new words out of the dictionary. After all, what is literacy? Words. How do you learn words? Hear them. If you have a good ear and a tongue that can mimic anything you hear, you can learn whole languages by rote. Lenny is a mind-mouth man. His brain is located somewhere between his ears and his tongue. All he has to do is get the hang of a word, and he finds a place to slip it into his act.

These days, many of us get our news from exactly such “mind-mouth” men or women, whose gift consists of taking a few headlines and spinning them into thirty minutes of daily content. On the left, they’ve traditionally come from the ranks of improv, standup, and sketch comedy; on the right, which has trouble coming up with funny people, from talk radio. (Rush Limbaugh got his start as a disc jockey, which points to the fact that his true power is the ability to talk into a microphone for hours.) I’m not denigrating this talent, which is so rare that only a handful of people seem capable of doing it for large audiences at any one time. And we could do worse than to take our political cues from the writers at The Daily Show. But it’s still a simulacrum of insight, rather than the real thing. And we need to think hard about what happens when so many people turn to it for their information—including the man in the White House.

Written by nevalalee

June 26, 2018 at 8:20 am

The ghost in the machine

with one comment

Note: Spoilers follow for the season finale of Westworld.

When you’re being told a story, you want to believe that the characters have free will. Deep down, you know that they’ve been manipulated by a higher power that can make them do whatever it likes, and occasionally, it can even be fun to see the wires. For the most part, though, our enjoyment of narrative art is predicated on postponing that realization for as long as possible. The longer the work continues, the harder this becomes, and it can amount to a real problem for a heavily serialized television series, which can start to seem strained and artificial as the hours of plot developments accumulate. These tensions have a way of becoming the most visible in the protagonist, whose basic purpose is to keep the action clocking along. As I’ve noted here before, there’s a reason why the main character is often the least interesting person in sight. The show’s lead is under such pressure to advance the plot that he or she becomes reduced to the diagram of a pattern of forces, like one of the fish in D’Arcy Wentworth Thompson’s On Growth and Form, in which the animal’s physical shape is determined by the outside stresses to which it has been subjected. Every action exists to fulfill some larger purpose, which often results in leads who are boringly singleminded, with no room for the tangents that can bring supporting players to life. The characters at the center have to constantly triangulate between action, motivation, and relatability, which can drain them of all surprise. And if the story ever relaxes its hold, they burst, like sea creatures brought up from a crevasse to the surface.

This is true of most shows that rely heavily on plot twists and momentum—it became a huge problem for The Vampire Diaries—but it’s even more of an issue when a series is also trying to play tricks with structure and time. Westworld has done more than any other television drama that I can remember to push against the constraints of chronology, and the results are often ingenious. Yet they come at a price. (As the screenwriter Robert Towne put it in a slightly different context: “You end up paying for it with an almost mathematical certainty.”) And the victim, not surprisingly, has been the ostensible lead. Over a year and a half ago, when the first season was still unfolding, I wrote that Dolores, for all her problems, was the engine that drove the story, and that her gradual movement toward awareness was what gave the series its narrative thrust. I continued:

This is why I’m wary of the popular fan theory, which has been exhaustively discussed online, that the show is taking place in different timelines…Dolores’s story is the heart of the series, and placing her scenes with William three decades earlier makes nonsense of the show’s central conceit: that Dolores is slowly edging her way toward greater self-awareness because she’s been growing all this time. The flashback theory implies that she was already experiencing flashes of deeper consciousness almost from the beginning, which requires us to throw out most of what we know about her so far…It has the advantage of turning William, who has been kind of a bore, into a vastly more interesting figure, but only at the cost of making Dolores considerably less interesting—a puppet of the plot, rather than a character who can drive the narrative forward in her own right.

As it turned out, of course, that theory was totally on the mark, and I felt a little foolish for having doubted it for so long. But on a deeper level, I have to give myself credit for anticipating the effect that it would have on the series as a whole. At the time, I concluded: “Dolores is such a load-bearing character that I’m worried that the show would lose more than it gained by the reveal…The multiple timeline theory, as described, would remove the Dolores we know from the story forever. It would be a fantastic twist. But I’m not sure the show could survive it.” And that’s pretty much what happened, although it took another season to clarify the extent of the damage. On paper, Dolores was still the most important character, and Evan Rachel Wood deservedly came first in the credits. But in order to preserve yet another surprise, the show had to be maddeningly coy about what exactly she was doing, even as she humorlessly pursued her undefined mission. Every line was a cryptic hint about what was coming, and the payoff was reasonably satisfying. But I don’t know if it was worth it. Offhand, I can’t recall another series in which an initially engaging protagonist was reduced so abruptly to a plot device, and it’s hard not to blame the show’s conceptual and structural pretensions, which used Dolores as a valve for the pressure that was occurring everywhere else but at its center. It’s frankly impossible for me to imagine what Dolores would even look like if she were relaxing or joking around or doing literally anything except persisting grimly in her roaring rampage of revenge. Because of the nature of its ambitions, Westworld can’t give her—or any of its characters—the freedom to act outside the demands of the story. It’s willing to let its hosts be reprogrammed in any way that the plot requires. Which you’ve got to admit is kind of ironic.

None of this would really matter if the payoffs were there, and there’s no question that last night’s big reveal about Charlotte is an effective one. (Unfortunately, it comes at the expense of Tessa Thompson, who, like Wood, has seemed wasted throughout the entire season for reasons that have become evident only now.) But the more I think about it, the more I feel that this approach might be inherently unsuited for a season of television that runs close to twelve hours. When a conventional movie surprises us with a twist at the end, part of the pleasure is mentally rewinding the film to see how it plays in light of the closing revelation—and much of the genius of Memento, which was based on Jonathan Nolan’s original story, was that it allowed us to do this every ten minutes. Yet as Westworld itself repeatedly points out, there’s only so much information or complexity that the human mind can handle. I’m a reasonably attentive viewer, but I often struggled to recall what happened seven episodes ago, and the volume of data that the show presents makes it difficult to check up on any one point. Now that the series is over, I’m sure that if I revisited the earlier episodes, many scenes would take on an additional meaning, but I just don’t have the time. And twelve hours may be too long to make viewers wait for the missing piece that will lock the rest into place, especially when it comes at the expense of narrative interest in the meantime, and when anything truly definitive will need to be withheld for the sake of later seasons. It’s to the credit of Westworld and its creators that there’s little doubt that they have a master plan. They aren’t making it up as they go along. But this also makes it hard for the characters to make anything of themselves. None of us, the show implies, is truly in control of our actions, which may well be the case. But a work of art, like life itself, doesn’t seem worth the trouble if it can’t convince us otherwise.

Written by nevalalee

June 25, 2018 at 8:42 am

The president is collaborating

leave a comment »

Last week, Bill Clinton and James Patterson released their collaborative novel The President Is Missing, which has already sold something like a quarter of a million copies. Its publication was heralded by a lavish two-page spread in The New Yorker, with effusive blurbs from just about everyone whom a former president and the world’s bestselling author might be expected to get on the phone. (Lee Child: “The political thriller of the decade.” Ron Chernow: “A fabulously entertaining thriller.”) If you want proof that the magazine’s advertising department is fully insulated from its editorial side, however, you can just point to the fact that the task of reviewing the book itself was given to Anthony Lane, who doesn’t tend to look favorably on much of anything. Lane’s style—he has evidently never met a smug pun or young starlet he didn’t like—can occasionally turn me off from his movie reviews, but I’ve always admired his literary takedowns. I don’t think a month goes by that I don’t remember his writeup of the New York Times bestseller list of May 15, 1994, which allowed him to tackle the likes of The Bridges of Madison County, The Celestine Prophecy, and especially The Day After Tomorrow by Allan Folsom, from which he quoted a sentence that permanently changed my view of such novels: “Two hundred European cities have bus links with Frankfurt.” But he seems to have grudgingly liked The President Is Missing. If nothing else, he furnishes a backhanded compliment that has already been posted, hilariously out of context, on Amazon: “If you want to make the most of your late-capitalist leisure-time, hit the couch, crack a Bud, punch the book open, focus your squint, and enjoy.”

The words "hit the couch, crack a Bud, punch the book open, [and] focus your squint" are all callbacks to samples of Patterson's prose that Lane quotes in the review, but the phrase "late-capitalist leisure-time" might require some additional explanation. It's a reference to the paper "Structure over Style: Collaborative Authorship and the Revival of Literary Capitalism," which appeared last year in Digital Humanities Review, and I'm grateful to Lane for bringing it to my attention. The authors, Simon Fuller and James O'Sullivan, focus on the factory model of novelists who employ ghostwriters to boost their productivity, and their star exhibit is Patterson, to whom they devote the same kind of computational scrutiny that has previously uncovered traces of collaboration in Shakespeare. Not surprisingly, it turns out that Patterson doesn't write most of the books that he ostensibly coauthors. (He may not even have done much of the writing on First to Die, which credits him as the sole writer.) But the paper is less interesting for its quantitative analysis than for its qualitative evaluation of what Patterson tells us about how we consume and enjoy fiction. For instance:

The form of [Patterson’s] novels also appears to be molded by contemporary experience. In particular, his work is perhaps best described as “commuter fiction.” Nicholas Paumgarten describes how the average time for a commute has significantly increased. As a result, reading has increasingly become one of those pursuits that can pass the time of a commute. For example, a truck driver describes how “he had never read any of Patterson’s books but that he had listened to every single one of them on the road.” A number of online reader reviews also describe Patterson’s writing in terms of their commutes…With large print, and chapters of two or three pages, Patterson’s works are constructed to fit between the stops on a metro line.

Of course, you could say much the same of many thrillers, particularly the kind known as the airport novel, which wasn’t just a book that you read on planes—at its peak, it was one in which many scenes took place in airports, which were still associated with glamor and escape. What sets Patterson apart from his peers is his ability to maintain a viable brand while publishing a dozen books every year. His productivity is inseparable from his use of coauthors, but he wasn’t the first. Fuller and O’Sullivan cite the case of Alexandre Dumas, who allegedly boasted of having written four hundred novels and thirty-five plays that had created jobs for over eight thousand people. And they dig up a remarkable quote from The German Ideology by Karl Marx and Friedrich Engels, who “favorably compare French popular fiction to the German, paying particular attention to the latter’s appropriation of the division of labor”:

In proclaiming the uniqueness of work in science and art, [Max] Stirner adopts a position far inferior to that of the bourgeoisie. At the present time it has already been found necessary to organize this “unique” activity. Horace Vernet would not have had time to paint even a tenth of his pictures if he regarded them as works which “only this Unique person is capable of producing.” In Paris, the great demand for vaudevilles and novels brought about the organization of work for their production, organization which at any rate yields something better than its “unique” competitors in Germany.

These days, you could easily imagine Marx and Engels making a similar case about film, by arguing that the products of collaboration in Hollywood have often been more interesting, or at least more entertaining, than movies made by artists working outside the system. And they might be right.

The analogy to movies and television seems especially appropriate in the case of Patterson, who has often drawn such comparisons himself, as he once did to The Guardian: “There is a lot to be said for collaboration, and it should be seen as just another way to do things, as it is in other forms of writing, such as for television, where it is standard practice.” Fuller and O’Sullivan compare Patterson’s brand to that of Alfred Hitchcock, whose name was attached to everything from Dell anthologies to The Three Investigators to Alfred Hitchcock’s Mystery Magazine. It’s a good parallel, but an even better one might be hiding in plain sight. In her recent profile of the television producer Ryan Murphy, Emily Nussbaum evokes an ability to repackage the ideas of others that puts even Patterson to shame:

Murphy is also a collector, with an eye for the timeliest idea, the best story to option. Many of his shows originate as a spec script or as some other source material. (Murphy owned the rights to the memoir Orange Is the New Black before Jenji Kohan did, if you want to imagine an alternative history of television.) Glee grew out of a script by Ian Brennan; Feud began as a screenplay by Jaffe Cohen and Michael Zam. These scripts then get their DNA radically altered and replicated in Murphy’s lab, retooled with his themes and his knack for idiosyncratic casting.

Murphy’s approach of retooling existing material in his own image might be even smarter than Patterson’s method of writing outlines for others to expand, and he’s going to need it. Two months ago, he signed an unprecedented $300 million contract with Netflix to produce content of all kinds: television shows, movies, documentaries. And another former president was watching. While Bill Clinton was working with Patterson, Barack Obama was finalizing a Netflix deal of his own—and if he needs a collaborator, he doesn’t have far to look.

The Prime of Miss Elizabeth Hoover

Yesterday, as I was working on my post for this blog, I found myself thinking about the first time that I ever heard of Lyme disease, which, naturally, was on The Simpsons. In the episode “Lisa’s Substitute,” which first aired on April 25, 1991, Lisa’s teacher, Miss Hoover, tells the class: “Children, I won’t be staying long. I just came from the doctor, and I have Lyme disease.” As Principal Skinner cheerfully explains: “Lyme disease is spread by small parasites called ‘ticks.’ When a diseased tick attaches itself to you, it begins sucking your blood. Malignant spirochetes infect your bloodstream, eventually spreading to your spinal fluid and on into the brain.” At the end of the second act, however, Miss Hoover unexpectedly returns, and I’ve never forgotten her explanation for her sudden recovery:

Miss Hoover: You see, class, my Lyme disease turned out to be psychosomatic.
Ralph: Does that mean you’re crazy?
Janie: It means she was faking it.
Miss Hoover: No, actually, it was a little of both. Sometimes, when a disease is in all the magazines and on all the news shows, it’s only natural that you think you have it.

And while it might seem excessive to criticize a television episode that first aired over a quarter of a century ago, it’s hard to read these lines after Porochista Khakpour’s memoir Sick without wishing that this particular joke didn’t exist.

In its chronic form, Lyme disease remains controversial, but like chronic fatigue syndrome and fibromyalgia, it’s an important element in the long, complicated history of women having trouble finding doctors who will take their pain seriously. As Lidija Haas writes in The New Yorker:

There’s a class of illnesses—multi-symptomatic, chronic, hard to diagnose—that remain associated with suffering women and disbelieving experts. Lyme disease, symptoms of which can afflict patients years after the initial tick bite, appears to be one…[The musician Kathleen Hanna] describes an experience common to many sufferers from chronic illness—that of being dismissed as an unreliable witness to what is happening inside her. Since no single medical condition, a doctor once told her, could plausibly affect so many different systems—neurological, respiratory, gastrointestinal—she must be having a panic attack…As in so many other areas of American life, women of color often endure the most extreme versions of this problem.

It goes without saying that when “Lisa’s Substitute” was written, there weren’t any women on the writing staff of The Simpsons, although even if there were, it might not have made a difference. In her recent memoir Just the Funny Parts, Nell Scovell, who worked as a freelance television writer in the early nineties, memorably describes the feeling of walking into the “all-male” Simpsons writing room, which was “welcoming, but also intimidating.” It’s hard to imagine these writers, so many of them undeniably brilliant, thinking twice about making a joke like this—and it’s frankly hard to see them rejecting it now, when it might only lead to attacks from people who, in Matt Groening’s words, “love to pretend they’re offended.”

I’m not saying that there are any subjects that should be excluded from comedic consideration, or that The Simpsons can’t joke about Lyme disease. But as I look back at the classic years of my favorite television show of all time, I’m starting to see a pattern that troubles me, and it goes far beyond Apu. I’m tempted to call it “punching down,” but it’s worse. It’s a tendency to pick what seem at the time like safe targets, and to focus with uncanny precision on comic gray areas that allow for certain forms of transgression. I know that I quoted this statement just a couple of months ago, but I can’t resist repeating what producer Bill Oakley says of Greg Daniels’s pitch about an episode on racism in Springfield:

Do you remember this? Something about Homer and Dr. Hibbert? Well, you pitched it several times and I think we were just…It was some exploration of the concept of race in Springfield, and we just said, you know, we don’t think this is the forum. The Simpsons can’t be the right forum to deal with racism.

He was probably right. But when you look at the few racially charged jokes that the show actually made, the characters involved weren't black, but quite specifically "brown," or members of groups that occupy a liminal space in our cultural understanding of race: Apu, Akira, Bumblebee Man. (I know that Akira was technically whiter than anybody else, but you get my drift.) By contrast, the show was very cautious when it came to its black characters. Apart from Dr. Hibbert, who was derived from Bill Cosby, the show's only recurring black faces were Carl and Officer Lou, the latter of whom is so unmemorable that I had to look him up to make sure that he wasn't Officer Eddie. And both Carl and Lou were given effectively the same voice by Hank Azaria, the defining feature of which was that it was as nondescript as humanly possible.

I’m not necessarily criticizing the show’s treatment of race, but the unconscious conservatism that carefully avoided potentially controversial areas while lavishing attention on targets that seemed unobjectionable. It’s hard to imagine a version of the show that would have dared to employ such stereotypes, even ironically, on Carl, Lou, or even Judge Snyder, who was so racially undefined that he was occasionally colored as white. (The show’s most transgressive black figures, Drederick Tatum and Lucius Sweet, were so transparently modeled on real people that they barely even qualified as characters. As Homer once said: “You know Lucius Sweet? He’s one of the biggest names in boxing! He’s exactly as rich and as famous as Don King, and he looks just like him, too!” And I’m not even going to talk about “Bleeding Gums” Murphy.) That joke about Miss Hoover is starting to feel much the same way, and if it took two decades for my own sensibilities to catch up with that fact, it’s for the same reasons that we’re finally taking a harder look at Apu. And if I speak as a fan, it isn’t to qualify these thoughts, but to get at the heart of why I feel obliged to write about them at all. We’re all shaped by popular culture, and I can honestly say of The Simpsons, as Jack Kerouac writes in On the Road: “All my actions since then have been dictated automatically to my subconscious by this horrible osmotic experience.” The show’s later seasons are reflexively dismissed as lazy, derivative, and reliant on easy jokes, but we still venerate its golden years. Yet if The Simpsons has gradually degraded under the watch of many of its original writers and producers, this implies that we’re only seeing the logical culmination—or eruption—of something that was there all along, afflicting its viewers years after the original bite. We all believed that The Simpsons, in its prime, was making us smarter. But what if it was just psychosomatic?

A season of disenchantment

A few days ago, Matt Groening announced that his new animated series, Disenchantment, will premiere in August on Netflix. Under other circumstances, I might have been pleased by the prospect of another show from the creator of The Simpsons and Futurama—not to mention producers Bill Oakley and Josh Weinstein—and I expect that I'll probably watch it. At the moment, however, it's hard for me to think about Groening at all without recalling his recent reaction to the long overdue conversation around the character of Apu. When Bill Keveney of USA Today asked earlier this month if he had any thoughts on the subject, Groening replied: "Not really. I'm proud of what we do on the show. And I think it's a time in our culture where people love to pretend they're offended." It was a profoundly disappointing statement, particularly after Hank Azaria himself had expressed his willingness to step aside from the role, and it was all the more disillusioning coming from a man whose work has been a part of my life for as long as I can remember. As I noted in my earlier post, the show's unfeeling response to this issue is painful because it contradicts everything that The Simpsons was once supposed to represent. It was the smartest show on television; it was simply right about everything; it offered its fans an entire metaphorical language. And as the passage of time reveals that it suffered from its own set of blinders, it doesn't just cast doubt on the series and its creators, but on the viewers, like me, who used it for so long as an intellectual benchmark.

And it’s still an inescapable part of my personal lexicon. Last year, for instance, when Elon Musk defended his decision to serve on Trump’s economic advisory council, I thought immediately of what Homer says to Marge in “Whacking Day”: “Maybe if I’m part of that mob, I can help steer it in wise directions.” Yet it turns out that I might have been too quick to give Musk—who, revealingly, was the subject of an adulatory episode of The Simpsons—the benefit of the doubt. A few months later, in response to reports of discrimination at Tesla, he wrote an email to employees that included this remarkable paragraph:

If someone is a jerk to you, but sincerely apologizes, it is important to be thick-skinned and accept that apology. If you are part of a lesser represented group, you don’t get a free pass on being a jerk yourself. We have had a few cases at Tesla were someone in a less represented group was actually given a job or promoted over more qualified highly represented candidates and then decided to sue Tesla for millions of dollars because they felt they weren’t promoted enough. That is obviously not cool.

The last two lines, which were a clear reference to the case of A.J. Vandermeyden, tell us more about Musk’s idea of a “sincere apology” than he probably intended. And when Musk responded this week to criticism of Tesla’s safety and labor practices by accusing the nonprofit Center for Investigative Reporting of bias and proposing a site where users could provide a “credibility score” for individual journalists, he sounded a lot like the president whose circle of advisers he only reluctantly left.

Musk, who benefited from years of uncritical coverage from people who will forgive anything as long as you talk about space travel, seems genuinely wounded by any form of criticism or scrutiny, and he lashes out just as Trump does—by questioning the motives of ordinary reporters or sources, whom he accuses of being in the pocket of unions or oil companies. Yet he’s also right to be worried. We’re living in a time when public figures and institutions are going to be judged by their responses to questions that they would rather avoid, which isn’t likely to change. And the media itself is hardly exempt. For the last two weeks, I’ve been waiting for The New Yorker to respond to stories about the actions of two of its most prominent contributors, Junot Díaz and the late David Foster Wallace. I’m not even sure what I want the magazine to do, exactly, except make an honest effort to grapple with the situation, and maybe even offer a valuable perspective, which is why I read it in the first place. (In all honesty, it fills much the same role in my life these days as The Simpsons did in my teens. As Norman Mailer wrote back in the sixties: “Hundreds of thousands, perhaps millions of people in the most established parts of the middle class kill their quickest impulses before they dare to act in such a way as to look ridiculous to the private eye of their taste whose style has been keyed by the eye of The New Yorker.”) As the days passed without any comment, I assumed that it was figuring out how to tackle an admittedly uncomfortable topic, and I didn’t expect it to rush. Now that we’ve reached the end of the month without any public engagement at all, however, I can only conclude that it’s deliberately ignoring the matter in hopes that it will go away. I hope that I’m wrong. But so far, it’s a discouraging omission from a magazine whose stories on Harvey Weinstein and Eric Schneiderman implicitly put it at the head of an entire movement.

The New Yorker has evidently discovered that it's harder to take such stands when they affect people whom we know or care about—which only means that it can get in line. Our historical moment has forced some of our smartest individuals and organizations to learn how to take criticism as well as to give it, and it's often those whose observations about others have been the sharpest who turn out to be singularly incapable, as Clarice Starling once put it, when it comes to pointing that high-powered perception at themselves. (In this list, which is constantly being updated, I include Groening, Musk, The New Yorker, and about half the cast of Arrested Development.) But I can sympathize with their predicament, because I feel it nearly every day. My opinion of Musk has always been rather mixed, but nothing can dislodge the affection and gratitude that I feel toward the first eight seasons of The Simpsons, and I expect to approvingly link to an article in The New Yorker this time next week. But if our disenchantment forces us to question the icons whose influence is fundamental to our conception of ourselves, then perhaps it will have been worth the pain. Separating our affection for the product from those who produced it is a problem that we all have to confront, and it isn't going to get any easier. As I was thinking about this post yesterday, the news broke that Morgan Freeman had been accused by multiple women of inappropriate behavior. In response, he issued a statement that read in part: "I apologize to anyone who felt uncomfortable or disrespected." It reminded me a little of another man who once grudgingly said of some remarks that were caught on tape: "I apologize if anyone was offended." But it sounds a lot better when you imagine it in Morgan Freeman's voice.

Written by nevalalee

May 25, 2018 at 9:21 am

The bedtime story

Earlier this morning, I finally got my hands on the companion book to James Cameron’s Story of Science Fiction, which is airing this month on AMC. Naturally, I immediately looked for references to the four main subjects of Astounding, and the passage that caught my eye first was an exchange between Cameron and Steven Spielberg:

Spielberg: The working title of E.T. was Watch the Skies. Which is sort of the last line from The Thing. I just remember looking at the sky because of the influence of my father, and saying, only good should come from that. If it ain’t an ICBM coming from the Soviet Union, only good should come from beyond our gravitational hold…He was a visionary about that, yet he read all the Analog. Those paperbacks? And Amazing Stories, the paperbacks of that. I used to read that along with him. Sometimes, he’d read those books to me, those little tabloids to me at night.

Cameron: Asimov, Heinlein, all those guys were all published in those pulp magazines.

Spielberg: They were all published in those magazines, and a lot of them were optimists. They weren’t always calculating our doom. They were finding ways to open up our imagination and get us to dream and get us to discover and get us to contribute to the greater good.

The discussion quickly moves on to other subjects, but not before hinting at the solution to a mystery that I've been trying to figure out for years, which is why the influence of Astounding and its authors can be so hard to discern in the work of someone like Spielberg. In part, it's a matter of timing. Spielberg was born in 1946, which means that he would have been thirteen when John W. Campbell announced that his magazine was changing its title to Analog. As a result, at a point at which he should have been primed to devour science fiction, Spielberg doesn't seem to have found its current incarnation all that interesting, for which you can hardly blame him. Instead, his emotional associations with the pulps were evidently passed down through his father, Arnold Spielberg, an electrical engineer who worked for General Electric and RCA. The elder Spielberg, remarkably, is still active at the age of 101, and just two months ago, he said in an interview with GE Reports:

I was also influenced by science fiction. There were twins in our neighborhood who read one of the first sci-fi magazines, called Astounding Stories of Science and Fact. They gave me one copy, and when I brought it home, I was hooked. The magazine is now called Analog Science Fiction and Fact, and I still get it.

And while I don’t think that there’s any way of verifying it, if Arnold Spielberg—the father of Steven Spielberg—isn’t the oldest living subscriber to Analog, he must be close.

This sheds light on his son’s career, although perhaps not in the way that you might think. Spielberg is such a massively important figure that his very existence realigns the history of the genre, and when he speaks of his influences, we need to be wary of the shadow cast by his inescapable personality. But there’s no denying the power—and truth—of the image of Arnold Spielberg reading from the pulps aloud to his son. It feels like an image from one of Spielberg’s own movies, which has been shaped from the beginning by the tradition of oral storytelling. (It’s worth noting, though, that the father might recall things differently than the son. In his biography of the director, Joseph McBride quotes Arnold Spielberg: “I’ve been reading science fiction since I was seven years old, all the way back to the earliest Amazing Stories. Amazing, Astounding, Analog—I still subscribe. I still read ’em. My kids used to complain, ‘Dad’s in the bathroom with a science-fiction magazine. We can’t get in.'”) For Spielberg, the stories seem inextricably linked with the memory of being taken outside by his father to look at the stars:

My father was the one that introduced me to the cosmos. He’s the one who built—from a big cardboard roll that you roll rugs on—a two-inch reflecting telescope with an Edmund Scientific kit that he had sent away for. [He] put this telescope together, and then I saw the moons of Jupiter. It was the first thing he pointed out to me. I saw the rings of Saturn around Saturn. I’m six, seven years old when this all happened.

Spielberg concludes: “Those were the stories, and just looking up at the sky, that got me to realize, if I ever get a chance to make a science fiction movie, I want those guys to come in peace.”

But it also testifies to the ways in which a strong personality will take exactly what it needs from its source material. Elsewhere in the interview, there’s another intriguing reference:

Spielberg: I always go for the heart first. Of course, sometimes I go for the heart so much I get a little bit accused of sentimentality, which I’m fine [with] because…sometimes I need to push it a little further to reach a little deeper into a society that is a little less sentimental than they were when I was a young filmmaker.

Cameron: You pushed it in the same way that John W. Campbell pushed science fiction [forward] from the hard-tech nerdy guys who had to put PhD after their name to write science fiction. It was all just about the equations and the math and the physics [and evolved to become much more] human stories [about] the human heart.

I see what Cameron is trying to say here, but if you’ve read enough of the magazine that turned into Analog, this isn’t exactly the impression that it leaves. It’s true that Campbell put a greater emphasis than most of his predecessors on characterization, at least in theory, but the number of stories that were about “the human heart” can be counted on two hands, and none were exactly Spielbergian—although they might seem that way when filtered through the memory of his father’s voice. And toward the end, the nerds took over again. In Dangerous Visions, which was published in 1967, Harlan Ellison wrote of “John W. Campbell, Jr., who used to edit a magazine that ran science fiction, called Astounding, and who now edits a magazine that runs a lot of schematic drawings, called Analog.” It was the latter version of the magazine that Spielberg would have seen as a boy—which may be why, when the time came, he made a television show called Amazing Stories.

The multiverse theory

Yesterday, I flew back from the Grappling with the Futures symposium, which was held over the course of two days at Harvard and Boston University. I’d heard about the conference from my friend Emanuelle Burton, a scholar at the University of Illinois at Chicago, whom I met two years ago through the academic track at the World Science Fiction Convention in Kansas City. Mandy proposed that we collaborate on a presentation at this event, which was centered on the discipline of futures studies, a subject about which I knew nothing. For reasons of my own, though, I was interested in making the trip, and we put together a talk titled Fictional Futures, which included a short history of the concept of psychohistory. The session went fine, even if we ended up with more material than we could reasonably cover in twenty minutes. But I was equally interested in studying the people around me, who were uniformly smart, intense, quirky, and a little mysterious. Futures studies is an established academic field that draws on many of the tools and concepts of science fiction, but it uses a markedly different vocabulary. (One of the scheduled keynote speakers has written and published a climate change novella, just like me, except that she describes it as a “non-numerical simulation model.”) It left me with the sense of a closed world that evolved in response to the same problems and pressures that shaped science fiction, but along divergent lines, and I still wonder what might come of a closer relationship between the two communities.

As it happened, I had to duck out after the first day, because I had something else to do in Boston. Ever since I started work on Astounding, I’ve been meaning to pay a visit to the Isaac Asimov collection at the Howard Gotlieb Archival Research Center at Boston University, which houses the majority of Asimov’s surviving papers, but which can only be viewed in person. Since I was going to be in town anyway, I left the symposium early and headed over to the library, where I spent five hours yesterday going through what I could. When you arrive at the reading room, you sign in, check your bag and cell phone, and are handed a massive finding aid, an inventory of the Asimov collection that runs to more than three hundred pages. (The entire archive, which consists mostly of work that dates from after the early sixties, fills four hundred boxes.) After marking off the items that you want, you’re rewarded with a cart loaded with archival cartons and a pair of white gloves. At the back of my mind, I wasn’t expecting to find much—I’ve been gathering material for this book for years. As it turned out, there were well over a hundred letters between Asimov, Campbell, and Heinlein alone that I hadn’t seen before. You aren’t allowed to take pictures or make photocopies, so I typed up as many notes as I could before I had to run to catch my plane. For the most part, they fill out parts of the story that I already have, and they won’t fundamentally change the book. But in an age of digital research, I was struck by the fact that all this paper, of which I just scratched the surface, is only accessible to scholars who can physically set foot in the reading room at the Mugar Library.

After two frantic days, I finally made it home, where my wife and I watched last night's premiere of James Cameron's Story of Science Fiction on AMC. At first glance, this series might seem like the opposite of my experiences in Boston. Instead of being set apart from the wider world, it's an ambitious attempt to appeal to the largest audience possible, with interviews with the likes of Steven Spielberg and Christopher Nolan and discussions of such works as Close Encounters and Alien. I've been looking forward to this show for a long time, not least because I was hoping that it would lead to a spike in interest in science fiction that would benefit my book, and the results were more or less what I expected. In the opening sequence, you briefly glimpse Heinlein and Asimov, and there's even a nod to The Thing From Another World, although no mention of John W. Campbell himself. For the most part, though, the series treats the literary side as a precursor to its incarnations in the movies and television, which is absolutely the right call. You want to tell this story as much as possible through images, and the medium lends itself better to H.R. Giger than to H.P. Lovecraft. But when I saw a brief clip of archival footage of Ray Bradbury, in his role in the late seventies as an ambassador for the genre, I found myself thinking of the Bradbury whom I know best—the eager, unpublished teenager in the Great Depression who wrote fan letters to the pulps, clung to the edges of the Heinlein circle, and never quite managed to break into Astounding. It's a story that this series can't tell, and I can't blame it, because I didn't really do it justice, either.

Over the last few days, I’ve been left with a greater sense than ever before of the vast scope and apparently irreconcilable aspects of science fiction, which consists of many worlds that only occasionally intersect. It’s a realization, or a recollection, that might seem to come at a particularly inopportune time. The day before I left for the symposium, I received the page proofs for Astounding, which normally marks the point at which a book can truly be said to be finished. I still have time to make a few corrections and additions, and I plan to fix as much of it as I can without driving my publisher up the wall. (There are a few misplaced commas that have been haunting my dreams.) I’m proud of the result, but when I look at the proofs, which present the text as an elegant and self-contained unit, it seems like an optical illusion. Even if I don’t take into account what I learned when it was too late, I’m keenly aware of everything and everyone that this book had to omit. I’d love to talk more about futures studies, or the letters that I dug up in the Asimov archives, or the practical effects in John Carpenter’s remake of The Thing, but there just wasn’t room or time. As it stands, the book tries to strike a balance between speaking to obsessive fans and appealing to a wide audience, which meant excluding a lot of fascinating material that might have survived if it were being published by a university press. It can’t possibly do everything, and the events of the weekend have only reminded me that there are worlds that I’ve barely even explored. But if that isn’t the whole point of science fiction—well, what is?

Into the West

leave a comment »

A few months ago, I was on the phone with a trusted adviser to discuss some revisions to Astounding. We were focusing on the prologue, which I had recently rewritten from scratch to make it more accessible to readers who weren’t already fans of science fiction. Among other things, I’d been asked to come up with ways in which the impact of my book’s four subjects was visible in modern pop culture, and after throwing some ideas back and forth, my adviser asked me plaintively: “Couldn’t you just say that without John W. Campbell, we wouldn’t have Game of Thrones?” I was tempted to give in, but I ultimately didn’t—it just felt like too much of a stretch. (Which isn’t to say that the influence isn’t there. When a commenter on his blog asked whether his work had been inspired by the mythographer Joseph Campbell, George R.R. Martin replied: “The Campbell that influenced me was John W., not Joseph.” And that offhand comment was enough of a selling point that I put it in the very first sentence of my book proposal.) Still, I understood the need to frame the story in ways that would resonate with a mainstream readership, and I thought hard about what other reference points I could honestly provide. Star Trek was an easy one, along with such recent movies as Interstellar and The Martian, but the uncomfortable reality is that much of what we call science fiction in film and television has more to do with Star Wars. But I wanted to squeeze in one last example, and I finally settled on this line about Campbell: “For more than three decades, an unparalleled series of visions of the future passed through his tiny office in New York, where he inaugurated the main sequence of science fiction that runs through works from 2001 to Westworld.”

As the book is being set in type, I’m still comfortable with this sentence as it stands, although there are a few obvious qualifications that ought to be made. Westworld, of course, is based on a movie written and directed by Michael Crichton, whose position in the history of the genre is a curious one. As I’ve written elsewhere, Crichton was an unusually enterprising author of paperback thrillers who found himself with an unexpected blockbuster in the form of The Andromeda Strain. It was his sixth novel, and his first in hardcover, and it seems to have benefited enormously from the input of editor Robert Gottlieb, who wrote in his memoir Avid Reader:

The Andromeda Strain was a terrific concept, but it was a mess—sloppily plotted, underwritten, and worst of all, with no characterization whatsoever. [Crichton’s] scientists were beyond generic—they lacked all human specificity; the only thing that distinguished some of them from the others was that some died and some didn’t. I realized right away that with his quick mind, swift embrace of editorial input, and extraordinary work habits he could patch the plot, sharpen the suspense, clarify the science—in fact, do everything necessary except create convincing human beings. (He never did manage to; eventually I concluded that he couldn’t write about people because they just didn’t interest him.) It occurred to me that instead of trying to help him strengthen the human element, we could make a virtue of necessity by stripping it away entirely; by turning The Andromeda Strain from a documentary novel into a fictionalized documentary. Michael was all for it—I think he felt relieved.

The result, to put it mildly, did quite well, and Crichton quickly put its lessons to work. But it’s revealing that the flaws that Gottlieb cites—indifferent plotting, flat writing, and a lack of real characterization—are also typical of even some of the best works of science fiction that came out of Campbell’s circle. Crichton’s great achievement was to focus relentlessly on everything else, especially readability, and it’s fair to say that he did a better job of it than most of the writers who came up through Astounding and Analog. He was left with the reputation of a carpetbagger, and his works may have been too square and fixated on technology to ever be truly fashionable. Yet a lot of it can be traced back to his name on the cover. In his story “Pierre Menard, Author of the Quixote,” Jorge Luis Borges speaks of enriching “the slow and rudimentary act of reading by means of a new technique—the technique of deliberate anachronism and fallacious attribution.” In this case, it’s pretty useful. I have a hunch that if The Terminal Man, Congo, and Sphere had been attributed on their first release to Robert A. Heinlein, they would be regarded as minor classics. They’re certainly better than many of the books that Heinlein was actually writing around the same time. And if I’m being honest, I should probably confess that I’d rather read Jurassic Park again than any of Asimov’s novels. (As part of my research for this book, I dutifully made my way through Asimov’s novelization of Fantastic Voyage, which came out just three years before The Andromeda Strain, and his fumbling of that very Crichtonesque premise only reminded me of how good at this sort of thing Crichton really was.) If Crichton had been born thirty years earlier, John W. Campbell would have embraced him like a lost son, and he might well have written a better movie than Destination Moon.

At its best, the television version of Westworld represents an attempt to reconcile Crichton's gifts for striking premises and suspense with the more introspective mode of the genre to which he secretly belongs. (It's no accident that Jonathan Nolan had been developing it in parallel with Foundation.) This balance hasn't always been easy to manage, and last night's premiere suggests that it can only become more difficult going forward. Westworld has always seemed defined by the pattern of forces that were acting on it—its source material, its speculative and philosophical ambitions, and the pressure of being a flagship drama on HBO. It also has to deal now with the legacy of its own first season, which set a precedent for playing with time, as well as the scrutiny of viewers who figured it out prematurely. The stakes here are established early on, with Bernard awakening on a beach in a sequence that seems like a nod to the best film by Nolan's brother, and this time around, the parallel timelines are put front and center. Yet the strain occasionally shows. The series is still finding itself, with characters, like Dolores, who seem to be thinking through their story arcs out loud. It's overly insistent on its violence and nudity, but it's also cerebral and detached, with little possibility of the real emotional pain that the third season of Twin Peaks was able to inflict. I don't know if the center will hold. Yet it's also possible that these challenges were there from the beginning, as the series tried to reconcile Crichton's tricks with the tradition of science fiction that it clearly honors. I still believe that this show is in the main line of the genre's development. Its efforts to weave together its disparate influences strike me as worthwhile and important. And I hope that it finds its way home.

Foundation and Hollywood

with 2 comments

Yesterday, the news broke that Isaac Asimov’s Foundation trilogy will finally be adapted for television. I’ve learned to be skeptical of such announcements, but the package that they’ve assembled sounds undeniably exciting. As we learn from an article in The Wrap:

HBO and Warner Bros. TV are teaming to produce a series based on Isaac Asimov‘s Foundation trilogy that will be written and produced by Interstellar writer Jonathan Nolan…Nolan, who is already working with HBO on Westworld, has been quietly developing the project for the last several months. He recently tipped his hand to Indiewire, which asked him: “What’s the one piece of science fiction you truly love that people don’t know enough about?” [Nolan replied:] “Well, I fucking love the Foundation novels by Isaac Asimov…That’s a set of books I think everyone would benefit from reading.”

Whoops, my mistake—that’s a story from two years ago. The latest attempt will be developed by David S. Goyer and Josh Friedman for Apple, which acquired it from Skydance Television in what Deadline describes as “a competitive situation.” And when you turn back the clock even further, you find that efforts to adapt the trilogy were made in the nineties by New Line Cinema, which went with The Lord of the Rings instead, and even by Roland Emmerich, who might be the last director whom you’d entrust with this material. There were probably other projects that have been long since forgotten. And it doesn’t take a psychohistorian to realize that the odds are stacked against this new version ever seeing the light of day.

Why has the Foundation series remained so alluring to Hollywood, yet so resistant to adaptation? For a clue, we can turn to Asimov himself. In the early eighties, he was approached by Doubleday to write his first new novel in years, and an editor laid out the situation in no uncertain terms: “Listen, Isaac, let me make it clear. When [editor Betty Prashker] said ‘a novel,’ she meant ‘a science fiction novel,’ and when we say ‘a science fiction novel,’ we mean ‘a Foundation novel.’ That’s what we want.” Asimov was daunted, but the offer was too generous to refuse, so he decided to give it a try. As he recounts in his memoir I. Asimov:

Before I got started, I would have to reread the Foundation trilogy. This I approached with a certain horror…I couldn’t help noticing, of course, that there was not very much action in it. The problems and resolutions thereof were expressed primarily in dialogue, in competing rational discussions from different points of view, with no clear indication to the reader which view was right and which was wrong.

This didn't mean that the trilogy wasn't engaging—Asimov thought that "it was a page-turner," and when he was done, he was surprised by his personal reaction: "I experienced exactly what readers had been telling me for decades—a sense of fury that it was over and there was no more." But if you're looking to adapt it into another medium, you quickly find that there isn't a lot there in terms of conventional drama or excitement. As Omar Sharif once said about Lawrence of Arabia: "If you are the man with the money and somebody comes to you and says he wants to make a film…with no stars, and no women, and no love story, and not much action either…what would you say?"

In fact, it’s hard to pin down exactly what the Foundation series—or at least the first book—has to offer the movies or television. Speaking as a fan, I can safely state that it doesn’t have memorable characters, iconic scenes, or even much in the way of background. If I were hired to adapt it, I might react in much the same way that William Goldman did when he worked on the movie version of Maverick. Goldman confesses in Which Lie Did I Tell? that his reasons for taking the assignment were simple: “I knew it would be easy…The last thing in life I wanted was to try another original. This adaptation had to be a breeze—all I needed to do was pick out one of the old [episodes] that had too much plot, expand it, and there would be the movie.” He continues:

One of the shocks of my life happened in my living room, where I spent many hours looking at the old Maverick shows I’d been sent. Because, and this was the crusher, television storytelling has changed…Not only was the [James] Garner character generally passive, there was almost no plot at all. Nothing for me to steal. I essentially had to write, sob, another original.

Similarly, the Foundation series gives a writer almost nothing to steal. Once you get to “The Mule,” the action picks up considerably, but that’s obviously your second—or even your third—season, not your first. In the meantime, you’re left with the concept of psychohistory and nothing else. You have to write another original. Which is essentially what happened with I, Robot.

And even psychohistory can be more trouble than it might be worth. It works most convincingly over the course of years or decades, which isn't a timeframe that lends itself to movies or television, and it naturally restricts the ability of the characters to take control of the story. Which isn't to say that it's impossible. (In fact, I have some decent ideas of my own, but I'll keep them to myself, in case Goyer and Friedman ever want to take a meeting. My schedule is pretty packed at the moment, but it frees up considerably in a few months.) But it's worth asking why the Foundation series has been such a tempting target for so long. It's clearly a recognizable property, which is valuable in itself, and its highbrow reputation makes it seem like a promising candidate for a prestige adaptation, although even a glance at the originals shows how deeply they remain rooted in the pulp tradition from which they emerged. If I were a producer looking to move into science fiction with a big acquisition, this would be one of the first places that I'd look, even if these stories aren't exactly what they seem to be—the Deadline article says that they "informed" the Star Wars movies, which is true only in the loosest possible sense. When you combine the apparent value of the material with the practical difficulty of adapting it, you end up with the cycle that we've seen for decades. Asimov was the most famous name in science fiction for thirty years, and his works were almost perpetually under option, but apart from a quickie adaptation of Nightfall, he died before seeing any of it on the screen. He was glad to take the money, but he knew that his particular brand of fiction wouldn't translate well to other media, and he concluded with what he once called Asimov's First Law of Hollywood: "Whatever happens, nothing happens."

Who Needs the Kwik-E-Mart?

leave a comment »

Who needs the Kwik-E-Mart?
Now here’s the tricky part…

“Homer and Apu”

On October 8, 1995, The Simpsons aired the episode “Bart Sells His Soul,” which still hasn’t stopped rattling around in my brain. (A few days ago, my daughter asked: “Daddy, what’s the soul?” I may have responded with some variation on Lisa’s words: “Whether or not the soul is physically real, it’s the symbol of everything fine inside us.” On a more typical morning, though, I’m likely to mutter to myself: “Remember Alf? He’s back—in pog form!”) It’s one of the show’s finest installments, but it came close to being about something else entirely. On the commentary track for the episode, the producer Bill Oakley recalls:

There’s a few long-lived ideas that never made it. One of which is David Cohen’s “Homer the Narcoleptic,” which we’ve mentioned on other tracks. The other one was [Greg Daniels’s] one about racism in Springfield. Do you remember this? Something about Homer and Dr. Hibbert? Well, you pitched it several times and I think we were just…It was some exploration of the concept of race in Springfield, and we just said, you know, we don’t think this is the forum. The Simpsons can’t be the right forum to deal with racism.

Daniels—who went on to create Parks and Recreation and the American version of The Office—went with the pitch for “Bart Sells His Soul” instead, and the other premise evidently disappeared forever, including from his own memory. When Oakley brings it up, Daniels only asks: “What was it?”

Two decades later, The Simpsons has yet to deal with race in any satisfying way, even when the issue seems unavoidable. Last year, the comedian Hari Kondabolu released the documentary The Problem With Apu, which explores the complicated legacy of one of the show’s most prominent supporting characters. On Sunday, the show finally saw fit to respond to these concerns directly, and the results weren’t what anyone—apart perhaps from longtime showrunner Al Jean—might have wanted. As Sopan Deb of the New York Times describes it:

The episode, titled “No Good Read Goes Unpunished,” featured a scene with Marge Simpson sitting in bed with her daughter Lisa, reading a book called “The Princess in the Garden,” and attempting to make it inoffensive for 2018. At one point, Lisa turns to directly address the TV audience and says, “Something that started decades ago and was applauded and inoffensive is now politically incorrect. What can you do?” The shot then pans to a framed picture of Apu at the bedside with the line, “Don’t have a cow!” inscribed on it. Marge responds: “Some things will be dealt with at a later date.” Followed by Lisa saying, “If at all.”

Kondabolu responded on Twitter: "This is sad." And it was. As Linda Holmes of NPR aptly notes: "Apu is not appearing in a fifty-year-old book by a now-dead author. Apu is a going concern. Someone draws him, over and over again." And the fact that the show decided to put these words into the mouth of Lisa Simpson, whose importance to viewers everywhere was recently underlined, makes it doubly disappointing.

But there’s one obvious change that The Simpsons could make, and while it wouldn’t be perfect, it would be a step in the right direction. If the role of Apu were recast with an actor of South Asian descent, it might not be enough in itself, but I honestly can’t see a downside. Hank Azaria would still be allowed to voice dozens of characters. Even if Apu sounded slightly different than before, this wouldn’t be unprecedented—Homer’s voice changed dramatically after the first season, and Julie Kavner’s work as Marge is noticeably more gravelly than it used to be. Most viewers who are still watching probably wouldn’t even notice, and the purists who might object undoubtedly left a long time ago. It would allow the show to feel newsworthy again, and not just on account of another gimmick. And even if we take this argument to its logical conclusion and ask that Carl, Officer Lou, Akira, Bumblebee Man, and all the rest be voiced by actors of the appropriate background, well, why not? (The show’s other most prominent minority character, Dr. Hibbert, seems to be on his way out for other reasons, and he evidently hasn’t appeared in almost two years.) For a series that has systematically undermined its own legacy in every conceivable way out of little more than boredom, it seems shortsighted to cling to the idea that Azaria is the only possible Apu. And even if it leaves many issues unresolved on the writing level, it also seems like a necessary precondition for change. At this late date, there isn’t much left to lose.

Of course, if The Simpsons were serious about this kind of effort, we wouldn't be talking about its most recent episode at all. And the discussion is rightly complicated by the fact that Apu—like everything else from the show's golden age—was swept up in the greatness of those five or six incomparable seasons. Before that unsuccessful pitch on race in Springfield, Greg Daniels was credited for "Homer and Apu," which deserves to be ranked among the show's twenty best episodes, and the week after "Bart Sells His Soul," we got "Lisa the Vegetarian," which gave Apu perhaps his finest moment, as he ushered Lisa to the rooftop garden to meet Paul and Linda McCartney. But the fact that Apu was a compelling character argues not against further change, but in its favor. And what saddens me the most about the show's response is that it undermines what The Simpsons, at its best, was supposed to be. It was the cartoon that dared to be richer and more complex than any other series on the air; it had the smartest writers in the world and a network that would leave them alone; it was just plain right about everything; and it gave us a metaphorical language for every conceivable situation. The Simpsons wasn't just a sitcom, but a vocabulary, and it taught me how to think—or it shaped the way that I do think so deeply that there's no real distinction to be made. As a work of art, it has quietly fallen short in ways both small and large for over fifteen years, but I was able to overlook it because I was no longer paying attention. It had done what it had to do, and I would be forever grateful. But this week, when the show was given the chance to rise again to everything that was fine inside of it, it faltered. Which only tells me that it lost its soul a long time ago.

The believer

leave a comment »

Note: Spoilers follow for “My Struggle IV,” the eleventh season finale of The X-Files

There are times when I think that The X-Files was the most important thing that ever happened to me. I’m not saying that it carries much weight compared to getting married or having a kid, but as far as pop culture is concerned, if you wanted to go back in time and remove just one piece to cause the maximal change in my life, you couldn’t do any better than this. If I had never seen The Red Shoes or read Jorge Luis Borges or even listened to the Pet Shop Boys, I’d be immeasurably poorer for it, but my overall biography would be more or less unchanged. The X-Files, by contrast, was a determining factor in how I spent my time for years. I wrote fanfic throughout high school and college. My first published short story, “Inversus,” was basically a straight casefile with the names changed, and only a timely rejection of my second effort from Analog editor Stanley Schmidt kept me from trying to turn it into a series. Of all the stories that I’ve published since, at least half fall comfortably into that formula. My three novels don’t have any paranormal elements, but they represented a conscious attempt to recover some of the magic of two government agents unraveling a conspiracy, and even Astounding is a project that never would have occurred to me if I hadn’t spent most of my life writing science fiction in one form or another. Which is all to say that if you managed to distract me so that I didn’t watch “Squeeze” on September 24, 1993—or even “Humbug” a year and a half later—most of this goes away, or at least gets transformed into a form so different that I wouldn’t be able to recognize it.

Yet it's also a little embarrassing for me to admit this, not just because The X-Files wasn't always a good show, even in its prime, but also because I don't remember much about it. It had the longest run of any science fiction series in the history of television, with two hundred and eighteen episodes and two feature films. That's a staggering amount of content, and it means that there's more to know about Mulder and Scully, in theory, than about the main characters of any comparable franchise. In practice, that isn't how it worked out. There are maybe two dozen episodes of the series that I plan on watching again, along with about fifty more that I remember fairly well. The rest consist of a single image, a vague impression, a logline, or more often nothing at all. Most of the mytharc, in particular, has disappeared entirely from my memory. And one of the problems with last night's season finale—which probably marks the end of the entire series—is that it assumes that its viewers care about elements that the show flagged as important, but that never really meant anything to the audience. I don't recall much about William, or Mulder's family drama, and I barely even remember Agent Reyes. These are clearly all things that should matter to the characters, and there's no question that the loss of their child was the major event in Mulder and Scully's lives. But it isn't real to me, which is why I spent most of the episode asking myself why it had to be about this at all. (In any case, there's already a perfect finale to the show, and it's called "The Lost Art of Forehead Sweat.")

But the eleventh season as a whole exceeded my expectations to such an extent that I'm grateful it exists. Apart from "Mulder and Scully Meet the Were-Monster," the tenth season was uniformly painful to watch—it left me feeling humiliated that I'd invested so much of my life into this series, and nobody, aside from Gillian Anderson and Darin Morgan, seemed to have any idea what they were doing. This past season had one great episode ("Forehead Sweat") and one that came close ("Rm9sbG93ZXJz"), and apart from the opener and closer, which were disasters, the rest ranged from merely watchable to pretty good. Duchovny looked healthier and more relaxed, there were some nice sentimental moments between the two leads that elevated even routine installments, and there was even an attempt to stir some fresh voices into the mix. The fact that the show seems to be ending now is regrettable, but maybe it's the best possible outcome. And I can even live with the finale, which offers up a winning bingo card of Chris Carter's worst impulses. It separates Mulder and Scully for most of its runtime; it scrambles the chronology for no apparent reason; it dwells on pointless action and violence; it drops every plot thread that it raises; it spoils a nice fakeout by repeating it just a few minutes later; and its idea of a happy ending is having Scully announce that she's pregnant again. ("It's all she's good for," my wife remarked dryly.) But at least it was bad in all the usual ways, without going out of its way to invent new ones, as much of last season did. And as Scully once said about Robert Patrick Modell, I won't let it take up another minute of my time.

But The X-Files is a lot like life itself—which is only to say that my relationship to it maps onto everything else that matters. If the golden age of science fiction is twelve, as the fan Peter Graham allegedly said, then the show came along at just the right time to change me forever. If I had been born a few years earlier or later, or if I had been watching a different network, it might have been something else. As it turned out, I got sucked into a show that lasted for the quarter of a century that happened to coincide with most of my teens, twenties, and thirties. If I don’t remember a lot of it, well, I can’t recall much about college or the first two years of being a father, either. I just have bits and pieces, which are enough to make up my memories. Dana Scully is my favorite character on television, but my picture of her is assembled from the handful of episodes that understood what made her special, rather than the countless others that abused or misused her to an extent that we’re only just starting to acknowledge. I view her from only one angle, as I do with most of the people in my life, and I see what I want to believe. Like Darin Morgan, I’ve come to identify more with Mulder as I’ve gotten older, not as an action hero, but as the guy who started his career in a basement and ended it nowhere in particular. But you also have to imagine Mulder, like Sisyphus, as happy. I can’t sum up The X-Files in one sentence, but these days, I see it as a show about how to relate with intelligence and grace to a world that remains unknowable, indifferent, and too complicated to change. Maybe it starts with finding someone you love. The finale wasn’t about this, of course. But it never really had to be.

Written by nevalalee

March 22, 2018 at 9:01 am

The allure of unknowing

leave a comment »

Although there is no substitute for merit in writing, clarity comes closest to being one. Even to a writer who is being intentionally obscure or wild of tongue we can say, “Be obscure clearly! Be wild of tongue in a way we can understand!”

—E.B. White, The Elements of Style

Last night, while watching the new X-Files episode "Ghouli," which I actually sort of liked, I found myself pondering the ageless question of why this series is so uneven. It isn't as if I haven't wondered about this before. Even during the show's golden years, which I'd roughly describe as its first five seasons, it was hard not to be struck by how often a classic installment was followed a week later by one that was insultingly bad. (This might explain the otherwise inexplicable reference in last week's "The Lost Art of Forehead Sweat" to "Teso dos Bichos," a terrible episode memorable mostly for interrupting the strongest run that the series ever had. As Reggie says: "Guys, if this turns out to be killer cats, I'm going to be very disappointed.") Part of this may be due to the fact that I've watched so many episodes of this show, which had me tuning in every week for years, but I don't think that it's just my imagination. Most series operate within a fairly narrow range of quality, with occasional outliers on both sides, but the worst episodes of The X-Files are bad in ways that don't apply to your average procedural. They aren't simply boring or routine, but confusing, filled with illogical behavior by the main characters, ugly, and incoherent. There are also wild swings within individual episodes, like "Ghouli" itself, which goes so quickly from decent to awful to inexplicable to weirdly satisfying that it made me tired to watch it. And while last season proved that there are worse things than mere unevenness—with one big exception, it consisted of nothing but low points—I think it's still worth asking why this series in particular has always seemed intent on punishing its fans with its sheer inconsistency.

One possible explanation is that The X-Files, despite its two regular leads, was basically an anthology show, which meant that every episode had to start from scratch in establishing a setting, a supporting cast, and even a basic tone. This ability to change the rules from one week to the next was a big part of what made the show exciting, but it also deprived it of the standard safety net—a narrative home base, a few familiar faces in the background—on which most shows unthinkingly rely. It’s a testament to the clarity and flexibility of Chris Carter’s original premise that it ever worked at all, usually thanks to a line or two from Scully, leafing through a folder in the passenger seat of a rental car, to explain why they were driving to a small town in the middle of nowhere. (In fact, this stock opening became less common as the show went on, and it never really found a way to improve on it.) It was also a science fiction and fantasy series, which meant that even the rules of reality tended to change from one installment to another. As a result, much of the first act of every episode was spent in orienting the audience, which represented a loss of valuable screen time that otherwise could have been put to other narrative ends. Watching it reminds us of how much other shows can take for granted. In Bambi vs. Godzilla, David Mamet writes: “When you walk into a bar and see a drama on the television, you’ve missed the exposition. Do you have any trouble whatever understanding what’s going on?” That’s true of most dramas, but not necessarily of The X-Files, in which you could sit through an episode from the beginning and still be lost halfway through. You could make a case that this disorientation was part of its appeal, but it wasn’t a feature. It was a bug.

And the most damning criticism that you can advance against The X-Files is that its narrative sins were routinely overlooked or forgiven by its creators because it was supposedly “about” confusion and paranoia. Early on, the myth arose that this was a series that deliberately left its stories unresolved, in contrast to the tidy conclusions of most procedurals. As the critic Rob Tannenbaum wrote in Details back in the late nineties:

What defines The X-Files is the allure of unknowing: Instead of declaring a mystery and solving it by the end of the show, as Columbo and Father Dowling did, Carter has spent five years showing us everything except the truth. He is a high-concept tease who understands an essential psychological dynamic: The less you give, the more people want. Watching The X-Files is almost an interactive venture. It's incomplete enough to compel viewers to complete the blank parts of the narrative.

This might be true enough of many of the conspiracy episodes, but in the best casefiles, and most of the mediocre ones, there’s really no doubt about what happened. Mulder and Scully might not end up with all of the information, but the viewers usually do, and an episode like “Pusher” or “Ice” is an elegant puzzle without any missing pieces. (Even “Jose Chung’s From Outer Space,” which is explicitly about the failure of definitive explanations, offers a reading of itself that more or less makes sense.) Unfortunately, the blank spaces in the show’s mytharc were also used to excuse errors of clarity and resolution, which in turn encouraged the show to remain messy and unsatisfying for no good reason.

In other words, The X-Files began every episode at an inherent disadvantage, with all of the handicaps of a science fiction anthology show that had to start from nothing each week, as well as a premise that allowed it to explain away its narrative shortcomings as stylistic choices, which wasn’t true of shows like Star Trek or The Twilight Zone. All too often, this was a deadly combination. In an academic study that was published when the show was still on the air, the scholar Jan Delasara writes:

When apprehended consciously, narrative gaps may seem random accidents or continuity errors. Who substitutes the dead dog for Private McAlpin’s corpse in the episode “Fresh Bones?” And why? What did the demon’s first wife remember but not tell her husband in “Terms of Endearment?” Who is conducting the experiment in subliminal suggestion along with chemical phobia enhancement in “Blood?” Is Mulder’s explanation really what’s going on?

Delasara argues that such flaws are the “disturbing gaps and unresolved questions” typical of supernatural horror, but it’s fair to say that in most of these cases, if the writers could have come up with something better, they would have. The X-Files had a brilliant aesthetic that also led to the filming of scripts that never would have been approved on a show that wasn’t expressly about dislocation and the unknown. The result often left me alienated, but probably not in the way that the creators intended. Mulder and Scully might never discover the full truth—but that doesn’t excuse their writers.

Written by nevalalee

February 1, 2018 at 8:53 am

Childhood’s end


I’ve been thinking a lot recently about my childhood. One of the inciting factors was the movie adaptation of Stephen King’s It, which I enjoyed a great deal when I finally saw it. It’s a blue-chip horror film, with a likable cast and fantastic visuals, and its creators clearly care as much about the original novel as I do. In theory, the shift of its setting to the late eighties should make it even more resonant, since this is a period that I know and remember firsthand. Yet it isn’t quite as effective as it should be, since it only tells the half of the story that focuses on the main characters as children, and most of the book’s power comes from its treatment of memory, childhood, and forgetfulness—which director Andy Muschietti and his collaborators must know perfectly well. Under the circumstances, they’ve done just about the best job imaginable, but they inevitably miss a crucial side of a book that has been a part of my life for decades, even if I was too young to appreciate it on my first reading. I was about twelve years old at the time, which means that I wasn’t in a position to understand its warning that I was doomed to forget much of who I was and what I did. (King’s uncanny ability to evoke his own childhood so vividly speaks as much as anything else to his talents.) As time passes, this is the aspect of the book that impresses me the most, and it’s one that the movie in its current form isn’t able to address. A demonic clown is pretty scary, but not as much as the realization, which isn’t a fantasy at all, that we have to cut ourselves off from much of who we were as children in order to function as adults. And I’m saying this as someone who has remained almost bizarrely faithful to the values that I held when I was ten years old.

In fact, it wouldn’t be farfetched to read Pennywise the Dancing Clown as the terrifying embodiment of the act of forgetting itself. In his memoir Self-ConsciousnessJohn Updike—who is mentioned briefly in It and lends his last name to a supporting character in The Talisman—described this autobiographical amnesia in terms that could serve as an epigraph to King’s novel:

Not only are selves conditional but they die. Each day, we wake slightly altered, and the person we were yesterday is dead. So why, one could say, be afraid of death, when death comes all the time? It is even possible to dislike our old selves, these disposable ancestors of ours. For instance, my high-school self—skinny, scabby, giggly, gabby, frantic to be noticed, tormented enough to be a tormenter, relentlessly pushing his cartoons and posters and noisy jokes and pseudo-sophisticated poems upon the helpless high school—strikes me now as considerably obnoxious, though I owe him a lot.

Updike sounds a lot here like King’s class clown Richie Tozier, and his contempt toward his teenage self is one to which most of us can relate. Yet Updike’s memories of that period seem slightly less vivid than the ones that he explored elsewhere in his fiction. He only rarely mined them for material, even as he squeezed most of his other experiences to the last drop, which implies that even Updike, our greatest noticer, preferred to draw a curtain of charity across himself as an adolescent. And you can hardly blame him.

I was reminded of this by the X-Files episode “The Lost Art of Forehead Sweat,” which is about nothing less than the ways in which we misremember our childhoods, even if this theme is cunningly hidden behind its myriad other layers. At one point, Scully says to Reggie: “None of us remember our high school years with much accuracy.” In context, it seems like an irrelevant remark, but it was evidently important to Darin Morgan, who said to Entertainment Weekly:

When we think back on our memories from our youth, we have a tendency—or at least I do—to imagine my current mindset. Whenever I think about my youth, I’m like, “Why didn’t I do this? Why didn’t I do that?” And then you drive by high school students and you go, “Oh, that’s why I didn’t do it. Because I was a kid.” You tend to think of your adult consciousness, and you take that with you when you’re thinking back on your memories and things you’ve done in the past. Our memories are sometimes not quite accurate.

In “Forehead Sweat,” Morgan expresses this through a weird flashback in which we see Mulder’s adult head superimposed on his preadolescent body, which is a broad visual gag that also gets at something real. We really do seem to recall the past through the lens of our current selves, so we’re naturally mortified by what we find there—which neatly overlooks the point that everything that embarrasses us about our younger years is what allowed us to become what we are now. I often think about this when I look at my daughter, who is so much like me at the age of five that it scares me. And although I want to give her the sort of advice that I wish I’d heard at the time, I know that it’s probably pointless.

Childhood and adolescence are obstacle courses—and occasional horror shows—that we all need to navigate for ourselves, and even if we sometimes feel humiliated when we look back, that’s part of the point. Marcel Proust, who thought more intensely about memory and forgetting than anybody else, put it best in Within a Budding Grove:

There is no man…however wise, who has not at some period of his youth said things, or lived in a way the consciousness of which is so unpleasant to him in later life that he would gladly, if he could, expunge it from his memory. And yet he ought not entirely to regret it, because he cannot be certain that he has indeed become a wise man—so far as it is possible for any of us to be wise—unless he has passed through all the fatuous or unwholesome incarnations by which that ultimate stage must be preceded…We are not provided with wisdom, we must discover it for ourselves, after a journey through the wilderness which no one else can take for us, an effort which no one can spare us, for our wisdom is the point of view from which we come at last to regard the world. The lives that you admire, the attitudes that seem noble to you are not the result of training at home, by a father, or by masters at school, they have sprung from beginnings of a very different order, by reaction from the influence of everything evil or commonplace that prevailed round about them. They represent a struggle and a victory.

I believe this, even if I don’t have much of a choice. My childhood is a blur, but it’s also part of me, and on some level, it never ended. King might be speaking of adolescence itself when he writes in the first sentence of It: “The terror…would not end for another twenty-eight years—if it ever did end.” And I can only echo what Updike wistfully says elsewhere: “I’ve remained all too true to my youthful self.”
