Alec Nevala-Lee

Thoughts on art, creativity, and the writing life.


The Men Who Saw Tomorrow, Part 3


By now, it might seem obvious that the best way to approach Nostradamus is to see it as a kind of game, as Anthony Boucher describes it in the June 1942 issue of Unknown Worlds: “A fascinating game, to be sure, with a one-in-a-million chance of hitting an astounding bullseye. But still a game, and a game that has to be played according to the rules. And those rules are, above all things else, even above historical knowledge and ingenuity of interpretation, accuracy and impartiality.” Boucher’s work inspired several spirited rebukes in print from L. Sprague de Camp, who granted the rules of the game but disagreed about its harmlessness. In a book review signed “J. Wellington Wells”—and please do keep an eye on that last name—de Camp noted that Nostradamus was “conjured out of his grave” whenever there was a war:

And wonder of wonders, it always transpires that a considerable portion of his several fat volumes of prophetic quatrains refer to the particular war—out of the twenty-odd major conflicts that have occurred since Dr. Nostradamus’s time—or other disturbance now taking place; and moreover that they prophesy inevitable victory for our side—whichever that happens to be. A wonderful man, Nostradamus.

Their affectionate battle culminated in a nonsense limerick that de Camp published in the December 1942 issue of Esquire, claiming that if it were still in print after four hundred years, it would prove just as true as any of Nostradamus’s prophecies. Boucher responded in Astounding with the short story “Pelagic Spark,” an early piece of fanfic in which de Camp’s great-grandson uses the “prophecy” to inspire a rebellion in the far future against the sinister Hitler XVI.

This is all just good fun, but not everyone sees it as a game, and Nostradamus—like other forms of vaguely apocalyptic prophecy—tends to return at exactly the point when such impulses become the most dangerous. This was the core of de Camp’s objection, and Boucher himself issued a similar warning:

At this point there enters a sinister economic factor. Books will be published only when there is popular demand for them. The ideal attempt to interpret the as yet unfulfilled quatrains of Nostradamus would be made in an ivory tower when all the world was at peace. But books on Nostradamus sell only in times of terrible crisis, when the public wants no quiet and reasoned analysis, but an impassioned assurance that We are going to lick the blazes out of Them because look, it says so right here. And in times of terrible crisis, rules are apt to get lost.

Boucher observes that one of the best books on the subject, Charles A. Ward’s Oracles of Nostradamus, was reissued with a dust jacket emblazoned with such questions as “Will America Enter the War?” and “Will the British Fleet Be Destroyed?” You still see this sort of thing today, and it isn’t just the books that benefit. In 1981, the producer David L. Wolper released a documentary on the prophecies of Nostradamus, The Man Who Saw Tomorrow, that saw subsequent spikes in interest during the Gulf War—a revised version for television was hosted by Charlton Heston—and after the September 11 attacks, when there was a run on the cassette at Blockbuster. And the attention that it periodically inspires reflects the same emotional factors that led to psychohistory, as the host of the original version said to the audience: “Do we really want to know about the future? Maybe so—if we can change it.”

The speaker, of course, was Orson Welles. I had always known that The Man Who Saw Tomorrow was narrated by Welles, but it wasn’t until I watched it recently that I realized that he hosted it onscreen as well, in one of my favorite incarnations of any human being—bearded, gigantic, cigar in hand, vaguely contemptuous of his surroundings and collaborators, but still willing to infuse the proceedings with something of the velvet and gold braid. Keith Phipps of The A.V. Club once described the documentary as “a brain-damaged sequel” to Welles’s lovely F for Fake, which is very generous. The entire project is manifestly ridiculous and exploitative, with uncut footage from the Zapruder film mingling with a xenophobic fantasy of a war of the West against Islam. Yet there are also moments that are oddly transporting, as when Welles turns to the camera and says:

Before continuing, let me warn you now that the predictions of the future are not at all comforting. I might also add that these predictions of the past, these warnings of the future are not the opinions of the producers of the film. They’re certainly not my opinions. They’re interpretations of the quatrains as made by scores of independent scholars of Nostradamus’ work.

In the sly reading of “my opinions,” you can still hear a trace of Harry Lime, or even of Gregory Arkadin, who invited his guests to drink to the story of the scorpion and the frog. And the entire movie is full of strange echoes of Welles’s career. Footage is repurposed from Waterloo, in which he played Louis XVIII, and it glances at the fall of the Shah of Iran, whose brother-in-law funded Welles’s The Other Side of the Wind, which was impounded by the revolutionary government that Nostradamus allegedly foresaw.

Welles later expressed contempt for the whole affair, allegedly telling Merv Griffin that you could get equally useful prophecies by reading at random out of the phone book. Yet it’s worth remembering, as the critic David Thomson notes, that Welles turned all of his talk show interlocutors into versions of the reporter from Citizen Kane, or even into the Hal to his Falstaff, and it’s never clear where the game ended. His presence infuses The Man Who Saw Tomorrow with an unearned loveliness, despite its many awful aspects, such as the presence of the “psychic” Jeane Dixon. (Dixon’s fame rested on her alleged prediction of the Kennedy assassination, based on a statement—made in Parade magazine in 1960—that the winner of the upcoming presidential election would be “assassinated or die in office though not necessarily in his first term.” Oddly enough, no one seems to remember an equally impressive prediction by the astrologer Joseph F. Goodavage, who wrote in Analog in September 1962: “It is coincidental that each American president in office at the time of these conjunctions [of Jupiter and Saturn in an earth sign] either died or was assassinated before leaving the presidency…John F. Kennedy was elected in 1960 at the time of a Jupiter and Saturn conjunction in Capricorn.”) And it’s hard for me to watch this movie without falling into reveries about Welles, who was like John W. Campbell in so many other ways. Welles may have been the most intriguing cultural figure of the twentieth century, but he never seemed to know what would come next, and his later career was one long improvisation. It might not be too much to hear a certain wistfulness when he speaks of the man who could see tomorrow, much as Campbell’s fascination with psychohistory stood in stark contrast to the confusion of the second half of his life.

When The Man Who Saw Tomorrow was released, Welles had finished editing about forty minutes of his unfinished masterpiece The Other Side of the Wind, and for decades after his death, it seemed that it would never be seen. Instead, it’s available today on Netflix. And I don’t think that anybody could have seen that coming.

Revise like you’re running out of time


Lin-Manuel Miranda's drafts of "My Shot"

Note: I’m taking a few days off for the holidays, so I’ll be republishing some of my favorite pieces from earlier in this blog’s run. This post originally appeared, in a slightly different form, on August 17, 2016.

It might seem like a stretch, or at least premature, to compare Lin-Manuel Miranda to Shakespeare, but after listening to Hamilton nonstop over the last couple of years, I still can’t put the notion away. What these two writers have in common, aside from a readiness to plunder history as material for drama and a fondness for blatant anachronism, is their density and rapidity. When we try to figure out what sets Shakespeare apart from other playwrights, we’re likely to think of the way his ideas and images succeed each other so quickly that they run the risk of turning into mixed metaphors, and how both characters and scenes can turn on a dime to introduce a new tone or register. Hamilton, at its best, has many of the same qualities—hip-hop is capable of conveying more information per line than just about any other medium, and Miranda exploits it to the fullest. And what really strikes me, after repeated listens, is his ability to move swiftly from one character, subplot, or theme to another, often in the course of a single song. For a musical to accomplish as much in two and a half hours as Hamilton does, it has to nail all the transitions. My favorite example is the whirlwind in the first act that carries us from “Helpless” to “Satisfied” to “Wait For It,” taking us from Hamilton’s courtship of Eliza to Angelica’s unrequited love to checking in with Burr in the space of about fifteen minutes. I’ve listened to that sequence countless times, marveling at how all the pieces fit together, and it never even occurred to me to wonder how it was constructed until I’d internalized it. Which may be the most Shakespearean attribute of all. (Miranda’s knack for delivering information in the form of self-contained set pieces that amount to miniature plays in themselves, like “Blow Us All Away,” has even influenced my approach to my own book.)

But this doesn’t happen by accident. A while back, Miranda tweeted out a picture of his notebook for the incomparable “My Shot,” along with the dry comment: “Songs take time.” Like most musicals, Hamilton was refined and restructured in workshops—many recordings of which are available online—and continued to evolve between its Off-Broadway and Broadway incarnations. In theater, revision has a way of taking place in plain sight: it’s impossible to know the impact of any changes until you’ve seen them in performance, and the feedback you get in real time informs the next iteration. Hamilton was developed under far greater scrutiny than Miranda’s In the Heights, which was the product of five years of unhurried readings and workshops, and its evolution was constrained by what its creator has called “these weirdly visible benchmarks,” including the American Songbook Series at Lincoln Center and a high-profile presentation at Vassar. Still, much of the revision took place in Miranda’s head, a balance between public and private revision that feels Shakespearean in itself. Shakespeare clearly understood the creative utility of rehearsal and collaboration with a specific cast of actors, and he was cheerfully willing to rework a play based on how the audience responded. But we also know, based on surviving works like the unfinished Timon of Athens, that he revised the plays carefully on his own, roughing out large blocks of the action in prose form before going back to transform it into verse. We don’t have any of his manuscripts, but I suspect that they looked a lot like Miranda’s, and that he was ready to rearrange scenes and drop entire sequences to streamline and unify the whole. Like Hamilton, and Miranda, Shakespeare wrote like he was running out of time.

As it happens, I originally got to thinking about all this after reading a description of a very different creative experience, in the form of playwright Glen Berger’s interview with The A.V. Club about the doomed production of Spider-Man: Turn Off the Dark. The whole thing is worth checking out, and I’ve long been meaning to read Berger’s book Song of Spider-Man to get the full version. (Berger, incidentally, was replaced as the show’s writer by Roberto Aguirre-Sacasa, who has since gone on to greater fame as the creator of Riverdale.) But this is the detail that stuck in my head the most:

Almost inevitably during previews for a Broadway musical, several songs are cut and several new songs are written. Sometimes, the new songs are the best songs. There’s the famous story of “Comedy Tonight” for A Funny Thing Happened On The Way To The Forum being written out of town. There are hundreds of other examples of songs being changed and scenes rearranged.

From our first preview to the day Julie [Taymor] left the show seven months later, not a single song was cut, which is kind of indicative of the rigidity that was setting in for one camp of the creators who felt like, “No, we came up with the perfect show. We just need to find a way to render it competently.”

A lot of things went wrong with Spider-Man, but this inability to revise—a process that might have allowed the show to address its problems—seems like a fatal flaw. As books like Stephen Sondheim’s Finishing the Hat make clear, a musical can undergo drastic transformations between its earliest conception and opening night, and the absence of that flexibility here is what made the difference between a troubled production and a debacle.

But it’s also hard to blame Taymor, Berger, or any other individual involved when you consider the conditions under which this musical was produced, which made it hard for any kind of meaningful revision to occur at all. Even in theater, revision works best when it’s essentially private: following any train of thought to its logical conclusion requires the security that only solitude provides. An author or director is less likely to learn from mistakes or test out the alternatives when the process is occurring in plain sight. From the very beginning, the creators of Spider-Man never had a moment of solitary reflection: it was a project that was born in a corporate boardroom and jumped immediately to Broadway. As Berger says:

Our biggest blunder was that we only had one workshop, and then we went into rehearsals for the Broadway run of the show. I’m working on another bound-for-Broadway musical now, and we’ve already had four workshops. Every time you hear, “Oh, we’re going to do another workshop,” the knee-jerk reaction is, “We don’t need any more. We can just go straight into rehearsals,” but we learn some new things every time. They provide you the opportunity to get rid of stuff that doesn’t work, songs that fall flat that you thought were amazing, or totally rewrite scenes. I’m all for workshops now.

It isn’t impossible to revise properly under conditions of extreme scrutiny—Pixar does a pretty good job of it, although this has clearly led to troubling cultural tradeoffs of its own—but it requires a degree of bravery that wasn’t evident here. And I’m curious to see how Miranda handles similar pressure, now that he occupies the position of an artist in residence at Disney, where Spider-Man also resides. Fame can open doors and create possibilities, but real revision can only occur in the sessions of sweet silent thought.

The critical path


Renata Adler

Note: I’m taking a few days off, so I’ll be republishing some of my favorite pieces from earlier in this blog’s run. This post originally appeared, in a slightly different form, on February 16, 2016.

Every few years or so, I go back and revisit Renata Adler’s famous attack in the New York Review of Books on the reputation of the film critic Pauline Kael. As a lifelong Kael fan, I don’t agree with Adler—who describes Kael’s output as “not simply, jarringly, piece by piece, line by line, and without interruption, worthless”—but I respect the essay’s fire and eloquence, and it’s still a great read. What is sometimes forgotten is that Adler opens with an assault, not on Kael alone, but on the entire enterprise of professional criticism itself. Here’s what she says:

The job of the regular daily, weekly, or even monthly critic resembles the work of the serious intermittent critic, who writes only when he is asked to or genuinely moved to, in limited ways and for only a limited period of time…Normally, no art can support for long the play of a major intelligence, working flat out, on a quotidian basis. No serious critic can devote himself, frequently, exclusively, and indefinitely, to reviewing works most of which inevitably cannot bear, would even be misrepresented by, review in depth…

The simple truth—this is okay, this is not okay, this is vile, this resembles that, this is good indeed, this is unspeakable—is not a day’s work for a thinking adult. Some critics go shrill. Others go stale. A lot go simultaneously shrill and stale.

Adler concludes: “By far the most common tendency, however, is to stay put and simply to inflate, to pretend that each day’s text is after all a crisis—the most, first, best, worst, finest, meanest, deepest, etc.—to take on, since we are dealing in superlatives, one of the first, most unmistakable marks of the hack.” And I think that she has a point, even if I have to challenge a few of her assumptions. (The statement that most works of art “inevitably cannot bear, would even be misrepresented by, review in depth,” is particularly strange, with its implicit division of all artistic productions into the sheep and the goats. It also implies that it’s the obligation of the artist to provide a worthy subject for the major critic, when in fact it’s the other way around: as a critic, you prove yourself in large part through your ability to mine insight from the unlikeliest of sources.) Writing reviews on a daily or weekly basis, especially when you have a limited amount of time to absorb the work itself, lends itself inevitably to shortcuts, and you often find yourself falling back on the same stock phrases and judgments. And Adler’s warning about “dealing in superlatives” seems altogether prescient. As Keith Phipps and Tasha Robinson of The A.V. Club pointed out a few years back, the need to stand out in an ocean of competing coverage means that every topic under consideration becomes either an epic fail or an epic win: a sensible middle ground doesn’t generate page views.

Pauline Kael

But the situation, at least from Adler’s point of view, is even more dire than when she wrote this essay in the early eighties. When Adler’s takedown of Kael first appeared, the most threatening form of critical dilution lay in weekly movie reviews: today, we’re living in a media environment in which every episode of every television show gets thousands of words of critical analysis from multiple pop culture sites. (Adler writes: “Television, in this respect, is clearly not an art but an appliance, through which reviewable material is sometimes played.” Which is only a measure of how much the way we think and talk about the medium has changed over the intervening three decades.) The conditions that Adler identifies as necessary for the creation of a major critic like Edmund Wilson or Harold Rosenberg—time, the ability to choose one’s subjects, and the freedom to quit when necessary—have all but disappeared for most writers hoping to make a mark, or even just a living. To borrow a trendy phrase, we’ve reached a point of peak content, with a torrent of verbiage being churned out at an unsustainable pace without the advertising dollars to support it, in a situation that can be maintained only by the seemingly endless supply of aspiring writers willing to be chewed up by the machine. And if Adler thought that even a monthly reviewing schedule was deadly for serious criticism, I’d be curious to hear how she feels about the online apprenticeship that all young writers seem expected to undergo these days.

Still, I’d like to think that Adler got it wrong, just as I believe that she was ultimately mistaken about Kael, whose legacy, for all its flaws, still endures. (It’s revealing to note that Adler had a long, distinguished career as a writer and critic herself, and yet she almost certainly remains best known among casual readers for her Kael review.) Not every lengthy writeup of the latest episode of Riverdale is going to stand the test of time, but as a crucible for forming a critic’s judgment, this daily grind feels like a necessary component, even if it isn’t the only one. A critic needs time and leisure to think about major works of art, which is a situation that the current media landscape doesn’t seem prepared to offer. But the ability to form quick judgments about works of widely varying quality and to express them fluently on deadline is an indispensable part of any critic’s toolbox. When taken as an end in itself, it can be deadening, as Adler notes, but it can also be the foundation for something more, even if it has to be undertaken outside of—or despite—the critic’s day job. The critic’s responsibility, now more than ever, isn’t to detach entirely from the relentless pace of pop culture, but to find ways of channeling it into something deeper than the instantaneous think piece or hot take. As a daily blogger who also undertakes projects that can last for months or years, I’m constantly mindful of the relationship between my work on demand and my larger ambitions. And I sure hope that the two halves can work together. Because, like it or not, every critic is walking that path already.

Written by nevalalee

April 18, 2017 at 9:00 am

The illusion of life


Last week, The A.V. Club ran an entire article devoted to television shows in which the lead is also the best character, which only points to how boring many protagonists tend to be. I’ve learned to chalk this up to two factors, one internal, the other external. The internal problem stems from the reasonable principle that the narrative and the hero’s objectives should be inseparable: the conflict should emerge from something that the protagonist urgently needs to accomplish, and when the goal has been met—or spectacularly thwarted—the story is over. It’s great advice, but in practice, it often results in leads who are boringly singleminded: when every action needs to advance the plot, there isn’t much room for the digressions and quirks that bring characters to life. The supporting cast has room to go off on tangents, but the characters at the center have to constantly triangulate between action, motivation, and relatability, which can drain them of all surprise. A protagonist is under so much narrative pressure that when the story relaxes, he bursts, like a sea creature brought up from its crevasse to the surface. Elsewhere, I’ve compared a main character to a diagram of a pattern of forces, like one of the fish in D’Arcy Wentworth Thompson’s On Growth and Form, in which the animal’s physical shape is determined by the outside stresses to which it has been subjected. And on top of this, there’s an external factor, which is the universal desire of editors, producers, and studio executives to make the protagonist “likable,” which, whether or not you agree with it, tends to smooth out the rough edges that make a character vivid and memorable.

In the classic textbook Disney Animation: The Illusion of Life, we find a useful perspective on this problem. The legendary animators Frank Thomas and Ollie Johnston provide a list of guidelines for evaluating story material before the animation begins, including the following:

Tell your story through the broad cartoon characters rather than the “straight” ones. There is no way to animate strong-enough attitudes, feelings, or expressions on realistic characters to get the communication you should have. The more real, the less latitude for clear communication. This is more easily done with the cartoon characters who can carry the story with more interest and spirit anyway. Snow White was told through the animals, the dwarfs, and the witch—not through the prince or the queen or the huntsman. They had vital roles, but their scenes were essentially situation. The girl herself was a real problem, but she was helped by always working to a sympathetic animal or a broad character. This is the old vaudeville trick of playing the pretty girl against the buffoon; it helps both characters.

Even more than Snow White, the great example here is Sleeping Beauty, which has always fascinated me as an attempt by Disney to recapture past glories by a mechanical application of its old principles raised to dazzling technical heights. Not only do Aurora and Prince Philip fail to drive the story, but they’re all but abandoned by it—Aurora speaks fewer lines than any other Disney main character, and neither of them talk for the last thirty minutes. Not only does the film acknowledge the dullness of its protagonists, but it practically turns it into an artistic statement in itself.

And it arises from a tension between the nature of animation, which is naturally drawn to caricature, and the notion that sympathetic protagonists need to be basically realistic. With regard to the first point, Thomas and Johnston advise:

Ask yourself, “Can the story point be done in caricature?” Be sure the scenes call for action, or acting that can be caricatured if you are to make a clear statement. Just to imitate nature, illustrate reality, or duplicate live action not only wastes the medium but puts an enormous burden on the animator. It should be believable, but not realistic.

The italics are mine. This is a good rule, but it collides headlong with the principle that the “real” characters should be rendered with greater naturalism:

Of course, there is always a big problem in making the “real” or “straight” characters in our pictures have enough personality to carry their part of the story…The point of this is misinterpreted by many to mean that characters who have to be represented as real should be left out of feature films, that the stories should be told with broad characters who can be handled more easily. This would be a mistake, for spectators need to have someone or something they can believe in, or the picture falls apart.

And while you could make a strong case that viewers relate just as much to the sidekicks, it’s probably also true that a realistic central character serves an important functional role, which allows the audience to take the story seriously. This doesn’t just apply to animation, either, but to all forms of storytelling—including most fiction, film, and television—that work best with broad strokes. In many cases, you can sense the reluctance of animators to tackle characters who don’t lend themselves to such bold gestures:

Early in the story development, these questions will be asked: “Does this character have to be straight?” “What is the role we need here?” If it is a prince or a hero or a sympathetic person who needs acceptance from the audience to make the story work, then the character must be drawn realistically.

Figuring out the protagonists is a thankless job: they have to serve a function within the overall story, but they’re also liable to be taken out and judged on their own merits, in the absence of the narrative pressures that created them in the first place. The best stories, it seems, are the ones in which that pattern of forces results in something fascinating in its own right, or which transform a stock character into something more. (It’s revealing that Thomas and Johnston refer to the queen and the witch in Snow White as separate figures, when they’re really a single person who evolves over the course of the story into her true form.) And their concluding advice is worth bearing in mind by everyone: “Generally speaking, if there is a human character in a story, it is wise to draw the person with as much caricature as the role will permit.”

Cain rose up


John Lithgow in Raising Cain

I first saw Brian De Palma’s Raising Cain when I was fourteen years old. In a weird way, it amounted to a peak moment of my early adolescence: I was on a school trip to our nation’s capital, sharing a hotel room with my friends from middle school, and we were just tickled to get away with watching an R-rated movie on cable. The fact that we ended up with Raising Cain doesn’t quite compare with the kids on The Simpsons cheering at the chance to see Barton Fink, but it isn’t too far off. I think that we liked it, and while I won’t claim that we understood it, that doesn’t mean much of anything—it’s hard for me to imagine anybody, of any age, entirely understanding this movie, which includes both me and De Palma himself. A few years later, I caught it again on television, and while I can’t say I’ve thought about it much since, I never forgot it. Gradually, I began to catch up on my De Palma, going mostly by whatever movies made Pauline Kael the most ecstatic at the time, which in itself was an education in the gap between a great critic’s pet enthusiasms and what exists on the screen. (In her review of The Fury, Kael wrote: “No Hitchcock thriller was ever so intense, went so far, or had so many ‘classic’ sequences.” I love Kael, but there are at least three things wrong with that sentence.) And ultimately De Palma came to mean a lot to me, as he does to just about anyone who responds to the movies in a certain way.

When I heard about the recut version of Raising Cain—in an interview with John Lithgow on The A.V. Club, no less, in which he was promoting his somewhat different role on The Crown—I was intrigued. And its backstory is particularly interesting. Shortly before the movie was first released, De Palma moved a crucial sequence from the beginning to the middle, eliminating an extended flashback and allowing the film to play more or less chronologically. He came to regret the change, but it was too late to do anything about it. Years later, a freelance director and editor named Peet Gelderblom read about the original cut and decided to restore it, performing a judicious edit on a digital copy. He put it online, where, unbelievably, it was seen by De Palma himself, who not only loved it but asked that it be included as a special feature on the new Blu-ray release. If nothing else, it’s a reminder of the true possibilities of fan edits, which have mostly been devoted to competing visions of the ideal version of Star Wars. With modern software, a fan can do for a movie what Walter Murch did for Touch of Evil, restoring it to the director’s original version based on a script or a verbal description. In the case of Raising Cain, this mostly just involved rearranging the pieces in the theatrical cut, but other fans have tackled such challenges as restoring all the deleted scenes in Twin Peaks: Fire Walk With Me, and there are countless other candidates.

Raising Cain

Yet Raising Cain might be the most instructive case study of all, because simply restoring the original opening to its intended place results in a radical transformation. It isn’t for everyone, and it’s necessary to grant De Palma his usual passes for clunky dialogue and characterization, but if you’re ready to meet it halfway, you’re rewarded with a thriller that twists back on itself like a Möbius strip. De Palma plunders his earlier movies so blatantly that it isn’t clear if he’s somehow paying loving homage to himself—bypassing Hitchcock entirely—or recycling good ideas that he feels like using again. The recut opens with a long mislead that recalls Dressed to Kill, which means that Lithgow barely even appears for the first twenty minutes. You can almost see why De Palma chickened out for the theatrical version: Lithgow’s performance as the meek Carter and his psychotic imaginary brother Cain feels too juicy to withhold. But the logic of the script was destroyed. For a film that tests an audience’s suspension of disbelief in so many other ways, it’s unclear why De Palma thought that a flashback would be too much for the viewer to handle. The theatrical release preserves all the great shock effects that are the movie’s primary reason for existing, but they don’t build to anything, and you’re left with a film that plays like a series of sketches. With the original order restored, it becomes what it was meant to be all along: a great shaggy dog story with a killer punchline.

Raising Cain is gleefully about nothing but itself, and I wouldn’t force anybody to watch it who wasn’t already interested. But the recut also serves as an excellent introduction to its director, just as the older version did for me: when I first encountered it, I doubt I’d seen anything by De Palma, except maybe The Untouchables, and Mission: Impossible was still a year away. It’s safe to say that if you like Raising Cain, you’ll like De Palma in general, and if you can’t get past its archness, campiness, and indifference to basic plausibility—well, I can hardly blame you. Watching it again, I was reminded of Blue Velvet, a far greater movie that presents the viewer with a similar test. It has the same mixture of naïveté and incredible technical virtuosity, with scenes that barely seem to have been written alternating with ones that push against the boundaries of the medium itself. You’re never quite sure if the director is in on the gag, and maybe it doesn’t matter. There isn’t much beauty in Raising Cain, and De Palma is a hackier and more mechanical director than Lynch, but both are so strongly visual that the nonsensory aspects of their films, like the obligatory scenes with the cops, seem to wither before our eyes. (It’s an approach that requires a kind of raw, intuitive trust from the cast, and as much as I enjoy what Lithgow does here, he may be too clever and resourceful an actor to really disappear into the role.) Both are rooted, crucially, in Hitchcock, who was equally obsessive, but was careful never to work from his own script. Hitchcock kept his secret self hidden, while De Palma puts it in plain sight. And if it turns out to be nothing at all, that’s probably part of the joke.

The bicameral mind


Evan Rachel Wood and Jimmi Simpson on Westworld

Note: Major spoilers follow for the most recent episode of Westworld.

Shortly before the final scene of “Trompe L’Oeil,” it occurred to me that Westworld, after a strong start, was beginning to coast a little. Like any ensemble drama on a premium cable channel, it’s a machine with a lot of moving parts, so it can be hard to pin down any specific source of trouble. But it appears to be a combination of factors. The plot thread centering on Dolores, which I’ve previously identified as the engine that drives the entire series, has entered something of a holding pattern—presumably because the show is saving its best material for closer to the finale. (I was skeptical of the multiple timelines theory at first, but I’m reluctantly coming around to it.) The introduction of Delos, the corporation that owns the park, as an active participant in the story is a decision that probably looked good on paper, but it doesn’t quite work. So far, the series has given us what amounts to a closed ecosystem, with a cast of characters that consists entirely of the hosts, the employees, and a handful of guests. At this stage, bringing in a broadly villainous executive from corporate headquarters comes precariously close to a gimmick: it would have been more interesting to have the conflict arise from someone we’d already gotten to know in a more nuanced way. Finally, it’s possible that the events of the last week have made me more sensitive to the tendency of the series to fall back on images of violence against women to drive the story forward. I don’t know how those scenes would have played earlier, but they sure don’t play for me now.

And then we get the twist that a lot of viewers, including me, had suspected might be coming: Bernard is a robot. Taken on its own, the revelation is smartly handled, and there are a lot of clever touches. In a scene at the beginning between Bernard and Hector, the episode establishes that the robots simply can’t process details that conflict with their programming, and this pays off nicely at the end, when Bernard doesn’t see the door that leads into Dr. Ford’s secret lab. A minute later, when Theresa hands him the schematics that show his own face, Bernard says: “It doesn’t look like anything to me.” (This raises an enticing possibility for future reveals, in which scenes from previous episodes that were staged from Bernard’s point of view are shown to have elements that we didn’t see at the time, because Bernard couldn’t. I don’t know if the show will take that approach, but it should—it’s nothing less than an improvement on the structural mislead in The Sixth Sense, and it would be a shame not to use it.) Yet the climactic moment, in which Dr. Ford calmly orders Bernard to murder Theresa, doesn’t land as well as it could have. It should have felt like a shocking betrayal, but the groundwork wasn’t quite there: Bernard and Theresa’s affair was treated very casually, and by the time we get to their defining encounter, whatever affection they had for each other is long gone. From the point of view of the overall plot, this arguably makes sense. But it also drains some of the horror from a payoff that the show must have known was coming. If we imagine Elsie as the victim instead, we can glimpse what the scene might have been.

Jeffrey Wright and Sidse Babett Knudsen on Westworld

Yet I’m not entirely sure this wasn’t intentional. Westworld is a cerebral, even clinical show, and it doesn’t seem to take pleasure in action or visceral climaxes for their own sake. Part of this probably reflects the temperament of its creators, but it also feels like an attempt by the show to position itself in a challenging time for this kind of storytelling. It’s a serialized drama that delivers new installments each week, but these days, such shows are just as likely to drop all ten episodes at once. This was obviously never an option for a show on HBO, but the weekly format creates real problems for a show that seems determined to set up twists that are more considered and logical than the usual shock deaths. To its credit, the show has played fair with viewers, and the clues to Bernard’s true nature were laid in with care. (If I noticed them, it was only because I was looking: I asked myself, working from first principles, what kind of surprise a show like this would be likely to spring, and the revelation that one of the staff members was actually a host seemed like a strong contender.) When a full week of online discussion and speculation falls between each episode, it becomes harder to deliver such surprises. Even if the multiple timeline theory doesn’t turn out to be correct, its very existence indicates the amount of energy, ingenuity, and obsessive analysis that the audience is willing to devote to it. As a result, the show’s emotional detachment comes off as a preemptive defense mechanism. It downplays the big twists, as if to tell us that it isn’t the surprises that count, but their implications.

In the case of Bernard, I’m willing to take that leap, if only because the character is in the hands of Jeffrey Wright, who is more qualified than any other actor alive to work through the repercussions. It’s a casting choice that says a lot, in itself, about the show’s intelligence. (In an interview with The A.V. Club, Wright has revealed that he didn’t know that Bernard was a robot when he shot the pilot, and that his own theory was that Dr. Ford was a creation of Bernard’s, which would have been even more interesting.) The revelation effectively reveals Bernard to have been the show’s secret protagonist all along, which is where he belongs, and it occurs at just about the right point in the season for it to resonate: we’ve still got three episodes to go, which gives the show room, refreshingly, to deal with the consequences, rather than rushing past them to the finale. Whether it can do the same with whatever else it has up its sleeve, including the possibility of multiple timelines, remains to be seen. But even though I’ve been slightly underwhelmed by the last two episodes, I’m still excited to see how it plays its hand. Even as Westworld unfolds from one week to the next, it clearly sees the season as a single continuous story, and the qualities that I’ve found unsatisfying in the moment—the lulls, the lack of connection between the various plot threads, the sense that it’s holding back for the climax—are those that I hope will pay off the most in the end. Like its robots, the series is built with a bicameral mind, with the logic of the whole whispering its instructions to the present. And more than any show since Mad Men, it seems to have its eye on the long game.

Written by nevalalee

November 14, 2016 at 10:02 am

The Importance of Writing “Ernesto,” Part 3


My short story “Ernesto,” which originally appeared in the March 2012 issue of Analog Science Fiction and Fact, has just been reprinted by Lightspeed. To celebrate its reappearance, I’ll be publishing revised versions of a few posts in which I described the origins of this story, which you can read for free here, along with a nice interview.

In an excellent interview from a few years ago with The A.V. Club, the director Steven Soderbergh spoke about the disproportionately large impact that small changes can have on a film: “Two frames can be the difference between something that works and something that doesn’t. It’s fascinating.” The playwright and screenwriter Jez Butterworth once made a similar point, noting that the gap between “nearly” and “really” in a photograph—or a script—can come down to a single frame. The same principle holds just as true, if not more so, for fiction. A cut, a new sentence, or a tiny clarification can turn a decent but unpublishable story into one that sells. These changes are often so invisible that the author himself would have trouble finding them after the fact, but their overall effect can’t be denied. And I’ve learned this lesson more than once in my life, perhaps most vividly with “Ernesto,” a story that I thought was finished, but which turned out to have a few more surprises in store.

When I was done with “Ernesto,” I sent it to Stanley Schmidt at Analog, who had just purchased my novelette “The Last Resort.” Stan’s response, which I still have somewhere in my files, was that the story didn’t quite grab him enough to find room for it in a rather crowded schedule, but that he’d hold onto it, just in case, while I sent it around to other publications. It wasn’t a rejection, exactly, but it was hardly an acceptance. (Having just gone through three decades of John W. Campbell’s correspondence, I now know that this kind of response is fairly common when a magazine is overstocked.) I dutifully sent it around to most of the usual suspects at the time: Asimov’s, Fantasy & Science Fiction, and the online magazines Clarkesworld and Intergalactic Medicine Show. Some had a few kind words for the story, but they all ultimately passed. At that point, I concluded that “Ernesto” just wasn’t publishable. This was hardly the end of the world—it had only taken two weeks to write—but it was an unfortunate outcome for a story that I thought was still pretty clever.

A few months later, I saw a call for submissions for an independent paperback anthology, the kind that pays its contributors in author’s copies, and its theme—science fiction stories about monks—seemed to fit “Ernesto” fairly well. The one catch was that the maximum length for submissions was 6,000 words, while “Ernesto” weighed in at over 7,500. Cutting twenty percent of a story that was already highly compressed, at least to my eyes, was no joke, but I figured that I’d give it a try. Over the course of a couple of days, then, I cut it to the bone, removing scenes and extra material wherever I could. Since almost a year had passed since I’d first written it, it was easy to see what was and wasn’t necessary. More significantly, I added an epigraph, from Ernest Hemingway’s interview with The Paris Review, that made it clear from the start that the main character was Hemingway, which wasn’t the case with the earlier draft. And the result read a lot more smoothly than the version I’d sent out before.

It might have ended there, with “Ernesto” appearing without fanfare in an unpaid anthology, but as luck would have it, Analog had just accepted a revised version of my novelette “The Boneless One,” which had also been rejected by a bunch of magazines in its earlier form. Encouraged by this, I thought I’d try the same thing with “Ernesto.” So I sent it to Analog again, and it was accepted, almost twelve months after my first submission. Now it’s being reprinted more than four years later by Lightspeed, a magazine that didn’t even exist when I first wrote it. The moral, I guess, is that if a story has been turned down by five of the top magazines in your field, it probably isn’t good enough to be published—but that doesn’t mean it can’t get better. In this case, my rule of spending two weeks on a short story ended up being not quite correct: I wrote the story in two weeks, shopped it around for a year, and then spent two more days on it. And those last two days, like Soderbergh’s two frames, were what made all the difference.

Written by nevalalee

September 22, 2016 at 8:19 am
