Posts Tagged ‘The A.V. Club’
The Men Who Saw Tomorrow, Part 3
By now, it might seem obvious that the best way to approach Nostradamus is to see it as a kind of game, as Anthony Boucher describes it in the June 1942 issue of Unknown Worlds: “A fascinating game, to be sure, with a one-in-a-million chance of hitting an astounding bullseye. But still a game, and a game that has to be played according to the rules. And those rules are, above all things else, even above historical knowledge and ingenuity of interpretation, accuracy and impartiality.” Boucher’s work inspired several spirited rebukes in print from L. Sprague de Camp, who granted the rules of the game but disagreed about its harmlessness. In a book review signed “J. Wellington Wells”—and please do keep an eye on that last name—de Camp noted that Nostradamus was “conjured out of his grave” whenever there was a war:
And wonder of wonders, it always transpires that a considerable portion of his several fat volumes of prophetic quatrains refer to the particular war—out of the twenty-odd major conflicts that have occurred since Dr. Nostradamus’s time—or other disturbance now taking place; and moreover that they prophesy inevitable victory for our side—whichever that happens to be. A wonderful man, Nostradamus.
Their affectionate battle culminated in a nonsense limerick that de Camp published in the December 1942 issue of Esquire, claiming that if it were still in print after four hundred years, it would have been proven just as true as any of Nostradamus’s prophecies. Boucher responded in Astounding with the short story “Pelagic Spark,” an early piece of fanfic in which de Camp’s great-grandson uses the “prophecy” to inspire a rebellion in the far future against the sinister Hitler XVI.
This is all just good fun, but not everyone sees it as a game, and Nostradamus—like other forms of vaguely apocalyptic prophecy—tends to return at exactly the point when such impulses become the most dangerous. This was the core of de Camp’s objection, and Boucher himself issued a similar warning:
At this point there enters a sinister economic factor. Books will be published only when there is popular demand for them. The ideal attempt to interpret the as yet unfulfilled quatrains of Nostradamus would be made in an ivory tower when all the world was at peace. But books on Nostradamus sell only in times of terrible crisis, when the public wants no quiet and reasoned analysis, but an impassioned assurance that We are going to lick the blazes out of Them because look, it says so right here. And in times of terrible crisis, rules are apt to get lost.
Boucher observes that one of the best books on the subject, Charles A. Ward’s Oracles of Nostradamus, was reissued with a dust jacket emblazoned with such questions as “Will America Enter the War?” and “Will the British Fleet Be Destroyed?” You still see this sort of thing today, and it isn’t just the books that benefit. In 1981, the producer David L. Wolper released a documentary on the prophecies of Nostradamus, The Man Who Saw Tomorrow, which enjoyed subsequent spikes in interest during the Gulf War—a revised version for television was hosted by Charlton Heston—and after the September 11 attacks, when there was a run on the cassette at Blockbuster. And the attention that it periodically inspires reflects the same emotional factors that led to psychohistory, as the host of the original version said to the audience: “Do we really want to know about the future? Maybe so—if we can change it.”
The speaker, of course, was Orson Welles. I had always known that The Man Who Saw Tomorrow was narrated by Welles, but it wasn’t until I watched it recently that I realized that he hosted it onscreen as well, in one of my favorite incarnations of any human being—bearded, gigantic, cigar in hand, vaguely contemptuous of his surroundings and collaborators, but still willing to infuse the proceedings with something of the velvet and gold braid. Keith Phipps of The A.V. Club once described the documentary as “a brain-damaged sequel” to Welles’s lovely F for Fake, which is very generous. The entire project is manifestly ridiculous and exploitative, with uncut footage from the Zapruder film mingling with a xenophobic fantasy of a war of the West against Islam. Yet there are also moments that are oddly transporting, as when Welles turns to the camera and says:
Before continuing, let me warn you now that the predictions of the future are not at all comforting. I might also add that these predictions of the past, these warnings of the future are not the opinions of the producers of the film. They’re certainly not my opinions. They’re interpretations of the quatrains as made by scores of independent scholars of Nostradamus’ work.
In the sly reading of “my opinions,” you can still hear a trace of Harry Lime, or even of Gregory Arkadin, who invited his guests to drink to the story of the scorpion and the frog. And the entire movie is full of strange echoes of Welles’s career. Footage is repurposed from Waterloo, in which he played Louis XVIII, and it glances at the fall of the Shah of Iran, whose brother-in-law funded Welles’s The Other Side of the Wind, which was impounded by the revolutionary government that Nostradamus allegedly foresaw.
Welles later expressed contempt for the whole affair, allegedly telling Merv Griffin that you could get equally useful prophecies by reading at random out of the phone book. Yet it’s worth remembering, as the critic David Thomson notes, that Welles turned all of his talk show interlocutors into versions of the reporter from Citizen Kane, or even into the Hal to his Falstaff, and it’s never clear where the game ended. His presence infuses The Man Who Saw Tomorrow with an unearned loveliness, despite its many awful aspects, such as the presence of the “psychic” Jeane Dixon. (Dixon’s fame rested on her alleged prediction of the Kennedy assassination, based on a statement—made in Parade magazine in 1960—that the winner of the upcoming presidential election would be “assassinated or die in office though not necessarily in his first term.” Oddly enough, no one seems to remember an equally impressive prediction by the astrologer Joseph F. Goodavage, who wrote in Analog in September 1962: “It is coincidental that each American president in office at the time of these conjunctions [of Jupiter and Saturn in an earth sign] either died or was assassinated before leaving the presidency…John F. Kennedy was elected in 1960 at the time of a Jupiter and Saturn conjunction in Capricorn.”) And it’s hard for me to watch this movie without falling into reveries about Welles, who was like John W. Campbell in so many other ways. Welles may have been the most intriguing cultural figure of the twentieth century, but he never seemed to know what would come next, and his later career was one long improvisation. It might not be too much to hear a certain wistfulness when he speaks of the man who could see tomorrow, much as Campbell’s fascination with psychohistory stood in stark contrast to the confusion of the second half of his life. When The Man Who Saw Tomorrow was released, Welles had finished editing about forty minutes of his unfinished masterpiece The Other Side of the Wind, and for decades after his death, it seemed that it would never be seen. Instead, it’s available today on Netflix. And I don’t think that anybody could have seen that coming.
Revise like you’re running out of time
Note: I’m taking a few days off for the holidays, so I’ll be republishing some of my favorite pieces from earlier in this blog’s run. This post originally appeared, in a slightly different form, on August 17, 2016.
It might seem like a stretch, or at least premature, to compare Lin-Manuel Miranda to Shakespeare, but after listening to Hamilton nonstop over the last couple of years, I still can’t put the notion away. What these two writers have in common, aside from a readiness to plunder history as material for drama and a fondness for blatant anachronism, is their density and rapidity. When we try to figure out what sets Shakespeare apart from other playwrights, we’re likely to think of the way his ideas and images succeed each other so quickly that they run the risk of turning into mixed metaphors, and how both characters and scenes can turn on a dime to introduce a new tone or register. Hamilton, at its best, has many of the same qualities—hip-hop is capable of conveying more information per line than just about any other medium, and Miranda exploits it to the fullest. And what really strikes me, after repeated listens, is his ability to move swiftly from one character, subplot, or theme to another, often in the course of a single song. For a musical to accomplish as much in two and a half hours as Hamilton does, it has to nail all the transitions. My favorite example is the whirlwind in the first act that carries us from “Helpless” to “Satisfied” to “Wait For It,” taking us from Hamilton’s courtship of Eliza to Angelica’s unrequited love to checking in with Burr in the space of about fifteen minutes. I’ve listened to that sequence countless times, marveling at how all the pieces fit together, and it never even occurred to me to wonder how it was constructed until I’d internalized it. Which may be the most Shakespearean attribute of all. (Miranda’s knack for delivering information in the form of self-contained set pieces that amount to miniature plays in themselves, like “Blow Us All Away,” has even influenced my approach to my own book.)
But this doesn’t happen by accident. A while back, Miranda tweeted out a picture of his notebook for the incomparable “My Shot,” along with the dry comment: “Songs take time.” Like most musicals, Hamilton was refined and restructured in workshops—many recordings of which are available online—and continued to evolve between its Off-Broadway and Broadway incarnations. In theater, revision has a way of taking place in plain sight: it’s impossible to know the impact of any changes until you’ve seen them in performance, and the feedback you get in real time informs the next iteration. Hamilton was developed under far greater scrutiny than Miranda’s In the Heights, which was the product of five years of unhurried readings and workshops, and its evolution was constrained by what its creator has called “these weirdly visible benchmarks,” including the American Songbook Series at Lincoln Center and a high-profile presentation at Vassar. Still, much of the revision took place in Miranda’s head, a balance between public and private revision that feels Shakespearean in itself. Shakespeare clearly understood the creative utility of rehearsal and collaboration with a specific cast of actors, and he was cheerfully willing to rework a play based on how the audience responded. But we also know, based on surviving works like the unfinished Timon of Athens, that he revised the plays carefully on his own, roughing out large blocks of the action in prose form before going back to transform it into verse. We don’t have any of his manuscripts, but I suspect that they looked a lot like Miranda’s, and that he was ready to rearrange scenes and drop entire sequences to streamline and unify the whole. Like Hamilton, and Miranda, Shakespeare wrote like he was running out of time.
As it happens, I originally got to thinking about all this after reading a description of a very different creative experience, in the form of playwright Glen Berger’s interview with The A.V. Club about the doomed production of Spider-Man: Turn Off the Dark. The whole thing is worth checking out, and I’ve long been meaning to read Berger’s book Song of Spider-Man to get the full version. (Berger, incidentally, was replaced as the show’s writer by Roberto Aguirre-Sacasa, who has since gone on to greater fame as the creator of Riverdale.) But this is the detail that stuck in my head the most:
Almost inevitably during previews for a Broadway musical, several songs are cut and several new songs are written. Sometimes, the new songs are the best songs. There’s the famous story of “Comedy Tonight” for A Funny Thing Happened On The Way To The Forum being written out of town. There are hundreds of other examples of songs being changed and scenes rearranged.
From our first preview to the day Julie [Taymor] left the show seven months later, not a single song was cut, which is kind of indicative of the rigidity that was setting in for one camp of the creators who felt like, “No, we came up with the perfect show. We just need to find a way to render it competently.”
A lot of things went wrong with Spider-Man, but this inability to revise—which might have allowed the show to address its problems—seems like a fatal flaw. As books like Stephen Sondheim’s Finishing the Hat make clear, a musical can undergo drastic transformations between its earliest conception and opening night, and the lack of that flexibility here is what made the difference between a troubled production and a debacle.
But it’s also hard to blame Taymor, Berger, or any other individual involved when you consider the conditions under which this musical was produced, which made it hard for any kind of meaningful revision to occur at all. Even in theater, revision works best when it’s essentially private: following any train of thought to its logical conclusion requires the security that only solitude provides. An author or director is less likely to learn from mistakes or test out the alternatives when the process is occurring in plain sight. From the very beginning, the creators of Spider-Man never had a moment of solitary reflection: it was a project that was born in a corporate boardroom and jumped immediately to Broadway. As Berger says:
Our biggest blunder was that we only had one workshop, and then we went into rehearsals for the Broadway run of the show. I’m working on another bound-for-Broadway musical now, and we’ve already had four workshops. Every time you hear, “Oh, we’re going to do another workshop,” the knee-jerk reaction is, “We don’t need any more. We can just go straight into rehearsals,” but we learn some new things every time. They provide you the opportunity to get rid of stuff that doesn’t work, songs that fall flat that you thought were amazing, or totally rewrite scenes. I’m all for workshops now.
It isn’t impossible to revise properly under conditions of extreme scrutiny—Pixar does a pretty good job of it, although this has clearly led to troubling cultural tradeoffs of its own—but it requires a degree of bravery that wasn’t evident here. And I’m curious to see how Miranda handles similar pressure, now that he occupies the position of an artist in residence at Disney, where Spider-Man also resides. Fame can open doors and create possibilities, but real revision can only occur in the sessions of sweet silent thought.
The critical path
Note: I’m taking a few days off, so I’ll be republishing some of my favorite pieces from earlier in this blog’s run. This post originally appeared, in a slightly different form, on February 16, 2016.
Every few years or so, I go back and revisit Renata Adler’s famous attack in the New York Review of Books on the reputation of the film critic Pauline Kael. As a lifelong Kael fan, I don’t agree with Adler—who describes Kael’s output as “not simply, jarringly, piece by piece, line by line, and without interruption, worthless”—but I respect the essay’s fire and eloquence, and it’s still a great read. What is sometimes forgotten is that Adler opens with an assault, not on Kael alone, but on the entire enterprise of professional criticism itself. Here’s what she says:
The job of the regular daily, weekly, or even monthly critic resembles the work of the serious intermittent critic, who writes only when he is asked to or genuinely moved to, in limited ways and for only a limited period of time…Normally, no art can support for long the play of a major intelligence, working flat out, on a quotidian basis. No serious critic can devote himself, frequently, exclusively, and indefinitely, to reviewing works most of which inevitably cannot bear, would even be misrepresented by, review in depth…
The simple truth—this is okay, this is not okay, this is vile, this resembles that, this is good indeed, this is unspeakable—is not a day’s work for a thinking adult. Some critics go shrill. Others go stale. A lot go simultaneously shrill and stale.
Adler concludes: “By far the most common tendency, however, is to stay put and simply to inflate, to pretend that each day’s text is after all a crisis—the most, first, best, worst, finest, meanest, deepest, etc.—to take on, since we are dealing in superlatives, one of the first, most unmistakable marks of the hack.” And I think that she has a point, even if I have to challenge a few of her assumptions. (The statement that most works of art “inevitably cannot bear, would even be misrepresented by, review in depth,” is particularly strange, with its implicit division of all artistic productions into the sheep and the goats. It also implies that it’s the obligation of the artist to provide a worthy subject for the major critic, when in fact it’s the other way around: as a critic, you prove yourself in large part through your ability to mine insight from the unlikeliest of sources.) Writing reviews on a daily or weekly basis, especially when you have a limited amount of time to absorb the work itself, lends itself inevitably to shortcuts, and you often find yourself falling back on the same stock phrases and judgments. And Adler’s warning about “dealing in superlatives” seems altogether prescient. As Keith Phipps and Tasha Robinson of The A.V. Club pointed out a few years back, the need to stand out in an ocean of competing coverage means that every topic under consideration becomes either an epic fail or an epic win: a sensible middle ground doesn’t generate page views.
But the situation, at least from Adler’s point of view, is even more dire than when she wrote this essay in the early eighties. When Adler’s takedown of Kael first appeared, the most threatening form of critical dilution lay in weekly movie reviews: today, we’re living in a media environment in which every episode of every television show gets thousands of words of critical analysis from multiple pop culture sites. (Adler writes: “Television, in this respect, is clearly not an art but an appliance, through which reviewable material is sometimes played.” Which is only a measure of how much the way we think and talk about the medium has changed over the intervening three decades.) The conditions that Adler identifies as necessary for the creation of a major critic like Edmund Wilson or Harold Rosenberg—time, the ability to choose one’s subjects, and the freedom to quit when necessary—have all but disappeared for most writers hoping to make a mark, or even just a living. To borrow a trendy phrase, we’ve reached a point of peak content, with a torrent of verbiage being churned out at an unsustainable pace without the advertising dollars to support it, in a situation that can be maintained only by the seemingly endless supply of aspiring writers willing to be chewed up by the machine. And if Adler thought that even a monthly reviewing schedule was deadly for serious criticism, I’d be curious to hear how she feels about the online apprenticeship that all young writers seem expected to undergo these days.
Still, I’d like to think that Adler got it wrong, just as I believe that she was ultimately mistaken about Kael, whose legacy, for all its flaws, still endures. (It’s revealing to note that Adler had a long, distinguished career as a writer and critic herself, and yet she almost certainly remains best known among casual readers for her Kael review.) Not every lengthy writeup of the latest episode of Riverdale is going to stand the test of time, but as a crucible for forming a critic’s judgment, this daily grind feels like a necessary component, even if it isn’t the only one. A critic needs time and leisure to think about major works of art, which is a situation that the current media landscape doesn’t seem prepared to offer. But the ability to form quick judgments about works of widely varying quality and to express them fluently on deadline is an indispensable part of any critic’s toolbox. When taken as an end in itself, it can be deadening, as Adler notes, but it can also be the foundation for something more, even if it has to be undertaken outside of—or despite—the critic’s day job. The critic’s responsibility, now more than ever, isn’t to detach entirely from the relentless pace of pop culture, but to find ways of channeling it into something deeper than the instantaneous think piece or hot take. As a daily blogger who also undertakes projects that can last for months or years, I’m constantly mindful of the relationship between my work on demand and my larger ambitions. And I sure hope that the two halves can work together. Because, like it or not, every critic is walking that path already.
The illusion of life
Last week, The A.V. Club ran an entire article devoted to television shows in which the lead is also the best character, which only points to how boring many protagonists tend to be. I’ve learned to chalk this up to two factors, one internal, the other external. The internal problem stems from the reasonable principle that the narrative and the hero’s objectives should be inseparable: the conflict should emerge from something that the protagonist urgently needs to accomplish, and when the goal has been met—or spectacularly thwarted—the story is over. It’s great advice, but in practice, it often results in leads who are boringly singleminded: when every action needs to advance the plot, there isn’t much room for the digressions and quirks that bring characters to life. The supporting cast has room to go off on tangents, but the characters at the center have to constantly triangulate between action, motivation, and relatability, which can drain them of all surprise. A protagonist is under so much narrative pressure that when the story relaxes, he bursts, like a sea creature brought up from its crevasse to the surface. Elsewhere, I’ve compared a main character to a diagram of a pattern of forces, like one of the fish in D’Arcy Wentworth Thompson’s On Growth and Form, in which the animal’s physical shape is determined by the outside stresses to which it has been subjected. And on top of this, there’s an external factor, which is the universal desire of editors, producers, and studio executives to make the protagonist “likable,” which, whether or not you agree with it, tends to smooth out the rough edges that make a character vivid and memorable.
In the classic textbook Disney Animation: The Illusion of Life, we find a useful perspective on this problem. The legendary animators Frank Thomas and Ollie Johnston provide a list of guidelines for evaluating story material before the animation begins, including the following:
Tell your story through the broad cartoon characters rather than the “straight” ones. There is no way to animate strong-enough attitudes, feelings, or expressions on realistic characters to get the communication you should have. The more real, the less latitude for clear communication. This is more easily done with the cartoon characters who can carry the story with more interest and spirit anyway. Snow White was told through the animals, the dwarfs, and the witch—not through the prince or the queen or the huntsman. They had vital roles, but their scenes were essentially situation. The girl herself was a real problem, but she was helped by always working to a sympathetic animal or a broad character. This is the old vaudeville trick of playing the pretty girl against the buffoon; it helps both characters.
Even more than Snow White, the great example here is Sleeping Beauty, which has always fascinated me as an attempt by Disney to recapture past glories by a mechanical application of its old principles raised to dazzling technical heights. Not only do Aurora and Prince Philip fail to drive the story, but they’re all but abandoned by it—Aurora speaks fewer lines than any other Disney main character, and neither of them talks for the last thirty minutes. Not only does the film acknowledge the dullness of its protagonists, but it practically turns it into an artistic statement in itself.
And it arises from a tension between the nature of animation, which is naturally drawn to caricature, and the notion that sympathetic protagonists need to be basically realistic. With regard to the first point, Thomas and Johnston advise:
Ask yourself, “Can the story point be done in caricature?” Be sure the scenes call for action, or acting that can be caricatured if you are to make a clear statement. Just to imitate nature, illustrate reality, or duplicate live action not only wastes the medium but puts an enormous burden on the animator. It should be believable, but not realistic.
The italics are mine. This is a good rule, but it collides headlong with the principle that the “real” characters should be rendered with greater naturalism:
Of course, there is always a big problem in making the “real” or “straight” characters in our pictures have enough personality to carry their part of the story…The point of this is misinterpreted by many to mean that characters who have to be represented as real should be left out of feature films, that the stories should be told with broad characters who can be handled more easily. This would be a mistake, for spectators need to have someone or something they can believe in, or the picture falls apart.
And while you could make a strong case that viewers relate just as much to the sidekicks, it’s probably also true that a realistic central character serves an important functional role, which allows the audience to take the story seriously. This doesn’t just apply to animation, either, but to all forms of storytelling—including most fiction, film, and television—that work best with broad strokes. In many cases, you can sense the reluctance of animators to tackle characters who don’t lend themselves to such bold gestures:
Early in the story development, these questions will be asked: “Does this character have to be straight?” “What is the role we need here?” If it is a prince or a hero or a sympathetic person who needs acceptance from the audience to make the story work, then the character must be drawn realistically.
Figuring out the protagonists is a thankless job: they have to serve a function within the overall story, but they’re also liable to be taken out and judged on their own merits, in the absence of the narrative pressures that created them in the first place. The best stories, it seems, are the ones in which that pattern of forces results in something fascinating in its own right, or which transform a stock character into something more. (It’s revealing that Thomas and Johnston refer to the queen and the witch in Snow White as separate figures, when they’re really a single person who evolves over the course of the story into her true form.) And their concluding advice is worth bearing in mind by everyone: “Generally speaking, if there is a human character in a story, it is wise to draw the person with as much caricature as the role will permit.”
Cain rose up
I first saw Brian De Palma’s Raising Cain when I was fourteen years old. In a weird way, it amounted to a peak moment of my early adolescence: I was on a school trip to our nation’s capital, sharing a hotel room with my friends from middle school, and we were just tickled to get away with watching an R-rated movie on cable. The fact that we ended up with Raising Cain doesn’t quite compare with the kids on The Simpsons cheering at the chance to see Barton Fink, but it isn’t too far off. I think that we liked it, and while I won’t claim that we understood it, that doesn’t mean much of anything—it’s hard for me to imagine anybody, of any age, entirely understanding this movie, which includes both me and De Palma himself. A few years later, I caught it again on television, and while I can’t say I’ve thought about it much since, I never forgot it. Gradually, I began to catch up on my De Palma, going mostly by whatever movies made Pauline Kael the most ecstatic at the time, which in itself was an education in the gap between a great critic’s pet enthusiasms and what exists on the screen. (In her review of The Fury, Kael wrote: “No Hitchcock thriller was ever so intense, went so far, or had so many ‘classic’ sequences.” I love Kael, but there are at least three things wrong with that sentence.) And ultimately De Palma came to mean a lot to me, as he does to just about anyone who responds to the movies in a certain way.
When I heard about the recut version of Raising Cain—in an interview with John Lithgow on The A.V. Club, no less, in which he was promoting his somewhat different role on The Crown—I was intrigued. And its backstory is particularly interesting. Shortly before the movie was first released, De Palma moved a crucial sequence from the beginning to the middle, eliminating an extended flashback and allowing the film to play more or less chronologically. He came to regret the change, but it was too late to do anything about it. Years later, a freelance director and editor named Peet Gelderblom read about the original cut and decided to restore it, performing a judicious edit on a digital copy. He put it online, where, unbelievably, it was seen by De Palma himself, who not only loved it but asked that it be included as a special feature on the new Blu-ray release. If nothing else, it’s a reminder of the true possibilities of fan edits, which have mostly been used to advance competing visions of the ideal version of Star Wars. With modern software, a fan can do for a movie what Walter Murch did for Touch of Evil, restoring it to the director’s original version based on a script or a verbal description. In the case of Raising Cain, this mostly just involved rearranging the pieces in the theatrical cut, but other fans have tackled such challenges as restoring all the deleted scenes in Twin Peaks: Fire Walk With Me, and there are countless other candidates.
Yet Raising Cain might be the most instructive case study of all, because simply restoring the original opening to its intended place results in a radical transformation. It isn’t for everyone, and it’s necessary to grant De Palma his usual passes for clunky dialogue and characterization, but if you’re ready to meet it halfway, you’re rewarded with a thriller that twists back on itself like a Möbius strip. De Palma plunders his earlier movies so blatantly that it isn’t clear if he’s somehow paying loving homage to himself—bypassing Hitchcock entirely—or recycling good ideas that he feels like using again. The recut opens with a long mislead that recalls Dressed to Kill, which means that Lithgow barely even appears for the first twenty minutes. You can almost see why De Palma chickened out for the theatrical version: Lithgow’s performance as the meek Carter and his psychotic imaginary brother Cain feels too juicy to withhold. But the logic of the script was destroyed. For a film that tests an audience’s suspension of disbelief in so many other ways, it’s unclear why De Palma thought that a flashback would be too much for the viewer to handle. The theatrical release preserves all the great shock effects that are the movie’s primary reason for existing, but they don’t build to anything, and you’re left with a film that plays like a series of sketches. With the original order restored, it becomes what it was meant to be all along: a great shaggy dog story with a killer punchline.
Raising Cain is gleefully about nothing but itself, and I wouldn’t force anybody to watch it who wasn’t already interested. But the recut also serves as an excellent introduction to its director, just as the older version did for me: when I first encountered it, I doubt I’d seen anything by De Palma, except maybe The Untouchables, and Mission: Impossible was still a year away. It’s safe to say that if you like Raising Cain, you’ll like De Palma in general, and if you can’t get past its archness, campiness, and indifference to basic plausibility—well, I can hardly blame you. Watching it again, I was reminded of Blue Velvet, a far greater movie that presents the viewer with a similar test. It has the same mixture of naïveté and incredible technical virtuosity, with scenes that barely seem to have been written alternating with ones that push against the boundaries of the medium itself. You’re never quite sure if the director is in on the gag, and maybe it doesn’t matter. There isn’t much beauty in Raising Cain, and De Palma is a hackier and more mechanical director than Lynch, but both are so strongly visual that the nonsensory aspects of their films, like the obligatory scenes with the cops, seem to wither before our eyes. (It’s an approach that requires a kind of raw, intuitive trust from the cast, and as much as I enjoy what Lithgow does here, he may be too clever and resourceful an actor to really disappear into the role.) Both are rooted, crucially, in Hitchcock, who was equally obsessive, but was careful to never work from his own script. Hitchcock kept his secret self hidden, while De Palma puts it in plain sight. And if it turns out to be nothing at all, that’s probably part of the joke.
The bicameral mind
Note: Major spoilers follow for the most recent episode of Westworld.
Shortly before the final scene of “Trompe L’Oeil,” it occurred to me that Westworld, after a strong start, was beginning to coast a little. Like any ensemble drama on a premium cable channel, it’s a machine with a lot of moving parts, so it can be hard to pin down any specific source of trouble. But it appears to be a combination of factors. The plot thread centering on Dolores, which I’ve previously identified as the engine that drives the entire series, has entered something of a holding pattern—presumably because the show is saving its best material for closer to the finale. (I was skeptical of the multiple timelines theory at first, but I’m reluctantly coming around to it.) The introduction of Delos, the corporation that owns the park, as an active participant in the story is a decision that probably looked good on paper, but it doesn’t quite work. So far, the series has given us what amounts to a closed ecosystem, with a cast of characters that consists entirely of the hosts, the employees, and a handful of guests. At this stage, bringing in a broadly villainous executive from corporate headquarters comes precariously close to a gimmick: it would have been more interesting to have the conflict arise from someone we’d already gotten to know in a more nuanced way. Finally, it’s possible that the events of the last week have made me more sensitive to the tendency of the series to fall back on images of violence against women to drive the story forward. I don’t know how those scenes would have played earlier, but they sure don’t play for me now.
And then we get the twist that a lot of viewers, including me, had suspected might be coming: Bernard is a robot. Taken on its own, the revelation is smartly handled, and there are a lot of clever touches. In a scene at the beginning between Bernard and Hector, the episode establishes that the robots simply can’t process details that conflict with their programming, and this pays off nicely at the end, when Bernard doesn’t see the door that leads into Dr. Ford’s secret lab. A minute later, when Theresa hands him the schematics that show his own face, Bernard says: “It doesn’t look like anything to me.” (This raises an enticing possibility for future reveals, in which scenes from previous episodes that were staged from Bernard’s point of view are shown to have elements that we didn’t see at the time, because Bernard couldn’t. I don’t know if the show will take that approach, but it should—it’s nothing less than an improvement on the structural mislead in The Sixth Sense, and it would be a shame not to use it.) Yet the climactic moment, in which Dr. Ford calmly orders Bernard to murder Theresa, doesn’t land as well as it could have. It should have felt like a shocking betrayal, but the groundwork wasn’t quite there: Bernard and Theresa’s affair was treated very casually, and by the time we get to their defining encounter, whatever affection they had for each other is long gone. From the point of view of the overall plot, this arguably makes sense. But it also drains some of the horror from a payoff that the show must have known was coming. If we imagine Elsie as the victim instead, we can glimpse what the scene might have been.
Yet I’m not entirely sure this wasn’t intentional. Westworld is a cerebral, even clinical show, and it doesn’t seem to take pleasure in action or visceral climaxes for their own sake. Part of this probably reflects the temperament of its creators, but it also feels like an attempt by the show to position itself in a challenging time for this kind of storytelling. It’s a serialized drama that delivers new installments each week, but these days, such shows are just as likely to drop all ten episodes at once. This was obviously never an option for a show on HBO, but the weekly format creates real problems for a show that seems determined to set up twists that are more considered and logical than the usual shock deaths. To its credit, the show has played fair with viewers, and the clues to Bernard’s true nature were laid in with care. (If I noticed them, it was only because I was looking: I asked myself, working from first principles, what kind of surprise a show like this would be likely to spring, and the revelation that one of the staff members was actually a host seemed like a strong contender.) When a full week of online discussion and speculation falls between each episode, it becomes harder to deliver such surprises. Even if the multiple timeline theory doesn’t turn out to be correct, its very existence indicates the amount of energy, ingenuity, and obsessive analysis that the audience is willing to devote to it. As a result, the show’s emotional detachment comes off as a preemptive defense mechanism. It downplays the big twists, as if to tell us that it isn’t the surprises that count, but their implications.
In the case of Bernard, I’m willing to take that leap, if only because the character is in the hands of Jeffrey Wright, who is more qualified than any other actor alive to work through the repercussions. It’s a casting choice that speaks a lot, in itself, to the show’s intelligence. (In an interview with The A.V. Club, Wright has revealed that he didn’t know that Bernard was a robot when he shot the pilot, and that his own theory was that Dr. Ford was a creation of Bernard’s, which would have been even more interesting.) The revelation effectively reveals Bernard to have been the show’s secret protagonist all along, which is where he belongs, and it occurs at just about the right point in the season for it to resonate: we’ve still got three episodes to go, which gives the show room, refreshingly, to deal with the consequences, rather than rushing past them to the finale. Whether it can do the same with whatever else it has up its sleeve, including the possibility of multiple timelines, remains to be seen. But even though I’ve been slightly underwhelmed by the last two episodes, I’m still excited to see how it plays its hand. Even as Westworld unfolds from one week to the next, it clearly sees the season as a single continuous story, and the qualities that I’ve found unsatisfying in the moment—the lulls, the lack of connection between the various plot threads, the sense that it’s holding back for the climax—are those that I hope will pay off the most in the end. Like its robots, the series is built with a bicameral mind, with the logic of the whole whispering its instructions to the present. And more than any show since Mad Men, it seems to have its eye on the long game.
The Importance of Writing “Ernesto,” Part 3
My short story “Ernesto,” which originally appeared in the March 2012 issue of Analog Science Fiction and Fact, has just been reprinted by Lightspeed. To celebrate its reappearance, I’ll be publishing revised versions of a few posts in which I described the origins of this story, which you can read for free here, along with a nice interview.
In an excellent interview from a few years ago with The A.V. Club, the director Steven Soderbergh spoke about the disproportionately large impact that small changes can have on a film: “Two frames can be the difference between something that works and something that doesn’t. It’s fascinating.” The playwright and screenwriter Jez Butterworth once made a similar point, noting that the gap between “nearly” and “really” in a photograph—or a script—can come down to a single frame. The same principle holds just as true, if not more so, for fiction. A cut, a new sentence, or a tiny clarification can turn a decent but unpublishable story into one that sells. These changes are often so invisible that the author himself would have trouble finding them after the fact, but their overall effect can’t be denied. And I’ve learned this lesson more than once in my life, perhaps most vividly with “Ernesto,” a story that I thought was finished, but which turned out to have a few more surprises in store.
When I was done with “Ernesto,” I sent it to Stanley Schmidt at Analog, who had just purchased my novelette “The Last Resort.” Stan’s response, which I still have somewhere in my files, was that the story didn’t quite grab him enough to find room for it in a rather crowded schedule, but that he’d hold onto it, just in case, while I sent it around to other publications. It wasn’t a rejection, exactly, but it was hardly an acceptance. (Having just gone through three decades of John W. Campbell’s correspondence, I now know that this kind of response is fairly common when a magazine is overstocked.) I dutifully sent it around to most of the usual suspects at the time: Asimov’s, Fantasy & Science Fiction, and the online magazines Clarkesworld and Intergalactic Medicine Show. Some had a few kind words for the story, but they all ultimately passed. At that point, I concluded that “Ernesto” just wasn’t publishable. This was hardly the end of the world—it had only taken two weeks to write—but it was an unfortunate outcome for a story that I thought was still pretty clever.
A few months later, I saw a call for submissions for an independent paperback anthology, the kind that pays its contributors in author’s copies, and its theme—science fiction stories about monks—seemed to fit “Ernesto” fairly well. The one catch was that the maximum length for submissions was 6,000 words, while “Ernesto” weighed in at over 7,500. Cutting twenty percent of a story that was already highly compressed, at least to my eyes, was no joke, but I figured that I’d give it a try. Over the course of a couple of days, then, I cut it to the bone, removing scenes and extra material wherever I could. Since almost a year had passed since I’d first written it, it was easy to see what was and wasn’t necessary. More significantly, I added an epigraph, from Ernest Hemingway’s interview with The Paris Review, that made it clear from the start that the main character was Hemingway, which wasn’t the case with the earlier draft. And the result read a lot more smoothly than the version I’d sent out before.
It might have ended there, with “Ernesto” appearing without fanfare in an unpaid anthology, but as luck would have it, Analog had just accepted a revised version of my novelette “The Boneless One,” which had also been rejected by a bunch of magazines in its earlier form. Encouraged by this, I thought I’d try the same thing with “Ernesto.” So I sent it to Analog again, and it was accepted, almost twelve months after my first submission. Now it’s being reprinted more than four years later by Lightspeed, a magazine that didn’t even exist when I first wrote it. The moral, I guess, is that if a story has been turned down by five of the top magazines in your field, it probably isn’t good enough to be published—but that doesn’t mean it can’t get better. In this case, my rule of spending two weeks on a short story ended up being not quite correct: I wrote the story in two weeks, shopped it around for a year, and then spent two more days on it. And those last two days, like Soderbergh’s two frames, were what made all the difference.
Food for thought
Earlier this week, The A.V. Club, which is still the pop culture website at which I spend the vast majority of my online life, announced a new food section called “Supper Club.” It’s helmed by the James Beard Award-winning food critic and journalist Kevin Pang, a talented writer and documentarian whose work I’ve admired for years. On Wednesday, alongside the site’s usual television and movie coverage, seemingly half the homepage was devoted to features like “America’s ten tastiest fast foods,” followed a day later by “All of Dairy Queen’s Blizzards, ranked.” And the reaction from the community was—not good. Pang’s introductory post quickly drew over a thousand comments, with the most upvoted response reading:
I’ll save you about six months of pissed-away cash. Please reallocate the money that will be wasted on this venture to add more shows to the TV Club review section.
Most of the other food features received the same treatment, with commenters ignoring the content of the articles themselves and complaining about the new section on principle. Internet commenters, it must be said, are notoriously resistant to change, and the most vocal segment of the community represents a tiny fraction of the overall readership of The A.V. Club. But I think it’s fair to say that the site’s editors can’t be entirely happy with how the launch has gone.
Yet the readers aren’t altogether wrong, either, and in retrospect, you could make a good case that the rollout should have been handled differently. The A.V. Club has gone through a rough couple of years, with many of its most recognizable writers leaving to start the movie site The Dissolve—which recently folded—even as its signature television coverage has been scaled back. Those detailed reviews of individual episodes might be popular with commenters, but they evidently don’t generate enough page views to justify the same degree of investment, and the site is looking at ways to stabilize its revenue at a challenging time for the entire industry. The community is obviously worried about this, and Supper Club happened to appear at a moment when the commenters were likely to be skeptical about any new move, as if it were all a zero-sum game, which it isn’t. But the launch itself didn’t help matters. It makes sense to start an enterprise like this with a lot of articles on its first day, but taking over half the site with minimal advance warning lost it a lot of goodwill. Pang could also have been introduced more gradually: he’s a celebrity in foodie circles, but to most A.V. Club readers, he’s just a name. (It was also probably a miscalculation to have Pang write the introductory post himself, which placed him in the awkward position of having to drum up interest in his own work for an audience that didn’t know who he was.) And while I’ve enjoyed some of the content so far, and I understand the desire to keep the features lightweight and accessible, I don’t think the site has done itself any favors by leading with articles like “Do we eat soup or do we drink soup?”
This might seem like a lot of analysis for a kerfuffle that will be forgotten within a few weeks, no matter how Supper Club does in the meantime. But The A.V. Club has been a landmark site for pop culture coverage for the last decade, and its efforts to reinvent itself should concern anyone who cares about whether such venues can survive. I found myself thinking about this shortly after reading the excellent New Yorker profile of Pete Wells, the restaurant critic of the New York Times. Its author, Ian Parker, notes that modern food writing has become a subset of cultural criticism:
“A lot of reviews now tend to be food features,” [former Times restaurant critic Mimi Sheraton] said. She recalled a reference to Martin Amis in a Wells review of a Spanish restaurant in Brooklyn; she said she would have mentioned Amis only “if he came in and sat down and ordered chopped liver.”
Craig Claiborne, in a review from 1966, observed, “The lobster tart was palatable but bland and the skewered lamb on the dry side. The mussels marinière were creditable.” Thanks, in part, to the informal and diverting columns of Gael Greene, at New York, and Ruth Reichl, the Times’ critic during the nineties, restaurant reviewing in American papers has since become as much a vehicle for cultural criticism and literary entertainment—or, as Sheraton put it, “gossip”—as a guide to eating out.
If this is true, and I think it is, it means that food criticism, for better or worse, falls squarely within the mandate of The A.V. Club, whether its commenters like it or not.
But that doesn’t mean that we shouldn’t hold The A.V. Club to unreasonably high standards. In fact, we should be harder on it than we would on most sites, for reasons that Parker neatly outlines in his profile of Wells:
As Wells has come to see it, a disastrous restaurant is newsworthy only if it has a pedigree or commercial might. The mom-and-pop catastrophe can be overlooked. “I shouldn’t be having to explain to people what the place is,” he said. This reasoning seems civil, though, as Wells acknowledged, it means that his pans focus disproportionately on restaurants that have corporate siblings. Indeed, hype is often his direct or indirect subject. Of the fifteen no-star evaluations in his first four years, only two went to restaurants that weren’t part of a group of restaurants.
Parker continues: “There are restaurants that exist to have four Times stars. With fewer, they become a kind of paradox.” And when it comes to pop culture, The A.V. Club is the equivalent of a four-star restaurant. It was writing deeply felt, outrageously long essays on film and television before the longread was even a thing—in part, I suspect, because of its historical connection to The Onion: because it was often mistaken for a parody site, it always felt the need to prove its fundamental seriousness, which it did, over and over again. If Supper Club had launched with one of the ambitious, richly reported pieces that Pang has written elsewhere, the response might have been very different. Listicles might make more economic sense, and they can be fun if done right, but The A.V. Club has defined itself as a place where obsessively detailed and personal pop culture writing has a home. That’s what Supper Club should be. And until it is, we shouldn’t be surprised if readers have trouble swallowing it.
The excerpt opinion
“It’s the rare writer who cannot have sentences lifted from his work,” Norman Mailer once wrote. What he meant is that if a reviewer is eager to find something to mock, dismiss, or pick apart, any interesting book will provide plenty of ammunition. On a simple level of craft, it’s hard for most authors to sustain a high pitch of technical proficiency in every line, and if you want to make a novelist seem boring or ordinary, you can just focus on the sentences that fall between the high points. In his famously savage takedown of Thomas Harris’s Hannibal, Martin Amis quotes another reviewer who raved: “There is not a single ugly or dead sentence.” Amis then acidly observes:
Hannibal is a genre novel, and all genre novels contain dead sentences—unless you feel the throb of life in such periods as “Tommaso put the lid back on the cooler” or “Eric Pickford answered” or “Pazzi worked like a man possessed” or “Margot laughed in spite of herself” or “Bob Sneed broke the silence.”
Amis knows that this is a cheap shot, and he glories in it. But it isn’t so different from what critics do when they list the awful sentences from a current bestseller or nominate lines for the Bad Sex in Fiction Award. I laugh at this along with anyone else, but I also wince a little, because there are few authors alive who aren’t vulnerable to that sort of treatment. As G.K. Chesterton pointed out: “You could compile the worst book in the world entirely out of selected passages from the best writers in the world.”
This is even more true of authors who take considerable stylistic or thematic risks, which usually result in individual sentences that seem crazy or, worse, silly. The fear of seeming ridiculous is what prevents a lot of writers from taking chances, and it isn’t always unjustified. An ambitious novel opens itself up to savaging from all sides, precisely because it provides so much material that can be turned against the author when taken out of context. And it doesn’t need to be malicious, either: even objective or actively sympathetic critics can be seduced by the ease with which a writer can be excerpted to make a case. I’ve become increasingly daunted by the prospect of distilling the work of Robert A. Heinlein, for example, because his career was so long, varied, and often intentionally provocative that you can find sentences to support any argument about him that you want to make. (It doesn’t help that his politics evolved drastically over time, and they probably would have undergone several more transformations if he had lived for longer.) This isn’t to say that his opinions aren’t a fair target for criticism, but any reasonable understanding of who Heinlein was and what he believed—which I’m still trying to sort out for myself—can’t be conveyed by a handful of cherry-picked quotations. Literary biography is useful primarily to the extent that it can lay out a writer’s life in an orderly fashion, providing a frame that tells us something about the work that we wouldn’t know by encountering it out of order. But even that involves a process of selection, as does everything else about a biography. The biographer’s project isn’t essentially different from that of a working critic or reviewer: it just takes place on a larger scale.
And it’s worth noting that prolific critics themselves are particularly susceptible to this kind of treatment. When Renata Adler described Pauline Kael’s output as “not simply, jarringly, piece by piece, line by line, and without interruption, worthless,” any devotee of Kael’s work had to disagree—but it was also impossible to deny that there was plenty of evidence for the prosecution. If you’re determined to hate Roger Ebert, you just have to search for the reviews in which his opinions, written on deadline, weren’t sufficiently in line with the conclusions reached by posterity, as when he unforgivably gave only three stars to The Godfather Part II. And there isn’t a single page in the work of David Thomson, who is probably the most interesting movie critic who ever lived, that couldn’t be mined for outrageous, idiotic, or infuriating statements. I still remember a review on The A.V. Club of How to Watch a Movie that quoted lines like this:
Tell me a story, we beg as children, while wanting so many other things. Story will put off sleep (or extinction) and the child’s organism hardly trusts the habit of waking yet.
And this:
You came into this book under deceptive promises (mine) and false hopes (yours). You believed we might make decisive progress in the matter of how to watch a movie. So be it, but this was a ruse to make you look at life.
The reviewer quoted these sentences as examples of the book’s deficiencies, and they were duly excoriated in the comments. But anyone who has really read Thomson knows that such statements are part of the package, and removing them would also deny most of what makes him so fun, perverse, and valuable.
So what’s a responsible reviewer to do? We could start, maybe, by quoting longer or complete sections, rather than sentences in isolation, and by providing more context when we offer up just a line or two. We can also respect an author’s feelings, explicit or otherwise, about what sections are actually important. In the passage I mentioned at the beginning of this post, which is about John Updike, Mailer goes on to quote a few sentences from Rabbit, Run, and he adds:
The first quotation is taken from the first five sentences of the book, the second is on the next-to-last page, and the third is nothing less than the last three sentences of the novel. The beginning and end of a novel are usually worked over. They are the index to taste in the writer.
That’s a pretty good rule, and it ensures that the critic is discussing something reasonably close to what the writer intended to say. Best of all, we can approach the problem of excerpting with a kind of joy in the hunt: the search for the slice of a work that will stand as a synecdoche of the whole. In the book U & I, which is also about Updike, Nicholson Baker writes about the “standardized ID phrase” and “the aphoristic consensus” and “the jingle we will have to fight past at some point in the future” to see a writer clearly again, just as fans of Joyce have to do their best to forget about “the ineluctable modality of the visible” and “yes I said yes I will Yes.” For a living author, that repository of familiar quotations is constantly in flux, and reviewers might approach their work with a greater sense of responsibility if they realized that they were playing a part in creating it—one tiny excerpt at a time.
Revise like you’re running out of time
It might seem like a stretch, or at least premature, to compare Lin-Manuel Miranda to Shakespeare, but after playing Hamilton nonstop over the last couple of months, I can’t put the notion away. What the two of them have in common, aside from a readiness to plunder history as material for drama and a fondness for blatant anachronism, is their density and rapidity. When we try to figure out what sets Shakespeare apart from other playwrights, we’re likely to think first of the way his ideas and images succeed each other so quickly that they run the risk of turning into mixed metaphors, and how both characters and scenes can turn on a dime to introduce a new tone or register. Hamilton, at its best, has many of the same qualities. Hip-hop is capable of conveying more information per line than just about any other idiom, and Miranda exploits it to the fullest. But what really strikes me, after repeated listens, is his ability to move swiftly from one character, subplot, or theme to another, often in the course of a single song. For a musical to accomplish as much in two and a half hours as Hamilton does, it has to nail all the transitions. My favorite example is the one in the first act that carries us from “Helpless” to “Satisfied” to “Wait For It,” or from Hamilton’s courtship of Eliza to Angelica’s unrequited love to checking in with Burr in the space of about fifteen minutes. I’ve listened to that sequence multiple times, marveling at how all the pieces fit together, and it never even occurred to me to wonder how it was constructed until I’d internalized it. Which may be the most Shakespearean attribute of all.
But this doesn’t happen by accident. A few days ago, Miranda tweeted out a picture of his notebook for the incomparable “My Shot,” along with the dry comment: “Songs take time.” Like most musicals, Hamilton was refined and restructured in workshops—many recordings of which are available online—and continued to evolve between its Off-Broadway and Broadway incarnations. In theater, revision has a way of taking place in plain sight: it’s impossible to know the impact of any changes until you’ve seen them in performance, and the feedback you get in real time naturally informs the next iteration. Hamilton was developed under greater scrutiny than Miranda’s In the Heights, which was the product of five years of readings and workshops, and its evolution was constrained by what its creator has called “these weirdly visible benchmarks,” including the American Songbook Series at Lincoln Center and a high-profile presentation at Vassar. Still, much of the revision took place in Miranda’s head, a balance between public and private revision that feels Shakespearean in itself, if only because Shakespeare was better at it than anybody else. He clearly understood the creative utility of rehearsal and collaboration with a specific cast of actors, and he was cheerfully willing to rework a play based on how the audience responded. But we also know, based on surviving works like the unfinished Timon of Athens, that he revised the plays carefully on his own, roughing out large blocks of the action in prose form before going back to transform them into verse. We don’t have any of his manuscripts, but I suspect that they looked a lot like Miranda’s, and that he was ready to rearrange scenes and drop entire sequences to streamline and unify the whole. Like Hamilton, and Miranda, Shakespeare wrote like he was running out of time.
As it happens, I got to thinking about all this shortly after reading a description of a very different creative experience, in the form of playwright Glen Berger’s interview with The A.V. Club about the doomed production of Spider-Man: Turn Off the Dark. The whole thing is worth checking out, and I’ll probably end up reading Berger’s book Song of Spider-Man to get the full version. But this is the detail that stuck in my head the most:
Almost inevitably during previews for a Broadway musical, several songs are cut and several new songs are written. Sometimes, the new songs are the best songs. There’s the famous story of “Comedy Tonight” for A Funny Thing Happened On The Way To The Forum being written out of town. There are hundreds of other examples of songs being changed and scenes rearranged.
From our first preview to the day Julie [Taymor] left the show seven months later, not a single song was cut, which is kind of indicative of the rigidity that was setting in for one camp of the creators who felt like, “No, we came up with the perfect show. We just need to find a way to render it competently.”
A lot of things went wrong with Spider-Man, but this inability to revise—which might have allowed the show to address its other problems—seems like a fatal flaw. As books like Stephen Sondheim’s Finishing the Hat make clear, a musical can undergo drastic transformations between its earliest conception and opening night, and the lack of that flexibility here is what made the difference between a troubled production and a debacle.
But it’s also hard to blame Taymor, Berger, or any other individual involved when you consider the conditions under which the musical was produced, which made it hard for any kind of meaningful revision to occur at all. Even in theater, revision works best when it’s essentially private: following any train of thought to its logical conclusion requires the security that only solitude provides. A writer or director is less likely to learn from mistakes or test out the alternatives when the process is occurring in plain sight. From the very beginning, the creators of Spider-Man never had a moment of solitary reflection: it was a project that was born in a corporate boardroom and jumped immediately to Broadway. As Berger says:
Our biggest blunder was that we only had one workshop, and then we went into rehearsals for the Broadway run of the show. I’m working on another bound-for-Broadway musical now, and we’ve already had four workshops. Every time you hear, “Oh, we’re going to do another workshop,” the knee-jerk reaction is, “We don’t need any more. We can just go straight into rehearsals,” but we learn some new things every time. They provide you the opportunity to get rid of stuff that doesn’t work, songs that fall flat that you thought were amazing, or totally rewrite scenes. I’m all for workshops now.
It isn’t impossible to revise properly under conditions of extreme scrutiny—Pixar does a pretty good job of it—but it requires a degree of bravery that wasn’t evident here. And I’m curious to see how Miranda handles similar pressure, now that he occupies the position of an artist in residence at Disney, where Spider-Man also resides. Fame opens doors and creates possibilities, but real revision can only occur in the sessions of sweet silent thought.
Note: I’m heading out this afternoon for Kansas City, Missouri, where I’ll be taking part in programming over the next four days at the World Science Fiction Convention. Hope to see some of you there!
Do media brands have a future?
Note: I’m taking a break for the next few days, so I’ll be republishing some of my favorite posts from earlier in this blog’s run. This post originally appeared, in a slightly different form, on March 24, 2015.
Years ago, my online browsing habits followed a predictable routine. Each morning, after checking my email, I’d click over to read the headlines on the New York Times, then The A.V. Club, followed by whatever blogger, probably Andrew Sullivan, I was following at the moment. Although I didn’t think of it in those terms, in each case, I was responding to a brand: I trusted these sites to provide me with a few minutes of engaging content, and although I didn’t know exactly what would be posted each day, there were certain intangibles—a voice, a writer’s point of view, a stamp of quality—that assured me that a visit there would be worth my time. These days, my regimen looks very different. I still tune into the New York Times and The A.V. Club for old time’s sake, but the bulk of my browsing is done through Reddit or Digg. I don’t visit a lot of sites specifically for the content they provide; instead, I trust in aggregators, whether crowdsourced by upvotes or curated more deliberately, to direct my attention to whatever is worth reading from one hour to the next. In many cases, when I click through to a story, I don’t even know where the link goes, and I’ve lost count of the times I’ve told my wife about an article I saw “somewhere on Digg.” And once I’m done with that one spotlighted piece, I’m not particularly likely to visit the site later to see what else it might have to offer.
As a content provider—which is a term I hate—in my own right, the pattern of consumption that I see in myself chills me to the bone. Yet it represents a rational, if subconscious, choice. I’m simply betting that I’ll have a better time by trusting the aggregators, which admittedly are brands in themselves, rather than the brand of a specific writer or publication. Individual authors or sites can be erratic; on slow news days, even the Times can seem like a bore. But an aggregator that sweeps the entire web for material will always come up with something diverting, and I’m not tied down to any one source. After all, even the most consistently reliable reads can lose interest over time. I started visiting Reddit more regularly during the last presidential election, for instance, after I got tired of Andrew Sullivan’s increasingly panicky and hysterical tone: reading his blog turned into a chore. And I became less active on The A.V. Club, particularly as a commenter, after much of its core staff decamped for The Dissolve and Vox, although I still read certain features faithfully. To be honest, it’s been years since a new site grabbed my attention to the point where I wanted to read it every day. And I’m not alone: the problem of retaining loyalty to brands is the single greatest challenge confronting journalism of all kinds, even as musical artists deal with much the same issues on Spotify and Pandora.
Faced with a future driven by aggregators, which destroy the old business models for distributing content, most media companies have turned to one of two solutions. Either you provide content in a form that resists aggregation while still attracting an audience, or you nurture a voice or personality compelling enough to draw readers back on a regular basis. Both have their problems. At first glance, the two kinds of content that might seem immune to aggregation are television shows and podcasts, but that’s more of a structural quirk. From a network’s perspective, the real brand at stake isn’t Community or Parks and Recreation but NBC itself, and with the proliferation of viewing and streaming options, we’re much less likely to tune in to whatever the network wants to show us on Thursday night. And podcasts are simply awaiting the appearance of a reliable aggregator that will cull the day’s best episodes, or, even more likely, the best two- or three-minute snippets. Once that happens, we’re likely to start listening to podcasts as we consume written content, as a kind of smorgasbord of diversion that isn’t tied down to any one creator. As for personalities, they’re great when you can get them, but they’re excruciatingly rare. Talk radio is a fantastic example: the fact that maybe half a dozen guys—and they’re mostly men—have divided the radio audience between them for decades now points to how few can really do it.
And there’s no reason to expect other kinds of content to be any different. Every author hopes that his voice will be distinctive enough to draw in people who simply want to hear everything he says, but there aren’t many such writers left: David Carr, who passed away over a year ago, was one of the last. Even I’m mostly reconciled to the fact that readership on this blog is largely dependent on factors outside my control. My single busiest day occurred after one of my posts appeared on the front page of Reddit, but as I’ve noted elsewhere, after a heady period in which a mass of eyeballs equivalent to the population of Cincinnati came to visit, few, if any, stuck around to read more. I’ve slowly acquired a coterie of regular readers, but page views have remained more or less fixed for a long time, and my only spikes in traffic come when a post is linked somewhere else. I do what I can to keep the level of quality consistent, and if nothing else, I don’t lack for productivity. All I can really do is keep writing, throw out ideas, and hope that a few of them stick, which isn’t all that different from what the major media companies are doing on a much larger scale. (Although you can find lessons in unexpected places. One brand that caught my eye—in the form of a shelf of musty books, most of them long out of print—was the Bollingen Foundation, which I still think is a fascinating, if not entirely useful, counterexample.) But I can’t help but feel that there must be a better way.
The reviewable appliance
Last week, I quoted the critic Renata Adler, who wrote back in the early eighties: “Television…is clearly not an art but an appliance, through which reviewable material is sometimes played.” Which only indicates how much has changed over the last thirty years, which have seen television not only validated as a reviewable medium, but transformed into maybe the single most widely reviewed art form in existence. Part of this is due to an increase in the quality of the shows themselves: by now, it’s a cliché to say that we’re living in a golden age of television, but that doesn’t make it any less true, to the point that there are almost too many great shows for any one viewer to absorb. As John Landgraf of FX said last year, in a quote that was widely shared in media circles, mostly because it expresses how many of us feel: “There is simply too much television.” There are something like four hundred original scripted series airing these days—which is remarkable in itself, given how often critics have tolled the death knell for scripted content in the face of reality programming—and many are good to excellent. If we’ve learned to respect television as a medium that rewards close scrutiny, it’s largely because there are more worthwhile shows than ever before, and many deserve to be unpacked at length.
There’s also a sense in which shows have consciously risen to that challenge, taking advantage of the fact that there are so many venues for reviews and discussion. I never felt that I’d truly watched an episode of Mad Men until I’d watched Matthew Weiner’s weekly commentary and read the writeup on The A.V. Club, and I suspect that Weiner felt enabled to go for that level of density because the tools for talking about it were there. (To take another example: Mad Style, the fantastic blog maintained by Tom and Lorenzo, came into being because of the incredible work of costume designer Janie Bryant, but Bryant herself seemed to make certain choices because she knew that they would be noticed and dissected.) The Simpsons is often called the first VCR show—it allowed itself to go for rapid freeze-frame jokes and sign gags because viewers could pause to catch every detail—but these days, we’re more likely to rely on recaps and screen grabs to process shows that are too rich to be fully grasped on a single viewing. I’m occasionally embarrassed when I click on a review and read about a piece of obvious symbolism that I missed the first time around, but you could also argue that I’ve outsourced that part of my brain to the hive mind, knowing that I can take advantage of countless other pairs of eyes.
But the fact that television inspires millions of words of coverage every day can’t be entirely separated from Adler’s description of it as an appliance. For reasons that don’t have anything to do with television itself, the cycle of pop culture coverage—like that of every form of news—has continued to accelerate, with readers expecting nonstop content on demand: I’ll refresh a site a dozen times a day to see what has been posted in the meantime. Under those circumstances, reviewers and their editors naturally need a regular stream of material to be discussed, and television fits the bill beautifully. There’s a lot of it, it generates fresh grist for the mill on a daily basis, and it has an existing audience that can be enticed into reading about their favorite shows online. (This just takes a model that had long been used for sports and applies it to entertainment: the idea that every episode of Pretty Little Liars deserves a full writeup isn’t that much more ridiculous than devoting a few hundred words to every baseball game.) One utility piggybacks on the other, and it results in a symbiotic relationship: the shows start to focus on generating social media chatter, which, if not exactly a replacement for ratings, at least becomes an argument for keeping marginal shows like Community alive. And before long, the show itself is on Hulu or Yahoo.
None of this is inherently good or bad, although I’m often irked by the pressure to provide instant hot takes about the latest twist on a hit series, with think pieces covering other think pieces until the snake has eaten its own tail. (The most recent example was the “death” of Glenn on The Walking Dead, a show I don’t even watch, but which I found impossible to escape for three weeks last November.) There’s also an uncomfortable sense in which a television show can become an adjunct to its own media coverage: I found reading about Game of Thrones far more entertaining over the last season than watching the show itself. It’s all too easy to use the glut of detailed reviews as a substitute for the act of viewing: I haven’t watched Halt and Catch Fire, for instance, but I feel as if I have an opinion about it, based solely on the information I’ve picked up by osmosis from the review sites I visit. I sometimes worry that critics and fans have become so adept at live-tweeting episodes that they barely look at the screen, and the concept of hate-watching, of which I’ve been guilty myself, wouldn’t exist if we didn’t have plenty of ways to publicly express our contempt. It’s a slippery slope from there to losing the ability to enjoy good storytelling for its own sake. And we need to be aware of this. Because we’re lucky to be living in an era of so much great television—and we ought to treat it as something more than a source of hot and cold running reviews.
The critical path
A few weeks ago, I had occasion to mention Renata Adler’s famous attack in the New York Review of Books on the reputation of the film critic Pauline Kael. As a lifelong Kael fan, I don’t agree with Adler—who describes Kael’s output as “not simply, jarringly, piece by piece, line by line, and without interruption, worthless”—but I respect the essay’s fire and eloquence, and it’s still a great read. What I’d forgotten is that Adler opens with an assault, not on Kael alone, but on the entire enterprise of professional criticism itself. Here’s what she says:
The job of the regular daily, weekly, or even monthly critic resembles the work of the serious intermittent critic, who writes only when he is asked to or genuinely moved to, in limited ways and for only a limited period of time…Normally, no art can support for long the play of a major intelligence, working flat out, on a quotidian basis. No serious critic can devote himself, frequently, exclusively, and indefinitely, to reviewing works most of which inevitably cannot bear, would even be misrepresented by, review in depth…
The simple truth—this is okay, this is not okay, this is vile, this resembles that, this is good indeed, this is unspeakable—is not a day’s work for a thinking adult. Some critics go shrill. Others go stale. A lot go simultaneously shrill and stale.
Adler concludes: “By far the most common tendency, however, is to stay put and simply to inflate, to pretend that each day’s text is after all a crisis—the most, first, best, worst, finest, meanest, deepest, etc.—to take on, since we are dealing in superlatives, one of the first, most unmistakable marks of the hack.” And I think that she has a point, even if I have to challenge a few of her assumptions. (The statement that most works of art “inevitably cannot bear, would even be misrepresented by, review in depth,” is particularly strange, with its implicit division of all artistic productions into the sheep and the goats. It also implies that it’s the obligation of the artist to provide a worthy subject for the major critic, when in fact it’s the other way around: as a critic, you prove yourself in large part through your ability to mine insight from the unlikeliest of sources.) Writing reviews on a daily or weekly basis, especially when you have a limited amount of time to absorb the work itself, lends itself inevitably to shortcuts, and you often find yourself falling back on the same stock phrases and judgments. And Adler’s warning about “dealing in superlatives” seems altogether prescient. As Keith Phipps and Tasha Robinson of The A.V. Club pointed out a few years back, the need to stand out in an ocean of competing coverage means that every topic under consideration becomes either an epic fail or an epic win: a sensible middle ground doesn’t generate page views.
But the situation, at least from Adler’s point of view, is even more dire than when she wrote this essay in the early eighties. When Adler’s takedown of Kael first appeared, the most threatening form of critical dilution lay in weekly movie reviews: today, we’re living in a media environment in which every episode of every television show gets thousands of words of critical analysis from multiple pop culture sites. (Adler writes: “Television, in this respect, is clearly not an art but an appliance, through which reviewable material is sometimes played.” Which is only a measure of how much the way we think and talk about the medium has changed over the intervening three decades.) The conditions that Adler identifies as necessary for the creation of a major critic like Edmund Wilson or Harold Rosenberg—time, the ability to choose one’s subjects, and the freedom to quit when necessary—have all but disappeared for most writers hoping to make a mark, or even just a living. To borrow a trendy phrase, we’ve reached a point of peak content, with a torrent of verbiage being churned out at an unsustainable pace without the advertising dollars to support it, in a situation that can be maintained only by the seemingly endless supply of aspiring writers willing to be chewed up by the machine. And if Adler thought that even a monthly reviewing schedule was deadly for serious criticism, I’d be curious to hear how she feels about the online apprenticeship that all young writers seem expected to undergo these days.
Still, I’d like to think that Adler got it wrong, just as I believe that she was ultimately mistaken about Kael, whose legacy, for all its flaws, still endures. (It’s revealing to note that Adler had a long, distinguished career as a writer and critic herself, and yet she almost certainly remains best known among casual readers for her Kael review.) Not every lengthy writeup of the latest episode of The Vampire Diaries is going to stand the test of time, but as a crucible for forming a critic’s judgment, this daily grind feels like a necessary component, even if it isn’t the only one. A critic needs time and leisure to think about major works of art, which is a situation that the current media landscape doesn’t seem prepared to offer. But the ability to form quick judgments about works of widely varying quality and to express them fluently on deadline is an indispensable part of any critic’s toolbox. When taken as an end in itself, it can be deadening, as Adler notes, but it can also be the foundation for something more, even if it has to be undertaken outside of—or despite—the critic’s day job. The critic’s responsibility, now more than ever, isn’t to detach entirely from the relentless pace of pop culture, but to find ways of channeling it into something deeper than the instant think piece or hot take. As a blogger who frequently undertakes projects that can last for months or years, I’m constantly mindful of the relationship between my work on demand and my larger ambitions. And I sure hope that the two halves can work together. Because like it or not, every critic is walking that path already.
The peanut gallery
I first heard about The Peanuts Movie on October 9, 2012, when The A.V. Club reported that it was under development at Fox. At the time, my wife and I were expecting our first child, and it wouldn’t have been long afterward that I looked at the projected release date, did the math, and wondered if this might be the first movie I’d take my daughter to see in the theater. Three years later, that’s exactly how it worked out. I took Beatrix to a noon matinee last Thursday, and although I chose two seats in the back in case I had to beat a hasty retreat, she did great. At times, she got a little squirmy, and I ended up delivering a whispered plot commentary into her ear for much of the movie. She spent most of the last half on my lap. But aside from one moment when she wanted to get up from her seat to dance with the characters onscreen, she was perfect—laughing at all the right moments, even clapping at the end. (In retrospect, the choice of material couldn’t have been better: she complained that the Ice Age short that played before the feature was “too loud,” and I have a feeling that she would have reacted much the same way to anything but the sedate style that The Peanuts Movie captures so beautifully.) Best of all, when it was over and I asked what her favorite part was, she said: “When Charlie Brown was sad.” To which I could only think to myself: “That’s my girl!”
When The Peanuts Movie was first announced, many observers—including me—expressed reservations over whether it would be able to capture the feel of the strip and the original animated specials, and worried in particular that it would degenerate into a series of pop culture references. These concerns, while justified, conveniently ignored the fact that Charles Schulz himself was hardly averse to a trendy gag or two: Lucy once gave Schroeder a pair of Elton John glasses, and the Peanuts special that I watched the most growing up was It’s Flashbeagle, Charlie Brown. More to the point, the strip itself seems so timeless precisely because it reflected its own time so acutely. Its shift in tone from the fifties to the sixties feels like an expression of deeper cultural anxieties, and it was touched by current events to an extent that can be hard to appreciate now. (Snoopy’s dogfights with the Red Baron, which took place exclusively from 1965 to 1972, coincide to an eerie extent with American involvement in Vietnam.) The Peanuts Movie makes the smart, conservative choice by avoiding contemporary references as much as possible: like the first season of Fargo, its primary order of business is to establish its bona fides to anxious fans. But I’d like to think that the inevitable sequels will be a bit more adventurous, just as the later features that Schulz himself wrote began to venture into weirder, more idiosyncratic territory.
That’s hard, of course, when a movie is being conceived in the absence of its creator’s uniquely personal vision. The Peanuts Movie sometimes plays as if it had been written according to the model that Nicholas Meyer used when cracking The Wrath of Khan: “Let’s make a list of things we like.” (It doesn’t go quite as far as the musical You’re a Good Man, Charlie Brown, which adapts the original strips almost word by word, but it quotes from its sources to just the right extent.) The result is an anthology, gracefully assembled, of the best moments from the strip and specials, particularly A Charlie Brown Christmas, but it lacks the prickly specificity that characterized Schulz at his best. Yet I don’t want to undervalue its real achievements. Visually and tonally, it pulls off the immensely difficult technical trick of translating the strip’s spirit into a modern idiom, and the constraints that this imposed result in one of the prettiest, most graphically inventive animated movies I’ve seen in a long time. It never feels rushed or frantic, and its use of child actors, with their slight flatness of affect, is still appealing. Best of all, it respects the strip’s air of sadness—although there’s nothing like “It Changes” from Snoopy Come Home, which might be the bleakest sequence in any children’s movie. And while its happy ending might seem out of tune with Schulz’s underlying pessimism, it’s not so different from the conclusion that he might have given us if ill health and other distractions hadn’t intervened. This is a man, after all, who shied away from easy satisfactions in the strip, but who also wrote the script for It’s Your First Kiss, Charlie Brown.
And I’d like to think that it will play the same incalculable role in my daughter’s inner life that it did in mine. I’ve written at length about the strip before, but it wasn’t until I saw Snoopy at his typewriter on the big screen that I realized—or remembered—how struck I was by that image as a child, and how the impulse it awakened is responsible for where I am today. (One of my first attempts at writing consisted of a careful transcript of one of Snoopy’s stories, which I can still write from memory: “It was a dark and stormy night. Suddenly, a shot rang out! A door slammed. The maid screamed. Suddenly, a pirate ship appeared on the horizon!” At which point Snoopy smugly notes: “This twist in plot will baffle my readers.”) I would have loved this movie as a kid, and scenes like the one in which Snoopy, in his imagination, sneaks back across the front lines after his plane is downed are as much fun to dream about as always. Afterward, my daughter seemed most interested in imagining herself as the little red-haired girl, but if she’s anything like her father, she’ll come to recognize herself more in Charlie Brown and Snoopy, which represent the two halves of their creator’s personality: the neurotic and the fantasist, the solitary introvert and the imaginative writer for whom everything is possible. The Peanuts Movie may not ignite those feelings on its own, but as a gateway toward the rest of the Schulz canon, it’s close to perfection. As I once wrote about The Complete Peanuts collections, which I said would be among the first books my children would ever read: “I can’t imagine giving them a greater gift.”
Quote of the Day
People say things like, “The rule is that you never show the devil.” I’ve heard that. An actress lectured me on that once. But if you have a good-looking devil, and it looks convincing—well, yes, you show it! You kidding? It’ll scare the shit out of the audience. If you have a stupid devil, then you don’t show it.
Is this post an example of Betteridge’s Law?
Yesterday, I was browsing The A.V. Club when I came across the following clunky headline: “Could Guardians of the Galaxy be worthy of the coveted Firefly comparison?” I only skimmed the article itself, which asks, in case you were wondering, if the Guardians of the Galaxy animated series could be “the next Firefly“—a matter on which I don’t have much of an opinion one way or the other. But my attention was caught by one of the reader comments in response, which invoked Betteridge’s Law of Headlines: “Any headline that ends in a question mark can be answered by the word ‘no.'” Needless to say, this is a very useful rule. In its current form, it was set forth by the technology writer Ian Betteridge in response to the TechCrunch headline “Did Last.fm Just Hand Over User Listening Data to the RIAA?” Betteridge wrote:
This story is a great demonstration of my maxim that any headline which ends in a question mark can be answered by the word “no.” The reason why journalists use that style of headline is that they know the story is probably bullshit, and don’t actually have the sources and facts to back it up, but still want to run it. Which, of course, is why it’s so common in the Daily Mail.
Betteridge may have given the rule its most familiar name, but it’s actually much older. It pops up here and there in collections of Murphy’s Law and its variants, and among academics, it’s best known as Hinchliffe’s Rule, attributed—perhaps apocryphally—to the physicist Ian Hinchliffe, which states: “If the title of a scholarly article is a yes or no question, the answer is ‘no.'” (This recently led the Harvard University computer scientist Stuart M. Shieber to publish a scholarly article titled “Is This Article Consistent with Hinchliffe’s Rule?” The answer is no, but only if the answer is yes.) In his book My Trade, the British newspaper editor Andrew Marr makes the same point more forcefully:
If the headline asks a question, try answering “no.” Is This the True Face of Britain’s Young? (Sensible reader: No.) Have We Found the Cure for AIDS? (No; or you wouldn’t have put the question mark in.) Does This Map Provide the Key for Peace? (Probably not.) A headline with a question mark at the end means, in the vast majority of cases, that the story is tendentious or over-sold. It is often a scare story, or an attempt to elevate some run-of-the-mill piece of reporting into a national controversy and, preferably, a national panic. To a busy journalist hunting for real information a question mark means “don’t bother reading this bit.”
What I find most interesting about Betteridge’s version of the rule is his last line: “Which, of course, is why it’s so common in the Daily Mail.” This implies that the rule can be used not just to identify unreliable articles, but to characterize publications as a whole. As I write this, for instance, three headlines on the New York Times home page run afoul of it: “Is Valeant Pharmaceuticals the Next Enron?” “Has Diversity Lost Its Meaning?” “Are Flip Phones Having a Retro Chic Moment?” (There are a few more that technically sprout question marks but don’t quite fit the rubric, such as “Should You Be Watching Supergirl?”) The Daily Mail site, by contrast, has five times as many, and most of them fall neatly into the Betteridge category, including my favorite: “Does This Clip Show the Corpse of a Feared Chupacabra Vampire?” Buzzfeed, interestingly, doesn’t go for that headline format at all, and it only uses question marks to signify its famous personality quizzes: “Are You More Like Adele or Beyoncé?” This implies that a headline phrased in the form of a question might not be especially good at attracting eyeballs: Buzzfeed, which has refined clickbait into an art form, would surely use it more often if it worked. Most likely, as both Betteridge and Marr imply, it’s a way out for journalists who want to publish a story, but aren’t ready to stand behind it entirely. If anyone objects, they can always say that they were just raising the issue for further discussion.
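As an aside, the informal audit I just described is easy enough to automate, if you’re curious about the numbers. The sketch below is purely hypothetical: it assumes Python with the requests and BeautifulSoup libraries, and it guesses that a site keeps its headlines in h1, h2, or h3 tags, which any real front page would complicate in its own way. All it does is count how many headlines on a given page end in a question mark, which is the only thing the Betteridge test asks.

```python
# Hypothetical sketch: count question-mark headlines on a page, in the
# spirit of Betteridge's Law. The URL and the guess that headlines live
# in <h1>/<h2>/<h3> tags are assumptions, not facts about any real site.
import requests
from bs4 import BeautifulSoup

def betteridge_count(url):
    """Return (question_headlines, total_headlines) for the given page."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    headlines = [tag.get_text(strip=True) for tag in soup.find_all(["h1", "h2", "h3"])]
    questions = [h for h in headlines if h.endswith("?")]
    return len(questions), len(headlines)

if __name__ == "__main__":
    asked, total = betteridge_count("https://example.com")  # placeholder URL
    print(asked, "of", total, "headlines end in a question mark")
```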
But most readers, I suspect, can intuitively sense the difference. Headlines like this have always reminded me of “The End?” at the close of Manos: The Hands of Fate, to which Crow T. Robot replies: “Umm…Yes? No! I want to change my answer!” It might be instructive to conduct a study of whether or not they’ve increased in frequency over the last decade, as news cycles have grown ever more compressed and the need to generate think pieces on demand forces writers to crank out stories with a minimum of preparation. It’s hard to blame the reporters themselves, who are operating under conditions that actively discourage the kind of extended research process that would allow the question mark to be removed or the article to be dropped altogether. (And it’s worth noting that editors, not reporters, are the ones who write the headlines.) This isn’t to say that there can’t be good stories that sport headlines in the form of a question: like the Bechdel Test for movies, it’s less about criticizing individual works than making us more aware of the landscape. And given the choice, the question mark—which at least provides a visible red flag—is preferable to the exclamation point, literal or otherwise, that characterizes so much current content, from cable news on down. In that light, the question mark almost feels like a form of courtesy. And we have to learn to live with it, at least until good journalism, like the flip phone, experiences a retro chic moment of its own.
Trading places
Note: Every Friday, The A.V. Club, my favorite pop cultural site on the Internet, throws out a question to its staff members for discussion, and I’ve decided that I want to join in on the fun. This week’s topic: “What famous person’s life would you want to assume?”
“Celebrity,” John Updike once wrote, “is a mask that eats into the face.” And Updike would have known, having been one of the most famous—and the most envied—literary novelists of his generation, with a career that seemed to consist of nothing but the serene annual production of poems, stories, essays, and hardcovers that, with their dust jackets removed, turned out to have been bound and designed as a uniform edition. From the very beginning, Updike was already thinking about how his complete works would look on library shelves. That remarkable equanimity made an impression on the writer Nicholson Baker, who wrote in his book U & I:
I compared my awkward self-promotion too with a documentary about Updike that I saw in 1983, I believe, on public TV, in which, in one scene, as the camera follows his climb up a ladder at his mother’s house to put up or take down some storm windows, in the midst of this tricky physical act, he tosses down to us some startlingly lucid little felicity, something about “These small yearly duties which blah blah blah,” and I was stunned to recognize that in Updike we were dealing with a man so naturally verbal that he could write his fucking memoirs on a ladder!
Plenty of writers, young or old, might have wanted to switch places with Updike, although the first rule of inhabiting someone else’s life is that you don’t want to be a writer. (The Updike we see in Adam Begley’s recent biography comes across as more unruffled than most, but all those extramarital affairs in Ipswich must have been exhausting.) Writing might seem like an attractive kind of celebrity: you can inspire fierce devotion in a small community of fans while remaining safely anonymous in a restaurant or airport. You don’t even need to go as far as Thomas Pynchon: how many of us could really pick Michael Chabon or Don DeLillo or Cormac McCarthy out of a crowd? Yet that kind of seclusion carries a psychological toll as well, and I suspect that the daily life of any author, no matter how rich or acclaimed, looks much the same as any other. If you want to know what it’s like to be old, Malcolm Cowley wrote: “Put cotton in your ears and pebbles in your shoes. Pull on rubber gloves. Smear Vaseline over your glasses, and there you have it: instant old age.” And if you want to know what it’s like to be a novelist, you can fill a room with books and papers, go inside, close the door, and stay there for as long as possible while doing absolutely nothing that an outside observer would find interesting. Ninety percent of a writer’s working life looks more or less like that.
What kind of celebrity, then, do you really want to be? If celebrity is a mask, as Updike says, it might be best to make it explicit. Being a member of Daft Punk, say, would allow you to bask in the adulation of a stadium show, then remove your helmet and take the bus back to your hotel without any risk of being recognized. The mask doesn’t need to be literal, either: I have a feeling that Lady Gaga could dress down in a hoodie and ponytail and order a latte at any Starbucks in the country without being mobbed. The trouble, of course, with taking on the identity of a total unknown—Banksy, for instance—is that you’re buying the equivalent of a pig in a poke: you just don’t know what you’re getting. Ideally, you’d switch places with a celebrity whose life has been exhaustively chronicled, either by himself or others, so that there aren’t any unpleasant surprises. It’s probably best to also go with someone slightly advanced in years: as Solon says in Herodotus, you don’t really know how happy someone’s life is until it’s over, and the next best thing would be a person whose legacy seems more or less fixed. (There are dangers there, too, as Bill Cosby knows.) And maybe you want someone with a rich trove of memories of a life spent courting risk and uncertainty, but who has since mellowed into something slightly more stable, with the aura of those past accomplishments still intact.
You also want someone with the kind of career that attracts devoted collaborators, which is the only kind of artistic wealth that really counts. But you don’t want too much fame or power, both of which can become traps in themselves. In many respects, then, what you’d want is something close to the life of half and half that Lin Yutang described so beautifully: “A man living in half-fame and semi-obscurity.” Take it too far, though, and you start to inch away from whatever we call celebrity these days. (Only in today’s world can an otherwise thoughtful profile of Brie Larson talk about her “relative anonymity.”) And there are times when a touch of recognition in public can be a welcome boost to your ego, as it is for Sally Field in Soapdish, as long as you’re accosted by people with the same basic mindset, rather than those who just recognize you from Instagram. You want, in short, to be someone who can do pretty much what he likes, but less because of material resources than because of a personality that makes the impossible happen. You want to be someone who can tell an interviewer: “Throughout my life I have been able to do what I truly love, which is more valuable than any cash you could throw at me…So long as I have a roof over my head, something to read and something to eat, all is fine…What makes me so rich is that I am welcomed almost everywhere.” You want to be Werner Herzog.
The old switcheroo
Note: Every Friday, The A.V. Club, my favorite pop cultural site on the Internet, throws out a question to its staff members for discussion, and I’ve decided that I want to join in on the fun. This week’s topic: “What makes a great trailer?”
A few years ago, in a post about The Cabin in the Woods, which is one of a small handful of recent films I still think about on a regular basis, I wrote:
If there’s one thing we’ve learned about American movie audiences over the past decade or so, it’s that they don’t like being surprised. They may say that they do, and they certainly respond positively to twist endings, properly delivered, within the conventions of the genre they were hoping to see. What they don’t like is going to a movie expecting one thing and being given something else. And while this is sometimes a justifiable response to misleading ads and trailers, it can also be a form of resentment at having one’s expectations upended.
I went on to quote a thoughtful analysis from Box Office Mojo, which put its finger on why the movie scored so badly with audiences:
By delivering something much different, the movie delighted a small group of audience members while generally frustrating those whose expectations were subverted. Moviegoers like to know what they are in for when they go to see a movie, and when it turns out to be something different the movie tends to get punished in exit polling.
And the funny thing is that you can’t really blame the audience for this. If you think of a movie primarily as a commercial product that you’ve paid ten dollars or more to see—which doesn’t even cover the ancillary costs of finding a babysitter and driving to and from the theater—you’re likely to be frustrated if it turns out to be something different from what you were expecting. This is especially the case if you only see a few movies a year, and doubly so if you avoid the reviews and base your decisions solely on trailers, social media, or the presence of a reliable star. In practice, this means that certain surprises are acceptable, while others aren’t. It’s fine if the genre you’re watching all but requires there to be a twist, even if it strains all logic or openly cheats. (A lot of people apparently liked Now You See Me.) But if the twist takes you out of the genre that you thought you were paying to see, viewers tend to get angry. Genre, in many ways, is the most useful metric for deciding where to put your money: if you pay to see an action movie or a romantic comedy or a slasher film, you have a pretty good sense of the story beats you’re going to experience. A movie that poses as one genre and turns out to be another feels like flagrant false advertising, and it leaves many viewers feeling ripped off.
As a result, it’s probably no longer possible for a mainstream movie to radically change in tone halfway through, at least not in a way that hasn’t been spoiled by trailers. Few viewers, I suspect, went into From Dusk Till Dawn without knowing that a bunch of vampires were coming, and a film like Psycho couldn’t be made today at all. (Any attempt to preserve the movie’s secrets in the ads would be seen, after the fact, as a tragic miscalculation in marketing, as many industry insiders thought it was for The Cabin in the Woods.) There’s an interesting exception to this rule, though, and it applies to trailers themselves. Unless it’s for something like The Force Awakens, a trailer, by definition, isn’t something you’ve paid to see: you don’t have any particular investment in what it’s showing you, and it’s only going to claim your attention for a couple of minutes. As a result, trailers can indulge in all kinds of formal experiments that movies can’t, and probably shouldn’t, attempt at feature length. For the most part, trailers aren’t edited according to the same rules as movies, and they’re often cut together by a separate team of editors who are looking at the footage using a very different set of criteria. And as it turns out, one of the most reliable conventions of movie trailers is the old switcheroo: you start off in one genre, then shift abruptly to another, often accompanied by a needle scratch or ominous music cue.
In other words, the trailers frequently try to appeal to audiences using exactly the kind of surprise that the movies themselves can no longer provide. Sometimes it starts off realistically, only to introduce monsters or aliens, as Cloverfield and District 9 did so memorably, and trailers never tire of the gimmick of giving us what looks like a romantic comedy before switching into thriller mode. The ultimate example, to my mind, remains Vanilla Sky, which is still one of my favorite trailers. When I saw it for the first time, the genre switcheroo wasn’t as overused as it later became, and the result knocked me sideways. By now, most of its tricks have become clichés in themselves, down to its use of “Solsbury Hill,” so maybe you’ll have to take my word for it when I say that it was unbelievably effective. (In some ways, I wish the movie, which I also love, had followed the trailer’s template more closely, instead of tipping its hand early on about the weirdness to come.) And I suspect that such trailers, with their ability to cross genre boundaries, represent a kind of longing by directors for the sorts of films that they’d really like to make. The logic of the marketplace has made it impossible for such surprises to survive in the finished product, but a trailer can serve as a sort of miniature version of what it might have been under different circumstances. This isn’t always true: in most cases, the studio just cuts together a trailer for the movie that they wish the director had made, rather than the one that he actually delivered. But every now and then, a great trailer can feel like a glimpse of a movie’s inner, secret life, even if it turns out that it was all a dream.
Multiple personalities
When I was in my early twenties, I was astonished to learn that “One,” “Coconut,” the soundtrack to The Point, and “He Needs Me”—as sung by Shelley Duvall in Popeye and, much later, in Punch-Drunk Love—were all written by the same man, who also sang “Everybody’s Talkin'” from Midnight Cowboy. (This doesn’t even cover “Without You” or “Jump Into the Fire,” which I discovered only later, and it also ignores some of the weirder detours in Harry Nilsson’s huge discography.) At the time, I was reminded of Homer Simpson’s response when Lisa told him that bacon, ham, and pork chops all came from the same animal: “Yeah, right, Lisa. A wonderful, magical animal.” Which is exactly what Nilsson was. But it’s also the kind of diversity that arises from decades of productive, idiosyncratic work. Nilsson was a facile songwriter with a lot of tricks up his sleeve, as he notes in an interview in the book Songwriters on Songwriting:
Most [songs] I find you can write in less time than it takes to sing them. The concept, if there is a concept, or the hook, is all you’re concerned with. Because you know you can go back and fill in the pieces. If you get a front line and a punch line, it’s a question of just filling in the missing bits.
And given Nilsson’s diverse, prolific output, it shouldn’t come as a surprise that I encountered him in so many different guises before realizing that they were all aspects of a single creative personality.
Of course, not every career generates this kind of enticing randomness. Nilsson occupied a curious position for much of his life, stuck somewhere halfway between superstardom and seclusion, and it freed him to make a long series of odd, peculiar choices. When other artists end up in the same position, it’s often less by choice than by necessity. When you look at the résumé of a veteran supporting actor or working writer, you usually find that they resist easy categorizations, since each credit resulted from a confluence of circumstances that may never be repeated. A glance at the filmography of any character actor inspires moment after moment of recognition, as you realize, for instance, that the same guy who played Mr. Noodle on Sesame Street was also the dad in Rachel Getting Married and TARS in Interstellar. A few artists have the luxury of shaping careers that seem all of a piece, but others aren’t all that interested in it, or find that their body of work is determined more by external factors. Most actors aren’t in a position to turn down a paycheck, and learning how and why they took one role and not another is part of what makes Will Harris’s A.V. Club interviews in “Random Roles” so fascinating. When you’re at the constant mercy of trends and casting agents, you can end up with a career that looks like it should belong to three different people. And as someone like Matthew McConaughey can tell you, that goes for stars as well.
It’s particularly true of actresses. I’ve spoken here before of the starlet’s dilemma, in which young actresses are required to balance the needs of extending their shelf life as ingenues for a few more seasons with the very different set of choices required to sustain a career over decades. In many cases, the decisions that make sense now, like engaging in cosmetic surgery, can come back to haunt them later, but the pressure to extend their prime earning years is immense, and it’s no surprise that few manage to navigate the pitfalls that Hollywood presents. I was reminded of this while leafing—never mind why—through the latest issue of Allure, which features Jessica Alba on its cover. Alba has recently begun a second act as the head of her own consumer goods company, and she seems far happier and more satisfied in that role than she ever was as an actress: she admits that she tried to be what everyone else wanted her to be, and she accepted roles and made choices without a larger plan in mind. The result, sadly, was a career without shape or character, determined by an industry that could never decide whether Alba was best suited for comedy, romance, or action. I don’t think any of her movies will still be watched twenty years from now, and I expect that we’ll be surprised one day to remember that the founder of the Honest Company was also a movie star, in the way it amuses us to reflect that Martha Stewart used to be a model.
So how do you end up with a career more like Nilsson’s and less like Alba’s, given the countless uncontrollable factors that can govern a life in the arts? You can begin, perhaps, by remembering that an artist, like any human being, will play many roles, and not all of them are going to be consistent. When you look back at what you’ve done, it can be hard to find any particular shape, aside from what was determined by the needs of the moment, and it may even be difficult to recognize the person who thought that a particular project was a good idea—if you had any choice in the matter at all. (When I look at my own career, I find that it divides neatly in two, with one half in science fiction and the other in suspense, with no overlap between them whatsoever, a situation that was created almost entirely by the demands of the market.) But if you need to wear multiple hats, or even multiple personalities, you can at least strive to make all of them interesting. Consistency, as Emerson puts it, is the hobgoblin of little minds, and it’s an equally elusive goal in the arts: the only way to be consistent is to be dependably mediocre. The life you get by staying true to yourself in the face of external pressure will be more interesting than the one that results from a perfect plan. It can even be easier to have two careers than one. And if you try too hard to make everything fit into a single frame, you might find that one is the loneliest number.
Pictures at an exhibition
Note: Every Friday, The A.V. Club, my favorite pop cultural site on the Internet, throws out a question to its staff members for discussion, and I’ve decided that I want to join in on the fun. This week’s topic: “What piece of art has actually stopped you in your tracks?”
“All art constantly aspires toward the condition of music,” Walter Pater famously said, but these days, it seems more accurate to say that all art aspires toward the condition of advertising. There’s always been a dialogue between the two, of course, and it runs in both directions, with commercials and print ads picking up on advances in the fine arts, even as artists begin to utilize techniques initially developed on Madison Avenue. Advertising is a particularly ruthless medium—you have only a few seconds to grab the viewer’s attention—and the combination of quick turnover, rapid feedback, and intense financial pressure allows innovations to be adapted and refined with blinding speed, at least within a certain narrow range. (There’s a real sense in which the hard lessons that Jim Henson, say, learned while shooting commercials for Wilkins Coffee are what made Sesame Street so successful.) The difference today is that the push for virality—the need to attract eyeballs in brutal competition with countless potential diversions—has superseded all other considerations, including the ability to grow and maintain an audience. When thousands of “content providers” are fighting for our time on equal terms, there’s no particular reason to remain loyal to any one of them. Everything is an ad now, and it’s selling nothing but itself.
This isn’t a new idea, and I’ve written about it here at length before. What really interests me, though, is how even the most successful examples of storytelling are judged by how effectively they point to some undefined future product. The Marvel movies are essentially commercials or trailers for the idea of a superhero film: every installment builds to a big, meaningless battle that serves as a preview for the confrontation in an upcoming sequel, and we know that nothing can ever truly upset the status quo when the studio’s slate of tentpole releases has already been announced well into the next decade. They aren’t bad films, but they’re just ever so slightly better than they have to be, and I don’t have much of an interest in seeing any more. (Man of Steel has plenty of problems, but at least it represents an actual point of view and an attempt to work through its considerable confusions, and I’d sooner watch it again than The Avengers.) Marvel is fortunate enough to possess one of the few brands capable of maintaining an audience, and it’s petrified at the thought of losing it with anything so upsetting as a genuine surprise. And you can’t blame anyone involved. As Christopher McQuarrie aptly puts it, everyone in Hollywood is “terribly lost and desperately in need of help,” and the last thing Marvel or Disney wants is to turn one of the last reliable franchises into anything less than a predictable stream of cash flows. The pop culture pundits who criticize it—many of whom may not have jobs this time next year—should be so lucky.
But it’s unclear where this leaves the rest of us, especially with the question of how to catch the viewer’s eye while inspiring an engagement that lasts. The human brain is wired in such a way that the images or ideas that seize its attention most easily aren’t likely to retain it over the long term: the quicker the impression, the sooner it evaporates, perhaps because it naturally appeals to our most superficial impulses. Which only means that it’s worth taking a close look at works of art that both capture our interest and reward it. It’s like going to an art gallery. You wander from room to room, glancing at most of the exhibits for just a few seconds, but every now and then, you see something that won’t let go. Usually, it only manages to intrigue you for the minute it takes to read the explanatory text beside it, but occasionally, the impression it makes is a lasting one. Speaking from personal experience, I can think of two revelatory moments in which a glimpse of a picture out of the corner of my eye led to a lifelong obsession. One was Cindy Sherman’s Untitled Film Stills; the other was the silhouette work of Kara Walker. They could hardly be more different, but both succeed because they evoke something to which we instinctively respond—movie archetypes and clichés in Sherman’s case, classic children’s illustrations in Walker’s—and then force us to question why they appealed to us in the first place.
And they manage to have it both ways to an extent that most artists would have reason to envy. Sherman’s film stills both parody and exploit the attitudes that they meticulously reconstruct: they wouldn’t be nearly as effective if they didn’t also serve as pin-ups for readers of Art in America. Similarly, Walker’s cutouts fill us with a kind of uneasy nostalgia for the picture books we read growing up, even as they investigate the darkest subjects imaginable. (They also raise fascinating questions about intentionality. Sherman, like David Lynch, can come across as a naif in interviews, while Walker is closer to Michael Haneke, an artist who is nothing if not completely aware of how each effect was achieved.) That strange combination of surface appeal and paradoxical depth may be the most promising angle of attack that artists currently have. You could say much the same about Vijith Assar’s recent piece for McSweeney’s about ambiguous grammar, which starts out as the kind of viral article that we all love to pass around—the animated graphics, the prepackaged nuggets of insight—only to end on a sweet sucker punch. The future of art may lie in forms that seize on the tools of virality while making us think twice about why we’re tempted to click the share button. And it requires artists of unbelievable virtuosity, who are able to exactly replicate the conditions of viral success while infusing them with a white-hot irony. It isn’t easy, but nothing worth doing ever is. This is the game we’re all playing, like it or not, and the artists who are most likely to survive are the ones who can catch the eye while also burrowing into the brain.