Posts Tagged ‘The A.V. Club’
I first saw Brian De Palma’s Raising Cain when I was fourteen years old. In a weird way, it amounted to a peak moment of my early adolescence: I was on a school trip to our nation’s capital, sharing a hotel room with my friends from middle school, and we were just tickled to get away with watching an R-rated movie on cable. The fact that we ended up with Raising Cain doesn’t quite compare with the kids on The Simpsons cheering at the chance to see Barton Fink, but it isn’t too far off. I think that we liked it, and while I won’t claim that we understood it, that doesn’t mean much of anything—it’s hard for me to imagine anybody, of any age, entirely understanding this movie, which includes both me and De Palma himself. A few years later, I caught it again on television, and while I can’t say I’ve thought about it much since, I never forgot it. Gradually, I began to catch up on my De Palma, going mostly by whatever movies made Pauline Kael the most ecstatic at the time, which in itself was an education in the gap between a great critic’s pet enthusiasms and what exists on the screen. (In her review of The Fury, Kael wrote: “No Hitchcock thriller was ever so intense, went so far, or had so many ‘classic’ sequences.” I love Kael, but there are at least three things wrong with that sentence.) And ultimately De Palma came to mean a lot to me, as he does to just about anyone who responds to the movies in a certain way.
When I heard about the recut version of Raising Cain—in an interview with John Lithgow on The A.V. Club, no less, in which he was promoting his somewhat different role on The Crown—I was intrigued. And its backstory is particularly interesting. Shortly before the movie was first released, De Palma moved a crucial sequence from the beginning to the middle, eliminating an extended flashback and allowing the film to play more or less chronologically. He came to regret the change, but it was too late to do anything about it. Years later, a freelance director and editor named Peet Gelderblom read about the original cut and decided to restore it, performing a judicious edit on a digital copy. He put it online, where, unbelievably, it was seen by De Palma himself, who not only loved it but asked that it be included as a special feature on the new Blu-ray release. If nothing else, it’s a reminder of the true possibilities of fan edits, which have served mostly for competing visions of the ideal version of Star Wars. With modern software, a fan can do for a movie what Walter Murch did for Touch of Evil, restoring it to the director’s original version based on a script or a verbal description. In the case of Raising Cain, this mostly just involved rearranging the pieces in the theatrical cut, but other fans have tackled such challenges as restoring all the deleted scenes in Twin Peaks: Fire Walk With Me, and there are countless other candidates.
Yet Raising Cain might be the most instructive case study of all, because simply restoring the original opening to its intended place results in a radical transformation. It isn’t for everyone, and it’s necessary to grant De Palma his usual passes for clunky dialogue and characterization, but if you’re ready to meet it halfway, you’re rewarded with a thriller that twists back on itself like a Möbius strip. De Palma plunders his earlier movies so blatantly that it isn’t clear if he’s somehow paying loving homage to himself—bypassing Hitchcock entirely—or recycling good ideas that he feels like using again. The recut opens with a long mislead that recalls Dressed to Kill, which means that Lithgow barely even appears for the first twenty minutes. You can almost see why De Palma chickened out for the theatrical version: Lithgow’s performance as the meek Carter and his psychotic imaginary brother Cain feels too juicy to withhold. But the logic of the script was destroyed. For a film that tests an audience’s suspension of disbelief in so many other ways, it’s unclear why De Palma thought that a flashback would be too much for the viewer to handle. The theatrical release preserves all the great shock effects that are the movie’s primary reason for existing, but they don’t build to anything, and you’re left with a film that plays like a series of sketches. With the original order restored, it becomes what it was meant to be all along: a great shaggy dog story with a killer punchline.
Raising Cain is gleefully about nothing but itself, and I wouldn’t force anybody to watch it who wasn’t already interested. But the recut also serves as an excellent introduction to its director, just as the older version did for me: when I first encountered it, I doubt I’d seen anything by De Palma, except maybe The Untouchables, and Mission: Impossible was still a year away. It’s safe to say that if you like Raising Cain, you’ll like De Palma in general, and if you can’t get past its archness, campiness, and indifference to basic plausibility—well, I can hardly blame you. Watching it again, I was reminded of Blue Velvet, a far greater movie that presents the viewer with a similar test. It has the same mixture of naïveté and incredible technical virtuosity, with scenes that barely seem to have been written alternating with ones that push against the boundaries of the medium itself. You’re never quite sure if the director is in on the gag, and maybe it doesn’t matter. There isn’t much beauty in Raising Cain, and De Palma is a hackier and more mechanical director than Lynch, but both are so strongly visual that the nonsensory aspects of their films, like the obligatory scenes with the cops, seem to wither before our eyes. (It’s an approach that requires a kind of raw, intuitive trust from the cast, and as much as I enjoy what Lithgow does here, he may be too clever and resourceful an actor to really disappear into the role.) Both are rooted, crucially, in Hitchcock, who was equally obsessive, but was careful to never work from his own script. Hitchcock kept his secret self hidden, while De Palma puts it in plain sight. And if it turns out to be nothing at all, that’s probably part of the joke.
Note: Major spoilers follow for the most recent episode of Westworld.
Shortly before the final scene of “Trompe L’Oeil,” it occurred to me that Westworld, after a strong start, was beginning to coast a little. Like any ensemble drama on a premium cable channel, it’s a machine with a lot of moving parts, so it can be hard to pin down any specific source of trouble. But it appears to be a combination of factors. The plot thread centering on Dolores, which I’ve previously identified as the engine that drives the entire series, has entered something of a holding pattern—presumably because the show is saving its best material for closer to the finale. (I was skeptical of the multiple timelines theory at first, but I’m reluctantly coming around to it.) The introduction of Delos, the corporation that owns the park, as an active participant in the story is a decision that probably looked good on paper, but it doesn’t quite work. So far, the series has given us what amounts to a closed ecosystem, with a cast of characters that consists entirely of the hosts, the employees, and a handful of guests. At this stage, bringing in a broadly villainous executive from corporate headquarters comes precariously close to a gimmick: it would have been more interesting to have the conflict arise from someone we’d already gotten to know in a more nuanced way. Finally, it’s possible that the events of the last week have made me more sensitive to the tendency of the series to fall back on images of violence against women to drive the story forward. I don’t know how those scenes would have played earlier, but they sure don’t play for me now.
And then we get the twist that a lot of viewers, including me, had suspected might be coming: Bernard is a robot. Taken on its own, the revelation is smartly handled, and there are a lot of clever touches. In a scene at the beginning between Bernard and Hector, the episode establishes that the robots simply can’t process details that conflict with their programming, and this pays off nicely at the end, when Bernard doesn’t see the door that leads into Dr. Ford’s secret lab. A minute later, when Theresa hands him the schematics that show his own face, Bernard says: “It doesn’t look like anything to me.” (This raises an enticing possibility for future reveals, in which scenes from previous episodes that were staged from Bernard’s point of view are shown to have elements that we didn’t see at the time, because Bernard couldn’t. I don’t know if the show will take that approach, but it should—it’s nothing less than an improvement on the structural mislead in The Sixth Sense, and it would be a shame not to use it.) Yet the climactic moment, in which Dr. Ford calmly orders Bernard to murder Theresa, doesn’t land as well as it could have. It should have felt like a shocking betrayal, but the groundwork wasn’t quite there: Bernard and Theresa’s affair was treated very casually, and by the time we get to their defining encounter, whatever affection they had for each other is long gone. From the point of view of the overall plot, this arguably makes sense. But it also drains some of the horror from a payoff that the show must have known was coming. If we imagine Elsie as the victim instead, we can glimpse what the scene might have been.
Yet I’m not entirely sure this wasn’t intentional. Westworld is a cerebral, even clinical show, and it doesn’t seem to take pleasure in action or visceral climaxes for their own sake. Part of this probably reflects the temperament of its creators, but it also feels like an attempt by the show to position itself in a challenging time for this kind of storytelling. It’s a serialized drama that delivers new installments each week, but these days, such shows are just as likely to drop all ten episodes at once. This was obviously never an option for a show on HBO, but the weekly format creates real problems for a series that seems determined to set up twists that are more considered and logical than the usual shock deaths. To its credit, the show has played fair with viewers, and the clues to Bernard’s true nature were laid in with care. (If I noticed them, it was only because I was looking: I asked myself, working from first principles, what kind of surprise a show like this would be likely to spring, and the revelation that one of the staff members was actually a host seemed like a strong contender.) When a full week of online discussion and speculation falls between each episode, it becomes harder to deliver such surprises. Even if the multiple timeline theory doesn’t turn out to be correct, its very existence indicates the amount of energy, ingenuity, and obsessive analysis that the audience is willing to devote to it. As a result, the show’s emotional detachment comes off as a preemptive defense mechanism. It downplays the big twists, as if to tell us that it isn’t the surprises that count, but their implications.
In the case of Bernard, I’m willing to take that leap, if only because the character is in the hands of Jeffrey Wright, who is more qualified than any other actor alive to work through the repercussions. It’s a casting choice that speaks a lot, in itself, to the show’s intelligence. (In an interview with The A.V. Club, Wright has revealed that he didn’t know that Bernard was a robot when he shot the pilot, and that his own theory was that Dr. Ford was a creation of Bernard’s, which would have been even more interesting.) The revelation effectively reveals Bernard to have been the show’s secret protagonist all along, which is where he belongs, and it occurs at just about the right point in the season for it to resonate: we’ve still got three episodes to go, which gives the show room, refreshingly, to deal with the consequences, rather than rushing past them to the finale. Whether it can do the same with whatever else it has up its sleeve, including the possibility of multiple timelines, remains to be seen. But even though I’ve been slightly underwhelmed by the last two episodes, I’m still excited to see how it plays its hand. Even as Westworld unfolds from one week to the next, it clearly sees the season as a single continuous story, and the qualities that I’ve found unsatisfying in the moment—the lulls, the lack of connection between the various plot threads, the sense that it’s holding back for the climax—are those that I hope will pay off the most in the end. Like its robots, the series is built with a bicameral mind, with the logic of the whole whispering its instructions to the present. And more than any show since Mad Men, it seems to have its eye on the long game.
My short story “Ernesto,” which originally appeared in the March 2012 issue of Analog Science Fiction and Fact, has just been reprinted by Lightspeed. To celebrate its reappearance, I’ll be publishing revised versions of a few posts in which I described the origins of this story, which you can read for free here, along with a nice interview.
In an excellent interview from a few years ago with The A.V. Club, the director Steven Soderbergh spoke about the disproportionately large impact that small changes can have on a film: “Two frames can be the difference between something that works and something that doesn’t. It’s fascinating.” The playwright and screenwriter Jez Butterworth once made a similar point, noting that the gap between “nearly” and “really” in a photograph—or a script—can come down to a single frame. The same principle holds just as true, if not more so, for fiction. A cut, a new sentence, or a tiny clarification can turn a decent but unpublishable story into one that sells. These changes are often so invisible that the author himself would have trouble finding them after the fact, but their overall effect can’t be denied. And I’ve learned this lesson more than once in my life, perhaps most vividly with “Ernesto,” a story that I thought was finished, but which turned out to have a few more surprises in store.
When I was done with “Ernesto,” I sent it to Stanley Schmidt at Analog, who had just purchased my novelette “The Last Resort.” Stan’s response, which I still have somewhere in my files, was that the story didn’t quite grab him enough to find room for it in a rather crowded schedule, but that he’d hold onto it, just in case, while I sent it around to other publications. It wasn’t a rejection, exactly, but it was hardly an acceptance. (Having just gone through three decades of John W. Campbell’s correspondence, I now know that this kind of response is fairly common when a magazine is overstocked.) I dutifully sent it around to most of the usual suspects at the time: Asimov’s, Fantasy & Science Fiction, and the online magazines Clarkesworld and Intergalactic Medicine Show. Some had a few kind words for the story, but they all ultimately passed. At that point, I concluded that “Ernesto” just wasn’t publishable. This was hardly the end of the world—it had only taken two weeks to write—but it was an unfortunate outcome for a story that I thought was still pretty clever.
A few months later, I saw a call for submissions for an independent paperback anthology, the kind that pays its contributors in author’s copies, and its theme—science fiction stories about monks—seemed to fit “Ernesto” fairly well. The one catch was that the maximum length for submissions was 6,000 words, while “Ernesto” weighed in at over 7,500. Cutting twenty percent of a story that was already highly compressed, at least to my eyes, was no joke, but I figured that I’d give it a try. Over the course of a couple of days, then, I cut it to the bone, removing scenes and extra material wherever I could. Since almost a year had passed since I’d first written it, it was easy to see what was and wasn’t necessary. More significantly, I added an epigraph, from Ernest Hemingway’s interview with The Paris Review, that made it clear from the start that the main character was Hemingway, which wasn’t the case with the earlier draft. And the result read a lot more smoothly than the version I’d sent out before.
It might have ended there, with “Ernesto” appearing without fanfare in an unpaid anthology, but as luck would have it, Analog had just accepted a revised version of my novelette “The Boneless One,” which had also been rejected by a bunch of magazines in its earlier form. Encouraged by this, I thought I’d try the same thing with “Ernesto.” So I sent it to Analog again, and it was accepted, almost twelve months after my first submission. Now it’s being reprinted more than four years later by Lightspeed, a magazine that didn’t even exist when I first wrote it. The moral, I guess, is that if a story has been turned down by five of the top magazines in your field, it probably isn’t good enough to be published—but that doesn’t mean it can’t get better. In this case, my rule of spending two weeks on a short story ended up being not quite correct: I wrote the story in two weeks, shopped it around for a year, and then spent two more days on it. And those last two days, like Soderbergh’s two frames, were what made all the difference.
Earlier this week, The A.V. Club, which is still the pop culture website at which I spend the vast majority of my online life, announced a new food section called “Supper Club.” It’s helmed by the James Beard Award-winning food critic and journalist Kevin Pang, a talented writer and documentarian whose work I’ve admired for years. On Wednesday, alongside the site’s usual television and movie coverage, seemingly half the homepage was devoted to features like “America’s ten tastiest fast foods,” followed a day later by “All of Dairy Queen’s Blizzards, ranked.” And the reaction from the community was—not good. Pang’s introductory post quickly drew over a thousand comments, with the most upvoted response reading:
I’ll save you about six months of pissed-away cash. Please reallocate the money that will be wasted on this venture to add more shows to the TV Club review section.
Most of the other food features received the same treatment, with commenters ignoring the content of the articles themselves and complaining about the new section on principle. Internet commenters, it must be said, are notoriously resistant to change, and the most vocal segment of the community represents a tiny fraction of the overall readership of The A.V. Club. But I think it’s fair to say that the site’s editors can’t be entirely happy with how the launch has gone.
Yet the readers aren’t altogether wrong, either, and in retrospect, you could make a good case that the rollout should have been handled differently. The A.V. Club has gone through a rough couple of years, with many of its most recognizable writers leaving to start the movie site The Dissolve—which recently folded—even as its signature television coverage has been scaled back. Those detailed reviews of individual episodes might be popular with commenters, but they evidently don’t generate enough page views to justify the same degree of investment, and the site is looking at ways to stabilize its revenue at a challenging time for the entire industry. The community is obviously worried about this, and Supper Club happened to appear at a moment when the commenters were likely to be skeptical about any new move, as if it were all a zero-sum game, which it isn’t. But the launch itself didn’t help matters. It makes sense to start an enterprise like this with a lot of articles on its first day, but taking over half the site with minimal advance warning lost it a lot of goodwill. Pang could also have been introduced more gradually: he’s a celebrity in foodie circles, but to most A.V. Club readers, he’s just a name. (It was also probably a miscalculation to have Pang write the introductory post himself, which placed him in the awkward position of having to drum up interest in his own work for an audience that didn’t know who he was.) And while I’ve enjoyed some of the content so far, and I understand the desire to keep the features lightweight and accessible, I don’t think the site has done itself any favors by leading with articles like “Do we eat soup or do we drink soup?”
This might seem like a lot of analysis for a kerfuffle that will be forgotten within a few weeks, no matter how Supper Club does in the meantime. But The A.V. Club has been a landmark site for pop culture coverage for the last decade, and its efforts to reinvent itself should concern anyone who cares about whether such venues can survive. I found myself thinking about this shortly after reading the excellent New Yorker profile of Pete Wells, the restaurant critic of the New York Times. Its author, Ian Parker, notes that modern food writing has become a subset of cultural criticism:
“A lot of reviews now tend to be food features,” [former Times restaurant critic Mimi Sheraton] said. She recalled a reference to Martin Amis in a Wells review of a Spanish restaurant in Brooklyn; she said she would have mentioned Amis only “if he came in and sat down and ordered chopped liver.”
Craig Claiborne, in a review from 1966, observed, “The lobster tart was palatable but bland and the skewered lamb on the dry side. The mussels marinière were creditable.” Thanks, in part, to the informal and diverting columns of Gael Greene, at New York, and Ruth Reichl, the Times’ critic during the nineties, restaurant reviewing in American papers has since become as much a vehicle for cultural criticism and literary entertainment—or, as Sheraton put it, “gossip”—as a guide to eating out.
If this is true, and I think it is, it means that food criticism, for better or worse, falls squarely within the mandate of The A.V. Club, whether its commenters like it or not.
But that doesn’t mean that we shouldn’t hold The A.V. Club to unreasonably high standards. In fact, we should be harder on it than we would on most sites, for reasons that Parker neatly outlines in his profile of Wells:
As Wells has come to see it, a disastrous restaurant is newsworthy only if it has a pedigree or commercial might. The mom-and-pop catastrophe can be overlooked. “I shouldn’t be having to explain to people what the place is,” he said. This reasoning seems civil, though, as Wells acknowledged, it means that his pans focus disproportionately on restaurants that have corporate siblings. Indeed, hype is often his direct or indirect subject. Of the fifteen no-star evaluations in his first four years, only two went to restaurants that weren’t part of a group of restaurants.
Parker continues: “There are restaurants that exist to have four Times stars. With fewer, they become a kind of paradox.” And when it comes to pop culture, The A.V. Club is the equivalent of a four-star restaurant. It was writing deeply felt, outrageously long essays on film and television before the longread was even a thing—in part, I suspect, because of its historical connection to The Onion: because it was often mistaken for a parody site, it always felt the need to prove its fundamental seriousness, which it did, over and over again. If Supper Club had launched with one of the ambitious, richly reported pieces that Pang has written elsewhere, the response might have been very different. Listicles might make more economic sense, and they can be fun if done right, but The A.V. Club has defined itself as a place where obsessively detailed and personal pop culture writing has a home. That’s what Supper Club should be. And until it is, we shouldn’t be surprised if readers have trouble swallowing it.
“It’s the rare writer who cannot have sentences lifted from his work,” Norman Mailer once wrote. What he meant is that if a reviewer is eager to find something to mock, dismiss, or pick apart, any interesting book will provide plenty of ammunition. On a simple level of craft, it’s hard for most authors to sustain a high pitch of technical proficiency in every line, and if you want to make a novelist seem boring or ordinary, you can just focus on the sentences that fall between the high points. In his famously savage takedown of Thomas Harris’s Hannibal, Martin Amis quotes another reviewer who raved: “There is not a single ugly or dead sentence.” Amis then acidly observes:
Hannibal is a genre novel, and all genre novels contain dead sentences—unless you feel the throb of life in such periods as “Tommaso put the lid back on the cooler” or “Eric Pickford answered” or “Pazzi worked like a man possessed” or “Margot laughed in spite of herself” or “Bob Sneed broke the silence.”
Amis knows that this is a cheap shot, and he glories in it. But it isn’t so different from what critics do when they list the awful sentences from a current bestseller or nominate lines for the Bad Sex in Fiction Award. I laugh at this along with anyone else, but I also wince a little, because there are few authors alive who aren’t vulnerable to that sort of treatment. As G.K. Chesterton pointed out: “You could compile the worst book in the world entirely out of selected passages from the best writers in the world.”
This is even more true of authors who take considerable stylistic or thematic risks, which usually result in individual sentences that seem crazy or, worse, silly. The fear of seeming ridiculous is what prevents a lot of writers from taking chances, and it isn’t always unjustified. An ambitious novel opens itself up to savaging from all sides, precisely because it provides so much material that can be turned against the author when taken out of context. And it doesn’t need to be malicious, either: even objective or actively sympathetic critics can be seduced by the ease with which a writer can be excerpted to make a case. I’ve become increasingly daunted by the prospect of distilling the work of Robert A. Heinlein, for example, because his career was so long, varied, and often intentionally provocative that you can find sentences to support any argument about him that you want to make. (It doesn’t help that his politics evolved drastically over time, and they probably would have undergone several more transformations if he had lived for longer.) This isn’t to say that his opinions aren’t a fair target for criticism, but any reasonable understanding of who Heinlein was and what he believed—which I’m still trying to sort out for myself—can’t be conveyed by a handful of cherry-picked quotations. Literary biography is useful primarily to the extent that it can lay out a writer’s life in an orderly fashion, providing a frame that tells us something about the work that we wouldn’t know by encountering it out of order. But even that involves a process of selection, as does everything else about a biography. The biographer’s project isn’t essentially different from that of a working critic or reviewer: it just takes place on a larger scale.
And it’s worth noting that prolific critics themselves are particularly susceptible to this kind of treatment. When Renata Adler described Pauline Kael’s output as “not simply, jarringly, piece by piece, line by line, and without interruption, worthless,” any devotee of Kael’s work had to disagree—but it was also impossible to deny that there was plenty of evidence for the prosecution. If you’re determined to hate Roger Ebert, you just have to search for the reviews in which his opinions, written on deadline, weren’t sufficiently in line with the conclusions reached by posterity, as when he unforgivably gave only three stars to The Godfather Part II. And there isn’t a single page in the work of David Thomson, who is probably the most interesting movie critic who ever lived, that couldn’t be mined for outrageous, idiotic, or infuriating statements. I still remember a review on The A.V. Club of How to Watch a Movie that quoted lines like this:
Tell me a story, we beg as children, while wanting so many other things. Story will put off sleep (or extinction) and the child’s organism hardly trusts the habit of waking yet.
You came into this book under deceptive promises (mine) and false hopes (yours). You believed we might make decisive progress in the matter of how to watch a movie. So be it, but this was a ruse to make you look at life.
The reviewer quoted these sentences as examples of the book’s deficiencies, and they were duly excoriated in the comments. But anyone who has really read Thomson knows that such statements are part of the package, and removing them would also deny most of what makes him so fun, perverse, and valuable.
So what’s a responsible reviewer to do? We could start, maybe, by quoting longer or complete sections, rather than sentences in isolation, and by providing more context when we offer up just a line or two. We can also respect an author’s feelings, explicit or otherwise, about what sections are actually important. In the passage I mentioned at the beginning of this post, which is about John Updike, Mailer goes on to quote a few sentences from Rabbit, Run, and he adds:
The first quotation is taken from the first five sentences of the book, the second is on the next-to-last page, and the third is nothing less than the last three sentences of the novel. The beginning and end of a novel are usually worked over. They are the index to taste in the writer.
That’s a pretty good rule, and it ensures that the critic is discussing something reasonably close to what the writer intended to say. Best of all, we can approach the problem of excerpting with a kind of joy in the hunt: the search for the slice of a work that will stand as a synecdoche of the whole. In the book U & I, which is also about Updike, Nicholson Baker writes about the “standardized ID phrase” and “the aphoristic consensus” and “the jingle we will have to fight past at some point in the future” to see a writer clearly again, just as fans of Joyce have to do their best to forget about “the ineluctable modality of the visible” and “yes I said yes I will Yes.” For a living author, that repository of familiar quotations is constantly in flux, and reviewers might approach their work with a greater sense of responsibility if they realized that they were playing a part in creating it—one tiny excerpt at a time.
It might seem like a stretch, or at least premature, to compare Lin-Manuel Miranda to Shakespeare, but after playing Hamilton nonstop over the last couple of months, I can’t put the notion away. What the two of them have in common, aside from a readiness to plunder history as material for drama and a fondness for blatant anachronism, is their density and rapidity. When we try to figure out what sets Shakespeare apart from other playwrights, we’re likely to think first of the way his ideas and images succeed each other so quickly that they run the risk of turning into mixed metaphors, and how both characters and scenes can turn on a dime to introduce a new tone or register. Hamilton, at its best, has many of the same qualities. Hip-hop is capable of conveying more information per line than just about any other idiom, and Miranda exploits it to the fullest. But what really strikes me, after repeated listens, is his ability to move swiftly from one character, subplot, or theme to another, often in the course of a single song. For a musical to accomplish as much in two and a half hours as Hamilton does, it has to nail all the transitions. My favorite example is the one in the first act that carries us from “Helpless” to “Satisfied” to “Wait For It,” or from Hamilton’s courtship of Eliza to Angelica’s unrequited love to checking in with Burr in the space of about fifteen minutes. I’ve listened to that sequence multiple times, marveling at how all the pieces fit together, and it never even occurred to me to wonder how it was constructed until I’d internalized it. Which may be the most Shakespearean attribute of all.
But this doesn’t happen by accident. A few days ago, Miranda tweeted out a picture of his notebook for the incomparable “My Shot,” along with the dry comment: “Songs take time.” Like most musicals, Hamilton was refined and restructured in workshops—many recordings of which are available online—and continued to evolve between its Off-Broadway and Broadway incarnations. In theater, revision has a way of taking place in plain sight: it’s impossible to know the impact of any changes until you’ve seen them in performance, and the feedback you get in real time naturally informs the next iteration. Hamilton was developed under greater scrutiny than Miranda’s In the Heights, which was the product of five years of readings and workshops, and its evolution was constrained by what its creator has called “these weirdly visible benchmarks,” including the American Songbook Series at Lincoln Center and a high-profile presentation at Vassar. Still, much of the revision took place in Miranda’s head, a balance between public and private revision that feels Shakespearean in itself, if only because Shakespeare was better at it than anybody else. He clearly understood the creative utility of rehearsal and collaboration with a specific cast of actors, and he was cheerfully willing to rework a play based on how the audience responded. But we also know, based on surviving works like the unfinished Timon of Athens, that he revised the plays carefully on his own, roughing out large blocks of the action in prose form before going back to transform it into verse. We don’t have any of his manuscripts, but I suspect that they looked a lot like Miranda’s, and that he was ready to rearrange scenes and drop entire sequences to streamline and unify the whole. Like Hamilton, and Miranda, Shakespeare wrote like he was running out of time.
As it happens, I got to thinking about all this shortly after reading a description of a very different creative experience, in the form of playwright Glen Berger’s interview with The A.V. Club about the doomed production of Spider-Man: Turn Off the Dark. The whole thing is worth checking out, and I’ll probably end up reading Berger’s book Song of Spider-Man to get the full version. But this is the detail that stuck in my head the most:
Almost inevitably during previews for a Broadway musical, several songs are cut and several new songs are written. Sometimes, the new songs are the best songs. There’s the famous story of “Comedy Tonight” for A Funny Thing Happened On The Way To The Forum being written out of town. There are hundreds of other examples of songs being changed and scenes rearranged.
From our first preview to the day Julie [Taymor] left the show seven months later, not a single song was cut, which is kind of indicative of the rigidity that was setting in for one camp of the creators who felt like, “No, we came up with the perfect show. We just need to find a way to render it competently.”
A lot of things went wrong with Spider-Man, but this inability to revise—a process that might have allowed the show to address its other problems—seems like a fatal flaw. As books like Stephen Sondheim’s Finishing the Hat make clear, a musical can undergo drastic transformations between its earliest conception and opening night, and the absence of that process here is what made the difference between a troubled production and a debacle.
But it’s also hard to blame Taymor, Berger, or any other individual involved when you consider the conditions under which the musical was produced, which made it hard for any kind of meaningful revision to occur at all. Even in theater, revision works best when it’s essentially private: following any train of thought to its logical conclusion requires the security that only solitude provides. A writer or director is less likely to learn from mistakes or test out the alternatives when the process is occurring in plain sight. From the very beginning, the creators of Spider-Man never had a moment of solitary reflection: it was a project that was born in a corporate boardroom and jumped immediately to Broadway. As Berger says:
Our biggest blunder was that we only had one workshop, and then we went into rehearsals for the Broadway run of the show. I’m working on another bound-for-Broadway musical now, and we’ve already had four workshops. Every time you hear, “Oh, we’re going to do another workshop,” the knee-jerk reaction is, “We don’t need any more. We can just go straight into rehearsals,” but we learn some new things every time. They provide you the opportunity to get rid of stuff that doesn’t work, songs that fall flat that you thought were amazing, or totally rewrite scenes. I’m all for workshops now.
It isn’t impossible to revise properly under conditions of extreme scrutiny—Pixar does a pretty good job of it—but it requires a degree of bravery that wasn’t evident here. And I’m curious to see how Miranda handles similar pressure, now that he occupies the position of an artist in residence at Disney, where Spider-Man also resides. Fame opens doors and creates possibilities, but real revision can only occur in the sessions of sweet silent thought.
Note: I’m heading out this afternoon for Kansas City, Missouri, where I’ll be taking part in programming over the next four days at the World Science Fiction Convention. Hope to see some of you there!
Note: I’m taking a break for the next few days, so I’ll be republishing some of my favorite posts from earlier in this blog’s run. This post originally appeared, in a slightly different form, on March 24, 2015.
Years ago, my online browsing habits followed a predictable routine. Each morning, after checking my email, I’d click over to read the headlines on the New York Times, then The A.V. Club, followed by whatever blogger, probably Andrew Sullivan, I was following at the moment. Although I didn’t think of it in those terms, in each case, I was responding to a brand: I trusted these sites to provide me with a few minutes of engaging content, and although I didn’t know exactly what would be posted each day, there were certain intangibles—a voice, a writer’s point of view, a stamp of quality—that assured me that a visit there would be worth my time. These days, my regimen looks very different. I still tune into the New York Times and The A.V. Club for old time’s sake, but the bulk of my browsing is done through Reddit or Digg. I don’t visit a lot of sites specifically for the content they provide; instead, I trust in aggregators, whether crowdsourced by upvotes or curated more deliberately, to direct my attention to whatever is worth reading from one hour to the next. In many cases, when I click through to a story, I don’t even know where the link goes, and I’ve lost count of the times I’ve told my wife about an article I saw “somewhere on Digg.” And once I’m done with that one spotlighted piece, I’m not particularly likely to visit the site later to see what else it might have to offer.
As a content provider—which is a term I hate—in my own right, the pattern of consumption that I see in myself chills me to the bone. Yet it represents a rational, if subconscious, choice. I’m simply betting that I’ll have a better time by trusting the aggregators, which admittedly are brands in themselves, rather than the brand of a specific writer or publication. Individual authors or sites can be erratic; on slow news days, even the Times can seem like a bore. But an aggregator that sweeps the entire web for material will always come up with something diverting, and I’m not tied down to any one source. After all, even the most consistently reliable reads can lose their appeal over time. I started visiting Reddit more regularly during the last presidential election, for instance, after I got tired of Andrew Sullivan’s increasingly panicky and hysterical tone: reading his blog turned into a chore. And I became less active on The A.V. Club, particularly as a commenter, after much of its core staff decamped for The Dissolve and Vox, although I still read certain features faithfully. To be honest, it’s been years since a new site grabbed my attention to the point where I wanted to read it every day. And I’m not alone: the problem of retaining loyalty to brands is the single greatest challenge confronting journalism of all kinds, even as musical artists deal with much the same issues on Spotify and Pandora.
Faced with a future driven by aggregators, which destroy the old business models for distributing content, most media companies have turned to one of two solutions. Either you provide content in a form that resists aggregation while still attracting an audience, or you nurture a voice or personality compelling enough to draw readers back on a regular basis. Both have their problems. At first glance, the two kinds of content that might seem immune to aggregation are television shows and podcasts, but that’s more of a structural quirk. From a network’s perspective, the real brand at stake isn’t Community or Parks and Recreation but NBC itself, and with the proliferation of viewing and streaming options, we’re much less likely to tune in to whatever the network wants to show us on Thursday night. And podcasts are simply awaiting the appearance of a reliable aggregator that will cull the day’s best episodes, or, even more likely, the best two- or three-minute snippets. Once that happens, we’re likely to start listening to podcasts the way we already consume written content, as a kind of smorgasbord of diversion that isn’t tied down to any one creator. As for personalities, they’re great when you can get them, but they’re excruciatingly rare. Talk radio is a fantastic example: the fact that maybe half a dozen hosts—nearly all of them men—have divided the radio audience between them for decades points to how few can really do it.
And there’s no reason to expect other kinds of content to be any different. Every author hopes that his voice will be distinctive enough to draw in people who simply want to hear everything he says, but there aren’t many such writers left: David Carr, who passed away over a year ago, was one of the last. Even I’m mostly reconciled to the fact that readership on this blog is largely dependent on factors outside my control. My single busiest day occurred after one of my posts appeared on the front page of Reddit, but as I’ve noted elsewhere, after a heady period in which a mass of eyeballs equivalent to the population of Cincinnati came to visit, few, if any, stuck around to read more. I’ve slowly acquired a coterie of regular readers, but page views have remained more or less fixed for a long time, and my only spikes in traffic come when a post is linked somewhere else. I do what I can to keep the level of quality consistent, and if nothing else, I don’t lack for productivity. All I can really do is keep writing, throw out ideas, and hope that a few of them stick, which isn’t all that different from what the major media companies are doing on a much larger scale. (Although you can find lessons in unexpected places. One brand that caught my eye—in the form of a shelf of musty books, most of them long out of print—was the Bollingen Foundation, which I still think is a fascinating, if not entirely useful, counterexample.) But I can’t help but feel that there must be a better way.