Alec Nevala-Lee

Thoughts on art, creativity, and the writing life.

Fire and Fury

I’ve been thinking a lot recently about Brian De Palma’s horror movie The Fury, which celebrated its fortieth anniversary earlier this year. More specifically, I’ve been thinking about Pauline Kael’s review, which is one of the pieces included in her enormous collection For Keeps. I’ve read that book endlessly for two decades now, and as a result, The Fury is one of those films from the late seventies—like Philip Kaufman’s Invasion of the Body Snatchers—that endure in my memory mostly as a few paragraphs of Kael’s prose. In particular, I often find myself remembering these lines:

De Palma is the reverse side of the coin from Spielberg. Close Encounters gives us the comedy of hope. The Fury is the comedy of cruelly dashed hope. With Spielberg, what happens is so much better than you dared hope that you have to laugh; with De Palma, it’s so much worse than you feared that you have to laugh.

That sums up how I feel about a lot of things these days, when everything is consistently worse than I could have imagined, although laughter usually feels very far away. (Another line from Kael inadvertently points to the danger of identifying ourselves with our political heroes: “De Palma builds up our identification with the very characters who will be destroyed, or become destroyers, and some people identified so strongly with Carrie that they couldn’t laugh—they felt hurt and betrayed.”) And her description of one pivotal scene, which appears in her review of Dressed to Kill, gets closer than just about anything else to my memories of the last presidential election: “There’s nothing here to match the floating, poetic horror of the slowed-down sequence in which Amy Irving and Carrie Snodgress are running to freedom: it’s as if each of them and each of the other people on the street were in a different time frame, and Carrie Snodgress’s face is full of happiness just as she’s flung over the hood of a car.”

The Fury seems to have been largely forgotten by mainstream audiences, but references to it pop up in works ranging from Looper to Stranger Things, and I suspect that it might be due for a reappraisal. It’s about two teenagers, a boy and a girl, who have never met, but who share a psychic connection. As Kael notes, they’re “superior beings” who might have been prophets or healers in an earlier age, but now they’ve been targeted by our “corrupt government…which seeks to use them for espionage, as secret weapons.” Reading this now, I’m slightly reminded of our current administration’s unapologetic willingness to use vulnerable families and children as political pawns, but that isn’t really the point. What interests me more is how De Palma’s love of violent imagery undercuts the whole moral arc of the movie. I might call this a problem, except that it isn’t—it’s a recurrent feature of his work that resonated uneasily with viewers who were struggling to integrate the specter of institutionalized violence into their everyday lives. (In a later essay, Kael wrote of acquaintances who resisted such movies because of their association with the “guilty mess” of the recently concluded war: “There’s a righteousness in their tone when they say they don’t like violence; I get the feeling that I’m being told that my urging them to see The Fury means that I’ll be responsible if there’s another Vietnam.”) And it’s especially striking in this movie, which for much of its length is supposedly about an attempt to escape this cycle of vengeance. Of the two psychic teens, Robin, played by Andrew Stevens, eventually succumbs to it, while Gillian, played by Amy Irving, fights it for as long as she can. As Kael explains: “Both Gillian and Robin have the power to zap people with their minds. Gillian is trying to cling to her sanity—she doesn’t want to hurt anyone. And, knowing that her power is out of her conscious control, she’s terrified of her own secret rages.”

And it’s hard for me to read this passage now without connecting it to the ongoing discussion over women’s anger, in which the word “fury” occurs with surprising frequency. Here’s the journalist Rebecca Traister writing in the New York Times, in an essay adapted from her bestselling book Good and Mad:

Fury was a tool to be marshaled by men like Judge Kavanaugh and Senator Graham, in defense of their own claims to political, legal, public power. Fury was a weapon that had not been made available to the woman who had reason to question those claims…Most of the time, female anger is discouraged, repressed, ignored, swallowed. Or transformed into something more palatable, and less recognizable as fury—something like tears. When women are truly livid, they often weep…This political moment has provoked a period in which more and more women have been in no mood to dress their fury up as anything other than raw and burning rage.

Traister’s article was headlined: “Fury Is a Political Weapon. And Women Need to Wield It.” And if you were so inclined, you could take The Fury as an extended metaphor for the issue that Casey Cep raises in her recent roundup of books on the subject in The New Yorker: “A major problem with anger is that some people are allowed to express it while others are not.” In the film, Gillian spends most of the running time resisting her violent urges, while her male psychic twin gives in to them, and the climax—which is the only scene that most viewers remember—hinges on her embrace of the rage that Robin passed to her at the moment of his death.

This brings us to Childress, the villain played by John Cassavetes, whose demise Kael hyperbolically describes as “the greatest finish for any villain ever.” A few paragraphs earlier, Kael writes of this scene:

This is where De Palma shows his evil grin, because we are implicated in this murderousness: we want it, just as we wanted to see the bitchy Chris get hers in Carrie. Cassavetes is an ideal villain (as he was in Rosemary’s Baby)—sullenly indifferent to anything but his own interests. He’s so right for Childress that one regrets that there wasn’t a real writer around to match his gloomy, viscous nastiness.

“Gloomy, viscous nastiness” might ring a bell today, and Childress’s death—Gillian literally blows him up with her mind—feels like the embodiment of our impulses for punishment, revenge, and retribution. It’s stunning how quickly the movie discards Gillian’s entire character arc for the sake of this moment, but what makes the ending truly memorable is what happens next, which is nothing. Childress explodes, and the film just ends, because it has nothing left to show us. That works well enough in a movie, but in real life, we have to face the problem of what Brittney Cooper, whose new book explicitly calls rage a superpower, sums up as “what kind of world we want to see, not just what kind of things we want to get rid of.” In her article in The New Yorker, Cep refers to the philosopher and classicist Martha Nussbaum’s treatment of the Furies themselves, who are transformed at the end of the Oresteia into the Eumenides, “beautiful creatures that serve justice rather than pursue cruelty.” It isn’t clear how this transformation takes place, and De Palma, typically, sidesteps it entirely. But if we can’t imagine anything beyond cathartic vengeance, we’re left with an ending closer to what Kael writes of Dressed to Kill: “The spell isn’t broken and [De Palma] doesn’t fully resolve our fear. He’s saying that even after the horror has been explained, it stays with you—the nightmare never ends.”

Written by nevalalee

October 30, 2018 at 9:24 am

Don’t look now

Note: This post discusses elements of the series finale of HBO’s Sharp Objects.

It’s been almost twenty years since I first saw Don’t Look Now at a revival screening at the Brattle Theatre in Cambridge, Massachusetts. I haven’t seen it again, but I’ve never gotten over it, and it remains one of my personal cinematic touchstones. (My novelette “Kawataro,” which is largely an homage to Japanese horror, includes a nod to its most famous visual conceit.) And it’s impossible to convey its power without revealing its ending, which I’m afraid I’ll need to do here. For most of its length, Nicholas Roeg’s movie is an evocative supernatural mystery set in Venice, less about conventional scares than about what the film critic Pauline Kael describes as its “unnerving cold ominousness,” with Donald Sutherland and Julie Christie as a husband and wife mourning the recent drowning death of their daughter. Throughout the movie, Sutherland glimpses a childlike figure in a red raincoat, which his daughter was wearing when she died. Finally, in the film’s closing minutes, he catches up with what he thinks is her ghost, only to find what Kael calls “a hideous joke of nature—their own child become a dwarf monstrosity.” A wrinkled crone in red, who is evidently just a serial killer, slashes him to death, in one of the great shock endings in the history of movies. Kael wasn’t convinced by it, but it clearly affected her as much as it did me:

The final kicker is predictable, and strangely flat, because it hasn’t been made to matter to us; fear is decorative, and there’s nothing to care about in this worldly, artificial movie. Yet at a mystery level the movie can still affect the viewer; even the silliest ghost stories can. It’s not that I’m not impressionable; I’m just not as proud of it as some people are.

I had much the same reaction to the final scene of Sharp Objects, a prestige miniseries that I’ve been watching for two months now with growing impatience, only to have my feelings turn at the very end into a grudging respect. It’s a strange, frustrating, sometimes confusing show that establishes Jean-Marc Vallée, coming off the huge success of Big Little Lies, as one of our major directors—he’s got more pure craft at his disposal than just about anyone else working in television. (I don’t remember much about The Young Victoria, but it was clear even then that he was the real thing.) The series is endlessly clever in its production design, costuming, and music, and the actors do the best that they can with the material at hand. The first trailer led me to expect something heightened and Gothic, with a duel of wills between daughter Camille (Amy Adams) and mother Adora (Patricia Clarkson), but the show itself spends most of its length going for something sadder and more wounded, and I don’t think it entirely succeeds. Like Big Little Lies, it exploits the structure of a mystery, but it isn’t particularly interested in furnishing clues or even basic information, and there are long stretches when it seems to forget about the two teenage girls who have been murdered in Camille’s haunted hometown. Camille is a bad reporter and a lousier investigator, which wouldn’t matter if this were really a psychological study. Yet the series isn’t all that interested in delving into its characters, either, apart from their gorgeously lit surfaces. For most of its eight episodes, it hits the same handful of notes, and by the end, we don’t have much more insight into Camille, Adora, or anybody else than we did after the pilot. It has a few brilliant visual notions, but very little in the way of real ideas.

Then we come to the end, or the last minute of the finale, which I think is objectively staggering. (I’m not going to name the killer, but if you haven’t seen the show yet, you might want to skip this entire paragraph.) After an extended fake denouement that should serve as a warning sign in itself, Camille stumbles across the truth, in the form of a few gruesome puzzle pieces that have been hiding in plain sight, followed by a smash cut to black. Aside from an unsettling montage of images during the closing credits, that’s it. It turns the entire series into the equivalent of a shaggy dog story, or an elephant joke, and I loved it—it’s a gigantic “screw you” to the audience that rises to Hitchcockian levels of bad taste. Yet I’m not entirely sure whether it redeems the rest of the series. When I replay Sharp Objects in my head, it seems to start out as a mystery, transition into a simulacrum of a character study, and then reveal at the last second that it was only messing with us. If it had been two hours long, it would have been very effective. But I don’t know if it works for a television series, even with a limited run, in which the episodes in the protracted second act can only deliver one tone at a time. If this were a movie, I’d want to see it again, but I don’t think I’ll ever revisit the dusty middle innings of Sharp Objects, much of which was only marking time. As a confidence game, it works all too well, to the point that many critics thought that it was onto something profound. For some viewers, maybe it was. But I’d be curious to hear how they come to terms with that ending, which cuts so savagely away from anything like human resolution that it makes a mockery of the notion that this was ever about the characters at all.

And it works, at least to a point. If nothing else, I’ve been thinking about it ever since—as Kael says, I’m no less impressionable than anyone else, even if I’m not proud of it. But I’d also argue that the conventions of the prestige drama, which made this project possible in the first place, interfere with its ultimate impact. There’s no particular reason why Sharp Objects had to be eight episodes long, and you could make a strong case that it would work better if the whole thing, like Don’t Look Now, were experienced in a single sitting. In the end, I liked it enough to want to see a shorter version, which might feel like a masterpiece. In a funny way, Sharp Objects represents the opposite problem from Gone Girl, another fascinating project that Gillian Flynn adapted from her own novel. That movie was a superb Hitchcockian toy that stumbled when it asked us to take it seriously at the end, while Sharp Objects is a superficially serious show that exposes itself in its final seconds as a twisted game. I prefer the latter, and that final shock is delivered with a newfound professionalism that promises great things from both Flynn and Vallée. (It certainly ends on a higher note than the first season of Big Little Lies, which also closed with an inexplicable ending that made much of the show seem meaningless, except not in a good way.) But the interminable central section makes me suspect that the creators were so seduced by Amy Adams—“so extraordinarily beautiful yet not adding up right for ordinary beauty,” as Kael said of Julie Christie—that they forgot what they were supposed to be doing. Kael ends her review on a typically inscrutable note: “It’s like an entertainment for bomb victims: nobody expects any real pleasure from it.” But what I might remember most about Sharp Objects is that sick tingle of pleasure that it offered me at the very end, just after I’d given up looking for it.

Written by nevalalee

August 27, 2018 at 8:47 am

American Stories #2: Citizen Kane

Note: As we enter what Joe Scarborough justifiably expects to be “the most consequential political year of our lives,” I’m looking back at ten works of art—books, film, television, and music—that deserve to be reexamined in light of where America stands today. You can find the earlier installments here.

In his essay collection America in the Dark, the film critic David Thomson writes of Citizen Kane, which briefly went under the portentous working title American:

Citizen Kane grows with every year as America comes to resemble it. Kane is the willful success who tries to transcend external standards, and many plain Americans know his pent-up fury at lonely liberty. The film absorbs praise and criticism, unabashed by being voted the best ever made or by Pauline Kael’s skillful reassessment of its rather nasty cleverness. Perhaps both those claims are valid. The greatest film may be cunning, slick, and meretricious.

It might be even more accurate to say that the greatest American movie ever made needs to be cunning, slick, and meretricious, at least if it’s going to be true to the values of its country. Kane is “a shallow masterpiece,” as Kael famously put it, but it could hardly be anything else. (Just a few years later, Kael expressed a similar sentiment about Norman Mailer: “I think he’s our greatest writer. And what is unfortunate is that our greatest writer should be a bum.”) It’s a masterwork of genial fakery by and about a genial faker—Susan Alexander asks Kane at their first meeting if he’s a professional magician—and its ability to spin blatant artifice and sleight of hand into something unbearably moving goes a long way toward explaining why it was a favorite movie of men as different as Charles Schulz, L. Ron Hubbard, and Donald Trump.

And the most instructive aspect of Kane in these troubled times is how completely it deceives even its fans, including me. Its portrait of a man modeled on William Randolph Hearst is far more ambiguous than it was ever intended to be, because we’re distracted throughout by our fondness for the young Welles. He’s visible all too briefly in the early sequences at the Inquirer, and he winks at us through his makeup as an older man. As a result, the film that Hearst wanted to destroy turned out to be the best thing that could have happened to his legacy—it makes him far more interesting and likable than he ever was. The same factor tends to obscure the movie’s politics, as Kael wrote in the early seventies:

When Welles was young—he was twenty-five when the film opened—he used to be accused of “excessive showmanship,” but the same young audiences who now reject “theatre” respond innocently and wholeheartedly to the most unabashed tricks of theatre—and of early radio plays—in Citizen Kane. At some campus showings, they react so gullibly that when Kane makes a demagogic speech about “the underprivileged,” stray students will applaud enthusiastically, and a shout of “Right on!” may be heard.

Kane is a master manipulator, but so was Welles, and our love for all that this film represents shouldn’t blind us to how the same tricks can be turned to more insidious ends. As Kane says to poor Mr. Carter, shortly after taking over a New York newspaper at the age of twenty-five, just as Jared Kushner once did: “If the headline is big enough, it makes the news big enough.” Hearst understood this. And so does Steve Bannon.

Written by nevalalee

January 2, 2018 at 9:00 am

Shoot the piano player

In his flawed but occasionally fascinating book Bambi vs. Godzilla, the playwright and director David Mamet spends a chapter discussing the concept of aesthetic distance, which is violated whenever viewers remember that they’re simply watching a movie. Mamet provides a memorable example:

An actor portrays a pianist. The actor sits down to play, and the camera moves, without a cut, to his hands, to assure us, the audience, that he is actually playing. The filmmakers, we see, have taken pains to show the viewers that no trickery has occurred, but in so doing, they have taught us only that the actor portraying the part can actually play the piano. This addresses a concern that we did not have. We never wondered if the actor could actually play the piano. We accepted the storyteller’s assurances that the character could play the piano, as we found such acceptance naturally essential to our understanding of the story.

Mamet imagines a hypothetical dialogue between the director and the audience: “I’m going to tell you a story about a pianist.” “Oh, good: I wonder what happens to her!” “But first, before I do, I will take pains to reassure you that the actor you see portraying the hero can actually play the piano.” And he concludes:

We didn’t care till the filmmaker brought it up, at which point we realized that, rather than being told a story, we were being shown a demonstration. We took off our “audience” hat and put on our “judge” hat. We judged the demonstration conclusive but, in so doing, got yanked right out of the drama. The aesthetic distance had been violated.

Let’s table this for now, and turn to a recent article in The Atlantic titled “The Remarkable Laziness of Woody Allen.” To prosecute the case laid out in the headline, the film critic Christopher Orr draws on Eric Lax’s new book Start to Finish: Woody Allen and the Art of Moviemaking, which describes the making of Irrational Man—a movie that nobody saw, which doesn’t make the book sound any less interesting. For Orr, however, it’s “an indictment framed as an encomium,” and he lists what he evidently sees as devastating charges:

Allen’s editor sometimes has to live with technical imperfections in the footage because he hasn’t shot enough takes for her to choose from…As for the shoot itself, Allen has confessed, “I don’t do any preparation. I don’t do any rehearsals. Most of the times I don’t even know what we’re going to shoot.” Indeed, Allen rarely has any conversations whatsoever with his actors before they show up on set…In addition to limiting the number of takes on any given shot, he strongly prefers “master shots”—those that capture an entire scene from one angle—over multiple shots that would subsequently need to be edited together.

For another filmmaker, all of these qualities might be seen as strengths, but that’s beside the point. Here’s the relevant passage:

The minimal commitment that appearing in an Allen film entails is a highly relevant consideration for a time-strapped actor. Lax himself notes the contrast with Mike Leigh—another director of small, art-house films—who rehearses his actors for weeks before shooting even starts. For Damien Chazelle’s La La Land, Stone and her co-star, Ryan Gosling, rehearsed for four months before the cameras rolled. Among other chores, they practiced singing, dancing, and, in Gosling’s case, piano. The fact that Stone’s Irrational Man character plays piano is less central to that movie’s plot, but Allen didn’t expect her even to fake it. He simply shot her recital with the piano blocking her hands.

So do we shoot the piano player’s hands or not? The boring answer, unfortunately, is that it depends—but perhaps we can dig a little deeper. It seems safe to say that it would be impossible to make The Pianist with Adrien Brody’s hands conveniently blocked from view for the whole movie. But I’m equally confident that it doesn’t matter the slightest bit in Irrational Man, which I haven’t seen, whether or not Emma Stone is really playing the piano. La La Land is a slightly trickier case. It would be hard to envision it without at least a few shots of Ryan Gosling playing the piano, and Damien Chazelle isn’t above indulging in exactly the camera move that Mamet decries, tilting down to reassure us that it’s really Gosling playing. Yet the fact that we’re even talking about this gets down to a fundamental problem with the movie, which I mostly like and admire. Its characters are archetypes who draw much of their energy from the auras of the actors who play them, and in the case of Stone, who is luminous and moving as an aspiring actress suffering through an endless series of auditions, the film gets a lot of mileage from our knowledge that she’s been in the same situation. Gosling, to put it mildly, has never been an aspiring jazz pianist. This shouldn’t even matter, but every time we see him playing the piano, he briefly ceases to be a struggling artist and becomes a handsome movie star who has spent three months learning to fake it. And I suspect that the movie would have been elevated immensely by casting a real musician. (This ties into another issue with La La Land, which is that it resorts to telling us that its characters deserve to be stars, rather than showing it to us in overwhelming terms through Gosling and Stone’s singing and dancing, which is merely passable. It’s in sharp contrast to Martin Scorsese’s New York, New York, one of its clear spiritual predecessors, in which it’s impossible to watch Liza Minnelli without becoming convinced that she ought to be the biggest star in the world. And when you think of how quirky, repellent, and individual Minnelli and Robert De Niro are allowed to be in that film, La La Land starts to look a little schematic.)

And I don’t think I’m overstating it when I argue that the seemingly minor dilemma of whether to show the piano player’s hands shades into the larger problem of how much we expect our actors to really be what they pretend that they are. I don’t think any less of Bill Murray because he had to employ Terry Fryer as a “hand double” for his piano solo in Groundhog Day, and I don’t mind that the most famous movie piano player of them all—Dooley Wilson in Casablanca—was faking it. And there’s no question that you’re taken out of the movie a little when you see Richard Chamberlain playing Tchaikovsky’s Piano Concerto No. 1 in The Music Lovers, however impressive it might be. (I’m willing to forgive De Niro learning to mime the saxophone for New York, New York, if only because it’s hard to imagine how it would look otherwise. The piano is just about the only instrument for which the choice can plausibly be left to the director’s discretion. And in his article, revealingly, Orr fails to mention that none other than Woody Allen was insistent that Sean Penn learn the guitar for Sweet and Lowdown. As Allen himself might say, it depends.) On some level, we respond to an actor playing the piano much like the fans of Doctor Zhivago, whom Pauline Kael devastatingly called “the same sort of people who are delighted when a stage set has running water or a painted horse looks real enough to ride.” But it can serve the story as much as it can detract from it, and the hard part is knowing how and when. As one director notes:

Anybody can learn how to play the piano. For some people it will be very, very difficult—but they can learn it. There’s almost no one who can’t learn to play the piano. There’s a wide range in the middle, of people who can play the piano with various degrees of skill; a very, very narrow band at the top, of people who can play brilliantly and build upon a technical skill to create great art. The same thing is true of cinematography and sound mixing. Just technical skills. Directing is just a technical skill.

This is Mamet writing in On Directing Film, which is possibly the single best work on storytelling I know. You might not believe him when he says that directing is “just a technical skill,” but if you do, there’s a simple way to test if you have it. Do you show the piano player’s hands? If you know the right answer for every scene, you just might be a director.

The conveyor belt

For all the endless discussion of various aspects of Twin Peaks, one quality that sometimes feels neglected is the incongruous fact that it had one of the most attractive casts in television history. In that respect—and maybe in that one alone—it was like just about every other series that ever existed. From prestige dramas to reality shows to local newscasts, the story of television has inescapably been that of beautiful men and women on camera. A show like The Hills, which was one of my guilty pleasures, seemed to be consciously trying to see how long it could coast on surface beauty alone, and nearly every series, ambitious or otherwise, has used the attractiveness of its actors as a commercial or artistic strategy. (In one of the commentary tracks on The Simpsons, a producer describes how a network executive might ask indirectly about the looks of the cast of a sitcom: “So how are we doing aesthetically?”) If this seemed even more pronounced on Twin Peaks, it was partially because, like Mad Men, it took its conventionally glamorous actors into dark, unpredictable places, and also because David Lynch had an eye for a certain kind of beauty, both male and female, that was more distinctive than that of the usual soap opera star. He’s continued this trend in the third season, which has been populated so far by such striking presences as Chrysta Bell, Ben Rosenfield, and Madeline Zima, and last night’s episode features an extended, very funny scene between a delighted Gordon Cole and a character played by Bérénice Marlohe, who, with her red lipstick and “très chic” spike heels, might be the platonic ideal of his type.

Lynch isn’t the first director to display a preference for actors, particularly women, with a very specific look—although he’s thankfully never taken it as far as his precursor Alfred Hitchcock did. And the notion that a film or television series can consist of little more than following around two beautiful people with a camera has a long and honorable history. My two favorite movies of my lifetime, Blue Velvet and Chungking Express, both understand this implicitly. It’s fair to say that the second half of the latter film would be far less watchable if it didn’t involve Tony Leung and Faye Wong, two of the most attractive people in the world, and Wong Kar-Wai, like so many filmmakers before him, uses it as a psychological hook to take us into strange, funny, romantic places. Blue Velvet is a much darker work, but it employs a similar lure, with the actors made up to look like illustrations of themselves. In a Time cover story on Lynch from the early nineties, Richard Corliss writes of Kyle MacLachlan’s face: “It is a startling visage, as pure of line as an art deco vase, with soft, all-American features and a comic-book hero’s jutting chin—you could park a Packard on it.” It echoes what Pauline Kael says of Isabella Rossellini in Blue Velvet: “She even has the kind of nostrils that cover artists can represent accurately with two dots.” MacLachlan’s chin and Rossellini’s nose would have caught our attention in any case, but it’s also a matter of lighting and makeup, and Lynch shoots them to emphasize their roots in the pulp tradition, or, more accurately, in the subconscious store of images that we take from those sources. And the casting gets him halfway there.

This leaves us in a peculiar position when it comes to the third season of Twin Peaks, which, both by nature and by design, is about aging. Mark Frost said in an interview: “It’s an exercise in engaging with one of the most powerful themes in all of art, which is the ruthless passage of time…We’re all trapped in time and we’re all going to die. We’re all traveling along this conveyor belt that is relentlessly moving us toward this very certain outcome.” One of the first, unforgettable images from the show’s promotional materials was Kyle MacLachlan’s face, a quarter of a century older, emerging from the darkness into light, and our feelings toward these characters when they were younger inevitably shape the way we regard them now. I felt this strongly in two contrasting scenes from last night’s episode. It offers us our first extended look at Sarah Palmer, played by Grace Zabriskie, who delivers a freakout in a grocery store that reminds us of how much we’ve missed and needed her—it’s one of the most electrifying moments of the season. And we also finally see Audrey Horne again, in a brutally frustrating sequence that feels to me like the first time that the show’s alienating style comes off as a miscalculation, rather than as a considered choice. Audrey isn’t just in a bad place, which we might have expected, but a sad, unpleasant one, with a sham marriage and a monster of a son, and she doesn’t even know the worst of it yet. It would be a hard scene to watch with anyone, but it’s particularly painful when we set it against our first glimpse of Audrey in the original series, when we might have said, along with the Norwegian businessman at the Great Northern Hotel: “Excuse me, is there something wrong, young pretty girl?”

Yet the two scenes aren’t all that dissimilar. Both Sarah and Audrey are deeply damaged characters who could fairly say: “Things can happen. Something happened to me.” And I can only explain away the difference by confessing that I was a little in love in my early teens with Audrey. Using those feelings against us—much as the show resists giving us Dale Cooper again, even as it extravagantly develops everything around him—must have been what Lynch and Frost had in mind. And it isn’t the first time that this series has toyed with our emotions about beauty and death. The original dream girl of Twin Peaks, after all, was Laura Palmer herself, as captured in two of its most indelible images: Laura’s prom photo, and her body wrapped in plastic. (Sheryl Lee, like January Jones in Mad Men, was originally cast for her look, and only later did anyone try to find out whether or not she could act.) The contrast between Laura’s lovely features and her horrifying fate, in death and in the afterlife, was practically the motor on which the show ran. Her face still opens every episode of the revival, dimly visible in the title sequence, but it also ended each installment of the original run, gazing out from behind the prison bars of the closing credits to the strains of “Laura Palmer’s Theme.” In the new season, the episodes generally conclude with whatever dream pop band Lynch feels like showcasing, usually with a few cool women, and I wouldn’t want to give that up. But I also wonder whether we’re missing something when we take away Laura at the end. This season began with Cooper being asked to find her, but she often seems like the last thing on anyone’s mind. Twin Peaks never allowed us to forget her before, because it left us staring at her photograph each week, which was the only time that one of its beautiful faces seemed to be looking back at us.

The driver and the signalman

In his landmark book Design With Nature, the architect Ian L. McHarg shares an anecdote from the work of an English biologist named George Scott Williamson. McHarg, who describes Williamson as “a remarkable man,” mentions him in passing in a discussion of the social aspects of health: “He believed that physical, mental, and social health were unified attributes and that there were aspects of the physical and social environment that were their corollaries.” Before diving more deeply into the subject, however, McHarg offers up an apparently unrelated story that was evidently too interesting to resist:

One of the most endearing stories of this man concerns a discovery made when he was undertaking a study of the signalmen who maintain lonely vigils while operating the switches on British railroads. The question to be studied was whether these lonely custodians were subject to boredom, which would diminish their dependability. It transpired that lonely or not, underpaid or not, these men had a strong sense of responsibility and were entirely dependable. But this was not the major perception. Williamson learned that every single signalman, from London to Glasgow, could identify infallibly the drivers of the great express trains which flashed past their vision at one hundred miles per hour. The drivers were able to express their unique personalities through the unlikely and intractable medium of some thousand tons of moving train, passing in a fraction of a second. The signalmen were perceptive to this momentary expression of the individual, and Williamson perceived the power of the personality.

I hadn’t heard of Williamson before reading this wonderful passage, and all that I know about him is that he was the founder of the Peckham Experiment, an attempt to provide inexpensive health and recreation services to a neighborhood in Southeast London. The story of the signalmen seems to make its first appearance in his book Science, Synthesis, and Sanity: An Inquiry Into the Nature of Living, which he cowrote with his wife and collaborator Innes Hope Pearse. They relate:

Or again, sitting in a railway signal box on a dark night, in the far distance from several miles away came the rumble of the express train from London. “Hallo,” said my friend the signalman. “Forsyth’s driving her—wonder what’s happened to Courtney?” Next morning, on inquiry of the stationmaster at the junction, I found it was true. Courtney had been taken ill suddenly and Forsyth had deputized for him—all unknown, of course, to the signalman who in any case had met neither Forsyth nor Courtney. He knew them only as names on paper and by their “action-pattern” impressed on a dynamic medium—a unique action-pattern transmitted through the rumble of an unseen train. Or, in a listening post with nothing visible in the sky, said the listener: “That’s ‘Lizzie,’ and Crompton’s flying her.” “Lizzie” an airplane, and her pilot imprinting his action-pattern on her course.

And while Williamson and Pearse are mostly interested in the idea of an individual’s “action-pattern” being visible in an unlikely medium, it’s hard not to come away more struck, like McHarg, by the image of the lone signalman, the passing machine, and the transient moment of connection between them.

As I read over this, it occurred to me that it perfectly encapsulated our relationship with a certain kind of pop culture. We’re the signalmen, and the movie or television show is the train. As we sit in our living rooms, lonely and relatively isolated, something passes across our field of vision—an episode of Game of Thrones, say, which often feels like a locomotive to the face. This is the first time that we’ve seen it, but it represents the end result of a process that has unfolded for months or years, as the episode was written, shot, edited, scored, and mixed, with the contributions of hundreds of men and women we wouldn’t be able to name. As we experience it, however, we see the glimmer of another human being’s personality, as expressed through the narrative machine. It isn’t just a matter of the visible choices made on the screen, but of something less definable, a “style” or “voice” or “attitude,” behind which, we think, we can make out the amorphous factors of influence and intent. We identify an artist’s obsessions, hangups, and favorite tricks, and we believe that we can recognize the mark of a distinctive style even when it goes uncredited. Sometimes we have a hunch about what happened on the set that day, or the confluence of studio politics that led to a particular decision, even if we have no way of knowing it firsthand. (This was one of the tics of Pauline Kael’s movie reviews that irritated Renata Adler: “There was also, in relation to filmmaking itself, an increasingly strident knowingness: whatever else you may think about her work, each column seemed more hectoringly to claim, she certainly does know about movies. And often, when the point appeared most knowing, it was factually false.”) We may never know the truth, but it’s enough if a theory seems plausible. And the primary difference between us and the railway signalman is that we can share our observations with everyone in sight.

I’m not saying that these inferences are necessarily incorrect, any more than the signalmen were wrong when they recognized the personal styles of particular drivers. If Williamson’s account is accurate, they were often right. But it’s worth emphasizing that the idea that you can recognize a driver from the passage of a train is no less strange than the notion that we can know something about, say, Christopher Nolan’s personality from Dunkirk. Both are “unlikely and intractable” mediums that serve as force multipliers for individual ability, and in the case of a television show or movie, there are countless unseen variables that complicate our efforts to attribute anything to anyone, much less pick apart the motivations behind specific details. The auteur theory in film represents an attempt to read movies like novels, but as Thomas Schatz pointed out decades ago in his book The Genius of the System, trying to read Casablanca as the handiwork of Michael Curtiz, rather than that of all of its collaborators taken together, is inherently problematic. And this is easy to forget. (I was reminded of this by the recent controversy over David Benioff and D.B. Weiss’s pitch for their Civil War alternate history series Confederate. I agree with the case against it that the critic Roxane Gay presents in her opinion piece for the New York Times, but the fact that we’re closely scrutinizing a few paragraphs for clues about the merits of a show that doesn’t even exist only hints at how fraught the conversation will be after it actually premieres.) There’s a place for informed critical discussion about any work of art, but we’re often drawing conclusions based on the momentary passage of a huge machine before our eyes, and we don’t know much about how it got there or what might be happening inside. Most of us aren’t even signalmen, who are a part of the system itself. We’re trainspotters.

The genius naïf

Last night, after watching the latest episode of Twin Peaks, I turned off the television before the premiere of the seventh season of Game of Thrones. This is mostly because I only feel like subscribing to one premium channel at a time, but even if I still had HBO, I doubt that I would have tuned in. I gave up on Game of Thrones a while back, both because I was uncomfortable with its sexual violence and because I felt that the average episode had degenerated into a holding pattern—it cut between storylines simply to remind us that they still existed, and it relied on unexpected character deaths and bursts of bloodshed to keep the audience awake. The funny thing, of course, is that you could level pretty much the same charges against the third season of Twin Peaks, which I’m slowly starting to feel may be the television event of the decade. Its images of violence against women are just as unsettling now as they were a quarter of a century ago, when Madeleine Ferguson met her undeserved end; it cuts from one subplot to another so inscrutably that I’ve compared its structure to that of a sketch comedy show; and it has already delivered a few scenes that rank among the goriest in recent memory. So what’s the difference? If you’re feeling generous, you can say that one is an opportunistic display of popular craftsmanship, while the other is a singular, if sometimes incomprehensible, artistic vision. And if you’re less forgiving, you can argue that I’m being hard on one show that I concluded was jerking me around, while indulging another that I wanted badly to love.

It’s a fair point, although I don’t think it’s necessarily true, based solely on my experience of each show in the moment. I’ve often found my attention wandering during even solid episodes of Game of Thrones, while I’m rarely less than absorbed for the full hour of Twin Peaks, even though I’d have trouble explaining why. But there’s no denying the fact that I approach each show in a different state of mind. One of the most obvious criticisms of Twin Peaks, then and now, is that its pedigree prompts viewers to overlook or forgive scenes that might seem questionable in a more conventional series. (There have been times, I’ll confess, when I’ve felt like Homer Simpson chuckling “Brilliant!” and then confessing: “I have absolutely no idea what’s going on.”) Yet I don’t think we need to apologize for this. The history of the series, the track record of its creators, and everything implied by its brand mean that most viewers are willing to give it the benefit of the doubt. David Lynch and Mark Frost are clearly aware of their position, and they’ve leveraged it to the utmost, resulting in a show in which they’re free to do just about anything they like. It’s hard to imagine any other series getting away with this, but it’s also hard to imagine another show persuading a million viewers each week to meet it halfway. The implicit contract between Game of Thrones and its audience is very different, which makes the show’s lapses harder to forgive. One of the great fascinations of Lynch’s career is whether he even knows what he’s doing half the time, and it’s much less interesting to ask this question of David Benioff and D.B. Weiss, any more than it is of Chris Carter.

By now, I don’t think there’s any doubt that Lynch knows exactly what he’s doing, but that confusion is still central to his appeal. Pauline Kael’s review of Blue Velvet might have been written of last night’s Twin Peaks:

You wouldn’t mistake frames from Blue Velvet for frames from any other movie. It’s an anomaly—the work of a genius naïf. If you feel that there’s very little art between you and the filmmaker’s psyche, it may be because there’s less than the usual amount of inhibition…It’s easy to forget about the plot, because that’s where Lynch’s naïve approach has its disadvantages: Lumberton’s subterranean criminal life needs to be as organic as the scrambling insects, and it isn’t. Lynch doesn’t show us how the criminals operate or how they’re bound to each other. So the story isn’t grounded in anything and has to be explained in little driblets of dialogue. But Blue Velvet has so much aural-visual humor and poetry that it’s sustained despite the wobbly plot and the bland functional dialogue (that’s sometimes a deliberate spoof of small-town conventionality and sometimes maybe not)…Lynch skimps on these commercial-movie basics and fouls up on them, too, but it’s as if he were reinventing movies.

David Thomson, in turn, called the experience of seeing Blue Velvet a moment of transcendence: “A kind of passionate involvement with both the story and the making of a film, so that I was simultaneously moved by the enactment on screen and by discovering that a new director had made the medium alive and dangerous again.”

Twin Peaks feels more alive and dangerous than Game of Thrones ever did, and the difference, I think, lies in our awareness of the effects that the latter is trying to achieve. Even at its most shocking, there was never any question about what kind of impact it wanted to have, as embodied by the countless reaction videos that it inspired. (When you try to imagine videos of viewers reacting to Twin Peaks, you get a sense of the aesthetic abyss that lies between these two shows.) There was rarely a scene in which the intended emotion wasn’t clear, and even when it deliberately sought to subvert our expectations, it was by substituting one stimulus and response for another—which doesn’t mean that it wasn’t effective, or that there weren’t moments, at its best, that affected me as powerfully as any I’ve ever seen. Even the endless succession of “Meanwhile, back at the Wall” scenes had a comprehensible structural purpose. On Twin Peaks, by contrast, there’s rarely any sense of how we’re supposed to be feeling about any of it. Its violence is shocking because it doesn’t seem to serve anything, certainly not anyone’s character arc, and our laughter is often uncomfortable, so that we don’t know if we’re laughing at the situation onscreen, at the show, or at ourselves. It may not be an experiment that needs to be repeated ever again, any more than Blue Velvet truly “reinvented” anything over the long run, except my own inner life. But at a time when so many prestige dramas seem content to push our buttons in ever more expert and ruthless ways, I’m grateful for a show that resists easy labels. Lynch may or may not be a genius naïf, but no ordinary professional could have done what he does here.

Written by nevalalee

July 17, 2017 at 7:54 am

We lost it at the movies

Over a decade ago, the New Yorker film critic David Denby published a memoir titled American Sucker. I read it when it first came out, and I honestly can’t remember much about it, but there’s one section that has stuck in my mind ever since. Denby is writing of his obsession with investing, which has caused him to lose much of what he once loved about life, and he concludes sadly:

Well, you can’t get back to that. Do your job, then. After much starting and stopping, and considerable shifting of clauses, all the while watching the Nasdaq run above 5,000 on the CNNfn website, I put together the following as the opening of a review.

It happens to be his piece on Steven Soderbergh’s Erin Brockovich, which begins like this:

In Erin Brockovich, Julia Roberts appears in scene after scene wearing halter tops with a bit of bra showing; there’s a good bit of leg showing, too, often while she’s holding an infant on one arm. This upbeat, inspirational melodrama, based on a true story and written by Susannah Grant and directed by Steven Soderbergh, has been brought to life by a movie star on a heavenly rampage. Roberts swings into rooms, ablaze with indignation, her breasts pushed up and bulging out of the skimpy tops, and she rants at the people gaping at her. She’s a mother and a moral heroine who dresses like trailer trash but then snaps at anyone who doesn’t take her seriously—a real babe in arms, who gets to protect the weak and tell off the powerful while never turning her back on what she is.

Denby stops to evaluate his work: “Nothing great, but not bad either. I was reasonably happy with it as a lead—it moves, it’s active, it conveys a little of my pleasure in the picture. I got up and walked around the outer perimeter of the twentieth floor, looking west, looking east.”

I’ve never forgotten this passage, in part because it represents one of the few instances in which a prominent film critic has pulled back the curtain on an obvious but rarely acknowledged fact—that criticism is a genre of writing in itself, and that the phrases with which a movie is praised, analyzed, or dismissed are subject to the same sort of tinkering, revision, and doubt that we associate with other forms of expression. Critics are only human, even if they sometimes try to pretend that they aren’t, as they present their opinions as the product of an unruffled sensibility. I found myself thinking of this again as I followed the recent furor over David Edelstein’s review of Wonder Woman in New York magazine, which starts as follows:

The only grace note in the generally clunky Wonder Woman is its star, the five-foot-ten-inch Israeli actress and model Gal Gadot, who is somehow the perfect blend of superbabe-in-the-woods innocence and mouthiness. She plays Diana, the daughter of the Amazon queen Hippolyta (Connie Nielsen) and a trained warrior. But she’s also a militant peacenik. Diana lives with Amazon women on a mystically shrouded island but she’s not Amazonian herself. She was, we’re told, sculpted by her mother from clay and brought to life by Zeus. (I’d like to have seen that.)

Edelstein was roundly attacked for what was perceived as the sexist tone of his review, which also includes such observations as “Israeli women are a breed unto themselves, which I say with both admiration and trepidation,” and “Fans might be disappointed that there’s no trace of the comic’s well-documented S&M kinkiness.” He responded with a private Facebook post, widely circulated, in which he wrote: “Right now I think the problem is that some people can’t read.” And he has since written a longer, more apologetic piece in which he tries to explain his choice of words.

I haven’t seen Wonder Woman, although I’m looking forward to it, so I won’t wade too far into the controversy itself. But when I look at these two reviews—which, significantly, are about films focusing on different sorts of heroines—I see some striking parallels. It isn’t just the echo of “a real babe in arms” with “superbabe-in-the-woods,” or how Brockovich “gets to protect the weak and tell off the powerful” while Diana is praised for her “mouthiness.” It’s something in the rhythm of their openings, which start at a full sprint with a consideration of a movie star’s appearance. As Denby says, “it moves, it’s active,” almost to a fault. Here are three additional examples, taken at random from the first paragraphs of reviews published in The New Yorker:

Gene Wilder stares at the world with nearsighted, pale-blue-eyed wonder; he was born with a comic’s flyblown wig and the look of a reddish creature from outer space. His features aren’t distinct; his personality lacks definition. His whole appearance is so fuzzy and weak he’s like mist on the lens.

There is a thick, raw sensuality that some adolescents have which seems almost preconscious. In Saturday Night Fever, John Travolta has this rawness to such a degree that he seems naturally exaggerated: an Expressionist painter’s view of a young role. As Tony, a nineteen-year-old Italian Catholic who works selling paint in a hardware store in Brooklyn’s Bay Ridge, he wears his heavy black hair brushed up in a blower-dried pompadour. His large, wide mouth stretches across his narrow face, and his eyes—small slits, close together—are, unexpectedly, glintingly blue and panicky.

As Jake La Motta, the former middleweight boxing champ, in Raging Bull, Robert De Niro wears scar tissue and a big, bent nose that deform his face. It’s a miracle that he didn’t grow them—he grew everything else. He developed a thick-muscled neck and a fighter’s body, and for the scenes of the broken, drunken La Motta he put on so much weight that he seems to have sunk in the fat with hardly a trace of himself left.

All of these reviews were written, of course, by Pauline Kael, who remains the movie critic who has inspired the greatest degree of imitation among her followers. And when you go back and read Denby and Edelstein’s openings, they feel like Kael impersonations, which is the mode on which a critic tends to fall back when he or she wants to start a review so that “it moves, it’s active.” Beginning with a description of the star, delivered in her trademark hyperaware, slightly hyperbolic style, was one of Kael’s stock devices, as if she were observing an animal seen in the wild and frantically jotting down her impressions before they faded. It’s a technical trick, but it’s a good one, and it isn’t surprising that Kael’s followers like to employ it, consciously or otherwise. It’s when a male critic uses it to describe the appearance of a woman that we run into trouble. (The real offender here isn’t Denby or Edelstein, but Anthony Lane, Kael’s successor at The New Yorker, whose reviews have the curious habit of panning a movie for a page and a half, and then pausing a third of the way from the end to rhapsodize about the appearance of a starlet in a supporting role, which is presented as its only saving grace. He often seems to be leering at her a little, which is possibly an inadvertent consequence of his literary debt to Kael. When Lane says of Scarlett Johansson, “She seemed to be made from champagne,” he’s echoing the Kael who wrote of Madeline Kahn: “When you look at her, you see a water bed at just the right temperature.”) Kael was a sensualist, and to the critics who came after her, who are overwhelmingly male, she bequeathed a toolbox that is both powerful and susceptible to misuse when utilized reflexively or unthinkingly. I don’t think that Edelstein is necessarily sexist, but he was certainly careless, and in his routine ventriloquism of Kael, which to a professional critic comes as easily as breathing, he temporarily forgot who he was and what movie he was reviewing. Kael was the Wonder Woman of film critics. But when we try to channel her voice, and we can hardly help it, it’s worth remembering—as another superhero famously learned—that with great power comes great responsibility.

The critical path

Note: I’m taking a few days off, so I’ll be republishing some of my favorite pieces from earlier in this blog’s run. This post originally appeared, in a slightly different form, on February 16, 2016.

Every few years or so, I go back and revisit Renata Adler’s famous attack in the New York Review of Books on the reputation of the film critic Pauline Kael. As a lifelong Kael fan, I don’t agree with Adler—who describes Kael’s output as “not simply, jarringly, piece by piece, line by line, and without interruption, worthless”—but I respect the essay’s fire and eloquence, and it’s still a great read. What is sometimes forgotten is that Adler opens with an assault, not on Kael alone, but on the entire enterprise of professional criticism itself. Here’s what she says:

The job of the regular daily, weekly, or even monthly critic resembles the work of the serious intermittent critic, who writes only when he is asked to or genuinely moved to, in limited ways and for only a limited period of time…Normally, no art can support for long the play of a major intelligence, working flat out, on a quotidian basis. No serious critic can devote himself, frequently, exclusively, and indefinitely, to reviewing works most of which inevitably cannot bear, would even be misrepresented by, review in depth…

The simple truth—this is okay, this is not okay, this is vile, this resembles that, this is good indeed, this is unspeakable—is not a day’s work for a thinking adult. Some critics go shrill. Others go stale. A lot go simultaneously shrill and stale.

Adler concludes: “By far the most common tendency, however, is to stay put and simply to inflate, to pretend that each day’s text is after all a crisis—the most, first, best, worst, finest, meanest, deepest, etc.—to take on, since we are dealing in superlatives, one of the first, most unmistakable marks of the hack.” And I think that she has a point, even if I have to challenge a few of her assumptions. (The statement that most works of art “inevitably cannot bear, would even be misrepresented by, review in depth,” is particularly strange, with its implicit division of all artistic productions into the sheep and the goats. It also implies that it’s the obligation of the artist to provide a worthy subject for the major critic, when in fact it’s the other way around: as a critic, you prove yourself in large part through your ability to mine insight from the unlikeliest of sources.) Writing reviews on a daily or weekly basis, especially when you have a limited amount of time to absorb the work itself, lends itself inevitably to shortcuts, and you often find yourself falling back on the same stock phrases and judgments. And Adler’s warning about “dealing in superlatives” seems altogether prescient. As Keith Phipps and Tasha Robinson of The A.V. Club pointed out a few years back, the need to stand out in an ocean of competing coverage means that every topic under consideration becomes either an epic fail or an epic win: a sensible middle ground doesn’t generate page views.

Pauline Kael

But the situation, at least from Adler’s point of view, is even more dire than when she wrote this essay in the early eighties. When Adler’s takedown of Kael first appeared, the most threatening form of critical dilution lay in weekly movie reviews: today, we’re living in a media environment in which every episode of every television show gets thousands of words of critical analysis from multiple pop culture sites. (Adler writes: “Television, in this respect, is clearly not an art but an appliance, through which reviewable material is sometimes played.” Which is only a measure of how much the way we think and talk about the medium has changed over the intervening three decades.) The conditions that Adler identifies as necessary for the creation of a major critic like Edmund Wilson or Harold Rosenberg—time, the ability to choose one’s subjects, and the freedom to quit when necessary—have all but disappeared for most writers hoping to make a mark, or even just a living. To borrow a trendy phrase, we’ve reached a point of peak content, with a torrent of verbiage being churned out at an unsustainable pace without the advertising dollars to support it, in a situation that can be maintained only by the seemingly endless supply of aspiring writers willing to be chewed up by the machine. And if Adler thought that even a monthly reviewing schedule was deadly for serious criticism, I’d be curious to hear how she feels about the online apprenticeship that all young writers seem expected to undergo these days.

Still, I'd like to think that Adler got it wrong, just as I believe that she was ultimately mistaken about Kael, whose legacy, for all its flaws, still endures. (It's revealing to note that Adler had a long, distinguished career as a writer and critic herself, and yet she almost certainly remains best known among casual readers for her Kael review.) Not every lengthy writeup of the latest episode of Riverdale is going to stand the test of time, but as a crucible for forming a critic's judgment, this daily grind feels like a necessary component, even if it isn't the only one. A critic needs time and leisure to think about major works of art, which is a situation that the current media landscape doesn't seem prepared to offer. But the ability to form quick judgments about works of widely varying quality and to express them fluently on deadline is an indispensable part of any critic's toolbox. When taken as an end in itself, it can be deadening, as Adler notes, but it can also be the foundation for something more, even if it has to be undertaken outside of—or despite—the critic's day job. The critic's responsibility, now more than ever, isn't to detach entirely from the relentless pace of pop culture, but to find ways of channeling it into something deeper than the instantaneous think piece or hot take. As a daily blogger who also undertakes projects that can last for months or years, I'm constantly mindful of the relationship between my work on demand and my larger ambitions. And I sure hope that the two halves can work together. Because, like it or not, every critic is walking that path already.

Written by nevalalee

April 18, 2017 at 9:00 am

The art of the anti-blurb

leave a comment »

In a recent issue of The New Yorker, the critic Dan Chiasson offers up an appraisal of the poet Bill Knott, who died in 2014. To be honest, I’d either never heard of Knott or forgotten his name, but I suspect that he might have been pleased by this. Knott, who taught for decades at Emerson College, spent his entire career sticking resolutely to the edges of the literary world, distancing himself from mainstream publishers and electing to distribute his poems himself in cheap editions on Amazon. Chiasson relates:

The books that did make it to print usually featured brutal “anti-blurbs,” which Knott culled from reviews good and bad alike: his work was “grotesque,” “malignant,” “tasteless,” and “brainless,” according to some of the big names of the day.

Here are a few more of the blurbs he reprinted: “Bill Knott’s ancient, academic ramblings are part of what’s wrong with poetry today. Ignore the old bastard.” “Bill Knott bores me to tears.” “Bill Knott should be beaten with a flail.” “Bill Knott’s poems are so naïve that the question of their poetic quality hardly arises…Mr. Knott practices a dead language.” According to another reminiscence by the editor Robert P. Baird, Knott sometimes took it even further: “On his various blogs, which spawned and deceased like mayflies, he posted collages of rejection slips and a running tally of anti-blurbs: positive reviews and compliments that he’d carved up with ellipses to read like pans.” Even his actual negative reviews weren’t enough—Knott felt obliged to create his own.

The idea of a writer embracing his attackers has an obvious subversive appeal. Norman Mailer, revealingly, liked the idea so much that he indulged in it no fewer than three times, and far less nimbly than Knott did. After the release of The Deer Park, he ran an ad in The Village Voice that amounted to a parody of the usual collage of laudatory quotes—“The year’s worst snake pit in fiction,” “Moronic mindlessness,” “A bunch of bums”—and noted in fine print at the bottom, just in case we didn’t get the point: “This advertisement was paid for by Norman Mailer.” Two decades later, he decided to do the same thing with Marilyn, mostly as a roundabout way of responding to a single bad review by Pauline Kael. As the editor Robert Markel recalls in Peter Manso’s oral biography:

The book was still selling well when [Mailer] came in with his idea of a full two-page ad. Since he was now more or less in the hands of [publisher] Harold Roth, there was a big meeting in Harold’s office. What he wanted to do was exactly what he’d done with The Village Voice ad for The Deer Park: present all the positive and negative reviews, including Kael’s, setting the two in opposition. Harold was very much against it. He thought the two pages would be a stupid waste of money, but more, it was the adversarial nature of the ad as Norman conceived it.

Ultimately, Mailer persuaded Roth to play along: “He implied he’d made a study of this kind of thing and knew what he was talking about.” And five years down the line, he did it yet again with his novel Ancient Evenings, printing up a counter display for bookstores with bad reviews for Moby Dick, Anna Karenina, Leaves of Grass, and his own book, followed by a line with a familiar ring to it: “The quotations in this poster were selected by Norman Mailer.”

This compulsiveness about reprinting his bad reviews, and his insistence that everyone know that he had conceived and approved of it, is worth analyzing, because it’s very different from Knott’s. Mailer’s whole life was built on sustaining an image of intellectual machismo that often rested on unstable foundations, and embracing the drubbings that his books received was a way of signaling that he was tougher than his critics. Like so much else, it was a pose—Mailer hungered for fame and attention, and he felt his negative reviews as keenly as anyone. When Time ran a snarky notice of his poetry collection Deaths for the Ladies, Mailer replied, “in a fury of incalculable pains,” with a poem of his own, in which he compared himself to a bull in the ring and the reviewer to a cowardly picador. He recalled in Existential Errands:

The review in Time put iron into my heart again, and rage, and the feeling that the enemy was more alive than ever, and dirtier in the alley, and so one had to mend, and put on the armor, and go to war, go out to war again, and try to hew huge strokes with the only broadsword God ever gave you, a glimpse of something like Almighty prose.

This is probably a much healthier response. But in the contrast between Mailer’s expensive advertisements for himself and Knott’s photocopied chapbooks, you can see the difference between a piece of performance art and a philosophy of life truly lived. Of the two, Mailer ends up seeming more vulnerable. As he admits: “I had secret hopes, I now confess, that Deaths for the Ladies would be a vast success at the bar of poetry.”

Of course, Knott's attitude was a bit of a pose as well. Chiasson once encountered his own name on Knott's blog, which referred to him as "Chiasson-the-Assassin," which indicates that the poet's attitude toward critics was something other than indifference. But it was also a pose that was indistinguishable from the man inside, as Elisa Gabbert, one of Knott's former students, observed: "It was kind of a goof, but that was his whole life. It was a really grand goof." And you can judge them by their fruits. Mailer's advertisements are brilliant, but the product that they're selling is Mailer himself, and you're clearly supposed to depart with the impression that the critics have trashed a major work of art. After reading Knott's anti-blurbs, you end up questioning the whole notion of laudatory quotes itself, which is a more productive kind of skepticism. (David Lynch pulled off something similar when he printed an ad for Lost Highway with the words: "Two Thumbs Down!" In response, Roger Ebert wrote: "It's creative to use the quote in that way…These days quotes in movie ads have been devalued by the 'quote whores' who supply gushing praise to publicists weeks in advance of an opening." The situation with blurbs is slightly different, but there's no question that they've been devalued as well—a book without "advance praise" looks vaguely suspicious, so the only meaningful fact about most blurbs is that they exist.) Resistance to reviews is so hard for a writer to maintain that asserting it feels like a kind of superpower. If asked, Mailer might have replied, like Bruce Banner in The Avengers: "That's my secret. I'm always angry." But I have a hunch that the truth is closer to what Wolverine says when Rogue asks if it hurts when his claws come out: "Every time."

Falls the Shadow

with one comment

Over the last year or so, I’ve found myself repeatedly struck by the parallels between the careers of John W. Campbell and Orson Welles. At first, the connection might seem tenuous. Campbell and Welles didn’t look anything alike, although they were about the same height, and their politics couldn’t have been more different—Welles was a staunch progressive and defender of civil rights, while Campbell, to put it mildly, wasn’t. Welles was a wanderer, while Campbell spent most of his life within driving distance of his birthplace in New Jersey. But they’re inextricably linked in my imagination. Welles was five years younger than Campbell, but they flourished at exactly the same time, with their careers peaking roughly between 1937 and 1942. Both owed significant creative breakthroughs to the work of H.G. Wells, who inspired Campbell’s story “Twilight” and Welles’s Mercury Theater adaptation of The War of the Worlds. In 1938, Campbell saw Welles’s famous modern-dress production of Julius Caesar with the writer L. Sprague de Camp, of which he wrote in a letter:

It represented, in a way, what I’m trying to do in the magazine. Those humans of two thousand years ago thought and acted as we do—even if they did dress differently. Removing the funny clothes made them more real and understandable. I’m trying to get away from funny clothes and funny-looking people in the pictures of the magazine. And have more humans.

And I suspect that the performance started a train of thought in both men’s minds that led to de Camp’s novel Lest Darkness Fall, which is about a man from the present who ends up in ancient Rome.

Campbell was less pleased by Welles’s most notable venture into science fiction, which he must have seen as an incursion on his turf. He wrote to his friend Robert Swisher: “So far as sponsoring that War of [the] Worlds thing—I’m damn glad we didn’t! The thing is going to cost CBS money, what with suits, etc., and we’re better off without it.” In Astounding, he said that the ensuing panic demonstrated the need for “wider appreciation” of science fiction, in order to educate the public about what was and wasn’t real:

I have long been an exponent of the belief that, should interplanetary visitors actually arrive, no one could possibly convince the public of the fact. These stories wherein the fact is suddenly announced and widespread panic immediately ensues have always seemed to me highly improbable, simply because the average man did not seem ready to visualize and believe such a statement.

Undoubtedly, Mr. Orson Welles felt the same way.

Their most significant point of intersection was The Shadow, who was created by an advertising agency for Street & Smith, the publisher of Astounding, as a fictional narrator for the radio series Detective Story Hour. Before long, he became popular enough to star in his own stories. Welles, of course, voiced The Shadow from September 1937 to October 1938, and Campbell plotted some of the magazine installments in collaboration with the writer Walter B. Gibson and the editor John Nanovic, who worked in the office next door. And his identification with the character seems to have run even deeper. In a profile published in the February 1946 issue of Pic magazine, the reporter Dickson Hartwell wrote of Campbell: “You will find him voluble, friendly and personally depressing only in what his friends claim is a startling physical resemblance to The Shadow.”

It isn't clear if Welles was aware of Campbell, although it would be more surprising if he wasn't. Welles flitted around science fiction for years, and he occasionally crossed paths with other authors in that circle. To my lasting regret, he never met L. Ron Hubbard, which would have been an epic collision of bullshitters—although Philip Seymour Hoffman claimed that he based his performance in The Master mostly on Welles, and Theodore Sturgeon once said that Welles and Hubbard were the only men he had ever met who could make a room seem crowded simply by walking through the door. In 1946, Isaac Asimov received a call from a lawyer whose client wanted to buy all rights to his robot story "Evidence" for $250. When he asked Campbell for advice, the editor said that he thought it seemed fair, but Asimov's wife told him to hold out for more. Asimov called back to ask for a thousand dollars, adding that he wouldn't discuss it further until he found out who the client was. When the lawyer told him that it was Welles, Asimov agreed to the sale, delighted, but nothing ever came of it. (Welles also owned the story in perpetuity, making it impossible for Asimov to sell it elsewhere, a point that Campbell, who took a notoriously casual attitude toward rights, had neglected to raise.) Twenty years later, Welles made inquiries into the rights for Heinlein's The Puppet Masters, which were tied up at the time with Roger Corman, but never followed up. And it's worth noting that both stories are concerned with the problem of knowing whether other people are what they claim to be, which Campbell had brilliantly explored in "Who Goes There?" It's a theme to which Welles obsessively returned, and it's fascinating to speculate what he might have done with it if Howard Hawks and Christian Nyby hadn't gotten there first with The Thing From Another World. Who knows what evil lurks in the hearts of men?

But their true affinities were spiritual ones. Both Campbell and Welles were child prodigies who reinvented an art form largely by being superb organizers of other people’s talents—although Campbell always downplayed his own contributions, while Welles appears to have done the opposite. Each had a spectacular early success followed by what was perceived as decades of decline, which they seem to have seen coming. (David Thomson writes: “As if Welles knew that Kane would hang over his own future, regularly being used to denigrate his later works, the film is shot through with his vast, melancholy nostalgia for self-destructive talent.” And you could say much the same thing about “Twilight.”) Both had a habit of abandoning projects as soon as they realized that they couldn’t control them, and they both managed to seem isolated while occupying the center of attention in any crowd. They enjoyed staking out unreasonable positions in conversation, just to get a rise out of listeners, and they ultimately drove away their most valuable collaborators. What Pauline Kael writes of Welles in “Raising Kane” is equally true of Campbell:

He lost the collaborative partnerships that he needed…He was alone, trying to be “Orson Welles,” though “Orson Welles” had stood for the activities of a group. But he needed the family to hold him together on a project and to take over for him when his energies became scattered. With them, he was a prodigy of accomplishments; without them, he flew apart, became disorderly.

Both men were alone when they died, and both filled their friends, admirers, and biographers with intensely mixed feelings. I’m still coming to terms with Campbell. But I have a hunch that I’ll end up somewhere close to Kael’s ambivalence toward Welles, who, at the end of an essay that was widely seen as puncturing his myth, could only conclude: “In a less confused world, his glory would be greater than his guilt.”

Farewell to Mystic Falls

with one comment

Note: Spoilers follow for the series finale of The Vampire Diaries.

On Friday, I said goodbye to The Vampire Diaries, a series that I once thought was one of the best genre shows on television, only to stop watching it for its last two seasons. Despite its flaws, it occupies a special place in my memory, in part because its strengths were inseparable from the reasons that I finally abandoned it. Like Glee, The Vampire Diaries responded to its obvious debt to an earlier franchise—High School Musical for the former, Twilight for the latter—both by subverting its predecessor and by burning through ideas as relentlessly as it could. It’s as if both shows decided to refute any accusations of unoriginality by proving that they could be more ingenious than their inspirations, and amazingly, it sort of worked, at least for a while. There’s a limit to how long any series can repeatedly break down and reassemble itself, however, and both started to lose steam after about three years. In the case of The Vampire Diaries, its problems crystallized around its ostensible lead, Elena Gilbert, as portrayed by the game and talented Nina Dobrev, who left the show two seasons ago before returning for an encore in the finale. Elena spent most of her first sendoff asleep, and she isn’t given much more to do here. There’s a lot about the episode that I liked, and it provides satisfying moments of closure for many of its characters, but Elena isn’t among them. In the end, when she awakens from the magical coma in which she has been slumbering, it’s so anticlimactic that it reminds me of what Pauline Kael wrote of Han’s revival in Return of the Jedi: “It’s as if Han Solo had locked himself in the garage, tapped on the door, and been let out.”

And what happened to Elena provides a striking case study of why the story’s hero is often fated to become the least interesting person in sight. The main character of a serialized drama is under such pressure to advance the plot that he or she becomes reduced to the diagram of a pattern of forces, like one of the fish in D’Arcy Wentworth Thompson’s On Growth and Form, in which the animal’s physical shape is determined by the outside stresses to which it has been subjected. Instead of making her own decisions, Elena was obliged to become whatever the series needed her to be. Every protagonist serves as a kind of motor for the story, which is frequently a thankless role, but it was particularly problematic on a show that defined itself by its willingness to burn through a year of potential storylines each month. Every episode felt like a season finale, and characters were freely killed, resurrected, and brainwashed to keep the wheels turning. It was hardest on Elena, who, at her best, was a compelling, resourceful heroine. After six seasons of personality changes, possessions, memory wipes, and the inexplicable choices that she made just because the story demanded it, she became an empty shell. If you were designing a show in a laboratory to see what would happen if its protagonist was forced to live through plot twists at an accelerated rate, like the stress tests that engineers use to put a component through a lifetime’s worth of wear in a short period of time, you couldn’t do much better than The Vampire Diaries. And while it might have been theoretically interesting to see what happened to the series after that one piece was removed, I didn’t think it was worth sitting through another two seasons of increasingly frustrating television.

After the finale was shot, series creators Kevin Williamson and Julie Plec made the rounds of interviews to discuss the ending, and they shared one particular detail that fascinates me. If you haven’t watched The Vampire Diaries, all you need to know is that its early seasons revolved around a love triangle between Elena and the vampire brothers Stefan and Damon, a nod to Twilight that quickly became one of the show’s least interesting aspects. Elena seemed fated to end up with Stefan, but she spent the back half of the series with Damon, and it ended with the two of them reunited. In a conversation with Deadline, Williamson revealed that this wasn’t always the plan:

Well, I always thought it would be Stefan and Elena. They were sort of the anchor of the show, but because we lost Elena in season six, we couldn’t go back. You know Nina could only come back for one episode—maybe if she had came back for the whole season, we could even have warped back towards that, but you can’t just do it in forty-two minutes.

Dobrev’s departure, in other words, froze that part of the story in place, even as the show around it continued its usual frantic developments, and when she returned, there wasn’t time to do anything but keep Elena and Damon where they had left off. There’s a limit to how much ground you can cover in the course of a single episode, so it seemed easier for the producers to stick with what they had and figure out a way to make it seem inevitable.

The fact that it works at all is a tribute to the skill of the writers and cast, as well as to the fact that the whole love triangle was basically arbitrary in the first place. As James Joyce said in a very different context, it was a bridge across which the characters could walk, and once they were safely on the other side, it could be blown to smithereens. The real challenge was how to make the finale seem like a definitive ending, after the show had killed off and resurrected so many characters that not even death itself felt like a conclusion. It resorted to much the same solution that Lost did when faced with a similar problem: it shut off all possibility of future narrative by reuniting its characters in heaven. This is partially a form of wish fulfillment, as we've seen with so many other television series, but it also puts a full stop on the story by leaving us in an afterlife, where, by definition, nothing can ever change. It's hilariously unlike the various versions of the world to come that the series has presented over the years, from which characters can always be yanked back to life when necessary, but it's also oddly moving and effective. Watching it, I began to appreciate how the show's biggest narrative liability—a cast that just can't be killed—also became its greatest asset. The defining image of The Vampire Diaries was that of a character who has his neck snapped, and then just shakes it off. Williamson and Plec must have realized, consciously or otherwise, that it was a reset button that would allow them to go through more ideas than would be possible on a show on which a broken neck was permanent. Every denizen of Mystic Falls got a great death scene, often multiple times per season, and the show exploited that freedom until it exhausted itself. It only really worked for three years out of eight, but it was a great run while it lasted. And now, after life's fitful fever, the characters can sleep well, as they sail off into the mystic.

The children are our future

leave a comment »

Clive Owen and Clare-Hope Ashitey in Children of Men

Sometimes a great film takes years to reveal its full power. Occasionally, you know what you've witnessed as soon as the closing credits begin to roll. And very rarely, you realize in the middle of the movie that you're watching something extraordinary. I've experienced this last feeling only a handful of times in my life, and my most vivid memory of it is from ten years ago, when I saw Children of Men. I'd been looking forward to it ever since seeing the trailer, and for the first twenty minutes or so, it more than lived up to my expectations. But halfway through a crucial scene—and if you've seen the movie, you know the one I mean—I began to feel the movie expanding in my head, as Pauline Kael said of The Godfather Part II, "like a soft bullet." Two weeks later, I wrote to a friend: "Alfonso Cuarón has just raised the bar for every director in the world." And I still believe this, even if the ensuing decade has clarified the film's place in the history of movies. Cuarón hasn't had the productive career that I'd hoped he would, and it took him years to follow up on his masterpiece, although he finally earned his Oscar for Gravity. The only unambiguous winner to come out of it all was the cinematographer Emmanuel Lubezki, who has won three Academy Awards in a row for refinements of the discoveries that he made here. And the story now seems prescient, of course, as Abraham Riesman of Vulture recently noted: "The film, in hindsight, seems like a documentary about a future that, in 2016, finally arrived." If nothing else, the world certainly appears to be run by exactly the sort of people of whom Jarvis Cocker was warning us.

But the most noteworthy thing about Children of Men, and the one aspect of it that its fans and imitators should keep in mind, is the insistently visceral nature of its impact. I don’t think I’m alone when I say that I was blown away the most by three elements: the tracking shots, the use of music, and the level of background detail in every scene. These are all qualities that are independent of its politics, its message, and even, to some extent, its script, which might be its weakest point. The movie can be refreshingly elliptical when it comes to the backstory of its characters and its world, but there are also holes and shortcuts that are harder to forgive. (Its clumsiest moment, for me, is when Theo is somehow able to observe and overhear Jasper’s death—an effective scene in itself—from higher ground without being noticed by anyone else. We aren’t sure where he’s standing in relation to the house, so it feels contrived and stagy, a strange lapse for a movie that is otherwise so bracingly specific about its geography.) But maybe that’s how it had to be. If the screenplay were as rich and crowded as the images, it would turn into a Christopher Nolan movie, for better or worse, and Cuarón is a very different sort of filmmaker. He’s content to leave entire swaths of the story in outline form, as if he forgot to fill in the blanks, and he’s happy to settle for a cliché if it saves time, just because his attention is so intensely focused elsewhere.

Michael Caine in Children of Men

Occasionally, this has led his movies to be something less than they should be. I really want to believe that Harry Potter and the Prisoner of Azkaban is the strongest installment in the series, but it has real structural problems that stem precisely from Cuarón's indifference to exposition: he cuts out an important chunk of dialogue that leaves the climax almost incomprehensible, so that nonreaders have to scramble to figure out what the hell is going on, when we should be caught up in the action. Gravity impressed me enormously when I saw it on the big screen, but I'm not particularly anxious to revisit it at home, where its technical marvels run the risk of being swallowed up by its rudimentary characters and dialogue. (It strikes me now that Gravity might have some of the same problems, to a much lesser extent, as Birdman, in which the use of extended takes makes it impossible to give scenes the necessary polish in the editing room. Which also implies that if you're going to hire Lubezki as your cinematographer, you'd better have a really good script.) But Children of Men is the one film in which Cuarón's shortcomings are inseparable from his strengths. His usual omissions and touches of carelessness were made for a story in which we're only meant to glimpse the overall picture. And its allegory is so vague that we can apply it to whatever we like.

This might sound like a criticism, but it isn’t: Children of Men is undeniably one of the major movies of my lifetime. And its message is more insightful than it seems, even if it takes a minute of thought to unpack. Its world falls apart as soon as humanity realizes that it doesn’t have a future, which isn’t so far from where we are now. We find it very hard, as a species, to keep the future in mind, and we often behave—even in the presence of our own children—as if this generation will be the last. When a society has some measure of economic and political security, it can make efforts to plan ahead for a decade or two, but even that modest degree of foresight disappears as soon as stability does. In Children of Men, the childbirth crisis, which doesn’t respect national or racial boundaries, takes the sort of disruptions that tend to occur far from the developed world and brings them into the heart of Europe and America, and it doesn’t even need to change any of the details. The most frightening thing about Cuarón’s movie, and what makes it most relevant to our current predicament, is that its extrapolations aren’t across time, but across the map of the world as it exists today. You don’t need to look far to see landscapes like the ones through which the characters move, or the ways in which they could spread across the planet. In the words of William Gibson, the future of Children of Men is already here. It just isn’t evenly distributed yet.

The last tango

with 5 comments

Bernardo Bertolucci, Marlon Brando, and Maria Schneider on the set of Last Tango in Paris

When I look back at many of my favorite movies, I’m troubled by a common thread that they share. It’s the theme of the control of a vulnerable woman by a man in a position of power. The Red Shoes, my favorite film of all time, is about artistic control, while Blue Velvet, my second favorite, is about sexual domination. Even Citizen Kane has that curious subplot about Kane’s attempt to turn Susan into an opera star, which may have originated as an unkind reference to William Randolph Hearst and Marion Davies, but which survives in the final version as an emblem of Kane’s need to collect human beings like playthings. It’s also hard to avoid the feeling that some of these stories secretly mirror the relationship between the director and his actresses on the set. Vertigo, of course, can be read as an allegory for Hitchcock’s own obsession with his leading ladies, whom he groomed and remade as meticulously as Scotty attempts to do with Madeline. In The Shining, Jack’s abuse of Wendy feels only slightly more extreme than what we know Kubrick—who even resembles Jack a bit in the archival footage that survives—imposed on Shelley Duvall. (Duvall’s mental health issues have cast a new pall on those accounts, and the involvement of Kubrick’s daughter Vivian has done nothing to clarify the situation.) And Roger Ebert famously hated Blue Velvet because he felt that David Lynch’s treatment of Isabella Rossellini had crossed an invisible moral line.

The movie that has been subjected to this kind of scrutiny most recently is Last Tango in Paris, after interview footage resurfaced of Bernardo Bertolucci discussing its already infamous rape scene. (Bertolucci originally made these comments three years ago, and the fact that they’ve drawn attention only now is revealing in itself—it was hiding in plain sight, but it had to wait until we were collectively prepared to talk about it.) Since the story first broke, there has been some disagreement over what Maria Schneider knew on the day of the shoot. You can read all about it here. But it seems undeniable that Bertolucci and Brando deliberately withheld crucial information about the scene from Schneider until the cameras were rolling. Even the least offensive version makes me sick to my stomach, all the more so because Last Tango in Paris has been an important movie to me for most of my life. In online discussions of the controversy, I’ve seen commenters dismissing the film as an overrated relic, a vanity project for Brando, or one of Pauline Kael’s misguided causes célèbres. If anything, though, this attitude lets us off the hook too easily. It’s much harder to admit that a film that genuinely moved audiences and changed lives might have been made under conditions that taint the result beyond retrieval. It’s a movie that has meant a lot to me, as it did to many other viewers, including some I knew personally. And I don’t think I can ever watch it again.

Marlon Brando in Last Tango in Paris

But let’s not pretend that it ends there. It reflects a dynamic that has existed between directors and actresses since the beginning, and all too often, we’ve forgiven it, as long as it results in great movies. We write critical treatments of how Vertigo and Psycho masterfully explore Hitchcock’s ambivalence toward women, and we overlook the fact that he sexually assaulted Tippi Hedren. When we think of the chummy partnerships that existed between men like Cary Grant and Howard Hawks, or John Wayne and John Ford, and then compare them with how directors have regarded their female collaborators, the contrast couldn’t be more stark. (The great example here is Gone With the Wind: George Cukor, the original director, was fired because he made Clark Gable uncomfortable, and he was replaced by Gable’s buddy Victor Fleming. Vivien Leigh and Olivia de Havilland were forced to consult with Cukor in secret.) And there’s an unsettling assumption on the part of male directors that this is the only way to get a good performance from a woman. Bertolucci says that he and Brando were hoping to get Schneider’s raw reaction “as a girl, instead of as an actress.” You can see much the same impulse in Kubrick’s treatment of Duvall. Even Michael Powell, one of my idols, writes of how he and the other actors frightened Moira Shearer to the point of tears for the climactic scene of The Red Shoes—“This was no longer acting”—and says elsewhere: “I never let love interfere with business, or I would have made love to her. It would have improved her performance.”

So what’s a film buff to do? We can start by acknowledging that the problem exists, and that it continues to affect women in the movies, whether in the process of filmmaking itself or in the realities of survival in an industry that is still dominated by men. Sometimes it leads to abuse or worse. We can also honor the work of those directors, from Ozu to Almodóvar to Wong Kar-Wai, who have treated their actresses as partners in craft. Above all else, we can come to terms with the fact that sometimes even a masterpiece fails to make up for the choices that went into it. Thinking of Last Tango in Paris, I was reminded of Norman Mailer, who wrote one famous review of the movie and was linked to it in another. (Kael wrote: “On the screen, Brando is our genius as Mailer is our genius in literature.”) Years later, Mailer supported the release from prison of a man named Jack Henry Abbott, a gifted writer with whom he had corresponded at length. Six weeks later, Abbott stabbed a stranger to death. Afterward, Mailer infamously remarked:

I’m willing to gamble with a portion of society to save this man’s talent. I am saying that culture is worth a little risk.

But it isn’t—at least not like this. Last Tango in Paris is a masterpiece. It contains the single greatest male performance I’ve ever seen. But it wasn’t worth it.

Cain rose up

with 2 comments

John Lithgow in Raising Cain

I first saw Brian De Palma’s Raising Cain when I was fourteen years old. In a weird way, it amounted to a peak moment of my early adolescence: I was on a school trip to our nation’s capital, sharing a hotel room with my friends from middle school, and we were just tickled to get away with watching an R-rated movie on cable. The fact that we ended up with Raising Cain doesn’t quite compare with the kids on The Simpsons cheering at the chance to see Barton Fink, but it isn’t too far off. I think that we liked it, and while I won’t claim that we understood it, that doesn’t mean much of anything—it’s hard for me to imagine anybody, of any age, entirely understanding this movie, which includes both me and De Palma himself. A few years later, I caught it again on television, and while I can’t say I’ve thought about it much since, I never forgot it. Gradually, I began to catch up on my De Palma, going mostly by whatever movies made Pauline Kael the most ecstatic at the time, which in itself was an education in the gap between a great critic’s pet enthusiasms and what exists on the screen. (In her review of The Fury, Kael wrote: “No Hitchcock thriller was ever so intense, went so far, or had so many ‘classic’ sequences.” I love Kael, but there are at least three things wrong with that sentence.) And ultimately De Palma came to mean a lot to me, as he does to just about anyone who responds to the movies in a certain way.

When I heard about the recut version of Raising Cain—in an interview with John Lithgow on The A.V. Club, no less, in which he was promoting his somewhat different role on The Crown—I was intrigued. And its backstory is particularly interesting. Shortly before the movie was first released, De Palma moved a crucial sequence from the beginning to the middle, eliminating an extended flashback and allowing the film to play more or less chronologically. He came to regret the change, but it was too late to do anything about it. Years later, a freelance director and editor named Peet Gelderblom read about the original cut and decided to restore it, performing a judicious edit on a digital copy. He put it online, where, unbelievably, it was seen by De Palma himself, who not only loved it but asked that it be included as a special feature on the new Blu-ray release. If nothing else, it’s a reminder of the true possibilities of fan edits, which have served mostly for competing visions of the ideal version of Star Wars. With modern software, a fan can do for a movie what Walter Murch did for Touch of Evil, restoring it to the director’s original version based on a script or a verbal description. In the case of Raising Cain, this mostly just involved rearranging the pieces in the theatrical cut, but other fans have tackled such challenges as restoring all the deleted scenes in Twin Peaks: Fire Walk With Me, and there are countless other candidates.

Raising Cain

Yet Raising Cain might be the most instructive case study of all, because simply restoring the original opening to its intended place results in a radical transformation. It isn’t for everyone, and it’s necessary to grant De Palma his usual passes for clunky dialogue and characterization, but if you’re ready to meet it halfway, you’re rewarded with a thriller that twists back on itself like a Möbius strip. De Palma plunders his earlier movies so blatantly that it isn’t clear if he’s somehow paying loving homage to himself—bypassing Hitchcock entirely—or recycling good ideas that he feels like using again. The recut opens with a long mislead that recalls Dressed to Kill, which means that Lithgow barely even appears for the first twenty minutes. You can almost see why De Palma chickened out for the theatrical version: Lithgow’s performance as the meek Carter and his psychotic imaginary brother Cain feels too juicy to withhold. But the logic of the script was destroyed. For a film that tests an audience’s suspension of disbelief in so many other ways, it’s unclear why De Palma thought that a flashback would be too much for the viewer to handle. The theatrical release preserves all the great shock effects that are the movie’s primary reason for existing, but they don’t build to anything, and you’re left with a film that plays like a series of sketches. With the original order restored, it becomes what it was meant to be all along: a great shaggy dog story with a killer punchline.

Raising Cain is gleefully about nothing but itself, and I wouldn’t force anybody to watch it who wasn’t already interested. But the recut also serves as an excellent introduction to its director, just as the older version did for me: when I first encountered it, I doubt I’d seen anything by De Palma, except maybe The Untouchables, and Mission: Impossible was still a year away. It’s safe to say that if you like Raising Cain, you’ll like De Palma in general, and if you can’t get past its archness, campiness, and indifference to basic plausibility—well, I can hardly blame you. Watching it again, I was reminded of Blue Velvet, a far greater movie that presents the viewer with a similar test. It has the same mixture of naïveté and incredible technical virtuosity, with scenes that barely seem to have been written alternating with ones that push against the boundaries of the medium itself. You’re never quite sure if the director is in on the gag, and maybe it doesn’t matter. There isn’t much beauty in Raising Cain, and De Palma is a hackier and more mechanical director than Lynch, but both are so strongly visual that the nonsensory aspects of their films, like the obligatory scenes with the cops, seem to wither before our eyes. (It’s an approach that requires a kind of raw, intuitive trust from the cast, and as much as I enjoy what Lithgow does here, he may be too clever and resourceful an actor to really disappear into the role.) Both are rooted, crucially, in Hitchcock, who was equally obsessive, but was careful to never work from his own script. Hitchcock kept his secret self hidden, while De Palma puts it in plain sight. And if it turns out to be nothing at all, that’s probably part of the joke.

The low road to Xanadu

with 3 comments

Orson Welles in Citizen Kane

It was a miracle of rare device,
A sunny pleasure-dome with caves of ice!

—Samuel Taylor Coleridge, “Kubla Khan”

A couple of weeks ago, I wrote of Donald Trump: “He’s like Charles Foster Kane, without any of the qualities that make Kane so misleadingly attractive.” If anything, that’s overly generous to Trump himself, but it also points to a real flaw in what can legitimately be called the greatest American movie ever made. Citizen Kane is more ambiguous than it was ever intended to be, because we’re distracted throughout by our fondness for the young Orson Welles. He’s visible all too briefly in the early sequences at the Inquirer; he winks at us through his makeup as an older man; and the aura he casts was there from the beginning. As David Thomson points out in The New Biographical Dictionary of Film:

Kane is less about William Randolph Hearst—a humorless, anxious man—than a portrait and prediction of Welles himself. Given his greatest opportunity, [screenwriter Herman] Mankiewicz could only invent a story that was increasingly colored by his mixed feelings about Welles and that, he knew, would be brought to life by Welles the overpowering actor, who could not resist the chance to dress up as the old man he might one day become, and who relished the young showoff Kane just as he loved to hector and amaze the Mercury Theater.

You can see Welles in the script when Susan Alexander asks Kane if he’s “a professional magician,” or when Kane, asked if he’s still eating, replies: “I’m still hungry.” And although his presence deepens and enhances the movie’s appeal, it also undermines the story that Welles and Mankiewicz set out to tell in the first place.

As a result, the film that Hearst wanted to destroy turned out to be the best thing that could have happened to his legacy—it makes him far more interesting and likable than he ever was. The same factor tends to obscure the movie’s politics. As Pauline Kael wrote in the early seventies in the essay “Raising Kane”: “At some campus showings, they react so gullibly that when Kane makes a demagogic speech about ‘the underprivileged,’ stray students will applaud enthusiastically, and a shout of ‘Right on!’ may be heard.” But in an extraordinary review that was published when the movie was first released, Jorge Luis Borges saw through to the movie’s icy heart:

Citizen Kane…has at least two plots. The first, pointlessly banal, attempts to milk applause from dimwits: a vain millionaire collects statues, gardens, palaces, swimming pools, diamonds, cars, libraries, men and women…The second plot is far superior…At the end we realize that the fragments are not governed by any apparent unity: the detested Charles Foster Kane is a simulacrum, a chaos of appearances…In a story by Chesterton—“The Head of Caesar,” I think—the hero observes that nothing is so frightening as a labyrinth with no center. This film is precisely that labyrinth.

Borges concludes: “We all know that a party, a palace, a great undertaking, a lunch for writers and journalists, an enterprise of cordial and spontaneous camaraderie, are essentially horrendous. Citizen Kane is the first film to show such things with an awareness of this truth.” He might well be talking about the Trump campaign, which is also a labyrinth without a center. And Trump already seems to be preparing for defeat with the same defense that Kane did.

Everett Sloane in Citizen Kane

Yet if we’re looking for a real counterpart to Kane, it isn’t Trump at all, but someone standing just off to the side: his son-in-law, Jared Kushner. I’ve been interested in Kushner’s career for a long time, in part because we overlapped at college, although I doubt we’ve ever been in the same room. Ten years ago, when he bought the New York Observer, it was hard not to think of Kane, and not just because Kushner was twenty-five. It recalled the effrontery in Kane’s letter to Mr. Thatcher: “I think it would be fun to run a newspaper.” And I looked forward to seeing what Kushner would do next. His marriage to Ivanka Trump was a twist worthy of Mankiewicz, who married Kane to the president’s daughter, and as Trump lurched into politics, I wasn’t the only one wondering what Ivanka and Kushner—whose father was jailed after an investigation by Chris Christie—made of it all. Until recently, you could kid yourself that Kushner was torn between loyalty to his wife’s father and whatever else he might be feeling, even after he published his own Declaration of Principles in the Observer, writing: “My father-in-law is not an anti-Semite.” But that’s no longer possible. As the Washington Post reports, Kushner, along with former Breitbart News chief Stephen K. Bannon, personally devised the idea to seat Bill Clinton’s accusers in the family box at the second debate. The plan failed, but there’s no question that Kushner has deliberately placed himself at the center of Trump’s campaign, and that he bears an active, not passive, share of the responsibility for what promises to be the ugliest month in the history of presidential politics.

So what happened? If we’re going to press the analogy to its limit, we can picture the isolated Kane in his crumbling estate in Xanadu. It was based on Hearst Castle in San Simeon, and the movie describes it as standing on the nonexistent desert coast of Florida—but it could just as easily be a suite in Trump Tower. We all tend to surround ourselves with people with whom we agree, whether it’s online or in the communities in which we live, and if you want to picture this as a series of concentric circles, the ultimate reality distortion field must come when you’re standing in a room next to Trump himself. Now that Trump has purged his campaign of all reasonable voices, it’s easy for someone like Kushner to forget that there is a world elsewhere, and that his actions may not seem sound, or even sane, beyond those four walls. Eventually, this election will be over, and whatever the outcome, I feel more pity for Kushner than I do for his father-in-law. Trump can only stick around for so much longer, while Kushner still has half of his life ahead of him, and I have a feeling that it’s going to be defined by his decisions over the last three months. Maybe he’ll realize that he went straight from the young Kane to the old without any of the fun in between, and that his only choice may be to wall himself up in Xanadu in his thirties, with the likes of Christie, Giuliani, and Gingrich for company. As the News on the March narrator says in Kane: “An emperor of newsprint continued to direct his failing empire, vainly attempted to sway, as he once did, the destinies of a nation that had ceased to listen to him, ceased to trust him.” It’s a tragic ending for an old man. But it’s even sadder for a young one.

The excerpt opinion

leave a comment »

Norman Mailer

“It’s the rare writer who cannot have sentences lifted from his work,” Norman Mailer once wrote. What he meant is that if a reviewer is eager to find something to mock, dismiss, or pick apart, any interesting book will provide plenty of ammunition. On a simple level of craft, it’s hard for most authors to sustain a high pitch of technical proficiency in every line, and if you want to make a novelist seem boring or ordinary, you can just focus on the sentences that fall between the high points. In his famously savage takedown of Thomas Harris’s Hannibal, Martin Amis quotes another reviewer who raved: “There is not a single ugly or dead sentence.” Amis then acidly observes:

Hannibal is a genre novel, and all genre novels contain dead sentences—unless you feel the throb of life in such periods as “Tommaso put the lid back on the cooler” or “Eric Pickford answered” or “Pazzi worked like a man possessed” or “Margot laughed in spite of herself” or “Bob Sneed broke the silence.”

Amis knows that this is a cheap shot, and he glories in it. But it isn’t so different from what critics do when they list the awful sentences from a current bestseller or nominate lines for the Bad Sex in Fiction Award. I laugh at this along with anyone else, but I also wince a little, because there are few authors alive who aren’t vulnerable to that sort of treatment. As G.K. Chesterton pointed out: “You could compile the worst book in the world entirely out of selected passages from the best writers in the world.”

This is even more true of authors who take considerable stylistic or thematic risks, which usually result in individual sentences that seem crazy or, worse, silly. The fear of seeming ridiculous is what prevents a lot of writers from taking chances, and it isn’t always unjustified. An ambitious novel opens itself up to savaging from all sides, precisely because it provides so much material that can be turned against the author when taken out of context. And it doesn’t need to be malicious, either: even objective or actively sympathetic critics can be seduced by the ease with which a writer can be excerpted to make a case. I’ve become increasingly daunted by the prospect of distilling the work of Robert A. Heinlein, for example, because his career was so long, varied, and often intentionally provocative that you can find sentences to support any argument about him that you want to make. (It doesn’t help that his politics evolved drastically over time, and they probably would have undergone several more transformations if he had lived for longer.) This isn’t to say that his opinions aren’t a fair target for criticism, but any reasonable understanding of who Heinlein was and what he believed—which I’m still trying to sort out for myself—can’t be conveyed by a handful of cherry-picked quotations. Literary biography is useful primarily to the extent that it can lay out a writer’s life in an orderly fashion, providing a frame that tells us something about the work that we wouldn’t know by encountering it out of order. But even that involves a process of selection, as does everything else about a biography. The biographer’s project isn’t essentially different from that of a working critic or reviewer: it just takes place on a larger scale.

John Updike

And it’s worth noting that prolific critics themselves are particularly susceptible to this kind of treatment. When Renata Adler described Pauline Kael’s output as “not simply, jarringly, piece by piece, line by line, and without interruption, worthless,” any devotee of Kael’s work had to disagree—but it was also impossible to deny that there was plenty of evidence for the prosecution. If you’re determined to hate Roger Ebert, you just have to search for the reviews in which his opinions, written on deadline, weren’t sufficiently in line with the conclusions reached by posterity, as when he unforgivably gave only three stars to The Godfather Part II. And there isn’t a single page in the work of David Thomson, who is probably the most interesting movie critic who ever lived, that couldn’t be mined for outrageous, idiotic, or infuriating statements. I still remember a review on The A.V. Club of How to Watch a Movie that quoted lines like this:

Tell me a story, we beg as children, while wanting so many other things. Story will put off sleep (or extinction) and the child’s organism hardly trusts the habit of waking yet.

And this:

You came into this book under deceptive promises (mine) and false hopes (yours). You believed we might make decisive progress in the matter of how to watch a movie. So be it, but this was a ruse to make you look at life.

The reviewer quoted these sentences as examples of the book’s deficiencies, and they were duly excoriated in the comments. But anyone who has really read Thomson knows that such statements are part of the package, and removing them would also deny most of what makes him so fun, perverse, and valuable.

So what’s a responsible reviewer to do? We could start, maybe, by quoting longer or complete sections, rather than sentences in isolation, and by providing more context when we offer up just a line or two. We can also respect an author’s feelings, explicit or otherwise, about what sections are actually important. In the passage I mentioned at the beginning of this post, which is about John Updike, Mailer goes on to quote a few sentences from Rabbit, Run, and he adds:

The first quotation is taken from the first five sentences of the book, the second is on the next-to-last page, and the third is nothing less than the last three sentences of the novel. The beginning and end of a novel are usually worked over. They are the index to taste in the writer.

That’s a pretty good rule, and it ensures that the critic is discussing something reasonably close to what the writer intended to say. Best of all, we can approach the problem of excerpting with a kind of joy in the hunt: the search for the slice of a work that will stand as a synecdoche of the whole. In the book U & I, which is also about Updike, Nicholson Baker writes about the “standardized ID phrase” and “the aphoristic consensus” and “the jingle we will have to fight past at some point in the future” to see a writer clearly again, just as fans of Joyce have to do their best to forget about “the ineluctable modality of the visible” and “yes I said yes I will Yes.” For a living author, that repository of familiar quotations is constantly in flux, and reviewers might approach their work with a greater sense of responsibility if they realized that they were playing a part in creating it—one tiny excerpt at a time.

My alternative canon #3: The Long Goodbye

leave a comment »

Poster for The Long Goodbye by Jack Davis

Note: I’ve often discussed my favorite movies on this blog, but I also love films that are relatively overlooked or unappreciated. Over the next two weeks, I’ll be looking at some of the neglected gems, problem pictures, and flawed masterpieces that have shaped my inner life, and which might have become part of the standard cinematic canon if the circumstances had been just a little bit different. You can read the previous installments here.

During my freshman year of college, one of my first orders of business was to watch a bunch of movies I’d never had the chance to see. This was back in the late nineties, long before Netflix or streaming video, and filling in the gaps in my cinematic education was a far more haphazard process than it is now: I’d never even had a Blockbuster card. (When I finally got a video store membership, the first movie I rented at the Garage Mall in Cambridge was Twin Peaks: Fire Walk With Me.) I saw many of these films on videocassette in one of the viewing booths at Lamont Library, where you could borrow a pair of headphones and watch a title from the open stacks: it’s how I was introduced to Vertigo, Miller’s Crossing, 8 1/2, the first half of Chimes at Midnight—I never finished it—and many others, including The Long Goodbye. I’d wanted to watch it ever since reading Pauline Kael’s ecstatic review from The New Yorker, especially for the line: “What separates [Robert] Altman from other directors is that time after time he can attain crowning visual effects…and they’re so elusive they’re never precious. They’re like ribbons tying up the whole history of movies.” And when I finally took it in alone one night, I liked it for what it clearly was: a quirky satire of Los Angeles noir that managed to remain compelling despite devoting a total of about five minutes to the plot. Many of its scenes seemed even quirkier then than they did in its initial release, as when Elliott Gould, playing Philip Marlowe, is menaced by a gang of thugs that turns out to include a young, mustachioed Arnold Schwarzenegger.

But it wasn’t until I saw it again a few years later, with an enthusiastic audience at the Brattle Film Archive, that I realized how funny it was. It’s perhaps the one film, aside from M*A*S*H, in which Altman seems so willing to structure comedic set pieces with a genuine setup and payoff, with a big assist from screenwriter Leigh Brackett, who uses the framework of Raymond Chandler’s original novel as a kind of low-horsepower engine that keeps the whole thing running. The film’s basic pleasures are most obvious in the scene in which Mark Rydell’s gangster smashes a Coke bottle across his own girlfriend’s face and then says to Marlowe: “That’s someone I love! And you I don’t even like!” But an even better example is the scene in which Marlowe is hit by a car, followed by a cut to an unconscious figure in a hospital covered from head to toe in bandages—followed in turn by a shot of Marlowe, in the same room, looking balefully at the patient in the next bed. Described like this, it sounds unbearably corny, but I don’t think I’ve ever been so delighted by a gag. In fact, it might be my favorite comedy ever. (It also has my favorite movie poster, drawn with Mad-style dialogue balloons by Jack Davis, which includes a joke that I didn’t get for years. Robert Altman: “This is Nina van Pallandt, who portrays a femme fatale involved in a deceptive plot of shadowy intrigue!” Van Pallandt: “How do you want me to play it?” Altman: “From memory!”) Movies from The Big Lebowski to Inherent Vice have drawn on its mood and incomparable air of cool, but The Long Goodbye remains the great original. It tried to deflate a myth, but in the process, it became a delicious myth in itself. And part of me still wants to live in its world.

Written by nevalalee

June 8, 2016 at 9:00 am

The watchful protectors

leave a comment »

Ben Affleck in Batman V. Superman: Dawn Of Justice

In the foreword to his new book Better Living Through Criticism, the critic A.O. Scott imagines a conversation with a hypothetical interlocutor who asks: “Would it be accurate to say that you wrote this whole book to settle a score with Samuel L. Jackson?” “Not exactly,” Scott replies. The story, in case you’ve forgotten, is that after reading Scott’s negative review of The Avengers, Jackson tweeted that it was time to find the New York Times critic a job “he can actually do.” As Scott recounts:

Scores of his followers heeded his call, not by demanding that my editors fire me but, in the best Twitter tradition, by retweeting Jackson’s outburst and adding their own vivid suggestions about what I was qualified to do with myself. The more coherent tweets expressed familiar, you might even say canonical, anticritical sentiments: that I had no capacity for joy; that I wanted to ruin everyone else’s fun; that I was a hater, a square, and a snob; even—and this was kind of a new one—that the nerdy kid in middle school who everybody picked on because he didn’t like comic books had grown up to be me.

Before long, it all blew over, although not before briefly turning Scott into “both a hissable villain and a make-believe martyr for a noble and much-maligned cause.” And while he says that he didn’t write his book solely as a rebuttal to Jackson, he implies that the kerfuffle raised a valuable question: what, exactly, is the function of a critic these days?

It’s an issue that seems worth revisiting after this weekend, when a movie openly inspired by the success of The Avengers rode a tide of fan excitement to a record opening, despite a significantly less positive response from critics. (Deadline quotes an unnamed studio executive: “I don’t think anyone read the reviews!”) By some measures, it’s the biggest opening in history for a movie that received such a negative critical reaction, and if anything, the disconnect between critical and popular reaction is even more striking this time around. But it doesn’t seem to have resulted in the kind of war of words that blindsided Scott four years ago. Part of this might be due to the fact that fans seem much more mixed on the movie itself, or that the critical consensus was uniform enough that no single naysayer stood out. You could even argue—as somebody inevitably does whenever a critically panned movie becomes a big financial success—that the critical reaction is irrelevant for this kind of blockbuster. To some extent, you’d be right: the only tentpole series that seems vulnerable to reviews is the Bond franchise, which skews older, and for the most part, the moviegoers who lined up to see Dawn of Justice were taking something other than the opinions of professional critics into account. This isn’t a superpower on the movie’s part: it simply reflects a different set of concerns. And you might reasonably ask whether this kind of movie has rendered the role of a professional critic obsolete.

A.O. Scott

But I would argue that such critics are more important than ever, and for reasons that have a lot to do with the “soulless corporate spectacle” that Scott decried in The Avengers. I’ve noted here before that the individual installments in such franchises aren’t designed to stand on their own: when you’ve got ten more sequels on the release schedule, it’s hard to tell a self-contained, satisfying story, and even harder to change the status quo. (As Joss Whedon said in an interview with Mental Floss: “You’re living in franchise world—not just Marvel, but in most big films—where you can’t kill anyone, or anybody significant.”) You could be cynical and say that no particular film can be allowed to interfere with the larger synergies at stake, or, if you’re in a slightly more generous mood, you could note that this approach is perfectly consistent with the way in which superhero stories have always been told. For the most part, no one issue of Batman is meant to stand as a definitive statement: it’s a narrative that unfolds month by month, year by year, and the character of Batman himself is far more important than any specific adventure. Sustaining that situation for decades on end involves a lot of artistic compromises, as we see in the endless reboots, resets, spinoffs, and alternate universes that the comic book companies use to keep their continuities under control. Like a soap opera, a superhero comic has to create the illusion of forward momentum while remaining more or less in the same place. It’s no surprise that comic book movies would employ the same strategy, which also implies that we need to start judging them by the right set of standards.

But you could say much the same thing about a professional critic. What A.O. Scott says about any one movie may not have an impact on what the overall population of moviegoers—even the ones who read the New York Times—will pay to see, and a long string of reviews quickly blurs together. But a critic who writes thoughtfully about the movies from week to week is gradually building up a narrative, or at least a voice, that isn’t too far removed from what we find in the comics. Critics are usually more concerned with meeting that day’s deadline than with adding another brick to their life’s work, but when I think of Roger Ebert or Pauline Kael, it’s sort of how I think of Batman: it’s an image or an attitude created by its ongoing interactions with the minds of its readers. (Reading Roger Ebert’s memoirs is like revisiting a superhero’s origin story: it’s interesting, but it only incidentally touches the reasons that Ebert continues to mean so much to me.) The career of a working critic these days naturally unfolds in parallel with the franchise movies that will dominate studio filmmaking for the foreseeable future, and if the Justice League series will be defined by our engagement with it for years to come, a critic whose impact is meted out over the same stretch of time is better equipped to talk about it than almost anyone else—as long as he or she approaches it as a dialogue that never ends. If franchises are fated to last forever, we need critics who can stick around long enough to see larger patterns, to keep the conversation going, and to offer some perspective to balance out the hype. These are the critics we deserve. And they’re the ones we need right now.

The critical path

with 5 comments

Renata Adler

A few weeks ago, I had occasion to mention Renata Adler’s famous attack in the New York Review of Books on the reputation of the film critic Pauline Kael. As a lifelong Kael fan, I don’t agree with Adler—who describes Kael’s output as “not simply, jarringly, piece by piece, line by line, and without interruption, worthless”—but I respect the essay’s fire and eloquence, and it’s still a great read. What I’d forgotten is that Adler opens with an assault, not on Kael alone, but on the entire enterprise of professional criticism itself. Here’s what she says:

The job of the regular daily, weekly, or even monthly critic resembles the work of the serious intermittent critic, who writes only when he is asked to or genuinely moved to, in limited ways and for only a limited period of time…Normally, no art can support for long the play of a major intelligence, working flat out, on a quotidian basis. No serious critic can devote himself, frequently, exclusively, and indefinitely, to reviewing works most of which inevitably cannot bear, would even be misrepresented by, review in depth…

The simple truth—this is okay, this is not okay, this is vile, this resembles that, this is good indeed, this is unspeakable—is not a day’s work for a thinking adult. Some critics go shrill. Others go stale. A lot go simultaneously shrill and stale.

Adler concludes: “By far the most common tendency, however, is to stay put and simply to inflate, to pretend that each day’s text is after all a crisis—the most, first, best, worst, finest, meanest, deepest, etc.—to take on, since we are dealing in superlatives, one of the first, most unmistakable marks of the hack.” And I think that she has a point, even if I have to challenge a few of her assumptions. (The statement that most works of art “inevitably cannot bear, would even be misrepresented by, review in depth,” is particularly strange, with its implicit division of all artistic productions into the sheep and the goats. It also implies that it’s the obligation of the artist to provide a worthy subject for the major critic, when in fact it’s the other way around: as a critic, you prove yourself in large part through your ability to mine insight from the unlikeliest of sources.) Writing reviews on a daily or weekly basis, especially when you have a limited amount of time to absorb the work itself, lends itself inevitably to shortcuts, and you often find yourself falling back on the same stock phrases and judgments. And Adler’s warning about “dealing in superlatives” seems altogether prescient. As Keith Phipps and Tasha Robinson of The A.V. Club pointed out a few years back, the need to stand out in an ocean of competing coverage means that every topic under consideration becomes either an epic fail or an epic win: a sensible middle ground doesn’t generate page views.

Pauline Kael

But the situation, at least from Adler’s point of view, is even more dire than when she wrote this essay in the early eighties. When Adler’s takedown of Kael first appeared, the most threatening form of critical dilution lay in weekly movie reviews: today, we’re living in a media environment in which every episode of every television show gets thousands of words of critical analysis from multiple pop culture sites. (Adler writes: “Television, in this respect, is clearly not an art but an appliance, through which reviewable material is sometimes played.” Which is only a measure of how much the way we think and talk about the medium has changed over the intervening three decades.) The conditions that Adler identifies as necessary for the creation of a major critic like Edmund Wilson or Harold Rosenberg—time, the ability to choose one’s subjects, and the freedom to quit when necessary—have all but disappeared for most writers hoping to make a mark, or even just a living. To borrow a trendy phrase, we’ve reached a point of peak content, with a torrent of verbiage being churned out at an unsustainable pace without the advertising dollars to support it, in a situation that can be maintained only by the seemingly endless supply of aspiring writers willing to be chewed up by the machine. And if Adler thought that even a monthly reviewing schedule was deadly for serious criticism, I’d be curious to hear how she feels about the online apprenticeship that all young writers seem expected to undergo these days.

Still, I’d like to think that Adler got it wrong, just as I believe that she was ultimately mistaken about Kael, whose legacy, for all its flaws, still endures. (It’s revealing to note that Adler had a long, distinguished career as a writer and critic herself, and yet she almost certainly remains best known among casual readers for her Kael review.) Not every lengthy writeup of the latest episode of The Vampire Diaries is going to stand the test of time, but as a crucible for forming a critic’s judgment, this daily grind feels like a necessary component, even if it isn’t the only one. A critic needs time and leisure to think about major works of art, which is a situation that the current media landscape doesn’t seem prepared to offer. But the ability to form quick judgments about works of widely varying quality and to express them fluently on deadline is an indispensable part of any critic’s toolbox. When taken as an end in itself, it can be deadening, as Adler notes, but it can also be the foundation for something more, even if it has to be undertaken outside of—or despite—the critic’s day job. The critic’s responsibility, now more than ever, isn’t to detach entirely from the relentless pace of pop culture, but to find ways of channeling it into something deeper than the instant think piece or hot take. As a blogger who frequently undertakes projects that can last for months or years, I’m constantly mindful of the relationship between my work on demand and my larger ambitions. And I sure hope that the two halves can work together. Because like it or not, every critic is walking that path already.

Written by nevalalee

February 16, 2016 at 8:55 am
