Alec Nevala-Lee

Thoughts on art, creativity, and the writing life.

Posts Tagged ‘Pauline Kael’

Shoot the piano player


In his flawed but occasionally fascinating book Bambi vs. Godzilla, the playwright and director David Mamet spends a chapter discussing the concept of aesthetic distance, which is violated whenever viewers remember that they’re simply watching a movie. Mamet provides a memorable example:

An actor portrays a pianist. The actor sits down to play, and the camera moves, without a cut, to his hands, to assure us, the audience, that he is actually playing. The filmmakers, we see, have taken pains to show the viewers that no trickery has occurred, but in so doing, they have taught us only that the actor portraying the part can actually play the piano. This addresses a concern that we did not have. We never wondered if the actor could actually play the piano. We accepted the storyteller’s assurances that the character could play the piano, as we found such acceptance naturally essential to our understanding of the story.

Mamet imagines a hypothetical dialogue between the director and the audience: “I’m going to tell you a story about a pianist.” “Oh, good: I wonder what happens to her!” “But first, before I do, I will take pains to reassure you that the actor you see portraying the hero can actually play the piano.” And he concludes:

We didn’t care till the filmmaker brought it up, at which point we realized that, rather than being told a story, we were being shown a demonstration. We took off our “audience” hat and put on our “judge” hat. We judged the demonstration conclusive but, in so doing, got yanked right out of the drama. The aesthetic distance had been violated.

Let’s table this for now, and turn to a recent article in The Atlantic titled “The Remarkable Laziness of Woody Allen.” To prosecute the case laid out in the headline, the film critic Christopher Orr draws on Eric Lax’s new book Start to Finish: Woody Allen and the Art of Moviemaking, which describes the making of Irrational Man—a movie that nobody saw, which doesn’t make the book sound any less interesting. For Orr, however, it’s “an indictment framed as an encomium,” and he lists what he evidently sees as devastating charges:

Allen’s editor sometimes has to live with technical imperfections in the footage because he hasn’t shot enough takes for her to choose from…As for the shoot itself, Allen has confessed, “I don’t do any preparation. I don’t do any rehearsals. Most of the times I don’t even know what we’re going to shoot.” Indeed, Allen rarely has any conversations whatsoever with his actors before they show up on set…In addition to limiting the number of takes on any given shot, he strongly prefers “master shots”—those that capture an entire scene from one angle—over multiple shots that would subsequently need to be edited together.

For another filmmaker, all of these qualities might be seen as strengths, but that’s beside the point. Here’s the relevant passage:

The minimal commitment that appearing in an Allen film entails is a highly relevant consideration for a time-strapped actor. Lax himself notes the contrast with Mike Leigh—another director of small, art-house films—who rehearses his actors for weeks before shooting even starts. For Damien Chazelle’s La La Land, [Emma] Stone and her co-star, Ryan Gosling, rehearsed for four months before the cameras rolled. Among other chores, they practiced singing, dancing, and, in Gosling’s case, piano. The fact that Stone’s Irrational Man character plays piano is less central to that movie’s plot, but Allen didn’t expect her even to fake it. He simply shot her recital with the piano blocking her hands.

So do we shoot the piano player’s hands or not? The boring answer, unfortunately, is that it depends—but perhaps we can dig a little deeper. It seems safe to say that it would be impossible to make The Pianist with Adrien Brody’s hands conveniently blocked from view for the whole movie. But I’m equally confident that it doesn’t matter the slightest bit in Irrational Man, which I haven’t seen, whether or not Emma Stone is really playing the piano. La La Land is a slightly trickier case. It would be hard to envision it without at least a few shots of Ryan Gosling playing the piano, and Damien Chazelle isn’t above indulging in exactly the camera move that Mamet decries, in which the camera tilts down to reassure us that it’s really Gosling playing. Yet the fact that we’re even talking about this gets at a fundamental problem with the movie, which I mostly like and admire. Its characters are archetypes who draw much of their energy from the auras of the actors who play them, and in the case of Stone, who is luminous and moving as an aspiring actress suffering through an endless series of auditions, the film gets a lot of mileage from our knowledge that she’s been in the same situation. Gosling, to put it mildly, has never been an aspiring jazz pianist. This shouldn’t even matter, but every time we see him playing the piano, he briefly ceases to be a struggling artist and becomes a handsome movie star who has spent months learning to fake it. And I suspect that the movie would have been elevated immensely by casting a real musician. (This ties into another issue with La La Land, which is that it resorts to telling us that its characters deserve to be stars, rather than showing it to us in overwhelming terms through Gosling and Stone’s singing and dancing, which is merely passable. It’s in sharp contrast to Martin Scorsese’s New York, New York, one of its clear spiritual predecessors, in which it’s impossible to watch Liza Minnelli without becoming convinced that she ought to be the biggest star in the world. And when you think of how quirky, repellent, and individual Minnelli and Robert De Niro are allowed to be in that film, La La Land starts to look a little schematic.)

And I don’t think I’m overstating it when I argue that the seemingly minor dilemma of whether to show the piano player’s hands shades into the larger problem of how much we expect our actors to really be what they pretend that they are. I don’t think any less of Bill Murray because he had to employ Terry Fryer as a “hand double” for his piano solo in Groundhog Day, and I don’t mind that the most famous movie piano player of them all—Dooley Wilson in Casablanca—was faking it. And there’s no question that you’re taken out of the movie a little when you see Richard Chamberlain playing Tchaikovsky’s Piano Concerto No. 1 in The Music Lovers, however impressive it might be. (I’m willing to forgive De Niro learning to mime the saxophone for New York, New York, if only because it’s hard to imagine how it would look otherwise. The piano is just about the only instrument for which the choice can plausibly be left to the director’s discretion. And in his article, revealingly, Orr fails to mention that none other than Woody Allen was insistent that Sean Penn learn the guitar for Sweet and Lowdown. As Allen himself might say, it depends.) On some level, we respond to an actor playing the piano much like the fans of Doctor Zhivago, whom Pauline Kael devastatingly called “the same sort of people who are delighted when a stage set has running water or a painted horse looks real enough to ride.” But it can serve the story as much as it can detract from it, and the hard part is knowing how and when. As one director notes:

Anybody can learn how to play the piano. For some people it will be very, very difficult—but they can learn it. There’s almost no one who can’t learn to play the piano. There’s a wide range in the middle, of people who can play the piano with various degrees of skill; a very, very narrow band at the top, of people who can play brilliantly and build upon a technical skill to create great art. The same thing is true of cinematography and sound mixing. Just technical skills. Directing is just a technical skill.

This is Mamet writing in On Directing Film, which is possibly the single best work on storytelling I know. You might not believe him when he says that directing is “just a technical skill,” but if you do, there’s a simple way to test if you have it. Do you show the piano player’s hands? If you know the right answer for every scene, you just might be a director.

The conveyor belt


For all the endless discussion of various aspects of Twin Peaks, one quality that sometimes feels neglected is the incongruous fact that it had one of the most attractive casts in television history. In that respect—and maybe in that one alone—it was like just about every other series that ever existed. From prestige dramas to reality shows to local newscasts, the story of television has inescapably been that of beautiful men and women on camera. A show like The Hills, which was one of my guilty pleasures, seemed to be consciously trying to see how long it could coast on surface beauty alone, and nearly every series, ambitious or otherwise, has used the attractiveness of its actors as a commercial or artistic strategy. (In one of the commentary tracks on The Simpsons, a producer describes how a network executive might ask indirectly about the looks of the cast of a sitcom: “So how are we doing aesthetically?”) If this seemed even more pronounced on Twin Peaks, it was partially because, like Mad Men, it took its conventionally glamorous actors into dark, unpredictable places, and also because David Lynch had an eye for a certain kind of beauty, both male and female, that was more distinctive than that of the usual soap opera star. He’s continued this trend in the third season, which has been populated so far by such striking presences as Chrysta Bell, Ben Rosenfield, and Madeline Zima, and last night’s episode features an extended, very funny scene between a delighted Gordon Cole and a character played by Bérénice Marlohe, who, with her red lipstick and “très chic” spike heels, might be the platonic ideal of his type.

Lynch isn’t the first director to display a preference for actors, particularly women, with a very specific look—although he’s thankfully never taken it as far as his precursor Alfred Hitchcock did. And the notion that a film or television series can consist of little more than following around two beautiful people with a camera has a long and honorable history. My two favorite movies of my lifetime, Blue Velvet and Chungking Express, both understand this implicitly. It’s fair to say that the second half of the latter film would be far less watchable if it didn’t involve Tony Leung and Faye Wong, two of the most attractive people in the world, and Wong Kar-Wai, like so many filmmakers before him, uses it as a psychological hook to take us into strange, funny, romantic places. Blue Velvet is a much darker work, but it employs a similar lure, with the actors made up to look like illustrations of themselves. In a Time cover story on Lynch from the early nineties, Richard Corliss writes of Kyle MacLachlan’s face: “It is a startling visage, as pure of line as an art deco vase, with soft, all-American features and a comic-book hero’s jutting chin—you could park a Packard on it.” It echoes what Pauline Kael says of Isabella Rossellini in Blue Velvet: “She even has the kind of nostrils that cover artists can represent accurately with two dots.” MacLachlan’s chin and Rossellini’s nose would have caught our attention in any case, but it’s also a matter of lighting and makeup, and Lynch shoots them to emphasize their roots in the pulp tradition, or, more accurately, in the subconscious store of images that we take from those sources. And the casting gets him halfway there.

This leaves us in a peculiar position when it comes to the third season of Twin Peaks, which, both by nature and by design, is about aging. Mark Frost said in an interview: “It’s an exercise in engaging with one of the most powerful themes in all of art, which is the ruthless passage of time…We’re all trapped in time and we’re all going to die. We’re all traveling along this conveyor belt that is relentlessly moving us toward this very certain outcome.” One of the first, unforgettable images from the show’s promotional materials was Kyle MacLachlan’s face, a quarter of a century older, emerging from the darkness into light, and our feelings toward these characters when they were younger inevitably shape the way we regard them now. I felt this strongly in two contrasting scenes from last night’s episode. It offers us our first extended look at Sarah Palmer, played by Grace Zabriskie, who delivers a freakout in a grocery store that reminds us of how much we’ve missed and needed her—it’s one of the most electrifying moments of the season. And we also finally see Audrey Horne again, in a brutally frustrating sequence that feels to me like the first time that the show’s alienating style comes off as a miscalculation, rather than as a considered choice. Audrey isn’t just in a bad place, which we might have expected, but a sad, unpleasant one, with a sham marriage and a monster of a son, and she doesn’t even know the worst of it yet. It would be a hard scene to watch with anyone, but it’s particularly painful when we set it against our first glimpse of Audrey in the original series, when we might have said, along with the Norwegian businessman at the Great Northern Hotel: “Excuse me, is there something wrong, young pretty girl?”

Yet the two scenes aren’t all that dissimilar. Both Sarah and Audrey are deeply damaged characters who could fairly say: “Things can happen. Something happened to me.” And I can only explain away the difference by confessing that I was a little in love with Audrey in my early teens. Using those feelings against us—much as the show resists giving us Dale Cooper again, even as it extravagantly develops everything around him—must have been what Lynch and Frost had in mind. And it isn’t the first time that this series has toyed with our emotions about beauty and death. The original dream girl of Twin Peaks, after all, was Laura Palmer herself, as captured in two of its most indelible images: Laura’s prom photo, and her body wrapped in plastic. (Sheryl Lee, like January Jones in Mad Men, was originally cast for her look, and only later did anyone try to find out whether or not she could act.) The contrast between Laura’s lovely features and her horrifying fate, in death and in the afterlife, was practically the motor on which the show ran. Her face still opens every episode of the revival, dimly visible in the title sequence, but it also ended each installment of the original run, gazing out from behind the prison bars of the closing credits to the strains of “Laura Palmer’s Theme.” In the new season, the episodes generally conclude with whatever dream pop band Lynch feels like showcasing, usually with a few cool women, and I wouldn’t want to give that up. But I also wonder whether we’re missing something when we take away Laura at the end. This season began with Cooper being asked to find her, but she often seems like the last thing on anyone’s mind. Twin Peaks never allowed us to forget her before, because it left us staring at her photograph each week, which was the only time that one of its beautiful faces seemed to be looking back at us.

The driver and the signalman


In his landmark book Design With Nature, the architect Ian L. McHarg shares an anecdote from the work of an English biologist named George Scott Williamson. McHarg, who describes Williamson as “a remarkable man,” mentions him in passing in a discussion of the social aspects of health: “He believed that physical, mental, and social health were unified attributes and that there were aspects of the physical and social environment that were their corollaries.” Before diving more deeply into the subject, however, McHarg offers up an apparently unrelated story that was evidently too interesting to resist:

One of the most endearing stories of this man concerns a discovery made when he was undertaking a study of the signalmen who maintain lonely vigils while operating the switches on British railroads. The question to be studied was whether these lonely custodians were subject to boredom, which would diminish their dependability. It transpired that lonely or not, underpaid or not, these men had a strong sense of responsibility and were entirely dependable. But this was not the major perception. Williamson learned that every single signalman, from London to Glasgow, could identify infallibly the drivers of the great express trains which flashed past their vision at one hundred miles per hour. The drivers were able to express their unique personalities through the unlikely and intractable medium of some thousand tons of moving train, passing in a fraction of a second. The signalmen were perceptive to this momentary expression of the individual, and Williamson perceived the power of the personality.

I hadn’t heard of Williamson before reading this wonderful passage, and all that I know about him is that he was the founder of the Peckham Experiment, an attempt to provide inexpensive health and recreation services to a neighborhood in Southeast London. The story of the signalmen seems to make its first appearance in his book Science, Synthesis, and Sanity: An Inquiry Into the Nature of Living, which he cowrote with his wife and collaborator Innes Hope Pearse. They relate:

Or again, sitting in a railway signal box on a dark night, in the far distance from several miles away came the rumble of the express train from London. “Hallo,” said my friend the signalman. “Forsyth’s driving her—wonder what’s happened to Courtney?” Next morning, on inquiry of the stationmaster at the junction, I found it was true. Courtney had been taken ill suddenly and Forsyth had deputized for him—all unknown, of course, to the signalman who in any case had met neither Forsyth nor Courtney. He knew them only as names on paper and by their “action-pattern” impressed on a dynamic medium—a unique action-pattern transmitted through the rumble of an unseen train. Or, in a listening post with nothing visible in the sky, said the listener: “That’s ‘Lizzie,’ and Crompton’s flying her.” “Lizzie” an airplane, and her pilot imprinting his action-pattern on her course.

And while Williamson and Pearse are mostly interested in the idea of an individual’s “action-pattern” being visible in an unlikely medium, it’s hard not to come away more struck, like McHarg, by the image of the lone signalman, the passing machine, and the transient moment of connection between them.

As I read over this, it occurred to me that it perfectly encapsulated our relationship with a certain kind of pop culture. We’re the signalmen, and the movie or television show is the train. As we sit in our living rooms, lonely and relatively isolated, something passes across our field of vision—an episode of Game of Thrones, say, which often feels like a locomotive to the face. This is the first time that we’ve seen it, but it represents the end result of a process that has unfolded for months or years, as the episode was written, shot, edited, scored, and mixed, with the contributions of hundreds of men and women we wouldn’t be able to name. As we experience it, however, we see the glimmer of another human being’s personality, as expressed through the narrative machine. It isn’t just a matter of the visible choices made on the screen, but of something less definable, a “style” or “voice” or “attitude,” behind which, we think, we can make out the amorphous factors of influence and intent. We identify an artist’s obsessions, hangups, and favorite tricks, and we believe that we can recognize the mark of a distinctive style even when it goes uncredited. Sometimes we have a hunch about what happened on the set that day, or the confluence of studio politics that led to a particular decision, even if we have no way of knowing it firsthand. (This was one of the tics of Pauline Kael’s movie reviews that irritated Renata Adler: “There was also, in relation to filmmaking itself, an increasingly strident knowingness: whatever else you may think about her work, each column seemed more hectoringly to claim, she certainly does know about movies. And often, when the point appeared most knowing, it was factually false.”) We may never know the truth, but it’s enough if a theory seems plausible. And the primary difference between us and the railway signalman is that we can share our observations with everyone in sight.

I’m not saying that these inferences are necessarily incorrect, any more than the signalmen were wrong when they recognized the personal styles of particular drivers. If Williamson’s account is accurate, they were often right. But it’s worth emphasizing that the idea that you can recognize a driver from the passage of a train is no less strange than the notion that we can know something about, say, Christopher Nolan’s personality from Dunkirk. Both are “unlikely and intractable” mediums that serve as force multipliers for individual ability, and in the case of a television show or movie, there are countless unseen variables that complicate our efforts to attribute anything to anyone, much less pick apart the motivations behind specific details. The auteur theory in film represents an attempt to read movies like novels, but as Thomas Schatz pointed out decades ago in his book The Genius of the System, trying to read Casablanca as the handiwork of Michael Curtiz, rather than that of all of its collaborators taken together, is inherently problematic. And this is easy to forget. (I was reminded of this by the recent controversy over David Benioff and D.B. Weiss’s pitch for their Civil War alternate history series Confederate. I agree with the case against it that the critic Roxane Gay presents in her opinion piece for the New York Times, but the fact that we’re closely scrutinizing a few paragraphs for clues about the merits of a show that doesn’t even exist only hints at how fraught the conversation will be after it actually premieres.) There’s a place for informed critical discussion about any work of art, but we’re often drawing conclusions based on the momentary passage of a huge machine before our eyes, and we don’t know much about how it got there or what might be happening inside. Most of us aren’t even signalmen, who are a part of the system itself. We’re trainspotters.

The genius naïf


Last night, after watching the latest episode of Twin Peaks, I turned off the television before the premiere of the seventh season of Game of Thrones. This is mostly because I only feel like subscribing to one premium channel at a time, but even if I still had HBO, I doubt that I would have tuned in. I gave up on Game of Thrones a while back, both because I was uncomfortable with its sexual violence and because I felt that the average episode had degenerated into a holding pattern—it cut between storylines simply to remind us that they still existed, and it relied on unexpected character deaths and bursts of bloodshed to keep the audience awake. The funny thing, of course, is that you could level pretty much the same charges against the third season of Twin Peaks, which I’m slowly starting to feel may be the television event of the decade. Its images of violence against women are just as unsettling now as they were a quarter of a century ago, when Madeleine Ferguson met her undeserved end; it cuts from one subplot to another so inscrutably that I’ve compared its structure to that of a sketch comedy show; and it has already delivered a few scenes that rank among the goriest in recent memory. So what’s the difference? If you’re feeling generous, you can say that one is an opportunistic display of popular craftsmanship, while the other is a singular, if sometimes incomprehensible, artistic vision. And if you’re less forgiving, you can argue that I’m being hard on one show that I concluded was jerking me around, while indulging another that I wanted badly to love.

It’s a fair point, although I don’t think it’s necessarily true, based solely on my experience of each show in the moment. I’ve often found my attention wandering during even solid episodes of Game of Thrones, while I’m rarely less than absorbed for the full hour of Twin Peaks, even though I’d have trouble explaining why. But there’s no denying the fact that I approach each show in a different state of mind. One of the most obvious criticisms of Twin Peaks, then and now, is that its pedigree prompts viewers to overlook or forgive scenes that might seem questionable in a more conventional series. (There have been times, I’ll confess, when I’ve felt like Homer Simpson chuckling “Brilliant!” and then confessing: “I have absolutely no idea what’s going on.”) Yet I don’t think we need to apologize for this. The history of the series, the track record of its creators, and everything implied by its brand mean that most viewers are willing to give it the benefit of the doubt. David Lynch and Mark Frost are clearly aware of their position, and they’ve leveraged it to the utmost, resulting in a show in which they’re free to do just about anything they like. It’s hard to imagine any other series getting away with this, but it’s also hard to imagine another show persuading a million viewers each week to meet it halfway. The implicit contract between Game of Thrones and its audience is very different, which makes the show’s lapses harder to forgive. One of the great fascinations of Lynch’s career is whether he even knows what he’s doing half the time, and it’s much less interesting to ask this question of David Benioff and D.B. Weiss, or of Chris Carter.

By now, I don’t think there’s any doubt that Lynch knows exactly what he’s doing, but that confusion is still central to his appeal. Pauline Kael’s review of Blue Velvet might have been written of last night’s Twin Peaks:

You wouldn’t mistake frames from Blue Velvet for frames from any other movie. It’s an anomaly—the work of a genius naïf. If you feel that there’s very little art between you and the filmmaker’s psyche, it may be because there’s less than the usual amount of inhibition…It’s easy to forget about the plot, because that’s where Lynch’s naïve approach has its disadvantages: Lumberton’s subterranean criminal life needs to be as organic as the scrambling insects, and it isn’t. Lynch doesn’t show us how the criminals operate or how they’re bound to each other. So the story isn’t grounded in anything and has to be explained in little driblets of dialogue. But Blue Velvet has so much aural-visual humor and poetry that it’s sustained despite the wobbly plot and the bland functional dialogue (that’s sometimes a deliberate spoof of small-town conventionality and sometimes maybe not)…Lynch skimps on these commercial-movie basics and fouls up on them, too, but it’s as if he were reinventing movies.

David Thomson, in turn, called the experience of seeing Blue Velvet a moment of transcendence: “A kind of passionate involvement with both the story and the making of a film, so that I was simultaneously moved by the enactment on screen and by discovering that a new director had made the medium alive and dangerous again.”

Twin Peaks feels more alive and dangerous than Game of Thrones ever did, and the difference, I think, lies in our awareness of the effects that the latter is trying to achieve. Even at its most shocking, there was never any question about what kind of impact it wanted to have, as embodied by the countless reaction videos that it inspired. (When you try to imagine videos of viewers reacting to Twin Peaks, you get a sense of the aesthetic abyss that lies between these two shows.) There was rarely a scene in which the intended emotion wasn’t clear, and even when it deliberately sought to subvert our expectations, it was by substituting one stimulus and response for another—which doesn’t mean that it wasn’t effective, or that there weren’t moments, at its best, that affected me as powerfully as any I’ve ever seen. Even the endless succession of “Meanwhile, back at the Wall” scenes had a comprehensible structural purpose. On Twin Peaks, by contrast, there’s rarely any sense of how we’re supposed to be feeling about any of it. Its violence is shocking because it doesn’t seem to serve anything, certainly not anyone’s character arc, and our laughter is often uncomfortable, so that we don’t know if we’re laughing at the situation onscreen, at the show, or at ourselves. It may not be an experiment that needs to be repeated ever again, any more than Blue Velvet truly “reinvented” anything over the long run, except my own inner life. But at a time when so many prestige dramas seem content to push our buttons in ever more expert and ruthless ways, I’m grateful for a show that resists easy labels. Lynch may or may not be a genius naïf, but no ordinary professional could have done what he does here.


We lost it at the movies


Over a decade ago, the New Yorker film critic David Denby published a memoir titled American Sucker. I read it when it first came out, and I honestly can’t remember much about it, but there’s one section that has stuck in my mind ever since. Denby is writing of his obsession with investing, which has caused him to lose much of what he once loved about life, and he concludes sadly:

Well, you can’t get back to that. Do your job, then. After much starting and stopping, and considerable shifting of clauses, all the while watching the Nasdaq run above 5,000 on the CNNfn website, I put together the following as the opening of a review.

It happens to be his piece on Steven Soderbergh’s Erin Brockovich, which begins like this:

In Erin Brockovich, Julia Roberts appears in scene after scene wearing halter tops with a bit of bra showing; there’s a good bit of leg showing, too, often while she’s holding an infant on one arm. This upbeat, inspirational melodrama, based on a true story and written by Susannah Grant and directed by Steven Soderbergh, has been brought to life by a movie star on a heavenly rampage. Roberts swings into rooms, ablaze with indignation, her breasts pushed up and bulging out of the skimpy tops, and she rants at the people gaping at her. She’s a mother and a moral heroine who dresses like trailer trash but then snaps at anyone who doesn’t take her seriously—a real babe in arms, who gets to protect the weak and tell off the powerful while never turning her back on what she is.

Denby stops to evaluate his work: “Nothing great, but not bad either. I was reasonably happy with it as a lead—it moves, it’s active, it conveys a little of my pleasure in the picture. I got up and walked around the outer perimeter of the twentieth floor, looking west, looking east.”

I’ve never forgotten this passage, in part because it represents one of the few instances in which a prominent film critic has pulled back the curtain on an obvious but rarely acknowledged fact—that criticism is a genre of writing in itself, and that the phrases with which a movie is praised, analyzed, or dismissed are subject to the same sort of tinkering, revision, and doubt that we associate with other forms of expression. Critics are only human, even if they sometimes try to pretend that they aren’t, as they present their opinions as the product of an unruffled sensibility. I found myself thinking of this again as I followed the recent furor over David Edelstein’s review of Wonder Woman in New York magazine, which starts as follows:

The only grace note in the generally clunky Wonder Woman is its star, the five-foot-ten-inch Israeli actress and model Gal Gadot, who is somehow the perfect blend of superbabe-in-the-woods innocence and mouthiness. She plays Diana, the daughter of the Amazon queen Hippolyta (Connie Nielsen) and a trained warrior. But she’s also a militant peacenik. Diana lives with Amazon women on a mystically shrouded island but she’s not Amazonian herself. She was, we’re told, sculpted by her mother from clay and brought to life by Zeus. (I’d like to have seen that.)

Edelstein was roundly attacked for what was perceived as the sexist tone of his review, which also includes such observations as “Israeli women are a breed unto themselves, which I say with both admiration and trepidation,” and “Fans might be disappointed that there’s no trace of the comic’s well-documented S&M kinkiness.” He responded with a private Facebook post, widely circulated, in which he wrote: “Right now I think the problem is that some people can’t read.” And he has since written a longer, more apologetic piece in which he tries to explain his choice of words.

I haven’t seen Wonder Woman, although I’m looking forward to it, so I won’t wade too far into the controversy itself. But when I look at these two reviews—which, significantly, are about films focusing on different sorts of heroines—I see some striking parallels. It isn’t just the echo of “a real babe in arms” with “superbabe-in-the-woods,” or how Brockovich “gets to protect the weak and tell off the powerful” while Diana is praised for her “mouthiness.” It’s something in the rhythm of their openings, which start at a full sprint with a consideration of a movie star’s appearance. As Denby says, “it moves, it’s active,” almost to a fault. Here are three additional examples, taken at random from the first paragraphs of reviews published in The New Yorker:

Gene Wilder stares at the world with nearsighted, pale-blue-eyed wonder; he was born with a comic’s flyblown wig and the look of a reddish creature from outer space. His features aren’t distinct; his personality lacks definition. His whole appearance is so fuzzy and weak he’s like mist on the lens.

There is a thick, raw sensuality that some adolescents have which seems almost preconscious. In Saturday Night Fever, John Travolta has this rawness to such a degree that he seems naturally exaggerated: an Expressionist painter’s view of a young role. As Tony, a nineteen-year-old Italian Catholic who works selling paint in a hardware store in Brooklyn’s Bay Ridge, he wears his heavy black hair brushed up in a blower-dried pompadour. His large, wide mouth stretches across his narrow face, and his eyes—small slits, close together—are, unexpectedly, glintingly blue and panicky.

As Jake La Motta, the former middleweight boxing champ, in Raging Bull, Robert De Niro wears scar tissue and a big, bent nose that deform his face. It’s a miracle that he didn’t grow them—he grew everything else. He developed a thick-muscled neck and a fighter’s body, and for the scenes of the broken, drunken La Motta he put on so much weight that he seems to have sunk in the fat with hardly a trace of himself left.

All of these reviews were written, of course, by Pauline Kael, who remains the movie critic who has inspired the greatest degree of imitation among her followers. And when you go back and read Denby and Edelstein’s openings, they feel like Kael impersonations, which is the mode on which a critic tends to fall back when he or she wants to start a review so that “it moves, it’s active.” Beginning with a description of the star, delivered in her trademark hyperaware, slightly hyperbolic style, was one of Kael’s stock devices, as if she were observing an animal seen in the wild and frantically jotting down her impressions before they faded. It’s a technical trick, but it’s a good one, and it isn’t surprising that Kael’s followers like to employ it, consciously or otherwise. It’s when a male critic uses it to describe the appearance of a woman that we run into trouble. (The real offender here isn’t Denby or Edelstein, but Anthony Lane, Kael’s successor at The New Yorker, whose reviews have the curious habit of panning a movie for a page and a half, and then pausing a third of the way from the end to rhapsodize about the appearance of a starlet in a supporting role, which is presented as its only saving grace. He often seems to be leering at her a little, which is possibly an inadvertent consequence of his literary debt to Kael. When Lane says of Scarlett Johansson, “She seemed to be made from champagne,” he’s echoing the Kael who wrote of Madeline Kahn: “When you look at her, you see a water bed at just the right temperature.”) Kael was a sensualist, and to the critics who came after her, who are overwhelmingly male, she bequeathed a toolbox that is both powerful and susceptible to misuse when utilized reflexively or unthinkingly. I don’t think that Edelstein is necessarily sexist, but he was certainly careless, and in his routine ventriloquism of Kael, which to a professional critic comes as easily as breathing, he temporarily forgot who he was and what movie he was reviewing. Kael was the Wonder Woman of film critics. But when we try to channel her voice, and we can hardly help it, it’s worth remembering—as another superhero famously learned—that with great power comes great responsibility.

The critical path



Note: I’m taking a few days off, so I’ll be republishing some of my favorite pieces from earlier in this blog’s run. This post originally appeared, in a slightly different form, on February 16, 2016.

Every few years or so, I go back and revisit Renata Adler’s famous attack in the New York Review of Books on the reputation of the film critic Pauline Kael. As a lifelong Kael fan, I don’t agree with Adler—who describes Kael’s output as “not simply, jarringly, piece by piece, line by line, and without interruption, worthless”—but I respect the essay’s fire and eloquence, and it’s still a great read. What is sometimes forgotten is that Adler opens with an assault, not on Kael alone, but on the entire enterprise of professional criticism itself. Here’s what she says:

The job of the regular daily, weekly, or even monthly critic resembles the work of the serious intermittent critic, who writes only when he is asked to or genuinely moved to, in limited ways and for only a limited period of time…Normally, no art can support for long the play of a major intelligence, working flat out, on a quotidian basis. No serious critic can devote himself, frequently, exclusively, and indefinitely, to reviewing works most of which inevitably cannot bear, would even be misrepresented by, review in depth…

The simple truth—this is okay, this is not okay, this is vile, this resembles that, this is good indeed, this is unspeakable—is not a day’s work for a thinking adult. Some critics go shrill. Others go stale. A lot go simultaneously shrill and stale.

Adler concludes: “By far the most common tendency, however, is to stay put and simply to inflate, to pretend that each day’s text is after all a crisis—the most, first, best, worst, finest, meanest, deepest, etc.—to take on, since we are dealing in superlatives, one of the first, most unmistakable marks of the hack.” And I think that she has a point, even if I have to challenge a few of her assumptions. (The statement that most works of art “inevitably cannot bear, would even be misrepresented by, review in depth,” is particularly strange, with its implicit division of all artistic productions into the sheep and the goats. It also implies that it’s the obligation of the artist to provide a worthy subject for the major critic, when in fact it’s the other way around: as a critic, you prove yourself in large part through your ability to mine insight from the unlikeliest of sources.) Writing reviews on a daily or weekly basis, especially when you have a limited amount of time to absorb the work itself, lends itself inevitably to shortcuts, and you often find yourself falling back on the same stock phrases and judgments. And Adler’s warning about “dealing in superlatives” seems altogether prescient. As Keith Phipps and Tasha Robinson of The A.V. Club pointed out a few years back, the need to stand out in an ocean of competing coverage means that every topic under consideration becomes either an epic fail or an epic win: a sensible middle ground doesn’t generate page views.


But the situation, at least from Adler’s point of view, is even more dire than when she wrote this essay in the early eighties. When Adler’s takedown of Kael first appeared, the most threatening form of critical dilution lay in weekly movie reviews: today, we’re living in a media environment in which every episode of every television show gets thousands of words of critical analysis from multiple pop culture sites. (Adler writes: “Television, in this respect, is clearly not an art but an appliance, through which reviewable material is sometimes played.” Which is only a measure of how much the way we think and talk about the medium has changed over the intervening three decades.) The conditions that Adler identifies as necessary for the creation of a major critic like Edmund Wilson or Harold Rosenberg—time, the ability to choose one’s subjects, and the freedom to quit when necessary—have all but disappeared for most writers hoping to make a mark, or even just a living. To borrow a trendy phrase, we’ve reached a point of peak content, with a torrent of verbiage being churned out at an unsustainable pace without the advertising dollars to support it, in a situation that can be maintained only by the seemingly endless supply of aspiring writers willing to be chewed up by the machine. And if Adler thought that even a monthly reviewing schedule was deadly for serious criticism, I’d be curious to hear how she feels about the online apprenticeship that all young writers seem expected to undergo these days.

Still, I’d like to think that Adler got it wrong, just as I believe that she was ultimately mistaken about Kael, whose legacy, for all its flaws, still endures. (It’s revealing to note that Adler had a long, distinguished career as a writer and critic herself, and yet she almost certainly remains best known among casual readers for her Kael review.) Not every lengthy writeup of the latest episode of Riverdale is going to stand the test of time, but as a crucible for forming a critic’s judgment, this daily grind feels like a necessary component, even if it isn’t the only one. A critic needs time and leisure to think about major works of art, which is a situation that the current media landscape doesn’t seem prepared to offer. But the ability to form quick judgments about works of widely varying quality and to express them fluently on deadline is an indispensable part of any critic’s toolbox. When taken as an end itself, it can be deadening, as Adler notes, but it can also be the foundation for something more, even if it has to be undertaken outside of—or despite—the critic’s day job. The critic’s responsibility, now more than ever, isn’t to detach entirely from the relentless pace of pop culture, but to find ways of channeling it into something deeper than the instantaneous think piece or hot take. As a daily blogger who also undertakes projects that can last for months or years, I’m constantly mindful of the relationship between my work on demand and my larger ambitions. And I sure hope that the two halves can work together. Because, like it or not, every critic is walking that path already.


The art of the anti-blurb


In a recent issue of The New Yorker, the critic Dan Chiasson offers up an appraisal of the poet Bill Knott, who died in 2014. To be honest, I’d either never heard of Knott or forgotten his name, but I suspect that he might have been pleased by this. Knott, who taught for decades at Emerson College, spent his entire career sticking resolutely to the edges of the literary world, distancing himself from mainstream publishers and electing to distribute his poems himself in cheap editions on Amazon. Chiasson relates:

The books that did make it to print usually featured brutal “anti-blurbs,” which Knott culled from reviews good and bad alike: his work was “grotesque,” “malignant,” “tasteless,” and “brainless,” according to some of the big names of the day.

Here are a few more of the blurbs he reprinted: “Bill Knott’s ancient, academic ramblings are part of what’s wrong with poetry today. Ignore the old bastard.” “Bill Knott bores me to tears.” “Bill Knott should be beaten with a flail.” “Bill Knott’s poems are so naïve that the question of their poetic quality hardly arises…Mr. Knott practices a dead language.” According to another reminiscence by the editor Robert P. Baird, Knott sometimes took it even further: “On his various blogs, which spawned and deceased like mayflies, he posted collages of rejection slips and a running tally of anti-blurbs: positive reviews and compliments that he’d carved up with ellipses to read like pans.” Even his actual negative reviews weren’t enough—Knott felt obliged to create his own.

The idea of a writer embracing his attackers has an obvious subversive appeal. Norman Mailer, revealingly, liked the idea so much that he indulged in it no fewer than three times, and far less nimbly than Knott did. After the release of The Deer Park, he ran an ad in The Village Voice that amounted to a parody of the usual collage of laudatory quotes—“The year’s worst snake pit in fiction,” “Moronic mindlessness,” “A bunch of bums”—and noted in fine print at the bottom, just in case we didn’t get the point: “This advertisement was paid for by Norman Mailer.” Two decades later, he decided to do the same thing with Marilyn, mostly as a roundabout way of responding to a single bad review by Pauline Kael. As the editor Robert Markel recalls in Peter Manso’s oral biography:

The book was still selling well when [Mailer] came in with his idea of a full two-page ad. Since he was now more or less in the hands of [publisher] Harold Roth, there was a big meeting in Harold’s office. What he wanted to do was exactly what he’d done with The Village Voice ad for The Deer Park: present all the positive and negative reviews, including Kael’s, setting the two in opposition. Harold was very much against it. He thought the two pages would be a stupid waste of money, but more, it was the adversarial nature of the ad as Norman conceived it.

Ultimately, Mailer persuaded Roth to play along: “He implied he’d made a study of this kind of thing and knew what he was talking about.” And a decade down the line, he did it yet again with his novel Ancient Evenings, printing up a counter display for bookstores with bad reviews for Moby Dick, Anna Karenina, Leaves of Grass, and his own book, followed by a line with a familiar ring to it: “The quotations in this poster were selected by Norman Mailer.”

This compulsiveness about reprinting his bad reviews, and his insistence that everyone know that he had conceived and approved of it, is worth analyzing, because it’s very different from Knott’s. Mailer’s whole life was built on sustaining an image of intellectual machismo that often rested on unstable foundations, and embracing the drubbings that his books received was a way of signaling that he was tougher than his critics. Like so much else, it was a pose—Mailer hungered for fame and attention, and he felt his negative reviews as keenly as anyone. When Time ran a snarky notice of his poetry collection Deaths for the Ladies, Mailer replied, “in a fury of incalculable pains,” with a poem of his own, in which he compared himself to a bull in the ring and the reviewer to a cowardly picador. He recalled in Existential Errands:

The review in Time put iron into my heart again, and rage, and the feeling that the enemy was more alive than ever, and dirtier in the alley, and so one had to mend, and put on the armor, and go to war, go out to war again, and try to hew huge strokes with the only broadsword God ever gave you, a glimpse of something like Almighty prose.

This is probably a much healthier response. But in the contrast between Mailer’s expensive advertisements for himself and Knott’s photocopied chapbooks, you can see the difference between a piece of performance art and a philosophy of life truly lived. Of the two, Mailer ends up seeming more vulnerable. As he admits: “I had secret hopes, I now confess, that Deaths for the Ladies would be a vast success at the bar of poetry.”

Of course, Knott’s attitude was a bit of a pose as well. Chiasson once encountered his own name on Knott’s blog, which referred to him as “Chiasson-the-Assassin”—an epithet that suggests the poet’s attitude toward critics was something other than indifference. But it was also a pose that was indistinguishable from the man inside, as Elisa Gabbert, one of Knott’s former students, observed: “It was kind of a goof, but that was his whole life. It was a really grand goof.” And you can judge them by their fruits. Mailer’s advertisements are brilliant, but the product that they’re selling is Mailer himself, and you’re clearly supposed to depart with the impression that the critics have trashed a major work of art. After reading Knott’s anti-blurbs, you end up questioning the whole notion of laudatory quotes itself, which is a more productive kind of skepticism. (David Lynch pulled off something similar when he printed an ad for Lost Highway with the words: “Two Thumbs Down!” In response, Roger Ebert wrote: “It’s creative to use the quote in that way…These days quotes in movie ads have been devalued by the ‘quote whores’ who supply gushing praise to publicists weeks in advance of an opening.” The situation with blurbs is slightly different, but there’s no question that they’ve been devalued as well—a book without “advance praise” looks vaguely suspicious, so the only meaningful fact about most blurbs is that they exist.) Resistance to reviews is so hard for a writer to maintain that asserting it feels like a kind of superpower. If asked, Mailer might have replied, like Bruce Banner in The Avengers: “That’s my secret. I’m always angry.” But I have a hunch that the truth is closer to what Wolverine says when Rogue asks if it hurts when his claws come out: “Every time.”
