Note: I’m taking a few days off, so I’ll be republishing some of my favorite pieces from earlier in this blog’s run. This post originally appeared, in a slightly different form, on February 16, 2016.
Every few years or so, I go back and revisit Renata Adler’s famous attack in the New York Review of Books on the reputation of the film critic Pauline Kael. As a lifelong Kael fan, I don’t agree with Adler—who describes Kael’s output as “not simply, jarringly, piece by piece, line by line, and without interruption, worthless”—but I respect the essay’s fire and eloquence, and it’s still a great read. What is sometimes forgotten is that Adler opens with an assault, not on Kael alone, but on the entire enterprise of professional criticism itself. Here’s what she says:
The job of the regular daily, weekly, or even monthly critic resembles the work of the serious intermittent critic, who writes only when he is asked to or genuinely moved to, in limited ways and for only a limited period of time…Normally, no art can support for long the play of a major intelligence, working flat out, on a quotidian basis. No serious critic can devote himself, frequently, exclusively, and indefinitely, to reviewing works most of which inevitably cannot bear, would even be misrepresented by, review in depth…
The simple truth—this is okay, this is not okay, this is vile, this resembles that, this is good indeed, this is unspeakable—is not a day’s work for a thinking adult. Some critics go shrill. Others go stale. A lot go simultaneously shrill and stale.
Adler concludes: “By far the most common tendency, however, is to stay put and simply to inflate, to pretend that each day’s text is after all a crisis—the most, first, best, worst, finest, meanest, deepest, etc.—to take on, since we are dealing in superlatives, one of the first, most unmistakable marks of the hack.” And I think that she has a point, even if I have to challenge a few of her assumptions. (The statement that most works of art “inevitably cannot bear, would even be misrepresented by, review in depth,” is particularly strange, with its implicit division of all artistic productions into the sheep and the goats. It also implies that it’s the obligation of the artist to provide a worthy subject for the major critic, when in fact it’s the other way around: as a critic, you prove yourself in large part through your ability to mine insight from the unlikeliest of sources.) Writing reviews on a daily or weekly basis, especially when you have a limited amount of time to absorb the work itself, lends itself inevitably to shortcuts, and you often find yourself falling back on the same stock phrases and judgments. And Adler’s warning about “dealing in superlatives” seems altogether prescient. As Keith Phipps and Tasha Robinson of The A.V. Club pointed out a few years back, the need to stand out in an ocean of competing coverage means that every topic under consideration becomes either an epic fail or an epic win: a sensible middle ground doesn’t generate page views.
But the situation, at least from Adler’s point of view, is even more dire than when she wrote this essay in the early eighties. When Adler’s takedown of Kael first appeared, the most threatening form of critical dilution lay in weekly movie reviews: today, we’re living in a media environment in which every episode of every television show gets thousands of words of critical analysis from multiple pop culture sites. (Adler writes: “Television, in this respect, is clearly not an art but an appliance, through which reviewable material is sometimes played.” Which is only a measure of how much the way we think and talk about the medium has changed over the intervening three decades.) The conditions that Adler identifies as necessary for the creation of a major critic like Edmund Wilson or Harold Rosenberg—time, the ability to choose one’s subjects, and the freedom to quit when necessary—have all but disappeared for most writers hoping to make a mark, or even just a living. To borrow a trendy phrase, we’ve reached a point of peak content, with a torrent of verbiage being churned out at an unsustainable pace without the advertising dollars to support it, in a situation that can be maintained only by the seemingly endless supply of aspiring writers willing to be chewed up by the machine. And if Adler thought that even a monthly reviewing schedule was deadly for serious criticism, I’d be curious to hear how she feels about the online apprenticeship that all young writers seem expected to undergo these days.
Still, I’d like to think that Adler got it wrong, just as I believe that she was ultimately mistaken about Kael, whose legacy, for all its flaws, still endures. (It’s revealing to note that Adler had a long, distinguished career as a writer and critic herself, and yet she almost certainly remains best known among casual readers for her Kael review.) Not every lengthy writeup of the latest episode of Riverdale is going to stand the test of time, but as a crucible for forming a critic’s judgment, this daily grind feels like a necessary component, even if it isn’t the only one. A critic needs time and leisure to think about major works of art, which is a situation that the current media landscape doesn’t seem prepared to offer. But the ability to form quick judgments about works of widely varying quality and to express them fluently on deadline is an indispensable part of any critic’s toolbox. When taken as an end in itself, it can be deadening, as Adler notes, but it can also be the foundation for something more, even if it has to be undertaken outside of—or despite—the critic’s day job. The critic’s responsibility, now more than ever, isn’t to detach entirely from the relentless pace of pop culture, but to find ways of channeling it into something deeper than the instantaneous think piece or hot take. As a daily blogger who also undertakes projects that can last for months or years, I’m constantly mindful of the relationship between my work on demand and my larger ambitions. And I sure hope that the two halves can work together. Because, like it or not, every critic is walking that path already.
In a recent issue of The New Yorker, the critic Dan Chiasson offers up an appraisal of the poet Bill Knott, who died in 2014. To be honest, I’d either never heard of Knott or forgotten his name, but I suspect that he might have been pleased by this. Knott, who taught for decades at Emerson College, spent his entire career sticking resolutely to the edges of the literary world, distancing himself from mainstream publishers and electing to distribute his poems himself in cheap editions on Amazon. Chiasson relates:
The books that did make it to print usually featured brutal “anti-blurbs,” which Knott culled from reviews good and bad alike: his work was “grotesque,” “malignant,” “tasteless,” and “brainless,” according to some of the big names of the day.
Here are a few more of the blurbs he reprinted: “Bill Knott’s ancient, academic ramblings are part of what’s wrong with poetry today. Ignore the old bastard.” “Bill Knott bores me to tears.” “Bill Knott should be beaten with a flail.” “Bill Knott’s poems are so naïve that the question of their poetic quality hardly arises…Mr. Knott practices a dead language.” According to another reminiscence by the editor Robert P. Baird, Knott sometimes took it even further: “On his various blogs, which spawned and deceased like mayflies, he posted collages of rejection slips and a running tally of anti-blurbs: positive reviews and compliments that he’d carved up with ellipses to read like pans.” Even his actual negative reviews weren’t enough—Knott felt obliged to create his own.
The idea of a writer embracing his attackers has an obvious subversive appeal. Norman Mailer, revealingly, liked the idea so much that he indulged in it no fewer than three times, and far less nimbly than Knott did. After the release of The Deer Park, he ran an ad in The Village Voice that amounted to a parody of the usual collage of laudatory quotes—“The year’s worst snake pit in fiction,” “Moronic mindlessness,” “A bunch of bums”—and noted in fine print at the bottom, just in case we didn’t get the point: “This advertisement was paid for by Norman Mailer.” Two decades later, he decided to do the same thing with Marilyn, mostly as a roundabout way of responding to a single bad review by Pauline Kael. As the editor Robert Markel recalls in Peter Manso’s oral biography:
The book was still selling well when [Mailer] came in with his idea of a full two-page ad. Since he was now more or less in the hands of [publisher] Harold Roth, there was a big meeting in Harold’s office. What he wanted to do was exactly what he’d done with The Village Voice ad for The Deer Park: present all the positive and negative reviews, including Kael’s, setting the two in opposition. Harold was very much against it. He thought the two pages would be a stupid waste of money, but more, it was the adversarial nature of the ad as Norman conceived it.
Ultimately, Mailer persuaded Roth to play along: “He implied he’d made a study of this kind of thing and knew what he was talking about.” And a decade down the line, he did it yet again with his novel Ancient Evenings, printing up a counter display for bookstores with bad reviews for Moby Dick, Anna Karenina, Leaves of Grass, and his own book, followed by a line with a familiar ring to it: “The quotations in this poster were selected by Norman Mailer.”
This compulsiveness about reprinting his bad reviews, and his insistence that everyone know that he had conceived and approved of it, is worth analyzing, because it’s very different from Knott’s. Mailer’s whole life was built on sustaining an image of intellectual machismo that often rested on unstable foundations, and embracing the drubbings that his books received was a way of signaling that he was tougher than his critics. Like so much else, it was a pose—Mailer hungered for fame and attention, and he felt his negative reviews as keenly as anyone. When Time ran a snarky notice of his poetry collection Deaths for the Ladies, Mailer replied, “in a fury of incalculable pains,” with a poem of his own, in which he compared himself to a bull in the ring and the reviewer to a cowardly picador. He recalled in Existential Errands:
The review in Time put iron into my heart again, and rage, and the feeling that the enemy was more alive than ever, and dirtier in the alley, and so one had to mend, and put on the armor, and go to war, go out to war again, and try to hew huge strokes with the only broadsword God ever gave you, a glimpse of something like Almighty prose.
This is probably a much healthier response. But in the contrast between Mailer’s expensive advertisements for himself and Knott’s photocopied chapbooks, you can see the difference between a piece of performance art and a philosophy of life truly lived. Of the two, Mailer ends up seeming more vulnerable. As he admits: “I had secret hopes, I now confess, that Deaths for the Ladies would be a vast success at the bar of poetry.”
Of course, Knott’s attitude was a bit of a pose as well. Chiasson once encountered his own name on Knott’s blog, which referred to him as “Chiasson-the-Assassin,” which indicates that the poet’s attitude toward critics was something other than indifference. But it was also a pose that was indistinguishable from the man inside, as Elisa Gabbert, one of Knott’s former students, observed: “It was kind of a goof, but that was his whole life. It was a really grand goof.” And you can judge them by their fruits. Mailer’s advertisements are brilliant, but the product that they’re selling is Mailer himself, and you’re clearly supposed to depart with the impression that the critics have trashed a major work of art. After reading Knott’s anti-blurbs, you end up questioning the whole notion of laudatory quotes itself, which is a more productive kind of skepticism. (David Lynch pulled off something similar when he printed an ad for Lost Highway with the words: “Two Thumbs Down!” In response, Roger Ebert wrote: “It’s creative to use the quote in that way…These days quotes in movie ads have been devalued by the ‘quote whores’ who supply gushing praise to publicists weeks in advance of an opening.” The situation with blurbs is slightly different, but there’s no question that they’ve been devalued as well—a book without “advance praise” looks vaguely suspicious, so the only meaningful fact about most blurbs is that they exist.) Resistance to reviews is so hard for a writer to maintain that asserting it feels like a kind of superpower. If asked, Mailer might have replied, like Bruce Banner in The Avengers: “That’s my secret. I’m always angry.” But I have a hunch that the truth is closer to what Wolverine says when Rogue asks if it hurts when his claws come out: “Every time.”
Over the last year or so, I’ve found myself repeatedly struck by the parallels between the careers of John W. Campbell and Orson Welles. At first, the connection might seem tenuous. Campbell and Welles didn’t look anything alike, although they were about the same height, and their politics couldn’t have been more different—Welles was a staunch progressive and defender of civil rights, while Campbell, to put it mildly, wasn’t. Welles was a wanderer, while Campbell spent most of his life within driving distance of his birthplace in New Jersey. But they’re inextricably linked in my imagination. Welles was five years younger than Campbell, but they flourished at exactly the same time, with their careers peaking roughly between 1937 and 1942. Both owed significant creative breakthroughs to the work of H.G. Wells, who inspired Campbell’s story “Twilight” and Welles’s Mercury Theatre adaptation of The War of the Worlds. In 1938, Campbell saw Welles’s famous modern-dress production of Julius Caesar with the writer L. Sprague de Camp, of which he wrote in a letter:
It represented, in a way, what I’m trying to do in the magazine. Those humans of two thousand years ago thought and acted as we do—even if they did dress differently. Removing the funny clothes made them more real and understandable. I’m trying to get away from funny clothes and funny-looking people in the pictures of the magazine. And have more humans.
And I suspect that the performance started a train of thought in both men’s minds that led to de Camp’s novel Lest Darkness Fall, which is about a man from the present who ends up in ancient Rome.
Campbell was less pleased by Welles’s most notable venture into science fiction, which he must have seen as an incursion on his turf. He wrote to his friend Robert Swisher: “So far as sponsoring that War of [the] Worlds thing—I’m damn glad we didn’t! The thing is going to cost CBS money, what with suits, etc., and we’re better off without it.” In Astounding, he said that the ensuing panic demonstrated the need for “wider appreciation” of science fiction, in order to educate the public about what was and wasn’t real:
I have long been an exponent of the belief that, should interplanetary visitors actually arrive, no one could possibly convince the public of the fact. These stories wherein the fact is suddenly announced and widespread panic immediately ensues have always seemed to me highly improbable, simply because the average man did not seem ready to visualize and believe such a statement.
Undoubtedly, Mr. Orson Welles felt the same way.
Their most significant point of intersection was The Shadow, who was created by an advertising agency for Street & Smith, the publisher of Astounding, as a fictional narrator for the radio series Detective Story Hour. Before long, he became popular enough to star in his own stories. Welles, of course, voiced The Shadow from September 1937 to October 1938, and Campbell plotted some of the magazine installments in collaboration with the writer Walter B. Gibson and the editor John Nanovic, who worked in the office next door. And his identification with the character seems to have run even deeper. In a profile published in the February 1946 issue of Pic magazine, the reporter Dickson Hartwell wrote of Campbell: “You will find him voluble, friendly and personally depressing only in what his friends claim is a startling physical resemblance to The Shadow.”
It isn’t clear if Welles was aware of Campbell, although it would be more surprising if he wasn’t. Welles flitted around science fiction for years, and he occasionally crossed paths with other authors in that circle. To my lasting regret, he never met L. Ron Hubbard, which would have been an epic collision of bullshitters—although Philip Seymour Hoffman claimed that he based his performance in The Master mostly on Welles, and Theodore Sturgeon once said that Welles and Hubbard were the only men he had ever met who could make a room seem crowded simply by walking through the door. In 1946, Isaac Asimov received a call from a lawyer whose client wanted to buy all rights to his robot story “Evidence” for $250. When he asked Campbell for advice, the editor said that he thought it seemed fair, but Asimov’s wife told him to hold out for more. Asimov called back to ask for a thousand dollars, adding that he wouldn’t discuss it further until he found out who the client was. When the lawyer told him that it was Welles, Asimov agreed to the sale, delighted, but nothing ever came of it. (Welles also owned the story in perpetuity, making it impossible for Asimov to sell it elsewhere, a point that Campbell, who took a notoriously casual attitude toward rights, had neglected to raise.) Twenty years later, Welles made inquiries into the rights for Heinlein’s The Puppet Masters, which were tied up at the time with Roger Corman, but never followed up. And it’s worth noting that both stories are concerned with the problem of knowing whether other people are what they claim to be, which Campbell had brilliantly explored in “Who Goes There?” It’s a theme to which Welles obsessively returned, and it’s fascinating to speculate what he might have done with it if Howard Hawks and Christian Nyby hadn’t gotten there first with The Thing From Another World. Who knows what evil lurks in the hearts of men?
But their true affinities were spiritual ones. Both Campbell and Welles were child prodigies who reinvented an art form largely by being superb organizers of other people’s talents—although Campbell always downplayed his own contributions, while Welles appears to have done the opposite. Each had a spectacular early success followed by what was perceived as decades of decline, which they seem to have seen coming. (David Thomson writes: “As if Welles knew that Kane would hang over his own future, regularly being used to denigrate his later works, the film is shot through with his vast, melancholy nostalgia for self-destructive talent.” And you could say much the same thing about “Twilight.”) Both had a habit of abandoning projects as soon as they realized that they couldn’t control them, and they both managed to seem isolated while occupying the center of attention in any crowd. They enjoyed staking out unreasonable positions in conversation, just to get a rise out of listeners, and they ultimately drove away their most valuable collaborators. What Pauline Kael writes of Welles in “Raising Kane” is equally true of Campbell:
He lost the collaborative partnerships that he needed…He was alone, trying to be “Orson Welles,” though “Orson Welles” had stood for the activities of a group. But he needed the family to hold him together on a project and to take over for him when his energies became scattered. With them, he was a prodigy of accomplishments; without them, he flew apart, became disorderly.
Both men were alone when they died, and both filled their friends, admirers, and biographers with intensely mixed feelings. I’m still coming to terms with Campbell. But I have a hunch that I’ll end up somewhere close to Kael’s ambivalence toward Welles, who, at the end of an essay that was widely seen as puncturing his myth, could only conclude: “In a less confused world, his glory would be greater than his guilt.”
Note: Spoilers follow for the series finale of The Vampire Diaries.
On Friday, I said goodbye to The Vampire Diaries, a series that I once thought was one of the best genre shows on television, only to stop watching it for its last two seasons. Despite its flaws, it occupies a special place in my memory, in part because its strengths were inseparable from the reasons that I finally abandoned it. Like Glee, The Vampire Diaries responded to its obvious debt to an earlier franchise—High School Musical for the former, Twilight for the latter—both by subverting its predecessor and by burning through ideas as relentlessly as it could. It’s as if both shows decided to refute any accusations of unoriginality by proving that they could be more ingenious than their inspirations, and amazingly, it sort of worked, at least for a while. There’s a limit to how long any series can repeatedly break down and reassemble itself, however, and both started to lose steam after about three years. In the case of The Vampire Diaries, its problems crystallized around its ostensible lead, Elena Gilbert, as portrayed by the game and talented Nina Dobrev, who left the show two seasons ago before returning for an encore in the finale. Elena spent most of her first sendoff asleep, and she isn’t given much more to do here. There’s a lot about the episode that I liked, and it provides satisfying moments of closure for many of its characters, but Elena isn’t among them. In the end, when she awakens from the magical coma in which she has been slumbering, it’s so anticlimactic that it reminds me of what Pauline Kael wrote of Han’s revival in Return of the Jedi: “It’s as if Han Solo had locked himself in the garage, tapped on the door, and been let out.”
And what happened to Elena provides a striking case study of why the story’s hero is often fated to become the least interesting person in sight. The main character of a serialized drama is under such pressure to advance the plot that he or she becomes reduced to the diagram of a pattern of forces, like one of the fish in D’Arcy Wentworth Thompson’s On Growth and Form, in which the animal’s physical shape is determined by the outside stresses to which it has been subjected. Instead of making her own decisions, Elena was obliged to become whatever the series needed her to be. Every protagonist serves as a kind of motor for the story, which is frequently a thankless role, but it was particularly problematic on a show that defined itself by its willingness to burn through a year of potential storylines each month. Every episode felt like a season finale, and characters were freely killed, resurrected, and brainwashed to keep the wheels turning. It was hardest on Elena, who, at her best, was a compelling, resourceful heroine. After six seasons of personality changes, possessions, memory wipes, and the inexplicable choices that she made just because the story demanded it, she became an empty shell. If you were designing a show in a laboratory to see what would happen if its protagonist was forced to live through plot twists at an accelerated rate, like the stress tests that engineers use to put a component through a lifetime’s worth of wear in a short period of time, you couldn’t do much better than The Vampire Diaries. And while it might have been theoretically interesting to see what happened to the series after that one piece was removed, I didn’t think it was worth sitting through another two seasons of increasingly frustrating television.
After the finale was shot, series creators Kevin Williamson and Julie Plec made the rounds of interviews to discuss the ending, and they shared one particular detail that fascinates me. If you haven’t watched The Vampire Diaries, all you need to know is that its early seasons revolved around a love triangle between Elena and the vampire brothers Stefan and Damon, a nod to Twilight that quickly became one of the show’s least interesting aspects. Elena seemed fated to end up with Stefan, but she spent the back half of the series with Damon, and it ended with the two of them reunited. In a conversation with Deadline, Williamson revealed that this wasn’t always the plan:
Well, I always thought it would be Stefan and Elena. They were sort of the anchor of the show, but because we lost Elena in season six, we couldn’t go back. You know Nina could only come back for one episode—maybe if she had came back for the whole season, we could even have warped back towards that, but you can’t just do it in forty-two minutes.
Dobrev’s departure, in other words, froze that part of the story in place, even as the show around it continued its usual frantic developments, and when she returned, there wasn’t time to do anything but keep Elena and Damon where they had left off. There’s a limit to how much ground you can cover in the course of a single episode, so it seemed easier for the producers to stick with what they had and figure out a way to make it seem inevitable.
The fact that it works at all is a tribute to the skill of the writers and cast, as well as to the fact that the whole love triangle was basically arbitrary in the first place. As James Joyce said in a very different context, it was a bridge across which the characters could walk, and once they were safely on the other side, it could be blown to smithereens. The real challenge was how to make the finale seem like a definitive ending, after the show had killed off and resurrected so many characters that not even death itself felt like a conclusion. It resorted to much the same solution that Lost did when faced with a similar problem: it shut off all possibility of future narrative by reuniting its characters in heaven. This is partially a form of wish fulfillment, as we’ve seen with so many other television series, but it also puts a full stop on the story by leaving us in an afterlife, where, by definition, nothing can ever change. It’s hilariously unlike the various versions of the world to come that the series has presented over the years, from which characters can always be yanked back to life when necessary, but it’s also oddly moving and effective. Watching it, I began to appreciate how the show’s biggest narrative liability—a cast that just can’t be killed—also became its greatest asset. The defining image of The Vampire Diaries was that of a character who has his neck snapped, and then just shakes it off. Williamson and Plec must have realized, consciously or otherwise, that it was a reset button that would allow them to go through more ideas than would be possible on a show on which a broken neck was permanent. Every denizen of Mystic Falls got a great death scene, often multiple times per season, and the show exploited that freedom until it exhausted itself. It only really worked for three years out of eight, but it was a great run while it lasted. And now, after life’s fitful fever, the characters can sleep well, as they sail off into the mystic.
Sometimes a great film takes years to reveal its full power. Occasionally, you know what you’ve witnessed as soon as the closing credits begin to roll. And very rarely, you realize in the middle of the movie that you’re watching something extraordinary. I’ve experienced this last feeling only a handful of times in my life, and my most vivid memory of it is from ten years ago, when I saw Children of Men. I’d been looking forward to it ever since seeing the trailer, and for the first twenty minutes or so, it more than lived up to my expectations. But halfway through a crucial scene—and if you’ve seen the movie, you know the one I mean—I began to feel the movie expanding in my head, as Pauline Kael said of The Godfather Part II, “like a soft bullet.” Two weeks later, I wrote to a friend: “Alfonso Cuarón has just raised the bar for every director in the world.” And I still believe this, even if the ensuing decade has clarified the film’s place in the history of movies. Cuarón hasn’t had the productive career that I’d hoped he would, and it took him years to follow up on his masterpiece, although he finally earned his Oscar for Gravity. The only unambiguous winner to come out of it all was the cinematographer Emmanuel Lubezki, who has won three Academy Awards in a row for refinements of the discoveries that he made here. And the story now seems prescient, of course, as Abraham Riesman of Vulture recently noted: “The film, in hindsight, seems like a documentary about a future that, in 2016, finally arrived.” If nothing else, the world certainly appears to be run by exactly the sort of people of whom Jarvis Cocker was warning us.
But the most noteworthy thing about Children of Men, and the one aspect of it that its fans and imitators should keep in mind, is the insistently visceral nature of its impact. I don’t think I’m alone when I say that I was blown away the most by three elements: the tracking shots, the use of music, and the level of background detail in every scene. These are all qualities that are independent of its politics, its message, and even, to some extent, its script, which might be its weakest point. The movie can be refreshingly elliptical when it comes to the backstory of its characters and its world, but there are also holes and shortcuts that are harder to forgive. (Its clumsiest moment, for me, is when Theo is somehow able to observe and overhear Jasper’s death—an effective scene in itself—from higher ground without being noticed by anyone else. We aren’t sure where he’s standing in relation to the house, so it feels contrived and stagy, a strange lapse for a movie that is otherwise so bracingly specific about its geography.) But maybe that’s how it had to be. If the screenplay were as rich and crowded as the images, it would turn into a Christopher Nolan movie, for better or worse, and Cuarón is a very different sort of filmmaker. He’s content to leave entire swaths of the story in outline form, as if he forgot to fill in the blanks, and he’s happy to settle for a cliché if it saves time, just because his attention is so intensely focused elsewhere.
Occasionally, this has led his movies to be something less than they should be. I really want to believe that Harry Potter and the Prisoner of Azkaban is the strongest installment in the series, but it has real structural problems that stem precisely from Cuarón’s indifference to exposition: he cuts out an important chunk of dialogue that leaves the climax almost incomprehensible, so that nonreaders have to scramble to figure out what the hell is going on, when we should be caught up in the action. Gravity impressed me enormously when I saw it on the big screen, but I’m not particularly anxious to revisit it at home, where its technical marvels run the risk of being swallowed up by its rudimentary characters and dialogue. (It strikes me now that Gravity might have some of the same problems, to a much lesser extent, as Birdman, in which the use of extended takes makes it impossible to give scenes the necessary polish in the editing room. Which also implies that if you’re going to hire Lubezki as your cinematographer, you’d better have a really good script.) But Children of Men is the one film in which Cuarón’s shortcomings are inseparable from his strengths. His usual omissions and touches of carelessness were made for a story in which we’re only meant to glimpse the overall picture. And its allegory is so vague that we can apply it to whatever we like.
This might sound like a criticism, but it isn’t: Children of Men is undeniably one of the major movies of my lifetime. And its message is more insightful than it seems, even if it takes a minute of thought to unpack. Its world falls apart as soon as humanity realizes that it doesn’t have a future, which isn’t so far from where we are now. We find it very hard, as a species, to keep the future in mind, and we often behave—even in the presence of our own children—as if this generation will be the last. When a society has some measure of economic and political security, it can make efforts to plan ahead for a decade or two, but even that modest degree of foresight disappears as soon as stability does. In Children of Men, the childbirth crisis, which doesn’t respect national or racial boundaries, takes the sort of disruptions that tend to occur far from the developed world and brings them into the heart of Europe and America, and it doesn’t even need to change any of the details. The most frightening thing about Cuarón’s movie, and what makes it most relevant to our current predicament, is that its extrapolations aren’t across time, but across the map of the world as it exists today. You don’t need to look far to see landscapes like the ones through which the characters move, or the ways in which they could spread across the planet. In the words of William Gibson, the future of Children of Men is already here. It just isn’t evenly distributed yet.
When I look back at many of my favorite movies, I'm troubled by a common thread that they share. It's the theme of the control of a vulnerable woman by a man in a position of power. The Red Shoes, my favorite film of all time, is about artistic control, while Blue Velvet, my second favorite, is about sexual domination. Even Citizen Kane has that curious subplot about Kane's attempt to turn Susan into an opera star, which may have originated as an unkind reference to William Randolph Hearst and Marion Davies, but which survives in the final version as an emblem of Kane's need to collect human beings like playthings. It's also hard to avoid the feeling that some of these stories secretly mirror the relationship between the director and his actresses on the set. Vertigo, of course, can be read as an allegory for Hitchcock's own obsession with his leading ladies, whom he groomed and remade as meticulously as Scottie attempts to do with Madeleine. In The Shining, Jack's abuse of Wendy feels only slightly more extreme than what we know Kubrick—who even resembles Jack a bit in the archival footage that survives—imposed on Shelley Duvall. (Duvall's mental health issues have cast a new pall on those accounts, and the involvement of Kubrick's daughter Vivian has done nothing to clarify the situation.) And Roger Ebert famously hated Blue Velvet because he felt that David Lynch's treatment of Isabella Rossellini had crossed an invisible moral line.
The movie that has been subjected to this kind of scrutiny most recently is Last Tango in Paris, after interview footage resurfaced of Bernardo Bertolucci discussing its already infamous rape scene. (Bertolucci originally made these comments three years ago, and the fact that they’ve drawn attention only now is revealing in itself—it was hiding in plain sight, but it had to wait until we were collectively prepared to talk about it.) Since the story first broke, there has been some disagreement over what Maria Schneider knew on the day of the shoot. You can read all about it here. But it seems undeniable that Bertolucci and Brando deliberately withheld crucial information about the scene from Schneider until the cameras were rolling. Even the least offensive version makes me sick to my stomach, all the more so because Last Tango in Paris has been an important movie to me for most of my life. In online discussions of the controversy, I’ve seen commenters dismissing the film as an overrated relic, a vanity project for Brando, or one of Pauline Kael’s misguided causes célèbres. If anything, though, this attitude lets us off the hook too easily. It’s much harder to admit that a film that genuinely moved audiences and changed lives might have been made under conditions that taint the result beyond retrieval. It’s a movie that has meant a lot to me, as it did to many other viewers, including some I knew personally. And I don’t think I can ever watch it again.
But let’s not pretend that it ends there. It reflects a dynamic that has existed between directors and actresses since the beginning, and all too often, we’ve forgiven it, as long as it results in great movies. We write critical treatments of how Vertigo and Psycho masterfully explore Hitchcock’s ambivalence toward women, and we overlook the fact that he sexually assaulted Tippi Hedren. When we think of the chummy partnerships that existed between men like Cary Grant and Howard Hawks, or John Wayne and John Ford, and then compare them with how directors have regarded their female collaborators, the contrast couldn’t be more stark. (The great example here is Gone With the Wind: George Cukor, the original director, was fired because he made Clark Gable uncomfortable, and he was replaced by Gable’s buddy Victor Fleming. Vivien Leigh and Olivia de Havilland were forced to consult with Cukor in secret.) And there’s an unsettling assumption on the part of male directors that this is the only way to get a good performance from a woman. Bertolucci says that he and Brando were hoping to get Schneider’s raw reaction “as a girl, instead of as an actress.” You can see much the same impulse in Kubrick’s treatment of Duvall. Even Michael Powell, one of my idols, writes of how he and the other actors frightened Moira Shearer to the point of tears for the climactic scene of The Red Shoes—“This was no longer acting”—and says elsewhere: “I never let love interfere with business, or I would have made love to her. It would have improved her performance.”
So what’s a film buff to do? We can start by acknowledging that the problem exists, and that it continues to affect women in the movies, whether in the process of filmmaking itself or in the realities of survival in an industry that is still dominated by men. Sometimes it leads to abuse or worse. We can also honor the work of those directors, from Ozu to Almodóvar to Wong Kar-Wai, who have treated their actresses as partners in craft. Above all else, we can come to terms with the fact that sometimes even a masterpiece fails to make up for the choices that went into it. Thinking of Last Tango in Paris, I was reminded of Norman Mailer, who wrote one famous review of the movie and was linked to it in another. (Kael wrote: “On the screen, Brando is our genius as Mailer is our genius in literature.”) Years later, Mailer supported the release from prison of a man named Jack Henry Abbott, a gifted writer with whom he had corresponded at length. Six weeks later, Abbott stabbed a stranger to death. Afterward, Mailer infamously remarked:
I’m willing to gamble with a portion of society to save this man’s talent. I am saying that culture is worth a little risk.
But it isn’t—at least not like this. Last Tango in Paris is a masterpiece. It contains the single greatest male performance I’ve ever seen. But it wasn’t worth it.
I first saw Brian De Palma’s Raising Cain when I was fourteen years old. In a weird way, it amounted to a peak moment of my early adolescence: I was on a school trip to our nation’s capital, sharing a hotel room with my friends from middle school, and we were just tickled to get away with watching an R-rated movie on cable. The fact that we ended up with Raising Cain doesn’t quite compare with the kids on The Simpsons cheering at the chance to see Barton Fink, but it isn’t too far off. I think that we liked it, and while I won’t claim that we understood it, that doesn’t mean much of anything—it’s hard for me to imagine anybody, of any age, entirely understanding this movie, which includes both me and De Palma himself. A few years later, I caught it again on television, and while I can’t say I’ve thought about it much since, I never forgot it. Gradually, I began to catch up on my De Palma, going mostly by whatever movies made Pauline Kael the most ecstatic at the time, which in itself was an education in the gap between a great critic’s pet enthusiasms and what exists on the screen. (In her review of The Fury, Kael wrote: “No Hitchcock thriller was ever so intense, went so far, or had so many ‘classic’ sequences.” I love Kael, but there are at least three things wrong with that sentence.) And ultimately De Palma came to mean a lot to me, as he does to just about anyone who responds to the movies in a certain way.
When I heard about the recut version of Raising Cain—in an interview with John Lithgow on The A.V. Club, no less, in which he was promoting his somewhat different role on The Crown—I was intrigued. And its backstory is particularly interesting. Shortly before the movie was first released, De Palma moved a crucial sequence from the beginning to the middle, eliminating an extended flashback and allowing the film to play more or less chronologically. He came to regret the change, but it was too late to do anything about it. Years later, a freelance director and editor named Peet Gelderblom read about the original cut and decided to restore it, performing a judicious edit on a digital copy. He put it online, where, unbelievably, it was seen by De Palma himself, who not only loved it but asked that it be included as a special feature on the new Blu-ray release. If nothing else, it's a reminder of the true possibilities of fan edits, which until now have mostly served as vehicles for competing visions of the ideal version of Star Wars. With modern software, a fan can do for a movie what Walter Murch did for Touch of Evil, restoring it to the director's original version based on a script or a verbal description. In the case of Raising Cain, this mostly just involved rearranging the pieces in the theatrical cut, but other fans have tackled such challenges as restoring all the deleted scenes in Twin Peaks: Fire Walk With Me, and there are countless other candidates.
Yet Raising Cain might be the most instructive case study of all, because simply restoring the original opening to its intended place results in a radical transformation. It isn’t for everyone, and it’s necessary to grant De Palma his usual passes for clunky dialogue and characterization, but if you’re ready to meet it halfway, you’re rewarded with a thriller that twists back on itself like a Möbius strip. De Palma plunders his earlier movies so blatantly that it isn’t clear if he’s somehow paying loving homage to himself—bypassing Hitchcock entirely—or recycling good ideas that he feels like using again. The recut opens with a long mislead that recalls Dressed to Kill, which means that Lithgow barely even appears for the first twenty minutes. You can almost see why De Palma chickened out for the theatrical version: Lithgow’s performance as the meek Carter and his psychotic imaginary brother Cain feels too juicy to withhold. But the logic of the script was destroyed. For a film that tests an audience’s suspension of disbelief in so many other ways, it’s unclear why De Palma thought that a flashback would be too much for the viewer to handle. The theatrical release preserves all the great shock effects that are the movie’s primary reason for existing, but they don’t build to anything, and you’re left with a film that plays like a series of sketches. With the original order restored, it becomes what it was meant to be all along: a great shaggy dog story with a killer punchline.
Raising Cain is gleefully about nothing but itself, and I wouldn't force anybody to watch it who wasn't already interested. But the recut also serves as an excellent introduction to its director, just as the older version did for me: when I first encountered it, I doubt I'd seen anything by De Palma, except maybe The Untouchables, and Mission: Impossible was still a year away. It's safe to say that if you like Raising Cain, you'll like De Palma in general, and if you can't get past its archness, campiness, and indifference to basic plausibility—well, I can hardly blame you. Watching it again, I was reminded of Blue Velvet, a far greater movie that presents the viewer with a similar test. It has the same mixture of naïveté and incredible technical virtuosity, with scenes that barely seem to have been written alternating with ones that push against the boundaries of the medium itself. You're never quite sure if the director is in on the gag, and maybe it doesn't matter. There isn't much beauty in Raising Cain, and De Palma is a hackier and more mechanical director than Lynch, but both are so strongly visual that the nonsensory aspects of their films, like the obligatory scenes with the cops, seem to wither before our eyes. (It's an approach that requires a kind of raw, intuitive trust from the cast, and as much as I enjoy what Lithgow does here, he may be too clever and resourceful an actor to really disappear into the role.) Both are rooted, crucially, in Hitchcock, who was equally obsessive, but was careful never to work from his own script. Hitchcock kept his secret self hidden, while De Palma puts it in plain sight. And if it turns out to be nothing at all, that's probably part of the joke.