Alec Nevala-Lee

Thoughts on art, creativity, and the writing life.


Time for the stars


Last year, the screenwriter Terry Rossio, whose blog is the best online resource I’ve ever seen for advice on survival in Hollywood, published a long essay titled “Time Risk.” How long was it? If published, it could be sold as a short book of a hundred pages or so, and it would probably be acclaimed as one of the two or three most useful works ever written on the business of screenwriting. Rossio has spent more time than any successful writer since William Goldman on sharing his experiences in the industry, and this post is his masterpiece. (It received a flurry of attention earlier this year because of one unflattering anecdote about Johnny Depp, which is a classic instance of missing the forest for the trees.) I don’t know why Rossio invested so much effort into this essay, but I suspect that it was because he realized that he had stumbled across a single powerful idea that explained so much that was otherwise inexplicable, even cruel, about the life of a writer in the movies. It’s the fact that any investment of time presents a risk, which means that there’s an enormous incentive to transfer it to others—and the writer, for better or worse, is where the process ends. As Rossio puts it in an exchange with a producer whom he calls Jake:

At the point of sitting down to write, there was no way for my writer to know whether this particular story was going to work. She set forth on faith alone. So did thousands, tens of thousands of other writers around town, none of them knowing whether their stories would pan out, or even whether they could finish, or whether they could beat out the competition and have their work land on your desk…You [the producer] not only gain the value of the time my writer put at risk, but also the risk of every other writer who sat down to face the blank page around the same time, most of whom came up short. It’s like having everyone play the lotto, then you call the one person with the winning ticket. At the start it’s a giant risk pool, and all that collective risk is represented by this one winning screenplay.

This is a remarkable insight, and it applies to more than just screenwriting. Rossio doesn’t come out and say it, but he strongly implies that there’s a fundamental cognitive divide between people who can work on more than one thing at a time and those who mostly can’t. It’s the difference between writers and agents, writers and book editors, writers and producers. The relationship doesn’t need to be adversarial, but it unquestionably creates different incentives, and it can result in situations in which the two players in the room aren’t even speaking the same language. It also leads to apparently paradoxical consequences, as when Rossio describes what he calls “Death by Sale”:

The day you sell your screenplay, you gain a small real chance it will be produced, at the same time almost guaranteeing that it will never be produced. Put another way, the same screenplay, unsold, has a much better chance of reaching the silver screen than it does when purchased by a studio…Selling a screenplay represents the exchange of all future positive outcomes of a project for a single, often unlikely, current scenario. You throw in with a particular set of players, at a particular time and place, with a particular set of restrictions and parameters.

This might sound crazy, but like everything else in Rossio’s post, it’s a logical extension of the principle in the title. If you’re a rational producer, you deal with time risk in the same way that a fund manager deals with investment risk—by diversifying your portfolio. A producer can have twenty or thirty projects in the hopper at any one time, in hopes that one winner will make up for all the losers. Writers don’t have this luxury, but they engage in a kind of simulation of it during the submission process. An unsold script has a virtual portfolio of potential buyers, one of whom might one day pay off. As soon as someone buys it, all those other possibilities disappear, and if it fails, the project might be tainted forever.

So how in the world do you deal with this? Rossio’s advice is simple, but it’s also the exact opposite of the reality that most writers face: “Spend as much time as you can making films, rather than trying to get films made.” Every strategy that he proposes comes down to knowing where to commit your time and how much of it to devote to a given situation. Take what he says about buyers and sellers:

First, understand when you’re in a room with fellow sellers, and temper your excitement accordingly. Second, commit less time risk to fellow sellers—and infinite time risk to an actual buyer. Third, understand the real value of investing time with fellow sellers. The value is not just an eventual project sale. The real value is building your team.

Rossio also advises writers to take cues from the industry players, notably producers, who have learned how to maximize the relationship between risk and return. (In financial terms, they’ve figured out an investment strategy with a good Sharpe ratio.) He quotes the producer Ram Bergman: “I told Rian [Johnson], I simply will not let you sell anything you write…The more we put it together, script, cast, producer, the more effectively we can dictate how it gets made.” If you can be a director or a novelist—or set up an animation studio in your garage, as Rossio repeatedly recommends—that’s even better. But even powerful people need to take what comes. Rossio devotes a considerable amount of space to the travails of his screenplay Déjà Vu, which set and still holds the record for the highest price ever paid for a spec script, only to run into rewrite problems and a reluctant director. When Rossio complained and suggested that they pull out, the producer Jerry Bruckheimer replied: “I have a director, a script, a star, and the studio giving me a green light. It’s not my job to not make movies.” And he was right.

I could keep quoting forever from this essay, which is loaded with throwaway insights that deserve a full post of their own. (Here’s one of my favorites: “Writers and producers often do the majority of their work with the cameras snug in their form-fitting foam cases. Actors get paid when cameras roll. And it’s only when cameras are rolling that power accumulates, and brands are established.” And another: “It’s amusing to listen to film critics assign responsibility for the content of a film exclusively to the screenwriter, the one person on the team with no final authority to insist on any particular story choice.”) But I’ll close with a story about a project in which I take an obvious interest—the adaptation of Robert A. Heinlein’s The Moon is a Harsh Mistress, on which Rossio worked while the rights to the novel were still held by DreamWorks. Here’s what happened:

The screenplay was completed about a month prior to the rights renewal date, and to be honest, we nailed it. The source material is of course fantastic, one of the top ten science fiction novels of all time, and the draft we turned in would have made an amazing film. The renewal date came and went, with no word from the studio, but a few days later we got a phone call. “We’re going to let the rights expire,” said the executive. “Did you not like the script?” we asked. “I’ll be honest with you,” said the executive, “We’ve been really busy. I’m sure the screenplay is fantastic, you guys always do good work. But we just didn’t have time to read it.”

Rossio concludes: “While this sounds insane from a business perspective—why option the book rights at all, on such a high profile project, or hire screenwriters to do an adaptation—it makes perfect sense from a time risk perspective. If you’re an executive, and you know the project doesn’t fit your production schedule, why expend the time risk to even read the screenplay?” He’s perfectly right, of course. But the real takeaway here is one that he leaves unspoken. In this situation, you don’t want to be Terry Rossio, or the producer, or even the executive on the other end of the phone. You want to be Heinlein.

Written by nevalalee

September 19, 2017 at 8:51 am

Shoot the piano player


In his flawed but occasionally fascinating book Bambi vs. Godzilla, the playwright and director David Mamet spends a chapter discussing the concept of aesthetic distance, which is violated whenever viewers remember that they’re simply watching a movie. Mamet provides a memorable example:

An actor portrays a pianist. The actor sits down to play, and the camera moves, without a cut, to his hands, to assure us, the audience, that he is actually playing. The filmmakers, we see, have taken pains to show the viewers that no trickery has occurred, but in so doing, they have taught us only that the actor portraying the part can actually play the piano. This addresses a concern that we did not have. We never wondered if the actor could actually play the piano. We accepted the storyteller’s assurances that the character could play the piano, as we found such acceptance naturally essential to our understanding of the story.

Mamet imagines a hypothetical dialogue between the director and the audience: “I’m going to tell you a story about a pianist.” “Oh, good: I wonder what happens to her!” “But first, before I do, I will take pains to reassure you that the actor you see portraying the hero can actually play the piano.” And he concludes:

We didn’t care till the filmmaker brought it up, at which point we realized that, rather than being told a story, we were being shown a demonstration. We took off our “audience” hat and put on our “judge” hat. We judged the demonstration conclusive but, in so doing, got yanked right out of the drama. The aesthetic distance had been violated.

Let’s table this for now, and turn to a recent article in The Atlantic titled “The Remarkable Laziness of Woody Allen.” To prosecute the case laid out in the headline, the film critic Christopher Orr draws on Eric Lax’s new book Start to Finish: Woody Allen and the Art of Moviemaking, which describes the making of Irrational Man—a movie that nobody saw, which doesn’t make the book sound any less interesting. For Orr, however, it’s “an indictment framed as an encomium,” and he lists what he evidently sees as devastating charges:

Allen’s editor sometimes has to live with technical imperfections in the footage because he hasn’t shot enough takes for her to choose from…As for the shoot itself, Allen has confessed, “I don’t do any preparation. I don’t do any rehearsals. Most of the times I don’t even know what we’re going to shoot.” Indeed, Allen rarely has any conversations whatsoever with his actors before they show up on set…In addition to limiting the number of takes on any given shot, he strongly prefers “master shots”—those that capture an entire scene from one angle—over multiple shots that would subsequently need to be edited together.

For another filmmaker, all of these qualities might be seen as strengths, but that’s beside the point. Here’s the relevant passage:

The minimal commitment that appearing in an Allen film entails is a highly relevant consideration for a time-strapped actor. Lax himself notes the contrast with Mike Leigh—another director of small, art-house films—who rehearses his actors for weeks before shooting even starts. For Damien Chazelle’s La La Land, Stone and her co-star, Ryan Gosling, rehearsed for four months before the cameras rolled. Among other chores, they practiced singing, dancing, and, in Gosling’s case, piano. The fact that Stone’s Irrational Man character plays piano is less central to that movie’s plot, but Allen didn’t expect her even to fake it. He simply shot her recital with the piano blocking her hands.

So do we shoot the piano player’s hands or not? The boring answer, unfortunately, is that it depends—but perhaps we can dig a little deeper. It seems safe to say that it would be impossible to make The Pianist with Adrien Brody’s hands conveniently blocked from view for the whole movie. But I’m equally confident that it doesn’t matter in the slightest in Irrational Man, which I haven’t seen, whether or not Emma Stone is really playing the piano. La La Land is a slightly trickier case. It would be hard to envision it without at least a few shots of Ryan Gosling playing the piano, and Damien Chazelle isn’t above indulging in exactly the camera move that Mamet decries, in which the camera tilts down to reassure us that it’s really Gosling playing. Yet the fact that we’re even talking about this gets down to a fundamental problem with the movie, which I mostly like and admire. Its characters are archetypes who draw much of their energy from the auras of the actors who play them, and in the case of Stone, who is luminous and moving as an aspiring actress suffering through an endless series of auditions, the film gets a lot of mileage from our knowledge that she’s been in the same situation. Gosling, to put it mildly, has never been an aspiring jazz pianist. This shouldn’t even matter, but every time we see him playing the piano, he briefly ceases to be a struggling artist and becomes a handsome movie star who has spent three months learning to fake it. And I suspect that the movie would have been elevated immensely by casting a real musician. (This ties into another issue with La La Land, which is that it resorts to telling us that its characters deserve to be stars, rather than showing it to us in overwhelming terms through Gosling and Stone’s singing and dancing, which is merely passable.
It’s in sharp contrast to Martin Scorsese’s New York, New York, one of its clear spiritual predecessors, in which it’s impossible to watch Liza Minnelli without becoming convinced that she ought to be the biggest star in the world. And when you think of how quirky, repellent, and individual Minnelli and Robert De Niro are allowed to be in that film, La La Land starts to look a little schematic.)

And I don’t think I’m overstating it when I argue that the seemingly minor dilemma of whether to show the piano player’s hands shades into the larger problem of how much we expect our actors to really be what they pretend that they are. I don’t think any less of Bill Murray because he had to employ Terry Fryer as a “hand double” for his piano solo in Groundhog Day, and I don’t mind that the most famous movie piano player of them all—Dooley Wilson in Casablanca—was faking it. And there’s no question that you’re taken out of the movie a little when you see Richard Chamberlain playing Tchaikovsky’s Piano Concerto No. 1 in The Music Lovers, however impressive it might be. (I’m willing to forgive De Niro learning to mime the saxophone for New York, New York, if only because it’s hard to imagine how it would look otherwise. The piano is just about the only instrument for which the choice can plausibly be left to the director’s discretion. And in his article, revealingly, Orr fails to mention that none other than Woody Allen was insistent that Sean Penn learn the guitar for Sweet and Lowdown. As Allen himself might say, it depends.) On some level, we respond to an actor playing the piano much like the fans of Doctor Zhivago, whom Pauline Kael devastatingly called “the same sort of people who are delighted when a stage set has running water or a painted horse looks real enough to ride.” But it can serve the story as much as it can detract from it, and the hard part is knowing how and when. As one director notes:

Anybody can learn how to play the piano. For some people it will be very, very difficult—but they can learn it. There’s almost no one who can’t learn to play the piano. There’s a wide range in the middle, of people who can play the piano with various degrees of skill; a very, very narrow band at the top, of people who can play brilliantly and build upon a technical skill to create great art. The same thing is true of cinematography and sound mixing. Just technical skills. Directing is just a technical skill.

This is Mamet writing in On Directing Film, which is possibly the single best work on storytelling I know. You might not believe him when he says that directing is “just a technical skill,” but if you do, there’s a simple way to test if you have it. Do you show the piano player’s hands? If you know the right answer for every scene, you just might be a director.

Broyles’s Law and the Ken Burns effect


For most of my life as a moviegoer, I’ve followed a rule that has served me pretty well. Whenever the director of a documentary narrates the story in the first person, or, worse, appears on camera, I start to get suspicious. I’m not talking about movies like Roger and Me or even the loathsome Catfish, in which the filmmakers, for better or worse, are inherently part of the action, but about films in which the director inserts himself into the frame for no particular reason. Occasionally, I can forgive this, as I did with the brilliant The Cove, but usually, I feel a moment of doubt whenever the director’s voiceover begins. (In its worst form, it opens the movie with a redundant narration: “I first came across the story that you’re about to hear in the summer of 1990…”) But while I still think that this is a danger sign, I’ve recently concluded that I was wrong about why. I had always assumed that it was a sign of ego—that these directors were imposing themselves on a story that was really about other people, because they thought that it was all about them. In reality, it seems more likely that it’s a solution to a technical problem. What happens, I think, is that the director sits down to review his footage and discovers that it can’t be cut together as a coherent narrative. Perhaps there are crucial scenes or beats missing, but the events that the movie depicts are long over, or there’s no budget to go back and shoot more. An interview might bridge the gaps, but maybe this isn’t logistically feasible. In the end, the director is left with just one person who is available to say all the right things on the soundtrack to provide the necessary transitions and clarifications. It’s himself. In a perfect world, if he had gotten the material that he needed, he wouldn’t have to be in his own movie at all, but he doesn’t have a choice. It isn’t a failure of character, but of technique, and the result ends up being much the same.

I got to thinking about this after reading a recent New Yorker profile by Ian Parker of the documentarian Ken Burns, whose upcoming series on the Vietnam War is poised to become a major cultural event. The article takes an irreverent tone toward Burns, whose cultural status inclines him toward speechification in private: “His default conversational setting is Commencement Address, involving quotation from nineteenth-century heroes and from his own previous commentary, and moments of almost rhapsodic self-appreciation. He is readier than most people to regard his creative decisions as courageous.” But Parker also shares a fascinating anecdote about which I wish I knew more:

In the mid-eighties, Burns was working on a deft, entertaining documentary about Huey Long, the populist Louisiana politician. He asked two historians, William Leuchtenburg and Alan Brinkley, about a photograph he hoped to use, as a part of the account of Long’s assassination; it showed him protected by a phalanx of state troopers. Brinkley told him that the image might mislead; Long usually had plainclothes bodyguards. Burns felt thwarted. Then Leuchtenburg spoke. He’d just watched a football game in which Frank Broyles, the former University of Arkansas coach, was a commentator. When the game paused to allow a hurt player to be examined, Broyles explained that coaches tend to gauge the seriousness of an injury by asking a player his name or the time of day; if he can’t answer correctly, it’s serious. As Burns recalled it, Broyles went on, “But, of course, if the player is important to the game, we tell him what his name is, we tell him what time it is, and we send him back in.”

Hence Broyles’s Law: “If it’s super-important, if it’s working, you tell him what his name is, and you send him back into the game.” Burns decided to leave the photo in the movie. Parker continues:

Was this, perhaps, a terrible law? Burns laughed. “It’s a terrible law!” But, he went on, it didn’t let him off the hook, ethically. “This would be Werner Herzog’s ‘ecstatic truth’—‘I can do anything I want. I’ll pay the town drunk to crawl across the ice in the Russian village.’” He was referring to scenes in Herzog’s Bells from the Deep, which Herzog has been happy to describe, and defend, as stage-managed. “If he chooses to do that, that’s okay. And then there are other people who’d rather do reenactments than have a photograph that’s vague.” Instead, Burns said, “We do enough research that we can pretty much convince ourselves—in the best sense of the word—that we’ve done the honorable job.”

The reasoning in this paragraph is a little muddled, but Burns seems to be saying that he isn’t relying on “the ecstatic truth” of Herzog, who blurs the line between fiction and reality, or the reenactments favored by Errol Morris, who sometimes seems to be making a feature film interspersed with footage of talking heads. Instead, Burns is assembling a narrative solely out of primary sources, and if an image furthers the viewer’s intellectual understanding or emotional engagement, it can be included, even if it isn’t strictly accurate. These are the compromises that you make when you’re determined to use nothing but the visuals that you have available, and you trust in your understanding of the material to tell whether or not you’ve made the “honorable” choice.

On some level, this is basically what every author of nonfiction has to consider when assembling sources, which involves countless judgment calls about emphasis, order, and selection, as I’ve discussed here before. But I’m more interested in the point that this emerges from a technical issue inherent to the form of the documentary itself, in which the viewer always has to be looking at something. When the perfect image isn’t available, you have a few different options. You can ignore the problem; you can cut to an interview subject who tells the viewers about what they’re not seeing; or you can shoot a reenactment. (Recent documentaries seem to lean heavily on animation, presumably because it’s cheaper and easier to control in the studio.) Or, like Burns, you can make do with what you have, because that’s how you’ve defined the task for yourself. Burns wants to use nothing but interviews, narration, and archival materials, and the technical tricks that we’ve come to associate with his style—like the camera pan across photos that Apple actually calls the Ken Burns effect—arise directly out of those constraints. The result is often brilliant, in large part because Burns has no choice but to think hard about how to use the materials that he has. Broyles’s Law may be “terrible,” but it’s better than most of the alternatives. Burns has the luxury of big budgets, a huge staff, and a lot of time, which allows him to be fastidious about his solutions to such problems. But a desperate documentary filmmaker, faced with no money and a hole in the story to fill, may have no other recourse than to grab a microphone, sit down in the editing bay, and start to speak: “I first came across the story that you’re about to hear in the summer of 1990…”

Written by nevalalee

September 11, 2017 at 9:12 am

Out of the past


You shouldn’t have been that sentimental.


About halfway through the beautiful, devastating finale of Twin Peaks—which I’ll be discussing here in detail—I began to reflect on what the figure of Dale Cooper really means. When we encounter him for the first time in the pilot, with his black suit, fastidious habits, and clipped diction, he’s the embodiment of what we’ve been taught to expect of a special agent of the Federal Bureau of Investigation. The FBI occupies a role in movies and television far out of proportion to its actual powers and jurisdiction, in part because it seems to exist on a level intriguingly beyond that of ordinary law enforcement, and it’s often been used to symbolize the sinister, the remote, or the impersonal. Yet when Cooper reveals himself to be a man of real empathy, quirkiness, and faith in the extraordinary, it comes almost as a relief. We want to believe that a person like this exists. Cooper carries a badge, he wears a tie, and he’s comfortable with a gun, but he’s here to enforce human reason in the face of a bewildering universe. The Black Lodge might be out there, but the Blue Rose task force is on it, and there’s something oddly consoling about the notion that it’s a part of the federal government. A few years later, Chris Carter took this premise and refined it into The X-Files, which, despite its paranoia, reassured us that somebody in a position of authority had noticed the weirdness in the world and was trying to make sense of it. They might rarely succeed, but it was comforting to think that their efforts had been institutionalized, complete with a basement office, a place in the org chart, and a budget. And for a lot of viewers, Mulder and Scully, like Cooper, came to symbolize law and order in stories that laugh at our attempts to impose it.

Even if you don’t believe in the paranormal, the image of the lone FBI agent—or two of them—arriving in a small town to solve a supernatural mystery is enormously seductive. It appeals to our hopes that someone in power cares enough about us to investigate problems that can’t be rationally addressed, which all stand, in one way or another, for the mystery of death. This may be why both Twin Peaks and The X-Files, despite their flaws, have sustained so much enthusiasm among fans. (No other television dramas have ever meant more to me.) But it’s also a myth. This isn’t really how the world works, and the second half of the Twin Peaks finale is devoted to tearing down, with remarkable cruelty and control, the very idea of such solutions. It can only do this by initially giving us what we think we want, and the first of last night’s two episodes misleads us with a satisfying dose of wish fulfillment. Not only is Cooper back, but he’s in complete command of the situation, and he seems to know exactly what to do at every given moment. He somehow knows all about Freddie and his magical green glove, which he utilizes to finally send Bob into oblivion. After rescuing Diane, he uses his room key from the Great Northern, like a magical item in a video game, to unlock the door that leads him to Mike and the disembodied Phillip Jeffries. He goes back in time, enters the events of Fire Walk With Me, and saves Laura on the night of her murder. The next day, Pete Martell simply goes fishing. Viewers at home even get the appearance by Julee Cruise that I’ve been awaiting since the premiere. After the credits ran, I told my wife that if it had ended there, I would have been totally satisfied.

But that was exactly what I was supposed to think, and even during the first half, there are signs of trouble. When Cooper first sees the eyeless Naido, who is later revealed to be the real Diane, his face freezes in a huge closeup that is superimposed for several minutes over the ensuing action. It’s a striking device that has the effect of putting us, for the first time, in Cooper’s head, rather than watching him with bemusement from the outside. We identify with him, and at the very end, when his efforts seemingly come to nothing, despite the fact that he did everything right, it’s more than heartbreaking—it’s like an existential crisis. It’s the side of the show that was embodied by Sheryl Lee’s performance as Laura Palmer, whose tragic life and horrifying death, when seen in its full dimension, put the lie to all the cozy, comforting stories that the series told us about the town of Twin Peaks. Nothing good could ever come out of a world in which Laura died in the way that she did, which was the message that Fire Walk With Me delivered so insistently. And seeing Laura share the screen at length with Cooper presents us with both halves of the show’s identity within a single frame. (It also gives us a second entry, after Blue Velvet, in the short list of great scenes in which Kyle MacLachlan enters a room to find a man sitting down with his brains blown out.) For a while, as Cooper drives Laura to the appointment with her mother, it seems almost possible that the series could pull off one last, unfathomable trick. Even if it means erasing the show’s entire timeline, it would be worth it to save Laura. Or so we think. In the end, they return to a Twin Peaks that neither of them recognize, in which the events of the series presumably never took place, and Cooper’s only reward is Laura’s scream of agony.

As I tossed and turned last night, thinking about Cooper’s final, shattering moment of comprehension, a line of dialogue from another movie drifted into my head: “It’s too late. There’s no bringing her back.” It’s from Vertigo, of course, which is a movie that David Lynch and Mark Frost have been quietly urging us to revisit all along. (Madeleine Ferguson, Laura’s identical cousin, who was played by Lee, is named after the film’s two main characters, and both works of art pivot on a necklace and a dream sequence.) Along with so much else, Vertigo is about the futility of trying to recapture or change the past, and its ending, which might be the most unforgettable of any film I’ve ever seen, destroys Scottie’s delusions, which embody the assumptions of so many American movies: “One final thing I have to do, and then I’ll be rid of the past forever.” I think that Lynch and Frost are consciously harking back to Vertigo here—in the framing of the doomed couple on their long drive, as well as in Cooper’s insistence that Laura revisit the scene of the crime—and it doesn’t end well in either case. The difference is that Vertigo prepares us for it over the course of two hours, while Twin Peaks had more than a quarter of a century. Both works offer a conclusion that feels simultaneously like a profound statement of our helplessness in the face of an unfair universe and like the punchline to a shaggy dog story, and perhaps that’s the only way to express it. I’ve quoted Frost’s statement on this revival more than once: “It’s an exercise in engaging with one of the most powerful themes in all of art, which is the ruthless passage of time…We’re all trapped in time and we’re all going to die. We’re all traveling along this conveyor belt that is relentlessly moving us toward this very certain outcome.” Thirty seconds before the end, I didn’t know what he meant. But I sure do now. And I know at last why this show’s theme is called “Falling.”

Written by nevalalee

September 4, 2017 at 9:40 am

Asimov’s close encounter


By the early seventies, Isaac Asimov had achieved the cultural status, which he still retains, of being the first—and perhaps the only—science fiction writer whom most ordinary readers would be able to name. As a result, he ended up on the receiving end of a lot of phone calls from famous newcomers to the field. In 1973, for example, he was contacted by a representative for Woody Allen, who asked if he’d be willing to look over the screenplay of the movie Sleeper. Asimov gladly agreed, and when he met with Allen over lunch, he told him that the script was perfect as it was. Allen didn’t seem to believe him: “How much science fiction have you written?” Asimov responded: “Not much. Very little, actually. Perhaps thirty books of it altogether. The other hundred books aren’t science fiction.” Allen was duly impressed, turning to ask his friends: “Did you hear him throw that line away?” Asimov turned down the chance to serve as a technical director, recommending Ben Bova instead, and the movie did just fine without him, although he later expressed irritation that Allen had never sent him a letter of thanks. Another project with Paul McCartney, whom Asimov met the following year, didn’t go anywhere, either:

McCartney wanted to do a fantasy, and he wanted me to write a story out of the fantasy out of which a screenplay would be prepared. He had the basic idea for the fantasy, which involved two sets of musical groups: a real one, and a group of extraterrestrial imposters…He had only a snatch of dialogue describing the moment when a real group realized they were being victimized by imposters.

Asimov wrote up what he thought was an excellent treatment, but McCartney rejected it: “He went back to his one scrap of dialogue, out of which he apparently couldn’t move, and wanted me to work with that.”

Of all of Asimov’s brushes with Hollywood, however, the most intriguing involved a director to whom he later referred as “Steve Spielberg.” In his memoir In Joy Still Felt, Asimov writes:

On July 18, 1975, I visited Steve Spielberg, a movie director, at his room in the Sherry-Netherland. He had done Jaws, a phenomenally successful picture, and now he planned to do another, involving flying saucers. He wanted me to work with him on it, but I didn’t really want to. The visual media are not my bag, really.

In a footnote, Asimov adds: “He went on to do it without me and it became the phenomenally successful Close Encounters of the Third Kind. I have no regrets.” For an autobiography that devotes enormous amounts of wordage to even the most trivial incidents, it’s a remarkably terse and unrevealing anecdote, and it’s hard not to wonder if something else might have been involved—because when Asimov finally saw Close Encounters, which is celebrating its fortieth anniversary this week with a new theatrical release, he hated it. A year after it came out, he wrote in Isaac Asimov’s Science Fiction Magazine:

Science Digest asked me to see the movie Close Encounters of the Third Kind and write an article for them on the science it contained. I saw the picture and was appalled. I remained appalled even after a doctor’s examination had assured me that no internal organs had been shaken loose by its ridiculous sound waves. (If you can’t be good, be loud, some say, and Close Encounters was very loud.) To begin with there was no accurate science in it; not a trace; and I said so in the article I wrote and which Science Digest published. There was also no logic in it; not a trace; and I said that, too.

Asimov’s essay on Close Encounters, in fact, might be the most unremittingly hostile piece of writing I’ve seen by him on any subject, and I’ve read a lot of it. He seems to have regarded it as little more than a cynical commercial ploy: “It made its play for Ufolators and mystics and, in its chase for the buck, did not scruple to violate every canon of good sense and internal consistency.” In response to readers who praised the special effects, he shot back:

Seeing a rotten picture for the special effects is like eating a tough steak for the smothered onions, or reading a bad book for the dirty parts. Optical wizardry is something a movie can do that a book can’t, but it is no substitute for a story, for logic, for meaning. It is ornamentation, not substance. In fact, whenever a science fiction picture is praised overeffusively for its special effects, I know it’s a bad picture. Is that all they can find to talk about?

Asimov was aware that his negative reaction had hurt the feelings of some of his fans, but he was willing to accept it: “There comes a time when one has to put one’s self firmly on the side of Good.” And he seemed particularly incensed at the idea that audiences might dare to think that Close Encounters was science fiction, and that it implied that the genre was allowed to be “silly, and childish, and stupid,” with nothing more than “loud noise and flashing lights.” He wasn’t against all instances of cinematic science fiction—he had liked Planet of the Apes and Star Wars, faintly praising the latter as “entertainment for the masses [that] did not try to do anything more,” and he even served as a technical consultant on Star Trek: The Motion Picture. But he remained unrelenting toward Close Encounters to the last: “It is a marvelous demonstration of what happens when the workings of extraterrestrial intelligence are handled without a trace of skill.”

And the real explanation comes in an interview that Asimov gave to the Los Angeles Times in 1988, in which he recalled of his close encounter with Spielberg: “I didn’t know who he was at the time, or what a hit the film would be, but I certainly wasn’t interested in a film that glorified flying saucers. I still would have refused, only with more regret.” The italics are mine. Asimov, as I’ve noted before, despised flying saucers, and he would have dismissed any movie that took them seriously as inherently unworthy of consideration. (The editor John W. Campbell was unusually cautious on the subject, writing of the UFO phenomenon in Astounding in 1959: “Its nature and cause are totally indeterminable from the data and the technical understanding available to us at the time.” Yet Asimov felt that even this was going too far, writing that Campbell “seemed to take seriously such things as flying saucers [and] psionic talents.”) From his point of view, he may well have been right to worry about the “glorification” of flying saucers in Close Encounters—its impact on the culture was so great that it seems to have fixed the look of aliens as reported by alleged abductees. And as a man whose brand as a science popularizer and explainer depended on his reputation for rationality and objectivity, he couldn’t allow himself to be associated with such ideas in any way, which may be why he attacked the movie with uncharacteristic savagery. As I’ve written elsewhere, a decade earlier, Asimov had been horrified when his daughter Robyn told him one night that she had seen a flying saucer. 
When he rushed outside and saw “a perfect featureless metallic circle of something like aluminum” in the sky, he was taken aback, and as he ran into the house for his glasses, he said to himself: “Oh no, this can’t happen to me.” It turned out to be the Goodyear blimp, and Asimov recalled: “I was incredibly relieved!” But his daughter may have come even closer to the truth when she said years later to the New York Times: “He thought he saw his career going down the drain.”

The greatest trick


In the essay collection Candor and Perversion, the late critic Roger Shattuck writes: “The world scoffs at old ideas. It distrusts new ideas. It loves tricks.” He never explains what he means by “trick,” but toward the end of the book, in a chapter on Marcel Duchamp, he quotes a few lines from the poet Charles Baudelaire from the unpublished preface to Flowers of Evil:

Does one show to a now giddy, now indifferent public the working of one’s devices? Does one explain all the revision and improvised variations, right down to the way one’s sincerest impulses are mixed in with tricks and with the charlatanism indispensable to the work’s amalgamation?

Baudelaire is indulging here in an analogy from the theater—he speaks elsewhere of “the dresser’s and the decorator’s studio,” “the actor’s box,” and “the wrecks, makeup, pulleys, chains.” A trick, in this sense, is a device that the artist uses to convey an idea that also draws attention to itself, in the same way that we can simultaneously notice and accept certain conventions when we’re watching a play. In a theatrical performance, the action and its presentation are so intermingled that we can’t always say where one leaves off and the other begins, and we’re always aware, on some level, that we’re looking at actors on a stage behaving in a fashion that is necessarily stylized and artificial. In other art forms, we’re conscious of these tricks to a greater or lesser extent, and while artists are usually advised that such technical elements should be subordinated to the story, in practice, we often delight in them for their own sake.

For an illustration of the kind of trick that I mean, I can’t think of any better example than the climax of The Godfather, in which Michael Corleone attends the baptism of his godson—played by the infant Sofia Coppola—as his enemies are executed on his orders. This sequence seems as inevitable now as any scene in the history of cinema, but it came about almost by accident. The director Francis Ford Coppola had the idea to combine the christening with the killings after all of the constituent parts had already been shot, which left him with the problem of assembling footage that hadn’t been designed to fit together. As Michael Sragow recounts in The New Yorker:

[Editor Peter] Zinner, too, made a signal contribution. In a climactic sequence, Coppola had the stroke of genius (confirmed by Puzo) to intercut Michael’s serving as godfather at the christening of Connie’s baby with his minions’ savagely executing the Corleone family’s enemies. But, Zinner says, Coppola left him with thousands of feet of the baptism, shot from four or five angles as the priest delivered his litany, and relatively few shots of the assassins doing their dirty work. Zinner’s solution was to run the litany in its entirety on the soundtrack along with escalating organ music, allowing different angles of the service to dominate the first minutes, and then to build to an audiovisual crescendo with the wave of killings, the blaring organ, the priest asking Michael if he renounces Satan and all his works—and Michael’s response that he does renounce them. The effect sealed the movie’s inspired depiction of the Corleones’ simultaneous, duelling rituals—the sacraments of church and family, and the murders in the street.

Coppola has since described Zinner’s contribution as “the inspiration to add the organ music,” but as this account makes clear, the editor seems to have figured out the structure and rhythm of the entire sequence, building unforgettably on the director’s initial brainstorm.

The result speaks for itself. It’s hard to think of a more powerful instance in movies of the form of a scene, created by cuts and juxtaposition, merging with the power of its storytelling. As we watch it, consciously or otherwise, we respond both to its formal audacity and to the ideas and emotions that it expresses. It’s the ultimate trick, as Baudelaire defines it, and it also inspired one of my favorite passages of criticism, in David Thomson’s entry on Coppola in The Biographical Dictionary of Film:

When The Godfather measured its grand finale of murder against the liturgy of baptism, Coppola seemed mesmerized by the trick, and its nihilism. A Buñuel, by contrast, might have made that sequence ironic and hilarious. But Coppola is not long on those qualities, and he could not extricate himself from the engineering of scenes. The identification with Michael was complete and stricken.

Before reading these lines, I had never considered the possibility that the baptism scene could be “ironic and hilarious,” or indeed anything other than how it so overwhelmingly presents itself, although it might easily have played that way without the music. And I’ve never forgotten Thomson’s assertion that Coppola was mesmerized by his own trick, as if it had arisen from somewhere outside of himself. (It might be even more accurate to say that coming up with the notion that the sequences ought to be cut together is something altogether different from actually witnessing the result, after Zinner assembled all the pieces and added Bach’s Passacaglia and Fugue in C minor—which, notably, entwines three different themes.) Coppola was so taken by the effect that he reused it, years later, for a similar sequence in Bram Stoker’s Dracula, admitting cheerfully on the commentary track that he was stealing from himself.

It was a turning point both for Coppola and for the industry as a whole. Before The Godfather, Coppola had been a novelistic director of small, quirky stories, and afterward, like Michael coming into his true inheritance, he became the engineer of vast projects, following up on the clues that he had planted here for himself. (It’s typical of the contradictions of his career that he placed his own baby daughter at the heart of this sequence, which means that he could hardly keep from viewing the most technically nihilistic scene in all his work as something like a home movie.) And while this wasn’t the earliest movie to invite the audience to revel in its structural devices—half of Citizen Kane consists of moments like this—it may have been the first since The Birth of a Nation to do so while also becoming the most commercially successful film of all time. Along the way, it subtly changed us. In our movies, as in our politics, we’ve become used to thinking as much about how our stories are presented as about what they say in themselves. We can even come to prefer trickery, as Shattuck warns us, to true ideas. This doesn’t mean that we should renounce genuine artistic facility of the kind that we see here, as opposed to its imitation or its absence, any more than Michael can renounce Satan. But the consequences of this confusion can be profound. Coppola, the orchestrator of scenes, came to identify with the mafioso who executed his enemies with ruthless efficiency, and the beauty of Michael’s moment of damnation went a long way toward turning him into an attractive, even heroic figure, an impression that Coppola spent most of The Godfather Parts II and III trying in vain to correct. Pacino’s career was shaped by this moment as well. And we have to learn to distinguish between tricks and the truth, especially when they take pains to conceal themselves.
As Baudelaire says somewhere else: “The greatest trick the devil ever pulled was convincing the world he didn’t exist.”

Thinking on your feet


The director Elia Kazan, whose credits included A Streetcar Named Desire and On the Waterfront, was proud of his legs. In his memoirs, which the editor Robert Gottlieb calls “the most gripping and revealing book I know about the theater and Hollywood,” Kazan writes of his childhood:

Everything I wanted most I would have to obtain secretly. I learned to conceal my feelings and to work to fulfill them surreptitiously…What I wanted most I’d have to take—quietly and quickly—from others. Not a logical step, but I made it at a leap. I learned to mask my desires, hide my truest feeling; I trained myself to live in deprivation, in silence, never complaining, never begging, in isolation, without expecting kindness or favors or even good luck…I worked waxing floors—forty cents an hour. I worked at a small truck farm across the road—fifty cents an hour. I caddied every afternoon I could at the Wykagyl Country Club, carrying the bags of middle-aged women in long woolen skirts—a dollar a round. I spent nothing. I didn’t take trolleys; I walked. Everywhere. I have strong leg muscles from that time.

The italics are mine, but Kazan emphasized his legs often enough on his own. In an address that he delivered at a retrospective at Wesleyan University in 1973, long after his career had peaked, he told the audience: “Ask me how with all that knowledge and all that wisdom, and all that training and all those capabilities, including the strong legs of a major league outfielder, how did I manage to mess up some of the films I’ve directed so badly?”

As he grew older, Kazan’s feelings about his legs became inseparable from his thoughts on his own physical decline. In an essay titled “The Pleasures of Directing,” which, like the address quoted above, can be found in the excellent book Kazan on Directing, Kazan observes sadly: “They’ve all said it. ‘Directing is a young man’s game.’ And time passing proves them right.” He continues:

What goes first? With an athlete, the legs go first. A director stands all day, even when he’s provided with chairs, jeeps, and limos. He walks over to an actor, stands alongside and talks to him; with a star he may kneel at the side of the chair where his treasure sits. The legs do get weary. Mine have. I didn’t think it would happen because I’ve taken care of my body, always exercised. But I suddenly found I don’t want to play singles. Doubles, okay. I stand at the net when my partner serves, and I don’t have to cover as much ground. But even at that…

I notice also that I want a shorter game—that is to say also, shorter workdays, which is the point. In conventional directing, the time of day when the director has to be most able, most prepared to push the actors hard and get what he needs, usually the close-ups of the so-called “master scene,” is in the afternoon. A director can’t afford to be tired in the late afternoon. That is also the time—after the thoughtful quiet of lunch—when he must correct what has not gone well in the morning. He better be prepared, he better be good.

As far as artistic advice goes, this is as close to the real thing as it gets. But it can only occur to an artist who can no longer take for granted the energy on which he has unthinkingly relied for most of his life.

Kazan isn’t the only player in the film industry to draw a connection between physical strength—or at least stamina—and the medium’s artistic demands. Guy Hamilton, who directed Goldfinger, once said: “To be a director, all you need is a hide like a rhinoceros—and strong legs, and the ability to think on your feet…Talent is something else.” None other than Christopher Nolan believes so much in the importance of standing that he’s institutionalized it on his film sets, as Mark Rylance recently told The Independent: “He does things like he doesn’t like having chairs on set for actors or bottles of water, he’s very particular…[It] keeps you on your toes, literally.” Walter Murch, meanwhile, noted that a film editor needed “a strong back and arms” to lug around reels of celluloid, which is less of a concern in the days of digital editing, but still worth bearing in mind. Murch famously likes to stand while editing, like a surgeon in the operating room:

Editing is sort of a strange combination of being a brain surgeon and a short-order cook. You’ll never see those guys sitting down on the job. The more you engage your entire body in the process of editing, the better and more balletic the flow of images will be. I might be sitting when I’m reviewing material, but when I’m choosing the point to cut out of a shot, I will always jump out of the chair. A gunfighter will always stand, because it’s the fastest, most accurate way to get to his gun. Imagine High Noon with Gary Cooper sitting in a chair. I feel the fastest, most accurate way to choose the critically important frame I will cut out of a shot is to be standing. I have kind of a gunfighter’s stance.

And as Murch suggests, this applies as much to solitary craftsmen as it does to the social and physical world of the director. Philip Roth, who worked at a lectern, claimed that he paced half a mile for every page that he wrote, while the mathematician Robert P. Langlands reflected: “[My] many hours of physical effort as a youth also meant that my body, never frail but also initially not particularly strong, has lasted much longer than a sedentary occupation might have otherwise permitted.” Standing and walking can be a proxy for mental and moral acuity, as Bertrand Russell implied so memorably:

Our mental makeup is suited to a life of very severe physical labor. I used, when I was younger, to take my holidays walking. I would cover twenty-five miles a day, and when the evening came I had no need of anything to keep me from boredom, since the delight of sitting amply sufficed. But modern life cannot be conducted on these physically strenuous principles. A great deal of work is sedentary, and most manual work exercises only a few specialized muscles. When crowds assemble in Trafalgar Square to cheer to the echo an announcement that the government has decided to have them killed, they would not do so if they had all walked twenty-five miles that day.

Such energy, as Kazan reminds us, isn’t limitless. I still think of myself as relatively young, but I don’t have the raw mental or physical resources that I did fifteen years ago, and I’ve had to come up with various tricks—what a pickup basketball player might call “old-man shit”—to maintain my old levels of productivity. I’ve written elsewhere that certain kinds of thinking are best done sitting down, but there’s also a case to be made for thinking on your feet. Standing is the original power pose, and perhaps the only one likely to have any real effects. And it’s in the late afternoons, both of a working day and of an entire life, that you need to stand and deliver.
