Alec Nevala-Lee

Thoughts on art, creativity, and the writing life.

My great books #9: On Directing Film

On Directing Film

Note: I’m counting down my ten favorite works of nonfiction, in order of the publication dates of their first editions, and with an emphasis on books that deserve a wider readership. You can find the earlier installments here.

When it comes to giving advice on something as inherently unteachable as writing, books on the subject tend to fall into one of three categories. The first treats the writing manual as an extension of the self-help genre, offering what amounts to an extended pep talk that is long on encouragement but short on specifics. A second, more useful approach is to consolidate material on a variety of potential strategies, either through the voices of multiple writers—as George Plimpton did so wonderfully in The Writer’s Chapbook, which assembles the best of the legendary interviews given to The Paris Review—or through the perspective of a writer and teacher, like John Gardner, generous enough to consider the full range of what the art of fiction can be. And the third, exemplified by David Mamet’s On Directing Film, is to lay out a single, highly prescriptive recipe for constructing stories. This last approach might seem unduly severe. Yet after a lifetime of reading what other writers have to say on the subject, Mamet’s little book is still the best I’ve ever found, not just for film, but for fiction and narrative nonfiction as well. On one level, it can serve as a starting point for your own thoughts about how the writing process should look: Mamet provides a strict, almost mathematical set of tools for building a plot from first principles, and even if you disagree with his methods, they clarify your thinking in a way that a more generalized treatment might not. But even if you just take it at face value, it’s still the closest thing I know to a foolproof formula for generating rock-solid first drafts. (If Mamet himself has a flaw as a director, it’s that he often stops there.) In fact, it’s so useful, so lucid, and so reliable that I sometimes feel reluctant to recommend it, as if I were giving away an industrial secret to my competitors.

Mamet’s principles are easy to grasp, but endlessly challenging to follow. You start by figuring out what every scene is about, mostly by asking one question: “What does the protagonist want?” You then divide each scene up into a sequence of beats, consisting of an immediate objective and a logical action that the protagonist takes to achieve it, ideally in a form that can be told in visual terms, without the need for expository dialogue. And you repeat the process until the protagonist succeeds or fails at his or her ultimate objective, at which point the story is over. This may sound straightforward, but as soon as you start forcing yourself to think this way consistently, you discover how tough it can be. Mamet’s book consists of a few simple examples, teased out in a series of discussions at a class he taught at Columbia, and it’s studded with insights that once heard are never forgotten: “We don’t want our protagonist to do things that are interesting. We want him to do things that are logical.” “Here is a tool—choose your shots, beats, scenes, objectives, and always refer to them by the names you chose.” “Keep it simple, stupid, and don’t violate those rules that you do know. If you don’t know which rule applies, just don’t muck up the more general rules.” “The audience doesn’t want to read a sign; they want to watch a motion picture.” “A good writer gets better only by learning to cut, to remove the ornamental, the descriptive, the narrative, and especially the deeply felt and meaningful.” “Now, why did all those Olympic skaters fall down? The only answer I know is that they hadn’t practiced enough.” And my own personal favorite: “The nail doesn’t have to look like a house; it is not a house. It is a nail. If the house is going to stand, the nail must do the work of a nail. To do the work of the nail, it has to look like a nail.”
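(Purely as an illustration, and not anything that appears in Mamet’s book, here is a minimal sketch in Python of the structure his method implies: a story as a chain of scenes, each scene organized around what the protagonist wants, each beat pairing an immediate objective with a logical, filmable action, and the whole thing ending only when the ultimate objective succeeds or fails. All names and the toy example are my own.)

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Beat:
    """One unit of a scene: an immediate objective and the logical action taken toward it."""
    objective: str  # what the protagonist needs right now
    action: str     # a concrete, filmable step, stated without expository dialogue

@dataclass
class Scene:
    """Each scene is organized around a single question: what does the protagonist want?"""
    protagonist_want: str
    beats: List[Beat] = field(default_factory=list)

@dataclass
class Story:
    """The story is over when the ultimate objective clearly succeeds or fails."""
    ultimate_objective: str
    scenes: List[Scene] = field(default_factory=list)
    resolution: Optional[str] = None  # "succeeded", "failed", or None while still unwritten

    def is_finished(self) -> bool:
        return self.resolution in ("succeeded", "failed")

# A toy outline, with every beat stated in visual, logical terms:
draft = Story(
    ultimate_objective="retrieve the stolen ledger",
    scenes=[
        Scene(
            protagonist_want="to find out who has the ledger",
            beats=[
                Beat(objective="get into the records office", action="borrows the janitor's keys"),
                Beat(objective="identify the last visitor", action="checks the sign-in sheet"),
            ],
        ),
    ],
)
```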

Written by nevalalee

November 12, 2015 at 9:00 am

My great books #7: The Biographical Dictionary of Film

The New Biographical Dictionary of Film

Note: I’m counting down my ten favorite works of nonfiction, in order of the publication dates of their first editions, and with an emphasis on books that deserve a wider readership. You can find the earlier installments here.

David Thomson’s Biographical Dictionary of Film is one of the weirdest books in all of literature, and more than the work of any other critic, it has subtly changed the way I think about both life and the movies. His central theme—which is stated everywhere and nowhere—is the essential strangeness of turning shadows on a screen into men and women who can seem more real to us than the people in our own lives. His writing isn’t conventional criticism so much as a single huge work of fiction, with Thomson himself as both protagonist and nemesis. It isn’t a coincidence that one of his earliest books was a biography of Laurence Sterne, author of Tristram Shandy: his entire career can be read as one long Shandean exercise, in which Thomson, as a fictional character in his own work, is cheerfully willing to come off as something of a creep, as long as it illuminates our reasons for going to the movies. And his looniness is part of his charm. Edmund Wilson once playfully speculated that George Saintsbury, the great English critic, invented his own Toryism “in the same way that a dramatist or novelist arranges contrasting elements,” and there are times when I suspect that Thomson is doing much the same thing. (If his work is a secret novel, its real precursor is Pale Fire, in which Thomson plays the role of Kinbote, and every article seems to hint darkly at some monstrous underlying truth. A recent, bewildered review of his latest book on The A.V. Club is a good example of the reaction he gets from readers who aren’t in on the joke.)

But if you leave him with nothing but his perversity and obsessiveness, you end up with Armond White, while Thomson succeeds because he’s also lucid, encyclopedically informed, and ultimately sane, although he does his best to hide it. The various editions of The Biographical Dictionary of Film haven’t been revised so much as they’ve accumulated: Thomson rarely goes back to rewrite earlier entries, but tacks on new thoughts to the end of each article, so that it grows by a process of accretion, like a coral reef. The result can be confusing, but when I go back to his earlier articles, I remember at once why this is still the essential book on film. I’ll look at Thomson on Coppola (“He is Sonny and Michael Corleone for sure, but there are traces of Fredo, too”); on Sydney Greenstreet (“Indeed, there were several men trapped in his grossness: the conventional thin man; a young man; an aesthete; a romantic”); or on Eleanor Powell’s dance with Astaire in Broadway Melody of 1940 (“Maybe the loveliest moment in films is the last second or so, as the dancers finish, and Powell’s alive frock has another half-turn, like a spirit embracing the person”). Or, perhaps most memorably of all, his thoughts on Citizen Kane, which, lest we forget, is about the futile search of a reporter named Thompson:

As if Welles knew that Kane would hang over his own future, regularly being used to denigrate his later works, the film is shot through with his vast, melancholy nostalgia for self-destructive talent…Kane is Welles, just as every apparent point of view in the film is warmed by Kane’s own memories, as if the entire film were his dream in the instant before death.

It’s a strange, seductive, indispensable book, and to paraphrase Thomson’s own musings on Welles, it’s the greatest career in film criticism, the most tragic, and the one with the most warnings for the rest of us.

Written by nevalalee

November 10, 2015 at 9:00 am

The films of a life

Marcello Mastroianni and Anita Ekberg in La Dolce Vita

The other week, while musing on Richard Linklater’s Boyhood—which I still haven’t seen—I noted that we often don’t have the chance to experience the movies that might speak most urgently to us at the later stages of our lives. Many of us who love film encounter the movies we love at a relatively young age, and we spend our teens and twenties devouring the classics that came out before we were born. And that’s exactly how it should be: when we’re young, we have the time and energy to explore enormous swaths of the canon, and we absorb images and stories that will enrich the years to come. Yet we’re also handicapped by being relatively inexperienced and emotionally circumscribed, at least compared to later in life. We’re wowed by technical excellence, virtuoso effects, relentless action, or even just a vision of the world in which we’d like to believe. And by the time we’re old enough to judge such things more critically, we find that we aren’t watching movies as much as we once were, and it takes a real effort to seek out the more difficult, reflective masterpieces that might provide us with signposts for the way ahead.   

What we can do, however, is look back at the movies we loved when we were younger and see what they have to say to us now. I’ve always treasured Roger Ebert’s account of his shifting feelings toward Fellini’s La Dolce Vita, which he called “a page-marker in my own life”:

Movies do not change, but their viewers do. When I saw La Dolce Vita in 1960, I was an adolescent for whom “the sweet life” represented everything I dreamed of: sin, exotic European glamour, the weary romance of the cynical newspaperman. When I saw it again, around 1970, I was living in a version of Marcello’s world; Chicago’s North Avenue was not the Via Veneto, but at 3 a.m. the denizens were just as colorful, and I was about Marcello’s age.

When I saw the movie around 1980, Marcello was the same age, but I was ten years older, had stopped drinking, and saw him not as a role model but as a victim, condemned to an endless search for happiness that could never be found, not that way. By 1991, when I analyzed the film a frame at a time at the University of Colorado, Marcello seemed younger still, and while I had once admired and then criticized him, now I pitied and loved him.

Moira Shearer in The Red Shoes

And when we realize how our feelings toward certain movies have shifted, it can be both moving and a little terrifying. Life transforms us so insidiously that it’s often only when we compare our feelings to a fixed benchmark that we become aware of the changes that have taken place. Watching Citizen Kane at twenty and again at thirty is a disorienting experience, especially when you’re hoping to make a life for yourself in the arts. Orson Welles was twenty-five when he directed it, and when you see it at twenty, it feels like both an inspiration and a challenge: part of you believes, recklessly, that you could be Welles, and the possibilities of the next few years of your life seem limitless. Looking back at it at thirty, after a decade’s worth of effort and compromise, you start to realize both the absurdity of his achievement and how singular it really is, and the movie seems suffused with what David Thomson calls Welles’s “vast, melancholy nostalgia for self-destructive talent.” You begin to understand the ambivalence with which more experienced filmmakers regarded the Wellesian monster of energy and ambition, and it quietly affects the way you think about Kane’s reflections on time and old age.

The more personal our attachment to a movie, the harder these lessons can be to swallow. The other night, I sat down to watch part of The Red Shoes, my favorite movie of all time, for the first time in several years. It’s a movie I thought I knew almost frame by frame, and I do, but I hadn’t taken the emotional component into account. I’ve loved this movie since I first saw it in high school, both for its incredible beauty and for the vision it offered of a life in the arts. Later, as I rewatched it in college and in my twenties, it provided a model, a warning, and a reminder of the values I was trying to honor. Now, after I’ve been through my own share of misadventures as a writer, it seems simultaneously like a fantasy and a bittersweet emblem of a world that still seems just out of reach. I’m older than many of the characters now—although I have yet to enter my Boris Lermontov phase—and my heart aches a little when I listen to Julian’s wistful, ambitious line: “I wonder what it feels like to wake up in the morning and find oneself famous.” If The Red Shoes once felt like a promise of what could be, it’s starting to feel to me now like what could have been, or might be again. Ten years from now, it will probably feel like something else entirely. And when that time comes, I’ll let you know what I find.

Written by nevalalee

July 23, 2014 at 9:30 am

The best closing shots in film

Lawrence of Arabia

Note: Since I’m taking a deserved break for the holidays, I’m reposting a couple of my favorite entries from early in this blog’s run. This post was originally published, in a slightly different form, on January 13, 2011. Visual spoilers follow. Cover your eyes!

As I’ve noted before, the last line of a novel is almost always of interest, but the last line of a movie generally isn’t. It isn’t hard to understand why: movies are primarily a visual medium, and there’s a sense in which even the most brilliant dialogue can often seem beside the point. And as much as the writer in me wants to believe otherwise, audiences don’t go to the movies to listen to words: they go to look at pictures.

Perhaps inevitably, then, there are significantly more great closing shots in film than there are great curtain lines. Indeed, the last shot of nearly every great film is memorable, so the list of finalists can easily expand into the dozens. Here, though, in no particular order, are twelve of my favorites. Click for the titles:

Jerry Goldsmith on the art of the film score

Jerry Goldsmith

Working to timings and synchronising your musical thoughts with the film can be stimulating rather than restrictive. Scoring is a limitation but like any limitation it can be made to work for you. Verdi, except for a handful of pieces, worked best when he was “turned on” by a libretto. The most difficult problem in music is form, and in a film you already have this problem solved for you. You are presented with a basic structure, a blueprint, and provided the film has been well put together, well edited, it often suggests its own rhythms and tempo. The quality of the music is strictly up to the composer. Many people seem to assume that because film music serves the visual it must be something of secondary value. Well, the function of any art is to serve a purpose in society. For many years, music and painting served religion. The thing to bear in mind is that film is the youngest of the arts, and that scoring is the youngest of the music arts. We have a great deal of development ahead of us.

Jerry Goldsmith, quoted in Music for the Movies

Written by nevalalee

July 7, 2013 at 9:50 am

Daniel Clowes on the lessons of film editing

To me, the most useful experience in working in “the film industry” has been watching and learning the editing process. You can write whatever you want and try to film whatever you want, but the whole thing really happens in that editing room. How do you edit comics? If you do them in a certain way, the standard way, it’s basically impossible. That’s what led me to this approach of breaking my stories into segments that all have a beginning and end on one, two, three pages. This makes it much easier to shift things around, to rearrange parts of the story sequence. It’s something that I’m really interested in trying to figure out, but there are pluses and minuses to every approach. For instance, I think if you did all your panels exactly the same size and left a certain amount of “breathing room” throughout the story, you could make fairly extensive after-the-fact changes, but you’d sacrifice a lot by doing that…

It’s a very mysterious process: you put together a cut of the film and at the first viewing it always seems just terrible, then you work on it for two weeks and you can’t imagine what else you could do with it; then six months later, you’re still working on it and making significant changes every day. It’s very odd, but you kind of know when it’s there.

Daniel Clowes, quoted by Todd Hignite in In the Studio: Visits with Contemporary Cartoonists

Written by nevalalee

October 28, 2012 at 9:50 am

Fiction into film: L.A. Confidential

Of all the movies I’ve ever seen, Curtis Hanson’s adaptation of James Ellroy’s L.A. Confidential has influenced my own work the most. This isn’t to say that it’s my favorite movie of all time—although it’s certainly in the top ten—or even that I find its themes especially resonant: I have huge admiration for Ellroy’s talents, but it’s safe to say that he and I are operating under a different set of obsessions. Rather, it’s the structure of the film that I find so compelling: three protagonists, with three main stories, that interweave and overlap in unexpected ways until they finally converge at the climax. It’s a narrative structure that has influenced just about every novel I’ve ever written, or tried to write—and the result, ironically, has made my own work less adaptable for the movies.

Movies, you see, aren’t especially good at multiple plots and protagonists. Most screenplays center, with good reason, on a single character, the star part, whose personal story is the story of the movie. Anything that departs from this form is seen as inherently problematic, which is why L.A. Confidential’s example is so singular, so seductive, and so misleading. As epic and layered as the movie is, Ellroy’s novel is infinitely larger: it covers a longer span of time, with more characters and subplots, to the point where entire storylines—like that of a particularly gruesome serial killer—were jettisoned completely for the movie version. Originally it was optioned as a possible miniseries, which would have made a lot of sense, but to the eternal credit of Hanson and screenwriter Brian Helgeland, they decided that there might also be a movie here.

To narrow things down, they started with my own favorite creative tool: they made a list. As the excellent bonus materials for the film make clear, Hanson and Helgeland began with a list of characters or plot points they wanted to keep: Bloody Christmas, the Nite Owl massacre, Bud White’s romance with Lynn Bracken, and so on. Then they ruthlessly pared away the rest of the novel, keeping the strands they liked, finding ways to link them together, and writing new material when necessary, to the point where some of the film’s most memorable moments—including the valediction of Jack Vincennes and the final showdown at the Victory Motel, which repurposes elements of the book’s prologue—are entirely invented. And the result, as Ellroy says, was a kind of “alternate life” for the characters he had envisioned.

So what are the lessons here? For aspiring screenwriters, surprisingly few: a film like L.A. Confidential appears only a couple of times each decade, and the fact that it was made at all, without visible compromise, is one of the unheralded miracles of modern movies. If nothing else, though, it’s a reminder that adaptation is less about literal faithfulness than fidelity of spirit. L.A. Confidential may keep less than half of Ellroy’s original material, but it feels just as turbulent and teeming with possibility as the novel, and gives us the sense that some of the missing stories may still be happening here, only slightly offscreen. Any attempt to adapt similarly complex material without that kind of winnowing process, as in the unfortunate Watchmen, usually leaves audiences bewildered. The key is to find the material’s alternate life. And no other movie has done it so well.

Written by nevalalee

August 8, 2011 at 10:12 am

Fiction into film: The Silence of the Lambs

It’s been just over twenty years now since The Silence of the Lambs was released in theaters, and the passage of time—and its undisputed status as a classic—sometimes threatens to blind us to the fact that it’s such a peculiar movie. At the time, it certainly seemed like a dubious prospect: it had a director known better for comedy than suspense, an exceptional cast but no real stars, and a story whose violence verged on outright kinkiness. If it emphatically overcame those doubts, it was with its mastery of tone and style, a pair of iconic performances, and, not incidentally, the best movie poster of the modern era. And the fact that it not only became a financial success but took home the Academy Award for Best Picture, as well as the four other major Oscars, remains genre filmmaking’s single most unqualified triumph.

It also had the benefit of some extraordinary source material. I’ve written at length about Thomas Harris elsewhere, but what’s worth emphasizing about his original novel is that it’s the product of several diverse temperaments. Harris began his career as a journalist, and there’s a reportorial streak running through all his best early books, with their fascination with the technical language, tools, and arcana of various esoteric professions, from forensic profiling to brain tanning. He also has a Gothic sensibility that has only grown more pronounced with time, a love of language fed by the poetry of William Blake and John Donne, and, in a quality that is sometimes undervalued, the instincts of a great pulp novelist. The result is an endlessly fascinating book poised halfway between calculated bestseller and major novel, and all the better for that underlying tension.

Which is why it pains me as a writer to say that as good as the book is, the movie is better. Part of this is due to the inherent differences in the way we experience movies and popular fiction: for detailed character studies, novels have the edge, but for a character who is seen mostly from the outside, as an enigma, nothing in Harris prepares us for what Anthony Hopkins does with Hannibal Lecter, even if it amounts to nothing more than a few careful acting decisions for his eyes and voice. It’s also an example of how a popular novel can benefit from an intelligent, respectful adaptation. Over time, Ted Tally’s fine screenplay has come to seem less like a variation on Harris’s novel than a superlative second draft: Tally keeps all that is good in the book, pares away the excesses, and even improves the dialogue. (It’s the difference between eating a census taker’s liver with “a big Amarone” and “a nice Chianti.”)

And while the movie is a sleeker, more streamlined animal, it still benefits from the novel’s strangeness. For better or worse, The Silence of the Lambs created an entire genre—the sleek, modern serial killer movie—but like most founding works, it has a fundamental oddity that leaves it out of place among its own successors. The details of its crimes are horrible, but what lingers are its elegance, its dry humor, and the curious rhythms of its central relationship, which feels like a love story in ways that Hannibal made unfortunately explicit. It’s genuinely concerned with women, even as it subjects them to horrible fates, and in its look and mood, it’s a work of stark realism shading inexorably into a fairy tale. That ability to combine strangeness with ruthless efficiency is the greatest thing a thriller in any medium can do. Few movies, or books, have managed it since, even after twenty years of trying.

Written by nevalalee

July 12, 2011 at 8:39 am

Fiction into film: The English Patient

A few months ago, after greatly enjoying The Conversations, Michael Ondaatje’s delightful book-length interview with Walter Murch, I decided to read Ondaatje’s The English Patient for the first time. I went through it very slowly, only a handful of pages each day, in parallel with my own work on the sequel to The Icon Thief. Upon finishing it last week, I was deeply impressed, not just by the writing, which had drawn me to the book in the first place, but also by the novel’s structural ingenuity—derived, Ondaatje says, from a long process of rewriting and revision—and the richness of its research. This is one of the few novels where detailed historical background has been integrated seamlessly into the poetry of the story itself, and it reflects a real, uniquely novelistic curiosity about other times and places. It’s a great book.

Reading The English Patient also made me want to check out the movie, which I hadn’t seen in more than a decade, not since I watched it as part of a special screening for a college course. I recalled admiring it, although in a rather detached way, and found that I didn’t remember much about the story, aside from a few moments and images (and the phrase “suprasternal notch”). But I sensed it would be worth revisiting, both because I’d just finished the book and because I’ve become deeply interested, over the past few years, in the career of editor Walter Murch. Murch is one of film’s last true polymaths, an enormously intelligent man who just happened to settle into editing and sound design, and The English Patient, for which he won two Oscars (including the first ever awarded for a digitally edited movie), is a landmark in his career. It was with a great deal of interest, then, that I watched the film again last night.

First, the good news. The adaptation, by director Anthony Minghella, is very intelligently done. It was probably impossible to film Ondaatje’s full story, with its impressionistic collage of lives and memories, in any kind of commercially viable way, so the decision was wisely made to focus on the central romantic episode, the doomed love affair between Almásy (Ralph Fiennes) and Katharine Clifton (Kristin Scott Thomas). Doing so involved inventing a lot of new, explicitly cinematic material, some satisfying (the car crash and sandstorm in the desert), some less so (Almásy’s melodramatic escape from the prison train). The film also makes the stakes more personal: the mission of Caravaggio (Willem Dafoe) is less about simple fact-finding, as it was in the book, than about revenge. And the new ending, with Almásy silently asking Hana (Juliette Binoche) to end his life, gives the film a sense of resolution that the book deliberately lacks.

These changes, while extensive, are smartly done, and they respect the book while acknowledging its limitations as source material. As Roger Ebert points out in his review of Apocalypse Now, another milestone in Murch’s career, movies aren’t very good at conveying abstract ideas, but they’re great for showing us “the look of a battle, the expression on a face, the mood of a country.” On this level, The English Patient sustains comparison with the works of David Lean, with a greater interest in women, and remains, as David Thomson says, “one of the most deeply textured of films.” Murch’s work, in particular, is astonishing, and the level of craft on display here is very impressive.

Yet the pieces don’t quite come together. The novel’s tentative, intellectual nature, which the adaptation doesn’t try to match, infects the movie as well. It feels like an art film that has willed itself into being an epic romance, when in fact the great epic romances need to be a little vulgar—just look at Gone With the Wind. Doomed romances may obsess their participants in real life, but in fiction, seen from the outside, they can seem silly or absurd. The English Patient understands a great deal about the craft of the romantic epic, the genre in which it has chosen to plant itself, but nothing of its absurdity. In the end, it’s just too intelligent, too beautifully made, to move us on more than an abstract level. It’s a heroic effort; I just wish it were something a little more, or a lot less.

The best closing shots in film

Warning: Visual spoilers follow. Cover your eyes!

As I’ve noted before, the last line of a novel is almost always of interest, but the last line of a movie generally isn’t. It isn’t hard to understand why: movies are primarily a visual medium, after all, and there’s a sense in which even the most brilliant dialogue can often seem beside the point. And as much as the writer in me wants to believe otherwise, audiences don’t go to the movies to listen to words: they go to look at pictures.

Perhaps inevitably, then, there are significantly more great closing shots in film than there are great curtain lines. Indeed, the last shot of nearly every great film is memorable, so the list of finalists can easily expand into the dozens. Here, though, in no particular order, are twelve of my favorites. Click or mouse over for the titles:

The sound and the furry

Last week, the podcast 99% Invisible devoted an episode to the editing and sound design tricks used by the makers of nature documentaries. For obvious reasons, most footage in the wild is captured from a distance using zoom lenses, and there’s no equivalent for sound, which means that unless David Attenborough himself is standing in the shot, the noises that you’re hearing were all added later. Foley artists will recreate hoofbeats or the footsteps of lions by running their hands over pits filled with gravel, while animal vocalizations can be taken from sound catalogs or captured by recordists working nowhere near the original shoot. This kind of artifice strikes me as forgivable, but there are times when the manipulation of reality crosses a line. In the fifties Disney documentary White Wilderness, lemmings were shown hurling themselves into the ocean, which required a helping hand: “The producers took the lemmings to a cliff in Alberta and, in some scenes, used a turntable device to throw them off the edge. Not only was it staged, but lemmings don’t even do this on their own. Scientists now know that the idea of a mass lemming suicide ritual is entirely apocryphal.” And then there’s the movie Wolves, which rented wolves from a game farm and filmed them in an artificial den. When Chris Palmer, the director, was asked about the scene at a screening, it didn’t go well:

Palmer’s heart sank, but he decided to come clean, and when he did, he could feel the excitement leave the room. Up to this moment, he had assumed people wouldn’t care. “But they do care,” he realized. “They are assuming they are seeing the truth…things that are authentic and genuine.”

When viewers realize that elements of nature documentaries utilize the same techniques as other genres of filmmaking, they tend to feel betrayed. When you think about the conditions under which such movies are produced, however, it shouldn’t be surprising. If every cut is a lie, as Godard famously said, that’s even more true when you’re dealing with animals in the wild. As David Mamet writes in On Directing Film:

Documentaries take basically unrelated footage and juxtapose it in order to give the viewer the idea the filmmaker wants to convey. They take footage of birds snapping a twig. They take footage of a fawn raising its head. The two shots have nothing to do with each other. They were shot days or years, and miles, apart. And the filmmaker juxtaposes the images to give the viewer the idea of great alertness. The shots have nothing to do with each other. They are not a record of what the protagonist did. They are not a record of how the deer reacted to the bird. They’re basically uninflected images. But they give the viewer the idea of alertness to danger when they are juxtaposed. That’s good filmmaking.

Mamet is trying to make a point about how isolated images—which have little choice but to be “uninflected” when the actors are some birds and a deer—can be combined to create meaning, and he chose this example precisely because the narrative emerges from nothing but that juxtaposition. But it also gets at something fundamental about the grammar of the wildlife documentary itself, which trains us to think about nature in terms of stories. And that’s a fiction in itself.

You could argue that a movie that purports to be educational or “scientific” has no business engaging in artifice of any kind, but in fact, it’s exactly in that context that this sort of manipulation is most justified. Scientific illustration is often used when a subject can’t be photographed directly—as in Ken Marschall’s wonderful paintings for Dr. Robert D. Ballard’s The Discovery of the Titanic—or when more information can be conveyed through an idealized situation. In Sociobiology, Edward O. Wilson writes of Sarah Landry’s detailed drawings: “In the case of the vertebrate species, her compositions are among the first to represent entire societies, in the correct demographic proportions, with as many social interactions displayed as can plausibly be included in one scene.” Landry’s compositions of a troop of baboons or a herd of elephants could never have been captured in a photograph, but they get at a truth that is deeper than reality, or at least more useful. As the nature illustrator Jonathan Kingdon writes in Field Notes on Science and Nature:

Even an outline sketch that bears little relationship to the so-called objectivity of a photograph might actually transmit information to another human being more selectively, sometimes even more usefully, than a photograph. For example, a few quick sketches of a hippopotamus allow the difference between sexes, the peculiar architecture of amphibious existence in a giant quadruped, and the combination of biting and antlerlike clashing of enlarged lower jaws to be appreciated at a glance…”Outline drawings”…can represent, in themselves, artifacts that may correspond more closely with what the brain seeks than the charts of light-fall that photographs represent.

On some level, nature documentaries fall into much the same category, providing us with idealized situations and narratives in order to facilitate understanding. (You could even say that the impulse to find a story in nature is a convenient tool in itself. It’s no more “true” than the stories that we tell about human history, but those narratives, as Walter Pater observes of philosophical theories, “may help us to gather up what might otherwise pass unregarded by us.”) If anything, our discomfort with more extreme kinds of artifice has more to do with an implicit violation of the contract between the filmmaker and the audience. We expect that the documentarian will go into the field and shoot hundreds of hours of footage in search of the few minutes—or seconds—that will amaze us. As Jesse David Fox of Vulture wrote of the stunning iguana and snake chase from the new Planet Earth series: “This incredible footage is the result of the kind of extreme luck that only comes with hard work. A camera crew worked from dusk to dawn for weeks filming the exact spot, hoping something would happen, and if it did, that the camera would be in focus.” After shooting the hatchlings for weeks, they finally ended up with their “hero” iguana, and this combination of luck and preparation is what deserves to be rewarded. Renting wolves or throwing lemmings off a cliff seems like a form of cheating, an attempt to fit the story to the script, rather than working with what nature provided. But the boundary isn’t always clear. Every documentary depends on a sort of artificial selection, with the best clips making it into the finished result in a kind of survival of the fittest. But there’s also a lot of intelligent design.

The vision thing

A few days ago, I was struck by the fact that a mere thirty-one years separated The Thing From Another World from John Carpenter’s The Thing. The former was released on April 21, 1951, the latter on June 25, 1982, and another remake, which I haven’t yet seen, arrived right on schedule in 2011. Three decades might have once seemed like a long time to me, but now, it feels like the blink of an eye. It’s the equivalent of the upcoming remake of David Cronenberg’s The Fly, which was itself a reimagining of a movie that had been around for about the same amount of time. I picked these examples at random, and while there isn’t anything magical about a thirty-year cycle, it isn’t hard to understand. It’s enough time for a new generation of viewers to come of age, but not quite long enough for the memory of the earlier movie to fade entirely. (From my perspective, the films of the eighties seem psychologically far closer than those of the seventies, and not just for reasons of style.) It’s also long enough for the original reaction to a movie to be largely forgotten, so that it settles at what feels like its natural level. When The Thing From Another World first premiered, Isaac Asimov thought that it was one of the worst movies ever made. John W. Campbell, on whose original story it was based, was more generous, writing of the filmmakers: “I think they may be right in feeling that the proposition in ‘Who Goes There?’ is a little strong if presented literally in the screen.” Elsewhere, he noted:

I have an impression that the original version directed and acted with equal restraint would have sent some ten percent of the average movie audience into genuine, no-kidding, semi-permanent hysterical screaming meemies…You think that [story] wouldn’t tip an insipid paranoid psychotic right off the edge if it were presented skillfully?

For once, Campbell, whose predictions were only rarely on the mark, was entirely prescient. By the time John Carpenter’s The Thing came out, The Thing From Another World was seen as a classic, and the remake, which tracked the original novella much more closely, struck many viewers as an assault on its legacy. One of its most vocal detractors, curiously, was Harlan Ellison, who certainly couldn’t be accused of squeamishness. In a column for L.A. Weekly, Ellison wrote that Carpenter “showed some stuff with Halloween,” but dismissed his later movies as “a swan dive into the potty.” He continued:

The Thing…[is a] depredation [Carpenter] attempts to validate by saying he wanted to pull out of the original John W. Campbell story those treasures undiscovered by the original creators…One should not eat before seeing it…and one cannot eat after having seen it.

If the treasures Carpenter sought to unearth are contained in the special effects lunacy of mannequins made to look like men, splitting open to disgorge sentient lasagna that slaughters for no conceivable reason, then John Carpenter is a raider of the lost ark of Art who ought to be sentenced to a lifetime of watching Neil Simon plays and films.

The Thing did not need to be remade, if the best this fearfully limited director could bring forth was a ripoff of Alien in the frozen tundra, this pointless, dehumanized freeway smashup of grisly special effects dreck, flensed of all characterization, philosophy, subtext, or rationality.

Thirty years later, the cycle of pop culture has come full circle, and it’s fair to say that Carpenter’s movie has eclipsed not just Howard Hawks and Christian Nyby, but even Campbell himself. (Having spent the last year trying to explain what I’m doing to people who aren’t science fiction fans, I can testify that if Campbell’s name resonates with them at all, it’s thanks solely to the 1982 version of The Thing.) Yet the two movies also share surprising affinities, and not simply because Carpenter idolized Hawks. Both seem interested in Campbell’s premise mostly for the visual possibilities that it suggests. In the late forties, the rights to “Who Goes There?” were purchased by RKO at the urging of Ben Hecht and Charles Lederer, the latter of whom wrote the script, with uncredited contributions from Hecht and Hawks. The direction was credited to Nyby, Hawks’s protégé, but Hawks was always on the set and later claimed most of the director’s fee, leading to much disagreement over who was responsible for the result. In the end, the film threw out nearly all of Campbell’s story, keeping only the basic premise of an alien spacecraft discovered by researchers in an icy environment, while shifting the setting from Antarctica to Alaska. The filmmakers were clearly more drawn to the idea of a group of men facing danger in isolation, one of Hawks’s favorite themes, and they lavished greater attention on the stock types that they understood—the pilot, the journalist, the girl—than on the scientists, who were reduced to thankless foils. David Thomson has noted that the central principle of Hawks’s work is that “men are more expressive rolling a cigarette than saving the world,” and the contrast has never been more evident than it is here.

And while Hawks isn’t usually remembered as a visual director, The Thing From Another World exists almost entirely as a series of images: the opening titles burning through the screen, the crew standing in a circle on the ice to reveal the shape of the flying saucer underneath, the shock reveal of the alien itself in the doorway. When you account for the passage of time, Carpenter’s version rests on similar foundations. His characters and dialogue are less distinct than Hawks’s, but he also seems to have regarded Campbell’s story primarily as a source of visual problems and solutions. I don’t think I’m alone in saying that the images that are burned into my brain from The Thing probably add up to a total of about five minutes: the limits of its technology mean that we only see it in action for a few seconds at a time. But those images, most of which were the work of the special effects prodigy Rob Bottin, are still the best practical effects I’ve ever seen. (It also includes the single best jump scare in the movies, which is taken all but intact from Campbell.) Even after thirty years, its shock moments are so unforgettable that they have a way of overpowering the rest, as they did for Ellison, and neither version ever really approximates the clean narrative momentum of “Who Goes There?” But maybe that’s how it should be. Campbell, for all his gifts, wasn’t primarily a visual writer, and the movies are a visual medium, particularly in horror and science fiction. Both of the classic versions of The Thing are translations from one kind of storytelling to another, and they stick in the imagination precisely to the extent that they depart from the original. They’re works for the eye, not the mind, which may be why the only memorable line in either movie is the final warning in Hawks’s version, broadcast over the airwaves to the world, telling us to watch the skies.

Blazing the trail

When I’m looking for insights into writing, I often turn to the nonliterary arts, and the one that I’ve found the most consistently stimulating is film editing. This is partially because the basic problem that a movie editor confronts—the arrangement and distillation of a huge mass of unorganized material into a coherent shape—is roughly analogous to what a writer does, but at a larger scale and under conditions of greater scrutiny and pressure, which encourages the development of pragmatic technical solutions. This was especially true in the era before digital editing. As Walter Murch, my hero, has pointed out, one minute of film equals a pound of celluloid. A movie like Apocalypse Now generates something like seven tons of raw footage, so an editor, as Murch notes, needs “a strong back and arms.” At the same time, incredibly, he or she also has to keep track of the location of individual frames, which weigh just a few thousandths of an ounce. With such software tools as Final Cut Pro, this kind of bookkeeping becomes relatively easier, and I doubt that many professional editors are inclined to be sentimental about the old days. But there’s also a sense in which wrestling with celluloid required habits of mind and organization that are slowly being lost. In A Guide for the Perplexed, which I once described as the first book I’d recommend to anyone about almost anything, Werner Herzog writes:

I can edit almost as fast as I can think because I’m able to sink details of fifty hours of footage into my mind. This might have something to do with the fact that I started working on film, when there was so much celluloid about the place that you had to know where absolutely every frame was. But my memory of all this footage never lasts long, and within two days of finishing editing it becomes a blur in my mind.

On a more practical level, editing a movie means keeping good notes, and all editors eventually come up with their own system. Here’s how Herzog describes his method:

The way I work is to look through everything I have—very quickly, over a couple of days—and make notes. For all my films over the past decade I have kept a logbook in which I briefly describe, in longhand, the details of every shot and what people are saying. I know there’s a particularly wonderful moment at minute 4:13 on tape eight because I have marked the description of the action with an exclamation point. These days my editor Joe Bini and I just move from one exclamation point to the next; anything unmarked is almost always bypassed. When it comes to those invaluable clips with three exclamation marks, I tell Joe, “If these moments don’t appear in the finished film, I have lived in vain.”

What I like about Herzog’s approach to editing is its simplicity. Other editors, including Murch, keep detailed notes on each take, but Herzog knows that all he has to do is flag it and move on. When the time comes, he’ll remember why it seemed important, and he has implicit faith in the instincts of his past self, which he trusts to steer him in the right direction. It’s like blazing a trail through the woods. A few marks on a tree or a pile of stones, properly used, are all you need to indicate the path, but instead of trying to communicate with hikers who come after you, you’re sending a message to yourself in the future. As Herzog writes: “I feel safe in my skills of navigation.”
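As a purely illustrative sketch, and not anything Herzog describes in these terms, the workflow above amounts to filtering a logbook by how emphatically each moment was flagged on the first pass. The data format and names below are invented for the example.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class LogEntry:
    tape: str       # e.g., "tape eight"
    timecode: str   # e.g., "4:13"
    note: str       # a brief longhand description of the shot
    marks: int      # number of exclamation points added on the first viewing

def editing_order(logbook: List[LogEntry]) -> List[LogEntry]:
    """Skip anything unmarked and visit the most emphatically flagged moments first."""
    flagged = [entry for entry in logbook if entry.marks > 0]
    return sorted(flagged, key=lambda entry: entry.marks, reverse=True)

logbook = [
    LogEntry("tape eight", "4:13", "bear turns toward the camera", marks=3),
    LogEntry("tape two", "17:40", "long pan across the valley", marks=0),
    LogEntry("tape five", "9:02", "guide laughs mid-sentence", marks=1),
]

for entry in editing_order(logbook):
    print(entry.tape, entry.timecode, entry.note, "!" * entry.marks)
```

The design choice is the same one Herzog makes on paper: the note itself stays terse, and the number of marks is the only priority signal the editor needs.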

Reading Herzog’s description of his editorial notes, I realized that I do much the same thing with the books that I read for my work, whether it’s fiction or nonfiction. Whenever I go back to revisit a source, I’ll often see underlinings or other marks that I left on a previous pass, and I naturally look at those sections more closely, in order to remind myself why it seemed to matter. (I’ve learned to mark passages with a single vertical line in the outer margin, which allows me to flip quickly through the book to scan for key sections.) The screenwriter William Goldman describes a similar method of signaling to himself in his great book Which Lie Did I Tell?, in which he talks about the process of adapting novels to the screen:

Here is how I adapt and it’s very simple: I read the text again. And I read it this time with a pen in my hand—let’s pick a color, blue. Armed with that, I go back to the book, slower this time than when I was a traveler. And as I go through the book word by word, page by page, every time I hit anything I think might be useful—dialogue line, sequence, description—I make a mark in the margin…Then maybe two weeks later, I read the book again, this time with a different color pen…And I repeat the same marking process—a line in the margin for anything I think might make the screenplay…When I am done with all my various color-marked readings—five or six of them—I should have the spine. I should know where the story starts, where it ends. The people should be in my head now.

Goldman doesn’t say this explicitly, but he implies that if a passage struck him on multiple passes, which he undertook at different times and states of mind, it’s likely to be more useful than one that caught his eye only once. Speaking of a page in Stephen King’s novel Misery that ended up with six lines in the margin—it’s the scene in which Annie cuts off Paul’s foot—Goldman writes: “It’s pretty obvious that whatever the spine of the piece was, I knew from the start it had to pass through this sequence.”
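Goldman never puts it in these terms, but the logic of his color-coded passes can be sketched as a simple tally: whatever gets flagged on several independent readings is a candidate for the spine. The passages and counts below are invented for illustration.

```python
from collections import Counter
from typing import Dict, List

# Each pass (a different color of pen, weeks apart) yields the set of passages it flagged.
passes: List[set] = [
    {"p. 12 opening", "p. 88 foot scene", "p. 140 rescue"},  # blue pen
    {"p. 88 foot scene", "p. 140 rescue"},                   # red pen
    {"p. 88 foot scene", "p. 201 ending"},                   # green pen
]

def spine_candidates(passes: List[set], min_passes: int = 2) -> Dict[str, int]:
    """Passages flagged on several independent readings are likely part of the spine."""
    counts = Counter(passage for marked in passes for passage in marked)
    return {passage: n for passage, n in counts.items() if n >= min_passes}

print(spine_candidates(passes))  # e.g., {'p. 88 foot scene': 3, 'p. 140 rescue': 2}
```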

And a line or an exclamation point is sometimes all you need. Trying to keep more involved notes can even be a hindrance: not only do they slow you down, but they can distort your subsequent impressions. If a thought is worth having, it will probably occur to you each time you encounter the same passage. You often won’t know its true significance until later, and in the meantime, you should just keep going. (This is part of the reason why Walter Mosley recommends that writers put a red question mark next to any unresolved questions in the first draft, rather than trying to work them out then and there. Stopping to research something the first time around can easily turn into a form of procrastination, and when you go back, you may find that you didn’t need it at all.) Finally, it’s worth remembering that an exclamation point, a line in the margin, or a red question mark are subtly different on paper than on a computer screen. There are plenty of ways to flag sections in a text document, and I often use the search function in Microsoft Word that allows me to review everything I’ve underlined. But having a physical document that you periodically mark up in ink has benefits of its own. When you repeatedly go back to the same book, manuscript, or journal over the course of a project, you find that you’ve changed, but the pages have stayed the same. It starts to feel like a piece of yourself that you’ve externalized and put in a safe place. You’ll often be surprised by the clues that your past self has left behind, like a hobo leaving signs for others, or Leonard writing notes to himself in Memento, and it helps if the hints are a little opaque. Faced with that exclamation point, you ask yourself: “What was I thinking?” And there’s no better way to figure out what you’re thinking right now.

Written by nevalalee

April 20, 2017 at 9:08 am

The critical path

Renata Adler

Note: I’m taking a few days off, so I’ll be republishing some of my favorite pieces from earlier in this blog’s run. This post originally appeared, in a slightly different form, on February 16, 2016.

Every few years or so, I go back and revisit Renata Adler’s famous attack in the New York Review of Books on the reputation of the film critic Pauline Kael. As a lifelong Kael fan, I don’t agree with Adler—who describes Kael’s output as “not simply, jarringly, piece by piece, line by line, and without interruption, worthless”—but I respect the essay’s fire and eloquence, and it’s still a great read. What is sometimes forgotten is that Adler opens with an assault, not on Kael alone, but on the entire enterprise of professional criticism itself. Here’s what she says:

The job of the regular daily, weekly, or even monthly critic resembles the work of the serious intermittent critic, who writes only when he is asked to or genuinely moved to, in limited ways and for only a limited period of time…Normally, no art can support for long the play of a major intelligence, working flat out, on a quotidian basis. No serious critic can devote himself, frequently, exclusively, and indefinitely, to reviewing works most of which inevitably cannot bear, would even be misrepresented by, review in depth…

The simple truth—this is okay, this is not okay, this is vile, this resembles that, this is good indeed, this is unspeakable—is not a day’s work for a thinking adult. Some critics go shrill. Others go stale. A lot go simultaneously shrill and stale.

Adler concludes: “By far the most common tendency, however, is to stay put and simply to inflate, to pretend that each day’s text is after all a crisis—the most, first, best, worst, finest, meanest, deepest, etc.—to take on, since we are dealing in superlatives, one of the first, most unmistakable marks of the hack.” And I think that she has a point, even if I have to challenge a few of her assumptions. (The statement that most works of art “inevitably cannot bear, would even be misrepresented by, review in depth,” is particularly strange, with its implicit division of all artistic productions into the sheep and the goats. It also implies that it’s the obligation of the artist to provide a worthy subject for the major critic, when in fact it’s the other way around: as a critic, you prove yourself in large part through your ability to mine insight from the unlikeliest of sources.) Writing reviews on a daily or weekly basis, especially when you have a limited amount of time to absorb the work itself, lends itself inevitably to shortcuts, and you often find yourself falling back on the same stock phrases and judgments. And Adler’s warning about “dealing in superlatives” seems altogether prescient. As Keith Phipps and Tasha Robinson of The A.V. Club pointed out a few years back, the need to stand out in an ocean of competing coverage means that every topic under consideration becomes either an epic fail or an epic win: a sensible middle ground doesn’t generate page views.

Pauline Kael

But the situation, at least from Adler’s point of view, is even more dire than when she wrote this essay in the early eighties. When Adler’s takedown of Kael first appeared, the most threatening form of critical dilution lay in weekly movie reviews: today, we’re living in a media environment in which every episode of every television show gets thousands of words of critical analysis from multiple pop culture sites. (Adler writes: “Television, in this respect, is clearly not an art but an appliance, through which reviewable material is sometimes played.” Which is only a measure of how much the way we think and talk about the medium has changed over the intervening three decades.) The conditions that Adler identifies as necessary for the creation of a major critic like Edmund Wilson or Harold Rosenberg—time, the ability to choose one’s subjects, and the freedom to quit when necessary—have all but disappeared for most writers hoping to make a mark, or even just a living. To borrow a trendy phrase, we’ve reached a point of peak content, with a torrent of verbiage being churned out at an unsustainable pace without the advertising dollars to support it, in a situation that can be maintained only by the seemingly endless supply of aspiring writers willing to be chewed up by the machine. And if Adler thought that even a monthly reviewing schedule was deadly for serious criticism, I’d be curious to hear how she feels about the online apprenticeship that all young writers seem expected to undergo these days.

Still, I’d like to think that Adler got it wrong, just as I believe that she was ultimately mistaken about Kael, whose legacy, for all its flaws, still endures. (It’s revealing to note that Adler had a long, distinguished career as a writer and critic herself, and yet she almost certainly remains best known among casual readers for her Kael review.) Not every lengthy writeup of the latest episode of Riverdale is going to stand the test of time, but as a crucible for forming a critic’s judgment, this daily grind feels like a necessary component, even if it isn’t the only one. A critic needs time and leisure to think about major works of art, which is a situation that the current media landscape doesn’t seem prepared to offer. But the ability to form quick judgments about works of widely varying quality and to express them fluently on deadline is an indispensable part of any critic’s toolbox. When taken as an end in itself, it can be deadening, as Adler notes, but it can also be the foundation for something more, even if it has to be undertaken outside of—or despite—the critic’s day job. The critic’s responsibility, now more than ever, isn’t to detach entirely from the relentless pace of pop culture, but to find ways of channeling it into something deeper than the instantaneous think piece or hot take. As a daily blogger who also undertakes projects that can last for months or years, I’m constantly mindful of the relationship between my work on demand and my larger ambitions. And I sure hope that the two halves can work together. Because, like it or not, every critic is walking that path already.

Written by nevalalee

April 18, 2017 at 9:00 am

The illusion of life

Last week, The A.V. Club ran an entire article devoted to television shows in which the lead is also the best character, which only points to how boring many protagonists tend to be. I’ve learned to chalk this up to two factors, one internal, the other external. The internal problem stems from the reasonable principle that the narrative and the hero’s objectives should be inseparable: the conflict should emerge from something that the protagonist urgently needs to accomplish, and when the goal has been met—or spectacularly thwarted—the story is over. It’s great advice, but in practice, it often results in leads who are boringly singleminded: when every action needs to advance the plot, there isn’t much room for the digressions and quirks that bring characters to life. The supporting cast has room to go off on tangents, but the characters at the center have to constantly triangulate between action, motivation, and relatability, which can drain them of all surprise. A protagonist is under so much narrative pressure that when the story relaxes, he bursts, like a sea creature brought up from its crevasse to the surface. Elsewhere, I’ve compared a main character to a diagram of a pattern of forces, like one of the fish in D’Arcy Wentworth Thompson’s On Growth and Form, in which the animal’s physical shape is determined by the outside stresses to which it has been subjected. And on top of this, there’s an external factor, which is the universal desire of editors, producers, and studio executives to make the protagonist “likable,” which, whether or not you agree with it, tends to smooth out the rough edges that make a character vivid and memorable.

In the classic textbook Disney Animation: The Illusion of Life, we find a useful perspective on this problem. The legendary animators Frank Thomas and Ollie Johnston provide a list of guidelines for evaluating story material before the animation begins, including the following:

Tell your story through the broad cartoon characters rather than the “straight” ones. There is no way to animate strong-enough attitudes, feelings, or expressions on realistic characters to get the communication you should have. The more real, the less latitude for clear communication. This is more easily done with the cartoon characters who can carry the story with more interest and spirit anyway. Snow White was told through the animals, the dwarfs, and the witch—not through the prince or the queen or the huntsman. They had vital roles, but their scenes were essentially situation. The girl herself was a real problem, but she was helped by always working to a sympathetic animal or a broad character. This is the old vaudeville trick of playing the pretty girl against the buffoon; it helps both characters.

Even more than Snow White, the great example here is Sleeping Beauty, which has always fascinated me as an attempt by Disney to recapture past glories by a mechanical application of its old principles raised to dazzling technical heights. Not only do Aurora and Prince Philip fail to drive the story, but they’re all but abandoned by it—Aurora speaks fewer lines than any other Disney main character, and neither of them talks for the last thirty minutes. Not only does the film acknowledge the dullness of its protagonists, but it practically turns it into an artistic statement in itself.

And this dullness arises from a tension between the nature of animation, which is naturally drawn to caricature, and the notion that sympathetic protagonists need to be basically realistic. With regard to the first point, Thomas and Johnston advise:

Ask yourself, “Can the story point be done in caricature?” Be sure the scenes call for action, or acting that can be caricatured if you are to make a clear statement. Just to imitate nature, illustrate reality, or duplicate live action not only wastes the medium but puts an enormous burden on the animator. It should be believable, but not realistic.

The italics are mine. This is a good rule, but it collides headlong with the principle that the “real” characters should be rendered with greater naturalism:

Of course, there is always a big problem in making the “real” or “straight” characters in our pictures have enough personality to carry their part of the story…The point of this is misinterpreted by many to mean that characters who have to be represented as real should be left out of feature films, that the stories should be told with broad characters who can be handled more easily. This would be a mistake, for spectators need to have someone or something they can believe in, or the picture falls apart.

And while you could make a strong case that viewers relate just as much to the sidekicks, it’s probably also true that a realistic central character serves an important functional role, which allows the audience to take the story seriously. This doesn’t just apply to animation, either, but to all forms of storytelling—including most fiction, film, and television—that work best with broad strokes. In many cases, you can sense the reluctance of animators to tackle characters who don’t lend themselves to such bold gestures:

Early in the story development, these questions will be asked: “Does this character have to be straight?” “What is the role we need here?” If it is a prince or a hero or a sympathetic person who needs acceptance from the audience to make the story work, then the character must be drawn realistically.

Figuring out the protagonists is a thankless job: they have to serve a function within the overall story, but they’re also liable to be taken out and judged on their own merits, in the absence of the narrative pressures that created them in the first place. The best stories, it seems, are the ones in which that pattern of forces results in something fascinating in its own right, or which transform a stock character into something more. (It’s revealing that Thomas and Johnston refer to the queen and the witch in Snow White as separate figures, when they’re really a single person who evolves over the course of the story into her true form.) And their concluding advice is worth bearing in mind by everyone: “Generally speaking, if there is a human character in a story, it is wise to draw the person with as much caricature as the role will permit.”

Cutty Sark and the semicolon

leave a comment »

Vladimir Nabokov

Note: I’m taking a few days off, so I’ll be republishing some of my favorite pieces from earlier in this blog’s run. This post originally appeared, in a slightly different form, on December 22, 2015.

In an interview that was first published in The Paris Review, the novelist Herbert Gold asked Vladimir Nabokov if an editor had ever offered him any useful advice. This is what Nabokov said in response:

By “editor” I suppose you mean proofreader. Among these I have known limpid creatures of limitless tact and tenderness who would discuss with me a semicolon as if it were a point of honor—which, indeed, a point of art often is. But I have also come across a few pompous avuncular brutes who would attempt to “make suggestions” which I countered with a thunderous “stet!”

I’ve always adored that thunderous stet, which tells us so much about Nabokov and his imperious resistance to being edited by anybody. Today, however, I’m more interested in the previous sentence. A semicolon, as Nabokov puts it, can indeed be a point of honor. Nabokov was perhaps the most painstaking of all modern writers, and it’s no surprise that the same perfectionism that produced such conceptual and structural marvels as Lolita and Pale Fire would filter down to the smallest details. But I imagine that even ordinary authors can relate to how a single punctuation mark in a manuscript can start to loom as large as the finger of God on the Sistine Chapel ceiling.

And there’s something about the semicolon that seems to inspire tussles between writers and their editors—or that at least makes it a useful symbol of the battles that can occur during the editorial process. Here’s an excerpt from a piece by Charles McGrath in The New York Times Magazine about the relationship between Robert Caro, author of The Years of Lyndon Johnson, and his longtime editor Robert Gottlieb:

“You know that insane old expression, ‘The quality of his defect is the defect of his quality,’ or something like that?” Gottlieb asked me. “That’s really true of Bob. What makes him such a genius of research and reliability is that everything is of exactly the same importance to him. The smallest thing is as consequential as the biggest. A semicolon matters as much as, I don’t know, whether Johnson was gay. But unfortunately, when it comes to English, I have those tendencies, too, and we could go to war over a semicolon. That’s as important to me as who voted for what law.”

It’s possible that the semicolon keeps cropping up in such stories because its inherent ambiguity lends itself to disagreement. As Kurt Vonnegut once wrote: “Here is a lesson in creative writing. First rule: Do not use semicolons. They are transvestite hermaphrodites representing absolutely nothing. All they do is show you’ve been to college.” And I’ve more or less eliminated semicolons from my own work for much the same reason.

Robert De Niro and Martin Scorsese on the set of Raging Bull

But the larger question here is why artists fixate on things that even the most attentive reader would pass over without noticing. On one level, you could take a fight over a semicolon as an illustration of the way that the creative act—in which the artist is immersed in the work for months on end—tends to turn molehills into mountains. Here’s one of my favorite stories about the making of Raging Bull:

One night, when the filmmakers were right up against the deadline to make their release date, they were working on a nothing little shot that takes place in a nightclub, where a minor character turns to the bartender and orders a Cutty Sark. “I can’t hear what he’s saying,” [Martin Scorsese] said. Fiddling ensued—extensive fiddling—without satisfying him. [Producer Irwin] Winkler, who was present, finally deemed one result good enough and pointed out that messengers were standing by to hand-carry release prints to the few theaters where the picture was about to premiere. At which point, Scorsese snapped. “I want my name taken off the picture,” he cried—which bespeaks his devotion to detail. It also bespeaks his exhaustion at the end of Raging Bull, not to mention the craziness that so often overtakes movies as they wind down. Needless to say, he was eventually placated. And you can more or less hear the line in the finished print.

And you could argue that this kind of microscopic attention is the only thing that can lead to a work that succeeds on the largest possible scale.

But there’s yet another story that gets closer to the truth. In Existential Errands, Norman Mailer describes a bad period in his life—shortly after he was jailed for stabbing his second wife Adele—in which he found himself descending into alcoholism and unable to work. His only source of consolation was the scraps of paper, “little crossed communications from some wistful outpost of my mind,” that he would find in his jacket pocket after a drunken night. Mailer writes of these poems:

I would go to work, however, on my scraps of paper. They were all I had for work. I would rewrite them carefully, printing in longhand and ink, and I would spend hours whenever there was time going over these little poems…And since I wasn’t doing anything else very well in those days, I worked the poems over every chance I had. Sometimes a working day would go by, and I might put a space between two lines and remove a word. Maybe I was mending.

Which just reminds us that a seemingly minuscule change can be the result of a prolonged confrontation with the work as a whole. You can’t obsess over a semicolon without immersing yourself in the words around it, and there are times when you need such a focal point to structure your engagement with the rest. It’s a little like what is called a lakshya in yoga: the tiny spot on the body or in the mind on which you concentrate while meditating. In practice, the lakshya can be anything or nothing, but without it, your attention tends to drift. In art, it can be a semicolon, a word, or a line about Cutty Sark. It may not be much in itself. But when you need to tether yourself to something, even a semicolon can be a lifeline.

The dark side of the moon

with 2 comments

In March 1969, Robert A. Heinlein flew with his wife Ginny to Brazil, where he had been invited to serve as a guest of honor at a film festival in Rio de Janeiro. Another passenger on their plane was the director Roman Polanski, who introduced Heinlein to his wife, the actress Sharon Tate, at a party at the French embassy a few days after their arrival. (Tate had been in Italy filming The Thirteen Chairs, her final movie role before her death, which she had taken largely out of a desire to work with Orson Welles.) On the night of August 8, Tate and four others were murdered in Los Angeles by members of the Manson Family. Two months later, Heinlein received a letter from a woman named “Annette or Nanette or something,” who claimed that police helicopters were chasing her and her friends. Ginny was alarmed by its incoherent tone, and she told her husband to stay out of it: “Honey, this is worse than the crazy fan mail. This is absolutely insane. Don’t have anything to do with it.” Heinlein contented himself with calling the Inyo County Sheriff’s Office, which confirmed that a police action was underway. In fact, it was a joint federal, state, and county raid of the Myers and Barker Ranches, where Charles Manson and his followers had been living, as part of an investigation into an auto theft ring—their connection to the murders had not yet been established. Manson was arrested, along with two dozen others. And the woman who wrote to Heinlein was probably Lynette “Squeaky” Fromme, another member of the Manson Family, who would be sentenced to life in prison for a botched assassination attempt on President Gerald Ford six years later.

On January 8, 1970, the San Francisco Herald-Examiner ran a story on the front page with the headline “Manson’s Blueprint? Claim Tate Suspect Used Science Fiction Plot.” Later that month, Time published an article, “A Martian Model,” that began:

In the psychotic mind, fact and fantasy mingle freely. The line between the real and the imagined easily blurs or disappears. Most madmen invent their own worlds. If the charges against Charles Manson, accused along with five members of his self-styled “family” of killing Sharon Tate and six other people, are true, Manson showed no powers of invention at all. In the weeks since his indictment, those connected with the case have discovered that he may have murdered by the book. The book is Robert A. Heinlein’s Stranger in a Strange Land, an imaginative science-fiction novel long popular among hippies…

Not surprisingly, the Heinleins were outraged by the implication, although Robert himself was in no condition to respond—he was hospitalized with a bad case of peritonitis. In any event, the parallels between the career of Charles Manson and Heinlein’s fictional character Valentine Michael Smith were tenuous at best, and the angle was investigated by the prosecutor Vincent Bugliosi, who dismissed it. A decade later, in a letter to the science fiction writer and Heinlein fan J. Neil Schulman, Manson stated, through another prisoner, that he had never read the book. Yet the novel was undeniably familiar to members of his circle, as it was throughout the countercultural community of the late sixties. The fact that Fromme wrote to Heinlein is revealing in itself, and Manson’s son, who was born on April 15, 1968, was named Valentine Michael by his mother.

Years earlier, Manson had been exposed—to a far more significant extent—to the work of another science fiction author. In Helter Skelter, his account of the case, Bugliosi writes of Manson’s arrival at McNeil Island Federal Penitentiary in 1961:

Manson gave as his claimed religion “Scientologist,” stating that he “has never settled upon a religious formula for his beliefs and is presently seeking an answer to his question in the new mental health cult known as Scientology”…Manson’s teacher, i.e. “auditor” was another convict, Lanier Rayner. Manson would later claim that while in prison he achieved Scientology’s highest level, “theta clear.”

In his own memoir, Manson writes: “A cell partner turned me on to Scientology. With him and another guy I got pretty heavy into dianetics and Scientology…There were times when I would try to sell [fellow inmate Alan Karpis] on the things I was learning through Scientology.” In total, Manson appears to have received about one hundred and fifty hours of auditing, and his yearly progress report noted: “He appears to have developed a certain amount of insight into his problems through his study of this discipline.” The following year, another report stated: “In his effort to ‘find’ himself, Manson peruses different religious philosophies, e.g. Scientology and Buddhism; however, he never remains long enough with any given teachings to reap material benefits.” In 1968, Manson visited a branch of the Church of Scientology in Los Angeles, where he asked the receptionist: “What do you do after ‘clear?'” But Bugliosi’s summary of the matter seems accurate enough:

Although Manson remained interested in Scientology much longer than he did in any other subject except music, it appears that…he stuck with it only as long as his enthusiasm lasted, then dropped it, extracting and retaining a number of terms and phrases (“auditing,” “cease to exist,” “coming to Now”) and some concepts (karma, reincarnation, etc.) which, perhaps fittingly, Scientology had borrowed in the first place.

So what should we make of all this? I think that there are a few relevant points here. The first is that Heinlein and Hubbard’s influence on Manson—or any of his followers, including Fromme, who had been audited as well—appears to have been marginal, and only in the sense that you could say that he was “influenced” by the Beatles. Manson was a scavenger who assembled his notions out of scraps gleaned from whatever materials were currently in vogue, and science fiction had saturated the culture to an extent that it would have been hard to avoid it entirely, particularly for someone who was actively searching for such ideas. On some level, it’s a testament to the cultural position that both Hubbard and Heinlein had attained, although it also cuts deeper than this. Manson represented the psychopathic fringe of an impulse for which science fiction and its offshoots provided a convenient vocabulary. It was an urge for personal transformation in the face of what felt like apocalyptic social change, rooted in the ideals that Campbell and his authors had defined, and which underwent several mutations in the decades since its earliest incarnation. (And it would mutate yet again. The Aum Shinrikyo cult, which was responsible for the sarin gas attacks on the Tokyo subway system in 1995, borrowed elements of Asimov’s Foundation trilogy for its vision of a society of the elect that would survive the coming collapse of civilization.) It’s an aspect of the genre that takes light and dark forms, and it sometimes displays both faces simultaneously, which can lead to resistance from both sides. The Manson Family murders began with the killing of a man named Gary Hinman, who was taken hostage on July 25, 1969, a day on which the newspapers were filled with accounts of the successful splashdown of Apollo 11. The week before, at the ranch where Manson’s followers were living, a woman had remarked: “There’s somebody on the moon today.” And another replied: “They’re faking it.”

Written by nevalalee

March 24, 2017 at 10:09 am

Falls the Shadow

with one comment

Over the last year or so, I’ve found myself repeatedly struck by the parallels between the careers of John W. Campbell and Orson Welles. At first, the connection might seem tenuous. Campbell and Welles didn’t look anything alike, although they were about the same height, and their politics couldn’t have been more different—Welles was a staunch progressive and defender of civil rights, while Campbell, to put it mildly, wasn’t. Welles was a wanderer, while Campbell spent most of his life within driving distance of his birthplace in New Jersey. But they’re inextricably linked in my imagination. Welles was five years younger than Campbell, but they flourished at exactly the same time, with their careers peaking roughly between 1937 and 1942. Both owed significant creative breakthroughs to the work of H.G. Wells, who inspired Campbell’s story “Twilight” and Welles’s Mercury Theater adaptation of The War of the Worlds. In 1938, Campbell saw Welles’s famous modern-dress production of Julius Caesar with the writer L. Sprague de Camp, of which he wrote in a letter:

It represented, in a way, what I’m trying to do in the magazine. Those humans of two thousand years ago thought and acted as we do—even if they did dress differently. Removing the funny clothes made them more real and understandable. I’m trying to get away from funny clothes and funny-looking people in the pictures of the magazine. And have more humans.

And I suspect that the performance started a train of thought in both men’s minds that led to de Camp’s novel Lest Darkness Fall, which is about a man from the present who ends up in ancient Rome.

Campbell was less pleased by Welles’s most notable venture into science fiction, which he must have seen as an incursion on his turf. He wrote to his friend Robert Swisher: “So far as sponsoring that War of [the] Worlds thing—I’m damn glad we didn’t! The thing is going to cost CBS money, what with suits, etc., and we’re better off without it.” In Astounding, he said that the ensuing panic demonstrated the need for “wider appreciation” of science fiction, in order to educate the public about what was and wasn’t real:

I have long been an exponent of the belief that, should interplanetary visitors actually arrive, no one could possibly convince the public of the fact. These stories wherein the fact is suddenly announced and widespread panic immediately ensues have always seemed to me highly improbable, simply because the average man did not seem ready to visualize and believe such a statement.

Undoubtedly, Mr. Orson Welles felt the same way.

Their most significant point of intersection was The Shadow, who was created by an advertising agency for Street & Smith, the publisher of Astounding, as a fictional narrator for the radio series Detective Story Hour. Before long, he became popular enough to star in his own stories. Welles, of course, voiced The Shadow from September 1937 to October 1938, and Campbell plotted some of the magazine installments in collaboration with the writer Walter B. Gibson and the editor John Nanovic, who worked in the office next door. And his identification with the character seems to have run even deeper. In a profile published in the February 1946 issue of Pic magazine, the reporter Dickson Hartwell wrote of Campbell: “You will find him voluble, friendly and personally depressing only in what his friends claim is a startling physical resemblance to The Shadow.”

It isn’t clear if Welles was aware of Campbell, although it would be more surprising if he wasn’t. Welles flitted around science fiction for years, and he occasionally crossed paths with other authors in that circle. To my lasting regret, he never met L. Ron Hubbard, which would have been an epic collision of bullshitters—although Philip Seymour Hoffman claimed that he based his performance in The Master mostly on Welles, and Theodore Sturgeon once said that Welles and Hubbard were the only men he had ever met who could make a room seem crowded simply by walking through the door. In 1946, Isaac Asimov received a call from a lawyer whose client wanted to buy all rights to his robot story “Evidence” for $250. When he asked Campbell for advice, the editor said that he thought it seemed fair, but Asimov’s wife told him to hold out for more. Asimov called back to ask for a thousand dollars, adding that he wouldn’t discuss it further until he found out who the client was. When the lawyer told him that it was Welles, Asimov agreed to the sale, delighted, but nothing ever came of it. (Welles also owned the story in perpetuity, making it impossible for Asimov to sell it elsewhere, a point that Campbell, who took a notoriously casual attitude toward rights, had neglected to raise.) Twenty years later, Welles made inquiries into the rights for Heinlein’s The Puppet Masters, which were tied up at the time with Roger Corman, but never followed up. And it’s worth noting that both stories are concerned with the problem of knowing whether other people are what they claim to be, which Campbell had brilliantly explored in “Who Goes There?” It’s a theme to which Welles obsessively returned, and it’s fascinating to speculate what he might have done with it if Howard Hawks and Christian Nyby hadn’t gotten there first with The Thing From Another World. Who knows what evil lurks in the hearts of men?

But their true affinities were spiritual ones. Both Campbell and Welles were child prodigies who reinvented an art form largely by being superb organizers of other people’s talents—although Campbell always downplayed his own contributions, while Welles appears to have done the opposite. Each had a spectacular early success followed by what was perceived as decades of decline, which they seem to have seen coming. (David Thomson writes: “As if Welles knew that Kane would hang over his own future, regularly being used to denigrate his later works, the film is shot through with his vast, melancholy nostalgia for self-destructive talent.” And you could say much the same thing about “Twilight.”) Both had a habit of abandoning projects as soon as they realized that they couldn’t control them, and they both managed to seem isolated while occupying the center of attention in any crowd. They enjoyed staking out unreasonable positions in conversation, just to get a rise out of listeners, and they ultimately drove away their most valuable collaborators. What Pauline Kael writes of Welles in “Raising Kane” is equally true of Campbell:

He lost the collaborative partnerships that he needed…He was alone, trying to be “Orson Welles,” though “Orson Welles” had stood for the activities of a group. But he needed the family to hold him together on a project and to take over for him when his energies became scattered. With them, he was a prodigy of accomplishments; without them, he flew apart, became disorderly.

Both men were alone when they died, and both filled their friends, admirers, and biographers with intensely mixed feelings. I’m still coming to terms with Campbell. But I have a hunch that I’ll end up somewhere close to Kael’s ambivalence toward Welles, who, at the end of an essay that was widely seen as puncturing his myth, could only conclude: “In a less confused world, his glory would be greater than his guilt.”

Assisted living

leave a comment »

If you’re a certain kind of writer, whenever you pick up a new book, instead of glancing at the beginning or opening it to a random page, you turn immediately to the acknowledgments. Once you’ve spent any amount of time trying to get published, that short section of fine print starts to read like a gossip column, a wedding announcement, and a high school yearbook all rolled into one. For most writers, it’s also the closest they’ll ever get to an Oscar speech, and many of them treat it that way, with loving tributes and inside jokes attached to every name. It’s a chance to thank their editors and agents—while the unagented reader suppresses a twinge of envy—and to express gratitude to various advisers, colonies, and fellowships. (The most impressive example I’ve seen has to be in The Lisle Letters by Muriel St. Clare Byrne, which pays tribute to the generosity of “Her Majesty Queen Elizabeth II.”) But if there’s one thing I’ve learned from the acknowledgments that I’ve been reading recently, it’s that I deserve an assistant. It seems as if half the nonfiction books I see these days thank a whole squadron of researchers, inevitably described as “indefatigable,” who live in libraries, work through archives and microfilm reels, and pass along the results to their grateful employers. If the author is particularly famous, like Bob Woodward or Kurt Eichenwald, the acknowledgment can sound like a letter of recommendation: “I was startled by his quick mind and incomparable work ethic.” Sometimes the assistants are described in such glowing terms that you start to wonder why you aren’t reading their books instead. And when I’m trying to decipher yet another illegible scan of a carbon copy of a letter written fifty years ago on a manual typewriter, I occasionally wish that I could outsource it to an intern.

But there are also good reasons for doing everything yourself, at least at the early stages of a project. In his book The Integrity of the Body, the immunologist Sir Frank Macfarlane Burnet says that there’s one piece of advice that he always gives to “ambitious young research workers”: “Do as large a proportion as possible of your experiments with your own hands.” In Discovering, Robert Scott Root-Bernstein expands on this point:

When you climb those neighboring hills make sure you do your own observing. Many scientists assign all experimental work to lab techs and postdocs. But…only the prepared mind will note and attach significance to an anomaly. Each individual possesses a specific blend of personality, codified science, science in the making, and cultural biases that will match particular observations. If you don’t do your own observing, the discovery won’t be made. Never delegate research.

Obviously, there are situations in which you can’t avoid delegating the work to some degree. But I think Root-Bernstein gets at something essential when he frames it in terms of recognizing anomalies. If you don’t sift through the raw material yourself, it’s difficult to know what is unusual or important, and even if you have a bright assistant who will flag any striking items for your attention, it’s hard to put them in perspective. As I’ve noted elsewhere, drudgery can be an indispensable precursor to insight. You’re more likely to come up with worthwhile connections if you’re the one mining the ore.

This is why the great biographers and historians often seem like monsters of energy. I never get tired of quoting the advice that Alan Hathaway gave to the young Robert Caro at Newsday: “Turn every goddamn page.” Caro took this to heart, noting proudly of one of the archives he consulted: “The number [of pages] may be in the area of forty thousand. I don’t know how many of these pages I’ve read, but I’ve read a lot of them.” And it applies to more than just what you read, as we learn from a famous story about Caro and his editor Robert Gottlieb:

Gottlieb likes to point to a passage fairly early in The Power Broker describing Moses’ parents one morning in their lodge at Camp Madison, a fresh-air charity they established for poor city kids, picking up the Times and reading that their son had been fined $22,000 for improprieties in a land takeover. “Oh, he never earned a dollar in his life, and now we’ll have to pay this,” Bella Moses says.

“How do you know that?” Gottlieb asked Caro. Caro explained that he tried to talk to all of the social workers who had worked at Camp Madison, and in the process he found one who had delivered the Moseses’ paper. “It was as if I had asked him, ‘How do you know it’s raining out?’”

This is the kind of thing that you’d normally ask your assistant to do, if it occurred to you at all, and it’s noteworthy that Caro has kept at it long after he could have hired an army of researchers. Instead, he relies entirely on his wife Ina, whom he calls “the only person besides myself who has done research on the four volumes of The Years of Lyndon Johnson or on the biography of Robert Moses that preceded them, the only person I would ever trust to do so.” And perhaps a trusted spouse is the best assistant you could ever have.

Of course, there are times when an assistant is necessary, especially if, unlike Caro, you’re hoping to finish your project in fewer than forty years. But it’s often the assistant who benefits. As one of them recalled:

I was working for [Professor] Bernhard J. Stern…and since he was writing a book on social resistance to technological change, he had me reading a great many books that might conceivably be of use to him. My orders were to take note of any passages that dealt with the subject and to copy them down.

It was a liberal education for me and I was particularly struck by a whole series of articles by astronomer Simon Newcomb, which I read at Stern’s direction. Newcomb advanced arguments that demonstrated the impossibility of heavier-than-air flying machines, and maintained that one could not be built that would carry a man. While these articles were appearing, the Wright brothers flew their plane. Newcomb countered with an article that said, essentially, “Very well, one man, but not two.”

Every significant social advance roused opposition on the part of many, it seemed. Well, then, shouldn’t space flight, which involved technological advances, arouse opposition too?

The assistant in question was Isaac Asimov, who used this idea as the basis for his short story “Trends,” which became his first sale to John W. Campbell. It launched his career, and the rest is history. And that’s part of the reason why, when I think of my own book, I say to myself: “Very well, one man, but not two.”

The imperious and tyrannical images

leave a comment »

You can’t throw in images the way you throw in a fishhook, at random! These obedient images are, in a film constructed according to the dark and mysterious rules of the unconscious, necessary images, imperious and tyrannical images…It can be useful for a while to rediscover by methods that are unusual, excessive, arbitrary, methods that are primitive, direct, and stripped of nonessentials, polished to the bone, the laws of eternal poetry, but these laws are always the same, and the goal of poetry cannot be simply to play with the laws by which it is made…Just because with the help of psychoanalysis the rules of the game have become infinitely clear, and because the technique of poetry has revealed its secrets, the point is not to show that we are extraordinarily intelligent and that we now know how to go about it.

Antonin Artaud, in a letter to Jean Paulhan

Written by nevalalee

March 11, 2017 at 7:29 am
