Alec Nevala-Lee

Thoughts on art, creativity, and the writing life.

Famous monsters of filmland

For his new book The Big Picture: The Fight for the Future of the Movies, the journalist Ben Fritz reviewed the emails from the hack of Sony Pictures, which are still available online. Whatever you might think about the ethics of using such material, it’s a gold mine of information about how Hollywood has done business over the last decade, and Fritz has come up with some fascinating nuggets. One of the most memorable finds is an exchange between studio head Amy Pascal and the producer Scott Rudin, who was trying to convince her to take a chance on Danny Boyle’s adaptation of Steve Jobs. Pascal had expressed doubts about the project, particularly over the casting of Michael Fassbender in the lead, and after arguing that it was less risky than The Social Network, Rudin delivered a remarkable pep talk:

You ought to be doing this movie—period—and you and I both know that the cold feet you are feeling is costing you this movie that you want. Once you have cold feet, you’re done. You’re making this decision in the anticipation of how you will be looked at in failure. That’s how you fail. So you’re feeling wobbly in the job right now. Here’s the fact: nothing conventional you could do is going to change that, and there is no life-changing hit that is going to fall into your lap that is not a nervous decision, because the big obvious movies are going to go elsewhere and you don’t have the IP right now to create them from standard material. You have this. Face it…Force yourself to muster some confidence about it and do the exact thing right now for which your career will be known in movie history: be the person who makes the tough decisions and sticks with them and makes the unlikely things succeed. Fall on your sword—when you’ve lost that, it’s finished. You’re the person who does these movies. That’s—for better or worse—who you are and who you will remain. To lose that is to lose yourself.

Steve Jobs turned out to be a financial disappointment, and its failure—despite the prestige of its subject, director, and cast—feels emblematic of the move away from films driven by stars to those that depend on “intellectual property” of the kind that Sony lacked. In particular, the movie industry seems to have shifted to a model perfected by Marvel Studios, which builds a cinematic universe that can drum up excitement for future installments and generate huge grosses overseas. Yet this isn’t exactly new. In the groundbreaking book The Genius of the System, which was published three decades ago, Thomas Schatz notes that Universal did much the same in the thirties, when it pioneered the genre of cinematic horror under founder Carl Laemmle and his son:

The horror picture scarcely emerged full-blown from the Universal machinery, however. In fact, the studio had been cultivating the genre for years, precisely because it played to Universal’s strengths and maximized its resources…Over the years Carl Laemmle built a strong international distribution system, particularly in Europe…[European filmmakers] brought a fascination for the cinema’s distinctly unrealistic qualities, its capacity to depict a surreal landscape of darkness, nightmare logic, and death. This style sold well in Europe.

After noting that the aesthetics of horror lent itself to movies built out of little more than shadows and fog, which were the visual effects of its time, Schatz continues: “This rather odd form of narrative economy was vitally important to a studio with limited financial resources and no top stars to carry its pictures. And in casting, too, the studio turned a limitation into an asset, since the horror film did not require romantic leads or name stars.”

The turning point was Tod Browning’s Dracula, a movie “based on a presold property” that could serve as an entry point for other films along the same lines. It didn’t require a star, but “an offbeat character actor,” and Universal’s expectations for it eerily foreshadow the way in which studio executives still talk today. Schatz writes:

Laemmle was sure it would [succeed]—so sure, in fact, that he closed the Frankenstein deal several weeks before Dracula’s February 1931 release. The Lugosi picture promptly took off at the box office, and Laemmle was more convinced than ever that the horror film was an ideal formula for Universal, given its resources and the prevailing market conditions. He was convinced, too, that he had made the right decision with Frankenstein, which had little presold appeal but now had the success of Dracula to generate audience anticipation.

Frankenstein, in short, was sort of like the Ant-Man of the thirties, a niche property that leveraged the success of its predecessors into something like real excitement. It worked, and Universal’s approach to its monsters anticipates what Marvel would later do on a vaster scale, with “ambitious crossover events” like House of Frankenstein and House of Dracula that combined the studio’s big franchises with lesser names that seemed unable to carry a film on their own. (If Universal’s more recent attempt to do the same with The Mummy fell flat, it was partially because it was unable to distinguish between the horror genre, the star picture, and the comic book movie, resulting in a film that turned out to be none of the above. The real equivalent today would be Blumhouse Productions, which has done a much better job of building its brand—and which distributes its movies through Universal.)

And the inability of such movies to provide narrative closure isn’t a new development, either. After seeing James Whale’s Frankenstein, Carl Laemmle, Jr. reacted in much the same way that executives presumably do now:

Junior Laemmle was equally pleased with Whale’s work, but after seeing the rough cut he was certain that the end of the picture needed to be changed. His concerns were twofold. The finale, in which both Frankenstein and his monster are killed, seemed vaguely dissatisfying; Laemmle suspected that audiences might want a glimmer of hope or redemption. He also had a more pragmatic concern about killing off the characters—and thus any possibility of sequels. Laemmle now regretted letting Professor Van Helsing drive that stake through Count Dracula’s heart, since it consigned the original character to the grave…Laemmle was not about to make the same mistake by letting that angry mob do away with the mad doctor and his monster.

Whale disagreed, but he was persuaded to change the ending after a preview screening, leaving open the possibility that the monster might have survived. Over eight decades later, Joss Whedon offered a similar explanation in an interview with Mental Floss: “It’s difficult because you’re living in franchise world—not just Marvel, but in most big films—where you can’t kill anyone, or anybody significant…My feeling in these situations with Marvel is that if somebody has to be placed on the altar and sacrificed, I’ll let you guys decide if they stay there.” For now, we’re living in a world made by the Universal monsters—and with only a handful of viable properties, half of which are owned by Disney. Without them, it might seem impossible, as Rudin said, “to create them from standard material.” But we’re also still waiting to be blindsided by the next great franchise. As another famous monster once put it: “A lot of times, people don’t know what they want until you show it to them.” And when it came to the movies, at least, Steve Jobs was right.

My great books #9: On Directing Film

Note: I’m counting down my ten favorite works of nonfiction, in order of the publication dates of their first editions, and with an emphasis on books that deserve a wider readership. You can find the earlier installments here.

When it comes to giving advice on something as inherently unteachable as writing, books on the subject tend to fall into one of three categories. The first treats the writing manual as an extension of the self-help genre, offering what amounts to an extended pep talk that is long on encouragement but short on specifics. A second, more useful approach is to consolidate material on a variety of potential strategies, either through the voices of multiple writers—as George Plimpton did so wonderfully in The Writer’s Chapbook, which assembles the best of the legendary interviews given to The Paris Review—or through the perspective of a writer and teacher, like John Gardner, generous enough to consider the full range of what the art of fiction can be. And the third, exemplified by David Mamet’s On Directing Film, is to lay out a single, highly prescriptive recipe for constructing stories. This last approach might seem unduly severe. Yet after a lifetime of reading what other writers have to say on the subject, Mamet’s little book is still the best I’ve ever found, not just for film, but for fiction and narrative nonfiction as well. On one level, it can serve as a starting point for your own thoughts about how the writing process should look: Mamet provides a strict, almost mathematical set of tools for building a plot from first principles, and even if you disagree with his methods, they clarify your thinking in a way that a more generalized treatment might not. But even if you just take it at face value, it’s still the closest thing I know to a foolproof formula for generating rock-solid first drafts. (If Mamet himself has a flaw as a director, it’s that he often stops there.) In fact, it’s so useful, so lucid, and so reliable that I sometimes feel reluctant to recommend it, as if I were giving away an industrial secret to my competitors.

Mamet’s principles are easy to grasp, but endlessly challenging to follow. You start by figuring out what every scene is about, mostly by asking one question: “What does the protagonist want?” You then divide each scene up into a sequence of beats, consisting of an immediate objective and a logical action that the protagonist takes to achieve it, ideally in a form that can be told in visual terms, without the need for expository dialogue. And you repeat the process until the protagonist succeeds or fails at his or her ultimate objective, at which point the story is over. This may sound straightforward, but as soon as you start forcing yourself to think this way consistently, you discover how tough it can be. Mamet’s book consists of a few simple examples, teased out in a series of discussions at a class he taught at Columbia, and it’s studded with insights that once heard are never forgotten: “We don’t want our protagonist to do things that are interesting. We want him to do things that are logical.” “Here is a tool—choose your shots, beats, scenes, objectives, and always refer to them by the names you chose.” “Keep it simple, stupid, and don’t violate those rules that you do know. If you don’t know which rule applies, just don’t muck up the more general rules.” “The audience doesn’t want to read a sign; they want to watch a motion picture.” “A good writer gets better only by learning to cut, to remove the ornamental, the descriptive, the narrative, and especially the deeply felt and meaningful.” “Now, why did all those Olympic skaters fall down? The only answer I know is that they hadn’t practiced enough.” And my own personal favorite: “The nail doesn’t have to look like a house; it is not a house. It is a nail. If the house is going to stand, the nail must do the work of a nail. To do the work of the nail, it has to look like a nail.”

Written by nevalalee

November 12, 2015 at 9:00 am

My great books #7: The Biographical Dictionary of Film

The New Biographical Dictionary of Film

Note: I’m counting down my ten favorite works of nonfiction, in order of the publication dates of their first editions, and with an emphasis on books that deserve a wider readership. You can find the earlier installments here.

David Thomson’s Biographical Dictionary of Film is one of the weirdest books in all of literature, and more than the work of any other critic, it has subtly changed the way I think about both life and the movies. His central theme—which is stated everywhere and nowhere—is the essential strangeness of turning shadows on a screen into men and women who can seem more real to us than the people in our own lives. His writing isn’t conventional criticism so much as a single huge work of fiction, with Thomson himself as both protagonist and nemesis. It isn’t a coincidence that one of his earliest books was a biography of Laurence Sterne, author of Tristram Shandy: his entire career can be read as one long Shandean exercise, in which Thomson, as a fictional character in his own work, is cheerfully willing to come off as something of a creep, as long as it illuminates our reasons for going to the movies. And his looniness is part of his charm. Edmund Wilson once playfully speculated that George Saintsbury, the great English critic, invented his own Toryism “in the same way that a dramatist or novelist arranges contrasting elements,” and there are times when I suspect that Thomson is doing much the same thing. (If his work is a secret novel, its real precursor is Pale Fire, in which Thomson plays the role of Kinbote, and every article seems to hint darkly at some monstrous underlying truth. A recent, bewildered review of his latest book on The A.V. Club is a good example of the reaction he gets from readers who aren’t in on the joke.)

But if you leave him with nothing but his perversity and obsessiveness, you end up with Armond White, while Thomson succeeds because he’s also lucid, encyclopedically informed, and ultimately sane, although he does his best to hide it. The various editions of The Biographical Dictionary of Film haven’t been revised so much as they’ve accumulated: Thomson rarely goes back to rewrite earlier entries, but tacks on new thoughts to the end of each article, so that it grows by a process of accretion, like a coral reef. The result can be confusing, but when I go back to his earlier articles, I remember at once why this is still the essential book on film. I’ll look at Thomson on Coppola (“He is Sonny and Michael Corleone for sure, but there are traces of Fredo, too”); on Sydney Greenstreet (“Indeed, there were several men trapped in his grossness: the conventional thin man; a young man; an aesthete; a romantic”); or on Eleanor Powell’s dance with Astaire in Broadway Melody of 1940 (“Maybe the loveliest moment in films is the last second or so, as the dancers finish, and Powell’s alive frock has another half-turn, like a spirit embracing the person”). Or, perhaps most memorably of all, his thoughts on Citizen Kane, which, lest we forget, is about the futile search of a reporter named Thompson:

As if Welles knew that Kane would hang over his own future, regularly being used to denigrate his later works, the film is shot through with his vast, melancholy nostalgia for self-destructive talent…Kane is Welles, just as every apparent point of view in the film is warmed by Kane’s own memories, as if the entire film were his dream in the instant before death.

It’s a strange, seductive, indispensable book, and to paraphrase Thomson’s own musings on Welles, it’s the greatest career in film criticism, the most tragic, and the one with the most warnings for the rest of us.

Written by nevalalee

November 10, 2015 at 9:00 am

The films of a life

Marcello Mastroianni and Anita Ekberg in La Dolce Vita

The other week, while musing on Richard Linklater’s Boyhood—which I still haven’t seen—I noted that we often don’t have the chance to experience the movies that might speak most urgently to us at the later stages of our lives. Many of us who love film encounter the movies we love at a relatively young age, and we spend our teens and twenties devouring the classics that came out before we were born. And that’s exactly how it should be: when we’re young, we have the time and energy to explore enormous swaths of the canon, and we absorb images and stories that will enrich the years to come. Yet we’re also handicapped by being relatively inexperienced and emotionally circumscribed, at least compared to later in life. We’re wowed by technical excellence, virtuoso effects, relentless action, or even just a vision of the world in which we’d like to believe. And by the time we’re old enough to judge such things more critically, we find that we aren’t watching movies as much as we once were, and it takes a real effort to seek out the more difficult, reflective masterpieces that might provide us with signposts for the way ahead.   

What we can do, however, is look back at the movies we loved when we were younger and see what they have to say to us now. I’ve always treasured Roger Ebert’s account of his shifting feelings toward Fellini’s La Dolce Vita, which he called “a page-marker in my own life”:

Movies do not change, but their viewers do. When I saw La Dolce Vita in 1960, I was an adolescent for whom “the sweet life” represented everything I dreamed of: sin, exotic European glamour, the weary romance of the cynical newspaperman. When I saw it again, around 1970, I was living in a version of Marcello’s world; Chicago’s North Avenue was not the Via Veneto, but at 3 a.m. the denizens were just as colorful, and I was about Marcello’s age.

When I saw the movie around 1980, Marcello was the same age, but I was ten years older, had stopped drinking, and saw him not as a role model but as a victim, condemned to an endless search for happiness that could never be found, not that way. By 1991, when I analyzed the film a frame at a time at the University of Colorado, Marcello seemed younger still, and while I had once admired and then criticized him, now I pitied and loved him.

Moira Shearer in The Red Shoes

And when we realize how our feelings toward certain movies have shifted, it can be both moving and a little terrifying. Life transforms us so insidiously that it’s often only when we compare our feelings to a fixed benchmark that we become aware of the changes that have taken place. Watching Citizen Kane at twenty and again at thirty is a disorienting experience, especially when you’re hoping to make a life for yourself in the arts. Orson Welles was twenty-five when he directed it, and when you see it at twenty, it feels like both an inspiration and a challenge: part of you believes, recklessly, that you could be Welles, and the possibilities of the next few years of your life seem limitless. Looking back at it at thirty, after a decade’s worth of effort and compromise, you start to realize both the absurdity of his achievement and how singular it really is, and the movie seems suffused with what David Thomson calls Welles’s “vast, melancholy nostalgia for self-destructive talent.” You begin to understand the ambivalence with which more experienced filmmakers regarded the Wellesian monster of energy and ambition, and it quietly affects the way you think about Kane’s reflections on time and old age.

The more personal our attachment to a movie, the harder these lessons can be to swallow. The other night, I sat down to watch part of The Red Shoes, my favorite movie of all time, for the first time in several years. It’s a movie I thought I knew almost frame by frame, and I do, but I hadn’t taken the emotional component into account. I’ve loved this movie since I first saw it in high school, both for its incredible beauty and for the vision it offered of a life in the arts. Later, as I rewatched it in college and in my twenties, it provided a model, a warning, and a reminder of the values I was trying to honor. Now, after I’ve been through my own share of misadventures as a writer, it seems simultaneously like a fantasy and a bittersweet emblem of a world that still seems just out of reach. I’m older than many of the characters now—although I have yet to enter my Boris Lermontov phase—and my heart aches a little when I listen to Julian’s wistful, ambitious line: “I wonder what it feels like to wake up in the morning and find oneself famous.” If The Red Shoes once felt like a promise of what could be, it’s starting to feel to me now like what could have been, or might be again. Ten years from now, it will probably feel like something else entirely. And when that time comes, I’ll let you know what I find.

Written by nevalalee

July 23, 2014 at 9:30 am

The best closing shots in film

Lawrence of Arabia

Note: Since I’m taking a deserved break for the holidays, I’m reposting a couple of my favorite entries from early in this blog’s run. This post was originally published, in a slightly different form, on January 13, 2011. Visual spoilers follow. Cover your eyes!

As I’ve noted before, the last line of a novel is almost always of interest, but the last line of a movie generally isn’t. It isn’t hard to understand why: movies are primarily a visual medium, and there’s a sense in which even the most brilliant dialogue can often seem beside the point. And as much as the writer in me wants to believe otherwise, audiences don’t go to the movies to listen to words: they go to look at pictures.

Perhaps inevitably, then, there are significantly more great closing shots in film than there are great curtain lines. Indeed, the last shot of nearly every great film is memorable, so the list of finalists can easily expand into the dozens. Here, though, in no particular order, are twelve of my favorites.

Jerry Goldsmith on the art of the film score

Working to timings and synchronising your musical thoughts with the film can be stimulating rather than restrictive. Scoring is a limitation but like any limitation it can be made to work for you. Verdi, except for a handful of pieces, worked best when he was “turned on” by a libretto. The most difficult problem in music is form, and in a film you already have this problem solved for you. You are presented with a basic structure, a blueprint, and provided the film has been well put together, well edited, it often suggests its own rhythms and tempo. The quality of the music is strictly up to the composer. Many people seem to assume that because film music serves the visual it must be something of secondary value. Well, the function of any art is to serve a purpose in society. For many years, music and painting served religion. The thing to bear in mind is that film is the youngest of the arts, and that scoring is the youngest of the music arts. We have a great deal of development ahead of us.

Jerry Goldsmith, quoted in Music for the Movies

Written by nevalalee

July 7, 2013 at 9:50 am

Daniel Clowes on the lessons of film editing

To me, the most useful experience in working in “the film industry” has been watching and learning the editing process. You can write whatever you want and try to film whatever you want, but the whole thing really happens in that editing room. How do you edit comics? If you do them in a certain way, the standard way, it’s basically impossible. That’s what led me to this approach of breaking my stories into segments that all have a beginning and end on one, two, three pages. This makes it much easier to shift things around, to rearrange parts of the story sequence. It’s something that I’m really interested in trying to figure out, but there are pluses and minuses to every approach. For instance, I think if you did all your panels exactly the same size and left a certain amount of “breathing room” throughout the story, you could make fairly extensive after-the-fact changes, but you’d sacrifice a lot by doing that…

It’s a very mysterious process: you put together a cut of the film and at the first viewing it always seems just terrible, then you work on it for two weeks and you can’t imagine what else you could do with it; then six months later, you’re still working on it and making significant changes every day. It’s very odd, but you kind of know when it’s there.

Daniel Clowes, quoted by Todd Hignite in In the Studio: Visits with Contemporary Cartoonists

Written by nevalalee

October 28, 2012 at 9:50 am

Fiction into film: L.A. Confidential

Of all the movies I’ve ever seen, Curtis Hanson’s adaptation of James Ellroy’s L.A. Confidential has influenced my own work the most. This isn’t to say that it’s my favorite movie of all time—although it’s certainly in the top ten—or even that I find its themes especially resonant: I have huge admiration for Ellroy’s talents, but it’s safe to say that he and I are operating under a different set of obsessions. Rather, it’s the structure of the film that I find so compelling: three protagonists, with three main stories, that interweave and overlap in unexpected ways until they finally converge at the climax. It’s a narrative structure that has influenced just about every novel I’ve ever written, or tried to write—and the result, ironically, has made my own work less adaptable for the movies.

Movies, you see, aren’t especially good at multiple plots and protagonists. Most screenplays center, with good reason, on a single character, the star part, whose personal story is the story of the movie. Anything that departs from this form is seen as inherently problematic, which is why L.A. Confidential’s example is so singular, so seductive, and so misleading. As epic and layered as the movie is, Ellroy’s novel is infinitely larger: it covers a longer span of time, with more characters and subplots, to the point where entire storylines—like that of a particularly gruesome serial killer—were jettisoned completely for the movie version. Originally it was optioned as a possible miniseries, which would have made a lot of sense, but to the eternal credit of Hanson and screenwriter Brian Helgeland, they decided that there might also be a movie here.

To narrow things down, they started with my own favorite creative tool: they made a list. As the excellent bonus materials for the film make clear, Hanson and Helgeland began with a list of characters or plot points they wanted to keep: Bloody Christmas, the Nite Owl massacre, Bud White’s romance with Lynn Bracken, and so on. Then they ruthlessly pared away the rest of the novel, keeping the strands they liked, finding ways to link them together, and writing new material when necessary, to the point where some of the film’s most memorable moments—including the valediction of Jack Vincennes and the final showdown at the Victory Motel, which repurposes elements of the book’s prologue—are entirely invented. And the result, as Ellroy says, was a kind of “alternate life” for the characters he had envisioned.

So what are the lessons here? For aspiring screenwriters, surprisingly few: a film like L.A. Confidential appears only a couple of times each decade, and the fact that it was made at all, without visible compromise, is one of the unheralded miracles of modern movies. If nothing else, though, it’s a reminder that adaptation is less about literal faithfulness than fidelity of spirit. L.A. Confidential may keep less than half of Ellroy’s original material, but it feels just as turbulent and teeming with possibility, and gives us the sense that some of the missing stories may still be happening here, only slightly offscreen. Any attempt to adapt similarly complex material without that kind of winnowing process, as in the unfortunate Watchmen, usually leaves audiences bewildered. The key is to find the material’s alternate life. And no other movie has done it so well.

Written by nevalalee

August 8, 2011 at 10:12 am

Fiction into film: The Silence of the Lambs

It’s been just over twenty years now since The Silence of the Lambs was released in theaters, and the passage of time—and its undisputed status as a classic—sometimes threatens to blind us to the fact that it’s such a peculiar movie. At the time, it certainly seemed like a dubious prospect: it had a director known better for comedy than suspense, an exceptional cast but no real stars, and a story whose violence verged on outright kinkiness. If it emphatically overcame those doubts, it was with its mastery of tone and style, a pair of iconic performances, and, not incidentally, the best movie poster of the modern era. And the fact that it not only became a financial success but took home the Academy Award for Best Picture, as well as the four other major Oscars, remains genre filmmaking’s single most unqualified triumph.

It also had the benefit of some extraordinary source material. I’ve written at length about Thomas Harris elsewhere, but what’s worth emphasizing about his original novel is that it’s the product of several diverse temperaments. Harris began his career as a journalist, and there’s a reportorial streak running through all his best early books, with their fascination with the technical language, tools, and arcana of various esoteric professions, from forensic profiling to brain tanning. He also has a Gothic sensibility that has only grown more pronounced with time, a love of language fed by the poetry of William Blake and John Donne, and, in a quality that is sometimes undervalued, the instincts of a great pulp novelist. The result is an endlessly fascinating book poised halfway between calculated bestseller and major novel, and all the better for that underlying tension.

Which is why it pains me as a writer to say that as good as the book is, the movie is better. Part of this is due to the inherent differences in the way we experience movies and popular fiction: for detailed character studies, novels have the edge, but for a character who is seen mostly from the outside, as an enigma, nothing in Harris prepares us for what Anthony Hopkins does with Hannibal Lecter, even if it amounts to nothing more than a few careful acting decisions for his eyes and voice. It’s also an example of how a popular novel can benefit from an intelligent, respectful adaptation. Over time, Ted Tally’s fine screenplay has come to seem less like a variation on Harris’s novel than a superlative second draft: Tally keeps all that is good in the book, pares away the excesses, and even improves the dialogue. (It’s the difference between eating a census taker’s liver with “a big Amarone” and “a nice Chianti.”)

And while the movie is a sleeker, more streamlined animal, it still benefits from the novel’s strangeness. For better or worse, The Silence of the Lambs created an entire genre—the sleek, modern serial killer movie—but like most founding works, it has a fundamental oddity that leaves it out of place among its own successors. The details of its crimes are horrible, but what lingers are its elegance, its dry humor, and the curious rhythms of its central relationship, which feels like a love story in ways that Hannibal made unfortunately explicit. It’s genuinely concerned with women, even as it subjects them to horrible fates, and in its look and mood, it’s a work of stark realism shading inexorably into a fairy tale. That ability to combine strangeness with ruthless efficiency is the greatest thing a thriller in any medium can do. Few movies, or books, have managed it since, even after twenty years of trying.

Written by nevalalee

July 12, 2011 at 8:39 am

Fiction into film: The English Patient

A few months ago, after greatly enjoying The Conversations, Michael Ondaatje’s delightful book-length interview with Walter Murch, I decided to read Ondaatje’s The English Patient for the first time. I went through it very slowly, only a handful of pages each day, in parallel with my own work on the sequel to The Icon Thief. Upon finishing it last week, I was deeply impressed, not just by the writing, which had drawn me to the book in the first place, but also by the novel’s structural ingenuity—derived, Ondaatje says, from a long process of rewriting and revision—and the richness of its research. This is one of the few novels where detailed historical background has been integrated seamlessly into the poetry of the story itself, and it reflects a real, uniquely novelistic curiosity about other times and places. It’s a great book.

Reading The English Patient also made me want to check out the movie, which I hadn’t seen since watching it more than a decade earlier at a special screening for a college course. I recalled admiring it, although in a rather detached way, and found that I didn’t remember much about the story, aside from a few moments and images (and the phrase “suprasternal notch”). But I sensed it would be worth revisiting, both because I’d just finished the book and because I’ve become deeply interested, over the past few years, in the career of editor Walter Murch. Murch is one of film’s last true polymaths, an enormously intelligent man who just happened to settle into editing and sound design, and The English Patient, for which he won two Oscars (including the first ever awarded for a digitally edited movie), is a landmark in his career. It was with a great deal of interest, then, that I watched the film again last night.

First, the good news. The adaptation, by director Anthony Minghella, is very intelligently done. It was probably impossible to film Ondaatje’s full story, with its impressionistic collage of lives and memories, in any kind of commercially viable way, so the decision was wisely made to focus on the central romantic episode, the doomed love affair between Almásy (Ralph Fiennes) and Katherine Clifton (Kristin Scott Thomas). Doing so involved inventing a lot of new, explicitly cinematic material, some satisfying (the car crash and sandstorm in the desert), some less so (Almásy’s melodramatic escape from the prison train). The film also makes the stakes more personal: the mission of Caravaggio (Willem Dafoe) is less about simple fact-finding, as it was in the book, than about revenge. And the new ending, with Almásy silently asking Hana (Juliette Binoche) to end his life, gives the film a sense of resolution that the book deliberately lacks.

These changes, while extensive, are smartly done, and they respect the book while acknowledging its limitations as source material. As Roger Ebert points out in his review of Apocalypse Now, another milestone in Murch’s career, movies aren’t very good at conveying abstract ideas, but they’re great for showing us “the look of a battle, the expression on a face, the mood of a country.” On this level, The English Patient sustains comparison with the works of David Lean, with a greater interest in women, and remains, as David Thomson says, “one of the most deeply textured of films.” Murch’s work, in particular, is astonishing, and the level of craft on display here is very impressive.

Yet the pieces don’t quite come together. The novel’s tentative, intellectual nature, which the adaptation doesn’t try to match, infects the movie as well. It feels like an art film that has willed itself into being an epic romance, when in fact the great epic romances need to be a little vulgar—just look at Gone With the Wind. Doomed romances may obsess their participants in real life, but in fiction, seen from the outside, they can seem silly or absurd. The English Patient understands a great deal about the craft of the romantic epic, the genre in which it has chosen to plant itself, but nothing of its absurdity. In the end, it’s just too intelligent, too beautifully made, to move us on more than an abstract level. It’s a heroic effort; I just wish it were something a little more, or a lot less.

The best closing shots in film

with 7 comments

Warning: Visual spoilers follow. Cover your eyes!

As I’ve noted before, the last line of a novel is almost always of interest, but the last line of a movie generally isn’t. It isn’t hard to understand why: movies are primarily a visual medium, after all, and there’s a sense in which even the most brilliant dialogue can often seem beside the point. And as much as the writer in me wants to believe otherwise, audiences don’t go to the movies to listen to words: they go to look at pictures.

Perhaps inevitably, then, there are significantly more great closing shots in film than there are great curtain lines. Indeed, the last shot of nearly every great film is memorable, so the list of finalists can easily expand into the dozens. Here, though, in no particular order, are twelve of my favorites.

Quote of the Day

The ideal, for me, is to obtain right away what will work—and without retouches. If they are necessary, it falls short of the mark. The immediate is chance. At the same time it is definitive. What I want is the definitive by chance.

Jean-Luc Godard, to Andrew Sarris in Interviews with Film Directors

Written by nevalalee

July 16, 2018 at 7:30 am

Critical thinking

with one comment

When you’re a technology reporter, as my wife was for many years, you quickly find that your subjects have certain expectations about the coverage that you’re supposed to be providing. As Benjamin Wallace wrote a while back in New York magazine:

“A smart young person in the Valley thinks being a reporter is basically being a PR person,” says one tech journalist. “Like, We have news to share, we’d like to come and tell you about it.” Reporters who write favorably about companies receive invitations to things; critics don’t. “They’re very thin-skinned,” says another reporter. “On Wall Street, if you call them a douchebag, they’ve already heard seventeen worse things in the last hour. Here, if you criticize a company, you’re criticizing the spirit of innovation.”

Mike Isaac of the New York Times recently made a similar observation in an interview with Recode: “One of the perceptions [of tech entrepreneurs] is A) Well, the press is slanted against us in some way [and] B) Why aren’t they appreciating how awesome we are? And like all these other things…I think a number of companies, including and especially Uber, get really upset when you don’t recognize the gravitas of their genius and the scope of how much they have changed.” Along the same lines, you also sometimes hear that reporters should be “supporting” local startups—which essentially means any company not based in Silicon Valley or New York—or businesses run by members of traditionally underrepresented groups.

As a result, critical coverage of any kind can be seen as a betrayal. But it isn’t a reporter’s job to “support” anything, whether it’s a city, the interests of particular stakeholders, or the concept of innovation itself—and this applies to much more than just financial journalism. In a perceptive piece for Vox, Alissa Wilkinson notes that similar pressures apply to movie critics. She begins with the example of Ocean’s 8, which Cate Blanchett, one of the film’s stars, complained had been reviewed through a “prism of misunderstanding” by film critics, who are mostly white and male. And Wilkinson responds with what I think is a very important point:

They’re not wrong about the makeup of the pool of critics. And this discussion about the demographic makeup of film critics is laudable and necessary. But the way it’s being framed has less helpful implications: that the people whose opinions really count are those whom the movie is “for.” Not only does that ignore how most movies actually make their money, but it says a lot about Hollywood’s attitude toward criticism, best revealed in Blanchett’s statement. She compared a studio’s “support” of a film—which means, essentially, a big marketing budget—with critics’ roles in a film’s success, which she says are a “really big part of the equation.” In that view, critics are mainly useful in how they “support” movies the industry thinks they should like because of the demographic group and audience segment into which they fall.

This has obvious affinities to the attitude that we often see among tech startups, perhaps because they’re operating under conditions similar to Hollywood’s. They’re both risky, volatile fields that depend largely on perception, which is shaped by coverage from a relatively small pool of influencers. It’s true of books as well. And it’s easy for all of them to fall into the trap of assuming that critics who aren’t being supportive somehow aren’t doing their jobs.

But that isn’t true, either. And it’s important to distinguish between the feelings of creators, who can hardly be expected to be objective, and those of outside players with an interest in an enterprise’s success or failure, which can be emotional as much as financial. There are certain movies or startups that many of us want to succeed because of what they say about an entire industry or culture. Black Panther was one, and it earned a reception that exceeded the hopes of even the most fervent fan. A Wrinkle in Time was another, and it didn’t, although I liked that movie a lot. But it isn’t a critic’s responsibility to support a work of art for such reasons. As Wilkinson writes:

Diversifying that pool [of critics] won’t automatically lead to the results the industry might like. Critics who belong to the same demographic group shouldn’t feel as if they need to move in lockstep with a movie simply because someone like them is represented in it, or because the film’s marketing is aimed at them. Women critics shouldn’t feel as if they need to ‘support’ a film telling a woman’s story, any more than men who want to appear to be feminists should. Black and Latinx and Asian critics shouldn’t be expected to love movies about black and Latinx and Asian people as a matter of course.

Wilkinson concludes: “The best reason to diversify criticism is so that when Hollywood puts out movies for women, or movies for people of color, it doesn’t get lazy.” I agree—and I’d add that a more diverse pool of critics would also discourage Hollywood from being lazy when it makes movies for anyone.

Diversity, in criticism as in anything else, is good for the groups directly affected, but it’s equally good for everybody. Writing of Min Jin Lee’s novel Pachinko, the author Eve L. Ewing recently said on Twitter: “Hire Asian-American writers/Korean-American writers/Korean folks with different diasporic experiences to write about Pachinko, be on panels about it, own reviews of it, host online roundtables…And then hire them to write about other books too!” That last sentence is the key. I want to know what Korean-American writers have to say about Pachinko, but I’d be just as interested in their thoughts on, say, Jonathan Franzen’s Purity.

And the first step is acknowledging what critics are actually doing, which isn’t supporting particular works of art, advancing a cause, or providing recommendations. It’s writing reviews. When most critics write anything, they’re thinking primarily about the response it will get from readers and how it fits into their career as a whole. You may not like it, but it’s pointless to ignore it, or to argue that critics should be held to a standard that differs from the one applied to anyone else trying to produce decent work. (I suppose that one requirement might be a basic respect or affection for the medium that one is criticizing, but that isn’t true of every critic, either.) Turning to the question of diversity, you find that expanding the range of critical voices is worthwhile in itself, just as it is for any other art form, and regardless of its impact on other works. When a piece of criticism or journalism is judged for its effects beyond its own boundaries, we’re edging closer to propaganda. Making this distinction is harder than it looks, as we’ve recently seen with Elon Musk, who, like Trump, seems to think that negative coverage must be the result of deliberate bias or dishonesty. Even on a more modest level, a call for “support” may seem harmless, but it can easily turn into a belief that you’re either with us or against us. And that would be a critical mistake.

Inside the sweatbox

Yesterday, I watched a remarkable documentary called The Sweatbox, which belongs on the short list of films—along with Hearts of Darkness and the special features for The Lord of the Rings—that I would recommend to anyone who ever thought that it might be fun to work in the movies. It was never officially released, but a copy occasionally surfaces on YouTube, and I strongly suggest watching the version available now before it disappears yet again. For the first thirty minutes or so, it plays like a standard featurette of the sort that you might have found on the second disc of a home video release from two decades ago, which is exactly what it was supposed to be. Its protagonist, improbably, is Sting, who was approached by Disney in the late nineties to compose six songs for a movie titled Kingdom of the Sun. (One of the two directors of the documentary is Sting’s wife, Trudie Styler, a producer whose other credits include Lock, Stock and Two Smoking Barrels and Moon.) The feature was conceived by animator Roger Allers, who was just coming off the enormous success of The Lion King, as a mixture of Peruvian mythology, drama, mysticism, and comedy, with a central plot lifted from The Prince and the Pauper. After two years of production, the work in progress was screened for the first time for studio executives. As always, the atmosphere was tense, but no more than usual, and it inspired the standard amount of black humor from the creative team. As one artist jokes nervously before the screening: “You don’t want them to come in and go, ‘Oh, you know what, we don’t like that idea of the one guy looking like the other guy. Let’s get rid of the basis of the movie.’ This would be a good time for them to tell us.”

Of course, that’s exactly what happened. The top brass at Disney hated the movie, production was halted, and Allers left the project that was ultimately retooled into The Emperor’s New Groove, which reused much of the design work and finished animation while tossing out entire characters—along with most of Sting’s songs—and introducing new ones. It’s a story that has fascinated me ever since I first heard about it, around the time of the movie’s initial release, and I’m excited beyond words that The Sweatbox even exists. (The title of the documentary, which was later edited down to an innocuous special feature for the DVD, refers to the room at the studio in Burbank in which rough work is screened.) And while the events that it depicts are extraordinary, they represent only an extreme case of the customary process at Disney and Pixar, at least if you believe the way the studio likes to talk about itself. In a profile that ran a while back in The New Yorker, the director Andrew Stanton expressed it in terms that I’ve never forgotten:

“We spent two years with Eve getting shot in her heart battery, and Wall-E giving her his battery, and it never worked. Finally—finally—we realized he should lose his memory instead, and thus his personality…We’re in this weird, hermetically sealed freakazoid place where everybody’s trying their best to do their best—and the films still suck for three out of the four years it takes to make them.”

This statement appeared in print six months before the release of Stanton’s live action debut John Carter, which implies that this method is far from infallible. And the drama behind The Emperor’s New Groove was unprecedented even by the studio’s relentless standards. As executive Thomas Schumacher says at one point: “We always say, Oh, this is normal. [But] we’ve never been through this before.”

As it happens, I watched The Sweatbox shortly after reading an autobiographical essay by the artist Cassandra Smolcic about her experiences in the “weird, hermetically sealed freakazoid” environment of Pixar. It’s a long read, but riveting throughout, and it makes it clear that the issues at the studio went far beyond the actions of John Lasseter. And while I could focus on any number of details or anecdotes, I’d like to highlight one section, about the firing of director Brenda Chapman halfway through the production of Brave:

Curious about the downfall of such an accomplished, groundbreaking woman, I began taking the company pulse soon after Brenda’s firing had been announced. To the general population of the studio — many of whom had never worked on Brave because it was not yet in full-steam production — it seemed as though Brenda’s firing was considered justifiable. Rumor had it that she had been indecisive, unconfident and ineffective as a director. But for me and others who worked closely with the second-time director, there was a palpable sense of outrage, disbelief and mourning after Brenda was removed from the film. One artist, who’d been on the Brave story team for years, passionately told me how she didn’t find Brenda to be indecisive at all. Brenda knew exactly what film she was making and was very clear in communicating her vision, the story artist said, and the film she was making was powerful and compelling. “From where I was sitting, the only problem with Brenda and her version of Brave was that it was a story told about a mother and a daughter from a distinctly female lens,” she explained.

Smolcic adds: “During the summer of 2009, I personally worked on Brave while Brenda was still in charge. I likewise never felt that she was uncertain about the kind of film she was making, or how to go about making it.”

There are obvious parallels between what happened to Allers and to Chapman, which might seem to undercut the notion that the latter’s firing had anything to do with the fact that she was a woman. But there are a few other points worth raising. One is that no one seems to have applied the words “indecisive, unconfident, and ineffective” to Allers, who voluntarily left the production after his request to push back the release date was denied. And if The Sweatbox is any indication, the situation of women and other historically underrepresented groups at Disney during this period was just as bad as it was at Pixar—I counted exactly one woman who speaks onscreen, for less than fifteen seconds, and all the other faces that we see are white and male. (After Sting expresses concern about the original ending of The Emperor’s New Groove, in which the rain forest is cut down to build an amusement park, an avuncular Roy Disney confides to the camera: “We’re gonna offend somebody sooner or later. I mean, it’s impossible to do anything in the world these days without offending somebody.” Which betrays a certain nostalgia for a time when no one, apparently, was offended by anything that the studio might do.) One of the major players in the documentary is Thomas Schumacher, the head of Disney Animation, who has since been accused of “explicit sexual language and harassment in the workplace,” according to a report in the Wall Street Journal. In the footage that we see, Schumacher and fellow executive Peter Schneider don’t come off particularly well, which may just be a consequence of the perspective from which the story is told. But it’s equally clear that the mythical process that allows such movies to “suck” for three out of four years is only practicable for filmmakers who look and sound like their counterparts on the other side of the sweatbox, which gives them the creative freedom they need to try and fail repeatedly—a luxury that women are rarely granted. What happened to Allers on Kingdom of the Sun is still astounding. But it might be even more noteworthy that he survived for as long as he did.

The same clump of flowers

I really intend to set up most of my [opera] productions so that people have very different experiences on the same evening. Part of that is just technical—I make too many things happen at once so you have to decide what you’re going to look at, and whatever you’re looking at, you’re not looking at something else. Someone else may be looking at that, and I deliberately set up confusing situations sometimes so that the audience is making their own choices. I like that. It’s what separates live theater from TV or film. In television or film, your gaze is always channeled. You are not consulted; you’re told where we’re going to look next. What I love about opera is that your mind wanders, and my job is to set up an interesting landscape to wander in. No two people come out having smelled the same clump of flowers…

I don’t like to watch people think onstage. I like to watch people do things. I don’t want to know what I think Nixon’s thinking. If I can get Nixon to do the things that Nixon does, then it’s up to the audience to decide what he’s thinking. That’s where it gets interesting. If I say, “Nixon is thinking this,” and stage it accordingly, then it blots out any possibility of interpretation on the part of the audience. So I just say, “Here’s a person who’s done the following things. Now you tell me what he’s thinking.” Then it gets interesting, and the range of reaction becomes wonderful. In theater, psychology is overrated…My way of direction is extremely simple. If I say, “Go over here, pick up the glass of water and drink it,” that’s what I expect.

Peter Sellars, in an interview with Bruce Duffie

Written by nevalalee

July 9, 2018 at 7:30 am

The master of time

I saw Claude Lanzmann’s Shoah for the first time seven years ago at the Gene Siskel Film Center in Chicago. Those ten hours amounted to one of the most memorable moviegoing experiences of my life, and Lanzmann, who died yesterday, was among the most intriguing figures in film. “We see him in the corners of some of his shots, a tall, lanky man, informally dressed, chain-smoking,” Roger Ebert wrote in his review, and it’s in that role—the dogged investigator of the Holocaust, returning years afterward to the scene of the crime—that he’ll inevitably be remembered. He willed Shoah into existence at a period when no comparable models for such a project existed, and the undertaking was so massive that it took over the rest of his career, much of which was spent organizing material that had been cut, which produced several huge documentaries in itself. And the result goes beyond genre. Writing in The New Yorker, Richard Brody observes that Lanzmann’s film is “a late flowering of his intellectual and cultural milieu—existentialism and the French New Wave,” and he even compares it to Breathless. He also memorably describes the methods that Lanzmann used to interview former Nazis:

The story of the making of Shoah is as exciting as a spy novel…Lanzmann hid [the camera] in a bag with a tiny hole for the lens, and had one of his cameramen point it at an unsuspecting interview subject. He hid a small microphone behind his tie. A van was rigged with video and radio equipment that rendered the stealthy images and sounds on a television set. “What qualms should I have had about misleading Nazis, murderers?” Lanzmann recently told Der Spiegel. “Weren’t the Nazis themselves masters of deception?” He believed that his ruses served the higher good of revealing the truth—and perhaps accomplished symbolic acts of resistance after the fact. As he explained in 1985, “I’m killing them with the camera.”

The result speaks for itself, and it would be overwhelming even if one didn’t know the story of how it was made. (If the world were on fire and I could only save a few reels from the entire history of cinema, one of them would be Lanzmann’s devastating interview of the barber Abraham Bomba.) But it’s worth stressing the contrast between the film’s monumental quality and the subterfuge, tenacity, and cleverness that had to go into making it, which hint at Lanzmann’s secret affinities with someone like Werner Herzog. Brody writes:

The most audacious thing Lanzmann did to complete Shoah was, very simply, to take his time. His initial backers expected him to deliver a two-hour film in eighteen months; his response was to lie—to promise that it would be done as specified, and then to continue working as he saw fit. Lanzmann borrowed money (including from [Simone de] Beauvoir) to keep shooting, and then spent five years obsessively editing his three hundred and fifty hours of footage. He writes that he became the “master of time,” which he considered to be not only an aspect of creative control but also one of aesthetic morality. He sensed that there was just “one right path” to follow, and he set a rule for himself: “I refused to carry on until I had found it, which could take hours or days, on one occasion I am not likely to forget it took three weeks.”

Shoah is like no other movie ever made, but it had to be made just like every other movie, except even more so—which is a fact that all documentarians and aspiring muckrakers should remember. After one interview, Brody writes, “Lanzmann and his assistant were unmasked, attacked, and bloodied by the subject’s son and three young toughs.” Lanzmann spent a month in the hospital and went back to work.

When it finally came out in 1985, the film caused a sensation, but its reach might have been even greater if it had been released three decades later, if only because the way in which we watch documentaries has changed. Lanzmann rightly conceived it as a theatrical release, but today, it would be more likely to play on television or online. Many of us don’t think twice about watching a nonfiction series that lasts for nine hours—The Vietnam War was nearly double that length—and Shoah would have become a cultural event. Yet there’s also something to be said for the experience of seeing it in a darkened auditorium over the course of a single day. As Ebert put it:

[Lanzmann] uses a…poetic, mosaic approach, moving according to rhythms only he understands among the only three kinds of faces we see in this film: survivors, murderers and bystanders. As their testimony is intercut with the scenes of train tracks, steam engines, abandoned buildings and empty fields, we are left with enough time to think our own thoughts, to meditate, to wonder…After nine hours of Shoah, the Holocaust is no longer a subject, a chapter of history, a phenomenon. It is an environment. It is around us.

That said, I’d encourage viewers to experience it in any form that they can, and there’s no denying that a single marathon session makes unusual demands. At the screening that I attended in Chicago, at least two audience members, after a valiant struggle, had fallen asleep by the end of the movie, which got out after midnight, and as the lights went up, the man in front of me said, “That last segment was too long.” He was probably just tired.

In fact, the final section—on the Warsaw Ghetto Uprising—is essential, and I often think of its central subject, the resistance fighter Simcha Rotem. In May 1943, Rotem attempted a rescue operation to save any survivors who might still be in the ghetto, making his way underground through the sewers, but when he reached the surface, he found no one:

I had to go on through the ghetto. I suddenly heard a woman calling from the ruins. It was darkest night, no lights, you saw nothing. All the houses were in ruins, and I heard only one voice. I thought some evil spell had been cast on me, a woman’s voice talking from the rubble. I circled the ruins. I didn’t look at my watch, but I must have spent half an hour exploring, trying to find the woman whose voice guided me, but unfortunately I didn’t find her.

Rotem, who is still alive today, moved from one bunker to another, shouting his password, and Lanzmann gives him the last words in a film that might seem to resist any ending:

There was still smoke, and that awful smell of charred flesh of people who had surely been burned alive. I continued on my way, going to other bunkers in search of fighting units, but it was the same everywhere…I went from bunker to bunker, and after walking for hours in the ghetto, I went back toward the sewers…I was alone all the time. Except for that woman’s voice, and a man I met as I came out of the sewers, I was alone throughout my tour of the ghetto. I didn’t meet a living soul. At one point I recall feeling a kind of peace, of serenity. I said to myself: “I’m the last Jew. I’ll wait for morning, and for the Germans.”

Written by nevalalee

July 6, 2018 at 8:41 am

The purity test

with one comment

Earlier this week, The New York Times Magazine published a profile by Taffy Brodesser-Akner of the novelist Jonathan Franzen. It’s full of fascinating moments, including a remarkable one that seems to have happened entirely by accident—the reporter was in the room when Franzen received a pair of phone calls, including one from Daniel Craig, to inform him that production had halted on the television adaptation of his novel Purity. Brodesser-Akner writes: “Franzen sat down and blinked a few times.” That sounds about right to me. And the paragraph that follows gets at something crucial about the writing life, in which the necessity of solitary work clashes with the pressure to put its fruits at the mercy of the market:

He should have known. He should have known that the bigger the production—the more people you involve, the more hands the thing goes through—the more likely that it will never see the light of day resembling the thing you set out to make in the first place. That’s the real problem with adaptation, even once you decide you’re all in. It just involves too many people. When he writes a book, he makes sure it’s intact from his original vision of it. He sends it to his editor, and he either makes the changes that are suggested or he doesn’t. The thing that we then see on shelves is exactly the thing he set out to make. That might be the only way to do this. Yes, writing a novel—you alone in a room with your own thoughts—might be the only way to get a maximal kind of satisfaction from your creative efforts. All the other ways can break your heart.

To be fair, Franzen’s status is an unusual one, and even successful novelists aren’t always in the position of taking for granted the publication of “exactly the thing he set out to make.” (In practice, it’s close to all or nothing. In my experience, the novel that you see on store shelves mostly reflects what the writer wanted, while the ones in which the writer’s vision clashes with those of other stakeholders in the process generally don’t get published at all.) And I don’t think I’m alone when I say that some of the most interesting details that Brodesser-Akner provides are financial. A certain decorum still surrounds the reporting of sales figures in the literary world, so there’s a real frisson in seeing them laid out like this:

And, well, sales of his novels have decreased since The Corrections was published in 2001. That book, about a Midwestern family enduring personal crises, has sold 1.6 million copies to date. Freedom, which was called a “masterpiece” in the first paragraph of its New York Times review, has sold 1.15 million since it was published in 2010. And 2015’s Purity, his novel about a young woman’s search for her father and the story of that father and the people he knew, has sold only 255,476.

For most writers, selling a quarter of a million copies of any book would exceed their wildest dreams. Having written one of the greatest outliers of the last twenty years, Franzen is simply reverting to a very exalted mean. But there’s still a lot to unpack here.

For one thing, while Purity was a commercial disappointment, it doesn’t seem to have been an unambiguous disaster. According to Publishers Weekly, its first printing—which is where you can see a publisher calibrating its expectations—came to around 350,000 copies, which wasn’t even the largest print run for that month. (That honor went to David Lagercrantz’s The Girl in the Spider’s Web, which had half a million copies, while a new novel by the likes of John Grisham can run to over a million.) I don’t know what Franzen was paid in advance, but the loss must have fallen well short of that on a book like Tom Wolfe’s Back to Blood, for which he received $7 million and sold 62,000 copies, meaning that his publisher paid over a hundred dollars for every copy that someone actually bought. And any financial hit would have been modest compared to the prestige of keeping a major novelist on one’s list, which is unquantifiable, but no less real. If there’s one thing that I’ve learned about publishing over the last decade, it’s that it’s a lot like the movie industry, in which apparently inexplicable commercial and marketing decisions are easier to understand when you consider their true audience. In many cases, when they buy or pass on a book, editors aren’t making decisions for readers, but for other editors, and they’re very conscious of what everyone in their imprint thinks. A readership is an abstraction, except when quantified in sales, but editors have their everyday judgment calls reflected back on them by the people they see every day. Giving up a writer like Franzen might make financial sense, but it would be devastating to Farrar, Straus and Giroux, to say nothing of the relationship that can grow between an editor and a prized author over time.

You find much the same dynamic in Hollywood, in which some decisions are utterly inexplicable until you see them as a manifestation of office politics. In theory, a film is made for moviegoers, but the reactions of the producer down the hall are far more concrete. The difference between publishing and the movies is that the latter publish their box office returns, often in real time, while book sales remain opaque even at the highest level. And it’s interesting to wonder how both industries might differ if their approaches were more similar. After years of work, the success of a movie can be determined by the Saturday morning after its release, while a book usually has a little more time. (The exception is when a highly anticipated title doesn’t make it onto the New York Times bestseller list, or falls off it with alarming speed. The list doesn’t disclose any sales figures, which means that success is relative, not absolute—and which may be a small part of the reason why writers seldom wish one another well.) In the absence of hard sales, writers establish the pecking order with awards, reviews, and the other signifiers that have allowed Franzen to assume what Brodesser-Akner calls the mantle of “the White Male Great American Literary Novelist.” But the real takeaway is how narrow a slice of the world this reflects. Even if we place the most generous interpretation imaginable onto Franzen’s numbers, it’s likely that well under one percent of the American population has bought or read any of his books. You’ll find roughly the same number on any given weeknight playing HQ Trivia. If we acknowledged this more widely, it might free writers to return to their proper cultural position, in which the difference between a bestseller and a disappointment fades rightly into irrelevance. Who knows? They might even be happier.

Written by nevalalee

June 28, 2018 at 7:49 am

Breaking the silence

leave a comment »

On Saturday, I participated in an event at the American Library Association conference in New Orleans with the authors Alex White (A Big Ship At the Edge of the Universe), Tessa Gratton (The Queens of Innis Lear), and Robert Jackson Bennett (Foundryside). It went fine—I signed books, met some interesting people, and had the chance to speak to librarians about Astounding, which is why I was there in the first place. I had also been told that I should talk about a book that I had recently read, but because of a miscommunication, the other writers on the panel never got the message, so the idea was quietly dropped. This wasn’t a serious problem, but it deprived me of the chance to recommend the title that I’d selected, which I feel comfortable describing as the most interesting book that I’ve read in at least two or three years. It isn’t about science fiction, but about the art of biography, which can be a form of speculative fiction in itself. As regular readers of this blog know, I stumbled into the role of a biographer almost by accident, and ever since, I’ve been seeking advice on the subject wherever I can find it. It shouldn’t come as a surprise that biographers are eager to speak about their art and struggles, and that they’ll sometimes overshare at moments when they should be fading into the background. (I have a sneaking fondness for books like The Life of Graham Greene by Norman Sherry and Anthony Burgess by Roger Lewis, in which the biographer smuggles himself into the life of his subject, even if I can’t defend it. And James Atlas recently published an entire book, The Shadow in the Garden, mostly as an excuse to air his grievances about the reception of his biography of Saul Bellow.) But it wasn’t until recently that I found a book that captured everything that I had been feeling and thinking, along with so much else.

The book is The Silent Woman: Sylvia Plath and Ted Hughes by Janet Malcolm, which was originally published in 1994. I think it’s a masterpiece—it’s one of the best nonfiction books that I’ve ever read of any kind—and it instantly elevated Malcolm, whom I’ve long respected, into the pantheon of my intellectual heroes. I’ve read a lot of her work in The New Yorker, of course, and I greatly admired her books Psychoanalysis: The Impossible Profession and In the Freud Archives. (The former includes a passage about the history of psychoanalysis that I find so insightful that I’ve quoted it here no fewer than three times.) But The Silent Woman is on another level entirely. On the surface, it’s a close reading of all the biographies that have been written by others about Plath and Hughes, but as you read it, it unfolds into a work of fiendish complexity that operates on multiple planes at once. It’s a fascinating—and gossipy—consideration of Plath and Hughes themselves; an account of Malcolm’s own investigation of some of the figures on the sidelines; a meditation on biographical truth; and a fantastically involving reading experience. Malcolm has a knack for crafting a phrase or analogy that can change the way you think about a subject forever. Writing about the appearance of the first collection of Plath’s letters, for instance, she uses an image that reminds me of the moment in certain movies when the screen suddenly widens into Cinemascope size:

Before the publication of Letters Home, the Plath legend was brief and contained, a taut, austere stage drama set in a few bleak, sparsely furnished rooms…Now the legend opened out, to become a vast, sprawling movie-novel filmed on sets of the most consummate and particularized realism: period clothing, furniture, and kitchen appliances; real food; a cast of characters headed by a Doris Dayish Plath (a tall Doris Day who “wrote”) and a Laurence Olivier-Heathcliffish Hughes.

The result is as twisty as Nabokov’s Pale Fire, but even better, I think, because it doesn’t wear its cleverness on its sleeve. Instead, it subtly ensnares you, and you end up feeling—or at least I did—that you’re somehow implicated in the story yourself. I read the first half online, in the archive of The New Yorker, and as soon as I realized how special it was, I checked out the hardcover from the library. Once I was done, I knew that this was a book that I had to own, so I picked up a used copy of the paperback at Open Books in Chicago. I leafed through it occasionally afterward, and I even lent it to my wife to read, but I didn’t look at it too closely. As a result, it wasn’t until I brought it last weekend to New Orleans that I realized that it included a new afterword. Unlike many books, it didn’t advertise the presence of any additional material, and it isn’t mentioned on the copyright page, which made it seem like a secret message straight out of Dictionary of the Khazars. It’s also a confession. In the original edition, Malcolm states that Ted Hughes decided to posthumously release Plath’s novel The Bell Jar in America because he needed money to buy a second home. After the book was published, Malcolm reveals in the afterword, Hughes wrote to her to say that this was incorrect:

One part of your narrative is not quite right…You quote my letter to [Plath’s mother] Aurelia in which I ask her how she feels about our publishing The Bell Jar in the U.S. That was early 1970; I wanted cash to buy a house…When Aurelia wrote back and made her feelings clear, even though she said the decision to publish or not rested with me, I dropped my idea of buying the house. My letter reassuring her is evidently not in the archive you saw (or obviously your account would be different).

Before I get to Malcolm’s response to Hughes, who is politely but firmly pointing out a possible mistake, I should mention my own situation. Yesterday, I delivered the final set of corrections to Astounding. In the process, I’ve checked as much of the book as I can against my primary sources, and I’ve found a few small mistakes—mistyped dates, minor transcription errors—that I’m glad to have caught at this stage. But it means that I’m very conscious of how it feels to be a writer who learns that something in his or her book might be wrong. As for Malcolm, she wrote back to Hughes, saying that she checked her notes from the Lilly Library at Indiana University Bloomington:

In 1971, Aurelia made an annotation on your letter of March 24, 1970. She wrote, in tiny handwriting, “’71—children said this was a horrible house’ and they didn’t want to live there. Ted did send me $10,000 from the royalties (I protested the publication, which Sylvia would not have allowed) and deposited [illegible] in accounts for Frieda and Nick—Ted [illegible] bought the property!!!” Not knowing anything to the contrary, I took Aurelia at her word.

Malcolm and Hughes spoke on the phone to straighten out the misunderstanding, and everything seemed fine. But on the very last page of the book, Malcolm slips in the literary equivalent of a post-credits scene that changes everything that we thought we knew:

The next morning I awoke with one of those inklings by which detective fiction is regularly fueled. I telephoned the Lilly Library again and asked the librarian if she would read me Aurelia Plath’s annotation of Hughes’s letter of March 24, 1970—I was especially interested in a word that I had found illegible when I took notes at the library in 1991. Perhaps she could make it out? She said she would try. When she reached the relevant sentence, she paused for a suspenseful moment of effort. Then she read—as I felt certain she would—“Ted never bought the property.”

Written by nevalalee

June 27, 2018 at 9:22 am

The ghost in the machine

with one comment

Note: Spoilers follow for the season finale of Westworld.

When you’re being told a story, you want to believe that the characters have free will. Deep down, you know that they’ve been manipulated by a higher power that can make them do whatever it likes, and occasionally, it can even be fun to see the wires. For the most part, though, our enjoyment of narrative art is predicated on postponing that realization for as long as possible. The longer the work continues, the harder this becomes, and it can amount to a real problem for a heavily serialized television series, which can start to seem strained and artificial as the hours of plot developments accumulate. These tensions have a way of becoming the most visible in the protagonist, whose basic purpose is to keep the action clocking along. As I’ve noted here before, there’s a reason why the main character is often the least interesting person in sight. The show’s lead is under such pressure to advance the plot that he or she becomes reduced to the diagram of a pattern of forces, like one of the fish in D’Arcy Wentworth Thompson’s On Growth and Form, in which the animal’s physical shape is determined by the outside stresses to which it has been subjected. Every action exists to fulfill some larger purpose, which often results in leads who are boringly singleminded, with no room for the tangents that can bring supporting players to life. The characters at the center have to constantly triangulate between action, motivation, and relatability, which can drain them of all surprise. And if the story ever relaxes its hold, they burst, like sea creatures brought up from a crevasse to the surface.

This is true of most shows that rely heavily on plot twists and momentum—it became a huge problem for The Vampire Diaries—but it's even more of an issue when a series is also trying to play tricks with structure and time. Westworld has done more than any other television drama that I can remember to push against the constraints of chronology, and the results are often ingenious. Yet they come at a price. (As the screenwriter Robert Towne put it in a slightly different context: "You end up paying for it with an almost mathematical certainty.") And the victim, not surprisingly, has been the ostensible lead. Over a year and a half ago, when the first season was still unfolding, I wrote that Dolores, for all her problems, was the engine that drove the story, and that her gradual movement toward awareness was what gave the series its narrative thrust. I continued:

This is why I’m wary of the popular fan theory, which has been exhaustively discussed online, that the show is taking place in different timelines…Dolores’s story is the heart of the series, and placing her scenes with William three decades earlier makes nonsense of the show’s central conceit: that Dolores is slowly edging her way toward greater self-awareness because she’s been growing all this time. The flashback theory implies that she was already experiencing flashes of deeper consciousness almost from the beginning, which requires us to throw out most of what we know about her so far…It has the advantage of turning William, who has been kind of a bore, into a vastly more interesting figure, but only at the cost of making Dolores considerably less interesting—a puppet of the plot, rather than a character who can drive the narrative forward in her own right.

As it turned out, of course, that theory was totally on the mark, and I felt a little foolish for having doubted it for so long. But on a deeper level, I have to give myself credit for anticipating the effect that it would have on the series as a whole. At the time, I concluded: “Dolores is such a load-bearing character that I’m worried that the show would lose more than it gained by the reveal…The multiple timeline theory, as described, would remove the Dolores we know from the story forever. It would be a fantastic twist. But I’m not sure the show could survive it.” And that’s pretty much what happened, although it took another season to clarify the extent of the damage. On paper, Dolores was still the most important character, and Evan Rachel Wood deservedly came first in the credits. But in order to preserve yet another surprise, the show had to be maddeningly coy about what exactly she was doing, even as she humorlessly pursued her undefined mission. Every line was a cryptic hint about what was coming, and the payoff was reasonably satisfying. But I don’t know if it was worth it. Offhand, I can’t recall another series in which an initially engaging protagonist was reduced so abruptly to a plot device, and it’s hard not to blame the show’s conceptual and structural pretensions, which used Dolores as a valve for the pressure that was occurring everywhere else but at its center. It’s frankly impossible for me to imagine what Dolores would even look like if she were relaxing or joking around or doing literally anything except persisting grimly in her roaring rampage of revenge. Because of the nature of its ambitions, Westworld can’t give her—or any of its characters—the freedom to act outside the demands of the story. It’s willing to let its hosts be reprogrammed in any way that the plot requires. Which you’ve got to admit is kind of ironic.

None of this would really matter if the payoffs were there, and there’s no question that last night’s big reveal about Charlotte is an effective one. (Unfortunately, it comes at the expense of Tessa Thompson, who, like Wood, has seemed wasted throughout the entire season for reasons that have become evident only now.) But the more I think about it, the more I feel that this approach might be inherently unsuited for a season of television that runs close to twelve hours. When a conventional movie surprises us with a twist at the end, part of the pleasure is mentally rewinding the film to see how it plays in light of the closing revelation—and much of the genius of Memento, which was based on Jonathan Nolan’s original story, was that it allowed us to do this every ten minutes. Yet as Westworld itself repeatedly points out, there’s only so much information or complexity that the human mind can handle. I’m a reasonably attentive viewer, but I often struggled to recall what happened seven episodes ago, and the volume of data that the show presents makes it difficult to check up on any one point. Now that the series is over, I’m sure that if I revisited the earlier episodes, many scenes would take on an additional meaning, but I just don’t have the time. And twelve hours may be too long to make viewers wait for the missing piece that will lock the rest into place, especially when it comes at the expense of narrative interest in the meantime, and when anything truly definitive will need to be withheld for the sake of later seasons. It’s to the credit of Westworld and its creators that there’s little doubt that they have a master plan. They aren’t making it up as they go along. But this also makes it hard for the characters to make anything of themselves. None of us, the show implies, is truly in control of our actions, which may well be the case. But a work of art, like life itself, doesn’t seem worth the trouble if it can’t convince us otherwise.

Written by nevalalee

June 25, 2018 at 8:42 am

The president is collaborating

leave a comment »

Last week, Bill Clinton and James Patterson released their collaborative novel The President Is Missing, which has already sold something like a quarter of a million copies. Its publication was heralded by a lavish two-page spread in The New Yorker, with effusive blurbs from just about everyone whom a former president and the world's bestselling author might be expected to get on the phone. (Lee Child: "The political thriller of the decade." Ron Chernow: "A fabulously entertaining thriller.") If you want proof that the magazine's advertising department is fully insulated from its editorial side, however, you can just point to the fact that the task of reviewing the book itself was given to Anthony Lane, who doesn't tend to look favorably on much of anything. Lane's style—he has evidently never met a smug pun or young starlet he didn't like—can occasionally turn me off from his movie reviews, but I've always admired his literary takedowns. I don't think a month goes by that I don't remember his writeup of the New York Times bestseller list of May 15, 1994, which allowed him to tackle the likes of The Bridges of Madison County, The Celestine Prophecy, and especially The Day After Tomorrow by Allan Folsom, from which he quoted a sentence that permanently changed my view of such novels: "Two hundred European cities have bus links with Frankfurt." But he seems to have grudgingly liked The President Is Missing. If nothing else, he furnishes a backhanded compliment that has already been posted, hilariously out of context, on Amazon: "If you want to make the most of your late-capitalist leisure-time, hit the couch, crack a Bud, punch the book open, focus your squint, and enjoy."

The words “hit the couch, crack a Bud, punch the book open, [and] focus your squint,” are all callbacks to samples of Patterson’s prose that Lane quotes in the review, but the phrase “late-capitalist leisure-time” might require some additional explanation. It’s a reference to the paper “Structure over Style: Collaborative Authorship and the Revival of Literary Capitalism,” which appeared last year in Digital Humanities Review, and I’m grateful to Lane for bringing it to my attention. The authors, Simon Fuller and James O’Sullivan, focus on the factory model of novelists who employ ghostwriters to boost their productivity, and their star exhibit is Patterson, to whom they devote the same kind of computational scrutiny that has previously uncovered traces of collaboration in Shakespeare. Not surprisingly, it turns out that Patterson doesn’t write most of the books that he ostensibly coauthors. (He may not even have done much of the writing on First to Die, which credits him as the sole writer.) But the paper is less interesting for its quantitative analysis than for its qualitative evaluation of what Patterson tells us about how we consume and enjoy fiction. For instance:

The form of [Patterson’s] novels also appears to be molded by contemporary experience. In particular, his work is perhaps best described as “commuter fiction.” Nicholas Paumgarten describes how the average time for a commute has significantly increased. As a result, reading has increasingly become one of those pursuits that can pass the time of a commute. For example, a truck driver describes how “he had never read any of Patterson’s books but that he had listened to every single one of them on the road.” A number of online reader reviews also describe Patterson’s writing in terms of their commutes…With large print, and chapters of two or three pages, Patterson’s works are constructed to fit between the stops on a metro line.

Of course, you could say much the same of many thrillers, particularly the kind known as the airport novel, which wasn’t just a book that you read on planes—at its peak, it was one in which many scenes took place in airports, which were still associated with glamor and escape. What sets Patterson apart from his peers is his ability to maintain a viable brand while publishing a dozen books every year. His productivity is inseparable from his use of coauthors, but he wasn’t the first. Fuller and O’Sullivan cite the case of Alexandre Dumas, who allegedly boasted of having written four hundred novels and thirty-five plays that had created jobs for over eight thousand people. And they dig up a remarkable quote from The German Ideology by Karl Marx and Friedrich Engels, who “favorably compare French popular fiction to the German, paying particular attention to the latter’s appropriation of the division of labor”:

In proclaiming the uniqueness of work in science and art, [Max] Stirner adopts a position far inferior to that of the bourgeoisie. At the present time it has already been found necessary to organize this “unique” activity. Horace Vernet would not have had time to paint even a tenth of his pictures if he regarded them as works which “only this Unique person is capable of producing.” In Paris, the great demand for vaudevilles and novels brought about the organization of work for their production, organization which at any rate yields something better than its “unique” competitors in Germany.

These days, you could easily imagine Marx and Engels making a similar case about film, by arguing that the products of collaboration in Hollywood have often been more interesting, or at least more entertaining, than movies made by artists working outside the system. And they might be right.

The analogy to movies and television seems especially appropriate in the case of Patterson, who has often drawn such comparisons himself, as he once did to The Guardian: “There is a lot to be said for collaboration, and it should be seen as just another way to do things, as it is in other forms of writing, such as for television, where it is standard practice.” Fuller and O’Sullivan compare Patterson’s brand to that of Alfred Hitchcock, whose name was attached to everything from Dell anthologies to The Three Investigators to Alfred Hitchcock’s Mystery Magazine. It’s a good parallel, but an even better one might be hiding in plain sight. In her recent profile of the television producer Ryan Murphy, Emily Nussbaum evokes an ability to repackage the ideas of others that puts even Patterson to shame:

Murphy is also a collector, with an eye for the timeliest idea, the best story to option. Many of his shows originate as a spec script or as some other source material. (Murphy owned the rights to the memoir Orange Is the New Black before Jenji Kohan did, if you want to imagine an alternative history of television.) Glee grew out of a script by Ian Brennan; Feud began as a screenplay by Jaffe Cohen and Michael Zam. These scripts then get their DNA radically altered and replicated in Murphy’s lab, retooled with his themes and his knack for idiosyncratic casting.

Murphy’s approach of retooling existing material in his own image might be even smarter than Patterson’s method of writing outlines for others to expand, and he’s going to need it. Two months ago, he signed an unprecedented $300 million contract with Netflix to produce content of all kinds: television shows, movies, documentaries. And another former president was watching. While Bill Clinton was working with Patterson, Barack Obama was finalizing a Netflix deal of his own—and if he needs a collaborator, he doesn’t have far to look.
