Sci-Fi and Si
In 1959, the newspaper magnate Samuel I. Newhouse allegedly asked his wife Mitzi what she wanted for their upcoming wedding anniversary. When she told him that she wanted Vogue, he bought all of Condé Nast. At the time, the publishing firm was already in negotiations to acquire the titles of the aging Street & Smith, and Newhouse, its new owner, inherited this transaction. Here’s how Carol Felsenthal describes the deal in Citizen Newhouse:
For $4 million [Newhouse] bought Charm, Living for Young Homemakers, and Mademoiselle. (Also included were five sports annuals, which he ignored, allowing them to continue to operate with a minimal staff and low-overhead offices—separate from Condé Nast’s—and to earn a small but steady profit.) He ordered that Charm be folded into Glamour. Living for Young Homemakers became House & Garden Guides. Mademoiselle was allowed to survive because its audience was younger and better educated than Glamour’s; Mademoiselle was aimed at the college girl, Glamour at the secretary.
Newhouse’s eldest son, who was known as Si, joined Glamour at the age of thirty-five, and within a few years, he was promoted to oversee all the company’s magazines. When he passed away yesterday, as his obituary in the Times notes, he was a media titan “who as the owner of The New Yorker, Vogue, Vanity Fair, Architectural Digest and other magazines wielded vast influence over American culture, fashion and social taste.”
What this obituary—and all the other biographies that I’ve seen—fails to mention is that when the Newhouses acquired Street & Smith, they also bought Astounding Science Fiction. In the context of two remarkably busy lives, this merits little more than a footnote, but it was a significant event in the career of John W. Campbell and, by extension, the genre as a whole. In practice, Campbell was unaffected by the change in ownership, and he joked that he employed Condé Nast to get his ideas out, rather than the other way around. (Its most visible impact was a brief experiment with a larger format, allowing the magazine to sell ads to technical advertisers that didn’t make smaller printing plates, but the timing was lousy, and it was discontinued after two years.) But it also seems to have filled him with a sense of legitimacy. Campbell, like his father, had an uncritical admiration for businessmen—capitalism was the one orthodoxy that he took at face value—and from his new office in the Graybar Building on Lexington Avenue, he continued to identify with his corporate superiors. When Isaac Asimov tried to pick up a check at lunch, Campbell pinned his hand to the table: “Never argue with a giant corporation, Isaac.” And when a fan told him that he had written a story, but wasn’t sure whether it was right for the magazine, Campbell drew himself up: “And since when does the Condé Nast Publications, Incorporated pay you to make editorial decisions?” In fact, the change in ownership seems to have freed him up to make the title change that he had been contemplating for years. Shortly after the sale, Astounding became Analog, much to the chagrin of longtime fans.
Some readers discerned more sinister forces at work. In the memorial essay collection John W. Campbell: An Australian Tribute, the prominent fan Redd Boggs wrote: “What indulgent publisher is this who puts out and puts up with Campbell’s personal little journal, his fanzine?…One was astounded to see the magazine plunge along as hardily as ever after Condé Nast and Samuel I. Newhouse swallowed up and digested Street & Smith.” He went on to answer his own question:
We are making a mistake when we think of Analog as a science fiction magazine and of John W. Campbell as an editor. The financial backer or backers of Analog obviously do not think that way. They regard Analog first and foremost as a propaganda mill for the right wing, and Campbell as a propagandist of formidable puissance and persuasiveness. The stories, aside from those which echo Campbell’s own ideas, are only incidental to the magazine, the bait that lures the suckers. Analog’s raison d’être is Campbell’s editorials. If Campbell died, retired, or backslid into rationality, the magazine would fold instantly…
Campbell is a precious commodity indeed, a clever and indefatigable propagandist for the right wing, much superior in intelligence and persuasive powers to, say, William F. Buckley, and he works for bargain basement prices at that. And if our masters are as smart as I think they are…I feel sure that they would know how to cherish such heaven-sent gifts, even as I would.
This is an ingenious argument, and I almost want to believe it, if only because it makes science fiction seem as important as it likes to see itself. In reality, it seems likely that Si Newhouse barely thought about Analog at all, which isn’t to say that he wasn’t aware of it. His Times obituary notes: “He claimed to read every one of his magazines—they numbered more than fifteen—from cover to cover.” This conjures up the interesting image of Newhouse reading the first installment of Dune and the latest update on the Dean Drive, although it’s hard to imagine that he cared. Campbell—who must have existed as a wraith in the peripheral vision of Diana Vreeland of Vogue, who worked in the same building for nearly a decade—was allowed to run the magazine on his own, and it was tolerated as long as it remained modestly profitable. Newhouse’s own interests ran less to science fiction than toward what David Remnick describes as “gangster pictures, romantic comedies, film noir, silent comedies, the avant-garde.” (He did acquire Wired, but his most profound impact on our future was one that nobody could have anticipated—it was his idea to publish Donald Trump’s The Art of the Deal.) When you love science fiction, it can seem like nothing else matters, but it hardly registers in the life of someone like Newhouse. We don’t know what Campbell thought of him, but I suspect that he wished that they had been closer. Campbell wanted nothing more than to bring his notions, like psionics, to a wider audience, and he spent the last decade of his career with a publishing magnate within view but tantalizingly out of reach—and his name was even “Psi.”
The search for the zone
Note: This post discusses plot points from Twin Peaks.
Last night’s episode of Twin Peaks featured the surprise return of Bill Hastings, the high school principal in Buckhorn, South Dakota who is somehow connected to the headless body of Major Garland Briggs. We hadn’t seen Hastings, played by Matthew Lillard, since the season premiere, and his reappearance marked one of the first times that the show has gone back to revisit an earlier subplot. Hastings, we’re told, maintained a blog called The Search for the Zone, in which he chronicled his attempts to contact other planes of reality, and the site really exists, of course, in the obligatory manner of such online ephemera as Save Walter White and the defunct What Badgers Eat. It’s a marketing impulse that seems closer to Mark Frost than David Lynch—if either of them were even involved—and I normally wouldn’t even mention it at all. Along with its fake banner ads and retro graphics, however, the page includes a section titled “Heinlein Links,” with a picture of Robert A. Heinlein and a list of a few real sites, including my friends over at The Heinlein Society. As “Hastings” writes: “Science Fiction has been a source of enjoyment for me since I was ten years old, when I read Orphans of the Sky.” Frankly, this already feels like a dead end, and, like the references to L. Ron Hubbard and Jack Parsons in The Secret History of Twin Peaks, it recalls some of the show’s least intriguing byways. (Major Briggs and the villainous Windom Earle, you might recall, were involved in Project Blue Book, the study of unidentified flying objects conducted by the Air Force, but the thread didn’t really lead anywhere, except perhaps to set off a train of thought for Chris Carter.) I enjoyed last night’s episode, but it was the most routine installment of the season so far, and this attempt at virality might be the most conventional touch of all. But since this might represent the only time in which my love of Twin Peaks will overlap with my professional interests, I should probably dig into it.
Orphans of the Sky, which was originally published as the two novellas “Universe” and “Common Sense” in Astounding Science Fiction in 1941, is arguably the most famous treatment of one of the loveliest ideas in science fiction—the generation starship, a spacecraft designed to travel for centuries or millennia until it reaches its destination. (Extra points if the passengers forget that they’re on a spaceship at all.) It’s also one of the few stories by Heinlein that can be definitively traced back to an idea provided by the editor John W. Campbell. On September 20, 1940, Campbell wrote to Heinlein with a detailed outline of the premise:
Sometime along about 3763, an expedition is finally launched from Earth to outer space—and I mean outer space…[The ship is] five miles in diameter, intended for about two thousand inhabitants, and equipped with gardens, pasturage, etc., for animals. It’s a self-sustaining economy…They’re bound for Alpha Centauri at a gradually building speed…The instruments somehow develop a systematic error, due to imperfect compensation for the rotation; they miss Centauri, plunging past it too rapidly and too far away to make landing. A brief revolt leads to the death of the few men aboard fully competent to make the necessary changes of mechanism for changing course and backtracking to Centauri. The ship can only plunge on.
But the story would be laid somewhere about 1430 After the Beginning. The characters are the remote descendants of those who took off, centuries before, from Earth. And they’re savages. The High Chiefs are the priest-engineers, who handle the small amount of non-automatic machinery…There are princes and nobles—and dull peasants. There are monsters, too, who are usually killed at birth, since every woman giving birth is required to present her baby before an inspector. That’s because of mutations, some of which are unspeakably hideous. One of which might, however, be a superman, and the hero of the story.
If you’ve read “Universe,” you can see that Campbell laid out most of it here, and that Heinlein used nearly all of it, down to the smallest details, although he later played down the extent of Campbell’s influence. (Decades later, in the collection Expanded Universe, Heinlein flatly, and falsely, stated that the unrelated serial Sixth Column “was the only story of mine ever influenced to any marked degree by John W. Campbell, Jr.”) But the two men also chose to emphasize different aspects of the narrative, in ways that reflected their interests and personalities. Most of Campbell’s letter, when it wasn’t talking about the design of the spacecraft itself, was devoted to the idea of the “scientisthood,” or a religion founded on a misinterpretation of science:
They’ve lost science, save for the priest class, who study it as a religion, and horribly misunderstand it because they learn from books written by and for people who dwelt on a planet near a sun. Here, the laws of gravity are meaningless, astronomy senseless, most of science purely superstition from a forgotten time. Naturally, there was a religious schism, a building-up of a new bastard science-religion that based itself on a weird and unnatural blending of the basic laws of science and the basic facts of their own experience…Anything is possible, and might be darned interesting. Particularly the queer, fascinating system of science-religion and so forth they’d have to live by.
The idea of a religion based on a misreading of the textbook Basic Modern Physics is a cute inversion of one of Campbell’s favorite plot devices—a fake religion deliberately dreamed up by scientists, which we see in such stories as the aforementioned Sixth Column, Isaac Asimov’s “Bridle and Saddle,” and Fritz Leiber, Jr.’s Gather, Darkness. In “Universe,” Heinlein touches on this briefly, but he was far more interested in the jarring perceptual and conceptual shift that the premise implied, which tied back into his interest in Alfred Korzybski and General Semantics: how do you come to terms with the realization that the only world you’ve ever known is really a small part of an incomprehensibly vaster reality?
“Universe” is an acknowledged landmark of the genre, although its sequel, “Common Sense,” feels more like work for hire. It isn’t hard to relate it to Hastings, whose last blog post reads in part:
We will have to reconcile with the question that if someone from outside our familiar world gains access to our plane of existence, what ramifications will that entail? There might be forces at work from deep dimensional space, or from the future…or are these one in [sic] the same?
But I’d hesitate to take the Heinlein connection too far. Twin Peaks—and most of David Lynch’s other work—has always asked us to look past the surface of things to an underlying pattern that is stranger than we can imagine, but it has little in common with the kind of cold, slightly dogmatic rationalism that we tend to see in Campbell and early Heinlein. Both men, like Korzybski or even Ayn Rand, claimed that they were only trying to get readers to think for themselves, but in practice, they were markedly impatient of anyone who disagreed with their answers. Lynch and Mark Frost’s brand of transcendence is looser, more dreamlike, and more intuitive, and its insights are more likely to be triggered by a song, the taste of coffee, or a pair of red high heels than by logical analysis. (When the show tries to lay out the pieces in a more systematic fashion, as it did last night, it doesn’t always work.) But there’s something to be said for the idea that beyond our familiar world, there’s an objective reality that would be blindingly obvious if we only managed to see it. With all the pop cultural baggage carried by Twin Peaks, it’s easy to forget that it’s also from the director and star of Dune, which took the opposite approach, with a unified past and future visible to the superhuman Kwisatz Haderach. Yet Lynch’s own mystical inclinations are more modest and humane, and neither Heinlein nor Frank Herbert has much in common with the man whose favorite psychoactive substances have always been coffee and cigarettes. And I’d rather believe in a world in which the owls are not what they seem than one in which nothing at all is what it seems. But there’s one line from “Universe” that can serve as a quiet undertone to much of Lynch’s career, and I’d prefer to leave it there: “He knew, subconsciously, that, having seen the stars, he would never be happy again.”
The great scene theory
“The history of the world is but the biography of great men,” Thomas Carlyle once wrote, and although this statement was criticized almost at once, it accurately captures the way many of us continue to think about historical events, both large and small. There’s something inherently appealing about the idea that certain exceptional personalities—Alexander the Great, Julius Caesar, Napoleon—can seize and turn the temper of their time, and we see it today in attempts to explain, say, the personal computing revolution through the life of someone like Steve Jobs. The alternate view, which was expressed forcefully by Herbert Spencer, is that history is the outcome of impersonal social and economic forces, in which a single man or woman can do little more than catalyze trends that are already there. If Napoleon had never lived, the theory goes, someone very much like him would have taken his place. It’s safe to say that any reasonable view of history has to take both theories into account: Napoleon was extraordinary in ways that can’t be fully explained by his environment, even if he was inseparably a part of it. But it’s also worth remembering that much of our fascination with such individuals arises from our craving for narrative structures, which demand a clear hero or villain. (The major exception, interestingly, is science fiction, in which the “protagonist” is often humanity as a whole. And the transition from the hard science fiction of the golden age to messianic stories like Dune, in which the great man reasserts himself with a vengeance, is a critical turning point in the genre’s development.)
You can see a similar divide in storytelling, too. One school of thought implicitly assumes that a story is a delivery system for great scenes, with the rest of the plot serving as a scaffold to enable a handful of awesome moments. Another approach sees a narrative as a series of small, carefully chosen details designed to create an emotional effect greater than the sum of its parts. When it comes to the former strategy, it’s hard to think of a better example than Game of Thrones, a television series that often seems to be marking time between high points: it can test a viewer’s patience, but to the extent that it works, it’s because it constantly promises a big payoff around the corner, and we can expect two or three transcendent set pieces per season. Mad Men took the opposite tack: it was made up of countless tiny but riveting choices that gained power from their cumulative impact. Like the theories of history I mentioned above, neither type of storytelling is necessarily correct or complete in itself, and you’ll find plenty of exceptions, even in works that seem to fall clearly into one category or the other. It certainly doesn’t mean that one kind of story is “better” than the other. But it provides a useful way to structure our thinking, especially when we consider how subtly one theory shades into the other in practice. The director Howard Hawks famously said that a good movie consisted of three great scenes and no bad scenes, which seems like a vote for the Game of Thrones model. Yet a great scene doesn’t exist in isolation, and the closer we look at stories that work, the more important those nonexistent “bad scenes” start to become.
I got to thinking about this last week, shortly after I completed the series about my alternative movie canon. Looking back at those posts, I noticed that I singled out three of these movies—The Night of the Hunter, The Limey, and Down with Love—for the sake of one memorable scene. But these scenes also depend in tangible ways on their surrounding material. The river sequence in The Night of the Hunter comes out of nowhere, but it’s also the culmination of a language of dreams that the rest of the movie has established. Terence Stamp’s unseen revenge in The Limey works only because we’ve been prepared for it by a slow buildup that lasts for more than twenty minutes. And Renée Zellweger’s confessional speech in Down with Love is striking largely because of how different it is from the movie around it: the rest of the film is relentlessly active, colorful, and noisy, and her long, unbroken take stands out for how emphatically it presses the pause button. None of the scenes would play as well out of context, and it’s easy to imagine a version of each movie in which they didn’t work at all. We remember them, but only because of the less showy creative decisions that have already been made. And at a time when movies seem more obsessed than ever with “trailer moments” that can be spliced into a highlight reel, it’s important to honor the kind of unobtrusive craft required to make a movie with no bad scenes. (A plot that consists of nothing but high points can be exhausting, and a good story both delivers on the obvious payoffs and maintains our interest in the scenes when nothing much seems to be happening.)
Not surprisingly, writers have spent a lot of time thinking about these issues, and it’s noteworthy that one of the most instructive examples comes from Leo Tolstoy. War and Peace is nothing less than an extended criticism of the great man theory of history: Tolstoy brings Napoleon onto the scene expressly to emphasize how insignificant he actually is, and the novel concludes with a lengthy epilogue in which the author lays out his objections to how history is normally understood. History, he argues, is a pattern that emerges from countless unobservable human actions, like the sum of infinitesimals in calculus, and because we can’t see the components in isolation, we have to content ourselves with figuring out the laws of their behavior in the aggregate. But of course, this also describes Tolstoy’s strategy as a writer: we remember the big set pieces in War and Peace and Anna Karenina, but they emerge from the diligent, seemingly impersonal collation of thousands of tiny details, recorded with what seems like a minimum of authorial interference. (As Victor Shklovsky writes: “[Tolstoy] describes the object as if he were seeing it for the first time, an event as if it were happening for the first time.”) And the awesome moments in his novels gain their power from the fact that they arise, as if by historical inevitability, from the details that came before them. Anna Karenina was still alive at the end of the first draft, and it took her author a long time to reconcile himself to the tragic climax toward which his story was driving him. Tolstoy had good reason to believe that great scenes, like great men, are the product of invisible forces. But it took a great writer to see this.
Quote of the Day
For Dune, I also used what I call a “camera position” method—playing back and forth (and in varied orders depending on the required pace) between long shot, medium, closeup, and so on…The implications of color, position, word root, and prosodic suggestion—all are taken into account when a scene has to have maximum impact. And what scene doesn’t if a book is tightly written?
“Make it recognizable!”
I’ve mentioned before how David Mamet’s little book On Directing Film rocked my world at a time when I thought I’d already figured out storytelling to my own satisfaction. It provides the best set of tools for constructing a plot I’ve ever seen, and to the extent that I can call any book a writer’s secret weapon, this is it. But I don’t think I’ve ever talked about the moment when I realized how powerful Mamet’s advice really is. The first section of the book is largely given over to a transcript of one of the author’s seminars at Columbia, in which the class breaks down the beats of a simple short film: a student approaches a teacher to request a revised grade. The crucial prop in the scene, which is told entirely without dialogue, is the student’s notebook, its contents unknown—and, as Mamet points out repeatedly, unimportant. Then he asks:
Mamet: What answer do we give to the prop person who says “what’s the notebook look like?” What are you going to say?
The students respond with a number of suggestions: put a label on it, make it look like a book report, make it look “prepared.” Mamet shoots them down one by one, saying that they’re things that the audience can’t be expected to care about, if they aren’t intrinsically impossible:
Mamet: No, you can’t make the book look prepared. You can make it look neat. That might be nice, but that’s not the most important thing for your answer to the prop person…To make it prepared, to make it neat, to make it convincing, the audience ain’t going to notice. What are they going to notice?
Student: That it’s the same book they’ve seen already.
Mamet: So what’s your answer to the prop person?
Student: Make it recognizable.
Mamet: Exactly so! Good. You’ve got to be able to recognize it. That is the most important thing about this report. This is how you use the principle of throughline to answer questions about the set and to answer questions about the costumes.
Now, this might seem like a small thing, but to me, this was an unforgettable moment: it was a powerful illustration of how close attention to the spine of the plot—the actions and images you use to convey the protagonist’s sequence of objectives—can result in immediate, practical answers to seemingly minor story problems, as long as you’re willing to rigorously apply the rules. “Make it recognizable,” in particular, is a rule whose true value I’ve only recently begun to understand. In writing a story, regardless of the medium, you only have a finite number of details that you can emphasize, so it doesn’t hurt to focus on ones that will help the reader recognize and remember important elements—a character, a prop, an idea—when they recur over the course of the narrative. Mamet notes that you can’t expect a viewer to read signs or labels designed to explain what isn’t clear in the action, and it took me a long time to see that this is equally true of the building blocks of fiction: if the reader needs to pause to remember who a character is or where a certain object has appeared before, you haven’t done your job as well as you could.
And like the instructions a director gives to the prop department, this rule translates into specific, concrete actions that a writer can take to keep the reader oriented. It’s why I try to give my characters names that can be readily distinguished from one another, to the point where I’ll often try to give each major player a name that begins with a different letter. This isn’t true to life, where, as James Wood points out, we’re likely to know three people named John and three more named Elizabeth, but it’s a useful courtesy to the reader. The same applies to other entities within the story: it can be difficult to keep track of the alliances in a novel like Dune, but Frank Herbert helps us tremendously by giving the families distinctive names like House Atreides and House Harkonnen. (Try to guess which house contains all the bad guys.) This is also why it’s useful to give minor characters some small characteristic to lock them in the reader’s mind: we may not remember that we’ve met Robert in Chapter 3 when he returns in Chapter 50, but we’ll recall his bristling eyebrows. Nearly every choice a writer makes should be geared toward making these moments of recognition as painless as possible, without the need for labels. As Mamet says: “The audience doesn’t want to read a sign; they want to watch a motion picture.” And to be told a story.
Why hobbits need to be short
It’s never easy to adapt a beloved novel for the screen. On the one hand, you have a book that has been widely acclaimed as one of the greatest works of speculative fiction of all time, with a devoted fanbase and an enormous invented backstory spread across many novels and appendices. On the other, you have a genius director who moved on from his early, bizarre, low-budget features to a triumphant mainstream success with multiple Oscar nominations, but whose skills as a storyteller have sometimes been less reliable than his unquestioned visual talents. The result, after a protracted development process clouded by rights issues, financial difficulties, and the departure of the previous director, is an overlong movie with too many characters that fails to capture the qualities that drew people to this story in the first place. By trying to appease fans of the book while also drawing in new audiences, it ends up neither here nor there. While it’s cinematically striking, and has its defenders, it leaves critics mostly cold, with few of the awards or accolades that greeted its director’s earlier work. And that’s why David Lynch had so much trouble with Dune.
But it’s what Lynch did next that is especially instructive. After Dune’s financial failure, he found himself working on his fourth movie under far greater constraints, with a tiny budget and a contractual runtime of no more than 120 minutes. The initial cut ran close to three hours, but eventually, with the help of editor Duwayne Dunham, he got it down to the necessary length, although it meant losing a lot of wonderful material along the way. And what we got was Blue Velvet, which isn’t just Lynch’s best film, but my favorite American movie of all time. I recently had the chance to watch all of the deleted scenes as part of the movie’s release on Blu-ray, and it’s clear that if Lynch had been allowed to retain whatever footage he wanted—as he clearly does these days—the result would have been a movie like Inland Empire: fascinating, important, but ultimately a film that I wouldn’t need to see more than once. The moral, surprisingly enough, is that even a director like Lynch, a genuine artist who has earned the right to pursue his visions wherever they happen to take him, can benefit from the need, imposed by a studio, to cut his work far beyond the level where he might have been comfortable.
Obviously, the case of Peter Jackson is rather different. The Lord of the Rings trilogy was an enormous international success, and did as much as anything to prove that audiences will still sit happily through a movie of more than three hours if the storytelling is compelling enough. As a result, Jackson was able to make The Hobbit: An Unexpected Journey as long as he liked, which is precisely the problem. The Hobbit isn’t a bad movie, exactly; after an interminable first hour, it picks up considerably in the second half, and there are still moments I’m grateful to have experienced on the big screen. Yet I can’t help feeling that if Jackson had felt obliged, either contractually or artistically, to bring it in at under two hours, it would have been vastly improved. This would have required some hard choices, but even at a glance, there are entire sequences here that never should have made it past a rough cut. As it stands, we’re left with a meandering movie that trades largely on our affection for the previous trilogy—its actors, its locations, its music. And if this had been the first installment of a series, it’s hard to imagine it making much of an impression on anyone. Indeed, it might have justified all our worst fears about a cinematic adaptation of Tolkien.
And the really strange thing is that Jackson has no excuse. For one thing, it isn’t the first time he’s done this: I loved King Kong, but I still feel that it would have been rightly seen as a game changer on the level of Avatar if he’d cut it by even twenty minutes. And unlike David Lynch and Blue Velvet, whose deleted scenes remained unseen for decades before being miraculously rediscovered, Jackson knows that even if he has to cut a sequence he loves, he has an audience of millions that will gladly purchase the full extended edition within a year of the movie’s release. But it takes a strong artistic will to accept such constraints if they aren’t being imposed from the outside, and to acknowledge that sometimes an arbitrary limit is exactly what you need to force yourself to make those difficult choices. (My own novels are contractually required to come in somewhere around 100,000 words, and although I’ve had to cut them to the bone to get there, they’ve been tremendously improved by the process, to the point where I intend to impose the same limit on everything I ever write.) The Hobbit has two more installments to go, and I hope Jackson takes the somewhat underwhelming critical and commercial response to the first chapter to heart. Because an unwillingness to edit your work is a hard hobbit to break.
So what happened to John Carter?
In recent years, the fawning New Yorker profile has become the Hollywood equivalent of the Sports Illustrated cover—a harbinger of bad times to come. It isn’t hard to figure out why: both are awarded to subjects who have just reached the top of their game, which often foreshadows a humbling crash. Tony Gilroy was awarded a profile after the success of Michael Clayton, only to follow it up with the underwhelming Duplicity. For Steve Carell, it was Dinner for Schmucks. For Anna Faris, it was What’s Your Number? And for John Lasseter, revealingly, it was Cars 2. The latest casualty is Andrew Stanton, whose profile, which I discussed in detail last year, now seems laden with irony, as well as an optimism that reads in retrospect as whistling in the dark. “Among all the top talent here,” a Pixar executive is quoted as saying, “Andrew is the one who has a genius for story structure.” And whatever redeeming qualities John Carter may have, story structure isn’t one of them. (The fact that Stanton claims to have closely studied the truly awful screenplay for Ryan’s Daughter now feels like an early warning sign.)
If nothing else, the making of John Carter will provide ample material for a great case study, hopefully along the lines of Julie Salamon’s classic The Devil’s Candy. There are really two failures here, one of marketing, another of storytelling, and even the story behind the film’s teaser trailer is fascinating. According to Vulture’s Claude Brodesser-Akner, a series of lost battles and miscommunications led to the release of a few enigmatic images devoid of action and scored, in the manner of an Internet fan video, with Peter Gabriel’s dark cover of “My Body Is a Cage.” And while there’s more to the story than this—I actually found the trailer quite evocative, and negative responses to early marketing materials certainly didn’t hurt Avatar—it’s clear that this was one of the most poorly marketed tentpole movies in a long time. It began with the inexplicable decision to change the title from John Carter of Mars, on the assumption that women are turned off by science fiction, while making no attempt to lure in female viewers with the movie’s love story or central heroine, or even to explain who John Carter is. This is what happens when a four-quadrant marketing campaign goes wrong: when you try to please everybody, you please no one.
And the same holds true of the movie itself. While the story itself is fairly clear, and Stanton and his writers keep us reasonably grounded in the planet’s complex mythology, we’re never given any reason to care. Attempts to engage us with the central characters fall curiously flat: to convey that Princess Dejah is smart and resourceful, for example, the film shows her inventing the Barsoomian equivalent of nuclear power, evidently in her spare time. John Carter himself is a cipher. And while some of these problems might have been solved by miraculous casting, the blame lands squarely on Stanton’s shoulders. Stanton clearly loves John Carter, but forgets to persuade us to love him as well. What John Carter needed, more than anything else, was a dose of the rather stark detachment that I saw in Mission: Impossible—Ghost Protocol, as directed by Stanton’s former Pixar colleague Brad Bird. Bird clearly had no personal investment in the franchise, except to make the best movie he possibly could. John Carter, by contrast, falls apart on its director’s passion and good intentions, as well as a creative philosophy that evidently works in animation, but not live action. As Stanton says of Pixar:
We’re in this weird, hermetically sealed freakazoid place where everybody’s trying their best to do their best—and the films still suck for three out of the four years it takes to make them.
Which only makes us wonder what might have happened if John Carter had been granted a fourth year.
Stanton should take heart, however. If there’s one movie that John Carter calls to mind, it’s Dune, another financial and critical catastrophe that was doomed—as much as I love it—by fidelity to its source material. (In fact, if you take Roger Ebert’s original review of Dune, which came out in 1985, and replace the relevant proper names, you end up with something remarkably close to a review of John Carter: “Actors stand around in ridiculous costumes, mouthing dialogue with little or no context.”) Yet its director not only recovered, but followed it up with my favorite movie ever made in America. Failure, if it results in another chance, can be the opposite of the New Yorker curse. And while Stanton may not be David Lynch, he’s not without talent: the movie’s design is often impressive, especially its alien effects, and it displays occasional flashes of wit and humor that remind us of what Stanton can do. John Carter may go on record as the most expensive learning experience in history, and while this may be cold comfort to Disney shareholders, it’s not bad for the rest of us, as long as Stanton gets his second chance. Hopefully far away from the New Yorker.