Posts Tagged ‘Disney’
Inside the sweatbox
Yesterday, I watched a remarkable documentary called The Sweatbox, which belongs on the short list of films—along with Hearts of Darkness and the special features for The Lord of the Rings—that I would recommend to anyone who ever thought that it might be fun to work in the movies. It was never officially released, but a copy occasionally surfaces on YouTube, and I strongly suggest watching the version available now before it disappears yet again. For the first thirty minutes or so, it plays like a standard featurette of the sort that you might have found on the second disc of a home video release from two decades ago, which is exactly what it was supposed to be. Its protagonist, improbably, is Sting, who was approached by Disney in the late nineties to compose six songs for a movie titled Kingdom of the Sun. (One of the two directors of the documentary is Sting’s wife, Trudie Styler, a producer whose other credits include Lock, Stock and Two Smoking Barrels and Moon.) The feature was conceived by animator Roger Allers, who was just coming off the enormous success of The Lion King, as a mixture of Peruvian mythology, drama, mysticism, and comedy, with a central plot lifted from The Prince and the Pauper. After two years of production, the work in progress was screened for the first time for studio executives. As always, the atmosphere was tense, but no more than usual, and it inspired the standard amount of black humor from the creative team. As one artist jokes nervously before the screening: “You don’t want them to come in and go, ‘Oh, you know what, we don’t like that idea of the one guy looking like the other guy. Let’s get rid of the basis of the movie.’ This would be a good time for them to tell us.”
Of course, that’s exactly what happened. The top brass at Disney hated the movie, production was halted, and Allers left the project, which was ultimately retooled into The Emperor’s New Groove, reusing much of the design work and finished animation while tossing out entire characters—along with most of Sting’s songs—and introducing new ones. It’s a story that has fascinated me ever since I first heard about it, around the time of the movie’s initial release, and I’m excited beyond words that The Sweatbox even exists. (The title of the documentary, which was later edited down to an innocuous special feature for the DVD, refers to the room at the studio in Burbank in which rough work is screened.) And while the events that it depicts are extraordinary, they represent only an extreme case of the customary process at Disney and Pixar, at least if you believe the way that the studio likes to talk about itself. In a profile that ran a while back in The New Yorker, the director Andrew Stanton expressed it in terms that I’ve never forgotten:
“We spent two years with Eve getting shot in her heart battery, and Wall-E giving her his battery, and it never worked. Finally—finally—we realized he should lose his memory instead, and thus his personality…We’re in this weird, hermetically sealed freakazoid place where everybody’s trying their best to do their best—and the films still suck for three out of the four years it takes to make them.”
This statement appeared in print six months before the release of Stanton’s live-action debut, John Carter, which implies that this method is far from infallible. And the drama behind The Emperor’s New Groove was unprecedented even by the studio’s relentless standards. As executive Thomas Schumacher says at one point: “We always say, Oh, this is normal. [But] we’ve never been through this before.”
As it happens, I watched The Sweatbox shortly after reading an autobiographical essay by the artist Cassandra Smolcic about her experiences in the “weird, hermetically sealed freakazoid” environment of Pixar. It’s a long read, but riveting throughout, and it makes it clear that the issues at the studio went far beyond the actions of John Lasseter. And while I could focus on any number of details or anecdotes, I’d like to highlight one section, about the firing of director Brenda Chapman halfway through the production of Brave:
Curious about the downfall of such an accomplished, groundbreaking woman, I began taking the company pulse soon after Brenda’s firing had been announced. To the general population of the studio — many of whom had never worked on Brave because it was not yet in full-steam production — it seemed as though Brenda’s firing was considered justifiable. Rumor had it that she had been indecisive, unconfident and ineffective as a director. But for me and others who worked closely with the second-time director, there was a palpable sense of outrage, disbelief and mourning after Brenda was removed from the film. One artist, who’d been on the Brave story team for years, passionately told me how she didn’t find Brenda to be indecisive at all. Brenda knew exactly what film she was making and was very clear in communicating her vision, the story artist said, and the film she was making was powerful and compelling. “From where I was sitting, the only problem with Brenda and her version of Brave was that it was a story told about a mother and a daughter from a distinctly female lens,” she explained.
Smolcic adds: “During the summer of 2009, I personally worked on Brave while Brenda was still in charge. I likewise never felt that she was uncertain about the kind of film she was making, or how to go about making it.”
There are obvious parallels between what happened to Allers and to Chapman, which might seem to undercut the notion that the latter’s firing had anything to do with the fact that she was a woman. But there are a few other points worth raising. One is that no one seems to have applied the words “indecisive, unconfident, and ineffective” to Allers, who voluntarily left the production after his request to push back the release date was denied. And if The Sweatbox is any indication, the situation of women and other historically underrepresented groups at Disney during this period was just as bad as it was at Pixar—I counted exactly one woman who speaks onscreen, for less than fifteen seconds, and all the other faces that we see are white and male. (After Sting expresses concern about the original ending of The Emperor’s New Groove, in which the rain forest is cut down to build an amusement park, an avuncular Roy Disney confides to the camera: “We’re gonna offend somebody sooner or later. I mean, it’s impossible to do anything in the world these days without offending somebody.” Which betrays a certain nostalgia for a time when no one, apparently, was offended by anything that the studio might do.) One of the major players in the documentary is Thomas Schumacher, the head of Disney Animation, who has since been accused of “explicit sexual language and harassment in the workplace,” according to a report in the Wall Street Journal. In the footage that we see, Schumacher and fellow executive Peter Schneider don’t come off particularly well, which may just be a consequence of the perspective from which the story is told. But it’s equally clear that the mythical process that allows such movies to “suck” for three out of four years is only practicable for filmmakers who look and sound like their counterparts on the other side of the sweatbox, which grants them the creative freedom to try and fail repeatedly—a luxury that is rarely extended to women.
What happened to Allers on Kingdom of the Sun is still astounding. But it might be even more noteworthy that he survived for as long as he did.
The secret villain
Note: This post alludes to a plot point from Pixar’s Coco.
A few years ago, after Frozen was first released, The Atlantic ran an essay by Gina Dalfonzo complaining about the moment—fair warning for a spoiler—when Prince Hans was revealed to be the film’s true villain. Dalfonzo wrote:
That moment would have wrecked me if I’d seen it as a child, and the makers of Frozen couldn’t have picked a more surefire way to unsettle its young audience members…There is something uniquely horrifying about finding out that a person—even a fictional person—who’s won you over is, in fact, rotten to the core. And it’s that much more traumatizing when you’re six or seven years old. Children will, in their lifetimes, necessarily learn that not everyone who looks or seems trustworthy is trustworthy—but Frozen’s big twist is a needlessly upsetting way to teach that lesson.
Whatever you might think of her argument, it’s obvious that Disney didn’t buy it. In fact, the twist in question—in which a seemingly innocuous supporting character is exposed in the third act as the real bad guy—has appeared so monotonously in the studio’s recent movies that I was already complaining about it a year and a half ago. By my count, the films that fall back on this convention include not just Frozen, but Wreck-It Ralph, Zootopia, and now the excellent Coco, which implies that the formula is spilling over from its parent studio to Pixar. (To be fair, it goes at least as far back as Toy Story 2, but it didn’t become the equivalent of the house style until about six or seven years ago.)
This might seem like a small point of storytelling, but it interests me, both because we’ve been seeing it so often and because it’s very different from the stock Disney approach of the past, in which the lines between good and evil were clearly demarcated from the opening frame. In some ways, it’s a positive development—among other things, it means that characters are no longer defined primarily by their appearance—and it may just be a natural instance of a studio returning repeatedly to a trick that has worked in the past. But I can’t resist a more sinister reading. All of the examples that I’ve cited come from the period since John Lasseter took over as the chief creative officer of Disney Animation Studios, and as we’ve recently learned, he wasn’t entirely what he seemed, either. A Variety article recounts:
For more than twenty years, young women at Pixar Animation Studios have been warned about the behavior of John Lasseter, who just disclosed that he is taking a leave due to inappropriate conduct with women. The company’s cofounder is known as a hugger. Around Pixar’s Emeryville, California, offices, a hug from Lasseter is seen as a mark of approval. But among female employees, there has long been widespread discomfort about Lasseter’s hugs and about the other ways he showers attention on young women…“Just be warned, he likes to hug the pretty girls,” [a former employee] said she was told. “He might try to kiss you on the mouth.” The employee said she was alarmed by how routine the whole thing seemed. “There was kind of a big cult around John,” she says.
And a piece in The Hollywood Reporter adds: “Sources say some women at Pixar knew to turn their heads quickly when encountering him to avoid his kisses. Some used a move they called ‘the Lasseter’ to prevent their boss from putting his hands on their legs.”
Of all the horror stories that have emerged lately about sexual harassment by men in power, this is one of the hardest for me to read, and it raises troubling questions about the culture of a company that I’ve admired for a long time. (Among other things, it sheds a new light on the Pixar motto, as expressed by Andrew Stanton, that I’ve quoted here before: “We’re in this weird, hermetically sealed freakazoid place where everybody’s trying their best to do their best—and the films still suck for three out of the four years it takes to make them.” But it also goes without saying that it’s far easier to fail repeatedly on your way to success if you’re a white male who fits a certain profile. And these larger cultural issues evidently contributed to the departure from the studio of Rashida Jones and her writing partner.) It also makes me wonder a little about the movies themselves. After the news broke about Lasseter, there were comments online about his resemblance to Lotso in Toy Story 3, who announces jovially: “First thing you gotta know about me—I’m a hugger!” But the more I think about it, the more this seems like a bona fide inside joke about a situation that must have been widely acknowledged. As a recent article in Deadline reveals:
[Lasseter] attended some wrap parties with a handler to ensure he would not engage in inappropriate conduct with women, say two people with direct knowledge of the situation…Two sources recounted Lasseter’s obsession with the young character actresses portraying Disney’s Fairies, a product line built around the character of Tinker Bell. At the animator’s insistence, Disney flew the women to a New York event. One Pixar employee became the designated escort as Lasseter took the young women out drinking one night, and to a party the following evening. “He was inappropriate with the fairies,” said the former Pixar executive, referring to physical contact that included long hugs. “We had to have someone make sure he wasn’t alone with them.”
Whether or not the reference in Toy Story 3 was deliberate—the script is credited to Michael Arndt, based on a story by Lasseter, Stanton, and Lee Unkrich, and presumably with contributions from many other hands—it must have inspired a few uneasy smiles of recognition at Pixar. And its emphasis on seemingly benign figures who reveal an unexpected dark side, including Lotso himself, can easily be read as an expression, conscious or otherwise, of the tensions between Lasseter’s public image and his long history of misbehavior. (I’ve been thinking along similar lines about Kevin Spacey, whose “sheer meretriciousness” I identified a long time ago as one of his most appealing qualities as an actor, and of whom I once wrote here: “Spacey always seems to be impersonating someone else, and he does the best impersonation of a great actor that I’ve ever seen.” And it seems now that this calculated form of pretending amounted to a way of life.) Lasseter’s influence over Pixar and Disney is so profound that it doesn’t seem farfetched to see its films both as an expression of his internal divisions and as a reflection of the reactions of those around him, and you don’t need to look far for parallel examples. My daughter, as it happens, knows exactly who Lasseter is—he’s the big guy in the Hawaiian shirt who appears at the beginning of all of her Hayao Miyazaki movies, talking about how much he loves the film that we’re about to see. I don’t doubt that he does. But not only do Miyazaki’s greatest films lack villains entirely, but the twist generally runs in the opposite direction, in which a character who initially seems forbidding or frightening is revealed to be kinder than you think. Simply on the level of storytelling, I know which version I prefer. Under Lasseter, Disney and Pixar have produced some of the best films of recent decades, but they also have their limits.
And it only stands to reason that these limitations might have something to do with the man who was more responsible than anyone else for bringing these movies to life.
The tentpole test
How do you release blockbusters like clockwork and still make each one seem special? It’s an issue that the movie industry is anxious to solve, and there’s a lot riding on the outcome. When I saw The Phantom Menace nearly two decades ago, there was an electric sense of excitement in the theater: we were pinching ourselves over the fact that we were about to see the opening crawl for a new Star Wars movie on the big screen. That air of expectancy diminished for the two prequels that followed, and not only because they weren’t very good. There’s a big difference, after all, between the accumulated anticipation of sixteen years and the kind that builds when the installments are only a few years apart. The decade that elapsed between Revenge of the Sith and The Force Awakens was enough to ramp it up again, as if fan excitement were a battery that recovers some of its charge after it’s allowed to rest for a while. In the past, when we’ve watched a new chapter in a beloved franchise, our experience hasn’t just been shaped by the movie itself, but by the sudden release of energy that has been bottled up for so long. That kind of prolonged wait can prevent us from honestly evaluating the result—I wasn’t the only one who initially thought that The Phantom Menace had lived up to my expectations—but that isn’t necessarily a mistake. A tentpole picture is named for the support that it offers to the rest of the studio, but it also plays a central role in the lives of fans, which have been going on long before the film starts and will continue after it ends. As Robert Frost once wrote about a different tent, it’s “loosely bound / By countless silken ties of love and thought / To every thing on earth the compass round.”
When you have too many tentpoles coming out in rapid succession, however, the outcome—if I can switch metaphors yet again—is a kind of wave interference that can lead to a weakening of the overall system. On Christmas Eve, I went to see Rogue One, which was preceded by what felt like a dozen trailers. One was for Spider-Man: Homecoming, which left me with a perplexing feeling of indifference. I’m not the only one to observe that the constant onslaught of Marvel movies makes each installment feel less interesting, but in the case of Spider-Man, we actually have a baseline for comparison. Two baselines, really. I can’t defend every moment of the three Sam Raimi films, but there’s no question that each of those movies felt like an event. There was even enough residual excitement lingering after the franchise was rebooted to make me see The Amazing Spider-Man in the theater, and even its sequel felt, for better or worse, like a major movie. (I wonder sometimes if audiences can sense the pressure when a studio has a lot riding on a particular film: even a mediocre movie can seem significant if a company has tethered all its hopes to it.) Spider-Man: Homecoming, by contrast, feels like just one more component in the Marvel machine, and not even a particularly significant one. It has the effect of diminishing a superhero who ought to be at the heart of any universe in which he appears, relegating one of the two or three most successful comic book characters of all time to a supporting role in a larger universe. And because we still remember how central he was to no fewer than two previous franchises, it feels like a demotion, as if Spider-Man were an employee who left the company, came back, and now reports to Iron Man.
It isn’t that I’m all that emotionally invested in the future of Spider-Man, but it’s a useful case study for what it tells us about the pitfalls of these films, which can take something that once felt like a milestone and reduce it to a midseason episode of an ongoing television series. What’s funny, of course, is that the attitude we’re now being asked to take toward these movies is actually closer to the way in which they were originally conceived. The word “episode” is right there in the title of every Star Wars movie, which George Lucas saw as an homage to classic serials, with one installment following another on a weekly basis. Superhero films, obviously, are based on comic books, which are cranked out by the month. The fact that audiences once had to wait for years between movies may turn out to have been a historical artifact caused by technological limitations and corporate inertia. Maybe the logical way to view these films is, in fact, in semiannual installments, as younger viewers are no doubt growing up to expect. In years to come, the extended gaps between these movies in prior decades will seem like a structural quirk, rather than an inherent feature of how we relate to them. This transition may not be as meaningful as, say, the shift from silent films to the talkies, but it implies a similar change in the way we relate to the film onscreen. Blockbusters used to be released with years of anticipation baked into the response from moviegoers, which is no longer something that can be taken for granted. It’s a loss, in its way, to fan culture, which had to learn how to sustain itself during the dry periods between films, but it also implies that the movies themselves face a new set of challenges.
To be fair, Disney, which controls both the Marvel and Star Wars franchises, has clearly thought a lot about this problem, and they’ve hit on approaches that seem to work pretty well. With the Marvel Universe, this means pitching most of the films at a level at which they’re just good enough, but no more, while investing real energy every few years into a movie that is first among equals. This leads to a lot of fairly mediocre installments, but also to the occasional Captain America: Civil War, which I think is the best Marvel movie yet—it pulls off the impossible task of updating us on a dozen important characters while also creating real emotional stakes in the process, which is even more difficult than it looks. Rogue One, which I also liked a lot, takes a slightly different tack. For most of the first half, I was skeptical of how heavily it was leaning on its predecessors, but by the end, I was on board, and for exactly the same reason. This is a movie that depends on our knowledge of the prior films for its full impact, but it does so with intelligence and ingenuity, and there’s a real satisfaction in how neatly it aligns with and enhances the original Star Wars, while also having the consideration to close itself off at the end. (A lot of the credit for this may be due to Tony Gilroy, the screenwriter and unbilled co-director, who pulled off a similar feat when he structured much of The Bourne Ultimatum to take place during gaps in The Bourne Supremacy.) Relying on nostalgia is a clever way to compensate for the reduced buildup between movies, as if Rogue One were drawing on the goodwill that Star Wars built up and hasn’t dissipated, like a flywheel that serves as an uninterruptible power supply. Star Wars isn’t just a tentpole, but a source of energy. And it might just be powerful enough to keep the whole machine running forever.
Moana and the two studios
If the history of animation had a portentous opening voiceover, it would probably say: “In the beginning was the storyboard.” The earliest animated cartoons were short and silent, so it made sense to plan them out as a series of rough thumbnail sketches. Even after they added sound and dialogue and grew longer, the practice survived, which is why so many of the classic Disney movies are so episodic. They weren’t plotted on paper from beginning to end, but conceived as a sequence of set pieces, often with separate teams, and they were planned by artists who thought primarily with a pencil. This approach generated extraordinary visual achievements, but it could also result in movies, like Alice in Wonderland, that were brilliant in their individual components but failed to build to anything more. Later, in the eighties, Disney switched over to a production cycle that was closer to that of a live-action feature, with a traditional screenplay serving as the basis for future development. This led to more coherent stories, and it’s hard to imagine a film like Frozen being written in any other way. But another consequence was a retreat of visual imagination. When the eye no longer comes first, it’s harder for animators to create sequences that push against the boundaries of the medium. Over time, the movies start to look more or less the same, with similar character designs moving through beautifully rendered backgrounds that become ever more photorealistic for no particular reason.
The most heartening development in animation in recent years, which we’ve seen in Inside Out and Zootopia and now Moana, is the movement back toward a kind of animated feature that isn’t afraid to play with how it looks. Inside Out—which I think is the best movie Pixar has ever made—remains the gold standard, a film with a varied, anarchic style and amazing character design that still tells an emotionally effective story. Zootopia is more conventionally structured, but sequences like the chase through Little Rodentia are thrillingly aware of the possibilities of scale. Moana, in turn, may follow all the usual beats, but it’s also more episodic than usual, with self-contained sequences that seem to have been developed for their visual possibilities. I’m thinking, in particular, of the scenes with the pygmy Kakamora pirates and the encounter with Jemaine Clement’s giant coconut crab Tamatoa. You could lift these parts out and replace them with something else, and the rest of the story would be pretty much the same. For most movies, this would be a criticism, but there’s something about the episodic structure that allows animation to flourish, because each scene can be treated as a work of art in itself. Think, for instance, of Pinocchio, and how the plot wanders from Stromboli to Pleasure Island to Monstro almost at whim. If it were made again today, the directors would probably get notes about how they should “establish” Monstro in the first act. But its dreamlike procession of wonders is what we remember the most fondly, and it’s exactly the quality that a conventional script would kill.
The fact that Disney and Pixar are rediscovering this sort of loose, shaggy energy is immensely promising, and I’m not entirely sure how it happened. (It doesn’t seem to be uniformly the case, either: Finding Dory was a lovely movie, but it was plotted to within an inch of its life.) Pinning down the cause becomes even trickier when we remember that all of these movies are in production at the same time. If so many storytelling tricks seem to recur—like the opening scene that shows the protagonist as a child, or the reveal in the third act that an apparently friendly character is really a bad guy—it’s probably because the same people were giving notes or actively engaged in multiple stories for years. Similarly, the move toward episodic structure may be less a conscious decision than the result of an atmosphere of experimentation that has started to permeate the studio. I’d love to think that it might be due to the influence, by way of John Lasseter, of Hayao Miyazaki, who thinks naturally in the language of dreams. The involvement of strong songwriters like Robert Lopez, Kristen Anderson-Lopez, and Lin-Manuel Miranda may also play a part: when you’ve got a great song at the heart of a scene, you’re more likely to think of visuals that rise to the level of the music. Another factor may be the rise of filmmakers, like Moana producer Osnat Shurer, who came up through the ranks in the Pixar shorts, which are more willing to take stylistic risks. Put them all together with veteran directors like Ron Clements and John Musker, and you’ve got a recipe for self-contained scenes that push the envelope within a reliable formula.
But the strongest possibility of all, I think, is that we’re seeing what happens when the Pixar and Disney teams begin to work side by side. It’s been exactly ten years since Pixar was acquired by its parent company, which is just about the right amount of time for a cultural exchange to become consistently visible onscreen. The two divisions seem as if they’re trying to outdo each other, and the most obvious way is to come up with visually stunning sequences. This kind of competition will naturally manifest itself on the visual end: it’s hard for two teams of writers to keep an eye on each other, and any changes to the story won’t be visible until the whole thing is put together, while it’s likely that every animator has a good idea of what everybody else is doing. (Pixar headquarters itself was designed to encourage an organic exchange of ideas, and while it’s a long drive from Emeryville to Burbank, even that distance might be a good thing—it allows the divisions to compete on the basis of finished scenes, rather than works in progress.) It isn’t a foolproof method, and there will inevitably come a day when one studio or the other won’t overcome the crisis that seems to befall every animated feature halfway through production. But if you wanted to come up with a system that would give animators an incentive to innovate within the structure of a decent script, it’s hard to imagine a better one. You’ve got separate teams of animators trying to top each other, as they did on Alice, and a parent studio that has figured out how to make those episodes work as part of a story. That’s a great combination. And I can’t wait to see what they do next.
Zootopia and the anthropomorphic principle
Note: Mild spoilers follow for Zootopia.
I enjoyed Zootopia one heck of a lot, but the most emphatic recommendation of all came from my daughter, who burst into tears as soon as the movie ended. And it wasn’t because something onscreen had upset her, or even because she was startled by the unstoppable Shakira track that blasts over the closing credits: she was sad because she loved it so much, and now it was over. In fact, she was inconsolable, to the point where I had to carry her into the lobby and reassure her that we would see it again soon. And I’m looking forward to it, as well as to the countless other viewings to follow, which will give us plenty to discuss when she gets older. I plan to talk to her at length about my favorite scene, the chase in Little Rodentia, and how its sudden shifts of scale remind me of animation’s visual possibilities—and how rarely they seem to be utilized. We’ll also dissect the cleverness of the screenplay, which offers up a neat false ending before burrowing deeper into the story’s implications. She can compare it to the Richard Scarry books she reads, and even to Robin Hood. And when she’s ready, I’ll gently point out that this is something like the fourth consecutive Disney movie in which a seemingly innocuous character turns out to be the real bad guy, which makes me think that this trope ought to be retired.
Above all else, we can talk about its message, which, as has been widely noted, is a timely one indeed. And it deserves a lot of credit for this. Most ordinary movies would have been content to settle for the moral that anyone can be anything, or that we should all be a little nicer to one another. A slightly more ambitious film might have reminded us that we shouldn’t judge based on appearances, and it might conceivably have even broached the subject of racial profiling. But Zootopia goes even further, into the implication that there are systems in this world that are set up to benefit—deliberately or otherwise—from institutionalized prejudice. It’s a heady lesson, even if it will mostly affect viewers who were already primed to receive it, like those who cringe a bit when Judy Hopps, a rabbit, praises Nick Wilde, a fox, for being so “articulate.” But you never know. And I think it’s true, as other commentators have pointed out, that the movie is able to go as far as it does because its parts are played by animals. The first trailer took pains to introduce audiences to the concept of anthropomorphism, but it’s an idea that we all intuitively understand, and it’s generally accepted that certain kinds of stories go down more easily when presented in animal form. It’s the reverse of the uncanny valley: we empathize with animals because our minds focus on the points we have in common, a tendency that has been utilized by moralists from Aesop to La Fontaine.
But there’s an even more interesting point to be made here, which is that the anthropomorphism of Zootopia seems to have loosened up the filmmakers themselves. Since we find talking animals in everything from Kung Fu Panda to My Little Pony, it’s a little surprising to realize how rarely it’s been used in its purest form by Disney: Robin Hood is the only other movie from the classic canon—if we don’t count Chicken Little—to show animals interacting in a world without humans. And it’s worth asking why the studio resists exploiting such a powerful tool, especially because it appeals so much to children: it’s no accident that Robin Hood, which is far from the best movie the studio ever made, is the one that my daughter has watched the most. In part, it’s due to a residual anxiety over being seen as kids’ stuff, which still haunts the genre as a whole, but there’s also an element of caution at play. Walt Disney himself was oddly insistent on centering his movies on a boring human couple, with the animators reduced to creating a riot of energy in the supporting characters: it’s as if the Marx Brothers had built all their movies around Zeppo, or even Allan Jones and Kitty Carlisle. It was a conservative choice made by a studio that embraced conventional values, and animals have always enabled exactly that anarchic vein in animation that Disney did his best to repress. (Disney buffs have long wondered why the studio repeatedly tried and failed to develop Chanticleer, an animal fable featuring none other than Reynard the Fox, and I suspect that we have our answer here.)
Something similar appears to have happened with Zootopia, even if it’s obviously the product of another place and time. Try to imagine this story being made with human characters, and you can’t: its anthropomorphism was a shield that protected it throughout what must have been a lengthy development process. I’m tempted to propose an anthropomorphic principle of fiction, in parallel to the anthropic principle that I’ve discussed here before, which states that a story that grounds itself in a nonhuman world is more likely to take meaningful risks with our human preconceptions. To borrow a concept from the movie’s own lexicon, it allows animators to follow their instincts. (I also can’t resist pointing out that both “animal” and “animation” emerge from the same root, which refers to nothing less than the soul.) And I have a feeling that this is where the real influence of Zootopia will be felt. A movie can’t change the world, unfortunately, but it can certainly change a studio, and I’m hopeful that Disney will continue to pursue the line of thinking it represents. It gives us a world rich enough to sustain multiple sequels, so here’s my pitch for the next one: a movie that raises the question of why everyone we meet here is a mammal, as if we couldn’t be expected to relate to anything with feathers or scales. That’s a form of prejudice, too—and if Zootopia itself teaches us anything, it’s that our assumptions are sometimes so large that they can’t even be seen.
Sofia’s world
When you’ve gotten into the habit of seeing television as a source of hot takes and think pieces, it can be hard to turn that mindset off. Consider the case of Sofia the First. Of all the shows that my daughter watches these days, it’s by far her favorite, and its easy availability through Netflix and Disney Junior means that we absorb three or four episodes on an average morning. (Like most parents, I do what I can to keep screen time under control, but it isn’t easy: we’re at the point where I can only talk her into brushing her teeth and putting on her pajamas with the promise of a Taylor Swift video.) Most of her shows tend to blur into background noise, largely because I’ve already been up since before sunrise, but I’ve ended up watching Sofia more closely. And I like it. It’s a show that benefits from having the full resources of the massive Disney studio mustered on its behalf: as the gateway into the princess franchise for an entire generation of toddlers, it’s a crucial piece of that machine, and you can tell that a lot of time, money, and effort have gone into making it as appealing a product as possible. The animation is great, the songs are cute, and the writing is reasonably sharp, even as it remains pitched squarely toward the kindergarten crowd. When I sit down to watch it, I have a good time.
But the strange thing is that I also find myself thinking about it at odd moments throughout the day. The premise, if you aren’t familiar with it, is spelled out with admirable efficiency in the show’s theme song: Sofia was a village girl whose mother married the king of Enchancia, making her a princess overnight and giving her a new royal brother and sister. She has a magic amulet that lets her talk to animals, and which occasionally summons a princess from another movie to give her advice, although their input isn’t always particularly useful. (When Aurora from Sleeping Beauty turns up, you have to wonder what she has to teach anyone about anything, and her only tip is for Sofia to listen to her animal friends.) Her world is populated by the usual sorcerers and magical creatures, including, delightfully, Tim Gunn, more or less playing himself. And if this all sounds routine, it’s executed at a consistently high level, with a light touch and just enough wit to make it all very charming. The writers are clearly having fun with the material. They aren’t afraid to let Sofia herself come off as prissy or smug, and Amber, her stepsister, has become a fan favorite for obvious reasons: she’s vain, spoiled, and self-centered, but she’s also the closest thing we have to an audience surrogate, and she’s often the only one who sees the underlying ridiculousness of the situations in which she finds herself.
Yet the fact that I’ve devoted this much thought to Sofia at all indicates how my feelings about television have changed. I don’t think it’s possible for me to watch a show casually anymore: everything has to fit into a larger picture, as if I’m pitching some imaginary article to Salon. My wife and I have debated class issues, or their absence, in the kingdom of Enchancia; unpacked the character arc of Cedric the Sorcerer; made fun of the general incompetence of King Roland; compared the series to the plot of The Royal We; and joked about writing a crossover with Game of Thrones. (Honestly, James shades into Joffrey so imperceptibly that it isn’t even funny.) But we’re also being sucked into the show on its own terms, even if we can’t simply enjoy it in the way my daughter does—we have to justify it to ourselves. We’re used to seeking out shows to talk about, rather than having them sneak up on us: sometimes it seems as if we watch most shows these days so that we won’t be left out of the conversation online, rather than the other way around. And if we talk about Sofia at length, it’s because we’ve been trained to talk about every show this way. Thanks to my daughter, we basically binge watch it every morning. And even after she’s gone to bed, there are times when I’m folding laundry or doing other chores around the living room and I have to almost physically restrain myself from putting on an episode.
Of course, there’s a reason I’m writing about Sofia the First here and not Strawberry Shortcake: I’ve learned to value quality wherever I find it, and the show is an excellent example of how a branding strategy can yield something like real storytelling, however slickly packaged and presented. But it also reminds me of something that I’ve lost. A few weeks ago, I wrote a blog post in which I referred to television as a reviewable appliance, generating a steady stream of content to fill the voracious demands of online critics and readers. After reading—and occasionally writing—so much of it, I find it harder to relate to shows purely as entertainment. (It may also have something to do with the fact that I’ve watched nothing but appointment television for the last decade or so: it’s been a long time since I’ve tuned into something simply because it was on.) Sofia might seem like the quintessential example of what Renata Adler called a work of art that “inevitably cannot bear, would even be misrepresented by, review in depth,” and although I doubt that this is what she meant, I do think that it deserves to be watched through a child’s eyes. And so do a lot of other shows. I might not gain much by seeing Sofia as my daughter would, but it might be healthier if I watched, say, Mad Men that way. As Sofia herself says in her theme song, there’s so much to learn and see. And I’ve got to figure out how to do it right.
The Jedi mind trick
Difficult to see. Always in motion is the future.
—Yoda, The Empire Strikes Back
At some point over the next few hours, perhaps as you’re reading this post, The Force Awakens is projected to surge past Avatar to become the highest-grossing movie in the history of the North American box office. We usually don’t adjust such figures for inflation, of course, probably because there wouldn’t be as many records broken each year if we did, and it’s all but certain that the original Star Wars will remain tops in the franchise in terms of tickets sold. Yet it’s impossible to discount this achievement. If the latest installment continues on its present trajectory, it has a good chance of cracking the adjusted top ten of all time—it would need to gross somewhere north of $948 million domestic to exceed Snow White and the Seven Dwarfs and earn a spot on that rarefied list, and this is starting to feel like a genuine possibility. Given the changes in the entertainment landscape over the last century, this is beyond flabbergasting. But even this doesn’t get at the real, singular nature of what we’re witnessing today. The most unexpected thing about the success of The Force Awakens is how expected it was. And at a time when Hollywood is moving increasingly toward a tentpole model in which a handful of blockbusters finance all the rest, it represents both a historic high point for the industry and an accomplishment that we’re unlikely to ever see again.
When you look at the lineal succession of the most successful films at the domestic box office, you have to go back seventy-five years to find a title that even the shrewdest industry insider could have reasonably foreseen. This list, unadjusted for inflation, consists of Gone With the Wind, The Sound of Music, The Godfather, Jaws, Star Wars, E.T., Titanic, and Avatar. Gone With the Wind, which claimed the title that The Birth of a Nation had won a quarter of a century earlier, is the one exception: there’s no doubt that David O. Selznick hoped that it could be the biggest film of its era, even before the first match had been struck for the burning of Atlanta. Every other movie here is a headscratcher. No studio insider at the time would have been willing to bet that The Sound of Music—which Pauline Kael later called The Sound of Money—would outgross not just Doctor Zhivago and Thunderball that year, but every other movie ever made. The Godfather and Jaws were both based on bestselling novels, but that’s hardly a guarantee of success, and both were troubled productions with untested directors at the helm. Star Wars itself hardly needs to be discussed here. Columbia famously passed on E.T., and Titanic was widely regarded before its release as a looming disaster. And even Avatar, which everyone thought would be huge, exceeded all expectations: when you take regression to the mean into account, the idea that James Cameron could break his own record is so implausible that I have a hard time believing it even now.
Which is just another way of saying that these movies were all outliers: unique, idiosyncratic projects, not part of any existing franchise, that audiences discovered gradually, often to the bewilderment of the studios themselves. The Force Awakens was different. It had barely been announced before pundits were speculating that it could set the domestic record, and although Disney spent much of the buildup to its opening weekend downplaying such forecasts—with the implication that rival studios were inflating projections to make its final performance seem disappointing—it’s hard to believe that the possibility hadn’t crossed everybody’s mind. Most movie fans will remember that William Goldman said “Nobody knows anything” in Adventures in the Screen Trade, but it’s worth quoting the relevant paragraph in full. After noting that everyone in town except for Paramount turned down Raiders of the Lost Ark, he continues:
Why did Paramount say yes? Because nobody knows anything. And why did all the other studios say no? Because nobody knows anything. And why did Universal, the mightiest studio of all, pass on Star Wars, a decision that may just cost them, when all the sequels and spinoffs and toy money and book money and video-game money are totaled, over a billion dollars? Because nobody, nobody—not now, not ever—knows the least goddam thing about what is or isn’t going to work at the box office.
If Hollywood has learned anything since, it’s that you don’t pass on Star Wars. Whatever you might think of its merits as a movie, The Force Awakens marks the one and only time that somebody knew something. And it’s probably the last time, too. It may turn into the reassuring bedtime story that studio executives use to lull themselves to sleep, and Disney may plan on releasing a new installment on an annual basis forever, but the triumphant rebirth of the franchise after ten years of dormancy—or three decades, depending on how you feel about the prequels—is the kind of epochal moment that the industry is doing its best to ensure never happens again. We aren’t going to have another chance to miss Star Wars because it isn’t going to go away, and the excitement that arose around its return can’t be repeated. The Force Awakens is both the ultimate vindication of the blockbuster model and a high-water mark that will make everything that follows seem like diminishing returns. (More insidiously, it may be the Jedi mind trick that convinces the studios that they know more than they do, which can only lead to heartbreak.) Records are made to be broken, and at some point in my lifetime, another movie will take the crown, if only because inflation will proceed to a point where the mathematics become inevitable. But it won’t be a Star Wars sequel. And it won’t be a movie that anyone, not even a Jedi, can see coming.
Alice in Disneyland
A few weeks ago, I noted that watching the Disney movies available for streaming on Netflix is like seeing an alternate canon with high points like Snow White and Pinocchio stripped away, leaving marginal—but still appealing—films like Robin Hood and The Aristocats. Alice in Wonderland, which my daughter and I watched about ten times this week, lies somewhere in the middle. It lacks the rich texture of the earlier masterpieces, but it’s obviously the result of a lot of work and imagination, and much of it is wonderful. In many respects, it’s as close as the Disney studio ever got to the more anarchic style of the Warner Bros. cartoons, and when it really gets cooking, you can’t tear your eyes away. Still, it almost goes without saying that it fails to capture, or even to understand, the appeal of the original novels. Part of this is due to the indifference of the animators to anything but the gag of the moment, a tendency that Walt Disney once fought to keep in check, but which ran wild as soon as his attention was distracted by other projects. I love the work of the Nine Old Men as much as anyone, but it’s also necessary to acknowledge how incurious they could often appear about everything but animation itself, and how they seemed less interested in capturing the tone of authors like Lewis Carroll, A.A. Milne, or Kenneth Grahame than in shoehorning those characters into the tricks they knew. And that incuriosity has rarely been more evident than it is here.
What really fascinates me now about Alice in Wonderland is how it represents a translation from one mode of storytelling—and even of how to think about narrative itself—into another. The wit of Carroll’s novels isn’t visual, but verbal and logical: as I noted yesterday, the first book emerges from the oral fairy tale tradition, as enriched by the author’s gifts for paradox, parody, and wordplay. The Disney studio of this era, by contrast, wasn’t used to thinking in words, but in pictures. Movies were planned out as a series of thumbnail sketches on a storyboard, which naturally emphasized sight gags and physical comedy over dialogue. For the most part, Carroll’s words are preserved, and they often benefit from fantastic voice performances, but most of the scenes treat them as little more than background noise. My favorite example here is the Mad Tea Party. When I watch it again now, it strikes me as a dazzling anthology of visual puns, some of them brilliant, built around the props on the table: you can almost see the animators at the drawing board pitching out the gags, which follow one another so quickly that it makes your head spin. The result doesn’t have much to do with Lewis Carroll, and none of the surviving verbal jokes really land or register, but it works, at least up to a point, as a visual equivalent of the density of the book’s prose.
But it doesn’t really build to anything, and like the movie itself, it just sort of ends. As Ward Kimball once said to Leonard Maltin: “It suffered from too many cooks—directors. Here was a case of five directors each trying to top the other guy and make his sequence the biggest and craziest in the show. This had a self-canceling effect on the final product.” Walt Disney himself seems to have grasped this, and I’d like to think that it contributed to his decision, a few years later, to subordinate all of Sleeping Beauty to the style of the artist Eyvind Earle. (That movie suffers from the same indifference to large chunks of the plot that we see elsewhere in Disney—neither Aurora nor Prince Philip even speaks for the second half of the film, since the animators are clearly much more interested in Maleficent and the three good fairies—but we’re so caught up in the look and music that we don’t really care.) Ultimately, the real solution lay in a more fundamental shift in the production process, in which the film was written up first as a screenplay rather than as a series of storyboards. This model, which is followed today by nearly all animated features, was a relatively late development. And to the extent that we’ve seen an expansion of the possibilities of plot, emotion, and tone in the ongoing animation renaissance, it’s thanks to an approach that places more emphasis on figuring out the overall story before drilling down to the level of the gag.
That said, there’s a vitality and ingenuity to Alice in Wonderland that I miss in more recent works. Movies like Frozen and the Pixar films are undeniably spectacular, but it’s hard to recall any moments of purely visual or graphic wit of the kind that fill the earlier Disney films so abundantly. (The exception, interestingly, is The Peanuts Movie, which seems to have benefited by regarding the classic Schulz strips as a sort of storyboard in themselves, as well as from the challenges of translating the flat style of the originals into three dimensions.) An animated film built around a screenplay and made with infinite technological resources starts to look more or less like every other movie, at least in terms of its staging and how all the pieces fit together, while a film that starts with a storyboard often has narrative limitations, but makes up for it with a kind of local energy that doesn’t have a parallel in any other medium. The very greatest animated films, like My Neighbor Totoro, somehow manage to have it both ways, and the example of Miyazaki suggests that the real secret is to have the movie conceived by a single visionary who also knows how to draw. Given the enormous technical complexity of contemporary animation, that’s increasingly rare these days, and it’s true that some of the best recent Pixar movies, like Toy Story 3, represent the work of directors who don’t draw at all. But I’d love to see a return to the old style, at least occasionally—even if it isn’t everyone’s cup of tea.
Oo-de-lally, oo-de-lally
Over the last few weeks, my daughter and I have been slowly working through the Disney movies that are available for streaming on Netflix. I’m not sure about the business details of that arrangement, which I can only assume involved some protracted negotiations, but Disney’s conservative approach to its back catalog leads to an intriguingly skewed sample set. It’s reluctant to give unlimited access to its most lucrative plums, so the selection includes neither the masterpieces of the first golden age, like Snow White or Pinocchio, nor the heights of its late renaissance, like Beauty and the Beast or Aladdin. Instead, it gives us the movies that fell through the cracks: lighter fare, much of it from after Walt Disney’s death, like The Aristocats or The Rescuers, or the movies that the revitalized studio continued to produce after the bloom had gone off the rose, like Hercules or Treasure Planet. And although my daughter seems equally happy with all of it, as an animation buff, I’m most interested in the way the result amounts to an accidental canon from a parallel universe. As viewers of the excellent documentary American Experience: Walt Disney can attest, the studio’s history consisted of alternating periods of boom and bust, and watching the movies on Netflix is like experiencing that legacy with most of the high points removed, leaving the products of the years when money was scarce and the animators were forced to work under considerable constraints.
In his indispensable book Paper Dreams: The Art and Artists of Disney Storyboards, the historian John Canemaker says this about that era:
After Walt died in 1966, story took a backseat to animation at the Disney Studio. In films such as The Aristocats, Robin Hood, The Rescuers, and The Fox and the Hound, the animators brought new degrees of subtlety to the characters’ personalities and relationships. But the stories, concocted solely by storyboards that were mainly contributed to by a committee of animators, were weak and almost an incidental backdrop to the often bravura performances. Observing fine animators going through their dazzling paces in second-rate vehicles was likened by one pundit to watching great chefs make hot dogs.
Frank Thomas and Ollie Johnston make much the same point in Disney Animation: The Illusion of Life:
The interrelationships of these characters were of particular importance in Robin Hood, because the story was secondary to the characters. There was no real suspense in Prince John’s many attempts to catch Robin. They are showcases for the histrionics of the two villainous actors who become richer and more entertaining as the picture progresses.
This goes a long way toward explaining the peculiar appeal of Robin Hood, which remains one of the most beguiling works in the whole Disney canon, as well as the movie that my daughter and I have ended up watching the most. Its reduced budget is painfully apparent, with animation and character designs repurposed from other projects, reused from elsewhere in the movie, or simply flipped and repeated. Much of the writing feels like the work of animators more accustomed to thinking in terms of isolated character poses and bits of business than considering the story as a whole, leading to the kind of crude, obvious gags and tricks that we find even in Winnie the Pooh. And the story suffers from a manifest indifference, verging on boredom, toward Robin Hood and Maid Marian: Disney has always been better at evil than at good, and it’s particularly evident here. But the evil is truly delicious. The pairing of Peter Ustinov as Prince John and the British comic Terry-Thomas as Sir Hiss—both playing wonderfully within type—still makes me laugh with delight. And the rest of the cast is stocked with the kinds of dependable character actors that Disney used so capably: Phil Harris, Pat Buttram, Ken Curtis, George Lindsey, Andy Devine. (You could write an entire dissertation on the evolving pool of talent that the studio employed over the years, from vaudeville and radio pros like Ed Wynn through the television stars of the seventies through the Second City and single-camera sitcom alumni that make up the cast of a movie like Inside Out.)
And it’s still oddly charming, especially in the songs that Roger Miller contributes as the Rooster: if you’re anything like me, you’ve probably got “Whistle-Stop” running on a loop through your head right now. (There’s something undeniably shrewd in the way the studio outsourced the music to different writers, with Miller’s novelty country numbers sharing screen time with “Love” by Floyd Huddleston and George Bruns and Johnny Mercer’s “The Phony King of England.”) It’s a cut below the classics, but luckily, we don’t need to take it in isolation. When we’re in the mood for a movie on which the studio lavished all its resources, there’s always Fantasia or Sleeping Beauty, but there’s also something engaging about the sheer roughness of Robin Hood, cut corners and all, which is as close as Disney ever got to the actor’s performance passing through the pencil sketches to end up almost intact on the screen. It all feels like the result of a private huddle between the animators themselves, and they weren’t afraid to poke fun at their own situation, as Thomas and Johnston note:
The subtler shadings of [Sir Hiss’s] personality were based on real experience. Occasionally, over the years, there had been men at the studio who in their determination to please Walt did a fair amount of bowing and scraping…Suddenly there was a place to use these observations as our cartoon character matched the reality of human actions. “Now, what was so funny about the way those guys did it?”
Now that Disney is an entertainment juggernaut once more, I doubt we’ll ever see anything as unvarnished and vital again. And as much as I love Frozen, I also miss the spirit that we find here, with Robin Hood himself—in the form of Walt—gone from the forest, and a ragtag group of merry men doing their best in his absence.
Mary Poppins and the rise of the blockbuster
Fifty years ago, Disney’s Mary Poppins had been firmly established as the highest-grossing movie of 1964, with a degree of cultural omnipresence that now seems all but unrecognizable—adjusted for inflation, its box office take works out to an astonishing $600 million. Ever since, it’s been so ubiquitous that it’s hard to regard it as an ordinary movie, much less as a work of art. Yet it’s wonderful in ways that have nothing to do with nostalgia, a witty, inventive blockbuster that feels almost like a more innocent extension of the work of Powell and Pressburger: it has the same technical ambition, depth of cast, and richness of design. For much of the last few weeks, its soundtrack has resided on my record player, and it delights me almost as much as it does our daughter. There isn’t a clunker in the entire score, and at least six of the songs by the Sherman Brothers are outright classics. (If the movie’s look and atmosphere were secretly shaped by the Archers, the music draws openly on Lerner and Loewe, and in retrospect, it feels like a natural bridge between My Fair Lady and its even more commercially spectacular successor, The Sound of Music.)
Yet its full legacy wouldn’t be felt for another four decades. In a sense, it’s the first unmistakable example of the business model that currently dominates Hollywood: the adaptation of an established children’s property, aimed squarely at all four quadrants of the public, with every resource of a major studio lavished on casting, art direction, music, and visual effects. For all its undeniable charm, it marks the beginning of a lineage that runs from Harry Potter through the Marvel Universe to The Hunger Games, with movie companies investing everything in tentpole franchises that stake much of the available money and talent on a single roll of the dice. Lionsgate is The Hunger Games, much as MGM is James Bond and the Hobbit franchise, and it’s no exaggeration to say that Disney was Mary Poppins for the years in which the movie was in production. The artistic legacy of Walt Disney, the man, is a mixed one, but there’s no question of his perfectionism or the demands he made on his creative team, and it shows. Mary Poppins cuts no corners, and it looks so good, with such attention to detail and so much money visible on the screen, that it makes most children’s movies seem cheap by comparison.
In other words, Mary Poppins was the original big bet, albeit one driven less by market calculation than by the obsessiveness of Walt Disney himself. (There’s a strong case to be made that its real impact has been even greater than that of Star Wars, which was a comparatively ragged production made in the face of active corporate interference.) And it stands as the culmination of everything the studio represented, in craft if not in content. It’s a repository of nifty tricks, both old and new: the gag with Mary Poppins rescuing her carpet bag from sinking into the cloudbank is lifted almost intact from the stork in Dumbo, as if an old hand on the Burbank lot, possibly Disney himself, had simply pitched a joke that he knew had worked well in the past. Mary Poppins is made up of a thousand little touches like this, and part of its magic is how seamlessly it synthesizes the work of so many craftsmen and disparate influences into something that seems so inevitable. The director, Robert Stevenson, was a capable journeyman who had worked with Disney for years—although not, confusingly, on Treasure Island—and if the result doesn’t bear much trace of his personality, there’s no doubt that he deserves much of the credit for keeping it so superbly organized.
And audiences obviously responded to it, even if some critics were skeptical both of its departures from its source material and of the apparent reassurances it provided. Even at the time, many cultural observers felt that it offered nothing but a form of Edwardian escapism from current events, and a glance at the headlines from the year in which it was released—this was the summer of the Civil Rights Act, the Gulf of Tonkin incident, and the dawn of Beatlemania, with race riots erupting in Philadelphia the day after its premiere—creates an undeniable dissonance. Yet the same could be said of nearly every big movie in nearly every decade, and few have managed to carve out their own perfect worlds so beautifully. Mary Poppins is a little like the snow globe of St. Paul’s Cathedral that its title character holds as she sings “Feed the Birds”: closed, gorgeously rendered, and complete in itself. It’s the kind of movie that the major studios ought to be able to do best; it certainly couldn’t have been produced in any other way. And if few comparable films since have matched its grace and imagination, it still stands as an example of Hollywood’s potential, even for an industry that has always been run by the likes of Mr. Banks.
The Dumbo solution
The clowns have been celebrating and dropped a bottle of liquor into a tub of water. Timothy and Dumbo come along, very disconsolate, and Dumbo begins to hiccup. Timothy suggests he drink some water, and soon the little elephant is behaving in a strange manner. Timothy wonders what kind of water is in that tub and takes a drink himself. This was the action the storyman left up to the animator.
The scene was given to Fred Moore, one of the top animators…but he had trouble with this assignment…Timothy, somehow, had to react in an appropriate and entertaining way, first, to the taste of the water, and, second, to the way it was beginning to make him feel. There was not enough time to have him complete the change to a funny drunk; the point of the scene was just to show his initial reactions to taking the drink. It was subtle—and questionable planning as well.
After Fred had sweated and squirmed through several tests, none of which felt right, the decision was made to change the story concept at that point. They went back to the storyboard, and after many discussions, Ben [Sharpsteen] recalled they came up with this idea:
“When Dumbo showed signs of intoxication, Timothy remarked, ‘I wonder what kind of water this is anyhow.’ With that remark, he leaned over too far to look into the tub, fell in, and after a splash or two, the sound of his voice changed considerably. [In the final, they used a happy yodel.] This was done without showing animation of Timothy. The next time we saw him, he was resting on his elbows on the edge of the tub with a silly smile on his face.
“This was a simple and easy way of putting the transition over. It was a far better means of doing it than to have squeezed everything we could have out of the animator in some subtle manner. In fact, the resulting animation could have been done by an animator of lesser abilities.”
—Frank Thomas and Ollie Johnston, Disney Animation: The Illusion of Life
Blinn’s Law and the paradox of efficiency
Note: I seem to have come down with a touch of whatever stomach bug my daughter had this week, so I’m taking the day off. This post was originally published, in a somewhat different form, on August 9, 2011.
As technology advances, rendering time remains constant.
—Blinn’s Law
Why isn’t writing easier? Looking at the resources that contemporary authors have at their disposal, it’s easy to conclude that we should all be perfect writing machines. Word processing software, from WordStar to Scrivener, has made the physical process of writing more streamlined than ever; Google and Amazon have given us access to a world of information that would have been inconceivable even fifteen years ago; and research, editing, and revision have been made immeasurably more efficient. And yet writing itself doesn’t seem all that much less difficult than before. The amount of time a professional novelist needs to spend writing each day—let’s say three or four hours—hasn’t decreased much since Trollope, and for most of us, it still takes a year or two to write a decent novel.
So what happened? In some ways, it’s an example of the paradox of labor-saving devices: instead of taking advantage of our additional leisure time, we create more work for ourselves. It also reflects the fact that the real work of writing a novel is rarely connected to how fast you can type. But I prefer to think of it as a variation on Blinn’s Law. As graphics pioneer James Blinn first pointed out, in animation, rendering time remains constant, even as computers get faster. An artist gets accustomed to waiting a certain number of hours for an image to render, so as hardware improves, instead of using it to save time, he employs it to render more complex graphics. This is why rendering time at Pixar has remained essentially constant over the past fifteen years. (Although the difference between Toy Story and Cars 2, or even Brave, is a reminder that rendering isn’t everything.)
Similarly, whatever time I save by writing on a laptop rather than a manual typewriter is canceled out by the hours I spend making additional small changes and edits along the way. The Icon Thief probably went through eighteen substantial drafts before the final version was delivered to my publisher, an amount of revision and rewriting that would have been unthinkable without Word. Is the novel better as a result? On a purely technical level, yes. Is the underlying story more interesting than if I’d written it by hand? Probably not. Blinn’s Law tells us that the leaves and grass in the background of a shot will look increasingly great, but it says nothing about the quality of storytelling. Which seems to imply that the countless tiny changes that a writer like me makes to each draft are only a waste of effort.
And yet here’s the thing: I still needed all that time. No matter how efficient the physical side of the process becomes, it’s still desirable for a writer to live with a novel, or a studio to live with a movie, for at least a year or so. (In the case of a film like Frozen, that gestational period can amount to a decade or more.) For most of us, there seems to be a fixed developmental period for decent art, a minimum amount of time that a story needs to simmer and evolve. The endless small revisions aren’t the point: the point is that while you’re altering a word or shifting a paragraph here or there, the story is growing in your head in unexpected ways. Even as you fiddle with the punctuation, seismic changes are taking place. But for this to happen, you need to be at your desk for a certain number of hours. So what do we do in the meantime? We do what Pixar does: we render. That’s the wonderful paradox of efficiency: it allows us, as artists, to get the inefficiency we all need.
The Disney way of storytelling
Our best advice, at this point, is to develop and strengthen what is good; edit out and shift emphasis on what is not coming off; stay away from the commonplace and the hackneyed; constantly search for new things the audience has never seen before—but tell it all with the same old values and fundamentals of communication…
Every picture will have scenes that are difficult to draw and scenes that call for experience and talent, but the bulk of the film should be made up of scenes that are easy to do, should be effective and good-looking on the screen, and should make the best possible use of the animation potential.
Flight, Wreck-It Ralph, and the triumph of the mainstream
At first glance, it’s hard to imagine two movies more different than Flight and Wreck-It Ralph. The former is an emphatically R-rated message movie with a refreshing amount of nudity, drug and alcohol abuse, and what used to be known as adult situations, in the most literal sense of the term; the latter is a big, colorful family film that shrewdly joins the latest innovations in animation and digital effects to the best of classic Disney. On the surface, they appeal to two entirely separate audiences, and as a result, you’d expect them to coexist happily at the box office, which is precisely what happened: both debuted this weekend to numbers that exceeded even the most optimistic expectations. (This is, in fact, the first weekend in a long time when my wife and I went to the movies on two consecutive nights.) Yet these two films have more in common than first meets the eye, and in particular, they offer an encouraging snapshot of Hollywood’s current potential for creating great popular entertainment. And even if their proximity is just a fluke of scheduling, it’s one that should hearten a lot of mainstream moviegoers.
In fact, for all their dissimilarities, the creative team behind Flight would have been more than capable of making Wreck-It Ralph, and vice versa, and under the right circumstances, they might well have done so. Flight is Robert Zemeckis’s first live-action movie in years, after a long, self-imposed exile in the motion-capture wilderness, and the script is by John Gatins, who spent a decade trying to get it made, while also slaving away for two years on the screenplay for Real Steel. It’s a handsome movie, old-fashioned in its insistence on big themes and complex characters, but it’s also a product of the digital age: Zemeckis’s Forrest Gump, whatever its other flaws, remains a landmark in the use of unobtrusive special effects to advance the story and fill in a movie’s canvas, and their use here allowed Flight to be brought in on a startlingly low budget of $31 million. At his best, Zemeckis is one of the most technically gifted of mainstream directors, and in some ways, he’s an important spiritual godfather for Wreck-It Ralph, whose true precursor isn’t Toy Story, as many critics have assumed, but Who Framed Roger Rabbit?
Similarly, Wreck-It Ralph is the product of a canny, often surprisingly mature set of sensibilities that only happens to have ended up in animation. Along with the usual stable of Pixar and Disney veterans, the creative team includes Rich Moore and Jim Reardon, a pair of directors whose work on The Simpsons collectively represents the best fusion of high and low art in my lifetime, and they’ve given us a movie that appeals to both adults and kids, and not just in the obvious ways. It’s full of video game in-jokes that will fly over or under the heads of many viewers—a reference to Metal Gear Solid represents one of the few times a joke in a movie had the audience laughing while I was scratching my head—but this is really the least impressive aspect of the movie’s sophistication. The script is very clever, with a number of genuinely ingenious surprises, and there are touches here that go well beyond nerd culture to something older and weirder, like Alan Tudyk’s brilliant Ed Wynn impression as the villainous King Candy. (The cast, which includes John C. Reilly, Jack McBrayer, and Sarah Silverman, all of them wonderful, is a modern version of the Disney trick of recruiting old pros like Ed Wynn and Phil Harris to bring its characters to life.)
It’s tempting to say that it all comes down to good storytelling, but there’s something else going on here. Last year, I predicted that the incursion of Pixar talent into live-action movies would represent a seismic shift in popular filmmaking, and although John Carter was a bust, Brad Bird’s work on Mission: Impossible—Ghost Protocol indicates that I wasn’t entirely off the mark. This weekend’s top two movies are a sign that, at its best, Hollywood is still capable of making solid movies for adults and children that come from essentially the same place—from good scripts, yes, but also from studios and creative teams that understand the potential of technology and draw on a similar pool of skilled professionals. This is how Hollywood should look: not a world neatly divided into summer tentpole pictures, Oscar contenders, and a lot of mediocrity, but a system capable of turning out mainstream entertainment for different audiences linked by a common respect for craft. The tools and the talent are there, led by directors like Zemeckis and backed up by studios like Pixar and Disney. This ought to be the future of moviemaking. And at least for one weekend, it’s already here.
So what happened to John Carter?
In recent years, the fawning New Yorker profile has become the Hollywood equivalent of the Sports Illustrated cover—a harbinger of bad times to come. It isn’t hard to figure out why: both are awarded to subjects who have just reached the top of their game, which often foreshadows a humbling crash. Tony Gilroy received a profile after the success of Michael Clayton, only to follow it up with the underwhelming Duplicity. For Steve Carell, it was Dinner for Schmucks. For Anna Faris, it was What’s Your Number? And for John Lasseter, revealingly, it was Cars 2. The latest casualty is Andrew Stanton, whose profile, which I discussed in detail last year, now seems laden with irony, as well as an optimism that reads in retrospect as whistling in the dark. “Among all the top talent here,” a Pixar executive is quoted as saying, “Andrew is the one who has a genius for story structure.” And whatever redeeming qualities John Carter may have, story structure isn’t one of them. (The fact that Stanton claims to have closely studied the truly awful screenplay for Ryan’s Daughter now feels like an early warning sign.)
If nothing else, the making of John Carter will provide ample material for a great case study, hopefully along the lines of Julie Salamon’s classic The Devil’s Candy. There are really two failures here, one of marketing, another of storytelling, and even the story behind the film’s teaser trailer is fascinating. According to Vulture’s Claude Brodesser-Akner, a series of lost battles and miscommunications led to the release of a few enigmatic images devoid of action and scored, in the manner of an Internet fan video, with Peter Gabriel’s dark cover of “My Body is a Cage.” And while there’s more to the story than this—I actually found the trailer quite evocative, and negative responses to early marketing materials certainly didn’t hurt Avatar—it’s clear that this was one of the most poorly marketed tentpole movies in a long time. It began with the inexplicable decision to change the title from John Carter of Mars, on the assumption that women are turned off by science fiction, while making no attempt to lure in female viewers with the movie’s love story or central heroine, or even to explain who John Carter is. This is what happens when a four-quadrant marketing campaign goes wrong: when you try to please everybody, you please no one.
And the same holds true of the movie itself. While the story is fairly clear, and Stanton and his writers keep us reasonably grounded in the planet’s complex mythology, we’re never given any reason to care. Attempts to engage us with the central characters fall curiously flat: to convey that Princess Dejah is smart and resourceful, for example, the film shows her inventing the Barsoomian equivalent of nuclear power, evidently in her spare time. John Carter himself is a cipher. And while some of these problems might have been solved by miraculous casting, the blame lands squarely on Stanton’s shoulders. Stanton clearly loves John Carter, but forgets to persuade us to love him as well. What John Carter needed, more than anything else, was a dose of the rather stark detachment that I saw in Mission: Impossible—Ghost Protocol, as directed by Stanton’s former Pixar colleague Brad Bird. Bird clearly had no personal investment in the franchise, except to make the best movie he possibly could. John Carter, by contrast, founders on its director’s passion and good intentions, as well as a creative philosophy that evidently works in animation, but not live action. As Stanton says of Pixar:
We’re in this weird, hermetically sealed freakazoid place where everybody’s trying their best to do their best—and the films still suck for three out of the four years it takes to make them.
Which only makes us wonder what might have happened if John Carter had been granted a fourth year.
Stanton should take heart, however. If there’s one movie that John Carter calls to mind, it’s Dune, another financial and critical catastrophe that was doomed—as much as I love it—by fidelity to its source material. (In fact, if you take Roger Ebert’s original review of Dune, which came out in 1985, and replace the relevant proper names, you end up with something remarkably close to a review of John Carter: “Actors stand around in ridiculous costumes, mouthing dialogue with little or no context.”) Yet its director not only recovered, but followed it up with my favorite movie ever made in America. Failure, if it results in another chance, can be the opposite of the New Yorker curse. And while Stanton may not be David Lynch, he’s not without talent: the movie’s design is often impressive, especially its alien effects, and it displays occasional flashes of wit and humor that remind us of what Stanton can do. John Carter may go down as the most expensive learning experience in history, and while this may be cold comfort to Disney shareholders, it’s not bad for the rest of us, as long as Stanton gets his second chance. Hopefully far away from the New Yorker.
Is storytelling obsolete?
The tricky thing about defending plot is that you occasionally get it from both sides. On the literary end, you have critics like John Lucas of the Guardian, who is clearly suspicious of most plotted fiction, or James Wood of the New Yorker, who is famously fed up with the conventions of literary realism. Meanwhile, on the other end, you have those who want to get rid of story altogether, but for radically different reasons. And I suspect that the likes of Lucas and Wood might ease up on their invective if they realized that plot was, in fact, literature’s last stand against an even more insidious opponent, embodied, at least this week, by Andy Hendrickson, Chief Technology Officer of Disney Studios, who was quoted in Variety as saying: “People say ‘It’s all about the story.’ When you’re making tentpole films, bullshit.”
To state the obvious, I’d rather be defending story against the likes of Lucas and Wood, who at least claim to be aspiring to something more, than Hendrickson, who is pushing toward something much less. On a superficial level, though, he seems to have a point—at least when it comes to the movies that consistently generate large audiences. Citing a chart of the top 12 movies of all time, including Disney’s own Alice in Wonderland, Hendrickson notes that visual effects are what tend to drive box office—”and Johnny Depp didn’t hurt,” he concludes. Which is true enough. Most of these movies are triumphs of visuals over narrative, based on existing brands or properties, to the point where story seems almost incidental. Even a movie like The Dark Knight, which cares deeply about plot and narrative complexity, feels like little more than an aberration.
But this only tells half the story. For one thing, the list that Hendrickson provides isn’t adjusted for inflation, and the list of the real highest-grossing movies of all time yields a much different picture. There are some clunkers here, too (nobody, I trust, went to see The Ten Commandments because of the script), but for the most part, these are movies driven by story and spectacle: Gone With the Wind. E.T. Star Wars. The Sound of Music. Even Avatar, which had a few problems in the screenplay department, was an ambitious attempt to create a fully realized original story that would fuel the dreamlife of millions. And these are the most lucrative movies ever made. To be content with a disposable tentpole picture that barely makes back its production and marketing costs strikes me as a lack of ambition. And it should strike Disney shareholders the same way.
Moreover, even the movies that Hendrickson cites are more driven by story than he acknowledges. Alice in Wonderland was a book before it became a terrible movie, after all, and it’s safe to say that box office was driven as much by goodwill toward Lewis Carroll’s creations as toward Johnny Depp. The same is true for Spider-Man, Harry Potter, The Lord of the Rings, and most other major franchises, all of which were built on the work of solitary geniuses. In short, someone still needs to do the work of story. Aside from exceptions like Pixar or Inception, the primary creative work may not be done in Hollywood itself, but in novels, comics, and other media where true artists continue to gravitate, and where the movies will eventually turn. Hendrickson may hate to admit it, but he still depends on storytellers, even if they’ve fled his own department. Life, as a certain famous franchise reminds us, always finds a way. And story does as well.
So what happened to Cars 2?
On Saturday afternoon, at my insistence, my wife and I ended up in a theater full of excited kids and obliging parents at a screening of Cars 2, which had already received the worst reviews in the history of Pixar. Rather to my surprise, my wife enjoyed it more than I did, and the kids seemed okay with it as well (aside from the one who kicked my chair repeatedly throughout most of the last twenty minutes). Yet the film itself is undeniably underwhelming: a bright, shiny mediocrity. Cars 2 isn’t a bad movie, exactly—it’s watchable and reasonably fun—but it’s a disappointment, not only in comparison to Pixar’s past heights but also to a strong recent showing from DreamWorks, which includes How to Train Your Dragon and the sublime Kung Fu Panda series. And while Pixar can take comfort in good box office and decent audience reactions, I hope that the negative critical response inspires some introspection at the studio as to how things went wrong.
It’s important to note that it wouldn’t have taken a miracle to make Cars 2 a better movie. While the original Cars struck me as somewhat misconceived, the basic elements of the sequel are all sound: the shift in tone from nostalgic Americana to international thriller is a masterstroke, and the underlying story and premise are fine, although never particularly involving. The trouble is that the script, by writer Ben Queen, never really sparkles, at least not by the standards we’ve come to expect: there are some laughs, but only a few hit home, and the movie seems content to coast on the level of cleverness rather than taking the leap to really inspired comedy or action. Cars 2 is constantly on the verge of breaking through to something more engaging, but never quite makes it, and I suspect that another pass on the screenplay, and some honest notes, would have made all the difference.
This brings us to the second big problem: it’s hard to give notes to the man who founded the studio. John Lasseter is undeniably a genius—he’s the rare example of a great creative artist who has also demonstrated a willingness to tackle the practical problems of building a major corporation—but it was probably too much to ask one man to oversee Pixar, Disney animation, and a movie of his own. A recent New York Magazine profile makes it clear that the process left Lasseter pressed for time, which would have made it hard for him to address his own movie’s more glaring flaws. Even more importantly, it seems likely that his status as a Pixar legend and founding father prevented him from receiving the feedback he needed. A glance at the history of movies reminds us that the heads of studios can make remarkable producers—just look at David O. Selznick—but that even the greatest directors can’t operate entirely without accountability.
I’ve talked about Pixar’s singular culture before, in a much more comprehensive post, so I won’t repeat the same points here. But it seems clear that Pixar’s previous excellence was due to a process that allowed its central brain trust to mercilessly criticize and improve the studio’s works in progress. For Cars 2, this process seems to have broken down, partly because of Lasseter’s deserved stature, and also because of his personal attachment to the Cars franchise. (Pixar has famously canceled other projects, such as Newt, deep into the planning stages because of quality concerns, something it’s hard to imagine happening to Cars 2.) Judging from the outcome, Lasseter needs to return to what he does better than anyone else alive: overseeing the work of the world’s greatest animation studio. If not, he will end up with a legacy more like that of George Lucas than Walt Disney. And that would be a shame.
The Legend of Miyamoto
For reasons known only to itself, The New Yorker has evidently decided that the best way to write about video games is to assign such stories to writers who emphatically have no gaming experience. This approach, which wouldn’t be tolerated for any other art form, high or low, has already resulted in this notorious article by Nicholson Baker—one of my favorite living writers, but clearly unequipped to say anything interesting about Red Dead Redemption. And now we have Nick Paumgarten’s disappointing profile of Shigeru Miyamoto, which is a huge missed opportunity, in more ways than one.
Miyamoto, the creator of the Mario and Zelda franchises and the greatest video game designer of all time, has often been compared to Walt Disney, an accolade he shares with his fellow genius Hayao Miyazaki. (Miyamoto and Miyazaki also share a deep nostalgia for the forests and villages of rural Japan, an abiding affection that shows up throughout their work.) Miyamoto is an artist, a storyteller, an engineer, and a visionary, and he’s exactly the sort of creative force that the readers of The New Yorker ought to know more about. The fact that Paumgarten scored only a brief interview with Miyamoto, which he pads out to feature length with pages of unenlightening digressions, is only the most disappointing thing about the profile. A single glimpse of one of Miyamoto’s sketches for Zelda would be more interesting than anything on display here.
Still, there are a few moments worth mentioning. Here’s Miyamoto on calibrating the difficulty of a game, and how important it is to incorporate quiet moments alongside every challenge:
A lot of the so-called action games are not made that way…All the time, players are forced to do their utmost. If they are challenged to the limit, is it really fun for them?…[In Miyamoto’s own games] you are constantly providing the players with a new challenge, but at the same time providing them with some stages or some occasions where they can simply, repeatedly, do something again and again. And that can be a joy.
This is especially good advice for writers in genres, such as suspense, that place a premium on intensity. A few strategically timed breaks in the action, which give the reader a moment of breathing room, can make the rest of the novel read much more quickly. The key, as Miyamoto knows, is putting yourself in the position of a person approaching a work of art for the first time:
I always remind myself, when it comes to a game I’m developing, that I’m the perfect, skillful player. I can manipulate all this controller stuff. So sometimes I ask the younger game creators to try playing the games they are making by switching their left and right hands. In that way, they can understand how inexperienced the first-timer is.
Similarly, once a writer has internalized the plot of a novel, it can be hard to see it with fresh eyes. One solution is to set the book aside for a month and read it again once the memory of the story has faded. Another approach, which I’ve tried a few times, is to read a sequence of chapters in reverse, or at random, which often reveals problems or repetitions that I wouldn’t have noticed otherwise.
Finally, here’s Paumgarten on one of my favorite topics, the importance of constraints as a creative tool:
Mario, [Miyamoto’s] most famous creation, owes his appearance to the technological limitations of the first Donkey Kong game. The primitive graphics—there were hardly enough pixels to approximate a human form—compelled Miyamoto to give Mario white gloves and red overalls (so that you could see his arms swing), a big bushy mustache and a red hat (to hide the fact that engineers couldn’t yet do mouths or hair that moved), and a big head (to exaggerate his collisions). Form has always followed functionality. The problem now, if you want to call it one, is the degree of functionality. [Italics mine.]
This is a nice, crucial point, and it applies to more than video games. The limitations that made Mario so distinctive are the same ones that led to the look of Mickey Mouse, among so many other stars of early animation. One problem with the recent availability of beautifully rendered computer graphics is that character design is becoming a lost art. Even the best recent Pixar, Disney, and DreamWorks films have suffered from this: they can render every hair on a character’s head, but can’t make the character itself a memorable one. (Kung Fu Panda may be the last computer-animated movie with really distinctive character designs.)
So are video games art? Paumgarten glances at the subject only briefly, but with all due respect to Roger Ebert, there’s no doubt in my mind that the best video games are indeed art. At least, that’s the only explanation I have for something like Super Mario Galaxy, which is one of the few recent works, in any medium, that has filled me with something like my childhood envy for those who get to spend their lives telling stories. (The J.J. Abrams reboot of Star Trek is another.) Miyamoto’s great skill, as the article reminds us, is to bring us back to the best moments of our childhood. And while not all art needs to aspire to this, the world definitely needs art that does.
Tangled’s web
The big news in pop culture this week, of course, is the unexpected resurgence, in the form of Tangled, of the classic Walt Disney brand. As many critics have noted, Tangled is the closest thing to the full Disney package—fairy tale setting, beautiful princess, dashing hero, amusing animal sidekicks, Alan Menken—that we’ve had in at least fifteen years. The result, while sentimental, undeniably works. Watching Tangled, I felt something like what Pauline Kael described when reviewing a very different movie: “The pieces of the story fit together so beautifully that eventually the director has you wrapped up in the foolishness. By the end, all the large, sappy, satisfying emotions get to you.”
So what are the lessons for writers? It’s easy, and definitely accurate, to credit John Lasseter, the Pixar genius who shepherded Tangled throughout its entire production, with much of the movie’s success. But it’s also worth spotlighting the contribution of animator Glen Keane, who would have directed Tangled had he not suffered a heart attack midway through production. Den of Geek has a really fine interview with Keane, which is worth reading in its entirety, but especially for this story, which describes something to which any writer can relate:
The amazing thing was that in May, this year, we were only at 40% finished in our animation. We had to have 60% of the movie by the middle of July. And it was impossible. And it was all of the most subtle, difficult, stuff.
I remember telling [the animators], “look, we have an impossible amount of work to do, none of you will be the animators you are now by the end of the film, you will have grown. You will have animated scenes that you can’t even imagine that you did. And I can’t tell you how you will do them. But you will do them, and there’s just something that is happening right now, and I call it collective learning.”
The history of animation, in general, is something that every writer should study, because it’s by far the best documented of any of the narrative arts. Because every stage in the animation process—initial sketches, concept art, storyboards, backgrounds—is fun to look at in itself (which isn’t true, for example, of most novelists’ first drafts), the creative process is exceptionally well chronicled. A book like Paper Dreams: The Art and Artists of Disney Storyboards is an inspiring read for any writer who needs a reminder of how tentative and exploratory the artistic process can be, especially in its earliest stages.
Animation is also worth studying because it tells stories simply and cleanly, as most writers should strive to do. It’s especially good at breaking stories down into their basic units, which, as I’ve noted already, is the first and most important rule of writing. Any writer, for example, would benefit from the sort of advice that Shamus Culhane gives in Animation: From Script to Screen:
One good method of developing a story is to make a list of details. For example [for a cartoon about elves as clock cleaners in a cathedral], what architectural features come to mind—steeples, bells, windows, gargoyles? What props would the elves use—brushes, pails, mops, sponges…what else? Keep on compiling lists without stopping to think about them. Let your mind flow effortlessly, and don’t try to be neat or orderly. Scribble as fast as you can until you run out of ideas.
Disney can be accused of tastelessness and commercialism, to put it mildly, but it’s also better than anyone I know (except, perhaps, the team of Michael Powell and Emeric Pressburger, about whom I’ll have much more to say later) at creating works of art that exemplify the most fundamental reasons we go to the movies, or seek out any kind of art at all. The success of Tangled is only the most recent reminder of how powerful those elements can be.