Posts Tagged ‘AVQ&A’
Trading places
Note: Every Friday, The A.V. Club, my favorite pop cultural site on the Internet, throws out a question to its staff members for discussion, and I’ve decided that I want to join in on the fun. This week’s topic: “What famous person’s life would you want to assume?”
“Celebrity,” John Updike once wrote, “is a mask that eats into the face.” And Updike would have known, having been one of the most famous—and the most envied—literary novelists of his generation, with a career that seemed to consist of nothing but the serene annual production of poems, stories, essays, and hardcovers that, with their dust jackets removed, turned out to have been bound and designed as a uniform edition. From the very beginning, Updike was already thinking about how his complete works would look on library shelves. That remarkable equanimity made an impression on the writer Nicholson Baker, who wrote in his book U & I:
I compared my awkward self-promotion too with a documentary about Updike that I saw in 1983, I believe, on public TV, in which, in one scene, as the camera follows his climb up a ladder at his mother’s house to put up or take down some storm windows, in the midst of this tricky physical act, he tosses down to us some startlingly lucid little felicity, something about “These small yearly duties which blah blah blah,” and I was stunned to recognize that in Updike we were dealing with a man so naturally verbal that he could write his fucking memoirs on a ladder!
Plenty of writers, young or old, might have wanted to switch places with Updike, although the first rule of inhabiting someone else’s life is that you don’t want to be a writer. (The Updike we see in Adam Begley’s recent biography comes across as more unruffled than most, but all those extramarital affairs in Ipswich must have been exhausting.) Writing might seem like an attractive kind of celebrity: you can inspire fierce devotion in a small community of fans while remaining safely anonymous in a restaurant or airport. You don’t even need to go as far as Thomas Pynchon: how many of us could really pick Michael Chabon or Don DeLillo or Cormac McCarthy out of a crowd? Yet that kind of seclusion carries a psychological toll as well, and I suspect that the daily life of any author, no matter how rich or acclaimed, looks much the same as any other. If you want to know what it’s like to be old, Malcolm Cowley wrote: “Put cotton in your ears and pebbles in your shoes. Pull on rubber gloves. Smear Vaseline over your glasses, and there you have it: instant old age.” And if you want to know what it’s like to be a novelist, you can fill a room with books and papers, go inside, close the door, and stay there for as long as possible while doing absolutely nothing that an outside observer would find interesting. Ninety percent of a writer’s working life looks more or less like that.
What kind of celebrity, then, do you really want to be? If celebrity is a mask, as Updike says, it might be best to make it explicit. Being a member of Daft Punk, say, would allow you to bask in the adulation of a stadium show, then remove your helmet and take the bus back to your hotel without any risk of being recognized. The mask doesn’t need to be literal, either: I have a feeling that Lady Gaga could dress down in a hoodie and ponytail and order a latte at any Starbucks in the country without being mobbed. The trouble, of course, with taking on the identity of a total unknown—Banksy, for instance—is that you’re buying the equivalent of a pig in a poke: you just don’t know what you’re getting. Ideally, you’d switch places with a celebrity whose life has been exhaustively chronicled, either by himself or others, so that there aren’t any unpleasant surprises. It’s probably best to also go with someone slightly advanced in years: as Solon says in Herodotus, you don’t really know how happy someone’s life is until it’s over, and the next best thing would be a person whose legacy seems more or less fixed. (There are dangers there, too, as Bill Cosby knows.) And maybe you want someone with a rich trove of memories of a life spent courting risk and uncertainty, but who has since mellowed into something slightly more stable, with the aura of those past accomplishments still intact.
You also want someone with the kind of career that attracts devoted collaborators, which is the only kind of artistic wealth that really counts. But you don’t want too much fame or power, both of which can become traps in themselves. In many respects, then, what you’d want is something close to the life of half and half that Lin Yutang described so beautifully: “A man living in half-fame and semi-obscurity.” Take it too far, though, and you start to inch away from whatever we call celebrity these days. (Only in today’s world can an otherwise thoughtful profile of Brie Larson talk about her “relative anonymity.”) And there are times when a touch of recognition in public can be a welcome boost to your ego, as it is for Sally Field in Soapdish, as long as you’re accosted by people with the same basic mindset, rather than those who just recognize you from Instagram. You want, in short, to be someone who can do pretty much what he likes, but less because of material resources than because of a personality that makes the impossible happen. You want to be someone who can tell an interviewer: “Throughout my life I have been able to do what I truly love, which is more valuable than any cash you could throw at me…So long as I have a roof over my head, something to read and something to eat, all is fine…What makes me so rich is that I am welcomed almost everywhere.” You want to be Werner Herzog.
The old switcheroo
Note: Every Friday, The A.V. Club, my favorite pop cultural site on the Internet, throws out a question to its staff members for discussion, and I’ve decided that I want to join in on the fun. This week’s topic: “What makes a great trailer?”
A few years ago, in a post about The Cabin in the Woods, which is one of a small handful of recent films I still think about on a regular basis, I wrote:
If there’s one thing we’ve learned about American movie audiences over the past decade or so, it’s that they don’t like being surprised. They may say that they do, and they certainly respond positively to twist endings, properly delivered, within the conventions of the genre they were hoping to see. What they don’t like is going to a movie expecting one thing and being given something else. And while this is sometimes a justifiable response to misleading ads and trailers, it can also be a form of resentment at having one’s expectations upended.
I went on to quote a thoughtful analysis from Box Office Mojo, which put its finger on why the movie scored so badly with audiences:
By delivering something much different, the movie delighted a small group of audience members while generally frustrating those whose expectations were subverted. Moviegoers like to know what they are in for when they go to see a movie, and when it turns out to be something different the movie tends to get punished in exit polling.
And the funny thing is that you can’t really blame the audience for this. If you think of a movie primarily as a commercial product that you’ve paid ten dollars or more to see—which doesn’t even cover the ancillary costs of finding a babysitter and driving to and from the theater—you’re likely to be frustrated if it turns out to be something different from what you were expecting. This is especially the case if you only see a few movies a year, and doubly so if you avoid the reviews and base your decisions solely on trailers, social media, or the presence of a reliable star. In practice, this means that certain surprises are acceptable, while others aren’t. It’s fine if the genre you’re watching all but requires there to be a twist, even if it strains all logic or openly cheats. (A lot of people apparently liked Now You See Me.) But if the twist takes you out of the genre that you thought you were paying to see, viewers tend to get angry. Genre, in many ways, is the most useful metric for deciding where to put your money: if you pay to see an action movie or a romantic comedy or a slasher film, you have a pretty good sense of the story beats you’re going to experience. A movie that poses as one genre and turns out to be another feels like flagrant false advertising, and it leaves many viewers feeling ripped off.
As a result, it’s probably no longer possible for a mainstream movie to radically change in tone halfway through, at least not in a way that hasn’t been spoiled by trailers. Few viewers, I suspect, went into From Dusk Till Dawn without knowing that a bunch of vampires were coming, and a film like Psycho couldn’t be made today at all. (Any attempt to preserve the movie’s secrets in the ads would be seen, after the fact, as a tragic miscalculation in marketing, as many industry insiders thought it was for The Cabin in the Woods.) There’s an interesting exception to this rule, though, and it applies to trailers themselves. Unless it’s for something like The Force Awakens, a trailer, by definition, isn’t something you’ve paid to see: you don’t have any particular investment in what it’s showing you, and it’s only going to claim your attention for a couple of minutes. As a result, trailers can indulge in all kinds of formal experiments that movies can’t, and probably shouldn’t, attempt at feature length. For the most part, trailers aren’t edited according to the same rules as movies, and they’re often cut together by a separate team of editors who are looking at the footage using a very different set of criteria. And as it turns out, one of the most reliable conventions of movie trailers is the old switcheroo: you start off in one genre, then shift abruptly to another, often accompanied by a needle scratch or ominous music cue.
In other words, trailers frequently try to appeal to audiences using exactly the kind of surprise that the movies themselves can no longer provide. Sometimes a trailer starts off realistically, only to introduce monsters or aliens, as Cloverfield and District 9 did so memorably, and trailers never tire of the gimmick of giving us what looks like a romantic comedy before switching into thriller mode. The ultimate example, to my mind, remains Vanilla Sky, which is still one of my favorite trailers. When I saw it for the first time, the genre switcheroo wasn’t as overused as it later became, and the result knocked me sideways. By now, most of its tricks have become clichés in themselves, down to its use of “Solsbury Hill,” so maybe you’ll have to take my word for it when I say that it was unbelievably effective. (In some ways, I wish the movie, which I also love, had followed the trailer’s template more closely, instead of tipping its hand early on about the weirdness to come.) And I suspect that such trailers, with their ability to cross genre boundaries, represent a kind of longing by directors for the sorts of films that they’d really like to make. The logic of the marketplace has made it impossible for such surprises to survive in the finished product, but a trailer can serve as a sort of miniature version of what it might have been under different circumstances. This isn’t always true: in most cases, the studio just cuts together a trailer for the movie that it wishes the director had made, rather than the one that he actually delivered. But every now and then, a great trailer can feel like a glimpse of a movie’s inner, secret life, even if it turns out that it was all a dream.
Multiple personalities
When I was in my early twenties, I was astonished to learn that “One,” “Coconut,” the soundtrack to The Point, and “He Needs Me”—as sung by Shelley Duvall in Popeye and, much later, in Punch-Drunk Love—were all written by the same man, who also sang “Everybody’s Talkin'” from Midnight Cowboy. (This doesn’t even cover “Without You” or “Jump Into the Fire,” which I discovered only later, and it also ignores some of the weirder detours in Harry Nilsson’s huge discography.) At the time, I was reminded of Homer Simpson’s response when Lisa told him that bacon, ham, and pork chops all came from the same animal: “Yeah, right, Lisa. A wonderful, magical animal.” Which is exactly what Nilsson was. But it’s also the kind of diversity that arises from decades of productive, idiosyncratic work. Nilsson was a facile songwriter with a lot of tricks up his sleeve, as he notes in an interview in the book Songwriters on Songwriting:
Most [songs] I find you can write in less time than it takes to sing them. The concept, if there is a concept, or the hook, is all you’re concerned with. Because you know you can go back and fill in the pieces. If you get a front line and a punch line, it’s a question of just filling in the missing bits.
And given Nilsson’s diverse, prolific output, it shouldn’t come as a surprise that I encountered him in so many different guises before realizing that they were all aspects of a single creative personality.
Of course, not every career generates this kind of enticing randomness. Nilsson occupied a curious position for much of his life, stuck somewhere halfway between superstardom and seclusion, and it freed him to make a long series of peculiar choices. When other artists end up in the same position, it’s often less by choice than by necessity. When you look at the résumé of a veteran supporting actor or working writer, you usually find that it resists easy categorization, since each credit resulted from a confluence of circumstances that may never be repeated. A glance at the filmography of any character actor inspires moment after moment of recognition, as you realize, for instance, that the same guy who played Mr. Noodle on Sesame Street was also the dad in Rachel Getting Married and TARS in Interstellar. A few artists have the luxury of shaping careers that seem all of a piece, but others aren’t all that interested in it, or find that their body of work is determined more by external factors. Most actors aren’t in a position to turn down a paycheck, and learning how and why they took one role and not another is part of what makes Will Harris’s A.V. Club interviews in “Random Roles” so fascinating. When you’re at the constant mercy of trends and casting agents, you can end up with a career that looks like it should belong to three different people. And as someone like Matthew McConaughey can tell you, that goes for stars as well.
It’s particularly true of actresses. I’ve spoken here before of the starlet’s dilemma, in which young actresses are required to balance the need to extend their shelf life as ingenues for a few more seasons against the very different set of choices required to sustain a career over decades. In many cases, the decisions that make sense now, like undergoing cosmetic surgery, can come back to haunt them later, but the pressure to extend their prime earning years is immense, and it’s no surprise that few manage to navigate the pitfalls that Hollywood presents. I was reminded of this while leafing—never mind why—through the latest issue of Allure, which features Jessica Alba on its cover. Alba has recently begun a second act as the head of her own consumer goods company, and she seems far happier and more satisfied in that role than she ever was as an actress: she admits that she tried to be what everyone else wanted her to be, and she accepted roles and made choices without a larger plan in mind. The result, sadly, was a career without shape or character, determined by an industry that could never decide whether Alba was best suited for comedy, romance, or action. I don’t think any of her movies will still be watched twenty years from now, and I expect that we’ll be surprised one day to remember that the founder of the Honest Company was also a movie star, in the way it amuses us to reflect that Martha Stewart used to be a model.
So how do you end up with a career more like Nilsson’s and less like Alba’s, given the countless uncontrollable factors that can govern a life in the arts? You can begin, perhaps, by remembering that an artist, like any human being, will play many roles, and not all of them are going to be consistent. When you look back at what you’ve done, it can be hard to find any particular shape, aside from what was determined by the needs of the moment, and it may even be difficult to recognize the person who thought that a particular project was a good idea—if you had any choice in the matter at all. (When I look at my own career, I find that it divides neatly in two, with one half in science fiction and the other in suspense, with no overlap between them whatsoever, a situation that was created almost entirely by the demands of the market.) But if you need to wear multiple hats, or even multiple personalities, you can at least strive to make all of them interesting. Consistency, as Emerson puts it, is the hobgoblin of little minds, and it’s an equally elusive goal in the arts: the only way to be consistent is to be dependably mediocre. The life you get by staying true to yourself in the face of external pressure will be more interesting than the one that results from a perfect plan. It can even be easier to have two careers than one. And if you try too hard to make everything fit into a single frame, you might find that one is the loneliest number.
Pictures at an exhibition
Note: Every Friday, The A.V. Club, my favorite pop cultural site on the Internet, throws out a question to its staff members for discussion, and I’ve decided that I want to join in on the fun. This week’s topic: “What piece of art has actually stopped you in your tracks?”
“All art constantly aspires toward the condition of music,” Walter Pater famously said, but these days, it seems more accurate to say that all art aspires toward the condition of advertising. There’s always been a dialogue between the two, of course, and it runs in both directions, with commercials and print ads picking up on advances in the fine arts, even as artists begin to utilize techniques initially developed on Madison Avenue. Advertising is a particularly ruthless medium—you have only a few seconds to grab the viewer’s attention—and the combination of quick turnover, rapid feedback, and intense financial pressure allows innovations to be adapted and refined with blinding speed, at least within a certain narrow range. (There’s a real sense in which the hard lessons that Jim Henson, say, learned while shooting commercials for Wilkins Coffee are what made Sesame Street so successful.) The difference today is that the push for virality—the need to attract eyeballs in brutal competition with countless potential diversions—has superseded all other considerations, including the ability to grow and maintain an audience. When thousands of “content providers” are fighting for our time on equal terms, there’s no particular reason to remain loyal to any one of them. Everything is an ad now, and it’s selling nothing but itself.
This isn’t a new idea, and I’ve written about it here at length before. What really interests me, though, is how even the most successful examples of storytelling are judged by how effectively they point to some undefined future product. The Marvel movies are essentially commercials or trailers for the idea of a superhero film: every installment builds to a big, meaningless battle that serves as a preview for the confrontation in an upcoming sequel, and we know that nothing can ever truly upset the status quo when the studio’s slate of tentpole releases has already been announced well into the next decade. They aren’t bad films, but they’re just ever so slightly better than they have to be, and I don’t have much of an interest in seeing any more. (Man of Steel has plenty of problems, but at least it represents an actual point of view and an attempt to work through its considerable confusions, and I’d sooner watch it again than The Avengers.) Marvel is fortunate enough to possess one of the few brands capable of maintaining an audience, and it’s petrified at the thought of losing it with anything so upsetting as a genuine surprise. And you can’t blame anyone involved. As Christopher McQuarrie aptly puts it, everyone in Hollywood is “terribly lost and desperately in need of help,” and the last thing Marvel or Disney wants is to turn one of the last reliable franchises into anything less than a predictable stream of cash flows. The pop culture pundits who criticize it—many of whom may not have jobs this time next year—should be so lucky.
But it’s unclear where this leaves the rest of us, especially with the question of how to catch the viewer’s eye while inspiring an engagement that lasts. The human brain is wired in such a way that the images or ideas that seize its attention most easily aren’t likely to retain it over the long term: the quicker the impression, the sooner it evaporates, perhaps because it naturally appeals to our most superficial impulses. Which only means that it’s worth taking a close look at works of art that both capture our interest and reward it. It’s like going to an art gallery. You wander from room to room, glancing at most of the exhibits for just a few seconds, but every now and then, you see something that won’t let go. Usually, it only manages to intrigue you for the minute it takes to read the explanatory text beside it, but occasionally, the impression it makes is a lasting one. Speaking from personal experience, I can think of two revelatory moments in which a glimpse of a picture out of the corner of my eye led to a lifelong obsession. One was Cindy Sherman’s Untitled Film Stills; the other was the silhouette work of Kara Walker. They could hardly be more different, but both succeed because they evoke something to which we instinctively respond—movie archetypes and clichés in Sherman’s case, classic children’s illustrations in Walker’s—and then force us to question why they appealed to us in the first place.
And they manage to have it both ways to an extent that most artists would have reason to envy. Sherman’s film stills both parody and exploit the attitudes that they meticulously reconstruct: they wouldn’t be nearly as effective if they didn’t also serve as pin-ups for readers of Art in America. Similarly, Walker’s cutouts fill us with a kind of uneasy nostalgia for the picture books we read growing up, even as they investigate the darkest subjects imaginable. (They also raise fascinating questions about intentionality. Sherman, like David Lynch, can come across as a naif in interviews, while Walker is closer to Michael Haneke, an artist who is nothing if not completely aware of how each effect was achieved.) That strange combination of surface appeal and paradoxical depth may be the most promising angle of attack that artists currently have. You could say much the same about Vijith Assar’s recent piece for McSweeney’s about ambiguous grammar, which starts out as the kind of viral article that we all love to pass around—the animated graphics, the prepackaged nuggets of insight—only to end on a sweet sucker punch. The future of art may lie in forms that seize on the tools of virality while making us think twice about why we’re tempted to click the share button. And it requires artists of unbelievable virtuosity, who are able to exactly replicate the conditions of viral success while infusing them with a white-hot irony. It isn’t easy, but nothing worth doing ever is. This is the game we’re all playing, like it or not, and the artists who are most likely to survive are the ones who can catch the eye while also burrowing into the brain.
Thinking inside the panel
Note: Every Friday, The A.V. Club, my favorite pop cultural site on the Internet, throws out a question to its staff members for discussion, and I’ve decided that I want to join in on the fun. This week’s topic: “What non-comic creative type do you want to see make a comic?”
Earlier this year, I discovered Radio: An Illustrated Guide, the nifty little manual written by cartoonist Jessica Abel and Ira Glass of This American Life. At the time, the book’s premise struck me as a subtle joke in its own right, and I wrote:
The idea of a visual guide to radio is faintly amusing in itself, particularly when you consider the differences between the two art forms: comics are about as nonlinear a medium as you can get between two covers, with the reader’s eye prone to skip freely across the page.
The more I think about it, though, the more it seems to me that these two art forms share surprising affinities. They’re both venerable mediums with histories that stretch back for close to a century, and they’ve both positioned themselves in relation to a third, invisible other, namely film and television. On a practical level, whether their proponents like it or not, both radio and comics have come to be defined by the ways in which they depart from what a movie or television show can do. In the absence of any visual cues, radio has to relentlessly manage the listener’s attention—”Anecdote then reflection, over and over,” as Glass puts it—and much of the grammar of the comic book emerged from attempts to replicate, transcend, and improve upon the way images are juxtaposed in the editing room.
And smart practitioners in both fields have always found ways of learning from their imposing big brothers, while remaining true to the possibilities that their chosen formats offer in themselves. As Daniel Clowes once said:
To me, the most useful experience in working in “the film industry” has been watching and learning the editing process. You can write whatever you want and try to film whatever you want, but the whole thing really happens in that editing room. How do you edit comics? If you do them in a certain way, the standard way, it’s basically impossible. That’s what led me to this approach of breaking my stories into segments that all have a beginning and end on one, two, three pages. This makes it much easier to shift things around, to rearrange parts of the story sequence.
Meanwhile, the success of a podcast like Serial represents both an attempt to draw upon the lessons of modern prestige television and a return to the roots of this kind of storytelling. Radio has done serialized narratives better than any other art form, and Serial, for all its flaws, was an ambitious attempt to reframe those traditions in a shape that spoke to contemporary listeners.
What’s a little surprising is that we haven’t witnessed a similar mainstream renaissance in nonfiction comics, particularly from writers and directors who have made their mark in traditional documentaries. Nonfiction has long been central to the comic format, of course, ranging from memoirs like Maus or Persepolis to more didactic works like Logicomix or The Cartoon History of the Universe. More recently, webcomics like The Oatmeal or Randall Munroe’s What If? have explained complicated issues in remarkable ways. What I’d really love to see, though, are original works of documentary storytelling in comic book form, the graphic novel equivalent of This American Life. You could say that the reenactments we see in works like Man on Wire or The Jinx, and even the animated segments in the films of Brett Morgen, are attempts to push against the resources to which documentaries have traditionally been restricted, particularly when it comes to stories set in the past—talking heads, archive footage, and the obligatory Ken Burns effect. At times, such reconstructions can feel like cheating, as if the director were bristling at having to work with the available material. Telling such stories in the form of comics instead would be an elegant way of circumventing those limitations while remaining true to the medium’s logic.
And certain documentaries would work even better as comics, particularly if they require the audience to process large amounts of complicated detail. Serial, with its endless, somewhat confusing discussions of timelines and cell phone towers, might have worked better as a comic book, which would have allowed readers to review the chain of events more easily. And a director like Errol Morris, who has made brilliant use of diagrams and illustrations in his published work, would be a natural fit. There’s no denying that some documentaries would lose something in the translation: the haunted face of Robert Durst in The Jinx has a power that can’t be replicated in a comic panel. But comics, at their best, are an astonishing way of conveying and managing information, and for certain stories, I can’t imagine anything more effective. We’re living in a time in which we seem to be confronting complex systems every day, and as a result, artists of all kinds have begun to address what Zadie Smith has called the problem of “how the world works,” with stories that are as much about data, interpretation, and information overload as about individual human beings. For the latter, narrative formats that can offer us a real face or voice may still hold an edge. But for many of the subjects that documentarians in film, television, or radio will continue to tackle, the comics may be the best solution they’ll ever have.
The big jar of rocks
Note: Every Friday, The A.V. Club, my favorite pop cultural site on the Internet, throws out a question to its staff members for discussion, and I’ve decided that I want to join in on the fun. This week’s topic: “What’s your absolute favorite piece of media so far this year?”
Earlier this week, while exploring the question of why we say someone is “in” a movie but “on” a television series, I got to thinking about the significance of the television set itself as a physical object. It’s hard to imagine a more ubiquitous appliance: a hotel room that contains nothing but a bed and a toilet seems bare without that blank screen in the corner, and we encounter one in every waiting room, bar, and airport terminal. Television is a utility, like heat or gas, and when we talk about channels or airwaves, we’re making a subconscious analogy to running water. Most households have one, to the point where its absence is worth mentioning, and choosing not to own a television amounts to a political or lifestyle statement. Or at least it once did. Back when I was in college, acquiring a television set was a big deal: it freed us from the tyranny of the common room, where I had to stake a claim to watch everything from The X-Files to that one time R.E.M. appeared on Sesame Street. Nowadays, fewer college kids seem to make owning a set a priority, and if they do, it’s more likely to be used for gaming. We have plenty of other screens that can do the same work as well or better, and in a decade or two, television sets may seem like dusty relics, kept out of nostalgia or inertia, like the radios or electric organs in the parlor of your grandmother’s house.
Yet that box still carries a psychological significance. It serves as a reminder, or even an advertisement, of the fact that television exists. We still switch it on out of habit, just because it’s there, and even those of us who don’t use it as a source of background noise are likely to flip through the channels as soon as we drop our bags in the aforementioned hotel room. The same qualities that make it seem vaguely anachronistic—the way it’s tethered to a bulky, immovable object, or how the flow of information goes only one way—are a big part of its lingering appeal. It doesn’t demand anything of us, except that we keep it in our line of sight, and it remains an ideal source of distraction and consolation for loners, agoraphobes, and new parents. Even as we migrate to other sources of content, television stands at the center of that solar system: maybe a quarter of the time I spend online is devoted to scrolling through news, criticism, episode recaps, or think pieces about the shows I like, which is more than I spend reading about politics, current events, or just about anything else. Even when that screen in the corner remains dark, it throws out its tendrils into whatever browser window happens to be open. It’s the Cthulhu of pop culture, invading the dreams of its followers even as it slumbers in the deep.
And you can see the impact on this blog. Over the last six months, the only film released this year to which I’ve devoted a complete post, somewhat hilariously, is Blackhat, which was seen by fewer moviegoers over its entire run than turn out for Jurassic World on a good afternoon. I haven’t written about any new books at all—the most recent novel I’ve finished reading, The Goldfinch, was published two years ago. As with most people in their middle thirties, my knowledge of current music is actively embarrassing. Yet over the same period, I’ve written extensively about television shows like Parks and Recreation, House of Cards, Glee, The Jinx, Unbreakable Kimmy Schmidt, The Vampire Diaries, Mad Men, Community, Game of Thrones, True Detective, and Hannibal. I don’t even think of myself as a television fan, at least not in the way I love the movies, but my shift in that direction has been as decisive as it was inevitable. A lot of this is due to the fact that I just don’t get out as much as I once did, except to bring my daughter to the playground or library. But if I’ve embraced television instead of becoming a better reader or catching up on music, it tells us something about how that medium insinuates itself so readily into the pockets of time that remain.
Television, after all, is infinitely expandable or compressible, as long as you extend its definition to other forms of streaming content. It can take up weeks of your life or a minute or two at a time. If you want to be told a novelistic story, it’s happy to oblige, but it’s equally capable of delivering a quick laugh or a snackable dose of diversion. And at a time when my life sometimes seems packed to bursting with the demands of work and parenthood, it’s glad to take up whatever bandwidth remains. I can give it as much, or as little, energy as I like. My wife listens to podcasts for much the same reason, and radio has certainly mastered the trick of rewarding a wide range of attentiveness: even the best radio programs encourage their listeners to do as little thinking for themselves as possible. And if I’ve stuck with television instead, it’s because it was there already, just waiting for me to turn the faucet. It reminds me of Stephen Covey’s parable of the jar of rocks, although with the opposite moral: even when it seems full, you can pour in a little more water until all the nooks and crannies are filled. Television has had decades of practice at filling us up to the brim, and lucky for me, it’s been a great six months. (For the record, the best things I’ve seen so far this year are The Jinx and the Mad Men finale.) But if television is the water in the jar, books, movies, and music are the rocks. This isn’t a value judgment, just an observation. And as Covey likes to say, if you don’t put the big rocks in first, you’ll never get them in at all.
A brand apart
Note: Every Friday, The A.V. Club, my favorite pop cultural site on the Internet, throws out a question to its staff members for discussion, and I’ve decided that I want to join in on the fun. This week’s topic: “What individual instances of product placement in movies and television have you found most effective?”
One of the small but consistently troublesome issues that every writer faces is what to do about brand names. We’re surrounded by brands wherever we look, and we casually think and talk about them all the time. In fiction, though, the mention of a specific brand often causes a slight blip in the narrative: we find ourselves asking if the character in question would really be using that product, or why the author introduced it at all, and if it isn’t handled well, it can take us out of the story. Which isn’t to say that such references don’t have their uses. John Gardner puts it well in The Art of Fiction:
The writer, if it suits him, should also know and occasionally use brand names, since they help to characterize. The people who drive Toyotas are not the same people who drive BMWs, and people who brush with Crest are different from those who use Pepsodent or, on the other hand, one of the health-food brands made of eggplant. (In super-realist fiction, brand names are more important than the characters they describe.)
And sometimes the clever deployment of brands can be another weapon in the writer’s arsenal, although it usually only works when the author already possesses a formidable descriptive vocabulary. Nicholson Baker is a master of this, and it doesn’t get any better than Updike in Rabbit is Rich:
In the bathroom Harry sees that Ronnie uses shaving cream, Gillette Foamy, out of a pressure can, the kind that’s eating up the ozone so our children will fry. And that new kind of razor with the narrow single-edge blade that snaps in and out with a click on the television commercials. Harry can’t see the point, it’s just more waste, he still uses a rusty old two-edge safety razor he bought for $1.99 about seven years ago, and lathers himself with an old imitation badger-bristle on whatever bar of soap is handy…
For the rest of us, though, I’d say that brand names are one of those places where fiction has to retreat slightly from reality in order to preserve the illusion. Just as dialogue in fiction tends to be more direct and concise than it would be in real life, characters should probably refer to specific brands a little less often than they really would. (This is particularly true when it comes to rapidly changing technology, which can date a story immediately.)
In movies and television, a prominently featured brand sets off a different train of thought: we stop paying attention to the story and wonder if we’re looking at deliberate product placement—if there’s even any question at all. Even a show as densely packed as The Vampire Diaries regularly takes a minute to serve up a commercial for the likes of AT&T MiFi, and shows like Community have turned paid brand integration into entire self-mocking subplots, while still accepting the sponsor’s money, which feels like a textbook example of having it both ways. Tony Pace of Subway explains their strategy in simple terms: “We are kind of looking to be an invited guest with a speaking role.” Which is exactly what happened on Community—and since it was reasonably funny, and it allowed the show to skate along for another couple of episodes, I didn’t really care. When it’s handled poorly, though, this ironic, winking form of product placement can be even more grating than the conventional kind. It flatters us into thinking that we’re all in on the joke, although it isn’t hard to imagine cases where corporate sponsorship, embedded so deeply into a show’s fabric, wouldn’t be so cute and innocuous. Even under the best of circumstances, it’s a fake version of irreverence, done on a company’s terms. And if there’s a joke here, it’s probably on us.
Paid or not, product placement works, at least on me, although often in peculiar forms. I drank Heineken for years because of Blue Velvet, and looking around my house, I see all kinds of products or items that I bought to recapture a moment from pop culture, whether it’s the Pantone mug that reminds me of a Magnetic Fields song or the Spyderco knife that carries the Hannibal seal of approval. (I’ve complained elsewhere about the use of snobbish brand names in Thomas Harris, but it’s a beautiful little object, even if I don’t expect to use it exactly as Lecter does.) If it’s kept within bounds, it’s a mostly harmless way of establishing a connection between us and something we love, but it always ends up feeling a little empty. Which may be why brand names sit so uncomfortably in fiction. Brands or corporations use many of the same strategies as art to generate an emotional response, except the former is constantly on message, unambiguous, and designed to further a specific end. It’s no accident that there are so many affinities between advertising and propaganda. A good work of art, by contrast, is ambiguous, open to multiple interpretations, and asks nothing of us aside from an investment of time—which is the opposite of what a brand wants. Fiction and brands are always going to live together, either because they’ve been paid to do so or because it’s an accurate reflection of our world. But we’re more than just consumers. And art, at its best, should remind us of this.
Under the covers
Note: Every Friday, The A.V. Club, my favorite pop cultural site on the Internet, throws out a question to its staff members for discussion, and I’ve decided that I want to join in on the fun. This week’s topic: “What great albums do you love that have ugly album covers?”
There are two kinds of readers in this world: those who keep the dust jackets on their books, and those who take them off. For most of my life, I’ve been in the latter camp. Whenever I’m out with a hardcover, I’ll usually leave the dust jacket behind, and although I’ll restore it as soon as the book is back on the shelf, I feel more comfortable carrying an anonymous spine in public. The reasons can be a little hard to parse, even for me. On a practical level, an unsecured dust jacket can be cumbersome and inconvenient: it has a way of slipping up or down whenever you’re reading a book that isn’t flat on a table, which leads to rumpled and torn corners. Really, though, it’s a matter of discretion. I don’t necessarily want to advertise what I’m reading for everyone else to see, and a book cover, among other things, is an advertisement, as well as an invitation to judge. Whenever we’re in close proximity to other readers, we all do it, but I prefer to avoid it entirely. Reading, for me, is an immersion in a private world, and what I do there is my own business. And this holds true whether or not the title could be construed as odd or embarrassing. (Only once in my adult life have I ever constructed a paper slipcover to conceal the cover of a book I was reading on the subway. It was the Bible.)
This is particularly true of covers that aggressively sell the contents to the point of active misrepresentation, which seems to be the case pretty often. As I’ve said before in reference to my own novels, a book’s cover art is under tremendous pressure to catch the buyer’s eye: frequently, it’s the only form of advertising a book ever gets. Hence the chunky fonts, embossed letters, and loud artwork that help a book stand out on shelves, but feel vaguely obscene when held in the hand. And the cover image need bear little resemblance to the material inside. Back in the heyday of pulp fiction, seemingly every paperback original was sold with the picture of a girl with a gun, even if the plot didn’t include any women at all. Hard Case Crime, the imprint founded by my friend and former colleague Charles Ardai, has made a specialty of publishing books with covers that triangulate camp, garishness, and allure, and sometimes it gleefully pushes the balance too far. I was recently tempted to pick up a copy of their reprint of Michael Crichton’s Binary, an early pulp thriller written under the pseudonym John Lange, but the art was about ten percent too lurid: I just couldn’t see myself taking it on a plane. There’s no question that it stood out in the store, but it made me think twice about taking it home.
In theory, once we’ve purchased a book, album, or movie, its cover’s work is done, as with any other kind of packaging. And yet we also have to live with it, even if the degree of that engagement varies a lot from one medium to another. In an ideal world, every book would come with two covers—one to grab the browser’s eye, the other to reside comfortably on a shelf at home—and in fact, a lot of movies take this approach: the boxes for my copies of The Godfather Trilogy and The Social Network, among others, come with a flimsy fake cover to display in stores, designed to be removed to present a more sober front at home. It’s not so different from the original function of a dust jacket, which was meant solely as a protective covering to be thrown away after the book was purchased. In practice, I don’t feel nearly the same amount of ambivalence toward ugly DVD or album covers as I do with books: the experience of watching a movie or listening to music is detachable from the container in which it arrives, while a book is all of a piece. That said, there are a couple of movies in my collection, like Say Anything, that I wish didn’t look so egregiously awful. And like a lot of Kanye fans, I always do a double take when the deliberately mortifying cover art for My Beautiful Dark Twisted Fantasy pops up in my iTunes queue.
But I don’t often think consciously about album art these days, any more than I can recall offhand how the box covers look for most of my movies. And there’s a sense in which such packaging has grown increasingly disposable. For many of us, the only time we’ll see the cover art for a movie or album is as a thumbnail on Amazon before we click on it to download. Even if we still buy physical discs, the jewel case is likely to be discarded or lost in a closet as soon as we’ve uploaded it in digital form. Covers have become an afterthought, and the few beautiful examples that we still see feel more like they’re meant to appeal to the egos of the artists or designers, as well as a small minority of devoted fans. But as long as physical media still survive, the book is the one format in which content and packaging will continue to exist as a unit, and although we’ll sometimes have to suffer through great books with bad covers, we can also applaud the volumes in which form and content tell a unified story. Pick up a novel like The Goldfinch, and you sense at once that you’re in good hands: regardless of how you feel about the book itself, the art, paper, and typesetting are all first-rate—it’s like leafing through a Cadillac. I feel happy whenever I see it on my shelf. And one of these days, I may even finish reading it.
The middle ground
Note: Every Friday, The A.V. Club, my favorite pop cultural site on the Internet, throws out a question to its staff members for discussion, and I’ve decided that I want to join in on the fun. This week’s topic: “What series are you waiting to dive into until you can do it all at once?”
Yesterday, while leafing through a recent issue of The New Yorker, I came across the following lines in a book review by James Wood:
[Amit Chaudhuri] has struggled, as an Indian novelist writing in English, with the long shadow of Salman Rushdie’s Booker-winning novel Midnight’s Children…and with the notion, established in part by the success of that book, that fictional writing about Indian life should be noisy, magical, hybrid, multivocally “exotic”—as busy as India itself…He points out that in the Bengali tradition “the short story and novella have predominated at least as much as the novel,” and that there are plenty of Indian writers who have “hoped to suggest India by ellipsis rather than by all-inclusiveness.”
Wood, who is no fan of the “noisy, magical, hybrid” form that so many modern novels have assumed, draws an apt parallel to “the ceaseless quest for the mimetically overfed Great American Novel.” But an emphasis on short, elliptical fiction has been the rule, rather than the exception, in our writing programs for years. And a stark division between big and small seems to run through most national literatures: think of Russia, for instance, where Eugene Onegin stands as the only real rival, as secular scripture, to the loose, baggy monsters of Tolstoy and Dostoyevsky.
Yet most works of art, inevitably, end up somewhere in the middle. If we don’t tend to write essays or dissertations about boringly midsized novels, which pursue their plot and characters for the standard three hundred pages or so, it’s for much the same reason that we don’t hear much about political moderates: we may be in the majority, but it isn’t news. Our attention is naturally drawn to the extreme, which may be more interesting to contemplate, but which also holds the risk that we’ll miss the real story by focusing on the edges. When we think about film editing, for instance, we tend to focus on one of two trends: the increasingly rapid rate of cutting, on the one hand, and the fetishization of the long take, on the other. In fact, the average shot length has been declining at a more or less linear rate ever since the dawn of the sound era, and over the last quarter of a century, it’s gone from about five seconds to four—a change that is essentially imperceptible. The way a movie is put together has remained surprisingly stable for more than a generation, and whatever changes of pace we do find are actually less extreme than we might expect from the corresponding technical advances. Digital techniques have made it easier than ever to construct a film out of very long or very short shots, but most movies still fall squarely in the center of the bell curve. And in terms of overall length, they’ve gotten slightly longer, but not by much.
That’s true of other media as well. Whenever I read think pieces about the future of journalism, I get the impression that we’ve been given a choice between the listicle and the longread: either we quickly skim a gallery of the top ten celebrity pets, or we devote an entire evening to scrolling through a lapbreaker like “Snow Fall.” Really, though, most good articles continue to fall in the middle ground; it’s just hard to quantify what makes the best ones stand out, and it’s impossible to reduce it to something as simple as length or format. Similarly, when it comes to what we used to call television, the two big stories of the last few years have been the dueling models of Vine and Netflix: it seems that either we can’t sit still for more than six seconds at a time, or we’re eager to binge on shows for hours and hours. There are obvious generational factors at play here—I’ve spent maybe six seconds total on Vine—but the division is less drastic than it might appear. In fact, I suspect that most of us still consume content in the way we always have, in chunks of half an hour to an hour. Mad Men was meant to be seen like this; so, in its own way, was Community, which bucked recent trends by releasing an episode per week. But it isn’t all that interesting to talk about how to make a great show that looks more or less like the ones that have come before, so we don’t hear much about it.
Which isn’t to say that the way we consume and think about media hasn’t changed. A few years ago, the idea of waiting to watch a television show until its entire run was complete might have seemed ridiculous; now, it’s an option that many of us seriously consider. (The only series I’ve ever been tempted to wait out like this was Lost, and it backfired: once I got around to starting it, the consensus was so strong that it went nowhere that I couldn’t bring myself to get past the second season.) But as I’ve said before, it can be a mistake for a television show—or any work of art—to proceed solely with that long game in mind, without the pressure of engaging with an audience from week to week. We’re already starting to see some of the consequences in Game of Thrones, which thinks entirely in terms of seasons, but often forgets to make individual scenes worth watching on a level beyond, “Oh, let’s see what this guy is doing.” But a show that focuses entirely on the level of the scene or moment can sputter out after a few seasons, or less: Unbreakable Kimmy Schmidt had trouble sustaining interest in its own premise for even thirteen episodes. The answer, as boring as it may be, lies in the middle, or in the narratives that think hard about telling stories in the forms that have existed before, and will continue to exist. The extremes may attract us. But it’s in the boring middle ground that the future of an art form is made.
An unfinished decade
Note: Every Friday, The A.V. Club, my favorite pop cultural site on the Internet, throws out a question to its staff members for discussion, and I’ve decided that I want to join in on the fun. This week’s topic: “What movie from our best films of the decade so far list doesn’t deserve to be on there?”
Toward the end of the eighties, Premiere Magazine conducted a poll of critics, directors, writers, and industry insiders to select the best films of the previous decade. The winners, in order of the number of votes received, were Raging Bull, Wings of Desire, E.T., Blue Velvet, Hannah and Her Sisters, Platoon, Fanny and Alexander, Shoah, Who Framed Roger Rabbit, and Do the Right Thing, with The Road Warrior, Local Hero, and Terms of Endearment falling just outside the top ten. I had to look up the list to retype it here, but I also could have reconstructed much of it from memory: a battered copy of Premiere’s paperback home video guide—which seems to have vanished from existence, along with its parent magazine, based on my inability, after five minutes of futile searching, to even locate the title online—was one of my constant companions as I started exploring movies more seriously in high school. And if the list contains a few headscratchers, that shouldn’t be surprising: the poll was held a few months before the eighties were technically even over, which isn’t close to enough time for a canon to settle into a consensus.
So how would an updated ranking look? The closest thing we have to a more recent evaluation is the latest Sight & Sound critics’ poll of the best films ever made. If we pull out only the movies from the eighties, the top films are Shoah, Raging Bull, Blade Runner, Blue Velvet, Fanny and Alexander, A City of Sadness, Do the Right Thing, L’Argent, The Shining, and My Neighbor Totoro, followed closely by Come and See, Distant Voices, Still Lives, and Once Upon a Time in America. There’s a degree of overlap here, and Raging Bull was already all but canonized when the earlier survey took place, but Wings of Desire, which once came in second, is nowhere in sight, its position taken by a movie—Blade Runner—that didn’t even factor into the earlier conversation. The Shining received the vote of just a single critic in the Premiere poll, and at the time it was held, My Neighbor Totoro wouldn’t be widely seen outside Japan for another three years. Still, if there’s a consistent pattern, it’s hard to see, aside from the obvious point that it takes a while for collective opinion to stabilize. Time is the most remorseless, and accurate, critic of them all.
And carving up movies by decade is an especially haphazard undertaking. A decade is an arbitrary division, much more so than a single year, in which the movies naturally engage in a kind of accidental dialogue. It’s hard to see the release date of Raging Bull as anything more than a quirk of the calendar: it’s undeniably the last great movie of the seventies. You could say much the same of The Shining. And there’s pressure to make any such list conform to our idea of what a given decade was about. The eighties, at least at the time, were seen as a moment in which the auteurism of the prior decade was supplanted by a blockbuster mentality, encouraged, as Tony Kushner would have it, by an atmosphere of reactionary politics, but of course the truth is more complicated. Blue Velvet harks back to the fifties, but the division at its heart feels like a product of Reaganism, and the belated ascent of Blade Runner is an acknowledgment of the possibilities of art in the era of Star Wars. (As an offhand observation, I’d say that we find it easier to characterize decades if their first years happen to coincide with a presidential election. As a culture, we know what the sixties, eighties, and aughts were “like” far more than the seventies or nineties.)
So we should be skeptical of the surprising number of recent attempts to rank works of art when the decade in question is barely halfway over. This week alone, The A.V. Club did it for movies, while The Oyster Review did it for books, and even if we discount the fact that we have five more years of art to anticipate, such lists are interesting mostly in the possibilities they suggest for later reconsideration. (The top choices at The A.V. Club were The Master, A Separation, The Tree of Life, Frances Ha, and The Act of Killing, and looking over the rest of the list, about half of which I’ve seen, I’d have to say that the only selection that really puzzled me was Haywire.) As a culture, we may be past the point where a consensus favorite is even possible: I’m not sure if any one movie occupies the same position for the aughts that Raging Bull did for the eighties. If I can venture one modest prediction, though, it’s that Inception will look increasingly impressive as time goes on, for much the same reason as Blade Runner does: it’s our best recent example of an intensely personal vision thriving within the commercial constraints of the era in which it was made. Great movies are timeless, but also of their time, in ways that can be hard to sort out until much later. And that’s true of critics and viewers, too.
The curated past of Mad Men
Note: Every Friday, The A.V. Club, my favorite pop cultural site on the Internet, throws out a question to its staff members for discussion, and I’ve decided that I want to join in on the fun. This week’s topic: “What has Mad Men inspired you to seek out?”
Now that Mad Men is entering its final stretch at last, it’s time to acknowledge a subtle but important point about the source of its appeal. This is my favorite television drama of all time. I’m not going to argue that it’s the greatest series ever—we’ll need another decade or two to make that appraisal with a cool head—but from one scene to the next, one episode after another, it’s provided me with more consistent pleasure and emotion than any show I can name. I’ve spoken before, perhaps too often, about what I like to call its fractal quality: the tiniest elements start to feel like emblems of the largest, and there’s seemingly no limit to how deep you can drill while analyzing even the smallest of touches. For proof, we need look no further than the fashion recaps by Tom and Lorenzo, which stand as some of the most inspired television criticism of recent years. The choice of a fabric or color, the reappearance of a dress or crucial accessory, a contrast between the outfits of one character and another turn out to be profoundly expressive of personality and theme, and it’s a testament to the genius of both costume designer Janie Bryant and Matthew Weiner, the ultimate man behind the curtain.
Every detail in Mad Men, then, starts to feel like a considered choice, and we can argue over its meaning and significance for days. But that’s also true of any good television series. By definition, everything we see in a work of televised fiction is there because someone decided it should be, or didn’t actively prevent it from appearing. Not every showrunner is as obsessed with minutiae as Weiner is, but it’s invariably true of the unsung creative professionals—the art director, the costume designer, the craftsmen responsible for editing, music, cinematography, sound—whose contributions make up the whole. Once you’ve reached the point in your career where you’re responsible for a department in a show watched by millions, you’re not likely to achieve your effects by accident: even if your work goes unnoticed by most viewers, every prop or bit of business is the end result of a train of thought. I don’t have any doubt that the costume designers for, say, Revenge or The Vampire Diaries would, if asked, have as much to say about their craft as Janie Bryant does. But Mad Men stands alone in the current golden age of television in actually inspiring that kind of routine scrutiny for each of its aesthetic choices, all of which we’re primed to unpack for clues.
What sets it apart, of course, is its period setting. With a series set in the present day, we’re more likely to take elements like costume design and art direction for granted; it takes a truly exceptional creative vision, like the one we find in Hannibal, to encourage us to study those choices with a comparable degree of attention. In a period piece, by contrast, everything looks exactly as considered as it really is: we know that every lamp, every end table, every cigarette or magazine cover has been put consciously into place, and while we might appreciate this on an intellectual level with other shows, Mad Men makes us feel it. And its relatively recent timeframe makes those touches even more evident. When you go back further, as with a show like Downton Abbey, most of us are less likely to think about the decisions a show makes, simply because it’s more removed from our experience: only a specialist would take an interest in which kind of silverware, rather than another, Mrs. Hughes sets on the banquet table, and we’re likely to think of it as a recreation, not a creation. (This even applies to a series like Game of Thrones, in which it’s easy to take the world it makes at face value, at least until the seams start to show.) But the sixties are still close enough that we’re able to see each element as a choice between alternatives. As a result, Mad Men seems curated in a way that neither a contemporary nor a more remote show would be.
I’m not saying this to minimize the genuine intelligence behind Mad Men’s look and atmosphere. But it’s worth admitting that if we’re more aware of it than usual, it’s partially a consequence of that canny choice of period. Just as a setting in the recent past allows for the use of historical irony and an oblique engagement with contemporary social issues, it also encourages the audience to regard mundane details as if they were charged with significance. When we see Don Draper reading Bernard Malamud’s The Fixer, for instance, we’re inclined to wonder why, and maybe even check it out for ourselves. And many of us have been influenced by the show’s choices of fashion, music, and even liquor. But its real breakthrough lay in how those surface aspects became an invitation to read more deeply into the elements that mattered. Even if we start to pay less attention to brand names or articles of set dressing, we’re still trained to watch the show as if everything meant something, from a line of throwaway dialogue to Don’s lingering glance at Megan at the end of “Hands and Knees.” Like all great works of art, Mad Men taught us how to watch it, and as artists as different as Hitchcock and Buñuel understood, it knew that it could only awaken us to its deepest resonances by enticing us first with its surfaces. It turned us all into noticers. And the best way to honor its legacy is by directing that same level of attention onto all the shows we love.
The opening act dilemma
Note: Every Friday, The A.V. Club, my favorite pop cultural site on the Internet, throws out a question to its staff members for discussion, and I’ve decided that I want to join in on the fun. This week’s topic: “Have you ever gone to a concert just for the opener?”
Earlier this week, I described the initial stages of creating a brand, whether commercial or artistic, as a kind of charitable enterprise: you’ve got to be willing to lose money for years to produce anything with a chance of surviving. I was speaking primarily of investors and patrons, but of course, it’s also true of artists themselves. A career in the arts requires an enormous initial investment of time, energy, and money—at least in the form of opportunity cost, as you choose not to pursue more remunerative forms of making a living—and a major factor separating those who succeed from those who don’t is the amount of pain they’re willing to endure. David Mamet famously said that everyone gets a break in show business in twenty-five years: some get it at the beginning, others at the end, and all you can really control is how willing you are to stick around after everyone else has gone home. That’s always been true, but more recently, it’s led to a growing assumption that emerging artists should be willing, even eager, to give work away for free. With media of all kinds being squeezed on both sides by increasing competition and diminishing audiences, there’s greater pressure than ever to find cheap content, and the most reliable source has always been hungry unknowns desperate for any kind of exposure.
And that last word is an insidious one. Everybody wants exposure—who wouldn’t?—but its promise is often used to justify arrangements in which artists are working for nothing, or at a net loss, for companies that aren’t in it for charity. Earlier this month, McDonald’s initially declined to pay the bands scheduled to play at its showcase at South by Southwest, saying instead that the event would be “a great opportunity for additional exposure.” (This took the form of the performers being “featured on screens throughout the event, as well as possibly mentioned on McDonald’s social media accounts.”) When pressed on this, the company replied sadly: “There isn’t a budget for an artist fee.” Ultimately, after an uproar that canceled out whatever positive attention it might have expected, it backtracked and agreed to compensate the artists. And even if this all sort of went nowhere, it serves as a reminder of how craven even the largest corporations can be when it comes to fishing for free content. McDonald’s always seeks out the cheapest labor it can, cynically passing along the hidden human costs to the rest of society, so there’s no reason to expect it to be any different when it comes to music. As Mamet says of movie producers, whenever someone talks to you about “exposure,” what they’re really saying is: “Let me take that cow to the fair for you, son.”
That said, you can’t blame McDonald’s for seizing an opportunity when it saw one. If there are two groups of artists who have always been willing to work for free, it’s writers and musicians, and it’s a situation that has been all but institutionalized by how the industries themselves are structured. A few months ago, Billboard published a sobering breakdown of the costs of touring for various tiers of performers. For a headliner like Lady Gaga or Katy Perry, an arena performance can net something like $300,000, and even after the costs of production, crew, and transportation are deducted, it’s a profitable endeavor. But an opening act gets paid a flat fee of $15,000 or so, and when you subtract expenses and divide the rest between members of the band, you’re essentially paying for the privilege of performing. As Jamie Cheek, an entertainment business manager, is quoted as saying: “If you get signed to a major label, you’re going to make less money for the next two or three years than you’ve ever made in your life.” And it remains a gamble for everyone except the label itself. Over the years, I’ve seen countless opening acts, but I’d have trouble remembering even one, and it isn’t because they lacked talent. We’re simply less likely to take anything seriously if we haven’t explicitly paid for it.
That’s the opening act dilemma. And it’s worth remembering this if you’re a writer being bombarded with proposals to write for free, even for established publications, for the sake of the great god exposure. For freelancers, it’s created a race to the bottom, as they’re expected to work for less and less just to see their names in print. And we shouldn’t confuse this with the small presses that pay contributors in copies, if at all. These are labors of love, meant for a niche audience of devoted readers, and they’re qualitatively different from commercial sites with an eye on their margins. The best publications will always pay their writers as fairly as they can afford. Circulation for the handful of surviving print science-fiction magazines has been falling for years, for instance, but Analog and Asimov’s recently raised their rate per word by a penny or so. It may not sound like much, but it amounts to a hundred dollars or so that they didn’t need to give their authors, most of whom would gladly write for even less. Financially, it’s hard to justify, but as a sign of respect for their contributors, it speaks volumes, even as larger publications relentlessly cut their budgets for freelancers. As painful as it may be, you have to push back, unless you’re content to remain an opening act for the rest of your life. You’re going to lose money anyway, so it may as well be on your own terms. And if someone wants you to work for nothing now, you can’t expect them to pay you later.
Altered states of conscientiousness
Note: Every Friday, The A.V. Club, my favorite pop cultural site on the Internet, throws out a question to its staff members for discussion, and I’ve decided that I want to join in on the fun. This week’s topic: “What pop culture is best consumed in an altered state?”
When Bob Dylan first met the Beatles, the story goes, he was astonished to learn that they’d never used drugs. (Apparently, the confusion was all caused by a mondegreen: Dylan misheard a crucial lyric from “I Want to Hold Your Hand” as “I get high” instead of “I can’t hide.”) This was back in the early days, of course, and later, the Beatles would become part of the psychedelic culture in ways that can’t be separated from their greatest achievements. Still, it’s revealing that their initial triumphs emerged from a period of clean living. Drugs can encourage certain qualities, but musicianship and disciplined invention aren’t among them, and I find it hard to believe that Lennon and McCartney would have gained much, if anything, from controlled substances without that essential foundation—certainly not to the point where Dylan would have wanted to meet them in the first place. For artists, drugs are a kind of force multiplier, an ingredient that can enhance elements that are already there, but can’t generate something from nothing. As Norman Mailer, who was notably ambivalent about his own drug use, liked to say, drugs are a way of borrowing on the future, but those seeds can wither and die if they don’t fall on soil that has been prepared beforehand.
Over the years, I’ve read a lot written by or about figures in the drug culture, from Carlos Castaneda to Daniel Pinchbeck to The Electric Kool-Aid Acid Test, and I’m struck by a common pattern: if drugs lead to a state of perceived insight, it usually takes the form of little more than a conviction that everyone should try drugs. Drug use has been a transformative experience for exceptional individuals as different as Aldous Huxley, Robert Anton Wilson, and Steve Jobs, but it tends to be private, subjective, and uncommunicable. As such, it doesn’t have much to do with art, which is founded on its functional objectivity—that is, on its capacity to be conveyed more or less intact from one mind to the next. And it creates a lack of critical discrimination that can be dangerous to artists when extended over time. If marijuana, as South Park memorably pointed out, makes you fine with being bored, it’s the last thing artists need, since art boils down to nothing but a series of deliberate strategies for dealing with, confronting, or eradicating boredom. When you’re high, you’re easily amused, which makes you less likely to produce anything that can sustain the interest of someone who isn’t in the same state of chemical receptivity.
And the same principle applies to the artistic experience from the opposite direction. When someone says that 2001 is better on pot, that isn’t saying much, since every movie seems better on pot. Again, however, this has a way of smoothing out and trivializing a movie’s real merits. Kubrick’s film comes as close as any ever made to encouraging a transcendent state without the need of mind-altering substances, and his own thoughts on the subject are worth remembering:
[Drug use] tranquilizes the creative personality, which thrives on conflict and on the clash and ferment of ideas…One of the things that’s turned me against LSD is that all the people I know who use it have a peculiar inability to distinguish between things that are really interesting and stimulating and things that appear so in the state of universal bliss the drug induces on a good trip. They seem to completely lose their critical faculties and disengage themselves from some of the most stimulating areas of life.
Which isn’t to say that a temporary relaxation of the faculties doesn’t have its place. I’ll often have a beer while watching a movie or television show, and my philosophy here is similar to that of chef David Chang, who explains his preference for “the lightest, crappiest beer”:
Let me make one ironclad argument for shitty beer: It pairs really well with food. All food. Think about how well champagne pairs with almost anything. Champagne is not a flavor bomb! It’s bubbly and has a little hint of acid and is cool and crisp and refreshing. Cheap beer is, no joke, the champagne of beers.
And a Miller Lite—which I’m not embarrassed to proclaim as my beer of choice—pairs well with almost any kind of entertainment, since it both gives and demands so little. At minimum, it makes me the tiniest bit more receptive to whatever I’m being shown, not enough to forgive its flaws, but enough to encourage me to meet it halfway. For much the same reason, I no longer drink while working: even that little extra nudge can be fatal when it comes to evaluating whether something I’ve written is any good. And Kubrick, as usual, deserves the last word: “Perhaps when everything is beautiful, nothing is beautiful.”
Turning down the volume
Note: Every Friday, The A.V. Club, my favorite pop cultural site on the Internet, throws out a question to its staff members for discussion, and I’ve decided that I want to join in on the fun. This week’s topic: “What do you listen to when you’re working?” (As it happens, I’ve talked about this before, so this is a slightly revised version of a post that originally appeared on February 25, 2014.)
For years, I listened to music while I wrote. When I was working on my first few novels, I went so far as to put together playlists of songs that embodied the atmosphere or mood I wanted to evoke, or songs that seemed conducive to creating the proper state of mind, and there’s no question that a lot of other writers do the same. (If you spend any time on the writing forums on Reddit, you’ll encounter some variation of the question “What’s your writing playlist?” posted once every couple of days.) This may have been due to the fact that my first serious attempts at writing coincided with a period in my twenties when most of us are listening to a lot of music anyway. And it resulted in some unexpected pleasures, in the form of highly personal associations between certain songs and the stories I was writing at the time. I don’t think I’ll ever be able to listen to Eternal Youth by Future Bible Heroes without thinking of my novelette “The Boneless One,” since it provided the backdrop to the wonderful weeks I spent researching and writing that story, and much of the tone and feel of my novel Eternal Empire is deliberately indebted to the song “If I Survive” by Hybrid, which I’ve always felt was a gorgeous soundtrack waiting for a plot to come along and do it justice.
Yet here’s the thing: I don’t think that this information is of any interest at all to anyone but me. It might be interesting to someone who has read the stories and also knows the songs—which I’m guessing is a category that consists of exactly one person—but even then, I don’t know if the connection has any real meaning. Aside from novels that incorporate certain songs explicitly into the text, as we see in writers as different as Nick Hornby and Stephen King, a writer’s recollection of a song that was playing while a story was written is no different from his memory of the view from his writing desk: it’s something that the author himself may treasure, but it has negligible impact on the reader’s experience. If anything, it may be a liability, since it lulls the writer into believing that there’s a resonance to the story that isn’t there at all. Movies can, and do, trade on the emotional associations that famous songs evoke, sometimes brilliantly, but novels don’t work in quite the same way. Even if you go so far as to use the lyrics as an epigraph, or even as the title itself, the result is only the faintest of echoes, which doesn’t stop writers from trying. (It’s no accident that if you search for a song like Adele’s “Set Fire to the Rain” on Fanfiction.net, you’ll find hundreds of stories.)
This is part of the reason why I prefer to write in silence these days. This isn’t an unbreakable rule: during the rewrite, I’ll often cue up a playlist of songs that I’ve come to think of as my revision music, if only because they take me back to the many long hours I spent as a teenager rewriting stories late into the night. (As it happens, they’re mostly songs from the B-sides collection Alternative by the Pet Shop Boys, the release of which coincided almost exactly with my first extended forays into fiction. Nostalgia, here as everywhere else, can be a powerful force.) During my first drafts, though, I’ve found that it’s better to keep things quiet. Even for Eternal Empire, which was the last of my novels to boast a soundtrack of its own, I ended up turning the volume so low that I could barely hear it, and I finally switched it off altogether. There’s something to be said for silence as a means of encouraging words to come and fill that empty space, and this is as true when you’re seated at your desk as when you’re taking a walk. Music offers us an illusion of intellectual and emotional engagement when we’re really just passively soaking up someone else’s feelings, and the gap between song and story is so wide that I no longer believe that the connection is a useful one.
This doesn’t mean that music doesn’t have a place in a writer’s life, or that you shouldn’t keep playing it if that’s the routine you’ve established. (As it happens, I’ve spent much of my current writing project listening to Reflektor by Arcade Fire.) But I think it’s worth restoring it to its proper role, which is that of a stimulus for feelings that ought to be explored when the music stops. The best art, as I’ve noted elsewhere, serves as a kind of exercise room for the emotions, a chance for us to feel and remember things that we’ve never felt or that we’ve tried to forget. Like everyone else, I’ll often hear a song on the radio or on an old playlist, like “Two-Headed Boy Part 2” by Neutral Milk Hotel, that reminds me of a period in my life I’ve neglected, or a whole continent of emotional space that I’ve failed to properly navigate. That’s a useful tool, and it’s one that every writer should utilize. The best way to draw on it, though, isn’t to play the song on an endless loop, but to listen to it once, turn it off, and then try to recapture those feelings in the ensuing quiet. If poetry, as Wordsworth said, is emotion recollected in tranquility, then perhaps fiction is music recollected—or reconstructed—in silence. If you’ve done it right, the music will be there. But it only comes after you’ve turned the volume down.
To be young was very heaven
Note: Every Friday, The A.V. Club, my favorite pop cultural site on the Internet, throws out a question to its staff members for discussion, and I’ve decided that I want to join in on the fun. This week’s question: “Assuming the afterlife exists, in what fictional world do you want to spend it?”
Years ago, whenever I thought about the possibility of an afterlife, I’d find myself indulging in a very specific fantasy. After my death, I’d wake up lying on a beach, alone, dressed for some reason in a dark suit pretty much like the one Kyle MacLachlan wore on Twin Peaks. The world in which I’d find myself would be more or less like our own, except maybe a little emptier, and as I explored it, I’d gradually come into contact with other departed souls who had awoken into much the same situation. We’d be curious about who or what had brought us here, but the answers wouldn’t be obvious, and we’d suspect that we were all part of some kind of ongoing test or game, the rules of which were still obscure. And we’d spend the rest of eternity trying to figure out what, exactly, we were supposed to be doing there. (I’m not the first to imagine something like this: Philip Jose Farmer’s Riverworld series is based on a similar premise. And much later, I was amazed to find the same image in the opening scenes of A Matter of Life and Death by Powell and Pressburger, in which the airman played by David Niven—who isn’t really dead, although he doesn’t know this yet—wakes up to find himself on a beach in Devon. He thinks he’s in heaven, and he’s pleased to meet a dog there: “I’d always hoped there would be dogs.”)
What’s funny, of course, is that what I’ve described isn’t so far from the world in which we’ve actually found ourselves. We’re all born into an ongoing story, its meaning unknown, and we’re left to explore it and figure out the answers together. The difference is that we enter it as babies, and by the time we’re old enough to have any agency, we’ve already started to take it for granted. There’s a window of time in childhood when everything in the world is exciting and new—I’m seeing my daughter go through it now—but most of us slowly lose it, as our lives become increasingly governed by assumptions and routine. That’s a necessary part of growing older: as a practical matter, if we faced every day as another adventure, we’d quickly burn ourselves out, although not before rendering ourselves unbearable to everyone else we knew. Yet there’s also a tremendous loss here, and we spend much of our adult lives trying to recapture that magic in a provisional fashion. Part of the reason I became a novelist was to consciously reinvigorate that sense of possibility, by laboriously renewing it one story at a time. (If writers often seem unduly obsessed with death, it’s partially because the field attracts people of that temperament: we’re engaged either in constructing a kind of literary immortality for ourselves or in increasing the number of potential lives we can experience in the limited time we have.)
On a similar level, when we fantasize about spending our afterlives in Narnia or the Star Trek universe, we’re really talking about recapturing that sense of childlike discovery with our adult sensibilities and capacities intact. This planet is as wondrous as any product of fantasy world-building, but by the time we have the freedom and ability to explore it, we’ve been tied down by other responsibilities, or simply by a circumscribed sense of the possibilities at our disposal. So much speculative fiction—or really fiction of any kind—is devoted to rekindling the sense of wonder that we should, in theory, be able to feel just by looking all around us, if we hadn’t gotten so used to it. Video games of the open world variety are designed to reignite some of that old curiosity, and there’s even an entire subreddit devoted to talking about the real world as if it were a massively multiplayer online game, with billions of active players. It’s a cute conceit, but it’s also a reminder of how little we take advantage of the potential that life affords. If this were a game, we’d be constantly exploring, talking to strangers, and poking our heads into whatever byways caught our interest. Instead, we tend to treat it as if we were on rails, except in those rare times when the range of possibilities seems to expand for everyone, as it did to Wordsworth during the French Revolution: “Bliss was it in that dawn to be alive, but to be young was very heaven.”
This inability to live outside our own limits explains why the problem of boredom is one that all creators of speculative afterlives, from Dante to Mark Twain, have been forced to confront, with mixed results. Even eternal bliss might start to feel like a burden if extended beyond the heat death of the universe, and to imagine that we’ll merely be content to surrender ourselves to that ecstasy also means giving up something precious about ourselves. Dante’s vision of purgatory is compelling because it turns the afterlife into a learning process of its own—a series of challenges we need to surmount to climb that mountain—and his conception of paradise is significantly less interesting, both poetically and theologically. But if we can start to see heaven as a place in which that sense of childlike discovery is restored, only with full maturity and understanding, it starts to feel a lot more plausible. And, more practically, it points a way forward right now. As Wordsworth says later in the same poem:
[They] were called upon to exercise their skill,
Not in Utopia, subterranean fields,
Or some secreted island, Heaven knows where!
But in the very world, which is the world
Of all of us,—the place where in the end
We find our happiness, or not at all!
That revolution, like most utopian ideals, didn’t end as most of its proponents would have wished. But in this life, in incremental ways, it’s the closest thing we have to paradise. Or to put it even more vividly: “Unless you change and become like little children, you will never enter the kingdom of heaven.”
Disquiet on the set
Note: Every Friday, The A.V. Club, my favorite pop cultural site on the Internet, throws out a question to its staff members for discussion, and I’ve decided that I want to join in on the fun. This week’s question: “What movie scene would you have wanted to be on set for?”
“The most exciting day of your life may well be your first day on a movie set,” William Goldman writes in Adventures in the Screen Trade, “and the dullest days will be all those that follow.” Which isn’t to say that filmmaking is more boring than any other kind of creative work. Vladimir Mayakovsky once compared the act of writing poetry to mining for radium—”The output an ounce, the labor a year”—and that’s more or less true of every art form. Moments of genuine excitement are few and far between; the bulk of an artist’s time is spent laying pipe and fixing the small, tedious, occasionally absorbing problems that arise from an hour of manic inspiration that occurred weeks or months before. What sets the movies apart is that their tedium is shared and very expensive, which makes it even less bearable. If star directors have an annoying habit of comparing themselves to generals, perhaps it’s because war and moviemaking have exactly one thing in common: they consist of hours of utter boredom punctuated by moments of sheer terror. (You could argue that the strange career of Werner Herzog can be explained by his determination to drive that boredom away, or at least to elevate the terror level as much as possible while still remaining insurable.)
In general, there are excellent reasons for members of the creative team who aren’t directly involved in the production process to keep away. Screenwriters don’t like being around the filming because it’s all too easy to get caught up in disputes between the actors and director, or to be asked to work for free. Editors like Walter Murch make a point of never visiting the set, because they need to view the resulting footage as objectively as possible: each piece has to be judged on its own terms, and it’s hard to cut something when you know how hard it was to get the shot. And while a serious film critic might benefit from firsthand knowledge of how movies are made, for most viewers, it’s unclear if that experience would add more than it detracts. The recent proliferation of special features on home video has been a mixed blessing: it can be fascinating to observe filmmakers at work, especially in departments like editing or sound that rarely receive much attention, but it can also detach us from the result. I’ve watched the featurettes on my copy of the Lord of the Rings trilogy so many times that I’ve started to think of the movies themselves almost as appendages to the process of their own making, which I’m sure isn’t what Peter Jackson would have wanted.
And a thrilling movie doesn’t necessarily make for a thrilling set, any more than a fun shoot is likely to result in anything better than Ocean’s 13. Contrary to what movies like Hitchcock or The Girl might have us think, I imagine that for most of the cast and crew, working on Psycho or The Birds must have been a little dull: Hitchcock famously thought that the creative work was essentially done once the screenplay was finished, and the act of shooting was just a way of translating the script and storyboards into something an audience would pay to see. (So much of Hitchcock’s own personality—the drollery, the black humor, the pranks—seems to have emerged as a way of leavening the coldly mechanical approach his philosophy as a director demanded.) Godard says that every cut is a lie, but it’s also a sigh: a moment of resignation as the action halts for the next setup, with each splice concealing hours of laborious work. The popularity of long tracking shots is partially a response to the development of digital video and the Steadicam, but it’s also a way of bringing filmmaking closer to the excitement of theater. I didn’t much care for Birdman, but I can imagine that it must have been an exceptionally interesting shoot: extended takes create a consciousness of risk, along with a host of technical problems that need to be solved, that doesn’t exist when film runs through the camera for only a few seconds at a time.
Filmmaking is most interesting as a spectator sport when that level of risk, which is always present as an undertone, rises in a moment of shared awareness, with everyone from the cinematographer to the best boy silently holding his or her breath. There’s more of this risk when movies are shot on celluloid, since the cost of a mistake can be calculated by the foot: Greta Gerwig, in the documentary Side by Side, talks about how seriously everyone takes it when there’s physical film, rather than video, rolling through the camera. There’s more risk on location than in the studio. And the risk is greatest of all when the scene in question is a crucial one, rather than a throwaway. Given all that, I can’t imagine a more riveting night on the set than the shooting of the opening of Touch of Evil: shot on celluloid, on location, using a crane and a camera the size of a motorcycle, with manual focusing, on a modest budget, and built around a technical challenge that can’t be separated from the ticking bomb of the narrative itself. The story goes that it took all night to get right, mostly because one actor kept blowing his lines, and the shot we see in the movie was the last take of all, captured just as the sun was rising. It all seems blessedly right, but it must have been charged with tension—which is exactly the effect it has on the rest of the movie. And you don’t need to have been there to appreciate it.
A resolution to read
Note: Every Friday, The A.V. Club, my favorite pop cultural site on the Internet, throws out a question to its staff members for discussion, and I’ve decided that I want to join in on the fun. This week’s question: “What’s your pop culture resolution for 2015?”
I can’t remember not being able to read. The thought of making time in my life for books has always seemed vaguely redundant, like making time to breathe. Growing up, I was often scolded for bringing a book to the dinner table, much as kids these days need to be told to put down their phones, and I read everywhere—in bed, on the train, sometimes even while walking down the street. Looking around now, though, I find that I rarely sit down for the kind of extended encounter that a good book demands. I read a lot of nonfiction this year, mostly in service of one writing project or another, but embarrassingly few novels: unless I’m forgetting a few, which I doubt, I only managed to get through Ada by Nabokov, Sweet Tooth by Ian McEwan, Netherland by Joseph O’Neill, and the first half of The Goldfinch, although I also found time to revisit Black Sunday and The Magus. I can chalk part of this up to the ongoing shift in my consumption of art and pop culture since the birth of my daughter: I watched a ton of television, didn’t play any video games, and only made it to theaters for Interstellar. And don’t even get me started on live music or theater.
But the relative absence of books from the picture feels different, and more troubling. In a sense, it’s only an extension of a trend that began long before my daughter came along. I spent my twenties living alone in New York, and I read a lot. After giving up on my first attempt at writing a novel, I spent the ensuing year reading everything I thought I should have read but hadn’t, reasoning that it was infinitely easier to get through Paradise Lost and War and Peace than to write even the most trivial piece of fiction on my own. I also had a regular commute on the subway for close to four years, during which I read thousands of pages—I even picked up a complete set of the Yale Shakespeare paperbacks because I could hold them in one hand, and made it all the way from Henry VI, Part I through The Two Noble Kinsmen. Working from home, and later my marriage, changed the dynamic considerably, since I wanted to spend my evenings in other ways. I still loved buying books, though, so I inevitably became more of a browser. And there are times when I think that my extended defense of browsing as a pursuit is less a meaningful argument in itself than a justification of my own habits. Browsing, by definition, demands very little; it has its rewards, but prolonged immersion and engagement aren’t among them.
To the extent that I have a resolution for the new year, then, it’s to reincorporate books, especially fiction, into my life in a more deliberate way. As much as I complain about not having any free time, with a toddler and an unfinished draft competing for every spare minute, that isn’t really true—there’s usually an hour or more each evening, after Beatrix has gone to bed and the day’s work is done, that I could spend with a book, rather than opening up my laptop while listening to a Lord of the Rings commentary track for the third time. There are countless novels, both old and new, screaming at me from the shelves: I’m a third of the way through James Salter’s A Sport and a Pastime, and I’ve been guiltily avoiding titles as different as The Group and Life: A User’s Manual. Part of me suspects that I’ve loved books for so long because I was able to take them for granted; when I shook my head over studies showing that a third of all adults in the United States haven’t read a book since high school, there was something smug about it, like being proud of being able to eat as much as you want because you were born with a good metabolism. But just as I’ve had to think more about diet and exercise than I did in my teens, I’ve also got to be conscious about staying healthy in other ways, starting with how I spend my time.
This also ties in with my other resolution for the year, which is to teach my daughter how to read. It’s a little premature: she’s only two, and I don’t want to shorten the necessary, and beautiful, stage when I’m reading to her aloud—or, even more crucially, telling her stories without any books at all. But I also want her to take reading for granted, just as I did, and in the face of greater challenges: the number of screens competing for her attention seems to grow by the day, and I can’t hold them back forever. Books are inevitably going to be a big part of her life; in this house, there’s no escaping them. And introducing them to her now, as we look at them side by side, feels like the greatest gift I can ever offer, even if it’s up to her to decide how much a part of her life they’ll be later on. (I can’t help thinking of the recent New Yorker profile of the meme king Emerson Spartz, who loved the Harry Potter series enough to found MuggleNet when he was twelve years old, only to declare a kind of dismissive war on books of all kinds as an adult.) In any case, if I do teach Beatrix how to read this year, it’ll be for my sake as much as for hers. I’ve lived among books like a fish lives in water, but it’s time that both of us really learned, or remembered, how to swim.
What I learned on the Street
Note: Every Friday, The A.V. Club, my favorite pop cultural site on the Internet, throws out a question to its staff members for discussion, and I’ve decided that I want to join in on the fun. This week’s question: “What do you remember learning from Sesame Street?”
When you’re a parent, one of the first things you discover is how difficult—or impossible—it can be to keep a small child on point. Saying that children are easily distracted is just another way of stating that they find everything equally interesting, or of equal importance, and that they haven’t yet developed the filters that allow adults to prioritize a particular issue at the expense of everything else. (Much of being an artist consists of restoring that kind of sensory omnivorousness, in which nothing, as Sherlock Holmes says, is so important as trifles.) Whenever my daughter opens a book, I never know where her eye will go first, and a big part of the pleasure of reading to her lies in trying to follow her train of thought. In Goodnight Moon, for instance, when we get to the picture of the doll’s house, she’ll point to it and say “Okay now.” I don’t know what she means by this, but it’s clear that I’m only getting a glimpse of a secondary narrative that she’s happily working through as we read the story itself, which consists both of the words on the page and her own tiny, private associations.
This is why I’ve started choosing picture books less for whatever they claim to be about than for the topics of conversation that they evoke. Richard Scarry, for instance, presents a miniature world on each double spread, which seems designed to simultaneously teach new words and suggest networks between ideas. (I’ll never forget how my niece pointed to a picture of a pig next to a bin of corncobs and said: “Maybe the pig wants to eat one corn.”) Scarry, like many of the greatest children’s artists, has a style that takes as much delight in incidentals as in the main line of the story, or whatever educational purpose the book allegedly has, and the more tactile the illustrations, the better. Beatrix is already curious about drawing, and the fact that she can make the connection between the pictures in the books she has and her crayons can only pay off later on. There’s been a lot of debate about whether reading a book on a tablet has the same benefits as traditional storytime, but I’m a little wary of it, if only because interposing a screen between you and the story makes its human origins less obvious.
And when it comes time for Beatrix to watch Sesame Street, I’ll probably get her one of the Old School compilations on DVD, which collect classic scenes and sketches from the show’s early seasons. Old School comes with a disclaimer that states: “These early Sesame Street episodes are intended for grownups and may not suit the needs of today’s preschool child.” Well, maybe: I don’t want to discount the ongoing, and highly valuable, research on how children learn, and I can’t entirely separate my feelings from nostalgia for what I watched growing up. Yet I still believe that the show’s overt educational value—the letters, the numbers, the shapes—was only part of the story, and not even the most important part. When we think of Sesame Street, we think first of the Muppets, whose physicality is a huge part of their appeal, but everything in the show’s initial period had an appealingly funky quality about it. The animations were made on a shoestring; the shorts might have been shot in somebody’s backyard; and even the set was designed to evoke the kind of grungy, everyday neighborhood that many children in the audience knew best, elevated by the magic of imagination and performance.
In short, the classic seasons of Sesame Street are as much about the process of their own making as whatever else they were designed to teach, and the lesson I took away from it most vividly was less about counting to twelve than what it might take to make a show like this myself. In its current incarnation, it probably does a better job of teaching kids the fundamentals, but watching Big Bird explore a digital background detaches us from the weird, incredibly appealing process that brings such stories to life. As David Thomson writes of Jim Henson: “He worked with the odd, the personal, the wild, and the homemade, and flourished in the last age before the computer…Henson was not just the entrepreneur and the visionary, but often the hand in the glove, the voice, and the tall man bent double, putting on the show.” Sesame Street is still wonderful, but it seems less likely to turn kids into puppeteers, which is as good a word as any for what I want Beatrix to be—if we take “puppeteer” simply as a curious character who sees a potential friend in a length of felt, or how a woman’s green coat might one day be a frog.
The monster in the mirror
Note: Every Friday, The A.V. Club, my favorite pop cultural site on the Internet, throws out a question to its staff members for discussion, and I’ve decided that I want to join in on the fun. This week’s question: “If you were a horror movie villain, what would be your hook?”
In horror movies, we’re supposed to relate to the victims, but some of the genre’s most enduring works implicate us into an uneasy identification with the monster. I’m not talking about the films that invite the audience to cheer as another mad slasher takes out a platoon of teenagers, or even more sophisticated examples like the original Halloween, which locks us into the killer’s eyes with its opening tracking shot. What I have in mind is something more like Norman Bates. Norman is “nutty as a fruitcake,” to use Roger Ebert’s memorable words, but he’s also immensely appealing and sympathetic in the middle sequence of Psycho, much more so than John Gavin’s square, conventional hero. The connection Norman has with Marion as she eats her sandwich in the parlor is real, or at least real enough to convince her to return the stolen money, and it fools us temporarily into thinking that this movie will be an adventure involving these two shy souls. Because what defines Norman isn’t his insanity, or even his mother issues, but his loneliness. As he says wistfully to Marion: “Twelve cabins, twelve vacancies. They moved away the highway.”
Which is only to say that in Norman, we’re confronted with a weird, distorted image of our own introversion, with his teenager’s room and Beethoven’s Eroica on the record player. Other memorable villains force us to confront other aspects of ourselves by taking these tendencies to their murderous conclusion. Hannibal Lecter is a strange case, since he’s so superficially seductive, and he was ultimately transformed into the hero of his own series. What he really represents, though, is aestheticism run amok. We’d all love to have his tastes in books, music, and food—well, maybe not entirely the latter—but they come at the price of his complete estrangement from all human connection, or an inability to regard other people as anything other than items on a menu. Sometimes, it’s literal; at others, it’s figurative, as he takes an interest in Will Graham or Clarice Starling only to the extent that they can relieve his boredom. Lecter, we’re told, eats only the rude, but “rude” can have two meanings, and for the most part, it ends up referring to those too lowly or rough to meet his own high standards. (Bryan Fuller, to his credit, has given us multiple reminders of how psychotic Lecter’s behavior really is.)
And if Lecter cautions us against the perversion of our most refined impulses, Jack Torrance represents the opposite: “The susceptible imagination,” as David Thomson notes, “of a man who lacks the skills to be a writer.” Along with so much else, The Shining is the best portrait of a writer we have on film, because we can all relate to Jack’s isolation and frustration. The huge, echoing halls of the Overlook are as good a metaphor as I’ve ever seen for writer’s block or creative standstill: you’re surrounded by gorgeous empty spaces, as well as the ghosts of your own ambitions, and all you can manage to do is bounce a tennis ball against the wall, again and again and again. There isn’t a writer who hasn’t looked at a pile of manuscript and wondered, deep down, if it isn’t basically the same as the stack of pages that Jack Torrance lovingly ruffles in his climactic scene with Wendy, and whenever I tell people what I’m working on at the moment, I can’t help but hear a whisper of Jack’s cheerful statement to Ullman: “I’m outlining a new writing project, and five months of peace is just what I want.”
There’s another monster who gets at an even darker aspect of the writer’s craft: John Doe in Seven. I don’t think there’s another horror movie that binds the process of its own making so intimately to the villain’s pathology: Seven is so beautifully constructed and so ingenious that it takes us a while to realize that John Doe is essentially writing the screenplay. Andrew Kevin Walker’s script was sensational enough to get him out of a job at Tower Records, but despite the moral center that Morgan Freeman’s character provides, it’s hard to escape the sense that the film delights more in its killer’s cleverness, which can’t be separated from the writer’s. Unlike Jack Torrance, John Doe is superbly good at what he does, and he’s frightening primarily as an example of genius and facility without heart. The impulse that pushes him to use human lives as pieces in his masterpiece of murder is only the absurdist conclusion of the tendency in so many writers, including me, to treat violence as a narrative tool, a series of marks that the plot needs to hit to keep the story moving. I’m not saying that the two are morally equivalent. But Seven—even in its final limitations, which Fincher later went on to explode in Zodiac—is still a scary film for any writer who ever catches himself treating life and death as a game.