Note: Every Friday, The A.V. Club, my favorite pop cultural site on the Internet, throws out a question to its staff members for discussion, and I’ve decided that I want to join in on the fun. This week’s topic: “What piece of art has actually stopped you in your tracks?”
“All art constantly aspires toward the condition of music,” Walter Pater famously said, but these days, it seems more accurate to say that all art aspires toward the condition of advertising. There’s always been a dialogue between the two, of course, and it runs in both directions, with commercials and print ads picking up on advances in the fine arts, even as artists begin to utilize techniques initially developed on Madison Avenue. Advertising is a particularly ruthless medium—you have only a few seconds to grab the viewer’s attention—and the combination of quick turnover, rapid feedback, and intense financial pressure allows innovations to be adapted and refined with blinding speed, at least within a certain narrow range. (There’s a real sense in which the hard lessons that Jim Henson, say, learned while shooting commercials for Wilkins Coffee are what made Sesame Street so successful.) The difference today is that the push for virality—the need to attract eyeballs in brutal competition with countless potential diversions—has superseded all other considerations, including the ability to grow and maintain an audience. When thousands of “content providers” are fighting for our time on equal terms, there’s no particular reason to remain loyal to any one of them. Everything is an ad now, and it’s selling nothing but itself.
This isn’t a new idea, and I’ve written about it here at length before. What really interests me, though, is how even the most successful examples of storytelling are judged by how effectively they point to some undefined future product. The Marvel movies are essentially commercials or trailers for the idea of a superhero film: every installment builds to a big, meaningless battle that serves as a preview for the confrontation in an upcoming sequel, and we know that nothing can ever truly upset the status quo when the studio’s slate of tentpole releases has already been announced well into the next decade. They aren’t bad films, but they’re just ever so slightly better than they have to be, and I don’t have much of an interest in seeing any more. (Man of Steel has plenty of problems, but at least it represents an actual point of view and an attempt to work through its considerable confusions, and I’d sooner watch it again than The Avengers.) Marvel is fortunate enough to possess one of the few brands capable of maintaining an audience, and it’s petrified at the thought of losing it with anything so upsetting as a genuine surprise. And you can’t blame anyone involved. As Christopher McQuarrie aptly puts it, everyone in Hollywood is “terribly lost and desperately in need of help,” and the last thing Marvel or Disney wants is to turn one of the last reliable franchises into anything less than a predictable stream of cash flows. The pop culture pundits who criticize it—many of whom may not have jobs this time next year—should be so lucky.
But it’s unclear where this leaves the rest of us, especially with the question of how to catch the viewer’s eye while inspiring an engagement that lasts. The human brain is wired in such a way that the images or ideas that seize its attention most easily aren’t likely to retain it over the long term: the quicker the impression, the sooner it evaporates, perhaps because it naturally appeals to our most superficial impulses. Which only means that it’s worth taking a close look at works of art that both capture our interest and reward it. It’s like going to an art gallery. You wander from room to room, glancing at most of the exhibits for just a few seconds, but every now and then, you see something that won’t let go. Usually, it only manages to intrigue you for the minute it takes to read the explanatory text beside it, but occasionally, the impression it makes is a lasting one. Speaking from personal experience, I can think of two revelatory moments in which a glimpse of a picture out of the corner of my eye led to a lifelong obsession. One was Cindy Sherman’s Untitled Film Stills; the other was the silhouette work of Kara Walker. They could hardly be more different, but both succeed because they evoke something to which we instinctively respond—movie archetypes and clichés in Sherman’s case, classic children’s illustrations in Walker’s—and then force us to question why they appealed to us in the first place.
And they manage to have it both ways to an extent that most artists would have reason to envy. Sherman’s film stills both parody and exploit the attitudes that they meticulously reconstruct: they wouldn’t be nearly as effective if they didn’t also serve as pin-ups for readers of Art in America. Similarly, Walker’s cutouts fill us with a kind of uneasy nostalgia for the picture books we read growing up, even as they investigate the darkest subjects imaginable. (They also raise fascinating questions about intentionality. Sherman, like David Lynch, can come across as a naif in interviews, while Walker is closer to Michael Haneke, an artist who is nothing if not completely aware of how each effect was achieved.) That strange combination of surface appeal and paradoxical depth may be the most promising angle of attack that artists currently have. You could say much the same about Vijith Assar’s recent piece for McSweeney’s about ambiguous grammar, which starts out as the kind of viral article that we all love to pass around—the animated graphics, the prepackaged nuggets of insight—only to end on a sweet sucker punch. The future of art may lie in forms that seize on the tools of virality while making us think twice about why we’re tempted to click the share button. And it requires artists of unbelievable virtuosity, who are able to exactly replicate the conditions of viral success while infusing them with a white-hot irony. It isn’t easy, but nothing worth doing ever is. This is the game we’re all playing, like it or not, and the artists who are most likely to survive are the ones who can catch the eye while also burrowing into the brain.
Every good writer knows that the more unusual the scenes and events of his story are, the slighter, more ordinary, the more typical his persons should be. Hence Gulliver is a commonplace little man and Alice is a commonplace little girl. If they had been more remarkable they would have wrecked their books. The Ancient Mariner himself is a very ordinary man. To tell how odd things struck odd people is to have an oddity too much; he who is to see strange sights must not himself be strange.
Aside from a handful of striking exceptions, a novel is a linear form of storytelling, designed to be read in sequence from first page to last. Yet writers are irresistibly drawn to metaphors from the visual arts to describe what they do, in part because they naturally think in terms of the shape of the work as a whole. As readers, when we refer to a novel as a tapestry or a mosaic, it’s less about our experience of it in the moment than the impression it creates over time. This shape is impossible to describe, but when we’re finished with the story, we can sort of hold it in our heads, at least temporarily. It reminds me a little of Borges’s definition of the divine mind:
The steps a man takes from the day of his birth until that of his death trace in time an inconceivable figure. The divine mind intuitively grasps that form immediately, as men do a triangle.
One of the pleasures of a perfectly constructed work of fiction is that it allows us to feel, however briefly, what it might be like to see life as a whole. And although the picture grows dim once we’ve put down the book and picked up another, we’re often left with a sense of the book as a complex shape that somehow exists all at once.
It’s tempting to divide books into groups based on the visual metaphors that come most readily to mind. There are stories that feel like a seamless piece of fabric, which may be the oldest analogy for fiction that we have: the words text and textile emerge from the same root. Other stories gain most of their power from the juxtaposition of individual pieces. They remind us of a mosaic, or, in modern terms, a movie assembled from many distinct pieces of film, so that the combination of two shots creates information that neither one had in isolation. The choice between one strategy and the other is often a function of length or point of view. A short novel told with a single strong voice will often feel like a continuous whole, as The Great Gatsby does, while a story that shifts between perspectives and styles, like one of Faulkner’s novels, seems more like a collection of pieces. And it’s especially interesting when one mode blurs into the other. Ian McEwan’s Atonement begins as a model of seamless storytelling, with a diverse cast of characters united by a smooth narrative voice, but it abruptly switches to the juxtaposition strategy halfway through. And sometimes a mosaic can be rendered so finely that it comes back around to fabric again. In his review of Catch-22, which is essentially a series of comic juxtapositions, Norman Mailer observed: “It reminds one of a Jackson Pollock painting eight feet high, twenty feet long. Like yard goods, one could cut it anywhere.”
My own work can be neatly categorized by length: my short stories do their best to unfold as a continuous stream of action, while my novels proceed by the method of juxtaposition, intercutting between three or more stories. I’ve spoken before of how deeply influenced I’ve been by the book and movie of L.A. Confidential, which cut so beautifully between multiple protagonists, and I’ve followed that model almost to a fault. From a writer’s point of view, this approach offers clear advantages, as well as equally obvious pitfalls. Each subplot should be compelling in itself, but they all gain an additional level of interest by being set against the others, and the ability to cut between stories allows you to achieve effects of rhythm or contrast that would be hard to manage with a single narrative thread. At the same time, there’s a danger that the structure of the overall story—with its logic of intercutting—will produce scenes that don’t justify their existence on their own. You can see both extremes on television shows with big ensemble casts. Mad Men handled those transitions beautifully: within each episode’s overarching plot, there were numerous self-contained scenes that could have been presented in any order, and much of their fun and power emerged from Matthew Weiner’s arrangement of those vignettes. Conversely, on Game of Thrones, there are countless scenes that seem to be there solely to remind us that a certain character exists. The show grasps the grammar of intercutting, but not the language, and it’s no accident that many of its best episodes were the ones that focused exclusively on one location.
And I haven’t been immune to the hazards of multiple plots, or the way they can impose themselves on the logic of the story. When I read Chapter 30 of Eternal Empire, for instance, I have trouble remembering why it seemed necessary. Nothing much happens here: Wolfe interrogates a suspect, but gets no useful information, and you could lift out the entire chapter without affecting the rest of the plot whatsoever. It’s been a long time since I wrote it, but I have the uneasy feeling that I inserted a chapter here solely for structural reasons—I needed a pause in Maddy and Ilya’s stories, and Wolfe hadn’t had a scene for a while, so I had to give her something to do without advancing the story past the point the other subplots had reached. (I can almost see myself with a stack of notecards, shuffling and rearranging them only to realize that I needed a chapter here to avoid upsetting the structure elsewhere.) I did my best to inject the scene with whatever interest I could, mostly by making the interrogation itself as amusing as possible, but frankly, it doesn’t work. In the end, the best thing I can say about this chapter is that it’s short, and if I had the chance to write this novel all over again, I’d either find a way to cut it or, more likely, revise it to advance the story in a more meaningful way. There’s nothing wrong with having a chapter serve as a pause in the action, and if nothing else, the next stretch of chapters is pretty strong. But as it stands, this is less a real chapter than a blank space created by the places where the other parts meet. And I wish I’d come up with a slightly better piece…
Even when I was mistaken, I’ve always been resourceful.
A few weeks ago, I picked up a worn paperback copy of The Art of Scientific Investigation by W.I.B. Beveridge, which I expect will join the short list of books on creativity that I’ll never get tired of reading. It was first published in 1950, but it’s still in print, and it isn’t hard to understand why. Beveridge’s book is essentially a collection of recipes or approaches for coming up with ideas, with meaty chapters devoted to the roles of reason, intuition, chance, and imagination, and it’s loaded with concrete, practical advice. Take the section on what Beveridge calls the transfer method:
Sometimes the central idea on which an investigation hinges is provided by the application or transfer of a new principle or technique which has been discovered in another field. The method of making advances in this way will be referred to as the “transfer” method in research. This is probably the most fruitful and the easiest method in research, and the one most employed in applied research. It is, however, not to be in any way despised. Scientific advances are so hard to achieve that every useful stratagem must be used.
The italics are mine. Success or failure in resolving any problem often boils down to a knowledge of the available tools, and this often requires familiarity with advances in apparently unrelated fields. One of my favorite recent examples comes from the field of adaptive optics. When astronomers are viewing an object through the earth’s atmosphere, which distorts light, they’ll shine a laser in the same direction. When the light from this artificial “guide star” returns, they can measure the distortion, then use that data to adjust their telescope to cancel out the aberrations, which gives them a much more accurate view of the object under observation. The physicist Eric Betzig took the idea of a guide star and applied it to microscopy, which also has to deal with optical information being warped by an intervening medium, in this case organic tissue. His technique, taking its cue from astronomy, creates a guide star by focusing light from the microscope on a fluorescent object in the sample, like an embedded bead; a wavefront sensor then measures how the light was warped, allowing the appropriate corrections to be made. And because tissue causes more complex distortions than the atmosphere does, the technique borrows yet another strategy—from ophthalmology, which uses it to correct images of a patient’s retina—to average out the error. Betzig went on to win a Nobel Prize for his work in microscopy.
And it isn’t hard to see why Betzig paid close attention to astronomy and ophthalmology. These fields may study different classes of objects, but they’re all ultimately about dealing with properties of light as it passes from the observed to the observer, which has clear implications for microscopy. Betzig and his collaborators were shrewd enough to frame their work in the most general possible terms: it wasn’t about microscopes, but about light, and everything that dealt with similar problems was potentially interesting. Being able to correctly define your field—which has more to do with the concrete problems you’re addressing than with labels imposed from the outside—is the first step in identifying useful combinations. And even trained scientists have trouble doing this. As Beveridge notes:
It might be thought that as soon as a discovery is announced, all its possible applications in other fields follow almost immediately and automatically, but this is seldom so. Scientists sometimes fail to realize the significance which a new discovery in another field may have for their own work, or if they do realize it they may not succeed in discovering the necessary modifications.
Of course, it isn’t possible to read or absorb everything, so you need to be smart about how you filter the universe of available material, which can be done from either end. You can start with a solution and then look for interesting problems: Beveridge cites several examples of techniques, such as partition chromatography, in which researchers systematically cast about for fields in which it could be put to use. Alternatively, you can keep a handful of problems perpetually before you, and use them as a kind of sieve to isolate useful ideas, as Gian-Carlo Rota describes:
Richard Feynman was fond of giving the following advice on how to be a genius. You have to keep a dozen of your favorite problems constantly present in your mind, although by and large they will lay in a dormant state. Every time you hear or read a new trick or a new result, test it against each of your twelve problems to see whether it helps. Every once in a while there will be a hit, and people will say: “How did he do it? He must be a genius!”
This is essentially what novelists do. When you have the basic premise of a story in mind, suddenly everything you see becomes relevant—which is a good argument for coming up with at least a general outline as early as possible. But you don’t need to be a novelist, or a scientist, to find a guide star of your own.
I always say to my [writing] classes that it’s analogous to cooking a dinner. You go to the store and you buy a lot of things. You bring them home and you put them on the kitchen counter, and that’s what you’re going to make your dinner out of. If you’ve got a red pepper over here—it’s not a tomato. You’ve got to deal with what you’ve got.
I don’t think there’s another album from the last decade that I’ve played as often as 808s and Heartbreak. When I’m doing chores around the house or just want some music in the background while I’m busy with something else, it’s usually the first thing I cue up on my playlist, but I’ll occasionally just sit down and listen to the first six tracks on headphones, which always seems like the best possible use of my time. Earlier this year, when I was driving my daughter to her toddler play program every morning, we’d often listen to “Heartless” on the way there, to the point where she was able to sing along to most of the chorus. (Beatrix: “Why is he so sad?” Me: “Because he loved a woman who didn’t love him back.”) After I dropped her off, on the way home, I’d switch over to Yeezus, especially “Blood on the Leaves,” which I don’t think she needs to hear just yet. I like Kanye West’s other albums just fine, particularly My Beautiful Dark Twisted Fantasy, with its apparent determination to have every track be the one that renders all other music obsolete forever. But it’s 808s that struck me, when I first heard it, as an album that I’d been waiting to hear for my entire life, and that hasn’t changed since.
Still, when it was announced yesterday that West would be performing 808s in its entirety at a “surprise” concert in September, I found that I wasn’t particularly excited by the prospect. 808s doesn’t feel like an album that can or should be played live: in many respects, it’s the most writerly collection of songs I know, at least in the sense that it feels like the product of intensely concentrated, solitary thought. Plenty of people worked on 808s and Yeezus, but both albums manage to sound like they were composed in utter isolation, by a man singing to himself in the corner with his laptop. That’s the real genius of West’s use of AutoTune: thanks to samples and synthesizers, we’ve long been able to exclude musicians from the studio, but West was the first to realize that you could dispose of the singers, too, leaving as little mediation as possible between the songwriter’s conception and its creation. In my recent post on Tom Cruise, I described him as a producer who happened to be born into the body of a movie star, and much the same holds true of West, who willed himself into becoming one of the biggest musical acts in the world with little more than the kind of sustained craft and intelligence that can only emerge in private.
This isn’t an approach that would work for most other albums, but it comes across brilliantly on 808s. It was recorded after the death of West’s mother, and it feels like nothing less than a meditation on how unbearable emotion can best be expressed through what seems at first like chilly impersonality. It reminds me, oddly, of the Pet Shop Boys—who were equally determined to exclude musical instruments from their early albums—and their insistence that irony and detachment were the only honest way to get at real unfaked feeling. 808s is like the death scene of HAL 9000 extended over fifty minutes, as he sings “Daisy” to himself while his mind goes away, or, on a lighter note, like the swan song of the robot on The Simpsons who asks despairingly: “Why was I programmed to feel pain?” But that doesn’t even hint at how richly, inexhaustibly listenable the result remains after countless plays. “Heartless” and “Paranoid” are close to perfect pop songs, executed without any room for error, and even in the album’s messier sections, we’re as close as we’ll ever get to music delivered straight from one man’s brain to yours, without any loss in the translation. And it isn’t the kind of effect that you can get at the Hollywood Bowl.
West remains an enigma. He’s a man who punches out paparazzi, yet who wound up marrying one of the most photographed women on the planet; an introvert who only seems satisfied when he has the world’s undivided attention; a songwriter of intense self-awareness, even self-loathing, who can come across all too easily as an unfiltered jackass. The gap between West’s public persona and his meticulous craftsmanship is so vast that it’s easy to disregard the latter, and the number of people who have actually heard Yeezus—it barely even reached platinum status—is minuscule compared to those who know him only from the tabloids. As a result, even when West tries to kid himself, he can’t catch a break. Earlier this year at the Grammys, when he made a show of rushing the stage when Beck won the evening’s top prize over Beyoncé, only to turn back with a grin, it was clearly a joke at his own expense, but it was widely taken as just more evidence of his cluelessness. He’s smarter and more talented than any of his critics, but not in ways that express themselves easily before an audience of millions. For the rest of us, there’s always 808s. It’s just him in a room, and once we’re there, he quietly sets up his laptop, presses the play button, and invites us to listen.