Alec Nevala-Lee

Thoughts on art, creativity, and the writing life.

Posts Tagged ‘New York Times Magazine’

The power of the page


Laura Hillenbrand

Note: I’m on vacation this week, so I’ll be republishing a few of my favorite posts from earlier in this blog’s run. This post originally appeared, in a slightly different form, on December 22, 2014.

Over the weekend, I found myself contemplating two very different figures from the history of American letters. The first is the bestselling nonfiction author Laura Hillenbrand, whose lifelong struggle with chronic fatigue syndrome compelled her to research and write Seabiscuit and Unbroken while remaining largely confined to her house for the last quarter of a century. (Wil S. Hylton’s piece on Hillenbrand in The New York Times Magazine is absolutely worth a read—it’s the best author profile I’ve seen in a long time.) The other is the inventor and engineer Buckminster Fuller, whose life was as itinerant as Hillenbrand’s is stationary. There’s a page in E.J. Applewhite’s Cosmic Fishing, his genial look at his collaboration with Fuller on the magnum opus Synergetics, that simply reprints Fuller’s travel schedule for a representative two weeks in March: he flies from Philadelphia to Denver to Minneapolis to Miami to Washington to Harrisburg to Toronto, attending conferences and giving talks, to the point where it’s hard to see how he found time to get anything else done. Writing a coherent book, in particular, seemed like the least of his concerns; as Applewhite notes, Fuller’s natural element was the seminar, which allowed him to spin complicated webs of ideas in real time for appreciative listeners, and one of the greatest challenges of producing Synergetics lay in harnessing that energy in a form that could be contained within two covers.

At first glance, Hillenbrand and Fuller might seem to have nothing in common. One is a meticulous journalist, historian, and storyteller; the other a prodigy of worldly activity who was often reluctant to put his ideas down in any systematic way. But if they meet anywhere, it’s on the printed page—and I mean this literally. Hylton’s profile of Hillenbrand is full of fascinating details, but my favorite passage describes how her constant vertigo has left her unable to study works on microfilm. Instead, she buys and reads original newspapers, which, in turn, has influenced the kinds of stories she tells:

Hillenbrand told me that when the newspaper arrived, she found herself engrossed in the trivia of the period—the classified ads, the gossip page, the size and tone of headlines. Because she was not hunched over a microfilm viewer in the shimmering fluorescent basement of a research library, she was free to let her eye linger on obscure details.

There are shades here of Nicholson Baker, who became so concerned over the destruction of library archives of vintage newspapers that he bought a literal ton of them with his life savings, and ended up writing an entire book, the controversial Human Smoke, based on his experience of reading press coverage of the events leading up to World War II day by day. And the serendipity that these old papers afforded was central to Hillenbrand’s career: she first stumbled across the story of Louie Zamperini, the subject of Unbroken, on the opposite side of a clipping she was reading about Seabiscuit.

Buckminster Fuller

Fuller was similarly energized by the act of encountering ideas in printed form, with the significant difference that the words, in this case, were his own. Applewhite devotes a full chapter to Fuller’s wholesale revision of Synergetics after the printed galleys—the nearly finished proofs of the typeset book itself—had been delivered by their publisher. Authors aren’t supposed to make extensive rewrites in the galley stage; it’s so expensive to reset the text that writers pay for any major changes out of their own pockets. But Fuller enthusiastically went to town, reworking entire sections of the book in the margins, at a personal cost of something like $3,500 in 1975 dollars. And Applewhite’s explanation for this impulse is what caught my eye:

Galleys galvanize Fuller partly because of the large visual component of his imagination. The effect is reflexive: his imagination is triggered by what the eye frames in front of him. It was the same with manuscript pages: he never liked to turn them over or continue to another sheet. Page = unit of thought. So his mind was retriggered with every galley and its quite arbitrary increment of thought from the composing process.

The key word here is “quite arbitrary.” A sequence of pages—whether in a newspaper or in a galley proof—is an arbitrary grid laid on a sequence of ideas. Where the page break falls, or what ends up on the opposite side, is largely a matter of chance. And for both Fuller and Hillenbrand, the physical page itself becomes a carrier of information. It’s serendipitous, random, but no less real.

And it makes me reflect on what we give up when pages, as tangible objects, pass out of our lives. We talk casually about “web pages,” but they aren’t quite the same thing: now that many websites, including this one, offer visitors an infinite scroll, the effect is less like reading a book than like navigating the spool of paper that Kerouac used to write On the Road. Occasionally, a web page’s endlessness can be turned into a message in itself, as in the Clickhole blog post “The Time I Spent on a Commercial Whaling Ship Totally Changed My Perspective on the World,” which turns out to contain the full text of Moby-Dick. More often, though, we end up with a wall of text that destroys any possibility of accidental juxtaposition or structure. I’m not advocating a return to the practice of arbitrarily dividing up long articles into multiple pages, which is usually just an excuse to generate additional clicks. But the primacy of the page—with its arbitrary slice or junction of content—reminds us of why it’s still sometimes best to browse through a physical newspaper or magazine, or to look at your own work in printed form. At a time when we all have access to the same world of information, something as trivial as a page break or an accidental pairing of ideas can be the source of insights that have occurred to no one else. And the first step might be as simple as looking at something on paper.

The Book of McBees


McBee cards

A few weeks ago, The New York Times Magazine published an intriguing profile of Barbara Ketcham Wheaton, a librarian and food historian engaged in a decades-long attempt to catalog all the world’s recipes in a single database. The whole article—written, appropriately enough, as we’ll soon find, by Bee Wilson—is fascinating, but this was the part that caught my eye:

In the 1970s Wheaton discovered McBee cards. They were a primitive data system, in which different pieces of information could be encoded by punching holes to designate broad categories (date, gender, country). “After the cards are properly punched, whole packs of them can be searched by running a knitting needle through the desired hole in the pack and lifting it up,” Wheaton explained in a talk last summer at a food symposium held at Oxford. “When, if one is lucky, gems of information will drop out.” McBee cards had obvious limitations, however. “My categories kept expanding, and the cards did not.” Wheaton tried to improve the cards by adding color-coded edges, but then she ran out of colors.

I was immediately captivated by the idea, and I soon found another article on the subject by Kevin Kelly, of Wired and Cool Tools fame. Back in the day, McBee cards came perforated on every edge by tiny holes, and the user employed a special tool to cut a notch—associated with a particular category—that allowed a card to fall out of the deck when the rest of the cards were skewered together. Given more than one needle, or successive selections, you had the equivalent of logical AND and logical OR functions. Kelly notes that the cards, sold under the brand name Indecks, were used to create the database of items at The Whole Earth Catalog, in which Stewart Brand wrote:

What do you have a lot of? Students, subscribers, notes, books, records, clients, projects? Once you’re past fifty or one hundred of whatever, it’s tough to keep track, time to externalize your store and retrieve system. One handy method this side of a high-rent computer is Indecks. It’s funky and functional: cards with a lot of holes in the edges, a long blunt needle, and a notcher. Run the needle through a hole in a bunch of cards, lift, and the cards notched in that hole don’t rise; they fall out. So you don’t have to keep the cards in order. You can sort them by feature, number, alphabetically or whatever; just poke, fan, lift and catch…They’ve meant the difference [at the Catalog] between partial and complete insanity.

McBee card and notcher

Reading over these descriptions, I began to wonder whether McBee cards would be useful as a writing tool, as a replacement or supplement to the index cards that many writers accumulate in such quantities. In some ways, a standard corkboard or separate stacks of conventional cards might seem preferable: they provide a necessary visual overview of the whole, rather than a single opaque deck, and the cards can be more easily recategorized and rearranged. (You can’t unnotch a McBee card.) But the McBee system offers some enticing advantages. For one thing, it’s portable: you can collapse the piles of cards that cover your desk into one rubber-banded deck, shuffle it, throw it into a backpack, and then easily restore the original order. Cards can also be sorted into more than one category, which is a genuinely useful feature. Let’s say you’re writing a novel like The Icon Thief, with multiple points of view, locations, and themes. Using the McBee system and some appropriate categories, you can quickly find all the scenes in which Maddy Blume appears, or the scenes set at Archvadze’s mansion, or the scenes in which the characters discuss the Rosicrucians—as well as any combination of the above. Instead of linearly categorizing each card only by where it appears in the book, you can “stack” cards across multiple dimensions with nothing but the thrust of a few needles. And I suspect that this method would yield connections and patterns that wouldn’t otherwise be visible.
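Mechanically, the needle trick amounts to simple set operations, and it’s easy to simulate. Here’s a minimal Python sketch of the selection logic; the scene cards and category names below are invented for illustration, not an actual index of The Icon Thief:

```python
# Each "card" records which edge notches have been cut, i.e., which
# categories apply to it. A needle pass drops out every card notched
# at that hole; extra needles narrow the result, pooled passes widen it.

def select(deck, *categories):
    """Needles through several holes at once: only cards notched in
    ALL the given categories fall out (logical AND)."""
    return [card for card in deck
            if all(cat in card["notches"] for cat in categories)]

def select_any(deck, *categories):
    """One needle per pass, results pooled: cards notched in ANY of
    the given categories fall out (logical OR)."""
    return [card for card in deck
            if any(cat in card["notches"] for cat in categories)]

# A toy deck of scene cards (hypothetical categories).
deck = [
    {"scene": 1, "notches": {"Maddy", "mansion"}},
    {"scene": 2, "notches": {"Maddy", "Rosicrucians"}},
    {"scene": 3, "notches": {"mansion"}},
]

# Scenes with Maddy AND set at the mansion.
print([c["scene"] for c in select(deck, "Maddy", "mansion")])        # [1]
# Scenes with Maddy OR the Rosicrucians.
print([c["scene"] for c in select_any(deck, "Maddy", "Rosicrucians")])  # [1, 2]
```

The physical deck gets this for free, which is the real charm of the system: the cards never need to be kept in order, because every sort is performed at query time.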

Obviously, you can do much the same with a database or spreadsheet. But the tactile use of cards, as I’ve said elsewhere, confers other advantages: writing on physical cards with ink and manipulating them with your hands seems to yield insights and surprises that don’t appear when everything is in digital form. As a result, I’m seriously considering using McBee cards for my next big project—assuming that I can get my hands on some. As Kelly points out, there are “no sellers on eBay, no fan sites, no collector sites, no historical web pages, and no evidence that anyone is still using them. They are gone. Blasted out by the first computers.” But there might still be a place for them. They’d be particularly useful in applications in which the categories are clearly defined, as with Wheaton’s recipes, or with the bird or tree identification decks that are still gathering dust somewhere. Users could combine the easy searchability of a database with the rougher, more intuitive benefits that come from shuffling and stacking. (They’d also be a fantastic tool for tabletop games.) At least one writer has described making a deck at home, but the process seems unnecessarily laborious. They seem ideally suited for a modest Kickstarter campaign: all you’d need would be a machine for punching the perforations, a supply of notching tools and knitting needles, and the cards themselves, presumably in colors and patterns cute enough to appeal to the hipster crowd. It’s a razor and blades model: once you sell someone a set, you can keep selling them the replacement decks, or starter kits with templates for recipes or other standard uses. If anyone reading this is an entrepreneur looking for an idea, consider this a freebie. I’d buy one in a second, and for a lot of other writers, I think they’d be the bee’s knees.

Written by nevalalee

November 17, 2015 at 8:55 am

The Judd Apatow paradox


Judd Apatow, Paul Rudd, and Leslie Mann

I don’t think I’ve ever read an interview with a film editor that didn’t fascinate me from beginning to end, and Jonah Weiner’s recent New York Times Magazine profile of Brent White—Judd Apatow’s editor of choice—is no exception. Film editors need to think more intensely and exclusively about problems of structure than any other creative professional, and they represent a relatively neglected source of insights into storytelling of all kinds. Here are a few choice tidbits:

There are moments where [Will Ferrell] is thinking what the joke is, then he knows what the joke is, and then he’s saying the joke. Making the leap from one to two to three. What I’m doing is tightening up that leap for him: improving the rhythm, boom-boom-boom.

I reverse-engineer the scene to make sure I can get to the joke. Then it becomes bridge-building. How do I get to this thing from this other thing I like?

[Apatow will sometimes] have something he wants to say, but he doesn’t know exactly where it goes in the movie. Does it service the end? Does it go early? So he’ll shoot the same exact scene, the same exchange, with the actors in different wardrobes, so that I can slot it in at different points.

Weiner’s piece happened to appear only a few weeks after Stephen Rodrick of The New Yorker published a similar profile of Allison Jones, Apatow’s casting director, and it’s hard not to take them as two halves of a whole. Jones initiates the process that White completes, looking, as the article notes, for “comedic actors who, more than just delivering jokes, [can] improvise and riff on their lines, creating something altogether different from what was on the page.” (As Apatow puts it: “Allison doesn’t just find us actors; she finds us people we want to work with the rest of our lives.”) White then sifts through that mountain of material—which can be something like two million feet of film for an Apatow movie, an amount once reserved for the likes of Stanley Kubrick—to pick out the strongest pieces and fit them into some kind of coherent shape. It’s an approach that has been enormously influential on everything from a single-camera sitcom like Parks & Recreation, which allows actors to improvise freely without the pressure of a live audience, to a movie like The Wolf of Wall Street, which indulges Jonah Hill’s riffs almost to a fault. And although it’s been enabled by the revolution in digital video and editing, which allows miles of footage to be shot without bankrupting the production, it also requires geniuses like Jones and White who can facilitate the process on both ends.

Elliott Gould in The Long Goodbye

Yet as much as I admire what Jones, White, and the rest have done, I’m also a little skeptical. There’s no avoiding the fact that the Apatow approach has suffered from diminishing returns: if I had to list The 40-Year-Old Virgin, Knocked Up, Funny People, and This Is 40 in order of quality, I’d end up ranking them by release date. From one minute to another, each can be hilarious, but when your comedic philosophy is predicated on keeping the camera rolling until something good happens, there’s an unavoidable loss of momentum. The greatest comedies are the ones that just won’t stop building; Apatow’s style has a way of dissipating its own energy from one scene to the next, precisely because each moment has to be built up from scratch. Frat Pack comedies may objectively have more jokes per minute than Some Like It Hot or Annie Hall, but they start to feel like the comedic equivalent of empty calories, leaving you diverted but unsatisfied, and less energized by the end than exhausted. The fact that Anchorman 2 exists in two versions, with the same basic structure but hundreds of different jokes, can be taken, if you’re in a generous mood, as a testament to the comic fertility of the talents involved—but it can also start to look like evidence of how arbitrary each joke was in the first place. If one funny line can be removed and another inserted seamlessly in its place, it reminds us that neither really had to be there at all.

But if I’m being hard on Apatow and his collaborators, it’s because their approach holds such promise—if properly reined in. Comedy depends on a kind of controlled anarchy; when the balance slips too much to the side of control, as in the lesser works of the Coen Brothers, the result can seem arch and airless. And at their best, Apatow’s films have an unpredictable, jazzy charge. But a few constraints, properly placed, can allow that freedom to truly blossom. A movie like Robert Altman’s The Long Goodbye can’t be accused of sticking too much to the script: perhaps five minutes total is devoted to the plot, and much of the rest consists of the characters simply hanging around. Yet it uses the original Chandler novel, and the structure provided by Leigh Brackett’s screenplay, as a low-horsepower engine that keeps the whole thing moving at a steady but leisurely clip. As a result, it feels relaxed in a way that Apatow’s movies don’t. The latter may seem loose and shaggy, but they’re also characterized by an underlying tension, almost a desperation, to avoid going for more than a few seconds without a laugh, and it cancels out much of the gain in spontaneity. It promises us that we’ll be hanging out for two hours with a bunch of fun people, but it leaves us feeling pummeled. By freeing itself from the script, it turns itself, paradoxically, into a movie that can’t stop moving. The great comedies of the past could live in the spaces between jokes; the modern version has to be funny or die.

The crucial missing piece


Andrew Bujalski

Last week, the New York Times Magazine published a feature in which fourteen screenwriters shared a few of their favorite writing tips. There’s a lot to enjoy here—I particularly liked Jeff Nichols’s description of how he lays out his scene cards—but the most interesting piece of advice comes courtesy of Andrew Bujalski, the writer and director of such mumblecore movies as Computer Chess and Mutual Appreciation. When asked how he writes believable dialogue, Bujalski says:

Write out the scene the way you hear it in your head. Then read it and find the parts where the characters are saying exactly what you want/need them to say for the sake of narrative clarity (e.g., “I’ve secretly loved you all along, but I’ve been too afraid to tell you.”) Cut that part out. See what’s left. You’re probably close.

Which, at first, sounds like just another version of the famous quote attributed to Hemingway: “Write the story, take out all the good lines, and see if it still works.” But there’s something a little more subtle going on here, which is the fact that the center of a scene—or an entire story—can be made all the more powerful by removing it entirely.

When a writer starts working on any unit of narrative, he generally has some idea of the information it needs to convey: a plot point, an emotional beat, a clarification of the relationship between two characters. Whatever it is, it’s the heart of the scene, and the other details that surround it are selected with an eye to clarifying or enriching that pivotal moment. What’s funny, though, is that when you delete what seems like the crucial piece, the supporting material often stands perfectly well on its own, like a sculpture once the supports have been taken away. And the result often gains in resonance. I’ve noted before that there’s a theory in literary criticism that Shakespeare, who based most of his plays on existing stories, intentionally omits part of the original source material while leaving other elements intact. For instance, in the Amleth story that provided the basis for Hamlet, the lead character feigns madness for a great reason—to protect himself from a plot against his life. The fact that he removes this motivation while preserving the rest of the action goes a long way toward explaining why we find Hamlet, both the play and the character, so tantalizing.

Walter Murch

Still, it’s hard for a writer to bring himself to remove what seems like the entire justification for a scene, and we often only find ourselves doing it in order to solve some glaring problem. Walter Murch, in Behind the Seen, has a beautiful analogy for this:

An interior might have four different sources of light in it: the light from the window, the light from the table lamp, the light from the flashlight that the character is holding, and some other remotely sourced lights. The danger is that, without hardly trying, you can create a luminous clutter out of all that. There’s a shadow over here, so you put another light on that shadow to make it disappear. Well, that new light casts a shadow in the other direction. Suddenly there are fifteen lights and you only want four.

As a cameraman what you paradoxically do is have the gaffer turn off the main light, because it is confusing your ability to really see what you’ve got. Once you do that, you selectively turn off some of the lights and see what’s left. And you discover that, “OK, those other three lights I really don’t need at all—kill ’em.” But it can also happen that you turn off the main light and suddenly, “Hey, this looks great! I don’t need that main light after all, just these secondary lights. What was I thinking?”

Murch goes on to say that much the same thing can happen in film editing: you’ll cut a scene that you thought was essential to the plot, only to find that the movie works even better without it, perhaps because something was being said too explicitly. It can be hard to generate this kind of ambiguity from scratch, and you’ll often find that you need to write that pivotal scene anyway, if only for the sake of excising it. This may seem like a waste of effort, but sometimes you need a big sculptural form to lend shape and meaning to its surroundings, even if you take it out in the end.

Written by nevalalee

December 12, 2013 at 8:53 am

Would you want your daughter to be one?


Stephen King and family

By now, many of you have probably read the wonderful piece that appeared in last week’s New York Times Magazine about Stephen King and his immediate family, which currently includes no fewer than three novelists. The article, by Susan Dominus, may seem to go a little far when it calls the King clan “as close to a first family of letters as America is likely to have,” but really, it’s not that farfetched a statement. King is clearly the dominant popular novelist of his time, as well as the author of some of my own favorite books, and there’s no question that his influence over his family is as enormous as it has been on the larger world of fiction. And although the article doesn’t sugarcoat the difficulties they’ve experienced along the way, from King’s battle with drug addiction—which culminated in an intervention at which all three of his young children were present—to his recovery from a devastating hit-and-run accident, this is obviously a household in which storytelling has always been hugely important.

Honestly, the curious thing isn’t that the King family is so prolific in its fictional output, but that such a situation isn’t more common. The article checks off a few novelists who were also descended from famous writers, but once you get past Martin Amis, you need to dig fairly deep to find the likes of Rebecca Miller and Ted Heller. On the surface, this is somewhat surprising. People follow their parents into the family business all the time, and there’s no obvious reason why this shouldn’t also be true of the arts: film, for instance, has produced its share of dynasties, and many famous screenwriters—Joss Whedon, Tony Gilroy—have writing in the blood. Yet even though the children of authors can hardly avoid growing up in an atmosphere saturated with fiction and books, the world hasn’t seen many little Mailers or Updikes. Such families must tend to produce devoted readers and interesting people, but whatever gene or mental quirk causes someone to become a writer is passed only infrequently down the line.

The author's daughter

There are a number of possible explanations for this. For one thing, it can be hard for lightning to strike twice: given the fact that so much of this stuff is outside an author’s control, it’s hard for any family to produce one successful novelist, much less two. A famous name in itself may get your manuscript read, but it won’t take you much further, and the King family’s example wouldn’t be nearly as interesting if Joe Hill hadn’t beaten the odds and turned out to be an important writer in his own right. It’s also possible that a parent’s example can be a little daunting. Being the son or daughter of the world’s bestselling novelist sets a standard that you can’t hope to meet, and King’s children appear to have struggled with their own feelings about their legendary father’s legacy. (Joe Hill evidently sees his resemblance to his father as more of a liability than an asset, and he’s worked hard to make it on his own: his first two novels were rejected, and he steadfastly resisted any temptation to trade on the family name.)

But the real question is whether novelists would even want their kids to be writers. Based on my own experience, my answer is a cautious no. A writer falls into his profession because he has no other choice, and it only makes sense to become a novelist, with all its attendant pitfalls and frustrations, if you don’t think you’d be happy doing anything else. When I look at my baby daughter, I want her to have a rich creative life, and I’d be thrilled if she did something in the arts. Having lived the life of a novelist from the inside, though, I’m not sure if it’s something I’d want to put her through. It’s a great life and one that I’ve worked hard to achieve, but it also comes with a psychic toll that I wouldn’t wish on anyone who didn’t demand it to the exclusion of all else. In the end, I just want her to be happy, and while I know firsthand that it’s possible to be happy and be a writer, the two things don’t always have much to do with each other. This is the best job in the world, but I can’t help but feel that any daughter of mine deserves a little better.

Written by nevalalee

August 14, 2013 at 8:50 am

Amanda Hocking and the lure of self-publishing


Last weekend’s New York Times Magazine has a fascinating profile of Amanda Hocking, creator of the young adult Trylle franchise, whose self-published domination of Amazon e-book sales has sent shock waves through the world of conventional publishing. For those of us who are concerned about the future of books, Hocking’s story is a compelling one: after uploading her novels to Amazon, she became a cultural phenomenon almost overnight, to the point where she’s selling upward of 9,000 copies a day. At 26 years old, she has cleared more than $2 million in sales, with much more to come, thanks to a lucrative contract with St. Martin’s Press. And while I’m not exactly her target audience, I can see why some people feel that her example has called the model of the entire publishing industry into question.

It’s clear, though, that Hocking’s success represents the extreme end of a very long tail. And it’s important to remember that she tried very hard to place her work with a traditional publisher. According to the Times profile, she sent copies of her first novel to something like fifty agents, attempted to get published for years, and continued shopping her work around until just two months before uploading it to Amazon. She acknowledges that agents probably didn’t make a mistake in turning down her first novel, which she wrote when she was seventeen, and it’s likely that her fiction is better now than it would have been if she’d published it herself from the beginning. Which is why, although traditional publishing may be on its way out, or evolving into a very different form, it’s still important for writers to try the conventional route, because it’s the only form of objective feedback they’re likely to get.

A year ago, when I was first shopping The Icon Thief around to agents, friends often asked me if I’d be willing to publish it myself. My answer, generally, was no, because if I did, I wouldn’t know if it was any good. While it’s true that an anonymous editor or agent may not be the best judge of an aspiring author’s work, the author himself is generally even worse. And while there’s some degree of arbitrariness about the publishing process, in which a novel has to pass through many ranks of gatekeepers before seeing print, it’s still valuable and mostly fair, if frustrating. Looking back, I wouldn’t have wanted my first, unpublished novel to appear in the form in which I initially submitted it. And paradoxically or not, it was only through responding to the criticism of strangers that I was able to find my own voice.

Publishing one’s own work certainly has its benefits. It’s perfect if you’re aiming for a smaller, specialized audience, or if you’re an established author who wants to cut out the middleman (as suspense novelist Barry Eisler has done). And I wouldn’t rule out the possibility of a self-published collection of short stories somewhere down the road. But for someone just starting out, it can be a mistake to put your work online without subjecting it to the clarifying fire of traditional submissions. Constraints, as I’ve said before, are crucial to creativity, and to some degree, the rigors of publishing are the greatest constraint of all, forcing you to grow as a writer in ways that allow you to reach the audience you deserve. And while electronic publishing has its benefits, there are still advantages to a more traditional format. “For me to be a billion-dollar author,” Hocking told the New York Times, “I need to have people buying my books at Wal-Mart.”

Written by nevalalee

June 21, 2011 at 9:47 am

Patton Oswalt punches up a movie


Lately I’ve been doing punch-up on computer-animated films, but the trick with doing punch-up on these movies is that unlike the live-action script, which hasn’t been filmed yet, the computer-animated film is usually 80 percent complete by the time we see it. And when I say 80 percent complete, I mean, “We’ve spent $120 million on this, so we really can’t change anything.”

“Uh, well then,” you’ll ask, through a mouthful of takeout Chinese, “what exactly do you want us to do?”

“What we need is for you guys to come up with funny off-screen voices yelling funny things over the unfunny action.”

Patton Oswalt, in the New York Times Magazine

Written by nevalalee

June 19, 2011 at 8:48 am
