Posts Tagged ‘New Yorker’
So what happened to John Carter?
In recent years, the fawning New Yorker profile has become the Hollywood equivalent of the Sports Illustrated cover—a harbinger of bad times to come. It isn’t hard to figure out why: both are awarded to subjects who have just reached the top of their game, which often foreshadows a humbling crash. Tony Gilroy received his profile after the success of Michael Clayton, only to follow it up with the underwhelming Duplicity. For Steve Carell, it was Dinner for Schmucks. For Anna Faris, it was What’s Your Number? And for John Lasseter, revealingly, it was Cars 2. The latest casualty is Andrew Stanton, whose profile, which I discussed in detail last year, now seems laden with irony, as well as an optimism that reads in retrospect as whistling in the dark. “Among all the top talent here,” a Pixar executive is quoted as saying, “Andrew is the one who has a genius for story structure.” And whatever redeeming qualities John Carter may have, story structure isn’t one of them. (The fact that Stanton claims to have closely studied the truly awful screenplay for Ryan’s Daughter now feels like an early warning sign.)
If nothing else, the making of John Carter will provide ample material for a great case study, hopefully along the lines of Julie Salamon’s classic The Devil’s Candy. There are really two failures here, one of marketing, another of storytelling, and even the story behind the film’s teaser trailer is fascinating. According to Vulture’s Claude Brodesser-Akner, a series of lost battles and miscommunications led to the release of a few enigmatic images devoid of action and scored, in the manner of an Internet fan video, with Peter Gabriel’s dark cover of “My Body is a Cage.” And while there’s more to the story than this—I actually found the trailer quite evocative, and negative responses to early marketing materials certainly didn’t hurt Avatar—it’s clear that this was one of the most poorly marketed tentpole movies in a long time. It began with the inexplicable decision to change the title from John Carter of Mars, on the assumption that women are turned off by science fiction, while making no attempt to lure in female viewers with the movie’s love story or central heroine, or even to explain who John Carter is. This is what happens when a four-quadrant marketing campaign goes wrong: when you try to please everybody, you please no one.
And the same holds true of the movie itself. While the story is fairly clear, and Stanton and his writers keep us reasonably grounded in the planet’s complex mythology, we’re never given any reason to care. Attempts to engage us with the central characters fall curiously flat: to convey that Princess Dejah is smart and resourceful, for example, the film shows her inventing the Barsoomian equivalent of nuclear power, evidently in her spare time. John Carter himself is a cipher. And while some of these problems might have been solved by miraculous casting, the blame lands squarely on Stanton’s shoulders. Stanton clearly loves John Carter, but forgets to persuade us to love him as well. What John Carter needed, more than anything else, was a dose of the rather stark detachment that I saw in Mission: Impossible—Ghost Protocol, directed by Stanton’s former Pixar colleague Brad Bird. Bird clearly had no personal investment in the franchise, except to make the best movie he possibly could. John Carter, by contrast, founders on its director’s passion and good intentions, as well as on a creative philosophy that evidently works in animation, but not in live action. As Stanton says of Pixar:
We’re in this weird, hermetically sealed freakazoid place where everybody’s trying their best to do their best—and the films still suck for three out of the four years it takes to make them.
Which only makes us wonder what might have happened if John Carter had been granted a fourth year.
Stanton should take heart, however. If there’s one movie that John Carter calls to mind, it’s Dune, another financial and critical catastrophe that was doomed—as much as I love it—by fidelity to its source material. (In fact, if you take Roger Ebert’s original review of Dune, which was released in 1984, and replace the relevant proper names, you end up with something remarkably close to a review of John Carter: “Actors stand around in ridiculous costumes, mouthing dialogue with little or no context.”) Yet its director not only recovered, but followed it up with my favorite movie ever made in America. Failure, if it results in another chance, can be the opposite of the New Yorker curse. And while Stanton may not be David Lynch, he’s not without talent: the movie’s design is often impressive, especially its alien effects, and it displays occasional flashes of wit and humor that remind us of what Stanton can do. John Carter may go down as the most expensive learning experience in history, and while this may be cold comfort to Disney shareholders, it’s not bad for the rest of us, as long as Stanton gets his second chance. Hopefully far away from the New Yorker.
Andrew Stanton and the world beyond Pixar
Art is messy, art is chaos—so you need a system.
—Andrew Stanton, to the New Yorker
For the second time in less than six months, the New Yorker takes on the curious case of Pixar, and this time around, the results are much more satisfying. In May, the magazine offered up a profile of John Lasseter that was close to a total failure, since critic Anthony Lane’s customary air of disdain left him unprepared to draw any useful conclusions about a studio that, at least up to that point, had gotten just about everything blessedly right. This week’s piece by Tad Friend is far superior, focusing on the relatively unsung talents of Andrew Stanton, director of Finding Nemo and Wall-E. And while the publication of a fawning New Yorker profile of a hot creative talent rarely bodes well for his or her next project—as witness the recent articles on Tony Gilroy, Steve Carell, Anna Faris, or even Lasseter himself, whose profile appeared only shortly before the release of the underwhelming Cars 2—I’m still excited by Stanton’s next film, the Edgar Rice Burroughs epic John Carter, which will serve as a crucial test of whether Pixar’s magic can extend to the world beyond animation.
Stanton’s case is particularly interesting because of the role he plays at the studio: to hear the article tell it, he’s Pixar’s resident storyteller. “Among all the top talent here,” says Jim Morris, the head of Pixar’s daily operations, “Andrew is the one who has a genius for story structure.” And what makes this all the more remarkable is the fact that Stanton seems to have essentially willed this talent into existence. Stanton was trained as an animator, and began, like most of his colleagues, by focusing on the visual side. As the script for Toy Story was being developed, however, he decided that his future would lie in narrative, and quietly began to train himself in the writer’s craft, reading classic screenplays—including, for some reason, the truly awful script for Ryan’s Daughter—and such texts as Lajos Egri’s The Art of Dramatic Writing. In the end, he was generally acknowledged as the senior writer at Pixar, which, given the caliber of talent involved, must be a heady position indeed.
And while the article is littered with Stanton’s aphorisms on storytelling—“Inevitable but not predictable,” “Conflict + contradiction,” “Do the opposite”—his main virtue as a writer seems to lie in the most universal rule of all: “Be wrong fast.” More than anything else, Stanton’s success so far has been predicated on an admirable willingness to throw things out and start again. He spent years, for instance, working on a second act for Wall-E that was finally junked completely, and while I’m not sure he ever quite cracked the plot for that movie—which I don’t think lives up to the promise of its first twenty minutes—there’s no question that his ruthlessness with structure did wonders for Finding Nemo, which was radically rethought and reconceived several times over the course of production. Pixar, like the rest of us, is making things up as it goes along, but is set apart by its refusal to leave well enough alone. As Stanton concludes:
We’re in this weird, hermetically sealed freakazoid place where everybody’s trying their best to do their best—and the films still suck for three out of the four years it takes to make them.
The real question, of course, is whether this approach to storytelling, with its necessary false starts and extensive rendering time, can survive the transition to live action, in which the use of real actors and sets makes retakes—and thus revision—drastically more expensive. So far, it sounds like John Carter is doing fine, at least judging from the trailer and early audience response, which has reportedly been encouraging. And more rides on this movie’s success or failure than the fate of one particular franchise. Pixar’s story has been extraordinary, but its most lasting legacy may turn out to be the migration of its talent beyond the safety zone of animation—assuming, of course, that their kung fu can survive. With Brad Bird’s Mission: Impossible—Ghost Protocol and John Carter in the wings, we’re about to discover if the directors who changed animation at Pixar can do the same in live action. The New Yorker article is fine, but it buries the lede: Stanton and Bird are the first of many. And if their next movies are half as entertaining as the ones they’ve made so far, we’re looking at an earthquake in the world of pop culture.
Do titles matter?
The other day, I found myself thinking about sex—in particular, about Sex.com, once thought to be the most valuable domain name on the Internet, until it was caught up in a legal battle so epic that it inspired its own book. Ultimately, after the site’s previous owner went bankrupt, someone paid $11.5 million for the rights, but the site, as it currently stands, is a ghost town (although still not safe for work). And it isn’t hard to figure out what happened. In the early days of the web, investors were furiously snatching up what seemed like lucrative domains, all common words like Clothes or Books, never expecting that our most heavily trafficked sites would have names that sound like complete nonsense: Google, Yahoo, Twitter, Bing. In fact, of the sites I visit daily, none has a domain name consisting of a single recognizable English word.
In hindsight, it’s clear that investors simply misunderstood how people would surf the web, assuming that they would find products and services by randomly typing words into the address bar, adding “.com,” and hitting return. It’s quite possible that some users are still doing this, but for the rest of us, Google provided a much more effective approach. It didn’t really matter what a site was called: as long as it appeared prominently in your search results, you’d find it, even if it was called Kazaa or Flickr or Picasa, rather than Music.com or Photos.com. Domain names ceased to matter when one’s primary interface became the search engine, rather than the address bar. If the web had been like browsing in a bookstore, with site after site scrolling by, a domain like Sex.com might have been an asset, but that just isn’t how we use the Internet.
These reflections were inspired by an article by John Colapinto in this week’s New Yorker—which is truly an excellent issue, by the way—about Lexicon, a company that does nothing but invent names for products. Lexicon’s triumphs include Pentium, Swiffer, PowerBook, and Dasani, and they’ve transformed the art of naming into a science, down to analyzing the meaning of p vs. b sounds, and quantifying the desirability of alternating vowels and consonants. The real question, though, is whether such names matter at all. Here’s Bernd Schmitt, a marketing professor at Columbia Business School:
Would Amazon be just as successful if it was called Nile? My guess would be yes, because the name is just a starting point for a brand. The most important branding decision is more about brand strategy, distribution channels—where are the customers you want to reach?
And yet it’s hard to believe that names don’t matter. Colapinto points to great novels that nearly had bad titles: would The Great Gatsby have become a classic as Trimalchio in West Egg, or Farewell, My Lovely as Zounds, He Dies? The answer seems to be no—although this ignores the fact that nearly all the great classics of world literature have bad titles—but it only brings us up against a related question, which is whether the function of a novel’s title has also changed.
Because the way we’re searching for books is changing as well. In the old days, many of us found books in the way that early speculators in domain names assumed we’d find things on the web: by browsing, essentially at random, among a vast but finite range of choices. When you’re trying to catch the eye of someone glancing casually over the thriller section of a bookstore, say, the title and cover become essential. Now, though, browsing in its pure sense is on its way out, and we find books in the same way we find everything else: through search queries, recommendations from other readers, and suggestions from sites like Amazon. The title and cover, then, seem much less important than the book’s metadata—its plot summary, its keywords, and even, in many cases, its searchable contents. (Interestingly, and for reasons I don’t entirely understand, the movie industry seems to be moving in the opposite direction, as witnessed by grindingly literal titles like Horrible Bosses and Bad Teacher.)
So do titles matter? Yes, obviously, for readers who still browse for novels in bookstores—and whoever you are, I thank you. They also matter, less obviously, for professional book buyers at bookstore chains, who essentially judge books by their covers when deciding how many copies to order. As for the rest of us, my own impression, having spent a long time thinking about titles for my own books, is that titles have a neutral or negative effect. A good title, in itself, won’t make a novel easier for an online browser to find, but a bad title might turn off a reader who found the book in an Amazon search. Titles, like covers and typography, are an index to quality: a bad title and design usually mean a bad book, because a publisher that is sloppy about such issues is likely to be sloppy about more important things. Titles and covers still matter, then—but less as a way of attracting readers than as a means of sealing the deal.
Does a writer need a coach?
The physician and journalist Atul Gawande has a nice piece in this week’s New Yorker about coaching—what it is, why it works, and whether it’s useful in fields aside from professional sports. He concludes that, yes, it can be helpful even for those operating at a high level of expertise to get guidance from another expert, with the aim of sustaining and improving performance over time. Gawande draws on examples from a wide range of professions, including teaching, music (with some thoughts on the subject from Itzhak Perlman), and his own field, surgery, where he says that a single session with his chosen coach, a retired general surgeon, “gave me more to consider and work on than I’d had in the past five years.”
He also talks briefly about writing, a field in which most professional practitioners work constantly with coaches, of sorts, in the form of editors and agents. Maxwell Perkins, the legendary editor at Scribner’s, served as a coach for such authors as Fitzgerald, Hemingway, and Wolfe, while in science fiction, John W. Campbell played an important coaching role for Isaac Asimov. The agent Scott Meredith performed a similar function for many of his clients. And many writers have had less formal, but equally important, mentors throughout their careers: Hemingway and Gertrude Stein, T.S. Eliot and Ezra Pound. Like coaches in other professions, these mentors exist less to teach the basics of the craft than to shape and guide the work of others once a certain level of expertise has been reached.
I’ve certainly benefited from coaches of all kinds. I’ve had two literary agents, and while my first such partnership didn’t end particularly well, the process of taking a huge novel and stripping it down to its constituent parts was one of the most valuable, if painful, educations I’ve ever received. My current agent also put me through my paces for the early drafts of The Icon Thief, which was taken apart and reassembled in ways that allowed me to write the sequel, City of Exiles, in record time. I’ve also learned a lot from my editor, and from the many readers, formal and informal, who have offered me advice over the years. Some, like Stanley Schmidt at Analog, have done so in a professional capacity, while others have simply served as what Gawande calls “outside ears,” giving me valuable perspectives on work that I can no longer evaluate on my own.
Even relatively experienced authors, then, are constantly being coached, which may be why the best writers continue to make progress well into their forties and fifties, long after professionals in most other fields have peaked. The hard part, it seems, isn’t so much finding a coach as knowing which ones to trust, and listening to tough advice even after you’ve grown confident in your own abilities. I’ve always said that if there’s one thing I know about writing, it’s structure—which didn’t prevent my first novel from being radically restructured, to its great benefit, on its way to publication. Writing is such a weird, absurd profession that you take help whenever you can get it. As Ed Macauley said, in reference to a different kind of game: “When you are not practicing, remember, someone somewhere is practicing, and when you meet him, he will win.”
Quote of the Day
Even after I received the Pulitzer Prize, my father reminded me that writing stories was not something to count on, and that I must always be prepared to earn my living in some other way. I listen to him, and at the same time I have learned not to listen, to wander to the edge of the precipice and to leap. And so, though a writer’s job is to look and listen, in order to become a writer I had to be deaf and blind.
—Jhumpa Lahiri, in the New Yorker
Of mouses and men
Quality is a probabilistic function of quantity.
—Dean Simonton
Malcolm Gladwell’s nifty article on the evolution of the computer mouse in this week’s New Yorker is a terrific read—nobody, but nobody, is better at this sort of thing than Gladwell, which has made him deservedly rich and famous. It’s also, somewhat surprisingly, the most valuable take on the creative process I’ve seen in a long time. I’ve always been interested in the affinities between the artistic process and the work of scientists and engineers, and Gladwell makes the useful point that what most creative geniuses in both fields have in common is their extraordinary productivity. His primary example is Gary Starkweather, the legendary Xerox PARC engineer and inventor of the laser printer, whose creativity was directly linked to the sheer number of his ideas. And in a paragraph that I want to clip out and put in my wallet, Gladwell writes:
The difference between Bach and his forgotten peers isn’t necessarily that he had a better ratio of hits to misses. The difference is that the mediocre might have a dozen ideas, while Bach, in his lifetime, created more than a thousand full-fledged musical compositions. A genius is a genius, [Dean] Simonton maintains, because he can put together such a staggering number of insights, ideas, theories, random observations, and unexpected connections that he almost inevitably ends up with something great.
Gladwell concludes with the Simonton quotation cited at the start of this post, which qualifies, to my mind, as one of the great aphorisms—that is, as a startling reminder of something that should be blindingly obvious. Simonton, incidentally, is a professor of psychology at UC Davis and the author of Origins of Genius: Darwinian Perspectives on Creativity, the subtitle of which refers not to the struggles of genius against genius, as one might think, but to the natural selection of ideas. In nature, natural selection is the result of a Malthusian competition within a large population for limited resources, and it stands to reason that the fittest ideas might arise in a similar fashion. As Simonton says:
Even the greatest creators possess no direct and secure path to truth or beauty. They cannot guarantee that every published idea will survive further evaluation and testing at the hands of audiences or colleagues. The best the creative genius can do is to be as prolific as possible in generating products in the hope that at least some subset will survive the test of time. [Italics mine.]
Which seems obvious enough: most of our greatest artists, from Shakespeare to Picasso, were monsters of productivity, as were nearly all of our great scientists, like Newton. But even more interesting is the point to which Gladwell alludes above, and what Simonton elsewhere calls the “equal odds” rule—that the ratio of total hits to total attempts “tends to stay more or less constant across creators.” Which is to say that if a creative individual of any kind wants to generate more good ideas, the solution isn’t to improve one’s hit rate, but to produce more ideas overall. Productivity is the mother of creativity, by providing the necessary conditions for lasting ideas to emerge. Which is something, I think, that most artists already intuitively grasp. Thanks to Simonton and Gladwell, we’re a little closer to understanding why.
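Simonton’s rule is, at bottom, a piece of simple arithmetic: if the odds that any single attempt becomes a hit stay roughly constant, then the expected number of hits is just that probability multiplied by the number of attempts, so the only lever a creator really controls is output. Here is a minimal sketch of that logic, a toy simulation of my own with an arbitrary hit rate, not anything drawn from Simonton’s actual data:

```python
import random

# Toy illustration of the "equal odds" rule: every creator is given the
# same chance that any single attempt turns out to be a hit, so the
# expected number of hits grows only with the number of attempts.
HIT_RATE = 0.05  # arbitrary illustrative value, not an empirical figure


def career(attempts, hit_rate=HIT_RATE, seed=None):
    """Simulate one career: return the number of hits for a fixed hit rate."""
    rng = random.Random(seed)
    return sum(1 for _ in range(attempts) if rng.random() < hit_rate)


# The "dozen ideas" versus the Bach-sized catalog of a thousand works.
for attempts in (12, 100, 1000):
    hits = career(attempts, seed=attempts)
    print(f"{attempts:5d} attempts -> {hits:3d} hits ({hits / attempts:.1%} hit rate)")
```

Run it with a larger number of attempts and the hit rate barely moves, but the count of hits climbs, which is the whole point: the thousand-work catalog wins on volume, not on batting average.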
James Wood on the “lazy” conventions of fiction
By [narrative] grammar, I mean the rather lazy stock-in-trade of mainstream realist fiction: the cinematic sweep, followed by the selection of small, telling details (“It was a large room, filled almost entirely by rows of antique computers; there was an odd smell of aftershave and bacon”); the careful mixing of dynamic and habitual detail (“At one of the computers, a man was unhurriedly eating a spring roll; traffic noise pierced the thick, sealed windows; an ambulance yelped by”); the preference for the concrete over the abstract (“She was twenty-nine, but still went home every evening to her mom’s ground-floor apartment in Queens, which doubled by day as a yoga studio”); vivid brevity of character-sketching (“Bob wore a bright-yellow T-shirt that read ‘Got Beer?,’ and had a small mole on his upper lip”); plenty of homely “filler” (“She ordered a beer and a sandwich, sat down at the table, and opened her computer”); more or less orderly access to consciousness and memory (“He lay on the bed and thought with shame of everything that had happened that day”); lucid but allowably lyrical sentences (“From the window, he watched the streetlights flicker on, in amber hesitations”). And this does not even touch on the small change of fictional narrative: how strange it is, when you think about it, that thousands of novels are published every year, in which characters all have different names (whereas, in real life, doesn’t one always have at least three friends named John, and another three named Elizabeth?), or in which characters quizzically “raise an eyebrow,” and angrily “knit their brows,” or just express themselves in quotation marks and single adverbs (“‘You know that’s not fair,’ he said, whiningly”). At this level of convention, there is a shorter distance than one would imagine between, say, Harriet the Spy and Disgrace.
—James Wood, in The New Yorker
A message from Tina Fey
Tina Fey’s charming article in this week’s New Yorker—in which she shares some of the lessons that she learned from nine years of working on Saturday Night Live—is essential reading for fans of our most unlikely celebrity writer, and especially for those trying to write for themselves. Her advice ranges from the aphoristic (“Producing is about discouraging creativity”) to the cheekily practical (“Never cut to a closed door”), but the big one, the one that every writer needs to bear in mind, is this:
The show doesn’t go on because it’s ready; it goes on because it’s eleven-thirty. This is something that Lorne [Michaels] has said often about Saturday Night Live, and it’s a great lesson in not being too precious about your writing. You have to try your hardest to be at the top of your game and improve every joke until the last possible second, but then you have to let it go.
At first, Fey’s point might seem more relevant for writers on a weekly sketch comedy show than, say, for novelists, whose writing process is both private and infinitely expandable. If anything, though, the advice is even more important for those of us working alone, without a fixed deadline, who might otherwise be inclined to polish our work until it’s perfect, luminous, and dead. This impulse has crippled great writers from Virgil (who asked on his deathbed for the unfinished Aeneid to be burned) to Ralph Ellison (who worked on his second novel for forty years and never came close to finishing it), as well as countless lesser writers who remained unpublished, and therefore unknown.
The fact is that a novel—or any work of art—isn’t complete until other people have the chance to see it. A flawed story that strangers can read from beginning to end is infinitely superior to three perfect chapters from an unfinished novel. And there are times when productivity is a much greater virtue than perfection. Every writer, whether novelist or playwright or sketch comedian, needs to be capable, when necessary, of cranking it out. Even if you intend to go back and polish what you’ve done, there are days, especially at the beginning of a project, when a novelist needs to be something of a hack. And that’s the way it should be. (One suspects that the backers of Spider-Man: Turn Off the Dark wish that Julie Taymor had displayed a little more of the hack and less of the artist.)
Which is why deadlines are so important. As Fey points out, writers in live television have deadlines whether they like it or not, but novelists—under contract or otherwise—need to establish deadlines as well. They can be as large as the deadline for completing the entire novel, and as small as the completion of a single chapter or paragraph. But once the deadline has been reached, you’ve got to move on. At the moment, I’m writing a chapter a day, and the results are far from perfect—but, as Fey notes, “perfect is overrated. Perfect is boring on live television.” And, sooner or later, every novel needs to go live.
Jeffrey Eugenides on second novels
No one is waiting for you to write your first book. No one cares if you finish it. But after your first, if it goes well, everyone seems to be waiting. You’re suddenly considered to be a professional writer, a fiction machine, but you know very well that you’re just getting going. You go from having nothing to lose to having everything to lose, and that’s what creates the panic…In my own case, I decided to give myself the time to learn the things I needed to know in order to write my second book, rather than just writing it in a rush because there were now people eager to read it. Finally, of course, I had to leave the country. In Berlin I regained the blessed anonymity I’d had while writing The Virgin Suicides. I got back to thinking only about the book…Now [since Middlesex] I’ve lost the anonymity I had in Berlin and so am moving to Chicago. If things continue to go well, I will end up living in Elko, Nevada.
—Jeffrey Eugenides, quoted in The New Yorker
Fooling yourself out of writer’s block
As I noted yesterday, writer’s block arises from a collision between the two inescapable facts of an author’s life: writing a novel requires inhuman dedication and daily hard work, but it also depends on inspiration, which can’t be forced into a regular schedule. The key to overcoming writer’s block, then, is for the author to fool himself, at least temporarily, into thinking that hard work alone is enough—or that writing is less mysterious an act than it actually is. Because good writing is mysterious and magical. But sometimes it’s useful to pretend that it isn’t—at least until it is again.
If this sounds confusing, that’s because novelists have trouble agreeing on how much writing ought to be like a regular job. If writing is only a job like any other, then lack of inspiration is no excuse for inactivity. Anthony Trollope, whom Joan Acocella quotes in her New Yorker article on writer’s block, takes this point of view to its logical extreme:
Let [writing] be to them as is his common work to the common laborer. No gigantic efforts will then be necessary. He need tie no wet towels round his brow, nor sit for thirty hours at his desk without moving,—as men have sat, or said that they have sat.
That is, if an author approaches writing as just another job, without relying on the vagaries of inspiration, then the problem of writer’s block simply disappears. Which is probably true. But it doesn’t mean that good writing really is just “common labor”—merely that this is a convenient fiction that writers need to tell themselves. Like most convenient fictions, it’s only partly correct. There are, in fact, times when all the hard work in the world can’t compensate for a lack of inspiration. But sometimes the only way to get inspired in the first place is to pretend that it doesn’t matter.
This is why most writer’s block “cures” treat writing as a form of muscle memory. For example, the writer is advised to retype the final paragraph from the previous day’s work, or to free associate, or even to type a favorite page from another author. The idea, it seems, is that once a writer’s hands start typing, they’ll eventually produce something good. Which sounds ridiculous—and yet it usually works, at least in my experience. It’s as if typing alone is enough to bring the creative faculty to life, or at least to fool it into thinking that something useful is going on. (The same thing is even more true of writing by hand, as I’ve discovered when making mind maps.)
This is why it’s also important to begin each writing day with a plan, even if that plan turns out to be a fiction in itself. As I’ve mentioned before, I write massive outlines for my stories, but these outlines are less about determining the actual plot, which can change radically from one draft to another, than about making writing seem like less of a leap in the dark. When I start each day’s work, I generally have an outline, some notes, and a target word count—as if writing were about nothing more than meeting a quota. It’s the security that this routine provides, even if it’s an illusion, that allows me to discover things that have nothing to do with planning or preparation.
Of course, sometimes writer’s block shades into its more benign counterpart—those periods of inactivity that are essential for any real original thinking. Tomorrow, then, I’ll be talking about the joyous flip side of writer’s block: creative procrastination.
The special terror of writer’s block
In less than a week, if all goes well, I’ll begin writing the first draft of Midrash, the sequel to Kamera, which I’m contracted to deliver to my publisher by the end of September. Finishing the manuscript on time will require a fairly ambitious schedule—basically a chapter a day when I’m writing, alternating with equally intense periods of research, outlining, and revision. I’ve tried to build some leeway into my schedule, in case I hit any unforeseen obstacles, but at this point, there isn’t a lot of wriggle room. If I reach a point where I can’t write for a month or more, this book isn’t going to get done on time. Which is why I’m going to tempt fate and spend the next few days talking about one of the most terrifying subjects in the world: writer’s block.
There are really two kinds of writer’s block. The more dramatic kind, and one I hope never to feel qualified to talk about, is the kind that lasts for years. As Joan Acocella points out in her very good New Yorker article on the subject, this sort of writer’s block—the kind that plagued Samuel Coleridge, Paul Valéry, and others—is less a professional problem than a metaphysical or linguistic predicament: the sense that inspiration or language itself is inadequate to express what the writer wants to say. I can’t dismiss this condition entirely, if only because the advancement of art depends on such struggles by a handful of exceptional authors. That said, for the vast majority of us, conventional language probably works just fine, and while daily drudgery is no substitute for inspiration, it’s often the next best thing.
The other kind of writer’s block, the kind that every author needs to confront at some point or another, comes from the collision of the two intractable facts of a writer’s life: one, that the heart of a novel, like it or not, is built on moments of inspiration that can’t be predicted or willed into being; and two, that these moments require hours of tedious work to bring them to fruition. When inspiration and discipline go hand in hand, a writer can easily work for six or more hours a day; if they don’t fall into line, the writer produces nothing. While such dry spells can last for anything from a few hours to months on end, it’s probably impossible to avoid them altogether. And they hurt like hell.
So what’s a writer to do? Tomorrow, I’m going to be talking about some of the methods I’ve used to get past writer’s block, whether it arises from fear, lack of ideas, or simple exhaustion. And by discussing it so openly, I’ll also ensure, by a kind of anticipatory magic, that it won’t actually happen to me. Right?