Alec Nevala-Lee

Thoughts on art, creativity, and the writing life.

Archive for the ‘Writing’ Category

This post has no title


In John McPhee’s excellent new book on writing, Draft No. 4, which I mentioned here the other day, he shares an anecdote about his famous profile of the basketball player Bill Bradley. McPhee was going over a draft with William Shawn, the editor of The New Yorker, “talking three-two zones, blind passes, reverse pivots, and the setting of picks,” when he realized that he had overlooked something important:

For some reason—nerves, what else?—I had forgotten to find a title before submitting the piece. Editors of every ilk seem to think that titles are their prerogative—that they can buy a piece, cut the title off the top, and lay on one of their own. When I was young, this turned my skin pink and caused horripilation. I should add that I encountered such editors almost wholly at magazines other than The New Yorker—Vogue, Holiday, the Saturday Evening Post. The title is an integral part of a piece of writing, and one of the most important parts, and ought not to be written by anyone but the writer of what follows the title. Editors’ habit of replacing an author’s title with one of their own is like a photo of a tourist’s head on the cardboard body of Mao Zedong. But the title missing on the Bill Bradley piece was my oversight. I put no title on the manuscript. Shawn did. He hunted around in the text and found six words spoken by the subject, and when I saw the first New Yorker proof the piece was called “A Sense of Where You Are.”

The dynamic that McPhee describes at other publications still exists today—I’ve occasionally bristled at the titles that have appeared over the articles that I’ve written, which is a small part of the reason that I’ve moved most of my nonfiction onto this blog. (The freelance market also isn’t what it used to be, but that’s a subject for another post.) But a more insidious factor has invaded even the august halls of The New Yorker, and it has nothing to do with the preferences of any particular editor. Opening the most recent issue, for instance, I see that there’s an article by Jia Tolentino titled “Safer Spaces.” On the magazine’s website, it becomes “Is There a Smarter Way to Think About Sexual Assault on Campus?”, with a line at the bottom noting that it appears in the print edition under its alternate title. Joshua Rothman’s “Jambusters” becomes “Why Paper Jams Persist.” A huge piece by David Grann, “The White Darkness,” which seems destined to get optioned for the movies, earns slightly more privileged treatment, and it merely turns into “The White Darkness: A Journey Across Antarctica.” But that’s the exception. When I go back to the previous issue, I find that the same pattern holds true. Michael Chabon’s “The Recipe for Life” is spared, but David Owen’s “The Happiness Button” is retitled “Customer Satisfaction at the Push of a Button,” Rachel Aviv’s “The Death Debate” becomes “What Does It Mean to Die?”, and Ian Frazier’s “Airborne” becomes “The Trippy, High-Speed World of Drone Racing.” Which suggests to me that if McPhee’s piece appeared online today, it would be titled something like “Basketball Player Bill Bradley’s Sense of Where He Is.” And that’s if he were lucky.

The reasoning here isn’t a mystery. Headlines are written these days to maximize clicks and shares, and The New Yorker isn’t immune, even if it sometimes raises an eyebrow. Back in 2014, Maria Konnikova wrote an article for the magazine’s website titled “The Six Things That Make Stories Go Viral Will Amaze, and Maybe Infuriate, You,” in which she explained one aspect of the formula for online headlines: “The presence of a memory-inducing trigger is also important. We share what we’re thinking about—and we think about the things we can remember.” Viral headlines can’t be allusive, make a clever play on words, or depend on an evocative reference—they have to spell everything out. (To build on McPhee’s analogy, it’s less like a tourist’s face on the cardboard body of Mao Zedong than an oversized foam head of Mao himself.) A year later, The New Yorker ran an article by Andrew Marantz on the virality expert Emerson Spartz, and it amazed and maybe infuriated me. I’ve written about this profile elsewhere, but looking it over again now, my eye was caught by these lines:

Much of the company’s success online can be attributed to a proprietary algorithm that it has developed for “headline testing”—a practice that has become standard in the virality industry…Spartz’s algorithm measures which headline is attracting clicks most quickly, and after a few hours, when a statistically significant threshold is reached, the “winning” headline automatically supplants all others. “I’m really, really good at writing headlines,” he told me.

And it’s worth noting that while Marantz’s piece appeared in print as “The Virologist,” in an online search, it pops up as “King of Clickbait.” Even as the magazine gently mocked Spartz, it took his example to heart.
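For readers curious about the mechanics, here’s a minimal sketch of how a headline test of this sort might work, assuming the common approach of treating it as an A/B test that stops once one variant’s click-through rate pulls significantly ahead. This is hypothetical Python of my own, not Spartz’s proprietary algorithm; the Variant class, the two-proportion z-test, and the 1.96 threshold are all my assumptions.

```python
import math
from dataclasses import dataclass

@dataclass
class Variant:
    """One candidate headline and its running click statistics."""
    headline: str
    impressions: int = 0
    clicks: int = 0

    @property
    def rate(self) -> float:
        # Click-through rate so far (0.0 before any impressions).
        return self.clicks / self.impressions if self.impressions else 0.0

def z_score(a: Variant, b: Variant) -> float:
    """Two-proportion z-test comparing the click-through rates of two headlines."""
    if not a.impressions or not b.impressions:
        return 0.0
    pooled = (a.clicks + b.clicks) / (a.impressions + b.impressions)
    se = math.sqrt(pooled * (1 - pooled) * (1 / a.impressions + 1 / b.impressions))
    return (a.rate - b.rate) / se if se else 0.0

def pick_winner(variants: list[Variant], z_threshold: float = 1.96) -> Variant | None:
    """Return the leading headline once its lead over the runner-up is significant."""
    ranked = sorted(variants, key=lambda v: v.rate, reverse=True)
    leader, runner_up = ranked[0], ranked[1]  # assumes at least two variants
    if z_score(leader, runner_up) >= z_threshold:
        return leader   # the "winning" headline supplants all others
    return None         # keep rotating variants and collecting clicks
```

In practice, a real system would also have to account for time-of-day effects and multiple comparisons, but the basic loop of serve, count, test, and promote the winner is the whole trick.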

None of this is exactly scandalous, but when you think of a title as “an integral part of a piece of writing,” as McPhee does, it’s undeniably sad. There isn’t any one title for an article anymore, and most readers will probably only see its online incarnation. And this isn’t because of an editor’s tastes, but the result of an impersonal set of assumptions imposed on the entire industry. Emerson Spartz got his revenge on The New Yorker—he effectively ended up writing its headlines. And while I can’t blame any media company for doing whatever it can to stay viable, it’s also a real loss. McPhee is right when he says that selecting a title is an important part of the process, and in a perfect world, it would be left up to the writer. (It can even lead to valuable insights in itself. When I was working on my article on the fiction of L. Ron Hubbard, I was casting about randomly for a title when I came up with “Xenu’s Paradox.” I didn’t know what it meant, but it led me to start thinking about the paradoxical aspects of Hubbard’s career, and the result was a line of argument that ended up being integral not just to the article, but to the ensuing book. And I was amazed when it survived intact on Longreads.) When you look at the grindingly literal, unpoetic headlines that currently populate the homepage of The New Yorker, it’s hard not to feel nostalgic for an era in which an editor might nudge a title in the opposite direction. In 1966, when McPhee delivered a long piece on oranges in Florida, William Shawn read it over, focused on a quotation from the poet Andrew Marvell, and called it “Golden Lamps in a Green Night.” McPhee protested, and the article was finally published under the title that he had originally wanted. It was called “Oranges.”

Written by nevalalee

February 16, 2018 at 8:50 am

Instant karma


Last year, my wife and I bought an Instant Pot. (If you’re already dreading the rest of this post, I promise in advance that it won’t be devoted solely to singing its praises.) If you somehow haven’t encountered one before, it’s basically a programmable pressure cooker. It has a bunch of other functions, including slow cooking and making yogurt, but aside from its sauté setting, I haven’t had a chance to use them yet. At first, I suspected that it would be another appliance, like our bread maker, that we would take out of the box once and then never touch again, but somewhat to my surprise, I’ve found myself using it on a regular basis, and not just as a reliable topic for small talk at parties. Its great virtue is that it allows you to prepare certain tasty but otherwise time-consuming recipes—like the butter chicken so famous that it received its own writeup in The New Yorker—with a minimum of fuss. As I write these lines, my Instant Pot has just finished a batch of soft-boiled eggs, which is its most common function in my house these days, and I might use it tomorrow to make chicken adobo. Occasionally, I’ll be mildly annoyed by its minor shortcomings, such as the fact that an egg set for four minutes at low pressure might have a perfect runny yolk one day and verge on hard-boiled the next. It saves time, but when you add in the waiting period to build and then release the pressure, which isn’t factored into most recipes, it can still take an hour or more to make dinner. But it still marks the most significant step forward in my life in the kitchen since Mark Bittman taught me how to use the broiler more than a decade ago.

My wife hasn’t touched it. In fact, she probably wouldn’t mind if I said that she was scared of the Instant Pot—and she isn’t alone in this. A couple of weeks ago, the Wall Street Journal ran a feature by Ellen Byron titled “America’s Instant-Pot Anxiety,” with multiple anecdotes about home cooks who find themselves afraid of their new appliance:

Missing from the enclosed manual and recipe book is how to fix Instant Pot anxiety. Debbie Rochester, an elementary-school teacher in Atlanta, bought an Instant Pot months ago but returned it unopened. “It was too scary, too complicated,” she says. “The front of the thing has so many buttons.” After Ms. Rochester’s friends kept raving about their Instant Pot meals, she bought another one…Days later, Ms. Rochester began her first beef stew. After about ten minutes of cooking, it was time to release the pressure valve, the step she feared most. Ms. Rochester pulled her sweater over her hand, turned her back and twisted the knob without looking. “I was praying that nothing would blow up,” she says.

Elsewhere, the article quotes Sharon Gebauer of San Diego, who just wanted to make beef and barley soup, only to be filled with sudden misgivings: “I filled it up, started it pressure cooking, and then I started to think, what happens when the barley expands? I just said a prayer and stayed the hell away.”

Not surprisingly, the article has inspired derision from Instant Pot enthusiasts, among whom one common response seems to be: “People are dumb. They don’t read instruction manuals.” Yet I can testify firsthand that the Instant Pot can be intimidating. The manual is thick and not especially organized, and it does a poor job of explaining such crucial features as the steam release and float valve. (I had to watch a video to learn how to handle the former, and I didn’t figure out what the latter was until I had been using the pot for weeks.) But I’ve found that you can safely ignore most of it and fall back on a few basic tricks—as soon as you manage to get through at least one meal. Once I successfully prepared my first dish, my confidence increased enormously, and I barely remember how it felt to be nervous around it. And that may be the single most relevant point about the cult that the Instant Pot has inspired, which rivals the most fervent corners of fan culture. As Kevin Roose noted in a recent article in the New York Times:

A new religion has been born…Its deity is the Instant Pot, a line of electric multicookers that has become an internet phenomenon and inspired a legion of passionate foodies and home cooks. These devotees—they call themselves “Potheads”—use their Instant Pots for virtually every kitchen task imaginable: sautéing, pressure-cooking, steaming, even making yogurt and cheesecakes. Then, they evangelize on the internet, using social media to sing the gadget’s praises to the unconverted.

And when you look at the Instant Pot from a certain angle, you realize that it has all of the qualities required to create a specific kind of fan community. There’s an initial learning curve that’s daunting enough to keep out the casuals, but not so steep that it prevents a critical mass of enthusiasts from forming. Once you learn the basics, you forget how intimidating it seemed when you were on the outside. And it has a huge body of associated lore that discourages newbies from diving in, even if it doesn’t matter much in practice. (In the months that I’ve been using the Instant Pot, I’ve never used anything except the manual pressure and sauté functions, and I’ve disregarded the rest of the manual, just as I draw a blank on pretty much every element of the mytharc on The X-Files.) Most of all, perhaps, it takes something that is genuinely good, but imperfect, and elevates it into an object of veneration. There are plenty of examples in pop culture, from Doctor Who to Infinite Jest, and perhaps it isn’t a coincidence that the Instant Pot has a vaguely futuristic feel to it. A science fiction or fantasy franchise can turn off a lot of potential fans because of its history and complicated externals, even if most are peripheral to the actual experience. Using the Instant Pot for the first time is probably easier than trying to get into Doctor Who, or so I assume—I’ve steered clear of that franchise for many of the same reasons, reasonable or otherwise. There’s nothing wrong with being part of a group drawn together by the shared object of your affection. But once you’re on the inside, it can be hard to put yourself in the position of someone who might be afraid to try it because it has so many buttons.

Written by nevalalee

February 15, 2018 at 8:45 am

The fictional sentence


Of all the writers of the golden age of science fiction, the one who can be hardest to get your head around is A.E. van Vogt. He isn’t to everyone’s taste—many readers, to quote Alexei and Cory Panshin’s not unadmiring description, find him “foggy, semi-literate, pulpish, and dumb”—but he’s undoubtedly a major figure, and he was second only to Robert A. Heinlein and Isaac Asimov when it came to defining what science fiction became in the late thirties and early forties. (If he isn’t as well known as they are, it’s largely because he was taken out of writing by dianetics at the exact moment that the genre was breaking into the mainstream.) Part of his appeal is that his stories remain compelling and readable despite their borderline incoherence, and he was unusually open about his secret. In the essay “My Life Was My Best Science Fiction Story,” which was originally published in the volume Fantastic Lives, van Vogt wrote:

I learned to write by a system propounded in a book titled The Only Two Ways to Write a Story by John W. Gallishaw (meaning by flashback or in consecutive sequence). Gallishaw had made an in-depth study of successful stories by great authors. He observed that the best of them wrote in what he called “presentation units” of about eight hundred words. Each of these units contained five steps. And every sentence in it was a “fictional sentence.” Which means that it was written either with imagery, or emotion, or suspense, depending on the type of story.

So what did these units look like? Used copies of Gallishaw’s book currently go for well over a hundred dollars online, but van Vogt helpfully summarized the relevant information:

The five steps can be described as follows: 1) Where, and to whom, is it happening? 2) Make clear the scene purpose (What is the immediate problem which confronts the protagonist, and what does it require him to accomplish in this scene?) 3) The interaction with the opposition, as he tries to achieve the scene purpose. 4) Make the reader aware that he either did accomplish the scene purpose, or did not accomplish it. 5) In all the early scenes, whether protagonist did or did not succeed in the scene purpose, establish that things are going to get worse. Now, the next presentation unit-scene begins with: Where is all this taking place. Describe the surroundings, and to whom it is happening. And so forth.

Over the years, this formula was distorted and misunderstood, so that a critic could write something like “Van Vogt admits that he changes the direction of his plot every eight hundred words.” And even when accurately stated, it can come off as bizarre. Yet it’s really nothing more than the principle that every narrative should consist of a series of objectives, which I’ve elsewhere listed among the most useful pieces of writing advice that I know. Significantly, it’s one of the few elements of craft that can be taught and learned by example. Van Vogt learned it from Gallishaw, while I got it from David Mamet’s On Directing Film, and I’ve always seen it as a jewel of wisdom that can be passed in almost apostolic fashion from one writer to another.

When we read van Vogt’s stories, of course, we aren’t conscious of this structure, and if anything, we’re more aware of their apparent lack of form. (As John McPhee writes in his wonderful new book on writing: “Readers are not supposed to notice the structure. It is meant to be about as visible as someone’s bones.”) Yet we still keep reading. It’s that sequence of objectives that keeps us oriented through the centrifugal wildness that we associate with van Vogt’s work—and it shouldn’t come as a surprise that he approached the irrational side as systematically as he did everything else. I’d heard at some point that van Vogt based many of his plots on his dreams, but it wasn’t until I read his essay that I understood what this meant:

When you’re writing, as I was, for one cent a word, and are a slow writer, and the story keeps stopping for hours or days, and your rent is due, you get anxious…I would wake up spontaneously at night, anxious. But I wasn’t aware of the anxiety. I thought about story problems—that was all I noticed then. And so back to sleep I went. In the morning, often there would be an unusual solution. All my best plot twists came in this way…It was not until July 1943 that I suddenly realized what I was doing. That night I got out our alarm clock and moved into the spare bedroom. I set the alarm to ring at one and one-half hours. When it awakened me, I reset the alarm for another one and one-half hours, thought about the problems in the story I was working on—and fell asleep. I did that altogether four times during the night. And in the morning, there was the unusual solution, the strange plot twist…So I had my system for getting to my subconscious mind.

This isn’t all that different from Salvador Dali’s advice on how to take a nap. But the final sentence is the kicker: “During the next seven years, I awakened myself about three hundred nights a year four times a night.” When I read this, I felt a greater sense of kinship with van Vogt than I have with just about any other writer. Much of my life has been spent searching for tools—from mind maps to tarot cards—that can be used to systematically incorporate elements of chance and intuition into what is otherwise a highly structured process. Van Vogt’s approach comes as close as anything I’ve ever seen to the ideal of combining the two on a reliable basis, even if we differ on some of the details. (For instance, I don’t necessarily buy into Gallishaw’s notion that every action taken by the protagonist needs to be opposed, or that the situation needs to continually get worse. As Mamet writes in On Directing Film: “We don’t want our protagonist to do things that are interesting. We want him to do things that are logical.” And that’s often enough.) But it’s oddly appropriate that we find such rules in the work of a writer who frequently came across as chronically disorganized. Van Vogt pushed the limits of form further than any other author of the golden age, and it’s hard to imagine Alfred Bester or Philip K. Dick without him. But I’m sure that there were equally visionary writers who never made it into print because they lacked the discipline, or the technical tricks, to get their ideas under control. Van Vogt’s stories always seem on the verge of flying apart, but the real wonder is that they don’t. And his closing words on the subject are useful ones indeed: “It is well to point out again that these various systems were, at base, just automatic reactions to the writing of science fiction. The left side of the brain got an overdose of fantasizing flow from the right side, and literally had to do something real.”

The last questions


For two decades, the writer and literary agent John Brockman has posed a single question on an annual basis to a group of scientists and other intellectuals. The notion of such a question—which changes every year—was inspired by the work of the late artist and philosopher James Lee Byars, whose declaration of intent serves as a motto for the entire project: “To arrive at the edge of the world’s knowledge, seek out the most complex and sophisticated minds, put them in a room together, and have them ask each other the questions they are asking themselves.” Brockman publishes the responses on his website, and the result resonates so strongly with just about everything that I love that I’m embarrassed to say that I hadn’t heard of it until this week. (I owe my discovery of it to an article by Brian Gallagher in the excellent magazine Nautilus.) It’s an attempt to take the pulse of what Brockman calls “the third culture, [which] consists of those scientists and other thinkers in the empirical world who, through their work and expository writing, are taking the place of the traditional intellectual in rendering visible the deeper meanings of our lives, redefining who and what we are.” Questions from recent years include “What is your favorite deep, elegant or beautiful explanation?” and “What scientific concept would improve everyone’s cognitive toolkit?” And the result is manifestly so useful, interesting, and rich that I’m almost afraid to read too much of it at once.

This year, to commemorate the twentieth anniversary of the project, Brockman issued a somewhat different challenge, asking his usual group of correspondents: “What is the last question?” By way of explanation, he quotes an essay that he originally wrote in the late sixties, when he first became preoccupied with the idea of asking questions at all:

The final elegance: assuming, asking the question. No answers. No explanations. Why do you demand explanations? If they are given, you will once more be facing a terminus. They cannot get you any further than you are at present…Our kind of innovation consists not in the answers, but in the true novelty of the questions themselves; in the statement of problems, not in their solutions. What is important is not to illustrate a truth—or even an interrogation—known in advance, but to bring to the world certain interrogations…A total synthesis of all human knowledge will not result in huge libraries filled with books, in fantastic amounts of data stored on servers. There’s no value any more in amount, in quantity, in explanation. For a total synthesis of human knowledge, use the interrogative.

Brockman strongly implies that this year’s question will be the last. (To which I can only respond with a lyric from The Simpsons: “To close this place now would be twisted / We just learned this place existed.”) And he closes by presenting the final question: “Ask ‘The Last Question,’ your last question, the question for which you will be remembered.”

I’ve just spent half an hour going through the responses, which are about as fascinating as you’d expect. As I read the questions, I felt that some of them could change lives, if they were encountered at just the right time. (If you know a bright teenager, you could do worse than to send the list his or her way. After all, you just never know.) And they’re a mine of potential ideas for science fiction writers. Here are a few of my favorites:

Jimena Canales: “When will we accept that the most accurate clocks will have to advance regularly sometimes, irregularly most of the time, and at times run counterclockwise?”
Bart Kosko: “What is the bumpiest and highest-dimensional cost surface that our best computers will be able to search and still find the deepest cost well?”
Julia Clarke: “What would comprise the most precise and complete sonic representation of the history of life?”
Stuart Firestein: “How many incommensurable ideas can we hold in our mind simultaneously?”
George Dyson: “Why are there no trees in the ocean?”
Andrew Barron: “What would a diagram that gave a complete understanding of imagination need to be?”

Not all are equally interesting, and some of the respondents were evidently daunted by the challenge. A few of the submissions feel like an answer—or an opinion—with a question mark stuck awkwardly on the end. As Gallagher notes in Nautilus: “The question ended up prompting many of the academics among the responders to just restate one of their research targets, albeit succinctly.” The computer scientist Scott Aaronson wrote on his blog:

I tried to devise a single question that gestured toward the P vs. NP problem, and the ultimate physical limits of computation, and the prospects for superintelligent AI, and the enormity of what could be Platonically lying in wait for us within finite but exponentially large search spaces, and the eternal nerd’s conundrum, of the ability to get the right answers to clearly-stated questions being so ineffectual in the actual world. I’m not thrilled with the result, but reading through the other questions makes it clear just how challenging it is to ask something that doesn’t boil down to: “When will the rest of the world recognize the importance of my research topic?”

But it’s impossible to read it without wondering what your own question would be. (None of the participants went with what many science fiction fans know is the real last question: “How can the net amount of entropy of the universe be massively decreased?” But maybe they knew that there’s insufficient data for a meaningful answer.) I don’t know what mine is yet, but this one from Jonathan Gottschall comes fairly close, and it can serve as a placeholder for now: “Are stories bad for us?”

Written by nevalalee

February 8, 2018 at 8:43 am

A clockwork urge


I haven’t always been a fan of the novels of Martin Amis, but I’ve long admired his work as a critic, and the publication next week of his new collection The Rub of Time feels like a major event. For every insufferable turn of phrase—the sort that made his father Kingsley Amis lament his son’s “terrible compulsive vividness” and his “constant demonstrating of his command of English”—we get an insight like this, from an essay on Anthony Burgess’s A Clockwork Orange:

The day-to-day business of writing a novel often seems to consist of nothing but decisions—decisions, decisions, decisions. Should this paragraph go here? Or should it go there? Can that chunk of exposition be diversified by dialogue? At what point does this information need to be revealed? Ought I to use a different adjective and a different adverb in that sentence? Or no adverb and no adjective? Comma or semicolon? Colon or dash? And so on.

This gets to the heart of writing in a way that only a true novelist could manage, not just in its description of the daily grind, which can seem endless, but in its implication that readers don’t fully appreciate the work involved. I’m as guilty of this as anyone else. After reading a dismissive or critical note on something I’ve written, I often want to ask: “Don’t they appreciate all those choices I made?”

Of course, it isn’t the reader’s job to admire an author’s choices—although Amis’s own style occasionally seems designed to inspire nothing else. (In a book like Time’s Arrow, the act of continuous appreciation becomes exhausting after just a few pages.) For most authors, though, the process of making choices has to remain a source of private satisfaction, or, at best, a secret that we share with other writers. Revealingly, Amis’s soliloquy on “decisions, decisions, decisions” feels less like a commentary on A Clockwork Orange in particular than like something he just felt like getting off his chest. He continues:

These decisions are minor, clearly enough, and they are processed more or less rationally by the conscious mind. All the major decisions, by contrast, have been reached before you sit down at your desk; and they involve not a moment’s thought. The major decisions are inherent in the original frisson—in the enabling throb or whisper (a whisper that says, Here is a novel you may be able to write). Very mysteriously, it is the unconscious mind that does the heavy lifting. No one knows how it happens.

After evoking that mystery, Amis simply moves on, even though the question he poses is central to writing, or any creative activity. How do the intuitive choices that we make before the work begins inform the decisions that follow for months or years afterward?

In some ways, this is also a question about life itself, in which we spend much of our energy sorting through the unforeseen implications of choices that we made without much thought at the time. You might think that novelists have more control over the books that they write than over their own lives, but that isn’t necessarily true. In both cases, they’re doing the best with what they have, and the question of how much of it is free will and how much is out of their hands must necessarily remain unresolved. Much of the craft of writing lies in making such decisions more bearable. Some of it consists of self-imposed rules that guide your choices in the right direction. Occasionally, it lies in sensibly reducing the number of choices that you can make at any one time. A while back, I wrote a post on Barry Schwartz’s book The Paradox of Choice, in which he notes that shoppers are often happier when their options are constrained. It can be more satisfying to choose between two or three different pairs of jeans than fifty, even though the latter naturally increases your odds of finding one that you like. What matters isn’t the richness of options at your disposal, but your comfort with the process of making choices itself, and sometimes you actually benefit from reducing your range of possible action. That’s part of the reason why constraints are so important in art. Once you choose a form, a subject, or a set of arbitrary limits, you paradoxically free yourself from having to consider all of the possible paths. The subset that remains may not be any better than the alternative, but it will keep you from going insane.

And what Amis calls “the unconscious mind” can also be shaped by experience. Most writers have more ideas than they ever end up using, and it’s only through firsthand knowledge of your own strengths that you can discriminate between “the enabling throb or whisper” that will go somewhere and one that will lead you into a dead end. Afterward, it’s a matter of entrusting yourself to the logic of what the poet John Ciardi described so beautifully:

Nothing in a good poem happens by accident; every word, every comma, every variant spelling must enter as an act of the poet’s choice. A poem is a machine for making choices. The mark of a good poet is the refusal to make easy or cheap choices. The better the poet, the greater the demands he makes upon himself, and the higher he sets his level of choice. Thus, a good poem is not only an act of mind but an act of devotion to mind. The poet who chooses cheaply or lazily is guilty of aesthetic acedia, and he is lost thereby. The poet who spares nothing in his search for the most demanding choices is shaping a human attention that offers itself as a high—and joyful—example to all readers of mind and devotion.

Every work of art is a machine for making choices. Sometimes it operates fairly smoothly. Occasionally it breaks down. But it all justifies itself in those rare moments of flow in which it seems to go like clockwork.

Written by nevalalee

February 2, 2018 at 8:44 am

The rough draft


In his book The Unknown Craftsman, Soetsu Yanagi, the founder of the folk craft movement in Japan, writes of an encounter that profoundly shaped his understanding of design:

I was favored with a rare chance of visiting the Korean village where beautiful lathed wood objects are made. When I got there after a long, hard trip, I noticed at once by their workshop many big blocks of pine wood ready for the hand lathe. But to my great astonishment, all of them were still sap green and were by no means ready for immediate use. To my surprise, a Korean craftsman took one of them, set it in a lathe, and began forthwith to turn it. The pine block was so fresh that turning made a wet spray, which gave off a scent of resin. This perplexed me very much because it is against common sense in lathe work. So I asked the artisan, “Why do you use such green material? Cracks will come out pretty soon!”

“What does it matter?” was the calm answer. I was amazed by this Zen monk-like response. I felt sweat on my forehead. Yet I dared to ask him, “How can you use something that leaks?” “Just mend it,” was his simple answer.

Yanagi concludes: “With amazement I discovered that they mend them so artistically and beautifully that the cracked piece seems better than the perfect one. So they do not mind whether it cracks or not.”

I first encountered this story in the book The Phenomenon of Life by the architect Christopher Alexander, who uses it to illustrate the principle of “roughness,” which is one of the fifteen fundamental properties that he associates with living works of art. After sharing his own version of Yanagi’s anecdote, Alexander comments:

It does not mean that the old man doesn’t care about the bowls he makes. But he is deeply relaxed about it, not panicked. And in this state where nothing is quite so important, nothing is so terribly, heart-twistingly vital, he knows that he can let the greatest beauty show itself—and this is the only state of mind in which the property of roughness and the breath that lies in a thing which has the “it” in it can ever come to life.

This strikes me as a profound insight, and it has important implications for how we approach the first drafts of anything that we do. I’ve frequently written here about the importance of doing a rough first pass on any project, and of not going back to read or revise what you’ve done until the whole thing is complete. This is basically a pragmatic rule, born out of my observation that I was much more likely to finish something if I pushed through to the end without looking back. When you stop to fix every small problem along the way—or, even worse, wait until everything seems perfect before you start—you run the risk of never completing anything at all. And the notion of starting with green wood, which will inevitably lead to imperfections, is a memorable expression of the fact that sometimes it’s best to just get started and figure out the rest later.

But there’s also something about roughness that can be desirable in itself. We tend to think of a rough draft as something to be tolerated until it can be corrected—we just have to live with it for long enough to get to the point where we can fix it. (This is the insight that underlies one of my favorite pieces of creative advice, which William Goldman attributes to the theater producer George Abbott, who was speaking to one of his choreographers: “Well, have them do something! That way we’ll have something to change.”) But roughness is more than a means to an end. Alexander notes that many works of art that we cherish have a certain rough quality to their surfaces, but he cautions us against misreading it: “We probably attribute this charm to the fact that the bowl is handmade and that we can see, in the roughness, the trace of a human hand, and know therefore that it is personal, full of human error. This interpretation is fallacious, and has entirely the wrong emphasis.” He argues that roughness is a creative strategy that comes into play when perfect regularity would fail on the level of the whole, as in a rug with a complicated pattern, which requires the weaver to maintain a high level of awareness at all times:

If the weaver wanted instead to calculate or plot out a so-called “perfect” solution to the corner [of the rug], she would then have to abandon her constant attention to the right size, right shape, and right positive-negative of the border elements, because they would all be determined mechanically by outside considerations, i.e. by the grid of the border. The corner solution would then dominate the design in a way which would destroy the weaver’s ability to do what is just right at each point.

And Alexander’s conclusion is worth remembering: “The seemingly rough solution—which seems superficially inaccurate—is in fact more precise, not less so, because it comes about as a result of paying attention to what matters most, and letting go of what matters less.” Which seems to me like the most important point of all. Roughness allows an artist to adapt to problems in real time, preserving that ideal state of attentiveness that arises when each unit is addressed on its own terms, rather than as a component in an artificial scheme. When combined with an overall feel for order, it allows for flexibility and improvisation in the moment, but only when approached with what Alexander calls an “egolessness, which allows each part to be made exactly as it needs to be.” And this also requires a paradoxical detachment from the ideal of roughness itself. As Yanagi writes of Korean lathe workers:

They have neither attachment to the perfect piece nor to the imperfect…Since they use green wood, the wares inevitably deform in drying. So this asymmetry is but a natural outcome of their state of mind, not the result of conscious choice. That is to say, their minds are free from any attachment to symmetry as well as asymmetry. The deformation of their work is the natural result of nonchalance, free from any restriction…They make their asymmetrical lathe work not because they regard asymmetrical form as beautiful or symmetrical as ugly, but because they make everything without such polarized conceptions. They are quite free from the conflict between the beautiful and the ugly. Here, deeply buried, is the mystery of the endless beauty of Korean wares. They just make what they make without any pretension.

This sounds like it should be the easiest thing in the world to do, but it’s really the hardest. And perhaps the only way to do it reliably is to make a point of working whenever we can with green wood.

Written by nevalalee

January 31, 2018 at 9:12 am

Going with the flow


On July 13, 1963, New York University welcomed a hundred attendees to an event called the Conference on Education for Creativity in the Sciences. The gathering, which lasted for three days, was inspired by the work of Dr. Myron A. Coler, the director of the school’s Creative Science Program. There isn’t a lot of information available online about Coler, who was trained as an electrical engineer, and the best source I’ve found is an unsigned Talk of the Town piece that ran earlier that week in The New Yorker. It presents Coler as a scholar who was interested in the problem of scientific creativity long before it became fashionable: “What is it, how does it happen, how is it fostered—can it be isolated, measured, nurtured, predicted, directed, and so on…By enhancing it, you produce more from what you have of other resources. The ability to exploit a resource is in itself a resource.” He conducted monthly meetings for years with a select group of scientists, writing down everything that they had to say on the subject, including a lot of wild guesses about how to identify creative or productive people. Here’s my favorite:

One analyst claims that one of the best ways that he knows to test an individual is to take him out to dinner where lobster or crab is served. If the person uses his hands freely and seems to enjoy himself at the meal, he is probably well adjusted. If, on the other hand, he has trouble in eating the crab, he probably will have trouble in his relations with people also.

The conference was overseen by Jerome B. Wiesner, another former electrical engineer, who was appointed by John F. Kennedy to chair the President’s Science Advisory Committee. Wiesner’s interest lay in education, and particularly in identifying and training children who showed an early aptitude for science. In an article that was published a few years later in the journal Daedalus, Wiesner listed some of the attributes that were often seen in such individuals, based on the work of the pioneering clinical psychologist Anne Roe:

A childhood environment in which knowledge and intellectual effort were so highly valued for themselves that an addiction to reading and study was firmly established at an early age; an unusual degree of independence which, among other things, led them to discover early that they could satisfy their curiosity by personal efforts; an early dependence on personal resources, and on the necessity to think for oneself; an intense drive that generated concentrated, persistent, time-ignoring efforts in their studies and work; a secondary-school training that tended to emphasize science rather than the humanities; and high, but not necessarily remarkably high, intelligence.

But Wiesner also closed on a note of caution: “We do not now have useful techniques for predicting with comfortable reliability which individuals will turn out to be creative in the sciences or in any other field, no matter how great an investment we make in their education. Nor does it appear likely that such techniques will be developed in the immediate future.”

As it happened, one of the attendees at the conference was Isaac Asimov, who took the bus down to New York from Boston. Years afterward, he said that he couldn’t remember much about the experience—he was more concerned by the fact that he lost the wad of two hundred dollars that he had brought as emergency cash—and that his contributions to the discussion weren’t taken seriously. When the question came up of how to identify potentially creative individuals at a young age, he said without hesitation: “Keep an eye peeled for science-fiction readers.” No one else paid much attention, but Asimov didn’t forget the idea, and he wrote it up later that year in his essay “The Sword of Achilles,” which was published by The Bulletin of the Atomic Scientists. His views on the subject were undoubtedly shaped by his personal preferences, but he was also probably right. (He certainly met most of the criteria listed by Wiesner, aside from “an unusual degree of independence,” since he was tied down for most of his adolescence to his father’s candy store.) And science fiction had more in common with Coler and Wiesner’s efforts than they might have appreciated. The editor John W. Campbell had always seen the genre as a kind of training program that taught its readers how to survive in the future, and Wiesner described “tomorrow’s world” in terms that might have been pulled straight from Astounding: “That world will be more complex than it is today, will be changing more rapidly than now, and it will have jobs only for the well trained.” Wiesner closed with a quotation from the philosopher Alfred North Whitehead:

In the conditions of modern life, the rule is absolute, the race which does not value trained intelligence is doomed…Today we maintain ourselves. Tomorrow science will have moved forward one more step, and there will be no appeal from the judgment which will then be pronounced on the uneducated.

These issues tend to come to the forefront during times of national anxiety, and it’s no surprise that we’re seeing a resurgence in them today. In last week’s issue of The New Yorker, Adam Gopnik rounded up a few recent titles on education and child prodigies, which reflect “the sense that American parents have gone radically wrong, making themselves and their kids miserable in the process, by hovering over them like helicopters instead of observing them from a watchtower, at a safe distance.” The catch is that while the current wisdom says that we should maximize our children’s independence, most child prodigies were the result of intensive parental involvement, which implies that the real secret to creative achievement lies somewhere else. And the answer may be right in front of us. As Gopnik writes of the author Ann Hulbert’s account of the piano prodigy Lang Lang:

Lang Lang admits to the brutal pressures placed on him by his father…He was saved because he had, as Hulbert writes, “carved out space for a version of the ‘autotelic experience’—absorption in an activity purely for its own sake, a specialty of childhood.” Following the psychologist Mihaly Csikszentmihalyi, Hulbert maintains that it was being caught in “the flow,” the feeling of the sudden loss of oneself in an activity, that preserved Lang Lang’s sanity: “The prize always beckoned, but Lang was finding ways to get lost in the process.”

This is very close to the “concentrated, persistent, time-ignoring efforts” that Wiesner described fifty years ago, as well as his characterization of learning as “an addiction.” Gopnik concludes: “Accomplishment, the feeling of absorption in the flow, of mastery for its own sake, of knowing how to do this thing, is what keeps all of us doing what we do, if we like what we do at all.” And it seems to have been this sense of flow, above all else, that led Asimov to write more than four hundred books. He was addicted to it. As he once wrote to Robert A. Heinlein: “I like it in the attic room with the wallpaper. I’ve been all over the galaxy. What’s left to see?”
