Alec Nevala-Lee

Thoughts on art, creativity, and the writing life.


The A/B Test


In this week’s issue of The New York Times Magazine, there’s a profile of Mark Zuckerberg by Farhad Manjoo, who describes how the founder of Facebook is coming to terms with his role in the world in the aftermath of last year’s election. I find myself thinking about Zuckerberg a lot these days, arguably even more than I use Facebook itself. We just missed overlapping in college, and with one possible exception, which I’ll mention later, he’s the most influential figure to emerge from those ranks in the last two decades. Manjoo depicts him as an intensely private man obliged to walk a fine line in public, leading him to be absurdly cautious about what he says: “When I asked if he had chatted with Obama about the former president’s critique of Facebook, Zuckerberg paused for several seconds, nearly to the point of awkwardness, before answering that he had.” Zuckerberg is trying to figure out what he believes—and how to act—under conditions of enormous scrutiny, but he also has more resources at his disposal than just about anyone else in history. Here’s the passage in the article that stuck with me the most:

The people who work on News Feed aren’t making decisions that turn on fuzzy human ideas like ethics, judgment, intuition, or seniority. They are concerned only with quantifiable outcomes about people’s actions on the site. That data, at Facebook, is the only real truth…This ideal runs so deep that the people who make News Feed often have to put aside their own notions of what’s best. “One of the things we’ve all learned over the years is that our intuition can be wrong a fair amount of the time,” John Hegeman, the vice president of product management and a News Feed team member, told me. “There are things you don’t expect will happen. And we learn a lot from that process: Why didn’t that happen, and what might that mean?”

Reading this, I began to reflect on how rarely we actually test our intuitions. I’ve spoken a lot on this blog about the role of intuitive thinking in the arts and sciences, mostly because it doesn’t get the emphasis it deserves, but there’s also no guarantee that intuition will steer us in the right direction. The psychologist Daniel Kahneman has devoted his career to showing how we tend to overvalue our gut reactions, particularly if we’ve been fortunate enough to be right in the past, and the study of human irrationality has become a rich avenue of research in the social sciences, which are often undermined by poor hunches of their own. It may not even be a matter of right or wrong. An intuitive choice may be better or worse than the alternative, but for the most part, we’ll never know. One of the quirks of Silicon Valley culture is that it claims to base everything on raw data, but it’s often in the service of notions that are outlandish, untested, and easy to misrepresent. Facebook comes closer than any company in existence to the ideal of an endless A/B test, in which the user base is randomly divided into two or more groups to see which approaches are the most effective. It’s the best lab ever developed for testing our hunches about human behavior. (Most controversially, Facebook modified the news feeds of hundreds of thousands of users to adjust the number of positive or negative posts, in order to gauge the emotional impact, and it has conducted similar tests on voter turnout.) And it shouldn’t surprise us if many of our intuitions turn out to be mistaken. If anything, we should expect them to be right about half the time—and if we can nudge that percentage just a little bit upward, in theory, it should give us a significant competitive advantage.
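
To make the mechanics concrete, here is a minimal sketch of that loop in Python. Everything in it is invented for illustration (the fifty-fifty hash split, the click-through rates, the sample size), and it bears no resemblance to Facebook's actual systems, but it shows the basic shape: assign users at random, count outcomes, and ask whether the observed difference is larger than chance alone would produce.

```python
import math
import random

random.seed(42)

# Hypothetical "true" click-through rates for each variant; the numbers
# are invented purely for illustration.
TRUE_RATE = {"A": 0.030, "B": 0.033}

def assign_variant(user_id):
    # Deterministic 50/50 split: bucket users by the parity of their ID hash.
    return "A" if hash(user_id) % 2 == 0 else "B"

def simulate(n_users):
    shown = {"A": 0, "B": 0}
    clicked = {"A": 0, "B": 0}
    for uid in range(n_users):
        v = assign_variant(uid)
        shown[v] += 1
        if random.random() < TRUE_RATE[v]:
            clicked[v] += 1
    return shown, clicked

def z_score(shown, clicked):
    # Two-proportion z-test: is B's observed rate distinguishable from A's?
    pa = clicked["A"] / shown["A"]
    pb = clicked["B"] / shown["B"]
    pooled = (clicked["A"] + clicked["B"]) / (shown["A"] + shown["B"])
    se = math.sqrt(pooled * (1 - pooled) * (1 / shown["A"] + 1 / shown["B"]))
    return (pb - pa) / se

shown, clicked = simulate(200_000)
print("rate A:", round(clicked["A"] / shown["A"], 4))
print("rate B:", round(clicked["B"] / shown["B"], 4))
print("z:", round(z_score(shown, clicked), 2))  # |z| > 1.96 suggests a real effect
```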

So what good is intuition, anyway? I like to start with William Goldman’s story about the Broadway producer George Abbott, who once passed a choreographer holding his head in his hands while the dancers stood around doing nothing. When Abbott asked what was wrong, the choreographer said that he couldn’t figure out what to do next. Abbott shot back: “Well, have them do something! That way we’ll have something to change.” Intuition, as I’ve argued before, is mostly about taking you from zero ideas to one idea, which you can then start to refine. John W. Campbell makes much the same argument in what might be his single best editorial, “The Value of Panic,” which begins with a maxim from the Harvard professor Wayne Batteau: “In total ignorance, try anything. Then you won’t be so ignorant.” Campbell argues that this provides an evolutionary rationale for panic, in which an animal acts “in a manner entirely different from the normal behavior patterns of the organism.” He continues:

Given: An organism with N characteristic behavior modes available. Given: An environmental situation which cannot be solved by any of the N available behavior modes, but which must be solved immediately if the organism is to survive. Logical conclusion: The organism will inevitably die. But…if we introduce Panic, allowing the organism to generate a purely random behavior mode not a member of the N modes characteristically available?

Campbell concludes: “When the probability of survival is zero on the basis of all known factors—it’s time to throw in an unknown.” In extreme situations, the result is panic; under less intense circumstances, it’s a blind hunch. You can even see them as points on a spectrum, the purpose of which is to provide us with a random action or idea that can then be revised into something better, assuming that we survive for long enough. But sometimes the animal just gets eaten.
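
Campbell's logic translates almost directly into the random exploration used by search and learning algorithms when their known moves are exhausted. Here is a toy simulation (every number in it is arbitrary) of why a small chance of stumbling onto an escape beats a certain zero:

```python
import random

random.seed(7)

KNOWN_MODES = range(5)     # the organism's N characteristic behaviors
ALL_MODES = range(100)     # everything physically possible
ESCAPE_MODES = {41, 42}    # the (unknown) actions that would actually work

def survives(panic, attempts=3):
    # By construction, no known mode is an escape: without panic,
    # the probability of survival is exactly zero.
    pool = ALL_MODES if panic else KNOWN_MODES
    return any(random.choice(pool) in ESCAPE_MODES for _ in range(attempts))

trials = 100_000
print("without panic:", sum(survives(False) for _ in range(trials)) / trials)
print("with panic:   ", sum(survives(True) for _ in range(trials)) / trials)
```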

The idea of refinement, revision, or testing is inseparable from intuition, and Zuckerberg has been granted the most powerful tool imaginable for asking hard questions and getting quantifiable answers. What he does with it is another matter entirely. But it’s also worth looking at his only peer from college who could conceivably challenge him in terms of global influence. On paper, Mark Zuckerberg and Jared Kushner have remarkable similarities. Both are young Jewish men—although Kushner is more observant—who were born less than four years and sixty miles apart. Kushner, whose acceptance to Harvard was so manifestly the result of his family’s wealth that it became a case study in a book on the subject, was a member of the final clubs that Zuckerberg badly wanted to join, or so Aaron Sorkin would have us believe. Both ended up as unlikely media magnates of a very different kind: Kushner, like Charles Foster Kane, took over a New York newspaper from a man named Carter. Yet their approaches to their newfound positions couldn’t be more different. Kushner has been called “a shadow secretary of state” whose portfolio includes Mexico, China, the Middle East, and the reorganization of the federal government, but it feels like one long improvisation, on the apparent assumption that he can wing it and succeed where so many others have failed. As Bruce Bartlett writes in the New York Times, without a staff, Kushner “is just a dilettante meddling in matters he lacks the depth or the resources to grasp,” and we may not have a chance to recover if his intuitions are wrong. In other words, he resembles his father-in-law, as Frank Bruni notes:

I’m told by insiders that when Trump’s long-shot campaign led to victory, he and Kushner became convinced not only that they’d tapped into something that everybody was missing about America, but that they’d tapped into something that everybody was missing about the two of them.

Zuckerberg and Kushner’s lives ran roughly in parallel for a long time, but now they’re diverging at a point at which they almost seem to be offering us two alternate versions of the future, like an A/B test with only one possible outcome. Neither is wholly positive, but that doesn’t make the choice any less stark. And if you think this sounds farfetched, bookmark this post, and read it again in about six years.

Parkinson’s Law and the creative hour


In the November 19, 1955 issue of The Economist, the historian Cyril Northcote Parkinson stated the law that has borne his name ever since, in a paragraph remarkable for its sheer Englishness:

It is a commonplace observation that work expands so as to fill the time available for its completion. Thus, an elderly lady of leisure can spend the entire day in writing and dispatching a postcard to her niece at Bognor Regis. An hour will be spent in finding the postcard, another in hunting for spectacles, half an hour in a search for the address, an hour and a quarter in composition, and twenty minutes in deciding whether or not to take an umbrella when going to the pillar box in the next street. The total effort which would occupy a busy man for three minutes all told may in this fashion leave another person prostrate after a day of doubt, anxiety and toil.

Parkinson’s observation was originally designed to account for the unchecked growth of bureaucracy, which hinges on the fact that paperwork is “elastic in its demands on time”—and, by extension, on manpower. And he concluded the essay by stating, rather disingenuously, that it was only an empirical observation, without any value attached: “The discovery of this formula and of the general principles upon which it is based has, of course, no emotive value…Parkinson’s Law is a purely scientific discovery, inapplicable except in theory to the politics of the day. It is not the business of the botanist to eradicate the weeds. Enough for him if he can tell us just how fast they grow.”

In fact, Parkinson’s Law can be a neutral factor, or even a positive one, when it comes to certain forms of creativity. We can begin with one of its most famous, if disguised, variations, in the form of Blinn’s Law: “As technology advances, rendering time remains constant.” As I’ve noted before, once an animator gets used to waiting a certain number of hours for an image to render, he doesn’t use improved hardware to save time: he just renders more complex graphics. There seems to be a fixed amount of time that any given person is willing to work, so an increase in efficiency doesn’t necessarily reduce the time spent at your desk—it just allows you to introduce additional refinements that depend on purely mechanical factors. Similarly, the introduction of word-processing software didn’t appreciably reduce how long it takes to write a novel: it only restructured the process, so that whatever time you save in typing is expended in making imperceptible corrections. This isn’t always a good thing. As the history of animation makes clear, Blinn’s Law can lead to the same tired stories being played out against photorealistic backgrounds, and access to word processors may simply mean that the average story gets longer, as Ted Hughes observed while serving on the judging panel of a children’s writing competition: “It just extends everything slightly too much. Every sentence is too long. Everything is taken a bit too far, too attenuated.” But there are also cases in which an artist’s natural patience and tolerance for work provides the finished result with the rendering time that it needs to reach its ideal form. And we have it to thank for many displays of gratuitous craft and beauty.
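
The law is easy to state as an equation: render time equals scene complexity divided by hardware speed, and Blinn's observation is that animators raise the numerator to match the denominator. A throwaway sketch, with made-up numbers:

```python
# Toy illustration of Blinn's Law: wall-clock render time stays flat
# because scene complexity is scaled up to absorb every hardware gain.
# All numbers are invented.
speed = 1.0        # relative hardware throughput
complexity = 10.0  # arbitrary units of work per frame

for generation in range(5):
    print(f"gen {generation}: {complexity / speed:.1f} hours per frame")
    speed *= 2       # hardware doubles...
    complexity *= 2  # ...and ambition doubles right behind it
```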

This leads me to a conclusion that I’ve recently come to appreciate more fully, which is that every form of artistic activity is equally difficult. I don’t mean that the violin is as easy as the ukulele, or that there isn’t any difference between performance at a high level and the efforts of a casual hobbyist. But if you’re a creative professional and take your work seriously, you’re usually going to operate at your optimum capacity, if not all the time, then at least on average. Each day’s work is determined less by the demands of the project itself than by how much energy you can afford to give it. I switch fairly regularly between fiction and nonfiction, for instance, and whenever I’m working in one mode, I often find myself thinking fondly of the other, which somehow seems easier in my imagination. But it isn’t. I’m the same person with an identical set of habits whether I’m writing a novel, a short story, or an essay, and an hour of my time is pitched at about the same degree of intensity no matter what the objective is. In practice, it settles at a point that is slightly too intense to be entirely comfortable, but not so much that it burns me out. I’ve found that I unconsciously adjust the conditions to make each day’s work feel the same, either by moving a deadline forward or backward or by taking on projects that are progressively more challenging. (This doesn’t just apply to paid work, either. The amount of time I spend on this blog hasn’t varied much over the last five years, but the posts have definitely gotten more involved.) This also applies to particular stages. When I’m researching, outlining, writing, or revising, I sometimes console myself with the idea that the next part will be easier. In fact, it’s all hard. And if it isn’t, I’m doing something wrong.

This implies that we shouldn’t pick our artistic pursuits based on how easy they are, but on the quality that they yield for each unit of time invested. (“Quality” can mean whatever you like, from how much you get paid to the amount of personal satisfaction that you derive.) I work as diligently as possible on whatever I do, but this doesn’t mean that I’m equally good at everything, and there are certain forms of writing that I’ve given up because they don’t justify the cost. And I’ve also learned to be grateful for the fact that everything I do takes about the same amount of time and effort per page. The real limiting factor isn’t the time available, but what I bring to each creative hour, and over the long run, it makes sense to be as consistent as I can. It isn’t intensity that hurts, but volatility, and you lose a lot in ramping up and ramping down. But the appropriate level varies from one person to another. What Parkinson neglects to mention in his contrast between “an elderly lady of leisure” and “a busy man” is that each of them has presumably found a suitable mode of living, and you can find productive writers and artists who fall into either category. In the end, the process is all we have, and it makes sense that it would remain the same in its externals, regardless of its underlying goal. That’s a gentler way of stating Parkinson’s Law, but it’s no less accurate. And Parkinson himself seems to have softened his stance. As he said in an interview with the New York Times toward the end of his career: “My experience tells me the only thing people really enjoy over a long period of time is some kind of work.”

Written by nevalalee

April 6, 2017 at 8:39 am

The rendering time


[Image: No-knead bread]

Note: I’m taking a few days off, so I’ll be republishing some of my favorite pieces from earlier in this blog’s run. This post originally appeared, in a slightly different form, on December 30, 2015.

Last year, I went through a period in which I was baking a lot of bread at home, initially as an activity to share with my daughter. Not surprisingly, I relied entirely on the no-knead recipe first developed by Jim Lahey and popularized by Mark Bittman over a decade ago in the New York Times. As many amateur bakers know, it’s simplicity itself: instead of kneading, you mix a very wet dough with a tiny amount of yeast, and then let it rise for about eighteen hours. Bittman quotes Harold McGee, author of the legendary tome On Food and Cooking, who says:

It makes sense. The long, slow rise does over hours what intensive kneading does in minutes: it brings the gluten molecules into side-by-side alignment to maximize their opportunity to bind to each other and produce a strong, elastic network. The wetness of the dough is an important piece of this because the gluten molecules are more mobile in a high proportion of water, and so can move into alignment easier and faster than if the dough were stiff.

Bittman continues: “Mr. McGee said he had been kneading less and less as the years have gone by, relying on time to do the work for him.” And the results, I can confirm, are close to foolproof: even if you’re less than precise or make a few mistakes along the way, as I tend to do, you almost always get a delicious, light, crusty loaf.

And the idea that you can use the power of time to achieve results that would otherwise require intensive work is central to much of modernist cuisine, as the freelance food scientist Nathan Myhrvold notes in his massive book of the same name. Government food safety guidelines, he points out, are based on raising the core temperature of meat to a certain minimum, which is often set unreasonably high to account for different cooking styles and impatient chefs. In reality, most pathogens are killed by temperatures as low as 120 degrees Fahrenheit—but only if the food has been allowed to cook for a sufficient length of time. The idea that a lower temperature can be counterbalanced by a longer time is the basic premise behind sous vide, in which food is cooked in a warm water bath for hours rather than more rapidly over high heat. This works because you’re trading one kind of precision for another: the temperature is carefully controlled over the course of the cooking process, but once you’re past a certain point, you can be less precise about the time. If you’ve ever prepared a meal in a crock pot, you already know this, and the marvel of sous vide lies in how it applies the same basic insight to a wider variety of recipes. (In fact, there’s a little gadget that you can buy for less than a hundred dollars that can convert any crock pot into a sous vide machine, and although I haven’t bought one for myself yet, I intend to try it one of these days.)
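
Food scientists quantify this trade with the decimal reduction time D, the minutes at a given temperature needed to cut a pathogen population tenfold, which in the standard model shrinks by a factor of ten for every z degrees of added heat. Here is a back-of-the-envelope sketch; the reference values are placeholders chosen to show the shape of the curve, not numbers to cook by:

```python
def d_value(temp_f, d_ref_min=5.0, t_ref_f=140.0, z_f=10.0):
    """Minutes for a tenfold (1-log) pathogen reduction at temp_f.

    d_ref_min, t_ref_f, and z_f are illustrative placeholders,
    not validated food-safety constants. Consult real tables
    before relying on anything like this for actual cooking.
    """
    return d_ref_min * 10 ** ((t_ref_f - temp_f) / z_f)

def hold_time(temp_f, log_reductions=7.0):
    # Total holding time for a target lethality (a "7-log" kill here).
    return log_reductions * d_value(temp_f)

for t in (130, 140, 150, 160):
    print(f"{t}F: hold for ~{hold_time(t):.1f} minutes")
```

Run it and the sous vide premise falls out of the arithmetic: every ten degrees you give up costs you roughly a factor of ten in time, which is why a low-temperature bath has to run for hours.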

[Image: Sous vide]

But the relationship between intensity and time has applications far beyond the kitchen. Elsewhere, I’ve talked about the rendering time that all creative acts seem to require: it seems that you just have to live with a work of art for a certain period, and if your process has become more efficient, you still fill that time by rendering or revising the work. As Blinn’s Law states: “As technology advances, rendering time remains constant.” And rendering, of course, is also a term from the food industry, in which the inedible waste from the butcher shop is converted, using time and heat, into something useful or delicious. But one lesson that artists quickly learn is that time can be used in place of intensity, as well as the other way around. Many of the writing rules that I try to follow—trim ten percent from each draft, cut the beginning and ending of every scene, overlap the action, remove transitional moments—are tricks to circumvent a protracted revision process, with intense work and scrutiny over a focused window taking the place of a longer, less structured engagement. If I just sat and fiddled with the story for months or years, I’d probably end up making most of the same changes, but I use these rules of thumb to hurry up the revisions that I would have made anyway. They aren’t always right, and they can’t entirely take the place of an extended period of living with a story, but I can rely on them to get maybe ninety percent of the way there, and the time I save more than compensates for that initial expenditure of energy.

And art, like cooking, often consists of finding the right balance between time and intensity. I’ve found that I write best in bursts of focused activity, which is why I try to keep my total working time for a short story to a couple of weeks or so. But I’ve also learned to set the resulting draft aside for a while before the final revision and submission, which allows me to subconsciously work through the remaining problems and find any plot holes. (On the few occasions when I haven’t done this, I’ve submitted a story only to realize within a day or two that I’d overlooked something important.) The amount of real work I do remains the same, but like dough rising quietly on the countertop, the story has time to align itself in my brain while I’m occupied with other matters. And while time can do wonders for any work of art, the few good tricks I use to speed up the process are still necessary: you aren’t likely to give up on your dough just because it takes an extra day to rise, but the difference between a novel that takes twelve months to write and one that takes three years often amounts to the difference between a book you finish and one you abandon. The proper balance depends on many outside factors, and you may find that greater intensity and less time, or vice versa, is the approach you need to make it fit with everything else in your life. But baking no-knead bread reminded me that we have a surprising amount of control over the relationship between the two. And even though I’m no longer baking much these days, I’m always thinking about what I can set to rise, or render, right now.

Written by nevalalee

March 30, 2017 at 9:00 am

The multiracial enigma


[Image: Ann Dunham and Barack Obama]

Over the weekend, the New York Times published an opinion piece by the writer Moises Velasquez-Manoff titled “What Biracial People Know.” Velasquez-Manoff, who, like me, is multiracial, makes many of the same points that I once did in a previous post on the subject, as when he writes: “I can attest that being mixed makes it harder to fall back on the tribal identities that have guided so much of human history, and that are now resurgent…You’re also accustomed to the idea of having several selves, and of trying to forge them into something whole.” He also highlights a lot of research of which I wasn’t previously aware, the most interesting being a study of facial recognition in multiracial babies:

By three months of age, biracial infants recognize faces more quickly than their monoracial peers, suggesting that their facial perception abilities are more developed. Kristin Pauker, a psychologist at the University of Hawaii at Manoa and one of the researchers who performed this study, likens this flexibility to bilingualism. Early on, infants who hear only Japanese, say, will lose the ability to distinguish L’s from R’s. But if they also hear English, they’ll continue to hear the sounds as separate. So it is with recognizing faces, Dr. Pauker says. Kids naturally learn to recognize kin from non-kin, in-group from out-group. But because they’re exposed to more human variation, the in-group for multiracial children seems to be larger.

As it happens, I’m terrible at remembering faces, so any advantage I once gained along those lines has long since faded away. But such findings are still intriguing, and they hint temptingly at broader conclusions. As Velasquez-Manoff says of our first biracial president: “His multitudinous self was, I like to think, part of what made him great.”

For obvious reasons, I’m wary of applying generalizations to any ethnic or racial group, including my own. But there’s something intuitively appealing about the notion that multiracial individuals are forced to develop certain advantageous forms of thinking in order to adapt. They don’t have a monopoly on the problem of forging an identity and figuring out the world around them, which, as Velasquez-Manoff notes, is “a defining experience of modernity.” But it isn’t hard to believe that they might have a slight head start. If you’re exposed to greater facial variety as an infant, the reasoning goes, you’ll acquire the skills that allow you to distinguish between individuals just a little bit earlier, and you can easily imagine how that small advantage might grow over time. (Although, by the same logic, babies surrounded by faces with similar racial characteristics might become better at distinguishing between slight variations. I’d be curious to know if this has ever been tested.) If there’s a theme here, it’s that multiracial people are shaped by a more intensive version of an experience common to all human beings. Velasquez-Manoff writes:

In a 2015 study, Sarah Gaither, an assistant professor at Duke, found that when she reminded multiracial participants of their mixed heritage, they scored higher in a series of word association games and other tests that measure creative problem solving. When she reminded monoracial people about their heritage, however, their performance didn’t improve…[But] when Dr. Gaither reminded participants of a single racial background that they, too, had multiple selves, by asking about their various identities in life, their scores also improved. “For biracial people, these racial identities are very salient,” she told me. “That said, we all have multiple social identities.”

In other words, we’re all living with these issues, and multiracial people just have to exercise those skills earlier and more often.

[Image: Portrait of the author as a young man]

Yet I also need to tread carefully here, precisely because these conclusions are just the ones that somebody like me would like to believe. (When you extend these arguments to social patterns, which is a big leap in itself, you also get tripped up by problems of cause and effect. When Velasquez-Manoff writes that “cities and countries that are more diverse are more prosperous than homogeneous ones,” he doesn’t point out that the causal arrow might well run the other way.) Last week, in my post about the replication crisis in psychology, I noted that experiments that confirm what feels like common sense—or that allow us to score easy points against the Trump administration—are less likely to be scrutinized than others, and many of the studies that Velasquez-Manoff mentions here sound a lot like the kind that have proven hard to duplicate. At Harvard and Tel Aviv University, for instance, subjects “read essays that made an essentialist argument about race, and then [were asked] to solve word-association games and other puzzles.” The study found that participants who were “primed” with stereotypes performed less well on such tests than those who weren’t, and it concluded: “An essentialist mindset is indeed hazardous for creativity.” That seems all too reasonable. But the insidious ways in which race pervades our lives bear little resemblance to reading an essay and solving a word puzzle. Maybe multiracial people do, in fact, score higher on such tests when reminded of their mixed heritage, at least when it takes the form, as it did at Duke, of writing essays about their identities. But on an everyday basis, that “reminder” is more likely to take the form of being miscategorized and mispronounced, filling out forms that only allow one racial box to be checked, feeling defined by otherness, and being asked by well-meaning strangers: “So where are you from?” For all I know, these social cues may be equally conducive to creativity. But I doubt that there’s ever been a study about it.

I’m not trying to criticize any specific study, and I’d love to embrace these findings—which is exactly why they need to be replicated. The problem of race is so pervasive and resistant to definition that it makes the average psychological experiment, with its clinical settings and word tests, seem all the more removed from reality. And multiracial people need to be conscious of the slippery slope involved in making any kind of claim about the uniqueness of their experience. (There’s also the huge, unstated point that what it means to be multiracial differs dramatically from one combination of races to another. If you look a certain way, that’s how you’re going to be treated, no matter how diverse your genetic background might be.) Velasquez-Manoff sees these studies as an argument in favor of diversity, which is certainly a case worth making. But creativity is just one factor in human life, and you don’t need to look far to sense the equally great advantages in being a member of a homogeneous racial, ethnic, or cultural group, particularly one that has been historically empowered. Tradition is a convenient crystallization of the experiences of the past, and most of us spend our lives falling back on the solutions that people who look like us have provided, whether it’s in politics, society, or religion. Such attitudes wouldn’t persist if they weren’t more than adequate in the vast majority of situations. Creativity is a last resort, a survival mechanism adopted by those who feel excluded from the larger community, unable to rely on the rules that others follow unquestioningly, and forced to improvise tactics in real time. It doesn’t always go well. Creative types are often miserable and frustrated, particularly in a world that runs most smoothly on monolithic categories. There are times when all your cleverness can’t help you. And that’s what biracial people really know.

Written by nevalalee

March 6, 2017 at 9:13 am


The moderate novelist


Note: I’m taking a few days off, so I’ll be republishing some of my favorite posts from earlier in this blog’s run. This post originally appeared, in a somewhat different form, on November 6, 2012.

I never wanted to be a moderate. Growing up, and especially in college, I believed in coming down strongly on one side or the other of any particular issue, and was drawn to the people around me who embraced similar extremes. I didn’t know much, but I knew that I wanted to be a writer, which to my eyes represented a clear choice between the compromises of an ordinary existence and a willingness to risk everything for the life of art. My favorite classical hero was the Achilles of the Iliad, who might waver or sulk into prolonged inaction, but always saw the world around him in stark terms, with cosmic emotions that refused to be bound by the standards of the society in which he lived. And although I hadn’t read On the Road, I suspect that I might have agreed with Kerouac’s initially inspiring and then increasingly annoying insistence that the only true people were the ones who burn “like fabulous yellow roman candles exploding like spiders across the stars.”

No one has ever compared a moderate to a roman candle, fabulous or otherwise. Yet as time went on, my views began to change. In many ways, this was just part of the process of growing up, which tends to nudge most of us toward the center, on the way to the natural conservatism of old age. But it also had something to do with the realities of becoming a writer. Writing for a living, at least on a daily basis, is less about staking out a bold claim into the unknown than about coming to terms with many small compromises. It’s tactical, not strategic, and encourages a natural pragmatism, at least for those of us who want to write more than a couple of novels. You learn to deal with problems as they occur, and a solution that works in a particular situation may no longer make sense when it comes up again. Above all else, as a writer, you need to figure out a way of life that is mostly free of hard external dislocations, which are murder on any kind of artistic productivity. Hence my favorite writing quote of all time, from Flaubert: “Be well-ordered in your life, and as ordinary as a bourgeois, in order to be violent and original in your work.”

All these things tend to encourage a kind of reasonable moderation, at least on the outside—there’s a reason why most writers have boring biographies. And in my own case, it also shapes the way I see the rest of the world. There aren’t a lot of clear answers in ethics or politics, and as much as we’d all like to be consistent, dealing with reality, like writing fiction, is more likely to impose a series of increasingly messy workarounds. A novel forces you to deal with issues of character, behavior, and society in a laboratory setting, and even when you control the terms of the experiment, the answers that you get are rarely the ones you set out to find. In a defense of moderate thinking in the New York Times, David Brooks once wrote: “This idea—that you base your agenda on your specific situation—may seem obvious, but immoderate people often know what their solutions are before they define the problems.” And this describes bad fiction as well as bad politics.

As a result, my own politics are sort of a hodgepodge, and like my fiction, they’ve been deeply shaped by the particulars of my life story. I’m a multicultural agnostic who has spent much of his life under the spell of various dead white males. Not surprisingly, my strongest political conviction remains a belief in the power of free speech, but I’ve also got a weird survivalist streak that once left me more neutral on issues like gun control—although I’ve since changed my mind about this. I spent years working in finance, and I mostly believe in the positive power of capitalism and free markets, but I also think that it leads to conditions of inequality that the government needs to address, for the good of the system as a whole. And I could go on. But the bottom line is that I’ve found that a writer, and maybe a citizen, needs to be less like Achilles than Odysseus: adaptable, pragmatic, capable of changing his plans when necessary, but always with an eye to finding his way home, even if it takes far longer than he hoped.

Written by nevalalee

January 19, 2017 at 9:00 am

How is a writer like a surgeon?


[Image: E.L. Doctorow]

Note: I’m taking a short break this week, so I’ll be republishing a few posts from earlier in this blog’s run. This post originally appeared, in a slightly different form, on July 22, 2015. 

The late E.L. Doctorow belonged to a select group of writers, including Toni Morrison, who were editors before they were novelists. When asked how his former vocation had influenced his work, he said:

Editing taught me how to break books down and put them back together. You learn values—the value of tension, of keeping tension on the page and how that’s done, and you learn how to spot self-indulgence, how you don’t need it. You learn how to become very free and easy about moving things around, which a reader would never do. A reader sees a printed book and that’s it. But when you see a manuscript as an editor, you say, Well this is chapter twenty, but it should be chapter three. You’re at ease in the book the way a surgeon is at ease in a human chest, with all the blood and guts and everything. You’re familiar with the material and you can toss it around and say dirty things to the nurse.

Doctorow—who had the word “doctor” right there in his name—wasn’t the first author to draw a comparison between writing and medicine, and in particular to surgery, which has a lot of metaphorical affinities with the art of fiction. It’s half trade school and half priesthood, with a vast body of written and unwritten knowledge, and as Atul Gawande has pointed out, even the most experienced practitioners can benefit from the use of checklists. What draws most artists to the analogy, though, is the surgeon’s perceived detachment and lack of sentimentality, and the idea that it’s a quality that can be acquired with sufficient training and experience. The director Peter Greenaway put it well:

I always think that if you deal with extremely emotional, even melodramatic, subject matter, as I constantly do, the best way to handle those situations is at a sufficient remove. It’s like a doctor and a nurse and a casualty situation. You can’t help the patient and you can’t help yourself by emoting.

And the primary difference, aside from the stakes involved, is that the novelist is constantly asked, like the surgeon in the famous brainteaser, to operate on his or her own child.

[Image: Atul Gawande]

Closely allied to the concept of surgical detachment is that of a particular intuition, the kind that comes after craft has been internalized to the point where it no longer needs to be consciously remembered. As Wilfred Trotter wrote: “The second thing to be striven for [by a doctor] is intuition. This sounds an impossibility, for who can control that small quiet monitor? But intuition is only inference from experience stored and not actively recalled.” Intuition is really a way of reaching a conclusion after skipping over the intermediate steps that rational thought requires—or what Robert Graves calls proleptic thinking—and it evolved as a survival response to situations where time is at a premium. Both surgeons and artists are called upon to exercise uncanny precision at moments of the highest tension, and the greater the stress, the greater the exactitude required. As John Ruskin puts it:

There is but one question ultimately to be asked respecting every line you draw: Is it right or wrong? If right, it most assuredly is not a “free” line, but an intensely continent, restrained and considered line; and the action of the hand in laying it is just as decisive, and just as “free” as the hand of a first-rate surgeon in a critical incision.

Surgeons, of course, are as human as anybody else. In an opinion piece published last year in the New York Times, the writer and cardiologist Sandeep Jauhar argued that the widespread use of surgical report cards has had a negative impact on patient care: skilled surgeons who are aggressive about treating risky cases are penalized, or even stripped of their operating privileges, while surgeons who play it safe by avoiding very sick patients maintain high ratings. It isn’t hard to draw a comparison to fiction, where a writer who consistently takes big risks can end up with less of a career than one who sticks to proven material. (As an unnamed surgeon quoted by Jauhar says: “The so-called best surgeons are only doing the most straightforward cases.”) And while it may seem like a stretch to compare a patient of flesh and blood to the fictional men and women on which a writer operates, the stakes are at least analogous. Every project represents a life, or a substantial part of one: it’s an investment of effort drawn from the finite, and nonrenewable, pool of time that we’ve all been granted. When a novelist is faced with saving a manuscript, it’s not just a stack of pages, but a year of one’s existence that might feel like a loss if the operation isn’t successful. Any story is a slice of mortality, distilled to a physical form that runs the risk of disappearing without a trace if we can’t preserve it. And our detachment here is precious, even essential, because the life we’ve been asked to save is our own.

Written by nevalalee

January 4, 2017 at 9:00 am
