Alec Nevala-Lee

Thoughts on art, creativity, and the writing life.

Posts Tagged ‘New York Times’

The public eye

leave a comment »

Last month, the New York Times announced that it was eliminating its public editor, an internal watchdog position that dates back over a decade to the Jayson Blair scandal. In a memo to employees, publisher Arthur Sulzberger outlined the reasoning:

The responsibility of the public editor—to serve as the reader’s representative—has outgrown that one office…Today, our followers on social media and our readers across the Internet have come together to collectively serve as a modern watchdog, more vigilant and forceful than one person could ever be. Our responsibility is to empower all of those watchdogs, and to listen to them, rather than to channel their voice through a single office.

We are dramatically expanding our commenting platform. Currently, we open only ten percent of our articles to reader comments. Soon, we will open up most of our articles to reader comments. This expansion, made possible by a collaboration with Google, marks a sea change in our ability to serve our readers, to hear from them, and to respond to them.

The decision was immediately criticized, as much for its optics and timing as for its underlying rationale. As Zach Schonfeld wrote for Newsweek: “The Times’s ability to hold the [Trump] administration accountable relies on its ability to convince readers that it’s holding itself accountable—to convince the country that it’s not ‘fake news,’ as Trump frequently charges, and that it is getting the story right.”

This seems obvious to me. Even if it was a legitimate call, it looks bad, especially at this particular moment. The public editor hasn’t always been as empowered or vocal as it should be, but these are problems that should have been addressed by improving it, not discontinuing it entirely, even if the Times itself lacked the inclination to do so. (Tom Scocca observed on Politico: “Sulzberger seemed to approach the routine duty of holding his paper accountable the same way a surly twelve-year-old approaches the task of mowing the lawn—if he could do it badly enough, maybe people would decide he shouldn’t have been made to do it at all.”) But I’m more concerned by the argument that the public editor’s role could somehow be outsourced to comments, both on the site itself and on unaffiliated platforms like Twitter. As another article in the Times explains:

We have implemented a new system called Moderator, and starting today, all our top stories will allow comments for an eight-hour period on weekdays. And for the first time, comments in both the News and Opinion sections will remain open for twenty-four hours.

Moderator was created in partnership with Jigsaw, a technology incubator that’s part of Alphabet, Google’s parent company. It uses machine-learning technology to prioritize comments for moderation, and sometimes, approves them automatically…The Times struck a deal with Jigsaw that we outlined last year: In exchange for the Times’s anonymized comments data, Jigsaw would build a machine learning algorithm that predicts what a Times moderator might do with future comments.

Without delving into the merits of this approach or the deal that made it possible, it seems clear that the Times wants us to associate the removal of the public editor with the overhaul of its comments section, as if one development were a response to the other. In his memo, Sulzberger wrote that the relationship between the newspaper and its readers was too important to be “outsourced”—which is a strange way to describe an internal position—to any one person. And by implication, it’s outsourcing it to its commenters instead.

But is that really what’s happening here? To my eyes, it seems more likely that the Times is mentioning two unrelated developments in one breath in hopes that we’ll assume that they’re solutions to the same problem, when, in fact, the paper has done almost nothing to build a comments section that could conceivably take on a watchdog role. In the article on the partnership with Jigsaw, we read: “The community desk has long sought quality of comments over quantity. Surveys of Times readers have made clear that the approach paid off—readers who have seen our comment sections love them.” Well, whenever I’ve seen those comment sections, which is usually by mistake, I’ve clicked out right away—and if these are what “quality” comments look like, I’d hate to see those that didn’t make the cut. But even if I’m not the intended audience, it seems to me that there are a number of essential factors that go into making a viable commentariat, and that the Times has implemented none of them. Namely:

  1. A sense of ownership. A good comment system provides users with a profile that archives all of their submissions in one place, which keeps them accountable and provides a greater incentive to put more effort into what they write. The Times, to my knowledge, doesn’t offer this.
  2. A vibrant community. The best comment sections, like the ones on The A.V. Club and the mid-sized communities on Reddit, benefit from a relatively contained pool of users, which allows you to recognize the names of prolific commenters and build up an identity for yourself. The Times may be too huge and sprawling to allow for this at all, and while workarounds might exist, as I’ll note below, they haven’t really been tried. Until now, the comment sections have appeared too unpredictably on articles to attract readers who aren’t inclined to seek them out, and there’s no support for threads, which allow real conversations to take place.
  3. A robust upvoting system. This is the big one. Comment sections are readable to the extent that they allow the best submissions to float to the top. When I click on an article on the Times, the column on the right automatically shows me the most recent comments, which, on average, are mediocre or worse, and it leaves me with little desire to explore further. The Times offers a “Reader’s Picks” category, but it isn’t the default setting, and it absolutely needs to be. Until then, the paper might get better policing from readers simply by posting every article as a link on Reddit and letting the comments live there.

It’s important to note that even if all these changes were implemented, they couldn’t replace a public editor, a high-profile position with access to the thought processes of editors and reporters that no group of outside commenters could provide. A good comment section can add value, but it’s a solution to a different problem. Claiming that beefing up the one allows you to eliminate the other is like removing the smoke alarm from your house because you’ve got three carbon monoxide detectors. But even if the Times was serious about turning its commenters into the equivalent of a public editor, like replacing one horse-sized duck with a hundred duck-sized horses, it hasn’t made the changes that would be required to make its comment sections useful. (Implementing items one and three would be fairly straightforward. Item two would be harder, but it might work if the Times pushed certain sections, like Movies or Sports, as portals in themselves, and then tried to expand the community from there.) It isn’t impossible, but it’s hard, and while it would probably cost less than paying a public editor, it would be more expensive than the deal with Google, in which the paper provides information about its readers to get an algorithm for free. And this gets at the real reason for the change. “The community desk has long sought quality of comments over quantity,” the Times writes—so why suddenly emphasize quantity now? The only answer is that it’s easier and cheaper than the alternative, which requires moderation by human beings who have to be paid a salary, rather than an algorithmic solution that is willing to work for data. Given the financial pressures on a site like the Times, which outlined the changes in the same article in which it announced that it would be offering buyouts to its newsroom staff, this is perfectly understandable. But pretending that a move based on cost efficiency is somehow better than the alternative is disingenuous at best, and the effort to link the two decisions points at something more insidious. Correlation isn’t causation, and just because Sulzberger mentions two things in successive paragraphs doesn’t mean they have anything to do with each other. I hate to say it, but it’s fake news. And the Times has just eliminated the one person on its staff who might have been able or willing to point this out.

Written by nevalalee

June 16, 2017 at 8:54 am

The logic of birdsong

with one comment

My favorite theory is that the structure of a bird song is determined by what will carry best in its home environment. Let’s say, you have one bird that lives in a coniferous forest and another in an oak forest. Since the song is passed down by tradition, then let’s say there’s an oak woodland dialect and coniferous woodland dialect. If you reproduce the sounds, you will find that the oak sound carries farther in an oak forest than it does in a coniferous forest, and vice versa…

[Bird songs] have an exposition of a theme. Very often, they have variations in theme reminiscent of canonical variations like Mozart’s Sonata in A major, where you have theme and variation. And eventually, they come back to the original theme. They probably do it for the same reasons that humans compose sonatas. Both humans and birds get bored with monotony. And to counter monotony, you always have to do something new to keep the brain aroused.

Luis F. Baptista, in an interview with the New York Times

Written by nevalalee

May 28, 2017 at 7:30 am

The A/B Test

with 2 comments

In this week’s issue of The New York Times Magazine, there’s a profile of Mark Zuckerberg by Farhad Manjoo, who describes how the founder of Facebook is coming to terms with his role in the world in the aftermath of last year’s election. I find myself thinking about Zuckerberg a lot these days, arguably even more than I use Facebook itself. We just missed overlapping in college, and with one possible exception, which I’ll mention later, he’s the most influential figure to emerge from those ranks in the last two decades. Manjoo depicts him as an intensely private man obliged to walk a fine line in public, leading him to be absurdly cautious about what he says: “When I asked if he had chatted with Obama about the former president’s critique of Facebook, Zuckerberg paused for several seconds, nearly to the point of awkwardness, before answering that he had.” Zuckerberg is trying to figure out what he believes—and how to act—under conditions of enormous scrutiny, but he also has more resources at his disposal than just about anyone else in history. Here’s the passage in the article that stuck with me the most:

The people who work on News Feed aren’t making decisions that turn on fuzzy human ideas like ethics, judgment, intuition, or seniority. They are concerned only with quantifiable outcomes about people’s actions on the site. That data, at Facebook, is the only real truth…This ideal runs so deep that the people who make News Feed often have to put aside their own notions of what’s best. “One of the things we’ve all learned over the years is that our intuition can be wrong a fair amount of the time,” John Hegeman, the vice president of product management and a News Feed team member, told me. “There are things you don’t expect will happen. And we learn a lot from that process: Why didn’t that happen, and what might that mean?”

Reading this, I began to reflect on how rarely we actually test our intuitions. I’ve spoken a lot on this blog about the role of intuitive thinking in the arts and sciences, mostly because it doesn’t get the emphasis it deserves, but there’s also no guarantee that intuition will steer us in the right direction. The psychologist Daniel Kahneman has devoted his career to showing how we tend to overvalue our gut reactions, particularly if we’ve been fortunate enough to be right in the past, and the study of human irrationality has become a rich avenue of research in the social sciences, which are often undermined by poor hunches of their own. It may not even be a matter of right or wrong. An intuitive choice may be better or worse than the alternative, but for the most part, we’ll never know. One of the quirks of Silicon Valley culture is that it claims to base everything on raw data, but it’s often in the service of notions that are outlandish, untested, and easy to misrepresent. Facebook comes closer than any company in existence to the ideal of an endless A/B test, in which the user base is randomly divided into two or more groups to see which approaches are the most effective. It’s the best lab ever developed for testing our hunches about human behavior. (Most controversially, Facebook modified the news feeds of hundreds of thousands of users to adjust the number of positive or negative posts, in order to gauge the emotional impact, and it has conducted similar tests on voter turnout.) And it shouldn’t surprise us if many of our intuitions turn out to be mistaken. If anything, we should expect them to be right about half the time—and if we can nudge that percentage just a little bit upward, in theory, it should give us a significant competitive advantage.

So what good is intuition, anyway? I like to start with William Goldman’s story about the Broadway producer George Abbott, who once passed a choreographer holding his head in his hands while the dancers stood around doing nothing. When Abbott asked what was wrong, the choreographer said that he couldn’t figure out what to do next. Abbott shot back: “Well, have them do something! That way we’ll have something to change.” Intuition, as I’ve argued before, is mostly about taking you from zero ideas to one idea, which you can then start to refine. John W. Campbell makes much the same argument in what might be his single best editorial, “The Value of Panic,” which begins with a maxim from the Harvard professor Wayne Batteau: “In total ignorance, try anything. Then you won’t be so ignorant.” Campbell argues that this provides an evolutionary rationale for panic, in which an animal acts “in a manner entirely different from the normal behavior patterns of the organism.” He continues:

Given: An organism with N characteristic behavior modes available. Given: An environmental situation which cannot be solved by any of the N available behavior modes, but which must be solved immediately if the organism is to survive. Logical conclusion: The organism will inevitably die. But…if we introduce Panic, allowing the organism to generate a purely random behavior mode not a member of the N modes characteristically available?

Campbell concludes: “When the probability of survival is zero on the basis of all known factors—it’s time to throw in an unknown.” In extreme situations, the result is panic; under less intense circumstances, it’s a blind hunch. You can even see them as points on a spectrum, the purpose of which is to provide us with a random action or idea that can then be revised into something better, assuming that we survive for long enough. But sometimes the animal just gets eaten.

The idea of refinement, revision, or testing is inseparable from intuition, and Zuckerberg has been granted the most powerful tool imaginable for asking hard questions and getting quantifiable answers. What he does with it is another matter entirely. But it’s also worth looking at his only peer from college who could conceivably challenge him in terms of global influence. On paper, Mark Zuckerberg and Jared Kushner have remarkable similarities. Both are young Jewish men—although Kushner is more observant—who were born less than four years and sixty miles apart. Kushner, whose acceptance to Harvard was so manifestly the result of his family’s wealth that it became a case study in a book on the subject, was a member of the final clubs that Zuckerberg badly wanted to join, or so Aaron Sorkin would have us believe. Both ended up as unlikely media magnates of a very different kind: Kushner, like Charles Foster Kane, took over a New York newspaper from a man named Carter. Yet their approaches to their newfound positions couldn’t be more different. Kushner has been called “a shadow secretary of state” whose portfolio includes Mexico, China, the Middle East, and the reorganization of the federal government, but it feels like one long improvisation, on the apparent assumption that he can wing it and succeed where so many others have failed. As Bruce Bartlett writes in the New York Times, without a staff, Kushner “is just a dilettante meddling in matters he lacks the depth or the resources to grasp,” and we may not have a chance to recover if his intuitions are wrong. In other words, he resembles his father-in-law, as Frank Bruni notes:

I’m told by insiders that when Trump’s long-shot campaign led to victory, he and Kushner became convinced not only that they’d tapped into something that everybody was missing about America, but that they’d tapped into something that everybody was missing about the two of them.

Zuckerberg and Kushner’s lives ran roughly in parallel for a long time, but now they’re diverging at a point at which they almost seem to be offering us two alternate versions of the future, like an A/B test with only one possible outcome. Neither is wholly positive, but that doesn’t make the choice any less stark. And if you think this sounds farfetched, bookmark this post, and read it again in about six years.

Parkinson’s Law and the creative hour

with 2 comments

In the November 19, 1955 issue of The Economist, the historian Cyril Northcote Parkinson stated the law that has borne his name ever since, in a paragraph remarkable for its sheer Englishness:

It is a commonplace observation that work expands so as to fill the time available for its completion. Thus, an elderly lady of leisure can spend the entire day in writing and dispatching a postcard to her niece at Bognor Regis. An hour will be spent in finding the postcard, another in hunting for spectacles, half an hour in a search for the address, an hour and a quarter in composition, and twenty minutes in deciding whether or not to take an umbrella when going to the pillar box in the next street. The total effort which would occupy a busy man for three minutes all told may in this fashion leave another person prostrate after a day of doubt, anxiety and toil.

Parkinson’s observation was originally designed to account for the unchecked growth of bureaucracy, which hinges on the fact that paperwork is “elastic in its demands on time”—and, by extension, on manpower. And he concluded the essay by stating, rather disingenuously, that it was only an empirical observation, without any value attached: “The discovery of this formula and of the general principles upon which it is based has, of course, no emotive value…Parkinson’s Law is a purely scientific discovery, inapplicable except in theory to the politics of the day. It is not the business of the botanist to eradicate the weeds. Enough for him if he can tell us just how fast they grow.”

In fact, Parkinson’s Law can be a neutral factor, or even a positive one, when it comes to certain forms of creativity. We can begin with one of its most famous, if disguised, variations, in the form of Blinn’s Law: “As technology advances, rendering time remains constant.” As I’ve noted before, once an animator gets used to waiting a certain number of hours for an image to render, then as the hardware improves, instead of using the extra speed to save time, he just renders more complex graphics. There seems to be a fixed amount of time that any given person is willing to work, so an increase in efficiency doesn’t necessarily reduce the time spent at your desk—it just allows you to introduce additional refinements that depend on purely mechanical factors. Similarly, the introduction of word-processing software didn’t appreciably reduce how long it takes to write a novel: it merely restructured the process, so that whatever time you save in typing is expended in making imperceptible corrections. This isn’t always a good thing. As the history of animation makes clear, Blinn’s Law can lead to the same tired stories being played out against photorealistic backgrounds, and access to word processors may simply mean that the average story gets longer, as Ted Hughes observed while serving on the judging panel of a children’s writing competition: “It just extends everything slightly too much. Every sentence is too long. Everything is taken a bit too far, too attenuated.” But there are also cases in which an artist’s natural patience and tolerance for work provides the finished result with the rendering time that it needs to reach its ideal form. And we have it to thank for many displays of gratuitous craft and beauty.

This leads me to a conclusion that I’ve recently come to appreciate more fully, which is that every form of artistic activity is equally difficult. I don’t mean that the violin is as easy as the ukulele, or that there isn’t any difference between performance at a high level and the efforts of a casual hobbyist. But if you’re a creative professional and take your work seriously, you’re usually going to operate at your optimum capacity, if not all the time, then at least on average. Each day’s work is determined less by the demands of the project itself than by how much energy you can afford to give it. I switch fairly regularly between fiction and nonfiction, for instance, and whenever I’m working in one mode, I often find myself thinking fondly of the other, which somehow seems easier in my imagination. But it isn’t. I’m the same person with an identical set of habits whether I’m writing a novel, a short story, or an essay, and an hour of my time is pitched at about the same degree of intensity no matter what the objective is. In practice, it settles at a point that is slightly too intense to be entirely comfortable, but not so much that it burns me out. I’ve found that I unconsciously adjust the conditions to make each day’s work feel the same, either by moving a deadline forward or backward or by taking on projects that are progressively more challenging. (This doesn’t just apply to paid work, either. The amount of time I spend on this blog hasn’t varied much over the last five years, but the posts have definitely gotten more involved.) This also applies to particular stages. When I’m researching, outlining, writing, or revising, I sometimes console myself with the idea that the next part will be easier. In fact, it’s all hard. And if it isn’t, I’m doing something wrong.

This implies that we shouldn’t pick our artistic pursuits based on how easy they are, but on the quality that they yield for each unit of time invested. (“Quality” can mean whatever you like, from how much you get paid to the amount of personal satisfaction that you derive.) I work as diligently as possible on whatever I do, but this doesn’t mean that I’m equally good at everything, and there are certain forms of writing that I’ve given up because they don’t justify the cost. And I’ve also learned to be grateful for the fact that everything I do takes about the same amount of time and effort per page. The real limiting factor isn’t the time available, but what I bring to each creative hour, and over the long run, it makes sense to be as consistent as I can. It isn’t intensity that hurts, but volatility, and you lose a lot in ramping up and ramping down. But the appropriate level varies from one person to another. What Parkinson neglects to mention in his contrast between “an elderly lady of leisure” and “a busy man” is that each of them has presumably found a suitable mode of living, and you can find productive writers and artists who fall into either category. In the end, the process is all we have, and it makes sense that it would remain the same in its externals, regardless of its underlying goal. That’s a gentler way of stating Parkinson’s Law, but it’s no less accurate. And Parkinson himself seems to have softened his stance. As he said in an interview with the New York Times toward the end of his career: “My experience tells me the only thing people really enjoy over a long period of time is some kind of work.”

Written by nevalalee

April 6, 2017 at 8:39 am

The rendering time

leave a comment »

No-knead bread

Note: I’m taking a few days off, so I’ll be republishing some of my favorite pieces from earlier in this blog’s run. This post originally appeared, in a slightly different form, on December 30, 2015.

Last year, I went through a period in which I was baking a lot of bread at home, initially as an activity to share with my daughter. Not surprisingly, I relied entirely on the no-knead recipe first developed by Jim Lahey and popularized by Mark Bittman over a decade ago in the New York Times. As many amateur bakers know, it’s simplicity itself: instead of kneading, you mix a very wet dough with a tiny amount of yeast, and then let it rise for about eighteen hours. Bittman quotes Harold McGee, author of the legendary tome On Food and Cooking, who says:

It makes sense. The long, slow rise does over hours what intensive kneading does in minutes: it brings the gluten molecules into side-by-side alignment to maximize their opportunity to bind to each other and produce a strong, elastic network. The wetness of the dough is an important piece of this because the gluten molecules are more mobile in a high proportion of water, and so can move into alignment easier and faster than if the dough were stiff.

Bittman continues: “Mr. McGee said he had been kneading less and less as the years have gone by, relying on time to do the work for him.” And the results, I can confirm, are close to foolproof: even if you’re less than precise or make a few mistakes along the way, as I tend to do, you almost always get a delicious, light, crusty loaf.

And the idea that you can use the power of time to achieve results that would otherwise require intensive work is central to much of modernist cuisine, as the freelance food scientist Nathan Myhrvold notes in his massive book of the same name. Government food safety guidelines, he points out, are based on raising the core temperature of meat to a certain minimum, which is often set unreasonably high to account for different cooking styles and impatient chefs. In reality, most pathogens are killed by temperatures as low as 120 degrees Fahrenheit—but only if the food has been allowed to cook for a sufficient length of time. The idea that a lower temperature can be counterbalanced by a longer time is the basic premise behind sous vide, in which food is cooked in a warm water bath for hours rather than more rapidly over high heat. This works because you’re trading one kind of precision for another: the temperature is carefully controlled over the course of the cooking process, but once you’re past a certain point, you can be less precise about the time. If you’ve ever prepared a meal in a crock pot, you already know this, and the marvel of sous vide lies in how it applies the same basic insight to a wider variety of recipes. (In fact, there’s a little gadget that you can buy for less than a hundred dollars that can convert any crock pot into a sous vide machine, and although I haven’t bought one for myself yet, I intend to try it one of these days.)

Sous vide

But the relationship between intensity and time has applications far beyond the kitchen. Elsewhere, I’ve talked about the rendering time that all creative acts seem to require: it seems that you just have to live with a work of art for a certain period, and if your process has become more efficient, you still fill that time by rendering or revising the work. As Blinn’s Law states: “As technology advances, rendering time remains constant.” And rendering, of course, is also a term from the food industry, in which the inedible waste from the butcher shop is converted, using time and heat, into something useful or delicious. But one lesson that artists quickly learn is that time can be used in place of intensity, as well as the other way around. Many of the writing rules that I try to follow—trim ten percent from each draft, cut the beginning and ending of every scene, overlap the action, remove transitional moments—are tricks to circumvent a protracted revision process, with intense work and scrutiny over a focused window taking the place of a longer, less structured engagement. If I just sat and fiddled with the story for months or years, I’d probably end up making most of the same changes, but I use these rules of thumb to hurry up the revisions that I would have made anyway. They aren’t always right, and they can’t entirely take the place of an extended period of living with a story, but I can rely on them to get maybe ninety percent of the way there, and the time I save more than compensates for that initial expenditure of energy.

And art, like cooking, often consists of finding the right balance between time and intensity. I’ve found that I write best in bursts of focused activity, which is why I try to keep my total working time for a short story to a couple of weeks or so. But I’ve also learned to set the resulting draft aside for a while before the final revision and submission, which allows me to subconsciously work through the remaining problems and find any plot holes. (On the few occasions when I haven’t done this, I’ve submitted a story only to realize within a day or two that I’d overlooked something important.) The amount of real work I do remains the same, but like dough rising quietly on the countertop, the story has time to align itself in my brain while I’m occupied with other matters. And while time can do wonders for any work of art, the few good tricks I use to speed up the process are still necessary: you aren’t likely to give up on your dough just because it takes an extra day to rise, but the difference between a novel that takes twelve months to write and one that takes three years often amounts to the difference between one you finish and one you abandon. The proper balance depends on many outside factors, and you may find that greater intensity and less time, or vice versa, is the approach you need to make it fit with everything else in your life. But baking no-knead bread reminded me that we have a surprising amount of control over the relationship between the two. And even though I’m no longer baking much these days, I’m always thinking about what I can set to rise, or render, right now.

Written by nevalalee

March 30, 2017 at 9:00 am

The multiracial enigma

with one comment

Ann Dunham and Barack Obama

Over the weekend, the New York Times published an opinion piece by the writer Moises Velasquez-Manoff titled “What Biracial People Know.” Velasquez-Manoff, who, like me, is multiracial, makes many of the same points that I once did in a previous post on the subject, as when he writes: “I can attest that being mixed makes it harder to fall back on the tribal identities that have guided so much of human history, and that are now resurgent…You’re also accustomed to the idea of having several selves, and of trying to forge them into something whole.” He also highlights a lot of research of which I wasn’t previously aware, the most interesting being a study of facial recognition in multiracial babies:

By three months of age, biracial infants recognize faces more quickly than their monoracial peers, suggesting that their facial perception abilities are more developed. Kristin Pauker, a psychologist at the University of Hawaii at Manoa and one of the researchers who performed this study, likens this flexibility to bilingualism. Early on, infants who hear only Japanese, say, will lose the ability to distinguish L’s from R’s. But if they also hear English, they’ll continue to hear the sounds as separate. So it is with recognizing faces, Dr. Pauker says. Kids naturally learn to recognize kin from non-kin, in-group from out-group. But because they’re exposed to more human variation, the in-group for multiracial children seems to be larger.

As it happens, I’m terrible at remembering faces, so any advantage I once gained along those lines has long since faded away. But such findings are still intriguing, and they hint temptingly at broader conclusions. As Velasquez-Manoff says of our first biracial president: “His multitudinous self was, I like to think, part of what made him great.”

For obvious reasons, I’m wary of applying generalizations to any ethnic or racial group, including my own. But there’s something intuitively appealing about the notion that multiracial individuals are forced to develop certain advantageous forms of thinking in order to adapt. They don’t have a monopoly on the problem of forging an identity and figuring out the world around them, which, as Velasquez-Manoff notes, is “a defining experience of modernity.” But it isn’t hard to believe that they might have a slight head start. If you’re exposed to greater facial variety as an infant, the reasoning goes, you’ll acquire the skills that allow you to distinguish between individuals just a little bit earlier, and you can easily imagine how that small advantage might grow over time. (Although, by the same logic, babies surrounded by faces with similar racial characteristics might become better at distinguishing between slight variations. I’d be curious to know if this has ever been tested.) If there’s a theme here, it’s that multiracial people are shaped by a more intensive version of an experience common to all human beings. Velasquez-Manoff writes:

In a 2015 study, Sarah Gaither, an assistant professor at Duke, found that when she reminded multiracial participants of their mixed heritage, they scored higher in a series of word association games and other tests that measure creative problem solving. When she reminded monoracial people about their heritage, however, their performance didn’t improve…[But] when Dr. Gaither reminded participants of a single racial background that they, too, had multiple selves, by asking about their various identities in life, their scores also improved. “For biracial people, these racial identities are very salient,” she told me. “That said, we all have multiple social identities.”

In other words, we’re all living with these issues, and multiracial people just have to exercise those skills earlier and more often.

Portrait of the author as a young man

Yet I also need to tread carefully here, precisely because these conclusions are just the ones that somebody like me would like to believe. (When you extend these arguments to social patterns, which is a big leap in itself, you also get tripped up by problems of cause and effect. When Velasquez-Manoff writes that “cities and countries that are more diverse are more prosperous than homogeneous ones,” he doesn’t point out that the causal arrow might well run the other way.) Last week, in my post about the replication crisis in psychology, I noted that experiments that confirm what feels like common sense—or that allow us to score easy points against the Trump administration—are less likely to be scrutinized than others, and many of the studies that Velasquez-Manoff mentions here sound a lot like the kind that have proven hard to duplicate. At Harvard and Tel Aviv University, for instance, subjects “read essays that made an essentialist argument about race, and then [were asked] to solve word-association games and other puzzles.” The study found that participants who were “primed” with stereotypes performed less well on such tests than those who weren’t, and it concluded: “An essentialist mindset is indeed hazardous for creativity.” That seems all too reasonable. But the insidious ways in which race pervades our lives bear little resemblance to reading an essay and solving a word puzzle. Maybe multiracial people do, in fact, score higher on such tests when reminded of their mixed heritage, at least when it takes the form, as it did at Duke, of writing essays about their identities. But on an everyday basis, that “reminder” is more likely to take the form of being miscategorized and having your name mispronounced, filling out forms that only allow one racial box to be checked, feeling defined by otherness, and being asked by well-meaning strangers: “So where are you from?” For all I know, these social cues may be equally conducive to creativity. But I doubt that there’s ever been a study about it.

I’m not trying to criticize any specific study, and I’d love to embrace these findings—which is exactly why they need to be replicated. The problem of race is so pervasive and resistant to definition that it makes the average psychological experiment, with its clinical settings and word tests, seem all the more removed from reality. And multiracial people need to be conscious of the slippery slope involved in making any kind of claim about the uniqueness of their experience. (There’s also the huge, unstated point that what it means to be multiracial differs dramatically from one combination of races to another. If you look a certain way, that’s how you’re going to be treated, no matter how diverse your genetic background might be.) Velasquez-Manoff sees these studies as an argument in favor of diversity, which is certainly a case worth making. But creativity is just one factor in human life, and you don’t need to look far to sense the equally great advantages in being a member of a homogeneous racial, ethnic, or cultural group, particularly one that has been historically empowered. Tradition is a convenient crystallization of the experiences of the past, and most of us spend our lives falling back on the solutions that people who look like us have provided, whether it’s in politics, society, or religion. Such attitudes wouldn’t persist if they weren’t more than adequate in the vast majority of situations. Creativity is a last resort, a survival mechanism adopted by those who feel excluded from the larger community, unable to rely on the rules that others follow unquestioningly, and forced to improvise tactics in real time. It doesn’t always go well. Creative types are often miserable and frustrated, particularly in a world that runs most smoothly on monolithic categories. There are times when all your cleverness can’t help you. And that’s what biracial people really know.

Written by nevalalee

March 6, 2017 at 9:13 am

Quote of the Day

leave a comment »

Written by nevalalee

February 10, 2017 at 7:30 am
