Alec Nevala-Lee

Thoughts on art, creativity, and the writing life.


The foundations of novelty


Yesterday, I was leafing through the journalist Charles Duhigg’s recent book Smarter, Faster, Better when my eye was caught by a discussion of a study conducted several years ago by two professors at Northwestern University. It dealt with the importance of unexpected combinations in creative thinking, which is a subject that is dear to my heart, and the researchers emerged with some fascinating conclusions. Duhigg writes:

The researchers—Brian Uzzi and Ben Jones—decided to focus on an activity they were deeply familiar with: writing and publishing academic papers…They could estimate a paper’s originality by analyzing the sources authors had cited in their endnotes…Almost all of the creative papers had at least one thing in common: They were usually combinations of previously known ideas mixed together in new ways. In fact, on average, ninety percent of what was in the most “creative” manuscripts had already been published elsewhere—and had already been picked over by thousands of other scientists. However, in the creative papers, those conventional concepts were applied to questions in manners no one had considered before. “Our analysis of 17.9 million papers spanning all scientific fields suggests that science follows a nearly universal pattern,” Uzzi and Jones wrote. “The highest-impact science is primarily grounded in exceptionally conventional combinations of prior work yet simultaneously features an intrusion of unusual combinations.” It was this combination of ideas, rather than the ideas themselves, that typically made a paper so creative and important.

As Uzzi later explained in an interview with Duhigg: “A paper that combines work by Newton and Einstein is conventional. That combination has happened thousands of times. But a paper that combines Einstein and Wang Chong, the Chinese philosopher, that’s much more likely to be creative, because it’s such an unusual pairing.”

The late Arthur Koestler called this phenomenon “bisociation,” and its appearance here inspired me to look up the original paper, “How Atypical Combinations of Scientific Ideas Are Related to Impact.” Its most intriguing insight—which Duhigg mentions only in passing—is that not all combinations are equally useful. As you might expect, the study found that the majority of published scientific papers draw on a conventional set of sources, with most of their references occurring within a predictable subset of journals. Yet a novel combination of references in itself isn’t any guarantee of originality or importance. In fact, papers that have “high tail novelty” alone are actually less likely to be widely cited than papers that don’t stray far from conventional wisdom. The best combination, it seems, is a core of conventional work spiced up with a few unusual ingredients. An article on their research in the magazine of the Kellogg School of Management makes this point more explicit:

“What’s interesting,” says Uzzi, “is most of the work done is conventional. And some of the work is truly novel. And the chances of either one of those classifications of papers being hits is about the same.” Only about five percent of research papers that draw from only very novel or only very conventional sources were among the most highly cited papers in the database. But there was a third category of research that had nearly twice the likelihood of making it big: papers that relied mostly on conventional combinations of sources but also included a small subset of highly novel ones. “It isn’t all about novelty or conventionality. It’s about both,” explains Jones, who was somewhat surprised by this result…“You want to be grounded in something that’s well understood and yet be adding in the piece that’s truly unusual. And if you do those two things [and] stretch yourself in both directions, then you radically increase your probability of hitting a home run.”

This point is technically present in Duhigg’s book, but it’s easy to miss, and it strikes me as the real takeaway here. Innovation doesn’t happen when you combine ideas haphazardly, but when you incorporate novel insights into a more conventional foundation. I’ve noticed this pattern in my own work. When I first started writing science fiction, my favorite method for generating ideas was to browse through a stack of science magazines and pick two or three articles at random, trusting that I would find a connection between them if I looked hard enough. My early novelette “The Last Resort,” for instance, was a combination of articles about lake eruptions, snowmaking, and the snake pits of Manitoba. “The Boneless One,” which was my first really good story, did the same with bioluminescence, octopus intelligence, and an expedition to catalog genetic material in the ocean. It was a reliable trick, and it served me well over the course of half a dozen stories. Around the time that I wrote “Stonebrood,” however, followed by “The Proving Ground” and my upcoming story “The Spires,” I began to get tired of that process, and I tried a different approach. These days, I usually start with one big subject or setting that I’d like to explore—wilderness firefighting, climate change in the Marshall Islands, bush piloting in Alaska. From there, I’ll look for an unusual angle that ties back into the main theme, often by searching the archives of science magazines until I come up with a promising hook. Instead of choosing a few random ideas and treating them equally, in other words, I start with a central premise that feels like it would make a good story and then look for unexpected offshoots. In some ways, this approach is riskier, since if that initial hunch is wrong, it’s easier to follow it into a dead end. But the results seem better. In the past, some of my stories, like “The Voices,” have had visible seams. The ones that I’m writing now are more of a piece, but they haven’t lost their ability to surprise me along the way, which is the main reason that I write them at all.

Obviously, this is a very minor example, and it probably isn’t all that interesting to anyone but me. But the notion that we should proceed by adding novelty to an established foundation, rather than by combining ideas purely at random, is a valuable one. It’s similar to the familiar principle that it’s hard to do interesting work across multiple disciplines until you’ve mastered one field well, both because of the habits of thinking that it teaches and the body of information that it provides. It was partly for this reason that Charles Darwin spent years studying barnacles, or cirripedes, as the scientist Thomas Henry Huxley observed:

The great danger which besets all men of large speculative faculty is the temptation to deal with the accepted statements of facts in natural science, as if they were not only correct, but exhaustive; as if they might be dealt with deductively, in the same way as propositions in Euclid may be dealt with. In reality, every such statement, however true it may be, is true only relatively to the means of observation and the point of view of those who have enunciated it. So far it may be depended upon. But whether it will bear every speculative conclusion that may be logically deduced from it is quite another question…The value of the Cirripede monograph lies not merely in the fact that it is a very admirable piece of work, and constituted a great addition to positive knowledge, but still more in the circumstance that it was a piece of critical self-discipline, the effect of which manifested itself in everything [Darwin] wrote afterwards, and saved him from endless errors of detail.

What the barnacles taught Darwin, in Huxley’s words, was “the speculative strain” that certain ideas would bear, and this applies as much to individual projects as to the work of a lifetime. Random combination can be a valuable tool, but it needs to be the right kind of randomness. Creativity, as Gregory Bateson wonderfully put it, often consists of “a raid on the random.” But like most raids, it’s more likely to succeed when it starts from a position of strength.

The right kind of randomness


Yesterday, while talking about my search for serendipity in the New York Times, I wrote: “What the [Times’s] recommendation engine thought I might like to see was far less interesting than what other people unlike me were reading at the same time.” The second I typed that sentence, I knew it wasn’t entirely true, and the more I thought about it, the more questions it seemed to raise. Because, really, most readers of the Times aren’t that much unlike me. The site attracts a wide range of visitors, but its ideal audience, the one it targets and the one that embodies how most of its readers probably like to think of themselves, is fairly consistent: educated, interested in politics and the arts, more likely to watch Mad Men than Two and a Half Men, and rather more liberal than otherwise. The “Most Emailed” list isn’t exactly a random sampling of interesting stories, then, but a sort of idealized picture of what the perfect Times subscriber, with equal access to all parts of the paper, is reading at that particular moment.

As a result, the “serendipity” we find there tends to be skewed in predictable ways. For instance, you’re much more likely to see a column by Paul Krugman than by my conservative college classmate Ross Douthat, who may be a good writer who makes useful points, but you’d never know it based on how often his columns are shared. (I don’t have any hard numbers to back this up, but I’d guess that Douthat’s columns make the “Most Emailed” list only a fraction of the time.) If I were really in search of true serendipity—that is, to quote George Steiner, if I were trying to find what I wasn’t looking for—I’d read the most viewed or commented articles on, say, the National Review, or, better yet, the National Enquirer, the favorite paper of both Victor Niederhoffer and Nassim Nicholas Taleb. But I don’t. What I really want as a reader, it seems, isn’t pure randomness, but the right kind of randomness. It’s serendipity as curated by the writers and readers of the New York Times, which, while interesting, is only a single slice of the universe of randomness at my disposal.

Is this wrong? Not necessarily. In fact, I’d say there are at least two good reasons to stick to a certain subset of randomness, at least on a daily basis. The first reason has something in common with Brian Uzzi’s fascinating research on the collaborative process behind hit Broadway shows, as described in Jonah Lehrer’s Imagine. What Uzzi discovered is that the most successful shows tended to be the work of teams of artists who weren’t frequent collaborators, but weren’t strangers, either. An intermediate level of social intimacy—not too close, but not too far away—seemed to generate the best results, since strangers struggled to find ways of working together, while those who worked together all the time tended to fall into stale, repetitive patterns. And this strikes me as being generally true of the world of ideas as well. Ideas that are too similar don’t combine in interesting ways, but those that are too far apart tend to uselessly collide. What you want, ideally, is to live in a world of good ideas that want to cohere and set off chains of associations, and for this, an intermediate level of unfamiliarity seems to work the best.

And the second reason is even more important: it’s that randomness alone isn’t enough. It’s good, of course, to seek out new sources of inspiration and ideas, but if done indiscriminately, the result is likely to be nothing but static. Twitter, for instance, is as pure a slice of randomness as you could possibly want, but we very properly try to manage our feeds to include those people we like and find interesting, rather than exposing ourselves to the full noise of the Twitterverse. (That way lies madness.) Even the most enthusiastic proponent of intentional randomness, like me, has to admit that not all sources of information are created equal, and that it’s sometimes necessary to use a trusted home base for our excursions into the unknown. When people engage in bibliomancy—that is, in telling the future by opening a book to a random page—there’s a reason why they’ve historically used books like Virgil or the Bible, rather than a Harlequin romance: any book would generate the necessary level of randomness, but you need a basic level of richness and meaning as well. What I’m saying, I guess, is that if you’re going to be random, you may as well be systematic about it. And the New York Times isn’t a bad place to start.

Written by nevalalee

May 23, 2012 at 10:42 am
