As I’ve noted before, writing a series of novels is a little like producing a television series: the published result, as Emily Nussbaum says, is the rough draft masquerading as the final product. You want a clear narrative arc that spans multiple installments, but you also don’t want to plan too far in advance, which can lead to boredom and inflexibility. With a television show, you’re juggling multiple factors that are outside any one showrunner’s control: budgets, the availability of cast members, the responses of the audience, the perpetual threat of cancellation. For the most part, a novelist is insulated from such concerns, but you’re also trying to manage your own engagement with the material. A writer who has lost the capacity to surprise himself is unlikely to surprise the reader, which means that any extended project has to strike a balance between the knowns and the unknowns. That’s challenging enough for a single book, but over the course of a series, it feels like a real high-wire act, as the story continues to evolve in unexpected ways while always maintaining that illusion of continuity.
One possible solution, which you see in works in every medium, is to incorporate elements at an early stage that could pay off in a number of ways, depending on the shape the larger narrative ends up taking. My favorite example is from Star Trek II: The Wrath of Khan. Leonard Nimoy wanted Spock to die, and his death—unlike its hollow pastiche in Star Trek Into Darkness—was meant to be a permanent one. Fortunately, writer and director Nicholas Meyer was shrewd enough to build in an escape hatch, especially once he noticed that Nimoy seemed to be having a pretty good time on the set. It consisted of a single insert shot of Spock laying his hand on the side of McCoy’s unconscious face, with the enigmatic word: “Remember.” As Meyer explains on his commentary track, at the time, he didn’t know what the moment meant, but he figured that it was ambiguous enough to support whatever interpretation they might need to give it later on. And whether or not you find the resolution satisfying in The Search for Spock, you’ve got to admit that it was a clever way out.
The more you’re aware of the serendipitous way in which extended narratives unfold, the more often you notice such touches. Breaking Bad, for instance, feels incredibly cohesive, but it was often written on the fly: big elements of foreshadowing—like the stuffed animal floating in the swimming pool, the tube of ricin concealed behind the electrical outlet, or the huge gun that Walter buys at the beginning of the last season—were introduced before the writers knew how they would pay off. Like Spock’s “Remember,” though, they’re all pieces that could fit a range of potential developments, and when their true meaning is finally revealed, it feels inevitable. (Looking at the list of discarded endings that Vince Gilligan shared with Entertainment Weekly is a reminder of how many different ways the story could have gone.) You see the same process at work even in the composition of a single novel: a writer will sometimes introduce a detail on a hunch that it will play a role later on. But the greater challenge of series fiction, or television, is that it’s impossible to go back and revise the draft to bring everything into line.
City of Exiles is a good case in point. In the epilogue, I wanted to set up the events of the next installment without locking myself down to any one storyline, in case my sense of the narrative evolved; at the time I was writing it, I didn’t really know what Eternal Empire would be about. (In fact, I wasn’t even sure there would be a third installment, although the fact that I left a few big storylines unresolved indicates that I at least had some hopes in that direction.) What I needed, then, were a few pieces of vague information that could function in some way in a sequel. Somewhat to my surprise, this included the return of a supporting character, the lawyer Owen Dancy, whom I’d originally intended to appear just once: it occurred to me later on that it might be useful to let him hang around. When he comes to visit Ilya in prison, I didn’t know what that might mean, but it seemed like a development worth exploring. The same is true of the lock-picking tools that Ilya examines on the very last page, which I knew would come in handy. As I said yesterday, a draft can feel like a message—or an inheritance—from the past to the future. And you try to leave as much useful material as possible for the next version of you who comes along…
Last night, I found myself browsing through one of the oddest and most interesting books in my library: Julian Jaynes’s The Origin of Consciousness in the Breakdown of the Bicameral Mind. I don’t know how familiar Jaynes’s work remains among educated readers these days—although the book is still in print after almost forty years—but it deserves to be sought out by anyone interested in problems of psychology, ancient literature, history, or creativity. Jaynes’s central hypothesis, which still startles me whenever I type it, is that consciousness as we know it is a relatively recent development that emerged sometime within the last three thousand years, or after the dawn of language and human society. Before this, an individual’s decisions were motivated less by internal deliberation than by verbal commands that wandered from one part of the brain to another, and which were experienced as the hallucinated voice of a god or dead ancestor. Free will, as we conceive of it now, didn’t exist; instead, we acted in automatic, almost robotic obedience to those voices, which seemed to come from an entity outside ourselves.
As Richard Dawkins writes: “It is one of those books that is either complete rubbish or a work of consummate genius, nothing in between! Probably the former, but I’m hedging my bets.” It’s so outrageous, in fact, that its novelty has probably prevented it from being more widely known, even though Jaynes’s hypothesis seems more plausible—if no less shattering—the more you consider his argument. He notes, for instance, that when we read works like the Iliad, we’re confronted by a model of human behavior strikingly different from our own: as beautifully as characters like Achilles can express themselves, moments of action or decision are attributed to elements of an impersonal psychic apparatus, the thumos or the phrenes or the noos, that are less like our conception of the soul than organs of the body that stand apart from the self. (As it happens, much of my senior thesis as an undergraduate in classics was devoted to teasing out the meanings of the word noos as it appears in the poems of Pindar, who wrote at a much later date, but whose language still reflects that earlier tradition. I hadn’t read Jaynes at the time, but our conclusions aren’t that far apart.)
The idea of a divided soul is an old one: Jaynes explains the Egyptian ka, or double, as a personification of that internal voice, which was sometimes perceived as that of the dead pharaoh. And while we’ve mostly moved on to a coherent idea of the self, or of a single “I,” the concept breaks down on close examination, to the point where the old models may deserve a second look. (It’s no accident that Freud circled back around to these divisions with the id, the ego, and the superego, which have no counterparts in physical brain structure, but are rather his attempt to describe human behavior as he observed it.) Even if we don’t go as far as such philosophers as Sam Harris, who argues that free will doesn’t exist at all, there’s no denying that much of our behavior arises from parts of ourselves that are inaccessible, even alien, to that “I.” We see this clearly in patterns of compulsive behavior, in the split in the self that appears in substance abuse or other forms of addiction, and, more benignly, in the moments of intuition or insight that creative artists feel as inspirations from outside—an interpretation that can’t be separated from the etymology of the word “inspiration” itself.
And I’ve become increasingly convinced that coming to terms with that divided self is central to all forms of creativity, however we try to explain it. I’ve spoken before of rough drafts as messages from my past self, and of notetaking as an essential means of communication between those successive, or alternating, versions of who I am. A project like a novel, which takes many months to complete, can hardly be anything but a collaboration between many different selves, and that’s as true from one minute to the next as it is over the course of a year or more. Most of what I do as a writer is a set of tactics for forcing those different parts of the brain to work together, since no one faculty—the intuitive one that comes up with ideas, the architectural or musical one that thinks in terms of structure, the visual one that stages scenes and action, the verbal one that writes dialogue and description, and the boringly systematic one that cuts and revises—could come up with anything readable on its own. I don’t hear voices, but I’m respectful of the parts of myself I can’t control, even as I do whatever I can to make them more reliable. All of us do the same thing, whether we’re aware of it or not. And the first step to working with, and within, the divided self is acknowledging that it exists.
The hippopotamus was drawn to amuse my small daughter. Something about the creature’s expression convinced me that he had recently eaten a man. I added the hat and pipe and Mrs. Millmoss and the caption followed easily enough.
By now, many of you have probably already seen the previously unpublished essay on creativity by Isaac Asimov that appeared last week in Technology Review. We owe its appearance to Arthur Obermayer, who worked for Allied Research Associates in Boston and asked Asimov, a friend of his, to sit in on some of their brainstorming sessions. Asimov eventually declined to participate further, saying that receiving access to classified information would inhibit his work as a writer, but he left behind a short piece on creative thinking and the conditions that encourage it, both individually and in groups. It’s a charming, useful read, and it centers on a point that I’ve made here many times before:
Obviously, then, what is needed is not only people with a good background in a particular field, but also people capable of making a connection between item 1 and item 2 which might not ordinarily seem connected…Once the cross-connection is made, it becomes obvious.
Asimov mentions the famous example of Charles Darwin and Alfred Russel Wallace, who independently saw a connection between Malthus’s “Essay on Population” and the problem of evolution, inspiring Thomas Henry Huxley to exclaim: “How extremely stupid not to have thought of that!” This kind of thinking by combinations, or what Arthur Koestler calls “bisociation” in The Act of Creation, lies at the heart of all creativity, and that’s as much the case today as when Asimov was writing. Earlier this year, for instance, the lab headed by Eric Betzig—who won the Nobel Prize in Chemistry this month—announced an approach for improving the resolution and speed of microscopy images, using adaptive optics techniques that had originally been developed for astronomy and ophthalmology. Much of Betzig’s work over the last decade has consisted of taking cues from one field and joining them to another: “We combined the descan concept from the ophthalmologists with the laser guide stars of the astronomers, and came up with what amounts to a really good solution for aberrating but non-scattering transparent samples, like the zebrafish.”
These days, most scientific breakthroughs don’t arise in isolation, but through an intense collaborative process: the original paper cited above lists eight authors, headed by postdoctoral researcher Kai Wang and ending with Betzig himself. At times, as Asimov points out, stimulating connections can only emerge from an environment in which intelligent people have a chance to exchange ideas:
No two people exactly duplicate each other’s mental stores of items. One person may know A and not B, another may know B and not A, and either knowing A and B, both may get the idea—though not necessarily at once or even soon.
Furthermore, the information may not only be of individual items A and B, but even of combinations such as A-B, which in themselves are not significant. However, if one person mentions the unusual combination of A-B and another unusual combination A-C, it may well be that the combination A-B-C, which neither has thought of separately, may yield an answer.
The problem, of course, is that such ideas or connections don’t come on demand, and the pressure to show results, in academia and elsewhere, can inhibit the kind of relaxed, associative contemplation that inspiration requires. As Asimov notes: “To feel guilty because one has not earned one’s salary because one has not had a great idea is the surest way, it seems to me, of making it certain that no great idea will come the next time, either.” He goes on to suggest that the thinkers be officially paid for “sinecure” tasks—reports, summaries, and other busywork—so that brainstorming sessions can occur without the additional distraction that arises when one’s livelihood is directly on the line. In other words, he proposes a model that allows for extended rendering time, those amorphous, sometimes unproductive, but always essential stretches of apparent inactivity that allow ideas to coalesce. (It’s the opposite, in fact, of the kind of intense focus on short-term results that drives so much of startup culture.) There’s no surefire recipe for innovation; insights, especially those that make connections between unrelated fields, don’t arrive on schedule. But it’s only by creating an environment in which such connections can emerge, and having the patience to wait, that we can come up with any insights at all.
A man is a poet if the difficulties inherent in his art provide him with ideas; he is not a poet if they deprive him of ideas.
Over the past year or so, I’ve scaled back on the number of books I buy each month, mostly for reasons of shelf space. Every now and then, though, my eye will be caught by a sale or special event I can’t resist, which is how I ended up receiving a big carton last week from Better World Books. (As I’ve noted before, this is the best site for used books around, and a fantastic resource for filling in the gaps in your library.) The box contained what looks, at first, like a random assortment of titles: Strong Opinions, the aptly named collection of interviews and essays by Vladimir Nabokov; Art and Illusion by E.H. Gombrich, which was named one of the hundred best nonfiction books of the century by Modern Library; Cosmic Fishing, a short memoir by E.J. Applewhite about his collaboration with Buckminster Fuller on the book Synergetics; and best of all, Ernest Schwiebert’s magisterial two-volume Trout, which I’ve coveted for years. If it seems like a grab bag, that’s no accident: I really had my eye on Trout, which I ended up getting for half the price it goes for elsewhere, and the others were mostly there to fill out the order. But my choices here also say a lot about me and the kind of books and authors I find most appealing.
The most obvious common thread between all these books is that they lie somewhere at the intersection of art and science. Nabokov, of course, was an accomplished lepidopterist, and Strong Opinions concludes with a sampling of his scientific papers on butterflies. Art and Illusion is a work on the psychology of perception written by an art historian, and the back cover makes its intentions clear: “This book is directed to all who seek for a meeting ground between science and the humanities.” Fuller always occupied a peculiar position between that of engineer, crackpot, and mystic, and this comes through strongly in the eyes of his literary collaborator, who strikingly argues that Fuller’s primary vocation is that of a poet, and reveals that he briefly considered rewriting all of Synergetics in blank verse. And in Schwiebert’s hands, the humble trout becomes a lens through which he considers nearly all of human experience: in the first volume, he wears the hats of historian, literary critic, biologist, ecologist, and entomologist, and that’s before he even gets to the intricacies of rods, flies, and waders. As Schwiebert writes: “[Angling’s] skills are a perfect equilibrium between tradition, physical dexterity and grace, strength, logic, esthetics, our powers of observation, problem solving, perception, and the character of our experience and knowledge.”
In short, these are all books by or about generalists, original thinkers who understand that the divisions between categories of knowledge are porous, if not outright fictional, and who can draw freely on a wide range of disciplines. Yet these authors also share another, more subtle quality: a relentless focus on a single subject as a window onto all of the rest. Nabokov was as obsessed by his butterflies as Fuller was by the tetrahedron. Gombrich returns repeatedly to “the riddle of style,” or what it means when we say that we draw what we see, and Schwiebert, of course, loved trout. I wouldn’t go so far as to say that any of them became generalists by accident; it takes a certain inborn temperament, and an inhuman degree of patience and curiosity, to even attempt such a comprehensive vision. But it’s no accident that all four of these men—and most of the generalists we know and remember—arrived at their expansive vistas through the narrowest of gates. Occasionally, a thinker with global ambitions will begin by deliberately constraining his or her focus, in a kind of apprenticeship or training ground: Darwin spent eight long years studying the cirripedes, a kind of barnacle, in what Thomas Henry Huxley called “a piece of critical self-discipline.” He knew that you need to go deep before you can go really wide.
And that hasn’t changed. The entomologist Edward O. Wilson recently published a book entitled The Meaning of Human Existence, which would seem insufferably grandiose if he hadn’t already proven himself with decades of laborious work on the ants and other social insects. When we think of the intellectuals we respect, nearly all are men and women who made fundamental contributions to a single, clearly defined field before moving on to others. That’s the generalist’s dilemma: it’s hard to think in an original way about everything until you know one thing well. Otherwise, you end up seeming like a dilettante or worse. I’m acutely aware of my own shortcomings here: I’ve spent all my life trying to be a generalist, to the point of becoming a writer so I had an excuse to poke into whatever subjects I like, but I’ve rarely had the patience to drill down deeply. And while I’m content with my choice, I’m not sure I’d recommend it to anyone else. Hilaire Belloc once said that the best way for a writer to become famous was to concentrate on one subject, like the earthworm, for forty years: “When he is sixty, pilgrims will make a hollow path with their feet to the door of the world’s great authority on the earthworm. They will knock at his door and humbly beg to be allowed to see the Master of the Earthworm.” Belloc pointedly failed to take his own advice, but he has a point. We need to become masters of the earthworm before we become masters of the earth.