Alec Nevala-Lee

Thoughts on art, creativity, and the writing life.

Posts Tagged ‘Richard Dawkins’

The time bind

with 3 comments

Last month, The Verge posted a leaked copy of a fascinating short film titled “The Selfish Ledger,” which was produced two years ago for internal consumption at Google. It’s only nine minutes long, and it’s well worth watching in its entirety, but this summary by journalist Vlad Savov does a good job of capturing its essence:

The nine-minute film starts off with a history of Lamarckian epigenetics, which are broadly concerned with the passing on of traits acquired during an organism’s lifetime. Narrating the video, [Google design head Nick] Foster acknowledges that the theory may have been discredited when it comes to genetics but says it provides a useful metaphor for user data…The way we use our phones creates “a constantly evolving representation of who we are,” which Foster terms a “ledger,” positing that these data profiles could be built up, used to modify behaviors, and transferred from one user to another…The middle section of the video presents a conceptual Resolutions by Google system, in which Google prompts users to select a life goal and then guides them toward it in every interaction they have with their phone…with the ledger actively seeking to fill gaps in its knowledge and even selecting data-harvesting products to buy that it thinks may appeal to the user. The example given in the video is a bathroom scale because the ledger doesn’t yet know how much its user weighs.

With its soothing narration and liberal use of glossy stock footage, it’s all very Black Mirror, and when asked for comment, a spokesperson at Google seemed to agree: “We understand if this is disturbing—it is designed to be. This is a thought-experiment by the Design team from years ago that uses a technique known as ‘speculative design’ to explore uncomfortable ideas and concepts in order to provoke discussion and debate. It’s not related to any current or future products.”

There’s a lot to unpack here, and I’m hoping to discuss various aspects of the film over the next few days. For now, though, I’d like to focus on one detail, which is the notion that the “ledger” of a user’s data amounts to a repository of useful information that can be passed down from one generation to another. (The title of the film is an open homage to The Selfish Gene by Richard Dawkins, which popularized an analogous concept in the realm of natural selection.) In a voiceover, Foster says:

User data has the capability to survive beyond the limits of our biological selves, in much the same way as genetic code is released and propagated in nature. By considering this data through a Lamarckian lens, the codified experiences within the ledger become an accumulation of behavioral knowledge throughout the life of an individual. By thinking of user data as multi-generational, it becomes possible for emerging users to benefit from the preceding generations’ behaviors and decisions. As new users enter an ecosystem, they begin to create their own trail of data. By comparing this emergent ledger with the mass of historical user data, it becomes possible to make increasingly accurate predictions about decisions and future behaviors. As cycles of collection and comparison extend, it may be possible to develop a species-level understanding of complex issues such as depression, health and poverty. Our ability to interpret user data combined with the exponential growth in sensor-enabled objects will result in an increasingly detailed account of who we are as people. As these streams of information are brought together, the effect is multiplied: new patterns become apparent and new predictions become possible.

In other words, the data that we create is our legacy to those who will come after us, who can build on what we’ve left behind rather than starting from scratch.

The funny thing, of course, is that we’ve been doing something like this for a while now, at least on a societal level, using a decidedly less sexy format—the book. In fact, the whole concept of “emerging users [benefiting] from the preceding generations’ behaviors and decisions” is remarkably close to the idea of time-binding, as defined by the Polish philosopher Alfred Korzybski, whose work had a profound impact on the science fiction of the thirties and forties. In the monumental, borderline unreadable Science and Sanity, the founding text of General Semantics, Korzybski describes this process in terms that might have been drawn directly from “The Selfish Ledger,” using language that is nearly a century old: “I defined man functionally as a time-binder, a definition based on a…functional observation that the human class of life differs from animals in the fact that, in the rough, each generation of humans, at least potentially, can start where the former generation left off.” Elsewhere, he adds:

The human rate of progress is swifter than that of the animals, and this is due mainly to the fact that we can summarize and transmit past experiences to the young generation in a degree far more effective than that of the animals. We have also extra-neural means for recording experiences, which the animals lack entirely.

The italics are mine. Korzybski uses the example of a mathematician who “has at his disposal an enormous amount of data; first, his personal experiences and observation of actual life…and also all the personal experiences and observations of past generations…With such an enormous amount of data of experience, he can re-evaluate the data, ‘see’ them anew, and so produce new and more useful and structurally more correct higher order abstractions.” And this sounds a lot like “The Selfish Ledger,” which echoes Korzybski—whose work was an important precursor to dianetics—when it speaks of reaching a better understanding of such issues as “depression, health and poverty.”

I don’t know whether “The Selfish Ledger” was directly influenced by Korzybski, although I would guess that it probably wasn’t. But he provides a useful starting point for understanding why the world evoked in the film feels so disturbing, when it’s really a refinement of a process that is as old as civilization itself. On some level, it strikes viewers as a loss of agency, with the act of improvement and refinement outsourced from human hands to an impersonal corporation and its algorithms. We no longer trust companies like Google, if we ever did, to guide us as individuals or as a society—although much of what the video predicts has already come to pass. Google is already an extension of my memory, and it determines my ability to generate connections between information in ways that I mostly take for granted. Yet these decisions have long been made for us by larger systems in ways that are all but invisible, by encouraging certain avenues of thought and action while implicitly blocking off others. (As Fredric Jameson put it: “Someone once said that it is easier to imagine the end of the world than to imagine the end of capitalism.”) Not all such systems are inherently undesirable, and you could argue that science, for instance, is the best way so far that the ledger of society—which depended in earlier periods on myth and religion—has found to propagate itself. It’s hard to argue with Korzybski when he writes: “If the difference between the animal and man consists in the capacity of the latter to start where the former generation left off, obviously humans, to be humans, should exercise this capacity to the fullest extent.” The problem, as usual, lies in the choice of tactics, and what we call “culture” or even “etiquette” can be seen as a set of rules that accomplish by trial and error what the ledger would do more systematically. Google is already shaping our culture, and the fact that “The Selfish Ledger” bothers to even explore such questions is what makes it a worthwhile thought experiment. Tomorrow, I’ll be taking a closer look at its methods, as well as the question of how speculative design, whether by corporations or by artists, can lead to insights that lie beyond even the reach of science fiction.

The divided self

with 3 comments

Julian Jaynes

Last night, I found myself browsing through one of the oddest and most interesting books in my library: Julian Jaynes’s The Origin of Consciousness in the Breakdown of the Bicameral Mind. I don’t know how familiar Jaynes’s work remains among educated readers these days—although the book is still in print after almost forty years—but it deserves to be sought out by anyone interested in problems of psychology, ancient literature, history, or creativity. Jaynes’s central hypothesis, which still startles me whenever I type it, is that consciousness as we know it is a relatively recent development that emerged sometime within the last three thousand years, or after the dawn of language and human society. Before this, an individual’s decisions were motivated less by internal deliberation than by verbal commands that wandered from one part of the brain into another, and which were experienced as the hallucinated voice of a god or dead ancestor. Free will, as we conceive of it now, didn’t exist; instead, we acted in automatic, almost robotic obedience to those voices, which seemed to come from an entity outside ourselves.

As Richard Dawkins writes: “It is one of those books that is either complete rubbish or a work of consummate genius, nothing in between! Probably the former, but I’m hedging my bets.” It’s so outrageous, in fact, that its novelty has probably prevented it from being more widely known, even though Jaynes’s hypothesis seems more plausible—if no less shattering—the more you consider his argument. He notes, for instance, that when we read works like the Iliad, we’re confronted by a model of human behavior strikingly different from our own: as beautifully as characters like Achilles can express themselves, moments of action or decision are attributed to elements of an impersonal psychic apparatus, the thumos or the phrenes or the noos, that are less like our conception of the soul than organs of the body that stand apart from the self. (As it happens, much of my senior thesis as an undergraduate in classics was devoted to teasing out the meanings of the word noos as it appears in the poems of Pindar, who wrote at a much later date, but whose language still reflects that earlier tradition. I hadn’t read Jaynes at the time, but our conclusions aren’t that far apart.)

Sigmund Freud

The idea of a divided soul is an old one: Jaynes explains the Egyptian ka, or double, as a personification of that internal voice, which was sometimes perceived as that of the dead pharaoh. And while we’ve mostly moved on to a coherent idea of the self, or of a single “I,” the concept breaks down on close examination, to the point where the old models may deserve a second look. (It’s no accident that Freud circled back around to these divisions with the id, the ego, and the superego, which have no counterparts in physical brain structure, but are rather his attempt to describe human behavior as he observed it.) Even if we don’t go as far as such philosophers as Sam Harris, who denies that free will exists at all, there’s no denying that much of our behavior arises from parts of ourselves that are inaccessible, even alien, to that “I.” We see this clearly in patterns of compulsive behavior, in the split in the self that appears in substance abuse or other forms of addiction, and, more benignly, in the moments of intuition or insight that creative artists feel as inspirations from outside—an interpretation that can’t be separated from the etymology of the word “inspiration” itself.

And I’ve become increasingly convinced that coming to terms with that divided self is central to all forms of creativity, however we try to explain it. I’ve spoken before of rough drafts as messages from my past self, and of notetaking as an essential means of communication between those successive, or alternating, versions of who I am. A project like a novel, which takes many months to complete, can hardly be anything but a collaboration between many different selves, and that’s as true from one minute to the next as it is over the course of a year or more. Most of what I do as a writer is a set of tactics for forcing those different parts of the brain to work together, since no one faculty—the intuitive one that comes up with ideas, the architectural or musical one that thinks in terms of structure, the visual one that stages scenes and action, the verbal one that writes dialogue and description, and the boringly systematic one that cuts and revises—could come up with anything readable on its own. I don’t hear voices, but I’m respectful of the parts of myself I can’t control, even as I do whatever I can to make them more reliable. All of us do the same thing, whether we’re aware of it or not. And the first step to working with, and within, the divided self is acknowledging that it exists.

Agnosticism and the working writer

with 3 comments

Note: To celebrate the third anniversary of this blog, I’ll be spending the week reposting some of my favorite pieces from early in its run. This post originally appeared, in a somewhat different form, on June 6, 2011.

Being an agnostic means all things are possible, even God, even the Holy Trinity. This world is so strange that anything may happen, or may not happen. Being an agnostic makes me live in a larger, a more fantastic kind of world, almost uncanny. It makes me more tolerant.

Jorge Luis Borges, to the New York Times

Of all religious or philosophical convictions, agnosticism, at first glance, is the least interesting to defend. Like political moderates, agnostics get it from both sides, most of all from committed atheists, who tend to regard permanent agnosticism, in the words of Richard Dawkins, as “fence-sitting, intellectual cowardice.” And yet many of my heroes, from Montaigne to Robert Anton Wilson, have identified themselves with agnosticism as a way of life. (Wilson, in particular, called himself an agnostic mystic, which is what you get when an atheist takes a lot of psychedelic drugs.) And while a defense of the philosophical aspects of agnosticism is beyond the scope of this blog—for that, I can direct you to Thomas Huxley, or even to a recent posting by NPR’s Adam Frank, whose position is not far removed from my own—I think I can talk, very tentatively, about its pragmatic benefits, at least from a writer’s point of view.

I started thinking about this again after reading a blog post by Bookslut’s Jessa Crispin, who relates that she was recently talking about the mystical inclinations of W.B. Yeats when a self-proclaimed atheist piped up: “I always get sad for Yeats for his occult beliefs.” As Crispin discusses at length, such a statement is massively condescending, and also weirdly uninsightful. Say what you will about Yeats’s interest in occultism, but there’s no doubt that he found it spectacularly useful. It provided him with symbolic material and a means of engaging the unseen world that most poets are eventually called to explore. The result was a body of work of permanent importance, and one that wouldn’t exist, at least not in its present form, if his life had assumed a different shape. Was it irrational? Sure. But Wallace Stevens aside, strictly rational behavior rarely produces good poets.

I’ve probably said this before, but I’ll say it again: the life of any writer—and certainly that of a poet—is so difficult, so impractical on a cosmic scale, that there’s often a perverse kind of pragmatism in the details. A writer’s existence may look messy from the outside, but that mess is usually the result of an attempt to pick out what is useful from life and reject the rest, governed by one urgent question: Can I use this? If a writer didn’t take his tools wherever he found them, he wouldn’t survive, at least not as an artist. Which is why any kind of ideology, religious or otherwise, can be hard for a writer to maintain. Writers, especially novelists, tend to be dabblers, not so much out of dilettantism—although that can be a factor as well—as from an endless, obsessive gleaning, a rummaging in the world’s attic for useful material, in both art and life. And this process of feathering one’s nest tends to inform a writer’s work as well. What Christopher Hitchens says of Ian McEwan is true of many novelists:

I think that he did, at one stage in his life, dabble a bit in what’s loosely called “New Age,” but in the end it was the rigorous side that won out, and his novels are almost always patrolling some difficult frontier between the speculative and the unseen and the ways in which material reality reimposes itself.

Agnosticism is also useful for another reason, as Borges points out above: tolerance. A novelist needs to write with empathy about people very different from himself, and to vicariously live all kinds of lives, which is harder to do through the lens of an intractable philosophy. We read Dante and Tolstoy despite, not because of, their ideological convictions, and much of the fire of great art comes from the tension between those convictions and the artist’s reluctant understanding of the world. For a writer, dogma is, or should be, the enemy—including dogma about agnosticism itself. In the abstract, it can seem clinical, but in practice, it’s untidy and makeshift, like the rest of a writer’s life. It’s useful only when it exposes itself to a lot of influences and generates a lot of ideas, most unworkable, but some worthy of being pursued. Like democracy, it’s a compromise solution, the best of a bad lot. It doesn’t work all that well, but for a writer, at least for me, it comes closer to working than anything else.
