Alec Nevala-Lee

Thoughts on art, creativity, and the writing life.

Posts Tagged ‘Nautilus’

The magic window


Last week, the magazine Nautilus published a conversation on “the science and art of time” between the composer Philip Glass and the painter Fredericka Foster. The entire article is worth a look, but my favorite detail is one that Glass shares at the very beginning:

There are many strange things about music and time. When I’m on a tour with the dance company we work in a different-sized theater every night. The first thing the dance company does when we arrive is to measure the stage. They have to reset the dance to fit that stage. So you also have to reset the time of the music: in a larger theater, you must play slower. In a smaller theater, you have to play faster. The relation of time and space in music is dynamic. I have a range of speed in mind. If the players don’t pay attention to that, it will look really funny. You can see the stage fill up with dancers because they are playing at the wrong speed.

And a few lines afterward, in a more contemplative mood, Glass continues: “I was reflecting on the universe expanding. We know that it is and can measure it, by the way time is operating, or by the way we see a star exploding far away. For various reasons, when a physicist tells me that the universe is expanding, I say ‘Okay, let’s go back to the dance floor.’ The dance floor is getting bigger, what does that mean? It means that time has to slow down.”

The relationship between the pacing of a work of art and the physical space in which it occurs is an intriguing one, and it reminds me of a trick employed by one of my heroes, the film editor Walter Murch. In his excellent book Behind the Seen, Charles Koppelman describes the “little people,” a pair of tiny paper silhouettes—one male, one female—that Murch attaches to the screening monitor in his editing room. Koppelman explains:

They are his way of dealing with the problem of scale…As an editor, Murch must remember that images in the edit room are only 1/240 the square footage of what the audience will eventually see on a thirty-foot-wide screen…It’s still easy to forget the size of a projected film, which can trick an editor into pacing a film too quickly, or using too many close-ups—styles more akin to television. The eye rapidly apprehends the relatively small, low-detail images on a TV. Large-scale faces help hold the attention of the audience sitting in a living room with lots of distractions or ambient light. But in movies, images are larger than life and more detailed, so the opposite is true. The eye needs time to peruse the movie screen and take it all in…The solution for Murch is to have these two human cutouts stand sentry on his monitor, reminding him of the film’s eventual huge proportions.

And Murch writes in his book In the Blink of an Eye: “Why don’t we just edit in large rooms with big screens? Well, with digital editing and video projection, we could, very easily, be editing with a thirty-foot screen. The real estate for the room would be expensive, however.”
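Koppelman’s 1/240 figure is easy to sanity-check. Here’s a minimal sketch; the thirty-foot screen width comes from the quote above, but the 1.85:1 aspect ratio is my own assumption for illustration:

```python
# Sanity-checking Koppelman's 1/240 area figure. The 30-foot screen
# width is from the quote above; the 1.85:1 aspect ratio is an
# assumption, not from the book.
from math import sqrt

screen_w_ft = 30.0
aspect = 1.85                                       # assumed
screen_area = screen_w_ft * (screen_w_ft / aspect)  # square feet

monitor_area = screen_area / 240                    # Koppelman's ratio
monitor_w_in = sqrt(monitor_area * aspect) * 12     # width in inches

# Linear dimensions shrink by sqrt(240), about 15.5x, so a thirty-foot
# screen maps to a monitor roughly two feet wide.
print(f"linear scale: {sqrt(240):.1f}x, monitor width: {monitor_w_in:.1f} in")
```

Whatever aspect ratio you assume, the linear scale factor is the square root of the area ratio, which is why a plausible editing monitor of the era comes out to a couple of feet across.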

And while the problems presented by a live performance and a projected image on film might seem rather different, the underlying issue, in both cases, is the audience’s ability to receive and process information. On a purely practical level, a big stage may require the tempo of the choreography to subtly change, because the dancers are moving in a larger physical space, and the music has to be adjusted accordingly. But the viewer’s relationship to the work is also affected—the eye is more likely to take in the action in pieces, rather than as a whole, and the pacing may need to be modified. A similar phenomenon occurs in the movies, as Murch writes:

I have heard directors say that they were disappointed when they finally saw their digitally edited films projected on a big screen. They felt that the editing now seemed “choppy,” though it had seemed fine on the television monitor…With a small screen, your eye can easily take in everything at once, whereas on a big screen it can only take in sections at a time. You tend to look at a small screen, but into a big screen. If you are looking at an image, taking it all in at once, your tendency will be to cut away to the next shot sooner. With a theatrical film, particularly one in which the audience is fully engaged, the screen is not a surface, it is a magic window, sort of a looking glass through which your whole body passes and becomes engaged in the action with the characters on the screen.

Murch notes that the lack of detail on a small screen—or a compressed video file—can mislead the editor as well: “There may be so little detail that the eye can absorb all of it very quickly, leading the careless editor to cut sooner than if he had been looking at the fully detailed film image…Image detail and pace are intimately related.”

And the risk of editing on a smaller screen isn’t anything new. Over thirty years ago, the director and editor Edward Dmytryk wrote in On Film Editing:

Many editors shape their editing concepts on the Moviola, a technique I consider decidedly inferior. One does not see the same things on a small Moviola screen, or even on the somewhat larger, though fuzzier, flatbed screen, that one sees in a theater. The audience sees its films only on the “big screen,” and since every cut should be made with the audience in mind, the cutter must try to see each bit of film as the viewer in the theater will eventually see it. (Even a moderate-sized television screen offers far more scope than a Moviola; therefore, it too presents a somewhat different “picture” for the viewer’s inspection.)

Today, of course, viewers can experience stories on a range of screen sizes that Dmytryk might never have anticipated, and which no editor can possibly control. And it’s unclear how editors—who, unlike Philip Glass, don’t have the luxury of measuring the space in which the film will unfold—are supposed to deal with this problem. Taken as a whole, it seems likely that the trend of editorial pacing reflects the smallest screen on which the results can be viewed, which is part of the reason why the average number of cuts per minute has steadily increased for years. And it’s not unreasonable for editors to prioritize the format in which movies will be seen for most of their lifetimes. Yet we also give up something when we no longer consider the largest possible stage. After the editor Anne V. Coates passed away last month, many obituaries paid tribute to the moment in Lawrence of Arabia that has justifiably been called the greatest cut in movie history. But it wouldn’t have nearly the same impact if it weren’t for the fact that the next shot is held for an astonishing thirty-five seconds, which might never have occurred to someone who was cutting it for a smaller screen. Even viewed on YouTube, it’s unforgettable. But in a theater, it’s a magic window.

The last questions


For two decades, the writer and literary agent John Brockman has posed a single question on an annual basis to a group of scientists and other intellectuals. The notion of such a question—which changes every year—was inspired by the work of the late artist and philosopher James Lee Byars, whose declaration of intent serves as a motto for the entire project: “To arrive at the edge of the world’s knowledge, seek out the most complex and sophisticated minds, put them in a room together, and have them ask each other the questions they are asking themselves.” Brockman publishes the responses on his website, and the result resonates so strongly with just about everything that I love that I’m embarrassed to say that I hadn’t heard of it until this week. (I owe my discovery of it to an article by Brian Gallagher in the excellent magazine Nautilus.) It’s an attempt to take the pulse of what Brockman calls “the third culture, [which] consists of those scientists and other thinkers in the empirical world who, through their work and expository writing, are taking the place of the traditional intellectual in rendering visible the deeper meanings of our lives, redefining who and what we are.” Questions from recent years include “What is your favorite deep, elegant or beautiful explanation?” and “What scientific concept would improve everyone’s cognitive toolkit?” And the result is manifestly so useful, interesting, and rich that I’m almost afraid to read too much of it at once.

This year, to commemorate the twentieth anniversary of the project, Brockman issued a somewhat different challenge, asking his usual group of correspondents: “What is the last question?” By way of explanation, he quotes an essay that he originally wrote in the late sixties, when he first became preoccupied with the idea of asking questions at all:

The final elegance: assuming, asking the question. No answers. No explanations. Why do you demand explanations? If they are given, you will once more be facing a terminus. They cannot get you any further than you are at present…Our kind of innovation consists not in the answers, but in the true novelty of the questions themselves; in the statement of problems, not in their solutions. What is important is not to illustrate a truth—or even an interrogation—known in advance, but to bring to the world certain interrogations…A total synthesis of all human knowledge will not result in huge libraries filled with books, in fantastic amounts of data stored on servers. There’s no value any more in amount, in quantity, in explanation. For a total synthesis of human knowledge, use the interrogative.

Brockman strongly implies that this year’s question will be the last. (To which I can only respond with a lyric from The Simpsons: “To close this place now would be twisted / We just learned this place existed.”) And he closes by presenting the final question: “Ask ‘The Last Question,’ your last question, the question for which you will be remembered.”

I’ve just spent half an hour going through the responses, which are about as fascinating as you’d expect. As I read the questions, I felt that some of them could change lives, if they were encountered at just the right time. (If you know a bright teenager, you could do worse than to send the list his or her way. After all, you just never know.) And they’re a mine of potential ideas for science fiction writers. Here are a few of my favorites:

Jimena Canales: “When will we accept that the most accurate clocks will have to advance regularly sometimes, irregularly most of the time, and at times run counterclockwise?”
Bart Kosko: “What is the bumpiest and highest-dimensional cost surface that our best computers will be able to search and still find the deepest cost well?”
Julia Clarke: “What would comprise the most precise and complete sonic representation of the history of life?”
Stuart Firestein: “How many incommensurable ideas can we hold in our mind simultaneously?”
George Dyson: “Why are there no trees in the ocean?”
Andrew Barron: “What would a diagram that gave a complete understanding of imagination need to be?”

Not all are equally interesting, and some of the respondents were evidently daunted by the challenge. A few of the submissions feel like an answer—or an opinion—with a question mark stuck awkwardly on the end. As Gallagher notes in Nautilus: “The question ended up prompting many of the academics among the responders to just restate one of their research targets, albeit succinctly.” The computer scientist Scott Aaronson wrote on his blog:

I tried to devise a single question that gestured toward the P vs. NP problem, and the ultimate physical limits of computation, and the prospects for superintelligent AI, and the enormity of what could be Platonically lying in wait for us within finite but exponentially large search spaces, and the eternal nerd’s conundrum, of the ability to get the right answers to clearly-stated questions being so ineffectual in the actual world. I’m not thrilled with the result, but reading through the other questions makes it clear just how challenging it is to ask something that doesn’t boil down to: “When will the rest of the world recognize the importance of my research topic?”

But it’s impossible to read it without wondering what your own question would be. (None of the participants went with what many science fiction fans know is the real last question: “How can the net amount of entropy of the universe be massively decreased?” But maybe they knew that there’s insufficient data for a meaningful answer.) I don’t know what mine is yet, but this one from Jonathan Gottschall comes fairly close, and it can serve as a placeholder for now: “Are stories bad for us?”

Written by nevalalee

February 8, 2018 at 8:43 am

The science fiction sieve


In a remarkably lucid essay published last week in Nautilus, the mathematician Noson S. Yanofsky elegantly defines the self-imposed limitations of science. Yanofsky points out that scientists deliberately take a subset of phenomena—characterized mostly by how amenable it is to their chosen methods—for their field of study, while leaving the rest to the social sciences or humanities. (As Paul Valéry put it: “Science means simply the aggregate of all the recipes that are always successful. All the rest is literature.”) He visualizes science as a kind of sieve, which lets in some subjects while excluding others:

The reason why we see the structure we do is that scientists act like a sieve and focus only on those phenomena that have structure and are predictable. They do not take into account all phenomena; rather, they select those phenomena they can deal with…Scientists have classified the general textures and heights of different types of clouds, but, in general, are not at all interested in the exact shape of a cloud. Although the shape is a physical phenomenon, scientists don’t even attempt to study it. Science does not study all physical phenomena. Rather, science studies predictable physical phenomena. It is almost a tautology: science predicts predictable phenomena.

Yanofsky groups these criteria under the general heading “symmetry,” and he concludes: “The physicist must be a sieve and study those phenomena that possess symmetry and allow those that do not possess symmetry to slip through her fingers.” I won’t get into the rest of his argument, which draws an ingenious analogy from mathematics, except to say that it’s worth reading in its entirety. But I think his thesis is sound, and it ties into many issues that I’ve discussed here before, particularly about the uncomfortable status of the social sciences.

If you’re trying to catch this process in action, though, the trouble is that the boundaries of science aren’t determined by a general vote, or even by the work of isolated geniuses, but emerge gradually and invisibly from the contributions of countless individuals. But if I were a historian of science, I’d take a close look at the development of science fiction, in which an analogous evolution occurred in plain sight over a relatively short period of time. You can see it clearly in the career of the editor John W. Campbell, who remained skeptical of the social sciences, but whose signal contribution to the genre may have been to put them at its center. And the “sieve” that he ended up using is revealing in itself. A significant turning point was the arrival on his desk of Robert A. Heinlein’s landmark novella “If This Goes On—,” of which Campbell wrote in 1939:

Robert Heinlein, in his “If This Goes On—,” presents a civilization in which mob psychology and propaganda have become sciences. They aren’t, yet…Psychology isn’t a science, so long as a trained psychologist does—and must—say “there’s no telling how an individual man will react to a given stimulus.” Properly developed, psychology could determine that.

As an editor, Campbell began to impose psychological and sociological elements onto stories where they didn’t always fit, much as he would gratuitously insert references to uranium-235 during World War II. He irritated Isaac Asimov, for instance, by asking him to add a section to the story “Homo Sol” about “certain distinctions between the emotional reactions of Africans and Asians as compared with those of Americans and Europeans.” Asimov saw this as an early sign of Campbell’s racial views, and perhaps it was, but it pointed just as convincingly to his interest in mass psychology.

And readers took notice at a surprisingly early stage. In the November 1940 issue of Astounding, a fan named Lynn Bridges presciently wrote:

The Astounding Science Fiction of the past year has brought forth a new type of story, best described, perhaps, as “sociological” science fiction. The spaceships…are still present, but more emphasis has been placed on the one item which will have more to do with shaping the future than anything else, that strange race of bipeds known as man…Both Asimov [in “Homo Sol”] and Heinlein [in “If This Goes On—”] treat psychology as an exact science, usable in formulas, certain in results. I feel called upon to protest. Its very nature prevents psychology from achieving the exactness of mathematics…The moment men stop varying and the psychologist can say definitely that all men are alike psychologically, progress stops and the world becomes a very boring Utopia.

Campbell responded: “Psychology could improve a lot, though, without becoming dangerously oppressive!” Just two months later, in a letter in the January 1941 issue, Asimov referred to the prospect of “mathematical psychology”: “If we can understand Einstein and Hitler down to the mathematical whys and wherefores, we might try to boost along a few Einsteins and cut down on a few Hitlers, and progress might really get going.” Campbell replied much as before: “Psychology isn’t an exact science—but it can be.” Implicit in the whole discussion was the question of whether psychology could be tackled using the same hard-headed engineering approach that had worked for the genre before. And as I’ve written elsewhere, the evolution of Campbellian science fiction is largely one of writers who were so good at lecturing us about engineering that we barely even noticed when they moved on to sociology.

But what interests me now is the form it took in Astounding, which looks a lot like the sieve that Yanofsky describes. Campbell may have hoped that psychology would learn how to predict “how an individual man will react to a given stimulus,” but he seems to have sensed that this wouldn’t be credible or interesting in fiction. Instead, he turned to two subsets of psychology that were more suited to the narrative tools at his disposal. One was the treatment of simplified forms of human personality—say, for instance, in a robot. The other was the treatment of large masses of individuals. Crucially, neither was necessarily more possible than predicting the behavior of individuals, but they had the advantage that they could be more plausibly treated in fiction. Campbell’s preferred instrument at the time was Asimov, who was reliable, willing to take instruction, and geographically close enough to talk over ideas in person. As a result, Asimov’s most famous stories can be read as a series of experiments to see how the social sciences could be legitimately explored by the genre. The Three Laws of Robotics, which Campbell was the first to explicitly formulate, are really a simplified model of human behavior: Campbell later wrote that they were essentially “the basic desires of a small child, with the exception that the motivation of desire for love has been properly omitted.” At the other end of the spectrum, psychohistory looks for laws that can be applied on a mass scale, and it’s central not only to the Foundation series but even to “Nightfall,” with its theme of the cyclical rise and fall of civilizations. In science, you could draw a parallel to artificial intelligence and macroeconomics, which represent two extremes at which qualities of symmetry and predictability seem to enter the realm of psychology. In between, there’s a vast terrain of human experience that Campbell was never quite able to tackle, and that impulse ended up being channeled into dianetics. But much as science can be defined as everything that makes it through the sieve of symmetry, Campbell had a sieve of his own, and the result was the science fiction of the golden age.

Written by nevalalee

June 28, 2017 at 9:07 am

Quote of the Day



One [common feature of beauty] is what I call exuberance or productivity, where you get out more than you put in. You find some equation or law by putting together clues and making a guess, and then you can explain seven other things and you know you’re on the right track. You get out more than you put in.

Frank Wilczek, to Nautilus

Written by nevalalee

January 21, 2016 at 7:30 am


The book of numbers



The recent Nautilus article by Siobhan Roberts about the mathematician Neil Sloane, titled “How to Build a Search Engine for Mathematics,” is the most interesting thing I’ve read online in months. I stumbled across it around six this morning, at a point when I was thinking about little more than my first cup of coffee, and when I was done, I felt energized, awake, and excited about the future. At first glance, its subject might not seem especially promising: Sloane’s baby, The On-Line Encyclopedia of Integer Sequences, sounds about as engaging as the classic bestseller A Million Random Digits with 100,000 Normal Deviates. But the more you think about Sloane and his life’s work, the more it starts to seem like what the Internet was meant to do all along. It’s a machine for generating connections between disciplines, a shortcut that turns good hunches into something more, and a means of quickly surveying an otherwise unnavigable universe of information. In short, it does for numbers, or anything that can be expressed as a sequence of integers, what Google Books theoretically should do for words. The result is a research tool that led Rutgers University professor Doron Zeilberger to call Sloane “the world’s most influential mathematician,” although, if anything, this understates the possible scope of his accomplishments. And even if you’re already familiar with OEIS, the article is well worth reading anyway, if only for how beautifully Roberts lays out its implications.

The appeal of Sloane’s encyclopedia can best be understood by going back to its origins, when its creator was a graduate student at Cornell. While writing his doctoral dissertation on a problem in artificial intelligence, he calculated an integer sequence—0, 1, 8, 78, 944, and so on—that described the firing of neurons in a neural network. As Roberts writes:

The sequence looked promising, though Sloane couldn’t figure out the pattern or formula that would give him the next and all further terms, and by extension the sequence’s rate of growth. He searched out the sequence at the library to see if it was published in a math book on combinatorics or the like, and found nothing. Along the way, however, he came upon other sequences of interest, and stashed them away for further investigation. He eventually computed the formula using a tool from 1937, Pólya’s enumeration theorem.

But the roundabout process had been frustrating. The task should not have been so difficult. He should have been able to simply look up his sequence in a comprehensive reference guide for all extant integer sequences. Since no such thing existed, he decided to build it himself. “I started collecting sequences,” he said. “I went through all the books in the Cornell library…And articles and journals and any other source I could find.”
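For the mathematically curious: the sequence Sloane was chasing became the seed of the whole project, and it now lives in the encyclopedia as entry A000435. A minimal sketch, assuming the closed form listed in that entry—a(n) = (n−1)! · Σ n^k/k! for k from 0 to n−2:

```python
from math import factorial

def a(n):
    # Closed form (assumed, as listed in OEIS entry A000435):
    # a(n) = (n-1)! * sum_{k=0}^{n-2} n**k / k!
    # Each summand (n-1)! // k! * n**k is an exact integer,
    # since k <= n - 2 < n - 1.
    if n < 2:
        return 0
    return sum(factorial(n - 1) // factorial(k) * n**k for k in range(n - 1))

# The first few terms match the ones Sloane computed: 0, 1, 8, 78, 944, ...
print([a(n) for n in range(1, 7)])
```

Of course, the point of the encyclopedia is that you don’t need the formula first: typing 0, 1, 8, 78, 944 into the OEIS search box takes you straight to the entry, which is exactly the shortcut that the graduate-student Sloane lacked.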


Reading this, I was inevitably reminded of the experience of writing my own senior thesis, in the days before universal book search was available, and the kind of random scavenging through the stacks that was required back then to track down references and make connections. Sloane’s impulse to collect such sequences initially took the form of a set of punchcards, followed years later by A Handbook of Integer Sequences, published by his employers at Bell Labs. Finally, about twenty years ago, he put it online. Before long, the database began to prove its value, as when it revealed that a sequence related to the problem of placing cell towers matched one from an unrelated subject in number theory. It’s the closest thing we have to a search engine for math, as long as you can express whatever you’re doing in terms of a sequence of numbers:

Ultimately, it all comes back to counting things, and counting is a universally handy tool. Which in turn makes the encyclopedia handy, too. “Suppose you are working on a problem in one domain, say, electronics, and while solving a problem you encounter a sequence of integers,” said Manish Gupta, a coding theorist by training who runs a lab at the Dhirubhai Ambani Institute of Information and Communication Technology. “Now you can use the encyclopedia and search if this is well known. Many times it happens that this sequence may have appeared in a totally unrelated area with another problem. Since numbers are the computational output of nature, to me, these connections are quite natural.”

As Roberts concludes: “The encyclopedia’s impact on scientific research broadly speaking can be measured by its citations in journals, which currently Sloane has tallied to more than 4,500, ranging through biology, botany, zoology, chemistry, thermodynamics, optics, quantum physics, astrophysics, geology, cybernetics, engineering, epidemiology, and anthropology. It is a numerical database of the human canon.” And although the humanities go mostly unrepresented in that list, that’s probably because the translation of such concepts into numbers isn’t always intuitive. But researchers in other areas can at least appreciate its usefulness by analogy. When I think of how I use Google as a creative tool, it’s less to find specific information than to unearth connections—as when I spent a month looking up pairs of concepts like “Dadaism” and “Vehmgericht” to populate the conspiracy theory in The Icon Thief—or to verify a hunch I’ve already had. (As E.L. Doctorow once put it: “[Research] involved finding a responsible source for the lie I was about to create, and discovering that it was not a lie, which is to say someone else had thought of it first.”) Sloane’s encyclopedia essentially allows mathematicians and scientists to do the same, once they’ve converted their ideas into a searchable sequence, which can be a useful exercise in itself. And even if you aren’t in one of those fields, a few minutes browsing in OEIS is enough to remind you of how large the world is, how patterns can emerge in unexpected places, and how the first step to insight is making sure that those connections are accessible.
