Alec Nevala-Lee

Thoughts on art, creativity, and the writing life.

Making the mark


Illustration by Jules Feiffer

I only wanted to do what I could right away. I didn’t want to have to do things that were hard. Hard was too hard. Hard was full of defeat. Hard was full of rejection. Hard was full of self-reproach and self-hate. There was enough self-hate operating within me under ordinary circumstances not to provoke even more by repeated failures at something I felt was beyond my ken, but which might not have been, had I been able to apply a little more effort. So, I think that at least unconsciously, becoming the sort of cartoonist I became, instead of the more traditional cartoonist, was because I felt I couldn’t compete as a more traditional cartoonist. I couldn’t do the slick thick and thin line. I couldn’t draw super-characters with ease and facility. I couldn’t do the work I thought I wanted to do…

Not being able to really be as good as I wanted to at my first love, which would have been a daily strip, I had to invent another form for myself, within cartooning. That no one else could do, that I was the only one doing, so that I couldn’t have any competitors. So nobody could be any better at it than I was. If I invented it, who was my competition? I mean, all competition had to be measured against me. I was making the mark. I’m not saying this was by any means a conscious choice. I think by a process of elimination, I just slipped into it…

Striking off on my own has never been intimidating. Being like everyone else has been intimidating, because I’m lousy at it. Being part of a group is intimidating, because I just don’t get the hang of it.

Jules Feiffer, to The Comics Journal

Written by nevalalee

February 10, 2016 at 10:09 am

Quote of the Day


Terrence McNally

Theater should resemble more a newsroom, with deadlines, than a slow, leisurely workshop development process…In theater, you should strike while the iron is hot. It’s the moment. It’s like with food. You taste it, and you don’t wait a week to say if you like the sauerkraut.

Terrence McNally, to the Boston Globe

Written by nevalalee

February 10, 2016 at 7:30 am

The time factor


Concept art for Toy Story 3

Earlier this week, my daughter saw Toy Story for the first time. Not surprisingly, she loved it—she’s asked to watch it three more times in two days—and we’ve already moved on to Toy Story 2. Seeing the two movies back to back, I was struck most of all by the contrast between them. The first installment, as lovely as it is, comes off as a sketch of things to come: the supporting cast of toys gets maybe ten minutes total of screen time, and the script still has vestiges of the villainous version of Woody who appeared in the earlier drafts. It’s a relatively limited film, compared to the sequels. Yet if you were to watch it today without any knowledge of the glories that followed, you’d come away with a sense that Pixar had done everything imaginable with the idea of toys who come to life. The original Toy Story feels like an exhaustive list of scenes and situations that emerge organically from its premise, as smartly developed by Joss Whedon and his fellow screenwriters, and in classic Pixar fashion, it exploits that core gimmick for all it’s worth. Like Finding Nemo, it amounts to an anthology of all the jokes and set pieces that its setting implies: you can practically hear the writers pitching out ideas. And taken on its own, it seems like it does everything it possibly can with that fantastic concept.

Except, of course, it doesn’t, as two incredible sequels and a series of shorts would demonstrate. Toy Story 2 may be the best example I know of a movie that takes what made its predecessor special and elevates it to a level of storytelling that you never imagined could exist. And it does this, crucially, by introducing a new element: time. If Toy Story is about toys and children, Toy Story 2 and its successor are about what happens when those kids become adults. It’s a complication that was inherent to its premise from the beginning, but the first movie wasn’t equipped to explore it—we had to get to know and care about these characters before we could worry about what would happen after Andy grew up. It’s a part of the story that had to be told, if its assumptions were to be treated honestly, and it shows that the original movie, which seemed so complete in itself, only gave us a fraction of the full picture. Toy Story 3 is an astonishing achievement on its own terms, but there’s a sense in which it only extends and trades on the previous film’s moment of insight, which turned it into a franchise of almost painful emotional resonance. If comedy is tragedy plus time, the Toy Story series knows that when you add time to comedy, you end up with something startlingly close to tragedy again.

Robert De Niro in The Godfather Part II

And thinking about the passage of time is an indispensable trick for creators of series fiction, or for those looking to expand a story’s premise beyond the obvious. Writers of all kinds tend to think in terms of unity of time and place, which means that time itself isn’t a factor in most stories: the action is confined within a safe, manageable scope. Adding more time to the story in either direction has a way of exploding the story’s assumptions, or of exposing fissures that lead to promising conflicts. If The Godfather Part II is more powerful and complex than its predecessor, it’s largely because of its double timeline, which naturally introduces elements of irony and regret that weren’t present in the first movie: the outside world seems to break into the hermetically sealed existence of the Corleones just as the movie itself breaks out of its linear chronology. And the abrupt time jump, which television series from Fargo to Parks and Recreation have cleverly employed, is such a useful way of advancing a story and upending the status quo that it’s become a cliché in itself. Even if you don’t plan on writing more than one story or incorporating the passage of time explicitly into the plot, asking yourself how the characters would change after five or ten years allows you to see whether the story depends on a static, unchanging timeframe. And those insights can only be good for the work.

This also applies to series in which time itself has become a factor for reasons outside anyone’s control. The Force Awakens gains much of its emotional impact from our recognition, even if it’s unconscious, that Mark Hamill is older now than Alec Guinness was in the original, and the fact that decades have gone by both within the story’s universe and in our own world only increases its power. The Star Trek series became nothing less than a meditation on the aging of its own cast. And this goes a long way toward explaining why Toy Story 3 was able to close the narrative circle so beautifully: eleven years had passed since the last movie, and both Andy and his voice actor had grown to adulthood, as had so many of the original film’s fans. (It’s also worth noting that the time element seems to have all but disappeared from the current incarnation of the Toy Story franchise: Bonnie, who owns the toys now, is in no danger of growing up soon, and even if she does, it would feel as if the films were repeating themselves. I’m still optimistic about Toy Story 4, but it seems unlikely to have the same resonance as its predecessors—the time factor has already been fully exploited. Of course, I’d also be glad to be proven wrong.) For a meaningful story, time isn’t a liability, but an asset. And it can lead to discoveries that you didn’t know were possible, but only if you’re willing to play with it.

Quote of the Day


Written by nevalalee

February 9, 2016 at 7:30 am

Posted in Quote of the Day


The case against convenience


Early patent sketches for Apple handheld device

Last week, I finally bought a MacBook Pro. It’s a slightly older model, since I wanted the optical drive and the ports that Apple is busy prying away from its current generation of devices, and though it isn’t as powerful under the hood as most of its younger cousins, it’s by any measure the nicest laptop I’ve ever owned. (For the last few years, I’ve been muddling through with a refurbished MacBook that literally disintegrated beneath my fingers as I used it: the screws came out of the case, the plastic buckled and warped, and I ended up keeping it together with packing tape and prayer. If this new computer self-destructs, I assume that it won’t be in such a dramatic fashion.) And while it might seem strange that I sprang for a relatively expensive art object from Apple shortly after my conversion to an Android phone, my favorite thing about this new arrangement is that I don’t need to worry about syncing a damned thing. For years, keeping my laptop and my phone synced up was a minor but real annoyance, particularly on a computer that seemed to audibly gasp for air whenever I connected it with my iPhone. Now that I don’t have that option, it feels weirdly liberating. My smartphone is off in its own little world, interacting happily with my personal data through Google Photos and other apps, while my laptop has access to the same information without any need to connect to my phone, physically or otherwise. Each has its own separate umbilicus linking it with the cloud—and never the twain shall meet.

And there’s something oddly comforting about relegating these devices to two separate spheres, as defined by their incompatible operating systems. I’ve spoken here before about Metcalfe’s Law, which is a way of thinking about the links between nodes in a telecommunications network: in theory, the more connections, the greater the total value. And while this may well be true of systems, like social media, in which each user occupies a single node, it’s a little different when you apply it to all the devices you own, since the complexity of overseeing those gadgets and their connections—which are entities in themselves—can quickly become overwhelming. Let’s say you have a laptop, a tablet, a smartphone. If each connects separately with the cloud, you’ve only got three connections to worry about, and you can allocate separate headspace to each one. But if they’re connected with each other as well as the cloud, the number of potential connections increases to six. This may not sound like much, although even two extra connections can grow burdensome if you’re dealing with them every day. But it’s even worse than that: the connections don’t run in parallel, but form a web, so that any modification you make to one invisibly affects all the others. If you’re anything like me, you’ve experienced the frustration of trying to customize the way you interact with one device, only to find that you’ve inadvertently changed the settings on another. The result is a mare’s nest of incompatible preferences that generate unpredictable interference patterns.
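The connection-counting above follows directly from the handshake formula: with hub-and-spoke syncing, links grow linearly with the number of devices, but add device-to-device connections and they grow quadratically, as Metcalfe's Law suggests. A minimal sketch of that arithmetic (the function names here are my own, purely for illustration):

```python
from itertools import combinations

def cloud_only_links(devices):
    """Hub-and-spoke: one link per device, each to the cloud."""
    return len(devices)

def full_mesh_links(devices):
    """Every device pairs with every other — n(n-1)/2 links —
    plus each device's own link to the cloud."""
    pairwise = len(list(combinations(devices, 2)))
    return pairwise + len(devices)

gadgets = ["laptop", "tablet", "smartphone"]
print(cloud_only_links(gadgets))  # 3 connections to keep track of
print(full_mesh_links(gadgets))   # 6 — and a fourth gadget makes it 10
```

The gap widens fast: the cloud-only count scales as n, while the mesh scales as n(n + 1)/2, which is why even one or two extra devices can make the web of connections feel unmanageable.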

Apple Continuity

Segregating all the parts of your digital life from one another takes away much of that confusion: you don’t have to think about any of it if your computer and your phone don’t speak a common language. (They can each talk to the cloud, but not to each other, which provides all the connectivity you need while keeping the nodes at arm’s length.) But Apple and other tech companies seem determined to combine all of our devices into one terrifying hydra of information. One of the big selling points of the last few Mac OS X updates has been a feature ominously known as Continuity: you can start writing an email or editing a document on one device and pick it up on another, or use your laptop or tablet to make calls through your phone. This sounds like a nice feature in theory, but on closer scrutiny, it falls apart. The whole point of owning multiple devices is that each one is best suited for a certain kind of activity: I don’t want to edit a text document on my phone or make a call on my laptop if I can possibly avoid it. It might be nice to have the option of resuming on one device where you left off somewhere else, but in practice, most of us structure our routines so that we don’t have to worry about that: we can always save something and come back to it, and if we can’t, it implies that we’re enslaved to our work in a way that makes a mockery of any discussion of convenience. And retaining that option, in the rare cases when it’s really useful, involves tethering ourselves to a whole other system of logins, notifications, and switching stations that clutter up the ordinary tasks that don’t require that kind of connectivity.

Is the result “convenient”? Maybe for a user assembling such a system from scratch, like Adam naming the animals. But if you’re at all intelligent or thoughtful about how you work, you’ve naturally built up existing routines that work for you alone, using the tools that you have available. No solution designed for everybody is going to be perfect for any particular person, and in practice, the “continuity” that it promises is really a series of discontinuous interruptions, as you struggle to reconcile your work habits with the prepackaged solution that Apple provides. That search for idiosyncratic, practical, and provisional solutions for managing information and switching between different activities is central to all forms of work, creative and otherwise, and an imperfect solution that belongs to you—even if it involves rearranging your plans, heaven forbid, to suit whatever device happens to be accessible at the time—is likely to be more useful than whatever Apple has in mind. And treating the different parts of your digital life as essentially separate seems like a good first step. When we keep each device in its own little silo, we have a decent shot at figuring out an arrangement that suits each one individually, rather than wrestling with the octopus of connectivity. In the long run, any version of convenience that has been imposed from the outside isn’t convenient at all. And that’s the inconvenient truth.

Written by nevalalee

February 8, 2016 at 9:59 am

Posted in Writing


Quote of the Day


Written by nevalalee

February 8, 2016 at 7:30 am

Picasso until proven otherwise


August Wilson

I always say that any painter that stands before a canvas is Picasso until proven otherwise. He stands before a blank canvas and he takes his tools. Paint, form, line, mass, color, relationship—those are the tools, and his mastery of those tools is what will enable him to put that painting on canvas. Everybody does the same thing. His turn out like that because he’s mastered the tools. What happens with writers is that they don’t want to learn the craft—that is, your tools. So if you wanna write plays, you can’t write plays without knowing the craft of playwriting. Once you have your tools, then you still gotta create out of that thing, that impulse. Out of necessity, as Bearden says: “Art is born out of necessity.” Most writers ignore the very thing that would get them results, and that’s craft. And how do you learn craft? In the trenches.

August Wilson, to The Believer

Written by nevalalee

February 7, 2016 at 7:30 am

