“Gestural Abstractions”

[Image: Leap Motion controller]

In the March 2013 issue of Artforum, I published “Gestural Abstractions,” an essay on touch-screen interfaces, patent law, and how we might interact with images in the not-too-distant future.

___

APPLE HAS FILED PATENTS for Pinch-to-Zoom, Slide-to-Unlock, Multifinger Twisting, Double-Tap-to-Zoom, and Over-Scroll Bounce, aka Rubber-Banding, among other functional finger gestures. The company is indisputably striving to corner the market on how we move our fingers across screens, how we scan and massage images. This was evident in August, when Apple won a major patent-infringement lawsuit against Samsung and was awarded one billion dollars in damages, bringing us closer to the apocalypse Steve Jobs augured a few years ago in his Herman Kahn–inspired attack on Google’s Android OS: “I’m willing to go thermonuclear war on this.”

The patent war to end all wars—Apple has recently been involved in scores of lawsuits with other technology companies, many of which are also suing one another—may limit innovation and inhibit usability: Imagine every manufacturer implementing slightly different methods for scrolling and zooming. But amid the attrition, we may also come to notice the ways in which our interactions with digital interfaces are changing, with increasingly complex functions facilitated by a new level of haptic choreography, the goal being the seamless integration of sensation, cognition, and computation. The companies introducing advanced swiping techniques into our mundane interactions with digital images are likely to capture and commodify the information produced by such gestures. This means making those gestures feel inconspicuous, natural. And so even as we’re being trained to adhere to these serpentine haptic routines, we will need to forget—if only in our muscles—having ever swiped otherwise.

Every era has its interface, and every interface determines how we relate to the world. In 1968, when computer pioneer Douglas Engelbart first publicly demonstrated the mouse—a crude wooden box undergirded by metal wheels—there was no clear sense of how people might employ computers. Were they advanced calculators or interactive theaters? Should the transformation of command into code occur via the punching of keys or the caressing of the machine? Crude touch-screen devices existed in the late 1960s but were developed in the direction of the ATM and air-traffic control. The mouse did not become ubiquitous until Apple licensed Engelbart’s invention and the 1984 debut of the Macintosh ushered in the age of personal computing. Engelbart argued that computers should “augment human intellect” and conceal their own complexity in order to help us solve “the big problems”; here, finally, was a machine that did just that. The graphic interface granted the user unparalleled control by cloaking the actual computing—which was, of course, in no small part controlling the user. Interactions supplanted inputs; work muddled leisure. And to maintain mastery over this constantly metamorphosing system, the lay user always had to be training herself. Jobs didn’t so much change Engelbart’s basic interface as design a way of engaging with it that reflected—and helped to foster—the principles and social formations of the New Economy.

Now Apple and its competitors are doing much the same with the touch-screen interface, which has endured for half a century but has only recently become paradigmatic of the extrapersonal-computing era. One recent Apple patent maps out a series of gestures involving minuscule slides and directional shifts, with such evocative names as Scoop, Infinity, Star, Crossbar, and Ohm. Apple has made clear that these proprietary finger movements will be used in the manipulation of images and video; surely it won’t be long before we are cutting and pasting ourselves into branded stock environments with a pinch and an Ohm, then pushing the result to the cloud. While some humble tablet may yet provide salvation for the book, the furious patenting of touch-screen hand motions suggests the ascendance of generative rather than contemplative experiences, and image files rather than text. Just as ergonomics shaped the workplaces of the twentieth century, from assembly lines to cockpits to cubicles, interaction design is shaping the economy of the twenty-first century, priming us for data-rich, personalized expressions rather than efficient, standardized motions.

In the early ’80s, Vilém Flusser, the late media theorist, anticipated this shift from a reactive relationship with technological interfaces, in which our actions are governed by remote controls and keyboards, to one in which we pattern our behavior and maneuver our bodies according to increasingly complex—and eventually factitiously intelligent—programs of apparatuses. “At the current stage of key development, there will continue to be faulty keys, namely, those that permit me to choose but not to express myself (e.g. the television control panel),” Flusser writes in Ins Universum der technischen Bilder (Into the Universe of Technical Images, 1985). “But we can expect to be enraptured by all keys at a later stage of automation because they will all be instruments that permit us to join with all others, giving meaning to the whirring chaos of the particulate universe.”

Flusser hoped the “fascistic circuitry” that was likely to result from such a scenario would be superseded by a “positive utopia” where people would assume full responsibility for making sense of a world deprived of objective coherence—primarily through technical images, “computations of concepts” rather than observations of objects. What we got instead was Facebook, fascistic circuitry in positive-utopia garb, where networked meaning-making is marshaled to commodify affect. But while Facebook’s “Like” button deftly converts clicks into information, the gestural repertoire of the mouse and keyboard remains extremely limited. That simple click is nonetheless quite fulfilling, because it seems to “permit us to join with all others”; and insidious, because the gesture (along with our training) erases its actual purpose, which is to submit data to Facebook. As such, the Like button may represent the zenith of the mouse-and-keyboard interface.

Even if they are successfully implemented, Apple’s patented finger routines risk unsettling the delicate balance between managing the body and promising users unparalleled freedom and expressivity, between the unyielding protocol that governs all these gestures and the communicative horizons they herald. The interface should feel intuitive and effectively shroud the underlying code, but should never disappear entirely, lest users get the uncanny sensation they’re being manipulated. “Any mediating technology is obliged to erase itself to the highest degree possible in the name of unfettered communication, but in so doing it proves its own virtuosic presence as technology, thereby undoing the original erasure,” writes Alexander Galloway in The Interface Effect (2012). This is precisely why minimizing user labor, so that digital self-expression is nearly an unconscious act, is still the industry norm. And this is why start-ups such as Gigya can make millions selling plug-ins that cajole users to “earn rewards for performing valuable behaviors,” such as evaluating images on a client’s website simply by clicking “Awesome,” “Want It,” or “Innovative,” then posting their “reactions” to social networks.

Flusser recognized the movement toward a “society of image producers and image collectors” that would “no longer be found in any place or time but in imagined surfaces, in surfaces that absorb geography and history.” The historical consciousness cultivated by the birth of writing four thousand years ago—and deliberately summoned by the tablet computer—would be replaced by “the consciousness of a pure information society.” The current incarnation of this vision seems rather grotesque; smartphones offer immediate access to a wealth of images that hardly ever reveal anything beyond the role of mediation in an economy defined by constant circulation. And we are only at a primitive stage in our swiping and sharing.

So far, the haptic regime particular to tablets and smartphones hasn’t really improved that much on point-and-click; most apps make fairly elementary use of the standard pinches and drags. But we can nevertheless look to them to limn the direction of tactile interaction design, and the course of our inculcation. Instagram’s reported ninety million active users imbue images with meaning (and metadata) mostly by applying a filter and posting them; at the same time, this generic form of personalization has turned the undefiled—which is to say untouched—image into a talisman of the real and led to the prevalence of the “#nofilter” hashtag. Photosynth, on the other hand, supplies a vertiginous experience of “direct manipulation,” the act of maneuvering an object within a graphic interface in real time. Microsoft’s app allows you to pan from one side of the Brooklyn Bridge to the other with your iPhone and create an interactive panorama, which you can then pinch, swipe, and share. But each panorama seems less like an “immersive experience” than a blank document of the interface, a testament to the gulf between the person who sees and the apparatus-operating persona that materializes when the app is fired up.
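Indeed, at the level of code, the canonical pinch is strikingly simple. The sketch below, written against the browser's standard touch-events API, reduces two fingertips to a single scale factor in real time; the #photo element and the reset-on-each-gesture behavior are illustrative assumptions, not any vendor's patented implementation.

```typescript
// A minimal sketch of pinch-to-zoom, assuming a browser TouchEvent API.
// Two fingertips are reduced, frame by frame, to one scale factor.
const photo = document.getElementById("photo") as HTMLElement; // hypothetical element

let startSpread = 0; // distance between the two fingers when the pinch begins
let scale = 1;       // current zoom factor, relative to the gesture's start

// Euclidean distance between the first two touch points.
function spread(touches: TouchList): number {
  const dx = touches[0].clientX - touches[1].clientX;
  const dy = touches[0].clientY - touches[1].clientY;
  return Math.hypot(dx, dy);
}

photo.addEventListener("touchstart", (e: TouchEvent) => {
  if (e.touches.length === 2) startSpread = spread(e.touches);
});

// { passive: false } lets us call preventDefault() to suppress the
// browser's own built-in pinch handling while the gesture is live.
photo.addEventListener(
  "touchmove",
  (e: TouchEvent) => {
    if (e.touches.length === 2 && startSpread > 0) {
      e.preventDefault();
      scale = spread(e.touches) / startSpread;
      photo.style.transform = `scale(${scale})`;
    }
  },
  { passive: false }
);
```

Everything proprietary about Pinch-to-Zoom lies not in this arithmetic but in the surrounding choreography: the thresholds, the momentum, the rubber-banded bounce at the edges.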

Flusser believed that our gadgets might serve noble purposes if only we could reprogram them—which is to say reprogram ourselves, the users. Then we might engage in the socialized production of information (rather than in its vacuous circulation), leading to radical democratic structures. Play would replace work, and society would become a “school for freedom” in which everyone is a creator and critic. But this cybernetic vision has famously become the domain of Silicon Valley venture capitalism, and instead of a “positive utopia” we have the monetization of “valuable behaviors.” In the patent wars we can glimpse a possible future in which all the fingertips and eyeballs of the world are united as workers in the everywhere-factory of the image.

According to a recent raft of Apple patents, the gestural patterns established for touch-screen devices may be a prelude to 3-D interfaces with sensors monitoring the movement of the hands and head, detaching the fingers from the screen and appropriating such everyday gestures as waving good-bye or tipping one’s hat to an acquaintance. Galloway insists that the interface is not a single object—consider the various combinations of screens, controllers, data, algorithms, protocols, and so on—but a technique of mediation. And so rather than dwell on the formal differences between the wearable Google Glass and the touchable iPad, we might think about the common effect of such seemingly divergent technologies: to mark each individual’s sensorium as an agglomeration of data points, a constellation of associated images and actions and GPS coordinates, a commodifiable expression of identity. G-speak, a 3-D interface recently demoed by Oblong Industries, updates an earlier design, created for Minority Report (2002), that made Tom Cruise into a manual sorcerer, conjuring windows and databases. The “spatial operating environment” may seem like the apotheosis of the interface that purports to liberate us by enabling total control of the screen in real time, turning us all into power users so long as we adhere to the program. But so might Leap Motion’s nimble tracking device for Apple computers, to be released any day now, which enables users to drag maps, edit images, and “turn” digital pages with orchestrated flicks of the fingers through empty space. “This is like day one of the mouse,” the company’s website proclaims. “Except no one needs an instruction manual for their hands.”