
Supra-Human

Refik Anadol’s Archive Dreaming allows users to “fly” through a digitized archive of 1.7 million documents— “a universe you cannot see,” according to Anadol, “without the power of machine intelligence.”

Google’s Artists and Machine Intelligence program grants a cadre of accomplished artists direct access to unfathomable computing power. The collaboration between art, engineering and artificial intelligence may be rewriting the future.  

“Art kind of exists in a sandbox or a white cube, where the expectation is that anything can happen, right?”

I’m talking with Kenric McDowell, whom I know as Big Phone, the musical persona under which he’s released minimal techno and DJed around Seattle for the last few years. On a shockingly sunny late winter afternoon, McDowell and I are eating lunch at one of the dining halls within Google’s gleaming Seattle offices (this one overlooks the Fremont Ship Canal and features an elaborate nautical theme). McDowell, I recently learned, works here as part of Google’s Artists and Machine Intelligence program, or AMI for short—a 16-month-old collaboration between one of the world’s most powerful corporations and a handful of artists of various disciplines in cities around the globe.

As much as I dig McDowell’s incantational dance music, I’m more intrigued by his work with machine intelligence—mainly because the crossover is so unusual, and because of my loose grasp of what machine intelligence actually is. The phrase suggests a step forward for the citizens of Earth; humans and machines traveling an evolutionary path together, guided by pure reason and benevolence. Who isn’t seduced by the imminence of a more perfect world and the ultimate flowering of human ingenuity?

Right now, however, my practical understanding of the concept is pretty dumb: Machine intelligence is the mysterious algorithm that suggests I watch “British Supernatural TV Shows” on Netflix, and (related) the center of a sci-fi trope about the inevitable war between humanity and our merciless automated overlords. Thus I’m seeking demystification from an MI researcher who also happens to be an electronic musician.

So far I’m following McDowell: In the art world, in the most general sense, anything is possible. There are no rules. Especially, as he points out, compared to the bottom-line pragmatism that reigns over the adjacent world of product development.

“And so [with AMI],” he continues, “we’re able to explore different relationships and ideas around technology and machine intelligence that wouldn’t be possible in the product space. By allowing ourselves to not make sense right away—which is something that art is very good at—things can emerge that wouldn’t in a product space. In MI research in particular, art is very helpful because it’s not a linear type of problem-solving.”

Nonlinear problem-solving is trendy in business, science and even military strategy. The idea is to apply a lateral “artist” approach in all industries, not just those that involve a paintbrush or piano, as a way to break out of routine. Machine learning applies nonlinear thinking to the binary world of traditional computational analysis, which dates all the way back to abacuses and digital calculators. Traditional computation requires a discrete data set input by a programmer, and it yields a similarly discrete—and often predictable—set of solutions. It’s productive on a large scale but somewhat clunky and slow (though it did put people on the moon). Given enough time and resources, humans alone could eventually arrive at the same results.

Machine intelligence, enabled over the last 40 years by advances in microchip technology and the ocean of information collected and cached by the Internet, allows machines to analyze inhuman swaths of data on their own, self-directed within predetermined parameters, with encouragement to develop original conclusions. (The terms machine intelligence, artificial intelligence and machine learning are all used more or less interchangeably.) Emphasis on original: Machine intelligence conjures responses that human minds very well might not. And that’s the point.

For instance, rather than adhering to an inflexible strand of if-then logic, Netflix analyzes my viewing history for trends in plot and pacing and theme and region of provenance, compares it to millions of other users’ histories, compares its analysis of those comparisons to hundreds of thousands of film and TV synopses, and then concludes that, out of countless choices, I’d like Black Mirror. Google gauges real-time traffic and weather conditions from hundreds of localized nodes, cameras and sensors to calculate the best route for me to drive to my office. Facebook, using extremely advanced machine-learning techniques, identifies the faces of my friends in a random assortment of photos. The more initial information, or training data, that the system receives from its human master, the more accurate—though not necessarily more human—its results. The MI refines itself with consistent use. It develops taste.
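To make that concrete, here is a toy version of the comparison step in Python. It is a deliberate oversimplification: the shows, the users and the similarity math are invented stand-ins for illustration, not a description of Netflix’s actual system.

```python
# Toy user-based collaborative filtering: a deliberately simplified sketch
# of the kind of history comparison described above, not Netflix's system.
import numpy as np

# Rows are users, columns are shows; 1 = watched/liked, 0 = not.
# Column order (hypothetical): Black Mirror, Sherlock, Luther, Planet Earth
ratings = np.array([
    [1, 1, 0, 0],   # me
    [1, 1, 1, 0],   # user A
    [0, 0, 1, 1],   # user B
    [1, 1, 1, 1],   # user C
])

def cosine(u, v):
    """Cosine similarity between two viewing-history vectors."""
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)

me = ratings[0]
# Score every other user by how closely their history matches mine...
sims = np.array([cosine(me, r) for r in ratings[1:]])
# ...then weight their viewing by that similarity: what did similar
# users watch that I haven't?
scores = sims @ ratings[1:]
scores[me == 1] = -np.inf          # don't re-recommend what I've seen
print("recommend column:", int(np.argmax(scores)))  # -> 2 (Luther)
```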

Predictive search. Language processing. Translation. Self-driving cars (eventually). Beyond these practical applications, on a philosophical level, the study of machine intelligence coincides with what McDowell calls the “emergent cultural complexity” of the present moment, exemplified by the dissolution of trust in mass media and government, the refraction of gender and sexuality into a spectrum and the “wicked problem” of climate change.

Nuanced new problems demand new, nuanced solutions. “Machine intelligence fits right into that,” McDowell says.

“We’re entering a time when everything is interconnected, everything is complex. And more and more, everything is intelligent. Artists can help us move through that space, whether it’s intellectually and analytically or emotionally and aesthetically. And that’s my interest—framing a way to work artistically in the 21st century that doesn’t hide from complex problems but gives us an aesthetic, artistic and professional relationship to them.”

+ + + +

Where mainstream tech culture obsesses over consumer-oriented gadgets and games, machine learning embraces the intangible and theoretical—another reason why it resonates with the art world. MI technology is not an end unto itself, wherein we devise the ideal artificial decision-making apparatus and, inadvertently or not, render ourselves obsolete. Nor is the goal simply computer-generated art.

Instead, MI research investigates how humans think as much as how computers compute—perhaps more. It’s interested in process as much as result. And as technical as it is, it’s also a philosophical inquiry that plumbs our species’ most intricate emotional and intellectual depths. Its usefulness depends entirely on a single, profound relationship: that between a human and a human-made intelligence.

The art created in conjunction with the AMI program isn’t inherently more advanced or progressive than conventional art. But it does explore fascinating revelations unearthed by a non-human conception of the human mind. Within the AMI program, McDowell interfaces with photographers, filmmakers, writers, choreographers and other artists whom he and the group select; a researcher named Mike Tyka liaises with volunteers among Google’s engineering cadre in Seattle and Mountain View, Calif.; and an engineer named Blaise Agüera y Arcas navigates the group’s higher-level administration within the company. Like every salaried Google employee, all of them are encouraged to spend 20 percent of their working time on personal creative projects like AMI. Along with its best minds, the company provides its prodigious computing power and occasional financial backing (the details of which it declined to share) to artists.

Tyka is trained in computational biophysics and biochemistry, and as an artist he uses 3D printers to sculpt models of protein folds and other microscopic molecular structures. He also had a hand in the development of DeepDream, the Google software platform that brought MI to pop culture via a stream of gooey, fractalized photographic images that appeared online in 2015. Input a photo or other image into DeepDream—essentially an algorithm trained to recognize specific patterns, like faces or insects or dogs, within shapes or color contours in a visual image—and the platform would embellish, or “hallucinate,” as the MI parlance goes, over the image, adding visual echoes of the training data with every round. It was intended to visualize the mysterious action of a “neural network,” a type of computerized conception of human brain function. With the software available online, people cranked out DeepDream-filtered images, which became a tidal wave of kitsch, like the Magic Eye posters of the early ’90s. Lots of faces of dogs.
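For the technically curious, the core move is compact enough to sketch. The loop below is pieced together from public descriptions of the DeepDream technique (gradient ascent on the image itself); the network, the layer cutoff, the step size and the input filename are illustrative assumptions, not Google’s original code, and preprocessing is simplified.

```python
# A minimal DeepDream-style loop: nudge an image so a chosen layer of a
# pretrained network activates more strongly, then repeat. A sketch, not
# Google's original settings; ImageNet normalization is omitted for brevity.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.vgg16(weights="IMAGENET1K_V1").features[:20].eval()

img = Image.open("photo.jpg").convert("RGB")          # any input photo
x = transforms.ToTensor()(img).unsqueeze(0).requires_grad_(True)

for step in range(30):
    model.zero_grad()
    activations = model(x)
    # "Hallucinate": push the image to excite this layer as much as possible.
    loss = activations.norm()
    loss.backward()
    # Normalized gradient ascent step on the pixels themselves.
    x.data += 0.01 * x.grad / (x.grad.abs().mean() + 1e-8)
    x.grad.zero_()

transforms.ToPILImage()(x.detach().squeeze(0).clamp(0, 1)).save("dream.jpg")
```

Run the loop longer, or pick an earlier or later layer, and the visual echoes shift from fine textures toward whole imagined objects, which is where the dog faces come from.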

To laypeople like me, DeepDream seems like nothing more than a fancy trip toy. Tyka and McDowell, though, are fascinated by the public’s enthusiastic misapprehension. (“Kitsch-ification,” according to McDowell, isn’t necessarily something to run away from.) Moreover, Tyka explains, poking holes in the value of DeepDream’s graphics misses the point, as the software was an experiment to begin with.

To him, DeepDream’s remix of images harkens back to the innovative origins of hip-hop, when popular opinion scoffed at the idea that scratching records and sampling beats was a genuine form of creativity. A few short decades later, hip-hop is the prevailing form of pop culture around the world. It’s a legitimate art form, it uses preexisting components, and it’s wholly original.

“I’d argue that no creative processes happen in a vacuum,” he says. “All are to some extent a recombination of your experiences. You could say your entire life that you’ve lived, that’s what you work with. You don’t have anything else. The data set of experience you’ve had so far is what your actions are going to be based on. That’s why you don’t see modern art pop out in the middle of nowhere, randomly. There was a process and each step takes it a little bit further. If Pollock turned up in 1800, people wouldn’t have even recognized it as art.”

Some AMI-created art is abstractly beautiful; some conveys vast amounts of data through beautiful imagery; some is the awkward, eerie or humorous brainchild of a fascinating and invisible collusion between man and machine. The common thread is artist-MI collaboration. Working with Google engineers, artists take advantage of MI’s mind-boggling abilities to analyze data and recognize patterns—considered a mysterious and deeply human skill—then apply the results in ways that are wildly divergent, occasionally illuminating and endlessly intriguing.

+ + + +

Under the auspices of AMI, this past December, Allegra Bliss Searle-LeBel, a Seattle-based choreographer, technologist and performance artist, produced an immersive performance that attempted to bridge the gap between human imagination, machine intelligence and space exploration. “We Are More than the Sum of All We Do Not Yet Know” involved choreographed dance, live music, projection-mapped visuals and audience participation.

The goal of her piece, Searle-LeBel says, was to provide a sense of humanity on an inhuman scale. “We’re connected to earth and to space, to these big ideas. And if we can have that kind of relationship with things we’re scared of—artificial intelligence, for example—then that changes the future. Because if we imagine that we could have a relationship with these tools that isn’t just based on everyday functionality or fear, then that lets us see a wide variety of possibilities for the future. Instead of, ‘Well, we’re going to have these super functional, very helpful algorithms, right up until they kill us all’— that’s kind of the story right now—being able to work with engineers and say, ‘We’re going to create something that’s never been done and make art with this,’ and have it come from a place of imagination and empathy, that’s a powerful way to shift the conversation.”

Tivon Rice, a sculptor with a brand-new Ph.D. from the University of Washington’s DXARTS program, worked with Google engineers to analyze public land-use records in Seattle and create a pair of artificial intelligences, one that embodied the perspective of developers, the other the general public. He and a Google engineer fed the machines thousands of online documents—building permits, requests for building permits, online comments in favor of or against development—totaling about 250,000 words, roughly the bare minimum amount of training data to produce a valid result. The machines learned which letters are most likely to follow one another, how letters become words that become sentences that convey concepts that comprise human speech. Rice and the machines developed a rudimentary lexicon, sketched two vaguely formed characters, showed them photos he took of landscapes around Seattle and spurred them into an uncanny simulacrum of formal conversation. Then he incorporated this artificial dialogue into a six-minute virtual reality film (which is a whole other story).
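The letter-by-letter idea is easy to demonstrate in miniature. The sketch below uses a character-level Markov chain, a much cruder cousin of the neural networks Rice actually used, to learn which character tends to follow each short run of characters; the training file name is hypothetical.

```python
# A character-level Markov chain: the simplest version of "learning which
# letters are most likely to follow one another." Rice's models were neural
# networks; this is a stripped-down stand-in for the same idea.
import random
from collections import defaultdict

corpus = open("permits.txt").read()     # hypothetical training file

# Count which character tends to follow each 4-character context.
ORDER = 4
follow = defaultdict(list)
for i in range(len(corpus) - ORDER):
    follow[corpus[i:i + ORDER]].append(corpus[i + ORDER])

# Generate: repeatedly sample a plausible next character.
ctx = corpus[:ORDER]
out = ctx
for _ in range(400):
    nxt = random.choice(follow.get(ctx, [" "]))   # fall back on a space
    out += nxt
    ctx = out[-ORDER:]
print(out)
```

Even this crude version produces passable bureaucratic phrases from permit language; the neural version generalizes instead of memorizing, which is what lets it hold up its half of a conversation.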

“It’s less about letting a robot make your art for you and more about the relationship you start to build with large sets of data, or with these systems that can surprise you in how they output that data,” Rice says, sitting in his basement studio on the UW campus while his son, headphones on, watches cartoons on a nearby computer. “My interaction with it has always felt more like a collaboration. Because [the MIs] inherently need some sort of a human curation, some sort of a critical or creative filter put on them.”

Rice also brings up the example of early hip-hop, pointing out that DJs intentionally chose disco and jazz music (rather than, say, country or classical) as their foundation. As training data for his next project, Rice will feed an MI the collected works of JG Ballard (!), the late British author “very familiar with architecture, urbanism and disaster. It’ll be interesting to see what types of images flow through.”

In Rice’s experience, artists generate the big-picture conceptual frameworks for these collaborations more readily than nuts-and-bolts engineering students do. Without the artists’ inspiration, the engineers’ MI has no motivation.

+ + + +

In 2014, a Turkish-born, Los Angeles-based architect named Refik Anadol installed an epic projection-mapped light show inside Walt Disney Concert Hall for a performance of Edgard Varèse’s Amériques by the LA Philharmonic. (It’s nuts; watch the video ASAP.) In April, as this story went to press, he launched his AMI-affiliated project in Istanbul. Titled Archive Dreaming, it’s set inside a renowned contemporary art museum and archive called SALT, which has digitized its Ottoman Bank-owned collection of some 1.7 million documents and images—literature, correspondence, contracts, calligraphy, drawings, paintings and photos going back to the 1850s. Anadol used MI to analyze the archive, organizing patterns in color, content, design, place of origin and chronology, and screened the results inside a projection chamber. Inside a dark, round room, visitors use a digital tablet to manipulate an animated, projected cloud of images—1.7 million!—in real time.
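Anadol hasn’t published his pipeline, but the standard recipe for this kind of “data universe” suggests the mechanics: describe each digitized item with a feature vector, then let a dimensionality-reduction algorithm such as t-SNE place similar items near one another in three dimensions. Here is a sketch with randomly generated stand-in features; it is a guess at the approach, not a description of Anadol’s code.

```python
# One plausible recipe for a navigable archive: feature vectors in, 3-D
# coordinates out. The features below are random stand-ins; in practice
# they would come from a pretrained image network run over the documents.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
features = rng.normal(size=(2000, 512))   # 2,000 items, 512 dims each

# t-SNE pulls similar feature vectors close together in 3-D space.
coords = TSNE(n_components=3, perplexity=30, init="pca",
              random_state=0).fit_transform(features)

# Each row of `coords` is now an (x, y, z) position for one document:
# the raw material for a projected, flyable point cloud.
print(coords.shape)   # (2000, 3)
```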

“You can see the entire archive in three dimensions,” Anadol says. “You’re inside this room and you have an interactive tablet and you can fly through this data universe, a universe you can’t see without the power of machine intelligence. It’s a mind-blowing futurist moment.”

Users can also switch the interface into “dream” mode and the program will use its pattern-recognition capabilities to render hundreds of fictional historical documents—original reproductions, if there can be such a thing—on the spot.

“Suddenly the entire archive will hallucinate itself. This is the amazing part of the project, seeing an alternate history from the perspective of a machine. That’s where it’s groundbreaking.”

Breathless—and perhaps rightfully so. For better or worse, AMI is not all mind-blowing futurist moments. Some iterations are small, intimate, weird. Even funny. NYU grad student, linguist and “creative technologist” Ross Goodwin used Google’s computing resources to train a “long short-term memory” neural network that generated mankind’s first machine-made film script. Sunspring, a nine-minute sci-fi-ish short starring Silicon Valley’s Thomas Middleditch, premiered online last summer. It’s a tense, abstract talker, as confounding as it is endearing, like David Lynch meets Quentin Tarantino on the holodeck. Which makes sense given the dozens of sci-fi screenplays from the ’80s and ’90s that Goodwin used as training data.
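One of the dials that shapes output like Sunspring’s is sampling “temperature.” A language model proposes a score for every candidate next word; temperature reshapes those scores before one is drawn at random. The words and scores below are invented for illustration:

```python
# Temperature sampling: low values pick the safe, likely word; high values
# flatten the distribution so strange choices surface. Scores are made up.
import numpy as np

rng = np.random.default_rng(7)
words = ["he", "she", "the", "skull", "eyes"]
logits = np.array([2.0, 1.8, 1.5, 0.2, 0.1])   # hypothetical model output

def sample(logits, temperature):
    p = np.exp(logits / temperature)
    p /= p.sum()                    # softmax over the reshaped scores
    return rng.choice(len(p), p=p)

# 0.2 -> cautious, repetitive; 2.0 -> "skull" starts showing up.
for t in (0.2, 1.0, 2.0):
    print(t, [words[sample(logits, t)] for _ in range(8)])
```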

“It’s hard to watch Sunspring and not find it funny,” Goodwin says. “Humor is a good context for this work because it’s not human-level. The thing I’m trying to communicate is that this could be human level soon, so what are the implications from that? Maybe in a few years it’ll make people cry, too.”

Humans developing smart machines that help humans develop empathy: Imagine that.

Goodwin believes that if machine intelligence raises concerns over eventual human obsolescence, the outcome to plan for is universal basic income, not homicidal robots. “We’ve absorbed a ton of fiction about ruining the world, but it’s not in line with AI today or any of the research that’s being done.”

Throughout history, we’ve leveraged technology to both cause unspeakable damage and demonstrably improve the human condition. With perhaps more at stake than ever before, we must make, as the engineers say, a design decision. We must determine where MI will take us. Our current paranoia over a machine-enhanced future is based on our own feverish narratives, not the objective neutrality of the technology itself. If we end up designing machines that do harm, we have only our own imprudence to blame. Our own lack of intelligence. As we evolve, we continue to learn more about our own behavior, our best and worst. The machines can only help.
