Most agree that AI will be the defining technology of our time, but our predictions tend to differ wildly. Either AI will become the perfect servant, ushering in a new era of productivity and leisure one weather report at a time (Hi, Alexa), or it’ll master us, consigning humanity to the ash heap of biological history (I see you, Elon).
But there’s a slice of gray in between we should consider: What if AI became a peer and a collaborator, rather than a servant or an overlord?
Let’s use art as an example. The history of art and the history of technology have always been intertwined. In fact, artists—and whole movements—are often defined by the tools available to make the work. The precision of flint knives (the high technology of the Stone Age) allowed humans to sculpt the first pieces of figurative art out of mammoth ivory. The Old Masters used the camera obscura to render scenes of extraordinary depth. In 2018, artists work in every medium available to them, such as fluorescence microscopy, 3D bioprinting, and mixed reality, further stretching the possibilities of self-expression and investigation.
The defining art-making technology of our era will be AI. But this won’t be the kind of artificial intelligence of our past imagination—it’ll be the augmented intelligence of the present. While “artificial intelligence” still evokes the idea of autonomous machines that, after a period of algorithmic maturation, will ruthlessly and inevitably surpass their human makers, “augmented intelligence” reflects the pragmatic truth of the situation: sophisticated technologies that enhance our capabilities, but still require human intelligence to define rules and steer the way.
When applied to art-making, AI can be thought of as a collaborator: a partnership between the human artistic spirit and advanced intelligent technologies. Artists collaborate for many reasons: seeking a greater sum of combined talent (illustrator and writer), feedback loops of inspiration (improvisational jazz duo), or simply for the unexpected contributions that arrive from conflict and partnership (any two humans who have ever danced together). Tools and technologies assist artists in voicing their expression, but artists don’t collaborate with paint brushes, saxophones, or pointe shoes—they wield them. A collaborator, by contrast, contributes value to the creative process through intelligence, insight, or inspiration.
AI is unlike any of our previous art-making technologies. Working with AI, artists can harness chaos and complexity to find unexpected signals and beauty in the noise. We can parse, recode, and connect to values and patterns that exceed our grasp. AI can provide extraordinary precision tools for artists who are, on the whole, perhaps better suited to tangential and divergent thinking.
But it can’t do it all. While AI can compute complex systemic analysis, humans provide the bolt out of the blue: the patterns within patterns, the novel and intuitive leap into something wholly new.
When these abilities combine, we achieve an aesthetic dialogue similar to that of improvisational jazz. During a jam session, musicians feed off swerving cues from each other: key changes, flourishes, tempo, rhythmic shifts. They eschew the written code of music and wander off the path together to create an expectation-free sound where the whole is greater than the sum of its parts. Even though it appears effortless and instantaneous to outsiders, this high-wire act of art-making only exists because of the trained insights, capabilities, and intelligence of the partners in that moment.
This is the creative feedback loop, the heart of artistic collaboration.
Artists collaborating with AIs can produce much the same result. In a similar improv session, the artist gives an input to the technology, the AI processes it and flings something back, and the artist responds to whatever gets spat out. That reaction provides another input for the system, and the human-machine feedback loop begins to circle. By distilling the essence of an artist’s expression, translating it to a technology, and then letting the technology process it back in its own language, we can find new inspirations and methods of expression.
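To make the loop concrete, here is a minimal sketch in Python. It is purely illustrative—not the code behind any project mentioned here—and `ai_respond` is a hypothetical stand-in for a real generative model: it simply riffs on the artist’s phrase by reshuffling it.

```python
import random

def ai_respond(phrase: str) -> str:
    """Stand-in for a generative model: riff on the input by reshuffling it."""
    words = phrase.split()
    random.shuffle(words)
    return " ".join(words)

def improv_session(seed: str, rounds: int = 3) -> list:
    """Alternate machine and human turns; each output becomes the next input."""
    history = [seed]
    current = seed
    for _ in range(rounds):
        current = ai_respond(current)   # the AI flings something back
        history.append(current)
        current = current + " (echo)"   # the artist reacts (here: trivially)
        history.append(current)
    return history

print(improv_session("blue note rising"))
```

The point of the sketch is the shape of the loop, not the quality of the output: each turn feeds the next, so neither party fully controls where the session ends up.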
I believe this shift in creative dynamic is a new artistic language. We as artists can now truly collaborate with a tool to harness new abilities, tap into greater complexities, explore possibilities, and thus create a new kind of art. The output of this expression differs categorically from all art humans have previously made, and this intelligent contribution inspires deeper investigations of the meanings of authorship, creativity, and art.
I propose we call this new artistic language “augmented art.”
There are many practices percolating in this new era of augmented art. For example, interdisciplinary artist Sougwen Chung creates collaborative art with her robot, Drawing Operations Unit: Generation 2 (DOUG). Through drawing experiments, she has created an AI that learns her style of drawing and collaborates with her, translating her gestures in its own way and in turn influencing her drawing behaviors. DOUG has its own innate behavior and works as a collaborative artist, turning her practice of art-making into a real-time duet.
Last year, I explored the artist/technology feedback loop with See Sound, an art-making tool that translates the human voice into digital sculptures, inspiring us to modulate our voices in real time to create the shapes we see in our mind’s eye. Materials, orientation, shape, and volume are defined by subtle variables of voice: timbre, pitch, volume, dissonance, and attack. The result is a “vocal fingerprint” that is also a multi-track loop of audio, allowing anyone to create beautiful sculptures and musical compositions simply with their voice.
This year, I’ll be going a step further by premiering a vision of the future of live musical performance at SXSW. We are using deep-learning algorithms to train an AI to beatbox live with a human: the vocal phenomenon, champion beatboxer, and artist-in-residence at Harvard, Reeps One.
We have created an AI that will duet and battle with Reeps One, parsing his voice, intonation, and rhythms to create new rhythmic accompaniments and melodies voiced through a remix of Reeps One’s vocal samples. In other words (or sounds), human Reeps One will perform with machine Reeps One—except the machine is not constrained by vocal cords and breathing. The added complexities of recognizing an extraordinarily expressive instrument—the human voice—and making art in real time create a true meeting of artist and machine. This premiere will be accompanied by a mixed-reality, sound-reactive floating cathedral that operates as an impossible piece of stage design: a prototype of what concert experiences and productions could look like in the future.
These projects ultimately raise the fundamental question of what it means to be creative. We’re feeding software what we consider to be beautiful, allowing it to identify the common attributes of that material and create permutations within those boundaries. In the case of our SXSW project and other human-machine collaborations, this creates improvisational fodder for the human—unexpected twists that escape creative prediction, with the sum being something wholly new.
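The idea of “permutations within boundaries” can be sketched in a few lines. This is an illustrative toy, not the actual system: it treats each curated example as a hypothetical feature vector, learns the range each feature occupies, and samples new variations inside those learned bounds.

```python
import random

def learn_bounds(examples):
    """Per-feature (min, max) ranges over the curated examples."""
    return [(min(col), max(col)) for col in zip(*examples)]

def permute(bounds):
    """Sample a new variation that stays inside the learned boundaries."""
    return [random.uniform(low, high) for low, high in bounds]

# Hypothetical feature vectors for material we consider "beautiful"
curated = [[0.2, 0.8], [0.4, 0.6], [0.3, 0.9]]
bounds = learn_bounds(curated)
print(permute(bounds))
```

Everything the sketch produces lies strictly within what it was shown, which is precisely the limitation the following paragraphs discuss.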
But is the machine’s participation true creativity? An expression of “self”? Or is it simply the result of intersecting algorithms with curated data?
The truth is that we struggle to define creativity. We don’t know where flashes of inspiration and ideas come from. We hazily squint into ourselves and assign poetic language and best guesses as to where the muse and the artist intersect, but it’s difficult to distill it into a clearly structured set of rules. Ironically, that is the very first step required to create an algorithm for beauty, imagination, and serendipity.
Until we arrive at that definition, technology will not be able to generate, reflect, or imagine its own art without human input. Even the most advanced deep-learning techniques achieve only amalgamated mimicry. (Though one could argue this is analogous to human artists mining the sum of their experiences to influence their art.) However, this is still a far cry from the autonomous musings of a 3-year-old child with finger paints.
With this understanding, we peer into the future, and remember that the things that technology can do are only bounded by the imagination of the person using it. As artists collaborating with technology, we are sifting through possibility. We are on a mission of discovery to find a new way to express ourselves with our increasingly sophisticated partners: to paint, write, sculpt, and make beautiful music.