Bodies Built by Machines

The Work of Art in the Age of Algorithmic Reproduction

As artificial intelligence seeps into our lives, artists are exploring AI both as a tool and as a subject, probing its impact on our humanness.

Anna Ridler’s Fall of the House of Usher unspools, rooms and bodies spreading half-seen across the frames of this 12-minute film like gossamer. A woman appears to walk down a hallway, then melts into a moonlit sky. A face appears in the dark, contorts into shapes. The animation is based on a 1929 film version of Edgar Allan Poe’s story, but its inky and strange visuals are the result of something altogether more modern: machine learning.

Each moment of Ridler’s film has been generated by artificial intelligence. The artist took stills from the first four minutes of the 1929 movie, then drew them with ink on paper. These drawings were used to train a generative adversarial network (GAN), teaching it what sort of picture should follow from another. A GAN pairs two networks that work in competition: a generator that produces images, and a discriminator that evaluates the generator’s work like an algorithmic critic. Once trained, the network used what it had learned to create its own procession of stills.
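For readers curious about the mechanics, the generator–discriminator competition can be sketched in a few dozen lines. The toy example below is a deliberate simplification, not Ridler’s actual pipeline: instead of images, a one-parameter “generator” learns to imitate a simple number distribution while a logistic-regression “discriminator” tries to tell real samples from fakes.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data the generator must learn to imitate: a 1-D Gaussian.
def real_samples(n):
    return rng.normal(loc=4.0, scale=0.5, size=(n, 1))

# Generator: maps random noise to a candidate sample (here an affine map).
g_w, g_b = rng.normal(), rng.normal()
# Discriminator: scores how "real" a sample looks (logistic regression).
d_w, d_b = rng.normal(), rng.normal()

def generate(z):
    return g_w * z + g_b

def discriminate(x):
    logit = np.clip(d_w * x + d_b, -50, 50)  # clip to avoid exp overflow
    return 1.0 / (1.0 + np.exp(-logit))      # P(sample is real)

lr = 0.05
for step in range(2000):
    z = rng.normal(size=(32, 1))
    fake = generate(z)
    real = real_samples(32)

    # Discriminator update: push real scores toward 1, fake scores toward 0.
    d_real, d_fake = discriminate(real), discriminate(fake)
    grad_real = d_real - 1.0            # d(BCE loss)/d(logit) on real samples
    grad_fake = d_fake                  # d(BCE loss)/d(logit) on fake samples
    d_w -= lr * (grad_real * real + grad_fake * fake).mean()
    d_b -= lr * (grad_real + grad_fake).mean()

    # Generator update: push the discriminator's score on fakes toward 1.
    d_fake = discriminate(generate(z))
    g_grad = (d_fake - 1.0) * d_w       # chain rule through the discriminator
    g_w -= lr * (g_grad * z).mean()
    g_b -= lr * g_grad.mean()

# After training, samples from the generator tend to resemble the real data.
samples = generate(rng.normal(size=(1000, 1)))
```

Each training step is a round of the same contest: the critic sharpens its eye, then the forger adjusts to fool it. Ridler’s insight was to constrain and exhaust this process deliberately, letting its breakdown become the artwork’s subject.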

The result is an AI-generated animation based on drawings that are based on the opening minutes of a 1929 film, which is based on an 1839 short story about a decaying lineage. It is a project that uses machine learning techniques not to showcase the technology, but as a way to engage with ideas of memory, the role of the creator, and the prospect of degeneration. It is primarily an artistic work, leveraging artificial intelligence as a medium in a way another artist may use acrylics or videotape.

“By restricting the training set to the first four minutes of the film, I was able to control to a certain extent the levels of ‘correctness,’” Ridler explains. “As the animation progresses, it has less and less of a frame of reference to draw on, leading to uncanny moments that I cannot predict where the information starts to break down, particularly at the end of the piece. I deliberately take the ‘decay’ offered by making an image in this way and turn it into a central part of the piece, echoing the destruction that is so central to the narrative.”

Ridler is part of a new wave of artists who are adept at coding and plugged into the nascent field of machine learning. If neural networks have largely been the domain of the computer science community, projects like Fall of the House of Usher are efforts to reframe these cryptic technologies as both artistic apparatus and important subject matter. After all, talk of adversarial networks may sound obscure, but these are techniques that lie beneath the interfaces we swipe and stroke on a daily basis, from video games to photo recognition on Facebook.

“Given the implications on our society that machine learning already has, and will increasingly have, it is crucial that people investigate and question this technology from all possible angles,” says Mario Klingemann, currently artist in residence at Google Arts and Culture.

“Artists tend to ask different questions than scientists, businesspeople, or the general public,” Klingemann adds. “Artists also might be in the right position to interpret or extrapolate the possibilities and dangers that machine intelligence harbors and express their findings in a language that many people can understand.”

Klingemann’s work, much like Ridler’s, turns on the meeting of artificial intelligence and human bodies. His 2017 collaboration with Albert Barqué-Duran, titled My Artificial Muse, for example, resulted in an oil painting of a neural network–generated “muse,” itself based on a training set of classic paintings, including John Everett Millais’ Ophelia. Earlier this year, another project, Alternative Face, involved training a neural network to generate controllable faces based on the French singer Françoise Hardy. Klingemann used this to make it seem as if Hardy were speaking the words of Trump counselor Kellyanne Conway during her infamous “alternative facts” interview.

The ability to put the words of one person into the face of another is an unsettling illustration of the scope for machine learning to undermine the truth of what we see on our screens. Klingemann has continued to mine this seam, regularly posting neural network–generated faces on Twitter that bring to mind the abject self-portraits of Cindy Sherman. I asked him if he considers working with artificial intelligence in this way to be a form of collaboration. Klingemann told me that it’s closer to playing an instrument that he happens to build himself.

“Admittedly it is a very complex instrument that at times seems to have its own unexpected behaviors, but with more practice and experience, outcomes that at first seem to be unexpected or surprising become predictable and controllable,” he said.

For Klingemann and Ridler, engaging with artificial intelligence means understanding the meat and potatoes of neural networks: algorithms and code. Other artists are tackling the same questions about invisible, intelligent systems from a different angle. Lauren McCarthy’s work, for example, frequently involves substituting herself for software. In Follower, volunteers were invited to download an app that granted them a real-life follower for one day. At the end of the day, each volunteer was sent a picture of themselves, taken by the follower.

In a more recent project, LAUREN, McCarthy took on the role of an artificial intelligence assistant, akin to Amazon’s Alexa. Volunteers allowed the artist to install cameras, microphones, and smart sensors in their homes. Over the course of three days, LAUREN studied their habits, took orders, made recommendations, and controlled everything from bathroom taps to door locks. It’s a purposeful inversion of the smart-home systems sold by Amazon, Google, and Apple, something McCarthy calls a “human intelligent smart home.”

“I’d say that I am trying to remind us that we are humans in the midst of technological systems,” McCarthy told me. “My interest is in people, not technology. What it means and what it feels like to be a person right now is changing quickly as the systems around us evolve, but there are also some parts of the human experience that remain constant through it all. I think this is the question as we think about ourselves in relation to machines. Where is the boundary of what we consider ‘human’?”

One advantage of McCarthy’s approach is that by using human performance in place of artificial intelligence, she skirts a potential pitfall in terms of patronage. Because AI is still an emerging area in art circles, there are few dedicated funding streams for it. This means artists are often reliant on commercial companies to offer funding or technical expertise, and this, McCarthy suggests, could limit the subjects that tech-heavy projects are given license to confront.

“On one hand, I am happy to see corporations recognizing the potential for art to explore these topics and putting money toward it,” McCarthy says. “However, we need to be careful. Google and other companies providing these funding streams means that they have ultimate editorial control. It is unlikely that we will see work come from it that includes strong critique of AI, political provocation, or questioning of technologies developed by the companies.”

This October, Arts Council England announced a new pot of funding for arts organizations working in the relatively new field of virtual reality, so it follows that machine learning–based projects could similarly be included in future publicly funded initiatives. There is also a sense, however, that it’s ultimately more crucial to interrogate the systems of power this type of technology facilitates than the technology itself, regardless of whether an artwork uses a generative adversarial network or a human sitting behind a monitor, watching a man brush his teeth. If AI is to be part of our lives, art should be there to meet it.

“Most artists are dealing, in one way or another, with the experience of being a person right now,” McCarthy says. “Technology is a force that affects almost every aspect of this experience, whether we feel it directly or not.”