In my previous post, I detailed how we at the Ekho Collective began our project of building an immersive opera experience for The Finnish National Opera and Ballet. In this post, I discuss what inspires the story we are concocting for our future visitors.

An excerpt from our storyboard

Hyperbolic Futures

In media, AI is often seen through two hyperboles. In imagined utopias, AI will cure cancer and save us from climate disaster. In dystopias, AI is depicted as a superpower that usurps control, reduces humanity to underlings in a new-found society, and destroys all that is humane. Our collective stories exemplify these stereotypes: in Spike Jonze’s movie Her, the protagonist finds an idealistic love with an AI agent (although this love is short-lived); and in the movie I, Robot (based on science-fiction writer Isaac Asimov’s famous stories), dumb automatons fail to appreciate the value of human life.

Finding the dialogue between these two hyperbolic futures is at the crux of the story we hope visitors to our experience will explore. The AI at the heart of our work will follow in the footsteps of past stars of the opera such as Salome and Carmen. Laila, an AI that builds upon the data left behind by its visitors, will reflect the nature of humanity by building itself on traces of it. The experience will allow visitors to interact with AI in a tangible and understandable way.

How We See AI — and How AI Sees Us

Part of exploring how we see AI is showing how AI sees us. AI has a limited view of the world and bases its decisions on a restricted set of variables, variables which don’t always capture the complex nature of human societies. Data points are also always limited, and can be influenced by their environment in unexpected ways. When built on biased data, AI can reflect the worst of our qualities. For example, an algorithm used by the US justice system to estimate recidivism risk was shown to have racial bias.
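The mechanism behind this kind of bias is easy to demonstrate. A model that simply counts frequencies in its training labels will faithfully reproduce any skew in those labels. The following is a minimal, invented sketch in Python: the group names, rates, and threshold are hypothetical, and this is not the actual risk-assessment algorithm mentioned above.

```python
from collections import defaultdict

# Hypothetical training labels: suppose both groups reoffend at the same
# true rate, but group "A" was policed more heavily, so more of its cases
# were recorded as high risk.
training_data = [
    ("A", 1), ("A", 1), ("A", 1), ("A", 0),   # 75% labeled high risk
    ("B", 1), ("B", 0), ("B", 0), ("B", 0),   # 25% labeled high risk
]

def fit(data):
    """Learn P(high risk | group) by simple frequency counting."""
    counts = defaultdict(lambda: [0, 0])      # group -> [high, total]
    for group, label in data:
        counts[group][0] += label
        counts[group][1] += 1
    return {g: high / total for g, (high, total) in counts.items()}

def predict(model, group):
    """Flag a case as high risk when the learned rate exceeds 50%."""
    return model[group] > 0.5

model = fit(training_data)
# The model has simply absorbed the skew in its training labels:
print(model["A"])           # 0.75
print(model["B"])           # 0.25
print(predict(model, "A"))  # True (flagged high risk)
print(predict(model, "B"))  # False
```

Nothing in the code is malicious: the skewed labels alone are enough to make the model treat identical groups differently.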

A good illustration of bias is the recently published ImageNet Roulette, an art project by researcher Kate Crawford and artist Trevor Paglen. Its model was trained on ImageNet, one of the most popular open-source image databases available online. The website allowed users to upload their own images and see how the algorithm classified them. Some outcomes were unacceptable: journalist Julia Carrie Wong was labeled with a racist slur, and the image of a dark-skinned man was labeled as an offender.

How AI sees us: results can vary according to environment and other variables, even with the same objects (ImageNet Roulette)

The data this algorithm was built on was labeled through Amazon’s online crowdsourcing platform Mechanical Turk. Mechanical Turk recruits users who perform repetitive tasks: labeling images, answering surveys, and so on. It is often used by researchers and developers seeking the mass of data needed to build machine learning applications. The labels purchasable through Mechanical Turk are generated by real humans, so algorithms built on data obtained through it are a true reflection of us, the best and the worst.

A Reflection of Us

AI, like all human inventions, has the potential for good and bad. More than on the technology itself, the future we build will depend on the choices we make. With Laila we want to highlight this thought, and provide a venue for visitors to actually meet and interact with an AI in physical form. AI is often spoken of in the abstract concepts of machine learning, and so remains unattainable for the layperson.

By providing an interface for interaction with AI, we hope visitors can reflect on how the actions they take in this new sphere of existence influence the decisions of the AI Laila. As philosopher Martin Heidegger wrote in his essay The Question Concerning Technology:

Technology is therefore no mere means. Technology is a way of revealing.