In a book-lined glass corner office, Natasha Egan is doing what curators do best: arranging art to speak to larger stories or societal mores.
Egan is the executive director of the Museum of Contemporary Photography (MoCP). The museum, on the campus of Columbia College Chicago, does more than just hang photos on walls, Egan says. What the museum truly focuses on is the image, the great medium of our day. We communicate regularly, rapidly, and with surprising clarity via memes and emoji. On social apps, we jump into endless streams of photos, the sheer number and accessibility of which would be unthinkable a few decades ago.
And some images do not just sit idly to be observed.
Outside Egan’s office, artists and staff are setting up for In Real Life, an exhibition curated by Egan that focuses on two of the most polarizing image-making technologies of the moment: surveillance cameras and facial recognition.
These technologies have a staggering potential to shape our daily lives. Their uses already vary wildly, from benignly unlocking iPhones to helping send Muslim Uighurs to Chinese concentration camps. And Chicago has more individual security cameras than any other U.S. city, roughly 35,000 electronic eyes, enough to make a peacock envious. It’s a suitable venue for the show; you’re all but guaranteed to pass a camera on your way there.
“For a long time, I’ve been interested in surveillance,” Egan says — books on the subject sit notably among shelves marching along her walls. She’s been gathering what she appropriately calls a “data file” on the subject for years.
In Real Life’s true subjects, the electronic eyes, are the most prolific image-makers of all time. But despite being powered by math and executed by unthinking machines, facial recognition is not without bias. Facial recognition algorithms are notoriously worse at recognizing people of color and women. This isn’t just an issue for unlocking a smartphone. These algorithms are used by authorities including police departments and ICE, where misidentifying someone can lead to serious, possibly life-threatening consequences.
Egan wants In Real Life to push visitors to confront the biases in our algorithms, and to understand that they themselves are part of the system that produces them.
“We’re the ones continuing the biases into a new level of intelligence.”
Giving them a face to recognize
As you climb through the exhibition, circles of hot light dance beautifully on the walls, on the stairs, on your body, a coquettish fairy reverie. Their joyful disco-ball aesthetic becomes menacing when you realize where the lights are coming from: mirrors placed on a swiveling bank of red-eyed security cameras. The hydra-headed sculpture surveys the landing on the way to the second floor of In Real Life.
Leo Selvaggio’s motion-activated sculpture is eerie, but it’s nothing compared to what’s at the top of the stairs: a massive photo of people walking down the street, all wearing printed facsimiles of Selvaggio’s own face.
His works take advantage of the biases in facial recognition. The algorithms work best on white male faces, so he’s giving them a face to recognize.
“I have this white male privilege,” Selvaggio says. “What if I could distribute that to others?”
His answer to surveillance is to gavage the machine, to force-feed it. His (literal) facemask shields others’ privacy by sacrificing his own, though he had already revealed all of his internet accounts and passwords as part of an earlier piece. His WWWW Project (Who Will Watch the Watchers) is a line of designs for DIY, hands-free, body-cam phone holders. Decked out in these, people can feed into the surveillance system on their own terms, or turn the eyes back on the watchers.
“They (the designs) try to use commonplace materials,” Selvaggio says. One mannequin wears a harness fashioned out of a work shirt; it looks almost identical to the rig police body cams are mounted on. Another has its phone held by a violet wig, inspired by the face-shielding hairstyles of Hong Kong protestors. Selvaggio encourages a wig in case someone tries to snatch your phone (also, traction alopecia is real).
“We need to be able to produce our own video as evidence for protecting ourselves from certain legal bodies,” Selvaggio says.
Selvaggio’s work is about running with surveillance, not from it, exploiting its biases and its gluttony for images. Flood the system; use its own tools.
Building a new system
Stephanie Dinkins is acutely aware that she may be more likely than a white person to be hit by a self-driving car.
It’s a dramatic and possibly fatal example of the dangers inherent in the systems Dinkins engages with. Her practice is intersectional, concerning race, gender, age, and artificial intelligence.
These questions are explored in Dinkins’ ongoing work Conversations with Bina48, in which the artist and the robot talk about race, robot rights, loneliness, and other heavy topics.
Machine learning models must be trained on the data we give them. If that human-verified data, called the ground truth, is biased — say, too many photos of white guys, not enough of other people — then that bias will be built into the computer. With AI being trusted with more and more tasks in our daily lives, it may compound inequities instead of eliminating them.
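To make that mechanism concrete, here is a minimal, hypothetical sketch in Python (ours, not the exhibition’s): a toy classifier is trained on a “ground truth” dominated by one group, then tested on both. Every name and number here is invented for illustration.

```python
# Toy illustration (hypothetical): biased training data in, biased model out.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, w):
    """Simulate n labeled examples for one demographic group.

    Each group has its own true relationship (weights `w`) between the
    8 input features and the label, standing in for the way faces from
    different groups can look different to a model.
    """
    X = rng.normal(size=(n, 8))
    y = (X @ w + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return X, y

w_a = np.array([1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0])  # group A's signal
w_b = np.array([0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0])  # group B's signal

# Skewed "ground truth": 950 examples from group A, only 50 from group B.
Xa, ya = make_group(950, w_a)
Xb, yb = make_group(50, w_b)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on fresh, equal-sized samples from each group.
Xa_t, ya_t = make_group(1000, w_a)
Xb_t, yb_t = make_group(1000, w_b)
print("group A accuracy:", accuracy_score(ya_t, model.predict(Xa_t)))
print("group B accuracy:", accuracy_score(yb_t, model.predict(Xb_t)))
# Expect group A to score high (roughly 0.9) and group B to land near
# a coin flip: the imbalance in the data becomes the bias of the model.
```

The model isn’t malicious; it simply learned the world its data described.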
The problematic old system, Dinkins says, is informing the new one.
“There are so few people who are really contributing to the new system,” Dinkins says. A relatively small group of programmers and computer scientists are crafting algorithms with the potential to touch us all.
Her aim is to get people to recognize the biases that exist in artificial intelligence and inspire them to do something about it. The machine-learning-driven world is not yet set in stone; there is time to point out and correct the biases that have held people back before they become codified.
Being afraid of artificial intelligence is not the best way to approach it, Dinkins says. Fear may alarm people, but it does not empower them. The goal is to inspire positive and proactive feelings about AI while highlighting the flaws within it.
Even the most alien thing about AI, its “black box” (put simply, we often don’t know what an AI is “thinking” as it arrives at its answer), can be empowering, in Dinkins’ view. After all, if its very creators don’t know how it works, why does it matter if you don’t either?
Egan hopes that visitors to In Real Life come away thinking deeply about the biases in facial recognition and surveillance technology made manifest by artists like Selvaggio and Dinkins. By confronting these issues now, we may spare decades of pain and problems in the future.
“This is a point of hyper inflection for opportunity,” Dinkins says.