See what AI really thinks of you with this deeply humbling website

Spoiler: it's not good.
By Jack Morse
I'd say that's more of a chinstrap, but what do I know. Credit: Composite: Oleksandr Suhak / getty / imagenet roulette

You are nothing more than a collection of deeply embarrassing and problematic machine learning-determined classifiers.

That humbling truth is brought home by ImageNet Roulette, an online tool that gives anyone bold or foolish enough to upload a photo the opportunity to learn exactly how artificial intelligence sees them. The project, described as "a provocation" by its creators, aims to shed light on how artificial intelligence systems view and classify humans.

And, surprise(!), AI has some pretty racist and misogynistic ideas about people. Or, rather, the dataset ImageNet Roulette draws from, ImageNet, is filled with problematic categories that reflect the bias often inherent in the large datasets that make machine learning possible.
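ImageNet's person categories are drawn from WordNet's noun hierarchy, which is part of why loaded labels surface at all: every category sits inside a tree of increasingly specific, and sometimes value-laden, terms, and a classifier trained on those labels inherits whatever judgments the category names encode. As a rough sketch (the miniature label tree below is invented for illustration, not the actual ImageNet taxonomy):

```python
# Toy illustration of WordNet-style hypernym lookup. The labels and
# the tree are hypothetical; real ImageNet categories are mapped from
# WordNet synsets, but the structural idea is the same: a predicted
# label carries its whole ancestry of category names with it.

# Hypothetical miniature hypernym tree: child label -> parent label.
HYPERNYMS = {
    "card player": "gambler",
    "gambler": "person",
    "oddball": "eccentric",
    "eccentric": "person",
}

def label_path(label):
    """Walk a label up the toy hierarchy to its root category."""
    path = [label]
    while path[-1] in HYPERNYMS:
        path.append(HYPERNYMS[path[-1]])
    return path

print(label_path("card player"))  # ['card player', 'gambler', 'person']
```

The point of the sketch: a model never outputs "card player" in isolation; the label only exists because a human placed it somewhere under "person" in the hierarchy, biases and all.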

Calling attention to that fact is the project's entire point.

"[We] want to shed light on what happens when technical systems are trained on problematic training data," explains the ImageNet Roulette website. "AI classifications of people are rarely made visible to the people being classified. ImageNet Roulette provides a glimpse into that process – and to show the ways things can go wrong."

The project, which is part of Trevor Paglen and Kate Crawford's Training Humans exhibition at Milan's Fondazione Prada museum, identifies what it thinks are faces in photos and then labels them as it sees fit.

Often, these labels make no sense to the casual observer, as in the photo below, where former President Barack Obama and Prince Harry are labeled "card player" and "sphinx," respectively.

Hmm. Credit: Composite: Samir Hussein / getty / imagenet roulette

"[Training Humans] is the first major photography exhibition devoted to training images: the collections of photos used by scientists to train artificial intelligence (AI) systems in how to 'see' and categorize the world," explains the exhibit page.

Uploading a personal photo into ImageNet Roulette is both an exercise in humility — it categorized a photo of this reporter as "flake, oddball, geek" — and a reminder that the systems making judgments about people based solely on photographs are, frankly, not that good.

It's the latter point that should cause concern. Automated systems that replicate, and by extension exacerbate, the biases present in society have the power to codify those very problems. ImageNet Roulette is a stark reminder that the AI powering image-recognition tools isn't some digital arbiter of truth.

Remember that the next time you hear someone waxing poetic about the powers of machine learning.

Jack Morse

Professionally paranoid. Covering privacy, security, and all things cryptocurrency and blockchain from San Francisco.

