Why A.I. knows who you find attractive better than you do
Researchers have designed a brain-computer interface that can generate uniquely attractive images based on your brainwaves.
When it comes to earning social currency, being attractive is as good as gold.
A team of scientists from Finland has now designed a machine learning algorithm that plumbs the depths of these subjective judgments better than we can: working from our unique brainwaves alone, it predicts whom we find attractive, and even generates a new portrait that captures those qualities, with 83 percent accuracy.
Far beyond just the laws of attraction, this novel brain-computer interface (BCI) could throw open the door to a new era of BCIs that bring our unvoiced desires to life.
The research was published this February in the journal IEEE Transactions on Affective Computing.
Why do I find someone attractive?
The hallmarks of attraction may change over time (from twisted mustaches and monocles to a clean shave and aviators), but regardless, top-tier social status can not only give your love life a boost but can even help you score the big promotion or easily slide into the good graces of the powerful elite.
But while the societal effects of being deemed attractive are numerous, the mechanisms behind these personal preferences are still often shrouded in shadow. Instead of making a conscious, logical judgment of someone’s attractiveness, we experience the feeling first and are left to riddle out its cause after. This is a problem that A.I. just might be able to help us with.
“My concern would be that this kind of research may ultimately reinforce harmful or privileging forms of attractiveness”
Why do humans want this? Mathematicians, philosophers, and painters alike have been attempting to quantify beauty for centuries, applying the same Golden Ratio that captures the spirals of a snail's shell to the symmetry of someone’s facial features.
What is the Golden Ratio?
The Golden Ratio, also called Phi, is a mathematical constant roughly equal to 1.618. Two quantities are in the golden ratio when the ratio of the larger to the smaller equals the ratio of their sum to the larger. That’s a lot of words, but to sum it up: it’s a proportion our brains find appealing.
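That defining proportion can be checked in a few lines of code (a toy illustration for the curious, not part of the study):

```python
import math

# Solving the defining property phi = 1 + 1/phi gives phi = (1 + sqrt(5)) / 2.
phi = (1 + math.sqrt(5)) / 2
print(round(phi, 3))  # 1.618

# Check the definition with the smaller quantity b = 1 and the larger a = phi:
# ratio of larger to smaller (a / b) should equal ratio of sum to larger ((a + b) / a).
a, b = phi, 1.0
print(math.isclose(a / b, (a + b) / a))  # True
```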
The Golden Ratio is found across the natural world, leading academics of yore to postulate that these definitions of beauty and perfection must be divinely given.
And it’s still pulled out today in fashion magazines to estimate someone’s proximity to historically divine beauty, but the overall symmetry of features (e.g. equally spaced eyes) has nudged out the Golden Ratio in recent decades as the prevailing theory behind what makes someone attractive.
But Tuukka Ruotsalo, lead researcher on the paper and associate professor of computer and information sciences at the University of Helsinki, says that it’s clear in practice that these theories don’t always hold water.
“A lot of people have different ideas about what is attractive or unattractive, especially gendered preferences,” Ruotsalo tells Inverse. “If you have a model that looks only at a picture, they can never get a true understanding of [what] is attractive or not attractive.
“Our work basically looks at how different people respond to the images and then feeds that back into the A.I.”
To truly understand why the same face strikes one person as attractive and another as unattractive, the researchers say you need to examine the neurons setting off this biological reaction, but that could prove easier said than done.
Ruotsalo gives this as an example:
“I want to create a picture that is pretty for me — then [it’s] very difficult to say which direction I [should] go in because there are so many ways that faces are different.”
Ruotsalo and colleagues report in their paper that this new research represents the first time that brain responses have been used as interactive feedback to a generative neural network.
“What we are doing is creating a control over a very unknown, complex space,” Ruotsalo says.
What is a generative neural network?
Just as our brains run on the firing of neurons, the “brain” of machine learning models runs on something called a neural network, a network of connections in an algorithm that helps an A.I. process new input, find patterns, and learn from them.
This BCI and machine learning model can be broken up into three main components:
- Brainwave data recorded by EEG caps worn by a group of 30 participants, capturing neural activity as the participants are exposed to new stimuli (e.g. faces) in the experiment
- A type of machine learning network called a GAN (generative adversarial network) that learns patterns from a set of training data (a celebrity image library, in this case) and then extrapolates those to generate new images
- A generative brain-computer interface (GBCI), formed by combining the neural data with the trained GAN: essentially, an interface that uses brain signals to generate new images
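Stitched together, the three components above form a feedback loop. The sketch below is purely illustrative: the function names and the simulated EEG signal are hypothetical stand-ins, not the researchers’ code, which worked with real EEG recordings and a real GAN.

```python
def gan_generate(latent_vector):
    """Stand-in for the GAN: maps a latent vector to a face image.
    Here we return a placeholder instead of real pixels."""
    return ("face", tuple(latent_vector))

def simulate_eeg(face):
    """Stand-in for the EEG cap: a real BCI records brainwaves while the
    participant views the face. We fake a single signal value."""
    return sum(face[1])

def classify_attraction(eeg_signal):
    """Stand-in for the EEG decoder: True when the brain response looks
    like an 'attractive' reaction (faked with a simple threshold)."""
    return eeg_signal > 0.0

def gbci_round(latent_vectors):
    """One round of the generative BCI loop: generate candidate faces,
    record the brain's response to each, keep the latents that scored yes."""
    liked = []
    for z in latent_vectors:
        face = gan_generate(z)        # 1. GAN renders a candidate face
        eeg = simulate_eeg(face)      # 2. participant views it; EEG is recorded
        if classify_attraction(eeg):  # 3. decode a yes/no from the brainwaves
            liked.append(z)
    return liked

print(gbci_round([[0.5, 0.2], [-0.4, 0.1], [0.3, 0.3]]))
# → [[0.5, 0.2], [0.3, 0.3]]
```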
Digging into the ethics of AI rating attractiveness
What are the ethical implications? The research field of BCIs — the very same that Elon Musk is attempting to break into with Neuralink — is still in its adolescence, at least compared to apocalyptic visions of sentient robots controlled via our thoughts. Research in recent years has demonstrated how BCI can be used to help those with mobility limitations use computers or even translate thoughts into speech.
But that doesn’t mean that this incremental research is without its ethical concerns even today, particularly when it comes to potentially promoting biases.
Eran Klein is a neurologist and affiliate assistant professor at the University of Washington who focuses on neuroethics. He tells Inverse that a lack of diversity perpetuated through tech like this could undermine efforts for inclusivity in media and culture.
“My concern would be that this kind of research may ultimately reinforce harmful or privileging forms of attractiveness,” Klein says. “If diversity in the training data set or participants is not achieved (as the authors think it should be in future research) ... the result will be further normalizing harmful conceptions of beauty and worth.”
How does it actually work? In a nutshell, here’s how all these moving parts came together to create uniquely beautiful faces: 30 participants were shown 240 celebrity faces while wearing an EEG cap and asked to mentally note which ones they found attractive. This mental note created a spark of neural activity each time they saw an attractive face, which the interface recorded.
Ruotsalo says that EEG data is far too fuzzy to extract any discrete variables from (e.g. whether or not someone thinks a certain lip shape is attractive) and instead works like a Tinder-style swipe: a simple “yes” or “no” on attractiveness.
These neural data and the trained GAN are then combined into a GBCI, which smushes together the faces each participant deemed attractive in order to create a novel, deepfake-esque face that should be unique to their taste.
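One way to picture that “smushing” step is as averaging in the GAN’s latent space: take the latent vectors of the faces a participant’s brain flagged as attractive, average them, and let the GAN render the result. This is a hedged sketch with made-up numbers, not the paper’s implementation:

```python
def average_latents(liked_latents):
    """Average the latent vectors of all faces a participant responded to
    as attractive; feeding this mean vector back through the GAN would
    yield a new, personalized face."""
    n = len(liked_latents)
    dims = len(liked_latents[0])
    return [sum(z[i] for z in liked_latents) / n for i in range(dims)]

# Hypothetical latents of three faces one participant's EEG flagged as attractive:
liked = [[0.2, 0.8], [0.4, 0.6], [0.6, 0.4]]
print(average_latents(liked))  # ≈ [0.4, 0.6], which the GAN renders as a new face
```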
When the research team presented these new images to participants in a blind test, the images were deemed attractive over 80 percent of the time.
When will it affect the future? As for when a GBCI might be coming to a dating app near you, Ruotsalo and first author Michiel Spapé say it can be hard to predict at this early stage exactly how — or even in what form — this technology will eventually be ready for the masses. Before that can happen, the team writes in their paper that there are still many other trials to be done with more diverse participants and training data populations.
But beyond using BCI to simply judge attractiveness, Ruotsalo says that the novelty and potential impact of this tech instead lies in its ability to draw out elusive and subjective judgments or ideas.
Can this help build a better world? From the Inverse perspective, we don’t imagine that a technology like this will have any serious repercussions on how we judge attractiveness, but the potential for change may instead lie in how technology can help us better visualize subjective feelings that resist being quantified.
In the distant future, such technology could either work to further monetize our thoughts and opinions (think Adbuddy in Maniac) or potentially boost the ability of artists or poets to express feelings that words alone fail to capture.
It’s too soon to say which way these dominoes will fall, but their impact is sure to make ripples.