
The only way an algorithm can tell if you really like a song is by scanning your brain

People test headphones during the CanJam headphone and personal audio expo in Singapore, February 21, 2016. (Edgar Su/Reuters)
Your clicks may lie, but your brain tells the truth.

A team of Greek scientists has discovered how to tell how much you like a particular song by literally reading your mind. And they want to fit it into Spotify, Apple Music, and other similar online music streaming services. No more stars. No more likes and dislikes. No more hearts. It will all happen automatically.

Current recommendation systems rely heavily on active user feedback. In most cases, you have to do something to let the system know what you like—hit a “love” button in the bottom-left corner of your iPhone, for example. The alternative is to meticulously track your actions while you use the service: whether you skip a song or listen through the whole piece, how soon you press fast-forward, and so on. All that data is then processed by clever artificial-intelligence algorithms to figure out your next favorite song.
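To make the passive-tracking idea concrete, here is a minimal sketch—the event fields, thresholds, and scoring rule are invented for illustration, not taken from any actual streaming service—of how such listening signals might be folded into a single preference score:

```python
from dataclasses import dataclass

@dataclass
class ListenEvent:
    """One playback of one track, with the passive signals a service might log."""
    track_id: str
    seconds_played: float
    track_length: float
    skipped: bool
    explicit_love: bool = False  # the user actually pressed the "love" button

def implicit_score(event: ListenEvent) -> float:
    """Combine passive signals into a rough preference score in [0, 1]."""
    if event.explicit_love:
        return 1.0
    completion = min(event.seconds_played / event.track_length, 1.0)
    if event.skipped and completion < 0.3:
        return 0.0  # an early skip is a strong negative signal
    return completion

# A full listen scores 1.0; a quick skip scores 0.0:
events = [
    ListenEvent("song_a", 210.0, 210.0, skipped=False),
    ListenEvent("song_b", 15.0, 180.0, skipped=True),
]
print({e.track_id: implicit_score(e) for e in events})
```

Real recommenders feed scores like these into collaborative-filtering models rather than using them directly, but the point stands: every number here is inferred from behavior, never asked for.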

Sadly, I’m a painfully unreliable source of feedback. When I’m listening to music, my phone usually rests in my pocket and I really can’t be bothered to fish it out just to tell Apple if I love or hate a piece. Most users are probably like me—and so our streaming services could certainly use some mind reading.

That, though, is no easy feat. Music can trigger an immensely wide spectrum of emotions, from sadness and wistfulness to joy and elation. Human brains generate all these states using the most advanced neural network in the world—synapses are firing, some regions become active, others become inactive, they’re all communicating with each other and it all results in a complex mess of electrical activity.

That electrical activity can be measured by electroencephalogram (EEG) scanners—basically a set of electrodes attached to the scalp. But assessing the general state of mind wouldn’t be much help in figuring out whether you like a song. Sad songs are supposed to make us sad and happy songs are supposed to make us happy, but that has little to do with whether we like a particular piece or not. So a team of researchers led by Dimitrios Adamos at Aristotle University of Thessaloniki, Greece, tried instead to isolate a single, discernible electrical pattern that corresponded to a specific human aesthetic experience: our appreciation of music. And they think they’ve found it.

Adamos drew inspiration from research published three years ago by Robert Zatorre and Valorie Salimpoor of McGill University in Montreal, Canada. Zatorre and Salimpoor tried to find out why exactly music gives us pleasure. According to them, it works like this: Sound is processed in auditory cortical regions of the brain. Those, in turn, communicate with frontal cortices and working memory to knit together the separate sounds into patterns. On top of that, we have expectancies derived from our individual listening history, which we use to make minute predictions about what we’re going to hear next. Those predictions, when fulfilled, are fed into subcortical reward systems responsible for releasing dopamine. Fulfilled prediction leads to dopamine release and thus pleasure.

The key takeaway, Adamos realized, was that because various areas of the brain are involved in music appreciation, our subjective experience of music depends on the intensity of the interconnections between those brain regions—and this intensity could be reliably measured with a simple EEG scanner.
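The team’s actual analysis is more sophisticated, but as a toy illustration of what “intensity of interconnections” can mean, one simple proxy is the average pairwise correlation between EEG channel signals—channels that rise and fall together suggest regions that are strongly coupled. Everything below (function names, the synthetic channel data) is invented for illustration:

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length signals."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def mean_connectivity(channels):
    """Average absolute correlation over all channel pairs:
    one crude 'interconnection intensity' number per recording."""
    n = len(channels)
    vals = [abs(pearson(channels[i], channels[j]))
            for i in range(n) for j in range(i + 1, n)]
    return sum(vals) / len(vals)

# Three toy channels that all track each other (one of them inverted)
# yield maximal connectivity:
print(round(mean_connectivity([[1, 2, 3, 4], [2, 4, 6, 8], [4, 3, 2, 1]]), 6))
```

A classifier could then compare this number (or a richer set of connectivity features) between songs—which is, in spirit, what the experiment described next tests.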

So, the team designed an experiment. A group of students was asked to choose their favorite songs. They were also asked to choose a song they neither liked nor disliked (oddly enough, they all, independently, chose Enya’s “Watermark”). All they had to do next was sit in a comfortable studio and listen to both of their chosen pieces with wireless EEG headsets on.

It turned out Adamos was spot on. By analyzing the brain’s electrical activity, the device could easily distinguish a favorite song from a neutral one.

In the resulting paper, published in Information Sciences, the team writes that the technology is affordable, user-friendly, and could be applied to mass-market services. How to get it to market is still an open question. Most likely, some startup will have to run with the Aristotle University team’s findings and develop an app built on them. Eventually, it could be acquired by one of the big players, as was the case with Apple’s Siri. But many things determine whether an invention makes it or breaks. One of them is whether it looks cool.

The headset used in the experiment had eight electrodes and, to put it mildly, made the listener look a bit outlandish—it certainly wouldn’t fly in a world of Apple AirPods and Beats by Dre. But it turns out it is possible to fit an EEG scanner into a pair of headphones. In fact, such headphones are already for sale: they are called Aware, and you can pre-order them for $199.
