Three Longhorns Want to Change the Way Music Makes It Onto Your Playlists

You know the feeling. You’re cruising down the highway listening to a playlist curated specifically for you by a major music service you pay for each month. You’re jamming out to, say, “Hot Freaks,” from lo-fi Dayton, Ohio-based indie rock veterans Guided By Voices. When the song ends, you’re suddenly hearing something from the comparatively rookie indie rock band Parquet Courts. Wait a second, you think. I hate Parquet Courts. This was made for me?

A few years ago, Elad Liebman, PhD ’19, experienced this very problem. “Online streaming recommendations kept feeling like they were shuffling wild guesses of songs at random,” he says. “It was like listening to someone else’s iPod on shuffle.” The Israel-born PhD student, a musician himself, had heard enough.

At the time, Liebman was taking a class with UT computer science professor Peter Stone. Together they created DJ-MC—the MC stands for Monte Carlo, the type of search the algorithm uses, in machine-learning lingo, and doubles as a play on “master of ceremonies”—to address two problems with music streaming platforms: the song selection was often off, and none of the platforms took sequence into account. His idea was an algorithm that runs a series of feedback loops, using the listener’s direct reactions to song choices to build an intelligent ordering of songs, like a personalized DJ, counteracting the seemingly arbitrary sequencing streaming users are accustomed to. What Liebman had realized early on was that a DJ creates a mood by sequencing songs in a pleasurable order. Streaming services presented songs without context; Liebman wanted a recommendation system that supplied that context to the listener, rather than asking the user to experience each song in a vacuum.
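A minimal sketch of that feedback loop might look like the Python below, assuming each song is described by a feature vector (tempo, loudness, and so on). This is an illustration of the idea, not the published DJ-MC code; every function and variable name here is invented.

```python
def update_taste(taste, features, liked, rate=0.2):
    """Nudge the running taste profile toward (or away from) a song's features."""
    sign = 1.0 if liked else -1.0
    return [t + sign * rate * (f - t) for t, f in zip(taste, features)]

def pick_next(taste, catalog, played):
    """Choose the unplayed song the current profile scores highest."""
    def score(features):
        return -sum((t - f) ** 2 for t, f in zip(taste, features))
    return max((s for s in catalog if s not in played),
               key=lambda s: score(catalog[s]))

def dj_loop(catalog, get_feedback, n_songs=10):
    """catalog: song -> feature vector; get_feedback(song) -> True/False."""
    taste = [0.5] * len(next(iter(catalog.values())))  # neutral starting profile
    played = []
    for _ in range(n_songs):
        song = pick_next(taste, catalog, played)
        played.append(song)
        liked = get_feedback(song)                     # thumbs up or thumbs down
        taste = update_taste(taste, catalog[song], liked)
    return played
```

The real system learned preferences over both individual songs and song-to-song transitions; this sketch captures only the song half, with the sequencing piece sketched next.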

Liebman worked on the algorithm with Stone, his thesis advisor. The end result was a program that plans songs several selections ahead: while one song plays, the algorithm simulates tens of thousands of possible sequences and picks the next, most promising selection. McCombs professor Maytal Saar-Tsechansky helped with some of the machine-learning aspects and with a lab study involving human participants.
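That planning step could look something like the following Monte Carlo sketch: sample many random continuations of the playlist, score each one against the learned song and transition preferences, and commit only to the first song of the best rollout. The rollout depth, the sample count, and all names are assumptions for illustration, not details from the paper.

```python
import random

def rollout_value(sequence, song_score, transition_score):
    """Value of one candidate continuation: predicted enjoyment of each song,
    plus predicted enjoyment of each song-to-song transition."""
    value = sum(song_score(s) for s in sequence)
    value += sum(transition_score(a, b) for a, b in zip(sequence, sequence[1:]))
    return value

def plan_next_song(current, candidates, song_score, transition_score,
                   n_rollouts=10_000, depth=5):
    """While the current song plays, sample thousands of possible sequences
    and return the first song of the best-scoring one."""
    best_next, best_value = None, float("-inf")
    for _ in range(n_rollouts):
        tail = random.sample(candidates, min(depth, len(candidates)))
        value = rollout_value([current] + tail, song_score, transition_score)
        if value > best_value:
            best_next, best_value = tail[0], value
    return best_next
```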

The study looked like this: more than 30 volunteers were each given a computer running the app and a pair of headphones. To reduce bias, the program was made to look as plain as possible—a simple white panel with black type, and song titles shown without artist names. To squeeze in as many song choices as possible, only the first minute or so of each song was played. For this study, the researchers isolated the user’s direct feedback—a simple thumbs-up or thumbs-down to indicate enjoyment or dissatisfaction. Only a thumbs-down would immediately skip to the next song.
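In code, that feedback rule is almost trivially simple; a hypothetical handler (all names invented here, not taken from the study software) might read:

```python
def handle_feedback(player, feedback_log, song, thumbs_up):
    """Log the binary signal for the learner; only a thumbs-down skips."""
    feedback_log.append((song, 1.0 if thumbs_up else -1.0))
    if not thumbs_up:
        player.skip_to_next()  # a thumbs-up lets the one-minute clip finish
```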

“Unlike most recommender systems, we had no history on the user,” Saar-Tsechansky says. “We had to figure out on the fly from direct feedback: what they prefer, and how to sequence them so they would like the transitions, tailoring it to each user. If you want to fit music to a particular context, you don’t have all day.”

Still, humans will be humans, and the researchers had to rein in the behavior of some participants.

“You’d be amazed how childish people can behave. I had to shut down the internet because people would open browsers while listening,” Liebman laughs. “Maybe it’s authentic. In our case, we needed undivided attention.”

Overall, though, the lab study was a successful proof of concept, even if the no-frills application was decidedly not market-ready. “There was no proof that this would work,” Liebman says. “It was good news that it did!” The researchers found that people still cared more about hearing songs they like than about the sequence in which they appeared, but that a pleasing sequence had a measurable effect on the user experience.

“When we sequence in a more thoughtful way, there’s an additional benefit, above what you can do just by playing songs that they liked,” Saar-Tsechansky says.

Notably, Saar-Tsechansky says that music, like dating, has a lot to do with tacit preferences: things we think we like but don’t necessarily. She compares musical taste to a woman who believes her type is an athletic partner but ends up with someone else.

“If you ask people what they’re looking for, eventually you can see from data that they like something completely different,” she says. Liebman agrees, noting that music is a signifier of who we are, and that sometimes people think they love the Pixies and hate Taylor Swift, but, “in the dark night of the soul, they love listening to Taylor Swift and don’t love indie rock all that much.”

He offers the example of blues fans who may also enjoy traditional Iranian music because of a connection in the instrumentation.

“There’s a clarinet-like instrument called a sorna, which sounds, to me, bluesy,” Liebman says. “The fact that that’s how I feel means there’s a latent connection there that’s true for someone. That’s why we want this to be personalized—not an average of a million users smushed together.”

Liebman, who now works as a senior data scientist at the AI systems company SparkCognition, says he has no plans to bring the algorithm to market just now, for one simple reason: he doesn’t have the resources of a Pandora or a Spotify. That doesn’t mean that Liebman and Saar-Tsechansky have put their music ideas aside, however.

“We are working on our next paper; it’s focused on music …” Saar-Tsechansky trails off, smiling. “But the premise … it’s a little premature. I’ll just say that we are collaborating.”

Illustration by Ian Keltie.