Algorithms can compute new symphonies and improvise jazz riffs. They can even generate a rap. But can real artistry be pre-programmed? Dartmouth professors Michael Casey and Dan Rockmore, founders of the Turing Tests in the Creative Arts, hope to find out. They’ve set up a competition to determine whether humans can distinguish between human-made and machine-made art. Neither man doubts that beauty can be programmed, but neither has witnessed something truly unexpected come from an automaton — yet.
“For any of these things to really reach the level of an expert on human-generated music, it couldn’t [just] be imitating what music sounds like,” Casey, a professor of both music and computer science, tells Inverse. “A composer, meaning J.S. Bach, or Mozart, or Skrillex, is someone who is acutely aware of how we form expectations about what’s about to happen, and they play with those.”
Good composers set the “rules” of the song early on — say, laying down a pattern of beats or musical phrases — teaching the listener what to expect. Gently thwarting those expectations over the course of the song is what keeps the listener engaged. This is a very hard thing to do, much less teach an algorithm to master.
Daft Punk’s “Around The World,” Casey says, is a perfect example of music that could be considered mechanical-sounding but carries the distinctly human signature of a composer engaged with his listeners. “It has precisely five sounds in it,” he explains. “Five elements. And they are introduced, one by one, in very careful arrangements. First you get sound A. Then you get sound B and A. Then sound C, but then B switches off. And a little bit later you get C and B together but not C, B, and A. And your brain’s playing all these games, saying, when am I going to hear A, B, and C together, because I haven’t heard them together yet?”
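The layering game Casey describes can be sketched as a toy arrangement generator: introduce each sound alone, combine pairs, and withhold the full stack until the end. This is a minimal illustration of the pattern, not Daft Punk’s actual arrangement; the layer names and section order are invented for the example.

```python
# Toy sketch of the layered-introduction pattern: sounds appear one
# at a time, then in pairs, while the full combination is withheld
# to build expectation. Layer names are invented for illustration.

def arrange(layers):
    """Return a list of sections, each a set of active layers."""
    sections = []
    # Introduce each layer on its own first.
    for layer in layers:
        sections.append({layer})
    # Then combine adjacent pairs, still withholding the full stack.
    for a, b in zip(layers, layers[1:]):
        sections.append({a, b})
    # Finally, resolve the expectation: everything together.
    sections.append(set(layers))
    return sections

song = arrange(["A", "B", "C"])
# The full combination {"A", "B", "C"} appears only in the last section.
```

The payoff is structural: the listener hears every element and every pair before the complete texture arrives, which is exactly the "when am I going to hear A, B, and C together?" game Casey points to.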
Algorithms can learn how to create musical patterns — good ones — and deploy them in a way that sounds songlike. Where they generally fail is in understanding what audiences might expect and then subverting those expectations. Casey explains that this limitation exists because our thinking is so often informed by culture and experience in a way that is horrifically difficult to model mathematically. Culture cannot simply be fed into a machine.
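The gap between imitating patterns and playing with expectations shows up even in the simplest generative models. Here is a minimal sketch of a first-order Markov chain, a common toy approach: it reproduces note-to-note statistics from a training melody, but each note depends only on the one before it, so the model has no representation of the listener’s longer-range expectations. The training melody and note names are invented for the example.

```python
# Minimal sketch of pattern imitation via a first-order Markov chain.
# It copies local note-to-note statistics but has no model of a
# listener's longer-range expectations -- the limitation Casey notes.
import random
from collections import defaultdict

def train(melody):
    """Count note-to-note transitions in a training melody."""
    transitions = defaultdict(list)
    for cur, nxt in zip(melody, melody[1:]):
        transitions[cur].append(nxt)
    return transitions

def generate(transitions, start, length, seed=0):
    """Walk the chain: each note depends only on the previous one."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        out.append(rng.choice(transitions[out[-1]]))
    return out

melody = ["C", "E", "G", "E", "C", "E", "G", "C"]
new_tune = generate(train(melody), "C", 8)
```

The output will sound locally plausible, because every transition was heard in the training melody, but nothing in the model knows what the listener has come to expect across the whole piece, let alone how to thwart it.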
That said, musical breakthroughs do happen, and an algorithm could potentially produce one that becomes a genre unto itself. For years, Casey explains, ears were attuned to analog sounds, like the folk-inspired ditties of Crosby, Stills, Nash & Young, which is why Madonna’s heavily modulated, pitch-shifted early music seemed so distinctive when audiences first heard it. Today’s Top 40 hits, bearing Madonna’s direct influence, aren’t nearly as memorable, because the songs have clear antecedents. But what if Madge could be math?
Dan Rockmore, also a professor of mathematics and computer science, doesn’t rule out the idea of machine-generated music becoming a dominant genre. “If, for 50 years, the only thing people ever listened to was computer-generated music, anything that a human generated might feel foreign,” he says. Music with a robotic aesthetic already exists — artists like Kraftwerk and, yes, Daft Punk — and is reshaping the rules of the ongoing musical game. The fact that the term “robotic aesthetic” makes inherent sense to people is proof, Rockmore says, of an emerging musical type. “When the word ‘robot’ arrived, that would’ve been a total oxymoron. Now it’s kind of a shoulder shrug.”
Still, Casey and Rockmore insist that robots will not be taking the music industry by storm any time soon.
“I actually am relieved, as a music professor, that what we consider to be a human proclivity seems somewhat safe, for now,” Casey says. Besides, he adds, there’s always this: “If I were able to write a machine that could write perfect, beautiful dance music or piano music, well, I’m still the composer. I’m just composing it at a different level.”