For the classically trained musician Sage Lewis, technology poses a challenge. Though his compositions have appeared in ads for Google, Facebook, and Nintendo, Lewis remains devoted to acoustic sounds. “There’s something about the string quartet and the main acoustic instruments, which express so much emotion,” he says over the phone. “You can’t really express that with electronic music in the same way.”
But maybe he’s figured it out. At South by Southwest, Lewis’s most recent scores — the techno-drama Operator starring Martin Starr (Silicon Valley) and Mae Whitman, and the VR thriller The Surrogate — tackle our ever-changing understanding of human interactivity with breakthrough technology. Lewis’s scores, a meld of electronics and acoustics, reflect these seismic shifts.
Directed by Logan Kibens, Operator follows a computer programmer who builds a customer service bot using his wife as a template before he takes an obsessional, destructive path built on love. The Surrogate, meanwhile, is a VR experience from a first-person view that exposes a troubled marriage. Nominated for the Interactive Innovation Award at the festival, The Surrogate “problematizes the notion of physical presence and reflects … cultural fears and afflictions in the digital age.”
Before the Austin festival, Lewis spoke to Inverse and told us how he composed one of SXSW’s hidden gems and how the rules for composing VR are still being written.
At SXSW this year, you have Operator and The Surrogate. What can you tell me about your understanding of the two projects?
It’s exciting; they’re two films that are similar in their stories but very different in form and the way they’re experienced. Both have to do with technology, [our] relationship to it, and how it’s interfering with real relationships in our personal lives. [They’re about] artificial intelligence and where we’re going in the future.
Technology and human relationships are popular themes, evidenced in things like Her and Black Mirror. How did you approach your score for Operator?
Technology is mediating our communications more and more, but where we could be going is relationships [that aren’t just] with another human but with an algorithm. It was fun to work on these themes musically. The score has a chamber ensemble with a string quartet, piano, acoustic guitar, drum set, and percussion. Then there’s a whole electronic side; each electronic instrument maps directly to one of the acoustic instruments. [We] recreated them from scratch with a modular synthesizer. That created a score that was able to express these emotions electronically.
Considering how these films are so crucially about technology, why did you include natural and classic sounds in Operator?
For me it was about two co-existing worlds, one virtual and one natural, and how they are co-existing in harmony and conflicting, fighting over the same space. It was important to have both. If you have two things set against each other in contrast, it makes each one more powerful. Electronic music sounds more electronic when it’s in the context of acoustic music. The quality of something that’s natural can sound even more natural when you’re setting it against something very unnatural.
What was your favorite part of Operator to score?
There are some beautiful scenes of Lake Michigan where the character, Joe, goes running every day. He measures his health data, goes to Lake Michigan, and takes a picture. There is a scene where they were like, “This is one of the most important pieces of the film. This is all you, Sage.” The music had to be simple too; it wasn’t telling with a lot of gestures. It was just a couple of minutes of a few simple gestures, well placed, that could really hit you in the heart. That was the most exciting for me.
How do you compose a subjective experience in VR versus a movie like Operator?
It’s exciting because no one understands how to do it. There’s nowhere you can study it, and you can’t even check out other VR projects to see what they’ve done because it’s still not that accessible. There’s a lot of figuring it out yourself. We have our own Oculus headsets we used to develop this. Sometimes we go to VR conferences to see other content, but for the most part, it’s a product that hasn’t hit the market yet.
How about video games, which you’ve worked on and which are also subjective technological experiences?
In some ways it’s like video games, but the experience is different. The Surrogate isn’t a game but an “interactive film,” which did a really good job figuring out new territory in VR. It’s 12 minutes long, and it’s spherical, meaning rather than a 16x9 frame, you’re in the film anywhere you look — behind you, up, down — and you can walk around. It’s this hybrid of CG shot with a spherical camera, [with] real characters. Not avatars.
As you go around, some of the music [is] used the [whole] time to support the narrative. There’s another type located in spaces. If you go to that space, you’ll hear it, but if you’re not there, you don’t. There’s a lot you need to communicate when you’re in a film to make the story work so people don’t miss important parts, but [in VR] you don’t have control over the user. The user can wander anywhere.
The score [in VR] helps the emotional arc be communicated, whether they’re paying attention to the performance or not. You’ve got to use the score to bring things together because you don’t have control. This is interesting for me because internally, for the character you’re inside of, it’s a psychological thriller within their mind and body, but in the outside world it’s not. It’s people moving around and talking, not a regular thriller with action.
Camera angles in films communicate narrative and theme, which music intensifies. In VR, it seems like music is doing all the work.
It’s hard to figure out too, because when I’m composing, I can’t experience it. I can’t demo it because you can’t compose with the headset on. In a film, you can watch it, put in the music, watch it again, and have a bunch of people sit down together and talk about it.
VR is similar to theater, actually. Some people have good seats, some people have bad seats. They’re looking from all angles. From the musical perspective, you know how things are going to sync. You don’t have to guess how they’re going to feel on stage when you’re working on it. [In VR] you’re guessing, imagining it in your mind, [using] materials you’re referencing that help inform you of how things are supposed to look and feel in the end.
It’s never finished. It just keeps changing. With The Surrogate, we just finished the latest update to it. Then we’ll keep working on it for a long time based on what we learn. Things are slower because you’re relying on programmers, and things break all the time inside the system. It’s so different.
How long do you think it will take for composers to find the rhythm for music in VR?
Figuring out how to write music for VR will take as long as figuring out the other elements. I think three to five years until it starts getting to a point where it’s working and part of everyone’s lives. This year, VR will be heading to market. It will probably take years to improve the prototype and figure out how to create content, but I see it happening around 2020, when it’s a well-developed form. Film took a long time too. There were great films from the beginning, Charlie Chaplin and stuff like that, but developing that technology took a long time until its golden age. We’re still far away from that, but it’s happening now. We’re going to start seeing it this year. It’s exciting.