Sure, dragons are fictional creatures, but somehow the gigantic winged lizard beasts look stunningly realistic in HBO’s Game of Thrones. Derek Spears, a visual effects supervisor at the VFX house Rhythm & Hues, has won Emmys two years in a row for leading the team responsible for those dragons. Daenerys’s faithful companions have become more and more advanced over the years, as the effects industry has taken leaps and bounds in advancing its tech and streamlining workflow.
Inverse spoke with Spears about his work on Game of Thrones, The Walking Dead, and the future of visual effects — which will make Star Wars: Rogue One’s revival of Peter Cushing look like amateur hour.
What do you think the next frontier for VFX is? What do you think the next big leap is over the next five years?
I think just within the next 5 to 10 years it’ll be A.I.-driven actor performances. I think that’s the big frontier. I think that’s gonna be driven by a lot of the VR technologies and how that trickles into the visual effects. I think that interactive technologies are gonna drive that. I think that’s a big frontier and interesting.
Do you work with A.I.?
There’s not an applicable tool set that actually solves problems with it yet, but that’s the frontier. People are doing research into A.I., and I think VR will be a field where we’ll try to find ways to use it and interact with performances, to drive facial performances and drive human interaction based on A.I.
Let’s talk fantasy: You do work on Game of Thrones, particularly with the dragons. How have they evolved on the show, and what makes them look so eerily real?
I think that the personality of Drogon, especially, is what really helps sell that dragon. And of course, it doesn’t hurt that you have some extremely talented animators here. If you look at what makes any creature look realistic, you’ve got three aspects to it: you’ve got animation, you’ve got lighting, and then you’ve got scene integration and compositing. If any one of them doesn’t work, then you don’t have a realistic-looking creature.
Last season we did an interesting thing: When Daenerys was flying and riding the dragon, there was a motion-controlled buck setup that was driven by pre-animation we did. Someone did a pre-visualized scene, and we went to look at that pre-vis, fleshed it out, and made some high-quality animation, which drove the motion control for both the camera and the buck that Dany was riding on.
So once we had something that was very much going to look like the final shot, we went back and parented our dragon to that animation, and now you’ve got dynamic camera moves. You’ve got her moving in reaction to the dragon’s flaps and movements. This was a big improvement over Season 5. Things like that add to the realism of it because you tie in all that motion. You’re more coupled with what’s actually going on rather than being camera-stuck with something that’s very static.
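The parenting step Spears describes, locking the creature to the same animation that drives the camera and the buck, can be sketched in a few lines. This is a minimal illustrative model, not production rigging code: all names are hypothetical, and a rig transform is reduced here to a position plus a yaw angle.

```python
import math

def parent_to(parent_frames, local_offset):
    """Compute the child's world-space position per frame by applying a
    fixed local offset to an animated parent transform (position + yaw).

    parent_frames: list of (x, y, z, yaw) tuples, one per frame.
    local_offset:  (x, y, z) of the child relative to the parent.
    """
    out = []
    ox, oy, oz = local_offset
    for (px, py, pz, yaw) in parent_frames:
        # Rotate the local offset by the parent's yaw, then translate.
        wx = px + ox * math.cos(yaw) - oz * math.sin(yaw)
        wz = pz + ox * math.sin(yaw) + oz * math.cos(yaw)
        out.append((wx, py + oy, wz))
    return out
```

Because the dragon, the buck, and the camera all inherit from the same animated parent, any change to the flight path propagates to all three, which is what keeps the rider's motion coupled to the dragon's.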
VFX has made so many leaps in creating humans the last few years, too. ILM brought back Peter Cushing to play Tarkin in Rogue One. How do you feel about the ethics of doing something like that?
Ethics is a tricky question, and who controls your likeness after death is an area I will leave to somebody more skilled in legal matters than myself. We did something not entirely dissimilar with Kevin Bacon a few years ago in R.I.P.D. The entire back half of the movie had to be digital, because he had to be seven feet tall and his face had to be split open, but at the same time it was Kevin, and I don’t think audiences were even aware. We even had a screening with the DP, and he asked what we did. That was much easier, and I say “easy” in air quotes, because of course we had a living likeness: we could scan him, we could get facial performance from him, and we could get lighting reference from him.
What ILM did with Tarkin is much, much harder because you don’t have any of that reference. You have to go back to the movies and study them, and it’s a harder deal. We’ve been able to make pretty good digital humans, or evolved into making pretty good digital humans, over the past few years. There have always been lots of references, lots of motion capture, and the real goal is: can you make something from scratch that works? I think that’s what they’re moving toward with Tarkin.
How far away are we from doing something without reference at all?
We’re kind of there. We’re doing it, and it just continues to get better. The hard part about all of this is the tremendous amount of human labor that goes into it. It takes a lot of modeling. It takes a lot of texture painting. It takes a lot of building key shapes and models. It takes a lot of animation, it takes a lot of guessing about what those performances look like, and I think at some level some of that has to get automated, and so does performance. Do we get enough automation tools, with machine learning, to have a Turing-test experience with a digital human? Can we get to the point where we don’t have to have that much manual labor underneath and it becomes a little more of an automatic process?
I think that stuff will come, but the real test is: can you not just take a digital human that takes nine months to build, but build a digital human from scratch that you can interact with? Speak to it, do things with it? I think that’s the bigger evolutionary step.
The real issue that we see in our industry is that we’re not trying to create some kind of revolutionary technology. We’re always trying to shave time and effort off things, because everything is getting bigger for the same or less money, so you’re trying to find ways to improve process. Those savings center around human, labor-oriented issues. Machines are cheap, comparatively.
How do we animate those things faster? Motion capture tries to shortcut that, but you need a base to go from, so you’re still doing some things by hand. But there’s technology to make things move faster; I believe Adobe’s talking about keying technology to make green screens go away. How can you take human effort out of the equation and automate long-standing, long-enduring human tasks to shorten them? It’s not about getting rid of jobs; it’s about making the work easier.
It requires so much work, and sometimes big-budget blockbusters have low margins. They rarely make you money, right?
It rarely does. If you look at any major motion picture these days, most of them are largely effects films. The largest number of bodies on the film will be in the effects department, more than every other department of the crew combined, most likely. An enormous amount of labor goes into doing this, and it’s expensive. Ticket prices aren’t rising at the rate the complexity of films is rising. So if there has to be some push to squeeze those margins down, that’s obviously gonna fall on visual effects.
What kind of projects make up the difference?
It can be all over the map. The profitable projects typically have highly repetitive work, where you can get better at it as you go, doing the same effect over and over and over again for many, many shots. The first 100 shots you do will cost “X,” the next batch will be maybe nine-tenths of “X,” and it ramps down as you get the process down. You start off doing it more expensively than you thought it would be, and by the end, hopefully, you’re doing it a lot cheaper. Unfortunately, for a lot of people, that’s not necessarily interesting work. The more different things you do, the more setup time you have to take to learn about them, and by the time you’ve learned it, you’re off doing the next thing. So highly repetitive jobs tend to be more profitable.
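The ramp-down Spears sketches, each run of repeated shots costing roughly nine-tenths of the run before it, is a classic learning curve. A toy model in Python; the 0.9 factor and the batch structure are taken from his illustration, not from any real bid:

```python
def batch_costs(base_cost, batches, learning=0.9):
    """Cost of each successive batch of repeated shots, where every
    batch costs a fixed fraction of the one before it."""
    return [base_cost * learning ** k for k in range(batches)]

# batch_costs(100.0, 3) -> [100.0, 90.0, 81.0]
```

Under this model a long run of identical shots gets steadily cheaper, which is why repetitive work is more profitable than a slate of one-off effects that each reset the curve.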
So TV shows, maybe?
More so shows where you’re doing the same exact thing: you’re putting a couple of characters in the same shot over and over again, or you’re doing the same set extension over and over again. TV series tend to change a lot; they’ll have different sets, they’ll have different characters. Every week’s a different story.
You guys did some work on The Walking Dead — what did you do on it?
It’s kind of a cross between creature work and, of course, a lot of kills. There are decapitations. There are some body parts. We get a lot of crowd work and various other creature things for it. That’s another one where I think people probably don’t realize how much is going on behind the scenes. In walking crowds of zombies, maybe the first 100 people are real, and about 500 people aren’t. Nobody really thinks about those kinds of things.
Where do the models come from? Is it certain people you use over and over again?
We start with a lot of base models from photography or from scans. We’ll take variations of them by playing mix-and-match with clothes and colors, so we’ll have a base library of shapes, and they get randomized with different pieces of clothing and colors. We’ll have a bunch of different walk cycles, animated or motion captured, and that differentiates them. Each individual person will have a matte that lets us change the color of a piece of clothing, so we can shift it around to randomize things some more.
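The mix-and-match randomization Spears describes, a base library of shapes crossed with clothing variants, walk cycles, and per-matte color shifts, can be sketched as follows. The asset names and attribute fields here are hypothetical; a real pipeline would reference scanned meshes and motion-capture clips.

```python
import random

# Hypothetical asset libraries standing in for scanned models and mocap clips.
BASE_MODELS = ["walker_a", "walker_b", "walker_c"]
OUTFITS = ["jacket", "rags", "uniform"]
WALK_CYCLES = ["shamble_01", "shamble_02", "limp_01"]

def make_crowd(n, seed=0):
    """Build n background-crowd variants by randomizing base model,
    outfit, walk cycle, and a hue shift applied through the clothing
    matte, so repeated assets read as distinct individuals."""
    rng = random.Random(seed)  # seeded so a shot re-renders identically
    crowd = []
    for i in range(n):
        crowd.append({
            "id": i,
            "model": rng.choice(BASE_MODELS),
            "outfit": rng.choice(OUTFITS),
            "walk": rng.choice(WALK_CYCLES),
            "hue_shift": rng.uniform(-0.5, 0.5),
        })
    return crowd
```

Even with only three of each asset, the combinations multiply (3 models × 3 outfits × 3 walks, each with a continuous hue shift), which is why duplicates in the third layer of a crowd are so hard to spot.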
So if you look really closely, you might see the same face, but it’s hard to look that closely.
To a level. We’re not doing people front and center; they have actors for that. So we put them beyond the second and third layers of practical crowds, and no, you’re not going to see it, unless they all have the same color or they all have the same walk cycle. Other than that, your mind forgives it very easily.
The zombies have deteriorated over the years. Have you played a part in that?
They do a lot of it practically. The only time we get involved is when you see through somebody. They still don’t like to punch holes in actors, so they’ll remove a torso or a bit of a neck and then build a CG version of that amputation that will fit in there and reveal the background through it. There are a couple of shots in the last season where you’ll see things like that happen.