If you’ve watched some of the biggest Hollywood blockbusters in recent years, you’ve almost surely seen Kiran Bhat’s work. It’s thanks to Kiran and his former colleagues at Industrial Light & Magic that digitally created characters like the Hulk in The Avengers or Grand Moff Tarkin from Star Wars: Rogue One show emotions just like a living, breathing human being.
Last week, Kiran, along with Michael Koperwas, Brian Cantwell and Paige Warner, was given the highest honour for the revolutionary ILM facial performance-capture solving system they created. They were chosen to receive a Scientific and Technical Award from The Academy of Motion Picture Arts and Sciences, popularly known as a Technical Oscar. Talking to The News Minute over email, Kiran, who originally hails from Coimbatore and currently lives in San Francisco, speaks about his work and the joy and honour of being so highly recognised for it.
How are you feeling about receiving the award? You now join an elite group of only seven other Indians to have won a prestigious Oscar Award. How does that feel?
It’s quite humbling, and I am super honoured to be on this list!
You had received some intimation about the possibility when the Academy had contacted you some six months ago. Did you think then that you would eventually receive this honour?
The Sci-Tech Academy committee is extremely thorough and can sometimes take years before they give an award. I was delighted to find out (six months ago) that they were investigating our work for 2017, but these awards tend to be extremely selective, and so I wasn’t raising my hopes then! I was cautiously optimistic, though, given that we had done several high-profile movies with the technology.
Could you tell me a little more about the technology that you developed with Michael Koperwas, Brian Cantwell and Paige Warner, which has won you the award?
This technology analyses an actor’s face from video and automatically animates a 3D digital character that exactly matches the actor’s facial performance. For example, the Hulk character’s face in The Avengers was built and animated from Mark Ruffalo, a Hollywood actor. The technology allowed ILM to automatically create the Hulk’s facial emotions by analysing Mark’s performance on a movie set. Another example was the Tarkin character (fully computer-generated) in Star Wars: Rogue One, which was animated from the facial performance of Guy Henry, a British actor.
What got you into your line of work, which seems to be an interesting mix of highly technical and artistic work?
I’ve always been fascinated with understanding movement and nature, so studying facial movements and representing them in a computer felt like the ultimate challenge in this respect. Representing faces in a computer is tricky to get right – we have to capture large-scale movement (jaw movement and lip sync during dialogue, for example), but it’s even more critical to capture subtle movements and nuances because they are very important in conveying emotion. I pursued this challenge using computer vision algorithms, which analyse human faces from video and convert them into animation signals.
It turns out that digital artists are quite interested in all of this too – they are constantly trying to portray realism in movies, especially at a place like ILM, which has a perfect blend of technologists and artists all working towards digital realism. I was fortunate to work with amazing artists like Michael Koperwas, Brian Cantwell and Paige Warner, who brought an artistic and VFX production viewpoint to solving this puzzle.
What does it feel like to build such a technology that touches so many millions of lives, and to know that the fruits of your work are witnessed by people around the globe?
Truly humbling and inspiring!
To enable filmmakers to connect emotionally with their audience using iconic and believable CGI characters was one of the primary reasons I got into VFX. Plus, I am a die-hard Star Wars fan.
You have moved on from ILM to found your own company Loom.ai, which builds 3D digital avatars out of 2D photographs. Could you tell me a little more about this work you are currently pursuing?
Yes, I co-founded Loom.ai in March of 2016 with a long-time friend of mine from BITS Pilani, Mahesh Ramasubramanian (who was previously at DreamWorks Animation).
Loom.ai is a machine learning start-up based in San Francisco that builds photoreal 3D avatars from selfies. The new suite of computational algorithms built by Loom.ai will democratise the process of building believable 3D avatars for everyone, a process that was previously expensive and exclusive to Hollywood actors benefiting from a studio infrastructure. The animatable and expressive faces can be used to power applications in VR (virtual reality), gaming, messaging, e-commerce and virtual classrooms for individuals and businesses around the world.
Do you see yourself and your work coming into Indian cinema in the near future?
I think you will see our work at Loom.ai in VR sooner than cinema. Loom.ai’s technology will allow you to create your 3D avatars in minutes from a selfie, and will let you immerse yourself in virtual environments such as VR arcades or theme parks or games.
Source: The News Minute