Metastage - Creating the Future of Video Entertainment
By Georgi R. Chakarov
Metastage is one of the companies creating the future of video entertainment right now. Its volumetric video capture technology allows viewers to experience high-quality 3D human performances in real (AR) or virtual (VR) environments. Georgi R. Chakarov got on the phone with Metastage CEO Christina Heller to talk about the advantages of the company's technological innovation and how it is redefining the future of video entertainment in terms of production and user experience.
Christina, tell us a bit more about your company. When was it founded and how did you decide to develop this line of business?
Metastage launched in August of 2018, so about a year and a half ago. We built the company with the goal of becoming a home for reliable, commercially viable immersive solutions that wouldn't be practical for agencies and production studios to own or develop in-house. We launched with our first offering of that kind: volumetric video, using the Microsoft Mixed Reality Capture software. With volumetric video, we use 106 cameras facing inward like a globe around somebody doing a performance, and we capture that from every angle. We then put it through the Microsoft software, and on the other side you get a fully 3D, lifelike, authentic rendering of whatever happens on that stage. You can bring those live performances into VR or AR experiences, or virtual production scenarios. Until this point, bringing real people into these immersive experiences was a huge challenge.

Before launch, did you present your project to investors? Did you get any additional financial backing?
Metastage is privately funded by a number of individuals who believe in the vision of the company. When I came onboard, the license from Microsoft had been acquired and a lot of the funding was already put together. My task was to launch and lead the company. We are a group of domain experts and individuals who believe in what we are building. After putting together the funding to build the company, we use production services revenue to sustain it.

In terms of technology, was there a moment where you had to develop something yourselves?
We are really lucky because Microsoft has spent almost a decade building the software that Metastage is making commercially available across the US. The technology was ready, but it needed production partners to commercialize it. Metastage worked with Magnopus to design and engineer our volumetric stage. We also bought top-of-the-line hardware and developed our workflow. We put energy into giving audio the same consideration as the visual elements, which we designed in collaboration with the audio specialists at EccoVR.

Experts are now convinced that AR is the future of television, and you are bringing volumetric video technology to the table. Can you explain how it works and what its advantages are?
I think the technology that makes the most sense to compare it to is motion capture, because if you were working in XR before this and you wanted to put lifelike characters in your experiences, motion capture was the only option. But it has its limitations – the biggest being what we call the uncanny valley: if you try to make a realistic rendering of a person, it often falls a little short because it doesn't have the nuances of a real human performance. That becomes even more prominent when you try to capture a publicly recognizable figure. We just know when we are looking at an animated rendering of Beyoncé – it is simply not her. Volumetric capture solves that problem by using real video data and compositing that data onto a mesh to create something that accurately reflects the real person and the real performance. It also has a lot of benefits for a production: the performers don't have to wear any special suits – they are shot as they are. That makes it really easy for performers. It's easy on developers, too: they don't need to do additional VFX or any of the animation that takes weeks or months in post-production. We provide our partners with the highest quality of volumetric capture, which they can simply drag and drop into the game engine and they are ready to go.

How does your technology compare to, let's say, what they did with the "de-aging" of Robert De Niro and the other stars in Scorsese's The Irishman?
What we are doing is the opposite of that. They used VFX to de-age De Niro and the other guys. I think sometimes it looked really good, and sometimes it did have some of that uncanny valley effect, which was distracting – I knew I wasn't looking at the real Robert De Niro, but at an altered version of his face. There are a variety of opinions about whether or not the de-aging effect was successful. I found it distracting from the story because it didn't look quite right; it didn't look like a believable human face to me. What we are doing with volumetric capture at Metastage is creating the most accurate recording of what happens on that stage in full 3D, so that when you experience it, you feel like the person is standing in front of you. You can feel their presence in the closest way to the real thing. And of course, we are able to capture them from every angle, capture their full body, and still carry those micro details and nuances that are really hard to animate.

The technology of producing content seems clear, but how about reproducing the content? What type of players do the consumers need to play out your productions at home, in the living room?
Our production team and partners use game engines to integrate the captures and to publish them. We typically use Unreal and Unity. Any device that runs this software is a potential point of distribution for the consumer. Most of our experiences have been on a tablet or phone. You can see the volumetric capture on your iPhone and have it appear in your actual living room using the phone's camera. As of January 2020, that's the most common way people are engaging with our captures. You can also experience them in VR, or in augmented reality HMDs like HoloLens and Magic Leap. As the devices for engaging with 3D assets mature, these captures are ready to go and future-proofed.

How long does it take to process a minute of volumetric video and what would be the price?
We encourage anybody who wants rates to contact us, and we can send them a rate card. Prices typically start around $15,000 for a simple capture, and we build up from there. If we can get at least two weeks for pre-production, that is preferred. We have done things in less time, but we encourage people to give us at least two weeks to prepare and consult. During the production day you can do as many takes as you want. We give you time-coded dailies to review at the end of the day. You then give us the time codes for the shots you want final processing on, and we turn that around within two weeks.

How big is the demand for such content? Do you get many calls?
We are receiving a lot of inquiries, but that doesn’t mean that they all get the green light. There is a tremendous amount of interest in what we are doing, and artists from all around the world want to work with volumetric video. Creative pioneers are excited about doing something groundbreaking and volumetric video has so much potential for innovation – both creative and technological. I’m on calls and doing tours every day. A growing percentage of those turn into commercial projects. I can say that the volume and seriousness of projects is increasing, and we are moving into bigger, more ambitious episodic content.

Are you busy only with the technological side of the productions, or also with the creative?
We help with the creative as much as the client desires. At a minimum, we provide volumetric counsel. For example, hair is one of our bigger challenges, and if somebody comes in and says, “Okay, we are going to have a wind machine blowing the character’s hair while she is performing” – we would say that would look really bad in the final processing and offer suggestions to achieve the same creative vision differently. Ultimately, our goal is a very clean, high-quality capture. We do not try to interfere creatively if that is not what the client is looking for, but if they want input, we are happy to engage.

What type of content are you making most of the time?
We have seen a tremendous amount of work from music, sports, and healthcare. There is also a lot of interest in how 5G can make volumetric content easier to consume. I am hoping to see more development in training this year, because there are a lot of great applications for DIY, coaching, and training employees. We have seen a lot of interest from the music industry in getting stars closer to their fans, and also from athletes, as the full-body capture allows you to experience an athlete in a more intimate and accessible way than ever before.

We can easily think of great productions like The Matrix, for example, which could possibly be taken into new dimensions with the technology you are offering. Has there been interest in that direction?
We have spoken with filmmakers who are interested in using volumetric capture for traditional productions. As virtual production becomes more commonplace and people are building scenes inside the game engine, you can put volumetrically captured characters into those scenes and frame your shot and story inside the game engine. I’m excited about that too because what we have at Metastage is an incredible new tool. It’s great to see it bringing value to many mediums, both traditional and new.
CHRISTINA HELLER is the CEO of Metastage, an XR studio that brings performances into digital worlds through volumetric capture and complementary tools. Prior to leading Metastage, Christina was the CEO of VR Playhouse, an award-winning immersive content company based out of Los Angeles. She is a recipient of the Advanced Imaging Society's Distinguished Leadership in Technology award and was named in the Huffington Post as one of five women changing the virtual reality scene. She has contributed to over 90 immersive projects and comes from the world of journalism, radio and television.