How Pixar Uses AI and GANs To Create High-Resolution Content

Originally published in VentureBeat, July 17, 2020.

As digital animators continue to push the boundaries of technology and creativity, the technical teams that support them are turning to artificial intelligence and machine learning to deliver the tools they need. That’s the case at Pixar, where the company has made new machine learning breakthroughs it hopes will both improve quality and reduce costs.

Vaibhav Vavilala, technical director at Pixar Animation Studios, has been leading some of those efforts after carefully studying recent scientific literature and monitoring the work at the R&D labs of parent company Disney. Vavilala said those advances have the potential to reduce Pixar’s datacenter footprint by half for some stages of production.

Vavilala made his remarks during a presentation at VentureBeat’s Transform 2020 conference.

In recent years, Vavilala has worked on the lighting and rendering optimization team on such Pixar movies as Coco, Incredibles 2, and Toy Story 4. But a couple of years ago, he began focusing on a concept called Deep Learned Super Resolution. Vavilala and a collaborator at Pixar, Mark Meyer, published a technical paper detailing their work.

During his presentation, Vavilala explained that a major challenge facing the digital animation industry is the time and cost of rendering animation at high resolutions such as 2K or 4K.

“It typically takes at least 50 CPU hours to render one frame at 2K resolution,” he said. “We render 24 frames per second for a 90-minute film and render each shot many times as the artists iterate. Further, we have multiple films in flight at a given time. All this adds up to a lot of demand for the render farm, which is a finite resource. If we render at a higher resolution like 4K, that’s four times the pixels and more than double the cost.”
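The figures Vavilala quotes can be sanity-checked with some back-of-envelope arithmetic. The sketch below uses only the numbers stated above (50 CPU-hours per 2K frame, 24 frames per second, a 90-minute film); the exact 4K multiplier is not given beyond "more than double," so a 2x lower bound is assumed here for illustration.

```python
# Back-of-envelope render-farm demand, using the figures quoted above.
CPU_HOURS_PER_FRAME_2K = 50   # "at least 50 CPU hours" per 2K frame
FPS = 24                      # frames per second
FILM_MINUTES = 90             # feature length

frames = FPS * FILM_MINUTES * 60              # total frames in one film
hours_2k = frames * CPU_HOURS_PER_FRAME_2K    # CPU-hours for a single 2K pass

# 4K is four times the pixels; the cost is said to be "more than double."
# Treat 2x as a conservative lower bound (an assumption for this sketch --
# the true factor depends on the renderer and scene).
hours_4k_lower_bound = 2 * hours_2k

print(f"{frames:,} frames, >= {hours_2k:,} CPU-hours for one 2K pass")
print(f">= {hours_4k_lower_bound:,} CPU-hours for one 4K pass (lower bound)")
```

Even a single 2K pass over one film comes to roughly 6.5 million CPU-hours, and that is before accounting for the many re-renders per shot and multiple films in flight that Vavilala describes.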

