What AI is Doing to Reinvent Visual Effects


Josh Brolin's portrayal of the 9-foot super-mutant Thanos in Avengers: Endgame was expertly animated by teams of animators at Weta Digital and Digital Domain. In another clip, Harrison Ford, now 76, appears as Han Solo in a scene from 2018's Solo: A Star Wars Story, looking just as he did at 35 when he originated the role.

While one example was produced with Hollywood's full might, the other, uploaded to the Derpfakes YouTube channel, appears to have been made by a single person. Both relied on artificial intelligence and machine learning tools to automate parts of the production process.

Both show how AI and ML may revolutionize the development of VFX for blockbuster movies and make cutting-edge VFX techniques accessible to everyone.

Audiences now expect 3D animation, simulation, and rendering to reach a photorealistic or artistic fidelity that is almost flawless. Given enough resources (artists, money), virtually no effect is out of reach, even notoriously hard problems such as overcoming the uncanny valley with photorealistic faces.

To meet the demands of ever-growing VFX film production, the industry has recently concentrated most of its efforts on developing more affordable, efficient, and flexible pipelines.

For a while, the most time-consuming and repetitive tasks, such as match-moving, tracking, rotoscoping, compositing, and animation, were often outsourced to less expensive overseas studios. With recent advances in deep learning, however, many of these tasks can be largely automated and carried out quickly and at very little cost.

Innovations are enabling the use of learning systems that can improve the quality of work, whether it be in your character simulation and animation process, your render pipeline, or your project planning.


Manual to Automatic

Matchmoving, for instance, enables CGI to be inserted accurately into live-action footage while preserving scale and motion. Tracking the camera's position within a scene is a demanding, largely manual procedure that can take up more than 5% of the time spent on the entire VFX pipeline.

Using metadata recorded by the camera at the point of acquisition (lens type, how fast the camera is moving, and so on), software firm Foundry has developed a novel method for tracking camera movement more precisely. The algorithm was trained on data from DNEG, one of the biggest facilities in the world, and according to lead software engineer Alastair Barber the results improved the matchmoving process by 20%, proving the concept.
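
To make the idea concrete, here is a minimal sketch of metadata-seeded camera tracking. It is not Foundry's algorithm; it simply builds a pinhole intrinsic matrix from hypothetical lens metadata and recovers relative camera motion between two frames with standard OpenCV calls.

```python
# Minimal sketch, not Foundry's tracker: seed the intrinsic matrix from on-set
# lens metadata, then estimate camera motion between two frames with OpenCV.
import cv2
import numpy as np

def intrinsics_from_metadata(focal_mm, sensor_width_mm, image_width, image_height):
    """Build a pinhole intrinsic matrix from lens metadata."""
    fx = focal_mm / sensor_width_mm * image_width   # focal length in pixels
    return np.array([[fx, 0, image_width / 2],
                     [0, fx, image_height / 2],
                     [0, 0, 1]], dtype=np.float64)

def relative_camera_pose(frame_a, frame_b, K):
    """Track features between two frames and recover the relative camera pose."""
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)
    pts_a = cv2.goodFeaturesToTrack(gray_a, maxCorners=500, qualityLevel=0.01, minDistance=7)
    pts_b, status, _ = cv2.calcOpticalFlowPyrLK(gray_a, gray_b, pts_a, None)
    good_a = pts_a[status.ravel() == 1]
    good_b = pts_b[status.ravel() == 1]
    E, mask = cv2.findEssentialMat(good_a, good_b, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, good_a, good_b, K, mask=mask)
    return R, t  # rotation and unit-scale translation of the camera

# Hypothetical example: a 35mm lens on a Super 35 sensor (~24.9mm wide) at 1920x1080.
K = intrinsics_from_metadata(35.0, 24.9, 1920, 1080)
```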

To achieve widespread adoption, studios will need to persuade clients to grant them access to their data. Barber believes this shouldn't be too challenging, largely because of the client-studio relationship: "It's easier to explain what they need and why without raising suspicion if a studio has good access to what is happening on set."

Rotoscoping, another labor-intensive process, is being tackled by Rotobot from the Australian company Kognat. The startup claims its AI can process a frame in roughly 5 to 20 seconds. Rotobot is only as precise as its deep learning model, which will improve as more data is fed into it.
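
Kognat has not published Rotobot's model, but the general approach, pulling a per-frame matte from a pretrained segmentation network, can be sketched in a few lines. The model choice and the frame path below are illustrative assumptions.

```python
# Sketch of AI-assisted rotoscoping (not Kognat's Rotobot): use a pretrained
# segmentation model from torchvision to extract a soft "person" matte.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Pretrained DeepLabV3; class index 15 is "person" in the PASCAL VOC labelling.
model = torchvision.models.segmentation.deeplabv3_resnet50(weights="DEFAULT").eval()

def person_matte(frame_path):
    """Return an [H, W] matte in [0, 1] for the 'person' class of one frame."""
    image = Image.open(frame_path).convert("RGB")
    batch = to_tensor(image).unsqueeze(0)          # [1, 3, H, W]
    with torch.no_grad():
        logits = model(batch)["out"]               # [1, 21, H, W]
    probs = logits.softmax(dim=1)
    return probs[0, 15]                            # soft matte for compositing

# matte = person_matte("frame_0001.png")  # hypothetical frame on disk
```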

Other companies are investigating similar image-processing strategies. Arraiy's AI can add photorealistic CGI objects to a scene even when both the camera and the object are moving, and The Mill has shown a sample of its work.

AI and Real-Time Are the Future of Filmmaking

A proof of concept from The Mill demonstrated the possibilities of real-time production techniques in broadcast, motion picture, and commercial settings.

"The Human Race" was created using Epic's Unreal game engine, The Mill's Cyclops virtual production toolset, and Blackbird, an adjustable car rig that records motion and environmental data.

On set, Cyclops stitched 360-degree camera footage and streamed it live to the Unreal engine, where computer vision technology from Arraiy tracked and composited the virtual object into the scene as an augmented-reality image. Viewing the virtual car on location, the filmmaker could customize the shot with photoreal graphics on the spot, reacting in real time to changes in lighting and atmosphere.

The technology is being marketed to automotive firms as a showroom sales tool, but its applications go far beyond marketing: it allows filmmakers to place a virtual character or object in any real-world setting.

The resulting short film is claimed to be the first to combine real-time game-engine rendering with live-action filming.

Ziva, a company based in California, has developed software, originally created at Peter Jackson's digital studio Weta for the Planet of the Apes movies, that generates CG characters in a fraction of the time and cost of traditional VFX. To model natural body movement, particularly of soft tissues such as elastic skin and layers of fat, Ziva's algorithms are trained on data sets from physics, anatomy, and kinesiology.
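
Ziva's production solver is a full physics system far beyond a blog snippet, but the underlying idea, flesh that lags and settles relative to the bones driving it, can be illustrated with a toy mass-spring-damper sketch (all constants below are made up):

```python
# Toy sketch only: a single "flesh" point driven by a spring attached to an
# animated bone, producing the overshoot-and-settle look of soft tissue.
import numpy as np

def simulate_jiggle(bone_positions, stiffness=80.0, damping=6.0, mass=1.0, dt=1.0 / 24):
    """Advance a spring-damper 'flesh' point that follows an animated bone."""
    flesh = bone_positions[0].copy()
    velocity = np.zeros(3)
    out = []
    for bone in bone_positions:
        force = stiffness * (bone - flesh) - damping * velocity  # spring + damper
        velocity += (force / mass) * dt                          # explicit Euler step
        flesh += velocity * dt
        out.append(flesh.copy())
    return np.array(out)

# Example: the bone snaps 10 units along X at frame 12; the flesh overshoots
# and settles, which reads as secondary "soft tissue" motion on top of the rig.
bone = np.zeros((48, 3))
bone[12:, 0] = 10.0
flesh_track = simulate_jiggle(bone)
```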

Mocap's democratisation

Motion capture is undergoing a similar transformation. This pricey activity has typically required specialized technology, suits, trackers, controlled studio environments, and a staff of specialists to make it all work.

RADiCAL set out to build an AI-driven motion capture system with no physical components at all. It aims to make the process so simple that shooting footage of an actor, even on a smartphone, and sending it to the cloud is enough; the company's AI then returns motion-captured animation of the actor's movements. The latest version promises a huge increase in range of motion, from athletics to combat, as well as 20x faster processing.
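
RADiCAL's pipeline is proprietary, but the broad idea of markerless capture from ordinary footage can be sketched with an off-the-shelf pose estimator such as MediaPipe; the clip name below is a hypothetical example.

```python
# Sketch only (not RADiCAL's system): extract per-frame joint positions from
# plain video using MediaPipe's pretrained pose estimator.
import cv2
import mediapipe as mp

def capture_pose(video_path):
    """Yield per-frame lists of (x, y, z) body landmarks from ordinary footage."""
    cap = cv2.VideoCapture(video_path)
    with mp.solutions.pose.Pose(static_image_mode=False) as pose:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            result = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if result.pose_landmarks:
                yield [(lm.x, lm.y, lm.z) for lm in result.pose_landmarks.landmark]
    cap.release()

# for joints in capture_pose("actor_phone_clip.mp4"):  # hypothetical clip
#     ...  # retarget the 33 landmarks onto a character rig downstream
```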

DeepMotion, based in San Francisco, also uses AI to retarget and post-process motion-capture data. Its cloud application, Neuron, lets developers upload and train their own 3D avatars using a library of hundreds of interactive motions. The service is also said to free artists to concentrate on the animation's more expressive details.

Pinscreen is also generating buzz. It is developing algorithms that can create a photorealistic, animatable 3D avatar from a single still image. Comparable VFX simulations, such as ILM's posthumous recreation of Carrie Fisher as Princess Leia or MPC's regeneration of the character Rachael in Blade Runner 2049, require meticulous scanning, modeling, texturing, and lighting.

Pinscreen's facial simulation AI is built on Generative Adversarial Networks (GANs), a method for generating fresh, credible 2D and 3D imagery from a collection of millions of genuine 2D photo inputs. Thispersondoesnotexist.com offers a startling illustration of photorealistic human face synthesis.
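
The adversarial recipe itself is simple to sketch. The toy example below (not Pinscreen's model, and trained on made-up one-dimensional data rather than photos) shows the core loop: a generator learns to fool a discriminator, and the discriminator learns to catch it.

```python
# Minimal GAN training loop. Real face GANs use convolutional networks and
# millions of photos, but follow the same adversarial recipe.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64   # toy sizes

G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(32, data_dim) * 0.5 + 2.0      # stand-in for real photos
    noise = torch.randn(32, latent_dim)

    # Discriminator: push real samples towards 1, generated samples towards 0.
    d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(G(noise).detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: fool the discriminator into scoring its output as real.
    g_loss = bce(D(G(noise)), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```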

These advances are paving the way for what Ziva's Smit refers to as "a heightened creative class."

This will, in theory, allow expert VFX artists and animators to delegate technical work to automation, freeing up more time for human creativity, while also democratizing the entire VFX sector by making AI technologies accessible to everyone.

Solo: A Star Wars Story is one of the videos posted by Derpfakes that showcases the power of deep learning for image processing. After studying a sizable collection of photographs of a person (in this case, Ford), the AI builds a database of that face in various positions and poses. It can then carry out an automatic face replacement on a chosen clip.
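
A common way such face swaps are built (an assumption about the general technique, not the exact code behind the Derpfakes clips) is a shared encoder with one decoder per identity; swapping decoders at inference replaces the face:

```python
# Sketch of the classic deepfake architecture: one shared encoder learns a
# common face representation, and each identity gets its own decoder.
import torch
import torch.nn as nn

class FaceSwapper(nn.Module):
    def __init__(self, face_pixels=64 * 64 * 3, code=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(face_pixels, 1024), nn.ReLU(),
                                     nn.Linear(1024, code))
        self.decoder_a = nn.Sequential(nn.Linear(code, 1024), nn.ReLU(),
                                       nn.Linear(1024, face_pixels), nn.Sigmoid())
        self.decoder_b = nn.Sequential(nn.Linear(code, 1024), nn.ReLU(),
                                       nn.Linear(1024, face_pixels), nn.Sigmoid())

    def reconstruct(self, face, identity):
        """Training path: each identity is reconstructed by its own decoder."""
        decoder = self.decoder_a if identity == "a" else self.decoder_b
        return decoder(self.encoder(face))

    def swap(self, face_a):
        """Inference path: encode a frame of identity A, decode with B's decoder."""
        return self.decoder_b(self.encoder(face_a))
```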


At the Press of a Button

A recent USC project focuses on producing anime illustrations by training on thousands of existing works of art. According to Lin, "Our algorithm is even capable of differentiating the drawing method and style of these artists and creating previously unseen work utilising a similar style. I predict that in the near future, this method of content generation will advance to include complicated animations and arbitrary content."
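
The USC system itself has not been released, but a much simpler relative of the idea, separating an image's style from its content with a pretrained network, is classic neural style transfer. The sketch below assumes content and style images are already loaded as normalized [1, 3, H, W] tensors.

```python
# Gatys-style neural style transfer: optimize an image so its VGG features
# match a content image while its Gram matrices match a style image. This is
# a simpler cousin of style-aware generation, not the USC method.
import torch
import torch.nn.functional as F
from torchvision.models import vgg19

vgg = vgg19(weights="DEFAULT").features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def features(x, layers=(1, 6, 11, 20)):
    """Collect activations from a few early/mid VGG layers."""
    out = []
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in layers:
            out.append(x)
    return out

def gram(f):
    """Gram matrix of a feature map: captures style, discards spatial layout."""
    _, c, h, w = f.shape
    f = f.view(c, h * w)
    return f @ f.t() / (c * h * w)

def stylize(content, style, steps=200, style_weight=1e5):
    """content, style: normalized [1, 3, H, W] tensors (image loading omitted)."""
    image = content.clone().requires_grad_(True)
    opt = torch.optim.Adam([image], lr=0.02)
    target_content = features(content)[-1]
    target_grams = [gram(f) for f in features(style)]
    for _ in range(steps):
        feats = features(image)
        loss = F.mse_loss(feats[-1], target_content)
        loss = loss + style_weight * sum(F.mse_loss(gram(f), g)
                                         for f, g in zip(feats, target_grams))
        opt.zero_grad(); loss.backward(); opt.step()
    return image.detach()
```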

Given the openness of the ML and computer vision communities and the popularity of open-source publication sites like arXiv, progress in this field is advancing quickly. More research is still needed to create learning-efficient 3D representations and interpretations of higher-level semantics.

According to Lin, "AI/ML for VFX production is now in its infancy, and while it can already automate many pipeline-related difficulties, it has the potential to significantly revolutionize how high-quality material will be made in the future, as well as how it will be available to end-users."

A Human Touch

While AI and ML algorithms are capable of synthesizing extremely intricate, lifelike, and even stylized image and video content, simply labeling a tool "machine learning" is not enough.

"Human talent must be used to refine and iterate on any algorithmically generated content." We're not in the business of informing content creators that their work is limited by the algorithm's final say. However, a wider variety of innovative options will be available in less time.
