Did you know that shadowgraphy, the art of creating images using hand shadows to tell a story, is one of the earliest known examples of “film”?
Technological advancements in film have been so rapid in the past few decades that it’s hard to imagine a time when technology wasn’t used in movies, both for animation and live action.
Computer-animated films have completely dominated traditional animated filmmaking since the release of “Toy Story” in 1995. Early animated movies were either hand-drawn by artists or shot using stop motion. Computer-animated films involve creating 2D motion pictures with 3D modeling software, but drawing is still a large part of the storyboarding and animation process.
Artists create storyboards, comic strip-style panels of character designs and settings, to illustrate the story beats. They then use model sheets to bring a character to life by drawing what they would look like with different emotions on their faces or standing in different positions.
The initial rough animation is created as animatics, a series of images played in quick succession to create motion. Earlier movies generated this motion by filming drawings frame by frame; computer-animated films do it with software.
Modelers then turn two-dimensional concept art into models in a three-dimensional digital space. The process is a lot like sculpting, but with software instead of physical equipment. Artists work to paint each scene, provide proper lighting and texture, rig characters with costumes and bone structures, then apply paint fixes and color grading before rendering the final shots.
Animation allows artists to reimagine character concepts and environments in a way live action can’t. For example, it can give anthropomorphic animals more expressive range to make them livelier and more endearing. Therefore, it’s no surprise that live-action films began to incorporate CGI more heavily into their productions to create more fantastical character designs and sets.
With the overuse of CGI in films in recent years and the call for a return to practical effects, it’s easy to forget that without CGI, we wouldn’t have had the T-1000 in “Terminator 2: Judgment Day,” Gollum in “The Lord of the Rings” or the dinosaurs in “Jurassic Park.”
More recently, in 2016, CGI enabled the creators of “Rogue One” to digitally recreate the characters Grand Moff Tarkin and Princess Leia in a process very similar to deepfaking — a technology that was later used in 2020 and 2021 to bring back a young Luke Skywalker in “The Mandalorian” and “The Book of Boba Fett.”
The word “deepfake” is a portmanteau of the words “deep learning” and “fake.” The technique leverages machine learning to create fake audio and visual content. However, it’s not the same as using video editing software to edit a video. Deepfaking uses a framework called a generative adversarial network, or GAN, which consists of two competing neural networks.
The GAN trains a “generator” on footage of a face recorded from multiple angles, deconstructing how the face makes different expressions. The generator learns to mimic and manipulate facial features and creates new footage based on the old images.
The GAN also trains a “discriminator” to analyze and point out errors in the generated footage. The generator constantly produces new images and corrects errors caught by the discriminator until it can imitate the original face almost perfectly. Thus, a deepfake algorithm can teach itself to improve its own results.
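For readers curious about the mechanics, here is a minimal, toy sketch of that generator-versus-discriminator loop written in PyTorch. It trains on random vectors standing in for face images, so every name and dimension in it is an illustrative assumption rather than anything an actual studio pipeline uses.

```python
# Toy GAN training loop: a generator tries to fool a discriminator,
# and the discriminator tries to catch the fakes. Random vectors
# stand in for real face footage; FACE_DIM and NOISE_DIM are made up.
import torch
import torch.nn as nn

FACE_DIM, NOISE_DIM = 64, 16

# Generator: turns random noise into a fake "face" vector.
generator = nn.Sequential(nn.Linear(NOISE_DIM, 128), nn.ReLU(), nn.Linear(128, FACE_DIM))
# Discriminator: scores how likely an input is to be real footage.
discriminator = nn.Sequential(nn.Linear(FACE_DIM, 128), nn.ReLU(), nn.Linear(128, 1))

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    real_faces = torch.randn(32, FACE_DIM)   # placeholder for real footage
    noise = torch.randn(32, NOISE_DIM)
    fake_faces = generator(noise)

    # Discriminator step: learn to tell real footage from generated footage.
    d_loss = loss_fn(discriminator(real_faces), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake_faces.detach()), torch.zeros(32, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: learn to produce faces the discriminator accepts as real.
    g_loss = loss_fn(discriminator(fake_faces), torch.ones(32, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

Each pass through the loop is one round of the back-and-forth described above: the discriminator flags errors, and the generator adjusts until its output is hard to distinguish from the real thing.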
Deepfaking is done almost entirely by these constantly self-improving networks. Humans may be involved in the process, but an algorithm makes the final decisions about how a face should look and move. It lacks the imagination of artists who design and build characters from scratch or the humanity of actors who bring their talents and unique personalities to the role.
That’s what makes the deepfaked Luke Skywalker look and sound so uncanny and robotic. The character may look like a young Mark Hamill, but it isn’t really him – it’s merely how an algorithm decided Mark Hamill would talk and act based on his past performances as Luke Skywalker. It’s not art but a mere imitation of art.
The biggest shame is that all of this is completely unnecessary. The perfectly viable alternative of recasting exists, but that is something Lucasfilm has been reluctant to do since the financial failure of “Solo: A Star Wars Story,” in which Alden Ehrenreich replaced Harrison Ford as a young Han Solo.
Recasting is rarely an issue, especially when the new actor can live up to the original performance, as Ewan McGregor did in taking on the mantle of Obi-Wan Kenobi from Sir Alec Guinness in George Lucas’ “Star Wars” prequels.
Graham Hamilton, the stand-in for Luke Skywalker in the Disney+ shows, looks enough like young Mark Hamill that there was no justification for deepfaking beyond nostalgia-baiting. The same could be said of Guy Henry and Ingvild Deila standing in for Sir Peter Cushing and Carrie Fisher as Tarkin and Leia in “Rogue One.”
Actors used to be the driving force behind films, getting people into theatre seats through their star power alone. That role has now been taken up by brands and intellectual properties instead. For instance, when people went to the theatres to watch “Captain America: Civil War,” they were looking forward to watching an MCU film, not a Chris Evans film.
With the importance of actors already in steep decline, the rise of deepfaking in film could be the last straw. It will allow studios to own the likeness of talented actors and replace or resurrect them at any time with cheap algorithm-driven replicas.
Technological advancements have been the driving force behind films since day one. But using technology to create new kinds of art or push the boundaries of what’s possible in filmmaking is one thing. Using technology to replace what makes art so beautiful in the first place — the people who put their heart and soul into creating it — is another matter entirely.
Ayushi is a sophomore in Engineering.