You can catch a glimpse of the hyperrealistic future of computer graphics in the latest update of Epic Games’ Unreal Engine, one of the most widely used platforms for developing video games. Beyond the graphics, the growing sophistication and accessibility of platforms like this are making it easier than ever for small game studios and independent filmmakers to create realistic digital worlds.
What’s new? Unreal Engine debuted in 1998 with the video game Unreal, a first-person shooter that gave the industry groundbreaking visual effects (particularly through more detailed textures), smarter NPCs, and UnrealScript, a scripting language that allowed for more innovative game design and customization.
“Despite what it says on the box, this world isn’t unreal. It’s hyperreal,” The New York Times wrote in 1998 shortly after the game’s release. “Everything you see and hear has been sharpened. Crystallized. Heightened. Torch flickers seem more flamelike than real fire. Water is more watery. Footsteps are crisper. Large exhaust fans are backlighted with just the right amount of dust in the room so the blades cast perfect cinematic shadows.”
Even today, Unreal’s graphics look surprisingly good. But in terms of photorealism, it’s nothing compared to what later iterations of Unreal Engine produced, such as Tom Clancy’s Splinter Cell: Chaos Theory (Unreal Engine 2), Batman: Arkham Knight (Unreal Engine 3), and Observer (Unreal Engine 4), to name just a handful of the hundreds of games built using the engine.
In April 2022, Epic Games released Unreal Engine 5, an update that has enabled games to feature graphics so realistic that it can be hard to tell at a glance whether you’re looking at a video game or real footage. The demo of Unrecord, a bodycam-style shooter built in Unreal Engine 5, is a striking example.
Photorealism in computer graphics: One feature in Unreal Engine 5 that boosts the realism of graphics is Nanite, a virtualized geometry system that lets games display extremely high-fidelity visuals without the performance hit that rendering so much detail would normally incur.
On the smallest scale, every object you see in a video game or a computer-generated film is built from tiny triangles, anywhere from a handful to millions or even billions of them. The more detailed the object, the more computing power it takes to render. To reduce that load, game developers have long relied on a strategy called “Level of Detail” (LOD): multiple versions of the same object are created, each at a different level of detail. As you move farther from an object in the game world, the engine swaps in a less detailed version, preserving performance without noticeably sacrificing visual quality.
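To make that concrete, here is a minimal sketch of hand-authored LOD selection: the renderer picks one of several pre-built mesh versions based on the camera’s distance. The mesh names, triangle counts, and distance bands are invented for illustration and are not Unreal Engine’s actual API.

```cpp
#include <iostream>
#include <string>
#include <vector>

struct LodLevel {
    std::string meshName;   // e.g. "statue_LOD0" (full detail) down to "statue_LOD3"
    int triangleCount;      // cost of rendering this version
    float maxDistance;      // farthest distance at which this version is used
};

// Pick the most detailed version whose distance band covers the camera.
const LodLevel& selectLod(const std::vector<LodLevel>& lods, float distanceToCamera) {
    for (const LodLevel& lod : lods) {
        if (distanceToCamera <= lod.maxDistance) return lod;
    }
    return lods.back();  // beyond the last band, fall back to the coarsest mesh
}

int main() {
    // A hypothetical LOD chain an artist might author for one statue.
    std::vector<LodLevel> statueLods = {
        {"statue_LOD0", 500000, 10.0f},   // close-up: full detail
        {"statue_LOD1", 100000, 30.0f},
        {"statue_LOD2", 20000, 80.0f},
        {"statue_LOD3", 2000, 1e9f},      // far away: a rough silhouette is enough
    };

    for (float distance : {5.0f, 25.0f, 60.0f, 200.0f}) {
        const LodLevel& lod = selectLod(statueLods, distance);
        std::cout << "At " << distance << " m, render " << lod.meshName
                  << " (" << lod.triangleCount << " triangles)\n";
    }
}
```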
Nanite essentially automates this process: Instead of designers spending time creating and configuring multiple versions of the same object, the system dynamically adjusts the resolution of 3D models as you move through the game, presenting high-resolution detail up close and simplifying objects as they recede into the distance. To avoid wasting resources, the system also renders only the triangles that fall within the player’s field of view.
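The sketch below illustrates the underlying idea, not Nanite’s real cluster-based algorithm: estimate how much screen space an object covers, spend roughly one triangle per covered pixel, and skip objects outside the camera’s view entirely. All object names and numbers are made up for the example.

```cpp
#include <algorithm>
#include <cmath>
#include <iostream>

struct SceneObject {
    const char* name;
    float distance;        // metres from the camera
    float radius;          // bounding-sphere radius in metres
    float angleFromView;   // degrees off the camera's forward axis
    long  sourceTriangles; // triangles in the full-detail source mesh
};

// Approximate how many pixels the object's bounding sphere covers on a
// 1080p screen with a 90-degree vertical field of view.
float coveredPixels(const SceneObject& obj) {
    const float pi = 3.14159265f;
    float angularRadius = std::atan(obj.radius / obj.distance);  // radians
    float halfFov = 45.0f * pi / 180.0f;
    float pixelRadius = 540.0f * angularRadius / halfFov;
    return pi * pixelRadius * pixelRadius;
}

int main() {
    SceneObject scene[] = {
        {"rock_closeup", 2.0f, 1.0f, 10.0f, 2000000},
        {"rock_far", 150.0f, 1.0f, 30.0f, 2000000},
        {"rock_behind_player", 5.0f, 1.0f, 170.0f, 2000000},
    };

    for (const SceneObject& obj : scene) {
        // Crude visibility test: skip anything well outside the view cone.
        if (obj.angleFromView > 60.0f) {
            std::cout << obj.name << ": outside the view, not rendered\n";
            continue;
        }
        // Roughly one triangle per covered pixel, capped at the source mesh.
        long budget = std::min(obj.sourceTriangles, (long)coveredPixels(obj));
        std::cout << obj.name << ": render ~" << budget << " triangles\n";
    }
}
```

Run it and the close-up rock gets a budget of hundreds of thousands of triangles, the distant rock only a few dozen, and the rock behind the player none at all, which is the same resource-saving logic, just vastly simplified.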
But dynamically rendering high-fidelity objects is only part of the battle in achieving photorealism: Light must look natural, too. That’s why Unreal Engine 5 uses a system called Lumen, which automatically calculates how light from a source (like the Sun or a lamp) would naturally illuminate an environment in real time, a step of development that used to rely on static lighting baked into scenes by hand.
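For a sense of what “calculating light” means, here is a deliberately simple direct-lighting calculation. It is nothing like Lumen’s global illumination, which also bounces light between surfaces, but it shows the kind of math an engine can now re-evaluate every frame instead of baking the result into textures ahead of time. The scene and numbers are invented.

```cpp
#include <algorithm>
#include <cmath>
#include <iostream>

struct Vec3 { float x, y, z; };

Vec3 subtract(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
float length(Vec3 v) { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }
Vec3 normalize(Vec3 v) { float l = length(v); return {v.x / l, v.y / l, v.z / l}; }
float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Diffuse (Lambertian) lighting: brightness depends on the angle between
// the surface normal and the direction to the light, and falls off with
// the square of the distance to the light source.
float diffuseLighting(Vec3 surfacePoint, Vec3 surfaceNormal,
                      Vec3 lightPosition, float lightIntensity) {
    Vec3 toLight = subtract(lightPosition, surfacePoint);
    float distance = length(toLight);
    float facing = std::max(0.0f, dot(surfaceNormal, normalize(toLight)));
    return lightIntensity * facing / (distance * distance);
}

int main() {
    Vec3 floorPoint = {0.0f, 0.0f, 0.0f};
    Vec3 floorNormal = {0.0f, 1.0f, 0.0f};  // the floor faces straight up

    // Slide a lamp across the ceiling and watch the floor's brightness change;
    // an engine recomputes values like this as lights or the player move.
    for (float lampX : {0.0f, 2.0f, 4.0f}) {
        Vec3 lamp = {lampX, 3.0f, 0.0f};
        float brightness = diffuseLighting(floorPoint, floorNormal, lamp, 100.0f);
        std::cout << "Lamp at x=" << lampX << ": floor brightness "
                  << brightness << "\n";
    }
}
```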
Another major innovation is more refined procedural content generation — essentially, the game engine automatically populates a world with models (or “meshes”) based on the developer’s selected parameters. Previously, if you wanted to build a game that took place in, say, a forest, you’d typically have to spend time manually designing each aspect of the environment: trees, pathways, hills.
Procedural content generation does this automatically, allowing game designers to create varied landscapes in a fraction of the time. This can save studios a ton of resources, especially if it turns out they need to make edits to a game level: the tool makes editing more of a “drag and drop” process rather than starting from scratch. (Procedural generation has been used in video games since at least 1984’s space game Elite, though the technique has become much more sophisticated over the past decade.)
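As a rough sketch of the parameter-driven idea (not Unreal Engine’s actual PCG framework), the snippet below scatters trees across an area from a handful of designer-chosen parameters; the same seed always rebuilds the same forest, and changing a parameter regenerates it instantly. Everything here is invented for illustration.

```cpp
#include <iostream>
#include <random>
#include <vector>

struct Tree { float x, y, scale; };

// Generate tree positions deterministically from a seed, an area size, and
// a tree count: the "designer parameters" of this toy forest.
std::vector<Tree> scatterTrees(unsigned seed, float areaSize, int treeCount) {
    std::mt19937 rng(seed);  // fixed seed means a reproducible layout
    std::uniform_real_distribution<float> position(0.0f, areaSize);
    std::uniform_real_distribution<float> scale(0.8f, 1.4f);  // size variation

    std::vector<Tree> trees;
    trees.reserve(treeCount);
    for (int i = 0; i < treeCount; ++i) {
        trees.push_back({position(rng), position(rng), scale(rng)});
    }
    return trees;
}

int main() {
    // One seed, a 500 m square, 1,000 trees.
    auto forest = scatterTrees(42, 500.0f, 1000);

    std::cout << "Placed " << forest.size() << " trees. First three:\n";
    for (int i = 0; i < 3; ++i) {
        std::cout << "  tree at (" << forest[i].x << ", " << forest[i].y
                  << "), scale " << forest[i].scale << "\n";
    }
}
```

A real engine would layer on rules such as keeping trees off pathways or clustering them by biome, but the principle is the same: the designer authors the parameters, not each individual placement.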
Leveling the playing field: Creating massive, photorealistic digital worlds was once only possible if you had the deep pockets and large teams of AAA game studios like Electronic Arts and Ubisoft. But with the increasing sophistication and automation of platforms like Unreal Engine, it’s becoming much easier for smaller teams to enter the ring.
Games made by independent developers have been growing increasingly popular in recent years, becoming the only class of games to grow among American gamers from 2021 to 2022, according to a YouGov survey. Of course, indie games don’t always aim to be photorealistic. (After all, it’s still relatively resource-intensive to do so, and there are arguably more important aspects to a good video game: physics, level design, story.)
The main advantage of modern game engines is not necessarily enhanced realism of the final product but time saved. That applies to independent filmmakers, too. Although computer-generated blockbusters like Avatar: The Way of Water — which was produced on an estimated budget of about $400 million and included a mix of computer-generated imagery and human actors — will likely remain the gold standard for computer graphics in film for some time, you can find countless independent short films whose computer visuals would’ve been nearly impossible to achieve just a decade ago.
Soon, the difference between the two may be hard to spot.