Gaming: A Synthetic Snapshot

Thomas Graham
Published in Metaphysic.ai
Mar 3, 2022

Whether it’s a classic like Tetris or Pong, an open-world game landscape, or the complex mechanics of a first-person shooter, almost all video games are built from synthetic models and computer-generated graphics.

While gaming has made massive strides in graphics, gameplay, and immersive world-building features such as in-game physics, many video games still feature characters that sit firmly within the uncanny valley and require mammoth budgets to create.

What’s more, most games do not allow users to truly personalize their characters, and the few that do offer only limited, awkward tools.

However, AI-generated and hyperreal synthetic media are already having a profound effect on how games are made and experienced.

Epic’s MetaHuman Creator software shows the impressive progress AI has enabled in generating realistic synthetic characters. Credit: Epic Games

More efficient game production

Creating games, particularly in the “triple-A” category, is a notoriously complex process, often involving thousands of developers and budgets exceeding $100m. However, new synthetic media technologies are helping to make some key processes of game building less expensive and more time-efficient.

One of these technologies is photogrammetry, a technique for ‘importing’ real-world objects as realistic digital 3D models by photographing them from many angles and reconstructing their geometry with computer vision software.

Populating realistic virtual worlds used to require painstaking, click-by-click manual modeling of all of a game’s objects and textures. However, photogrammetry has allowed developers such as Capcom, the studio behind Resident Evil, to create increasingly realistic and detailed game worlds at a fraction of the cost and time.

Capcom used photogrammetry to create realistic 3D models of real-world objects for its blockbuster title Resident Evil 7. Credit: Capcom Studios
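For the curious, here is a minimal sketch of the very first step in a typical photogrammetry pipeline: finding matching feature points across overlapping photos of an object. It uses the open-source OpenCV library purely for illustration (studios such as Capcom rely on far more sophisticated, often proprietary tooling), and the file names are hypothetical. A full pipeline would go on to estimate camera poses, triangulate a dense point cloud, and build a textured mesh.

```python
# Minimal sketch of the first step in a photogrammetry pipeline:
# detecting and matching features between two overlapping photos of an object.
# OpenCV is used here only as an accessible stand-in for production tooling.
import cv2

# Load two overlapping photos of the object (hypothetical file names)
img1 = cv2.imread("object_view_1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("object_view_2.jpg", cv2.IMREAD_GRAYSCALE)

# Detect keypoints and descriptors with ORB
orb = cv2.ORB_create(nfeatures=5000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Match descriptors between the two views; these correspondences are what
# later stages triangulate into a 3D point cloud and, eventually, a mesh
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

print(f"Found {len(matches)} candidate correspondences between the two views")
```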

Another crucial part of many games is recording voice audio for in-game characters and dialogue. This can be hugely expensive and time-consuming: the best-selling title Red Dead Redemption 2 features over 500,000 lines of voice dialogue recorded by 700 voice actors.

However, game developer Obsidian partnered with synthetic voice startup Sonantic to generate entirely synthetic voices for characters in its game The Outer Worlds, which itself features some 600,000 words of dialogue. The synthetic voice models are highly realistic and customizable, going unnoticed by many players while saving the developer significant costs and avoiding the logistical challenges of recording audio with many different voice actors.

Outer Worlds developer Obsidian partnered with voice synthesis startup Sonantic to create realistic synthetic voices for in-game characters. Credit: Obsidian Entertainment
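Sonantic’s tools are proprietary, so the snippet below is only a rough illustration of the general idea: batch-generating audio for scripted dialogue lines directly from text. It uses the open-source Coqui TTS library, and the line IDs, dialogue text, and model name are assumptions made for the sketch, not anything Obsidian or Sonantic actually shipped.

```python
# Rough illustration of text-to-speech dialogue generation using the
# open-source Coqui TTS library (an assumption; not Sonantic's tooling).
from TTS.api import TTS

# Hypothetical dialogue lines, keyed by a script identifier
script_lines = [
    ("npc_vendor_001", "Welcome to the colony, traveler."),
    ("npc_vendor_002", "Safe travels out there."),
]

# Load a pretrained English voice model (the model name is just an example)
tts = TTS(model_name="tts_models/en/ljspeech/tacotron2-DDC")

for line_id, text in script_lines:
    # Write one WAV file per line, named after its script identifier
    tts.tts_to_file(text=text, file_path=f"{line_id}.wav")
```

In a real production pipeline, the same batch approach is what makes synthetic voices attractive: dialogue rewrites can be re-rendered in minutes rather than rebooked into a recording studio.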

Expanding the hyperreal in gaming

Synthetic media is not just about cost-cutting and improving the process of creating games. In the case of hyperreal synthetic media, it’s also about pushing the boundaries of what gamers believe to be possible, as well as driving new gaming experiences.

One of the most exciting possibilities is that in-game characters will finally move beyond the uncanny valley, offering players a new level of immersion and realism. At Metaphysic, we recently released a demo showing how hyperreal synthetic media could transform gamers’ experience of FIFA by bringing the world’s most recognizable players to life with unprecedented realism.

Metaphysic’s hyperreal FIFA demo was designed to show the immense possibilities synthetic media could unlock in realistic gaming.

This use of hyperreal synthetic media could easily translate to other sports games featuring famous athletes, as well as to the growing number of actors lending their likenesses to games, such as Norman Reedus and Mads Mikkelsen in Hideo Kojima’s Death Stranding or Keanu Reeves in CD Projekt Red’s Cyberpunk 2077.

Mads Mikkelsen’s motion-captured performance for Hideo Kojima’s Death Stranding hints at how performance capture and hyperreal synthetic media could combine to create compelling video game experiences. Credit: Kojima Productions

Looking to the future, hyperreal synthetic media won’t just allow celebrities to be realistically recreated in-game; it will do the same for the everyday player! As hyperreal technologies become more accessible and scalable, gamers will be able to import their own hyperreal likenesses onto their custom in-game characters.

Instead of painstakingly adjusting sliders to create a character that ‘sort of’ looks like you, AI will carry out the heavy lifting of making the player hyperreal, while still allowing them to customize their likeness to better express themselves in different games.

What does the future of gaming look like?

Just as early forms of synthetic media are entwined with gaming’s origins, synthetic media is also intimately connected to gaming’s future.

As generative AI improves and becomes more accessible, innovative game developers will harness hyperreal assets to cut costs while preserving immersive gameplay and transporting players into personalized gaming experiences like never before.

Metaphysic builds software to help creators make incredible content with the help of artificial intelligence. Find out more: www.metaphysic.ai

For more info please contact info@metaphysic.ai or press@metaphysic.ai

About the author: in a galaxy far away, I was a lawyer turned internet & society researcher. In the 7 years before co-founding Metaphysic, I built tech companies in SF and London. I have always been obsessed with computational photography and computer vision, so it is a thrill to work alongside amazing people on the next evolution in how we build and perceive reality — one pixel at a time.

© Thomas Graham & Metaphysic Limited 2021.
