In role-playing games (RPGs) such as the modern crime classic Grand Theft Auto, many players create their in-game characters based on their own appearance. Although today's built-in character customization systems are becoming increasingly sophisticated, they can involve tedious manual adjustment of dozens or even hundreds of parameters and can take several hours to complete.
A team of researchers from the Chinese gaming giant NetEase has developed a method to automatically create players' in-game characters from a standard portrait photo. They break down the details of their method in the paper Face-to-Parameter Translation for Game Character Auto-Creation.
The character generation process starts by aligning the human player’s portrait photo, which is used as the training input for a deep learning-based framework comprising an imitator module and a feature extractor.
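The paper does not spell out its alignment step, but landmark-based alignment is the standard approach: detect a few facial landmarks in the photo and fit a similarity transform (scale, rotation, translation) that maps them onto a canonical template. A minimal sketch, using an Umeyama-style closed-form fit; the landmark coordinates and the three-point template are illustrative assumptions, not values from the paper:

```python
import numpy as np

def align(src, dst):
    """Least-squares similarity transform (scale, rotation, translation)
    mapping landmarks `src` onto template landmarks `dst` (Umeyama-style)."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    s, d = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(d.T @ s)   # SVD of the cross-covariance
    R = U @ Vt                          # optimal rotation
    scale = S.sum() / (s ** 2).sum()    # optimal isotropic scale
    t = mu_d - scale * R @ mu_s         # translation
    return scale, R, t

# Toy landmarks: two eye corners and the mouth of a canonical face template.
template = np.array([[0.3, 0.3], [0.7, 0.3], [0.5, 0.7]])

# The same points as they might appear in a photo: scaled, rotated, shifted.
theta = 0.1
Rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
photo_pts = 2.0 * (template @ Rot.T) + np.array([5.0, -3.0])

scale, R, t = align(photo_pts, template)
aligned = scale * (photo_pts @ R.T) + t
print(np.allclose(aligned, template))  # True: the photo landmarks now sit on the template
```

With the photo warped into this canonical pose, the framework's two modules can compare it consistently against rendered faces.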
The imitator is designed to simulate the behavior of the game engine to automatically create a character in a style consistent with the ground truth (generated by the game engine). Taking into account user-customized facial parameters such as hairstyle, eyebrow style, beard style, and lipstick style, the deep generative network-based imitator eventually produces a rendered facial image. In the paper the researchers explain that the gradient can smoothly backpropagate to the input, enabling the facial parameters to be updated by gradient descent.
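The key point is that, unlike the game engine itself, the trained imitator is differentiable, so a loss on its rendered output can be backpropagated all the way to the input parameters. A minimal sketch of that idea, with a frozen linear map W standing in for the deep generative imitator; all dimensions and values are illustrative assumptions:

```python
import numpy as np

# Toy setup: 3 facial parameters (stand-ins for sliders like eyebrow or lip
# shape) rendered to a 4-"pixel" image by a frozen, already-trained imitator
# G(x) = W @ x that mimics the game engine's renderer.
W = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 1.0, 1.0]])

def imitator(x):
    return W @ x

# "Ground truth" render the game engine would produce for target parameters.
x_true = np.array([0.5, -0.2, 0.3])
target = imitator(x_true)

# Because the imitator is differentiable end to end, the gradient of the
# pixel loss L = 0.5 * ||G(x) - target||^2 flows back to the parameters x,
# which are then refined by plain gradient descent.
x = np.zeros(3)
for _ in range(300):
    residual = imitator(x) - target   # dL/d(output)
    x -= 0.1 * (W.T @ residual)       # chain rule back through G

print(np.round(x, 4))  # converges to x_true = [0.5, -0.2, 0.3]
```

In the real system the imitator is a deep convolutional network and the loss is measured between its render and the player's photo, but the optimization pattern is the same.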
Once the imitator is trained, the feature extractor measures facial similarity in feature space under a neural style transfer framework, and the facial parameters are optimized against this measure using gradient descent.
But how can the accuracy of both the global appearance and the local details be ensured during the cross-domain transition? Essentially, this is a cross-domain (real-world human photo to anime-like 3D character) image similarity measurement problem. The researchers used a deep convolutional neural network and multi-task learning to tackle the issue. Their solution was to leverage two carefully designed loss functions: a discriminative loss and a facial content loss. The shape of a person's face and the overall facial impression belong to the global appearance, while more specific traits such as shadows within a given local region are captured by the content term.
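The paper's own losses are computed on deep network features; the sketch below only illustrates how such a two-term objective can be combined into a single number to minimize. The weights `alpha` and `beta` and the specific distance choices here are illustrative assumptions:

```python
import numpy as np

def cosine_distance(a, b):
    # Global-appearance term: 1 - cosine similarity between identity
    # embeddings, insensitive to overall scale of the vectors.
    return 1.0 - (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def l1_distance(a, b):
    # Local-detail term: mean absolute difference between content
    # feature maps of the render and the photo.
    return np.mean(np.abs(a - b))

def total_loss(id_render, id_photo, feat_render, feat_photo,
               alpha=1.0, beta=0.5):
    """Hypothetical weighted sum of a discriminative (identity) loss on
    global appearance and a facial content loss on local details."""
    return (alpha * cosine_distance(id_render, id_photo)
            + beta * l1_distance(feat_render, feat_photo))

# Identical embeddings and feature maps give (near-)zero loss;
# any mismatch in either term increases it.
e = np.array([1.0, 0.0, 2.0])
f = np.ones((4, 4))
print(round(total_loss(e, e, f, f), 6))  # 0.0
```

Minimizing such a combined objective with gradient descent is what drives the facial parameters toward both the right overall face shape and the right local traits.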
A standout feature of this paper is its 3D face reconstruction approach, which creates a bone-driven model, unlike other 3D face reconstruction approaches that produce 3D face meshes. The model thus predicts a set of facial parameters with clearer physical meaning.
The automatic generation process is not limited to photographs; it also works well with artistic portrait inputs such as sketches and caricatures. The researchers believe the 3D characters generated this way bear a high degree of similarity to the input 2D pictures because the approach relies on facial semantics rather than raw pixels.
Although the new method can generate an in-game character automatically, players can also use it as a supplementary tool in their own DIY character creation process. The researchers point out that the approach also enables tweaking details and making further general changes according to a user's needs.
The face translation method has already been used over one million times by Chinese gamers.
The paper Face-to-Parameter Translation for Game Character Auto-Creation is on arXiv.
Journalist: Fangyu Cai | Editor: Michael Sarazen