MeInGame neural network generates a game character from a face image

MeInGame is a neural network model that generates an in-game character from a single face image. The network predicts the shape of the face and its texture, and the resulting character can be used in most existing 3D games. Experiments show that the model outperforms alternative approaches to character generation.

Why is it needed

Current systems for customizing game characters either require the user to adjust the character manually or are limited in the face shapes and textures they can produce. Techniques based on the 3D Morphable Face Model (3DMM) can accurately reconstruct a 3D face from an image. However, the meshes these models generate differ in topology from the meshes used in games, which complicates their use in computer games.
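For intuition, a 3DMM represents a face mesh as a mean shape plus weighted identity and expression basis offsets, and a network regresses those weights from the photo. The sketch below is a minimal, generic illustration of this idea; the vertex count, basis sizes, and random placeholders are assumptions for illustration, not MeInGame's actual parameterization.

```python
import numpy as np

# Generic 3DMM sketch: face = mean shape + weighted identity/expression offsets.
# All dimensions and values below are illustrative placeholders.
n_vertices = 35709                                       # assumed vertex count
mean_shape = np.zeros((n_vertices, 3))                   # placeholder mean face
id_basis = np.random.randn(80, n_vertices, 3) * 0.01     # identity basis (assumed size)
exp_basis = np.random.randn(64, n_vertices, 3) * 0.01    # expression basis (assumed size)

def reconstruct_face(alpha, beta):
    """Return mesh vertices for identity (alpha) and expression (beta) coefficients."""
    return (mean_shape
            + np.tensordot(alpha, id_basis, axes=1)
            + np.tensordot(beta, exp_basis, axes=1))

# In a 3DMM-based reconstruction pipeline a network predicts alpha/beta
# from the input photo; here we simply sample them to show the shapes involved.
alpha = np.random.randn(80) * 0.1
beta = np.random.randn(64) * 0.1
face_mesh = reconstruct_face(alpha, beta)   # (n_vertices, 3) vertex positions
```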

In addition, current models require a large amount of facial texture data for training, and collecting such datasets is laborious and time-consuming. MeInGame requires less training data and can be integrated into video games.

More about the method

MeInGame consists of three parts:

  1. A low-effort method for collecting face texture data;
  2. An algorithm for transferring the face shape from the 3DMM topology to the mesh topology used in games (a simplified sketch of this idea follows the list);
  3. A pipeline for training 3D face reconstruction models for game characters.
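MeInGame's actual transfer algorithm is more involved than what is described here; the sketch below only illustrates the general idea of mapping per-vertex displacements computed on the 3DMM mesh onto a differently-topologized game mesh via nearest-neighbor correspondences. The function name, the alignment assumption between the two neutral meshes, and the correspondence scheme are all assumptions made for illustration.

```python
import numpy as np
from scipy.spatial import cKDTree

def transfer_shape(dmm_mean, dmm_reconstructed, game_neutral):
    """Map per-vertex displacements from a 3DMM mesh onto a game mesh.

    dmm_mean:          (N, 3) 3DMM mean-face vertices
    dmm_reconstructed: (N, 3) 3DMM face reconstructed from the photo
    game_neutral:      (M, 3) neutral game-character head vertices,
                       assumed roughly aligned with the 3DMM mean face
    """
    # How far each 3DMM vertex moved away from the neutral (mean) face.
    displacement = dmm_reconstructed - dmm_mean

    # Correspondence: each game-mesh vertex borrows the displacement of
    # its nearest neighbor on the 3DMM mean mesh.
    tree = cKDTree(dmm_mean)
    _, nearest = tree.query(game_neutral, k=1)

    # Deformed game mesh with the photo's face shape, (M, 3).
    return game_neutral + displacement[nearest]
```

A practical transfer would also need to handle scale and pose alignment between the two meshes and smooth the borrowed displacements; the nearest-neighbor version is only the simplest possible correspondence.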

The proposed method not only generates detailed and realistic player faces that closely resemble the input images, but is also robust to lighting conditions and occlusions. The project's source code and dataset are available in an open repository on GitHub.
