About us

Introduction about Avatar 3D

One crucial part of the overall project is rendering a 3D avatar. First, because it is at the core of the project: it is named "avatar" because one goal is to allow deaf people to easily edit and produce signed content, and to speak anonymously. Secondly, for visibility. If the project comes to fruition, it will be very important to be able to spread it all over the world. The whole team contributes to this project because we all realized, even if only for a brief moment, what it could add to both the signing community and non-signing people. We think that it could lead to a substantial improvement in people's lives, and we are ready to work hard to complete it. But this would not be possible if, afterwards, only a few people cared about it. This is why a polished 3D avatar is important: it will let us share a visible part of our work, one that even non-signing people can appreciate. The work in this package is intended to be divided as follows:

  • The 3D modeling and the rigging (placing bones in the model) of the avatar, which is the base on which the 3D avatar software will operate. For this, one crucial resource is the 3D modeling software Blender. This software is not only provided under the GPL licence, which matches our free-software convictions, but is also very powerful and widely used. Another tool, used in some similar projects[12] about signing, is MakeHuman, which makes it easy to produce a human 3D model with the desired size, sex, morphology and so on.
  • The software will take as input a sequence of sign descriptions and produce a video - either in real time or not - that corresponds to the input signs. The parsing part of the "button-based" software will be reused. For the actual rendering of the avatar, we considered using Blender's game engine, but the fact that exporting a standalone executable version of a game is very WYSIWYG might not let us easily interface our code and reuse the parsing part of the interface software. This is why we are considering other game engines, perhaps written in C or C++. Keeping in mind that this software could guide deaf people who are not literate through websites, another possibility would be to build the 3D part of the software on web technologies. Indeed, modern JavaScript engines are quite efficient - perhaps as efficient as natively compiled C thanks to WebAssembly - and 3D rendering can reach near-native performance thanks to WebGL. The desktop version would then embed a JavaScript engine using a tool like Electron. Moreover, several well-known WebGL 3D engines exist, such as three.js or Babylon.js.
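To make the planned input-to-animation flow more concrete, here is a minimal sketch in Python. The text format of the sign descriptions, the field names, and the `Keyframe` structure are all hypothetical placeholders invented for illustration - the real parser will be reused from the existing interface software, and the real output would drive the rig of the 3D avatar rather than print values.

```python
# Hypothetical sketch: parse a sequence of sign descriptions and expand
# them into keyframes for the 3D avatar's rig.
# The "GLOSS key=value ..." format and all field names are illustrative only.

from dataclasses import dataclass
from typing import List

@dataclass
class SignDescription:
    gloss: str            # name of the sign, e.g. "HELLO"
    handshape: str        # hypothetical articulation parameter
    location: str         # hypothetical articulation parameter

@dataclass
class Keyframe:
    time: float           # seconds from the start of the sequence
    bone: str             # rig bone to move, e.g. "hand.R"
    target: str           # symbolic pose target

def parse_sign(line: str) -> SignDescription:
    """Parse one 'GLOSS key=value ...' line into a SignDescription."""
    gloss, *params = line.split()
    fields = dict(p.split("=", 1) for p in params)
    return SignDescription(gloss=gloss,
                           handshape=fields.get("handshape", "flat"),
                           location=fields.get("location", "neutral"))

def to_keyframes(sign: SignDescription, start: float) -> List[Keyframe]:
    """Expand one sign into a (trivially simple) two-keyframe animation."""
    return [
        Keyframe(time=start, bone="hand.R", target=sign.location),
        Keyframe(time=start + 0.5, bone="hand.R", target="neutral"),
    ]

def compile_sequence(lines: List[str]) -> List[Keyframe]:
    """Turn a whole sequence of sign descriptions into a keyframe list."""
    frames: List[Keyframe] = []
    for i, line in enumerate(lines):
        frames.extend(to_keyframes(parse_sign(line), start=i * 1.0))
    return frames

frames = compile_sequence(["HELLO handshape=open location=chin",
                           "THANKS location=chin"])
print(len(frames))        # 4 (two keyframes per sign)
print(frames[0].target)   # chin
```

Keeping the parsed descriptions and the keyframe expansion as separate stages is what would let the same parser feed either a native renderer or a WebGL one.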
Although this part is relatively simple compared to the recognition part, the literature warns us about the deceptive simplicity of the task. Indeed, in Sign Language Avatars: Animation and Comprehensibility, we discovered that poorly animated avatars, which do not reproduce gestures faithfully enough, can lead to comprehension issues. One related potential problem is the "beauty" of the avatar. The approach taken by the game industry - scanning real people's faces and capturing actors' movements - seems to indicate that, for now, there is no simple way to make an entirely computer-designed human model whose movements look truly human. This might be a problem, since the very body of the avatar will be used to convey information - a problem that does not appear, for instance, with MMORPG player avatars, because those players communicate by voice or text. One way to solve this problem would be to remove the human aspect almost entirely. Just as emojis can convey emotions, a simplified avatar might do the trick: while a small flaw in the animation of a sign performed by a realistic human avatar could be directly interpreted as a mistake when compared to the real-life sign, the same flaw would only be seen as a characteristic of a simplified avatar, because there is no straightforward comparison.


Contributions
