What is Avatar ENSigne?

Introduction

The software we wish to create can be compared to a word processor, but for sign languages. This obviously means that we have to find equivalents to letters and words; the final software will look nothing like an actual word processor. What we want to keep is the concept:

  • the usual word processing software lets you type your text as input;
  • it then enables you to edit your work without having to type it entirely again:
    1. you can change each letter separately,
    2. and, if you want, you can get auto-completion for words that appear frequently;
  • finally, you get a final product that is potentially anonymous.
Now, let's transpose that to sign languages:
  • the input should be a video or a text file;
  • editing without having to redo the entire video should be possible:
    1. changes should be allowed on each parameter of sign language separately (hand shape, facial expression, placement and so on);
    2. the user should have access to a "dictionary" of signs (for which no parameter assembling is needed because the whole sign is already implemented): each user should be able to add as many signs as they want to the dictionary, and we might want to create a module that gathers all this data, because it could be useful at some point (see the sketch after this list);
  • the final sign language speech should be performed by an avatar so that it is anonymous; an added feature could be the possibility to change the appearance of the avatar, just like one can change the font of a written text.
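To make the "parameters plus dictionary" idea concrete, here is a minimal sketch of one possible data model in Python. All the names (Sign, handshape, the example glosses) are illustrative assumptions, not part of any existing implementation; the point is only that a sign is stored as a small set of independent parameters, so editing one of them does not require redoing anything else.

    from dataclasses import dataclass, replace
    from typing import Dict, List

    @dataclass(frozen=True)
    class Sign:
        """One sign broken down into the parameters used in linguistic analysis.
        (Hypothetical model: field names and value sets are placeholders.)"""
        gloss: str                          # label of the sign, e.g. "HELLO"
        handshape: str                      # e.g. "flat", "fist", "index"
        location: str                       # placement in signing space, e.g. "chin"
        movement: str                       # e.g. "arc-outward", "circular"
        facial_expression: str = "neutral"  # non-manual component

    # The user's personal "dictionary": fully specified signs, ready to reuse.
    dictionary: Dict[str, Sign] = {
        "HELLO": Sign("HELLO", "flat", "temple", "arc-outward"),
        "THANK-YOU": Sign("THANK-YOU", "flat", "chin", "arc-outward"),
    }

    # An utterance is an ordered list of signs, like words in a sentence.
    utterance: List[Sign] = [dictionary["HELLO"], dictionary["THANK-YOU"]]

    def edit_parameter(utterance: List[Sign], index: int, **changes) -> List[Sign]:
        """Change one or more parameters of one sign without touching the rest,
        the sign-language equivalent of correcting a single letter."""
        edited = list(utterance)
        edited[index] = replace(edited[index], **changes)
        return edited

    # Example: soften the facial expression of the second sign only.
    utterance = edit_parameter(utterance, 1, facial_expression="smile")
    print(utterance[1])

Keeping each parameter as an independent field is what carries the "edit one letter" analogy over to sign language: a single change touches one field of one sign, and the data-gathering module mentioned above would simply collect the entries users add to their dictionaries.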


Scientific goal and expected results

In order to implement what is described in the previous part, we'll have to work on the animation of an avatar from a description based on a break-down of the movement into the parameters used in linguistic analysis. The fluidity of the avatar would be one of the main achievements.

Another part of the work will be the creation of a user interface that will allow the user to edit the "text" and also save signs in the dictionary.

The last part is the use of video as an input. This part might be too difficult to be dealt with completely by the end of the school year, because we want to extract the set of parameters from the video, which is quite a challenge; see the sketch below for one possible starting point. The success of this part will also depend on the amount of accurately annotated videos we can gather.
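For the video-input part, one possible approach (an assumption on our side, not a decided design) is to run an off-the-shelf landmark extractor such as MediaPipe Holistic over each frame and treat the resulting hand, face and body landmarks as the raw material from which the linguistic parameters could later be estimated. The sketch below only covers this landmark-extraction step; turning landmarks into hand shape, placement or facial-expression labels is exactly the hard part mentioned above and is not solved here. The file name is hypothetical.

    import cv2                      # assumed dependency: OpenCV, to read video frames
    import mediapipe as mp          # assumed dependency: MediaPipe, for landmarks

    def extract_landmarks(video_path: str):
        """Yield per-frame hand, face and body landmarks from a signing video.
        This is only the raw-feature step; mapping landmarks to sign-language
        parameters (hand shape, placement, ...) still has to be designed."""
        holistic = mp.solutions.holistic.Holistic(static_image_mode=False)
        cap = cv2.VideoCapture(video_path)
        try:
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
                results = holistic.process(rgb)
                yield {
                    "left_hand": results.left_hand_landmarks,
                    "right_hand": results.right_hand_landmarks,
                    "face": results.face_landmarks,
                    "body": results.pose_landmarks,
                }
        finally:
            cap.release()
            holistic.close()

    # Example: count frames in which the right hand was detected in a (hypothetical) clip.
    if __name__ == "__main__":
        detected = sum(1 for f in extract_landmarks("example_sign.mp4") if f["right_hand"])
        print(f"right hand detected in {detected} frames")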


Targeted users

The first obvious set of targeted users is the signing deaf community. Indeed, the actual goal is to create a written form of sign language, which is lacking in some ways.

Creating a written form would benefit the language itself: spreading news and pieces of information would be made easier, and knowledge about sign language could be improved, allowing everyone to get a better grasp of the analysis of this language as well as the ability to step back from its natural expression and look at how it is actually built. It would also benefit the deaf community as a whole: written communication would become easier, since many deaf people aren't comfortable with writing a language that is not natural to them and is badly taught; and the means of information sharing and teaching would improve, with the possibility to create material in written Sign Language (which, once deaf children master it, would make the teaching of another written language easier for them).