Why this project?

Why would such an Avatar be useful?

Why not just use the written form of their country's oral language?

Many deaf people are not very comfortable with writing because they have to write in their second language, which does not come naturally to them since they cannot hear it. On top of that, the methods currently in place to teach reading and writing to deaf children are not yet fully thought through [1]. As a result, writing in French, English, or any other oral language is not ideal for deaf people.

Then we could just create a written form of sign languages?

That is exactly what we are trying to create here, using computers instead of the old pen and paper, because those have proven less than ideal.

In fact, there have been attempts to create a written form of sign language, but none of them has really caught on yet. Spoken languages are sequences of consecutive sounds, which are fairly easy to write down with an alphabet, but sign languages use 3D space, simultaneity, and movement, making the writing process more complicated. Thankfully, computers offer us a means to create a more dynamic approach to writing that might just be enough to capture the complexity of sign languages.

Right now, people tend to say that video is the written form of sign language. However, experts disagree, because filming oneself is still a spontaneous form of expression, which cannot be easily edited.

Other difficulties with video lie in the absence of anonymity and in file size: videos are too heavy for email attachments, for instance, and take up a lot of space on your devices.

Other applications

Within this community, we especially target schools and universities, which could use this software not only to create teaching material but also to allow students to work in their natural language (sign language), exactly as hearing students use the written form of their spoken language to study.

It could also be a tool for analysing sign language more closely: for now, many deaf people lack the tools to step back from their language, having learned it on the fly, which is quite unfair and part of the problem.

Researchers in sign language linguistics could also benefit from this software, which would allow them to extract more information from their corpora. Sign language is also becoming more popular, and more and more people are trying to learn it. Such a tool would make that easier for them.

Finally, the whole signing community would benefit from such a tool because it would allow the creation of a dictionary that works in both directions: for now, finding the sign for a French word is easy, but if you don't know what a sign means, you likely won't find that piece of information.
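To make the "both directions" idea concrete, here is a minimal sketch of such a dictionary. All names and parameters are hypothetical, chosen only to illustrate the idea: word-to-sign is a plain lookup, while sign-to-word searches by the parameters a user can observe even when they don't know the sign's meaning.

```python
# Hypothetical entries: each gloss maps to a few observable parameters.
signs = {
    "HELLO": {"handshape": "flat", "placement": "forehead"},
    "THANKS": {"handshape": "flat", "placement": "chin"},
}

def word_to_sign(word):
    """Forward lookup: from a written word to its sign, if known."""
    return signs.get(word)

def sign_to_words(**observed):
    """Reverse lookup: find glosses whose parameters match what the
    user saw, even if they don't know what the sign means."""
    return [gloss for gloss, params in signs.items()
            if all(params.get(k) == v for k, v in observed.items())]

print(sign_to_words(placement="chin"))  # -> ['THANKS']
```

A real dictionary would of course index far richer parameter sets, but the reverse search by observable features is what current paper dictionaries cannot offer.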


What is Avatar ENSigne?

Introduction

The software we wish to create can be thought of as a word processor for sign languages. This obviously means that we have to find equivalents to letters and words; the final software will look nothing like an actual word processor. What we want to keep is the concept:

  • the usual word processing software accepts an input;
  • then it lets you edit your work without having to type it all over again:
    1. you can change each letter separately,
    2. and if you want, you can get auto-completion for words that appear frequently;
  • finally, you get a final product that is potentially anonymous.
Now, let's transpose that to sign languages:
  • the input should be a video or a text file;
  • editing without having to redo the entire video should be possible:
    1. changes should be allowed on each parameter of sign language separately (hand shape, facial expression, placement, and so on);
    2. the user should have access to a "dictionary" of signs (for which no parameter assembly is needed, because the whole sign is already implemented): each user should be able to add as many signs as they want to the dictionary, and we might want to create a module to gather all this data, because it could be useful at some point;
  • the final sign language speech should be performed by an avatar so that it is anonymous; an added feature could be the possibility to change the appearance of the avatar, just as one can change the font of a written text.

Scientific goal and expected results

In order to implement what is described in the previous part, we will have to work on the animation of an avatar from a description based on a break-down of the movement into the parameters used in linguistic analysis. The fluidity of the avatar would be one of the main achievements.

Another part of the work will be the creation of a user interface that allows the user to edit the "text" and also save signs in the dictionary.

The last part is the use of video as an input. This part might be too difficult to deal with fully by the end of the school year, because we want to extract the set of parameters from the video, which is quite a challenge. Its success will also depend on the amount of accurately annotated video we can gather.

Targeted users

The first obvious set of targeted users is the signing deaf community. Indeed, the actual goal is to create a written form of sign language, which is currently lacking.

Creating a written form would benefit the language itself: spreading news and information would be easier, and knowledge about sign language could be improved, giving everyone a better grasp of how the language is analysed as well as the ability to step back from natural expression and look at how it is actually built. It would also benefit the deaf community as a whole: written communication would become easier, since many deaf people are not comfortable with writing a language that is not natural to them and is badly taught; and information sharing and teaching would improve, with the possibility to create material in written sign language, which, once deaf children master it, would make it easier to teach them another written language.


References

[1] Philippe Séro-Guillaume. Langue des signes, surdité et accès au langage. 2008.