
SignAll is the first system in the world able to recognize full sign language.

We attribute our success to these three areas of focus:

System Features:

Teachers

Teacher dashboard (accessible online) that includes:

  • Student / classroom management

  • Reporting on student progress, scoring and usage

  • ASL lesson content available for review

Students

  • ASL lessons and games accessible on the system

  • Student access to lessons online (video only, no sign recognition) to extend learning when away from the SignAll system

SignAll Learn UI

SignAll has the largest database of ASL vocabulary in the world

Thanks to our partnership with Gallaudet University

In 2017, a strong partnership was forged with Gallaudet University (GU), the world’s leading university for the Deaf and Hard of Hearing (located in Washington D.C.).  GU has been instrumental in moving the technology forward, providing guidance, and opening doors for SignAll.

In addition to this partnership, SignAll has been dedicated to having Deaf team members on board to provide essential input and to keep our vision grounded in a way that celebrates Deaf culture and its heritage language.

SignAll has collected and annotated the largest ASL vocabulary database in the world, from a range of native signers. 

 

Computer Vision and AI drive the technology

  • A 3D camera captures a “point cloud” image that encodes depth instead of color.

  • Additional 3D (color) cameras detect the colored markers on gloves. 

  • The information from both camera types is merged so we can locate where a signer is in 3D space (a toy sketch follows this list).

  • As camera hardware improves, data quality will also improve. Over time we will be able to reduce the amount of hardware required, leading to a more portable solution.  
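
As an illustration only (this is not SignAll's actual pipeline), the sketch below shows how a marker found in a 2D color image can be combined with a registered depth image to recover its position in 3D space; the camera intrinsics are hypothetical placeholder values.

```python
# Minimal sketch: back-projecting a marker detected in a color image into
# 3D space using a registered depth image. Intrinsics are assumed values.
import numpy as np

FX, FY = 600.0, 600.0  # focal lengths in pixels (assumed)
CX, CY = 320.0, 240.0  # principal point (assumed)

def backproject(u: int, v: int, depth_m: np.ndarray) -> np.ndarray:
    """Turn pixel (u, v) plus its depth value into a 3D point (x, y, z) in meters."""
    z = depth_m[v, u]
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return np.array([x, y, z])

# Example: a glove marker found at pixel (350, 200) in the color image,
# with a depth map already registered to the same viewpoint.
depth = np.full((480, 640), 1.2)           # fake depth map: 1.2 m everywhere
marker_xyz = backproject(350, 200, depth)  # 3D location of that marker
print(marker_xyz)
```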

Camera setup: a 3D camera flanked by 3D color cameras

Colored markers indicate space and time

Our users sometimes ask if the gloves are necessary.  While we can extract large amounts of data without the gloves, the technology cannot accurately detect handshapes without them.  

The system is calibrated to detect subtle differences in color, where the markers are located, and how fast they move.
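
To make the idea concrete, here is a minimal sketch of one common approach: color thresholding in HSV space with a calibrated range per marker. The specific ranges and helper names are illustrative assumptions, not SignAll's implementation.

```python
# Illustrative sketch: isolating one colored glove marker via HSV thresholding.
# A real system would calibrate the ranges per lighting setup and track marker
# centroids over time to estimate speed.
import cv2
import numpy as np

# Hypothetical calibrated HSV range for one marker color (a green patch).
LOWER = np.array([40, 80, 80], dtype=np.uint8)
UPPER = np.array([80, 255, 255], dtype=np.uint8)

def marker_centroid(frame_bgr: np.ndarray):
    """Return the (x, y) centroid of pixels falling inside the calibrated color range."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())

# Example on a synthetic frame containing one green square.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
frame[100:120, 200:220] = (0, 200, 0)   # BGR green patch
print(marker_centroid(frame))           # approx. (209.5, 109.5)
```

Tracking the centroid from frame to frame gives both the marker's location and its speed.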

 

Computational Linguistics: modeling human language for computer processing 

Human language is astoundingly complex and diverse. We express ourselves in countless ways: verbally, in writing, and through signing. There are hundreds of languages and dialects, and within each are unique sets of grammar and syntax rules, terms, and slang.

The field of Computational Linguistics/NLP involves enabling computers to perform useful tasks with the natural languages humans use.

The fact that no written form of sign language exists makes it quite difficult to research or implement statistical methods. Linguistically, many aspects are still under-researched, even though computational modeling requires a high level of detail.

 

The necessary elements to capture

Other sign language technologies have failed because the necessary elements of the language were not captured.

The multiple data points that are captured and processed give both grammatical and semantic context to the language.

Body Movements

The skeleton shown is a representation of the body movements that are captured. Speed, posture, and movements relative to each other are all considered.
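
As a toy illustration (the data layout is assumed, not SignAll's internal format), per-joint speed can be derived from a sequence of captured skeleton keypoints like this:

```python
# Toy sketch: per-joint speed is the frame-to-frame displacement of each
# keypoint divided by the frame interval.
import numpy as np

FPS = 30.0  # assumed capture rate

def joint_speeds(keypoints: np.ndarray) -> np.ndarray:
    """keypoints has shape (frames, joints, 3); returns speed per joint in units/second."""
    displacement = np.diff(keypoints, axis=0)           # (frames-1, joints, 3)
    return np.linalg.norm(displacement, axis=-1) * FPS  # (frames-1, joints)

# Example: 5 frames of 20 joints drifting slightly.
kp = np.cumsum(np.random.randn(5, 20, 3) * 0.01, axis=0)
print(joint_speeds(kp).shape)  # (4, 20)
```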

Challenges with signed languages + machine translation

Natural sign languages have a number of similarities to oral natural languages. However, the three-dimensional nature of the space around a signer makes them even more difficult to model:

Non-manual features (multi-modal signs) carry additional information (a representational sketch follows this list):

  • Facial expressions associated with the position of eyebrows distinguish declarative (neutral brows), yes/no questions (raised brows) and wh-questions (furrowed brows).

  • Mouth patterns can provide adverbial information or help disambiguate manually similar signs.

  • Facial expression and body posture can indicate the signer's attitude to the accompanying proposition.

  • Syntactic: agreement verbs incorporate information about the person and number of the subject and indirect object within the sign.
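
As a purely illustrative sketch (the field names and types are assumptions, not SignAll's schema), the parallel manual and non-manual channels described above could be represented for downstream modeling like this:

```python
# Hypothetical representation of a sign token with manual and non-manual channels.
from dataclasses import dataclass
from enum import Enum
from typing import List, Optional

class BrowPosition(Enum):
    NEUTRAL = "neutral"    # declarative
    RAISED = "raised"      # yes/no question
    FURROWED = "furrowed"  # wh-question

@dataclass
class SignToken:
    gloss: str                           # e.g. "GIVE"
    handshapes: List[str]                # dominant / non-dominant handshape labels
    movement_path: List[tuple]           # sampled 3D hand positions
    brow: BrowPosition = BrowPosition.NEUTRAL
    mouth_pattern: Optional[str] = None  # adverbial or disambiguating mouthing
    agreement: Optional[dict] = None     # e.g. {"subject": "2sg", "object": "1sg"}

# Example: an agreement verb signed with raised brows (a yes/no question).
token = SignToken(
    gloss="GIVE",
    handshapes=["flat-O", "flat-O"],
    movement_path=[(0.3, 0.2, 0.5), (0.1, 0.2, 0.4)],
    brow=BrowPosition.RAISED,
    agreement={"subject": "2sg", "object": "1sg"},
)
```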

Handshapes

The colored markers on the glove show where each hand is in space, and what shape it's making.  

Facial Expressions

Also referred to as non-manual data (explained in the list above), facial expressions give context to ambiguous language.

Installation 

  • We make the installation process easy for you. We help to manage legal, privacy, and IT compliance.

  • Download Installation Requirements

     
