Prof. D’Imperio Receives NSF Grant for AI Facial Analytics in Language

Prof. Mariapaola D’Imperio has been awarded a one-year interdisciplinary NSF grant as a co-PI with Dimitris Metaxas (Rutgers CS), Carol Neidle (Boston University), and Matt Huenerfauth (Rochester Institute of Technology). Additional information is below:

Project Title: NSF Convergence Accelerator–Track D: Data & AI Methods for Modeling Facial Expressions in Language with Applications to Privacy for the Deaf, ASL Education & Linguistic Research

Summary:

There are currently no reliable AI methods for facial analytics that can be widely used across domains and applications (signed & spoken languages, military security, detection of deception & emotion). We have been developing such AI methods, algorithms, and related data infrastructure (with NSF support) for American Sign Language (ASL).

We have assembled an interdisciplinary team of linguists, computer scientists, Deaf and hearing experts on ASL, and potential industry partners.

We propose to leverage and extend our work to date to address research and societal challenges through three types of deliverables, targeted to diverse user and research communities:

1) Modification and extension of our ASL-based tools, methods, and data sharing to encompass spoken language. Although facial expressions and head gestures, which are essential to the grammar of signed languages, also play an important role in speech, this role is currently not well understood, because resources of the kind we will provide have not been available to researchers. The new data and analyses will also enable comparative study of the role of facial expressions across modalities, as well as the role of facial expressions in sign language vs. prosody in spoken language. The raw data, analyses, and visualizations that we will share for our linguistically annotated video corpora of signed and (new) spoken language will open up vast new avenues for linguistic and computer science research on spoken and signed languages aimed at understanding the role and spatiotemporal synchronization of facial expressions and head gestures in conjunction with speech and signing. The results of such research will have a broad set of impacts and applications.

2) An application to help ASL learners produce the facial expressions and head gestures that convey grammatical information in signed languages; and

3) Development of a tool for real-time anonymization of ASL videos to preserve grammatical information expressed non-manually, while de-identifying the signer.