Research Projects

Man wearing a Google Glass with a sign language video.

Improving Classroom Accessibility

Enhancing the classroom experience for students who are deaf and hard-of-hearing...

Screenshot of person speaking with caption below saying: The appoint mint has been moved to Monday

Automatic Captions for Meetings

How can imperfect captions based on speech recognition be useful for live meetings...

Image of PowerPoint slides for teaching accessibility.

Effective Methods of Teaching Accessibility

Comparing the effectiveness of methods for teaching computing students about accessibility...

A computing student conducting a test with prototype glasses.

Pedagogies for Teaching Accessibility

Designing new pedagogical techniques for including accessibility in higher education curricula...

A computing student gathering requirements from a blind user.

Encouraging Inclusion in Design Thinking

Can design practice incorporate reflective tools to raise awareness of social aspects of accessibility...

An ASL signer wearing motion-capture equipment.

Generating ASL Animation from Motion-Capture Data

Collecting a motion-capture corpus of ASL and modeling data to produce accurate animations...

Depth image of two people standing, as taken by a Kinect camera.

Learning ASL through Real-Time Practice

Enabling students learning ASL to practice independently through a tool that provides feedback...

An English sentence with words at different heights to indicate importance.

Calculating Word Importance in Captions

Can we determine automatically which words are most important to the meaning of text captions...

An animation of a virtual human and images of handshapes.

Linguistic Stimuli for ASL Research

Animated ASL can produce useful perceptual stimuli for linguistic research experiments...

An image of a human performing sign language.

Methodologies for DHH User Research

How can we best conduct empirical research on assistive technologies with DHH users...

Visualization of various speech parameters as scatterplots or graphs.

Resources for Speech Language Therapists

We investigate the usability and utility of resources available to speech language therapists...

Image of computer programming code, with one light highlighted.

Creating New Tools for Blind Programmers

Understanding the requirements of blind programmers and creating useful tools for them...

Screenshot of an animation of a virtual human signer.

Facial Expression for Animations of ASL

Producing linguistically accurate facial expressions for natural and understandable ASL animations...

Screenshot of a website for requesting accessibility accommodations.

Requesting Accessibility Services

How can we enhance the usability of university websites for requesting access services...

Diagram of a white-cane user standing holding a cane, with a smartphone device strapped to their upper arm.

Situation Awareness of Blind Travelers

Using situational awareness techniques to evaluate navigation technologies for blind travelers...

Comprehension Questions for a Text Readability Detection Test.

Predicting English Text Readability for Users

Analyzing English text automatically to identify the difficulty level of the content for users...

Screenshot of a video of a human signer and an animation of a virtual human.

Eye-Tracking to Predict User Performance

Analyzing eye-movements to automatically predict when a user does not understand content...

An image of a human face with gridwork overlaid, and an image of a virtual human face, showing a grid-like mesh of its structure.

ASL Animation Tools & Technologies

Tools for automating the synthesis of computer animations of American Sign Language...

 

Man wearing a Google Glass with a sign language video

Improving Classroom Accessibility

How can we improve the classroom experience of deaf and hard-of-hearing students? This project investigates the effectiveness of eyewear computers that display ASL, to help students manage multiple visual sources of information.

 

Screenshot of person speaking with caption below saying: The appoint mint has been moved to Monday.

Usability of Automatic Captions for Meetings

We are investigating a tool to caption live one-on-one meetings using imperfect automatic speech recognition (ASR) technology, including how best to convey when the ASR system is not confident that it has recognized a word correctly, so that users know when they can trust the captions.
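As a rough illustration of this display problem, the sketch below shows one possible way to flag low-confidence words in a caption. The per-word confidence scores, the 0.8 threshold, and the italics markup are illustrative assumptions, not the display method the project evaluated.

```python
# Hypothetical sketch: one way to mark low-confidence words in a caption.
# Assumes an ASR system that returns (word, confidence) pairs; the 0.8
# threshold and the <i>...</i> italics markup are illustrative choices.

def render_caption(words, threshold=0.8):
    """Wrap words the recognizer is unsure about in italics markup."""
    rendered = []
    for word, confidence in words:
        if confidence < threshold:
            rendered.append(f"<i>{word}</i>")  # flag uncertain word
        else:
            rendered.append(word)
    return " ".join(rendered)

# The recognizer heard "appointment" as two words, with low confidence:
asr_output = [("The", 0.97), ("appoint", 0.41), ("mint", 0.38),
              ("has", 0.95), ("been", 0.96), ("moved", 0.92),
              ("to", 0.98), ("Monday", 0.94)]
print(render_caption(asr_output))
# The <i>appoint</i> <i>mint</i> has been moved to Monday
```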


Funding Support

Matt Huenerfauth (PI). February 2017 to February 2018. Identifying the Best Methods for Displaying Word-Confidence in Automatically Generated Captions for Deaf and Hard-of-Hearing Users. Google Faculty Research Awards Program. Amount of funding: $56,902.

Larwan Berke (student fellowship recipient), Matt Huenerfauth (faculty advisor). September 2017 to August 2020. National Science Foundation Graduate Research Fellowship (NSF-GRF) to Larwan Berke. Amount of funding: Tuition and stipend for three years, approximate value: $138,000.

Matt Huenerfauth and Michael Stinson, PIs. September 2015 to August 2017. “Creating the Next Generation of Live-Captioning Technologies.” Internal Seed Research Funding, Office of the President, National Technical Institute for the Deaf, Rochester Institute of Technology.

Matt Huenerfauth, PI. Start-Up Research Funding, Golisano College of Computing and Information Sciences, Rochester Institute of Technology.

 

Image of PowerPoint slides for teaching accessibility

Effective Methods of Teaching Accessibility

This project examines the effectiveness of a variety of methods for teaching computing students about concepts related to computer accessibility for people with disabilities. This multi-year project will include longitudinal testing of students two years after instruction, to look for lasting impacts.


Relevant Links

Teach Access

This national initiative among technology companies and universities is promoting accessibility education in university computing degrees.

 

A computing student testing a set of prototype glasses.

Investigating Effective Pedagogies for Teaching Accessibility

This project focuses on how to create and evaluate various pedagogical techniques for including accessibility topics in computing curricula in higher education.


This project is conducted by Kristen Shinohara and her students.

 

A computing student interviewing a blind user to gather requirements.

Tools and Techniques to Encourage Inclusion in Design Thinking

This project investigates how design practice can best incorporate reflective tools and techniques designed to raise awareness of social aspects of accessibility.


This project is conducted by Kristen Shinohara and her students.

 

An ASL signer wearing motion-capture equipment

Generating ASL Animation from Motion-Capture Data

This project is investigating techniques for making use of motion-capture data collected from native ASL signers to produce linguistically accurate animations of American Sign Language. In particular, this project is focused on the use of space for pronominal reference and verb inflection/agreement.

This project also supported a summer research internship program for ASL-signing high school students, and REU supplements from the NSF have supported research experiences for visiting undergraduate students.


Data & Corpora

The motion-capture corpus of American Sign Language collected during this project is available for non-commercial use by the research community.


This project is conducted by Matt Huenerfauth and his students.

 

Depth image of two people standing, as taken by a Kinect camera

Learning ASL through Real-Time Practice

We are investigating new video and motion-capture technologies to enable students learning American Sign Language (ASL) to practice their signing independently through a tool that provides feedback automatically.
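As a rough sketch of one building block such a tool might use, the code below scores how closely a student's hand trajectory (e.g., captured by a Kinect depth camera) follows a reference signer's trajectory, using dynamic time warping so that differences in signing speed are tolerated. The data layout and function are illustrative assumptions, not the project's actual feedback method.

```python
# Hypothetical sketch: comparing a student's hand trajectory against a
# reference signer's, tolerating speed differences via dynamic time warping.

import math

def dtw_distance(student, reference):
    """Dynamic-time-warping distance between two 3D point sequences."""
    n, m = len(student), len(reference)
    inf = float("inf")
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = math.dist(student[i - 1], reference[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # skip a student frame
                                 cost[i][j - 1],      # skip a reference frame
                                 cost[i - 1][j - 1])  # match frames
    return cost[n][m] / max(n, m)  # normalize by sequence length

# Toy right-hand trajectories, one (x, y, z) point per frame:
reference = [(0.0, 1.0, 0.5), (0.1, 1.1, 0.5), (0.2, 1.2, 0.5)]
student   = [(0.0, 1.0, 0.5), (0.05, 1.05, 0.5),
             (0.1, 1.1, 0.5), (0.22, 1.18, 0.5)]
print(f"trajectory deviation: {dtw_distance(student, reference):.3f}")
```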


This project is joint work with the City University of New York (City College and Hunter College).

 

An English sentence with words at different heights to indicate their importance.

Word Importance in Captions for Deaf Users

The accuracy of Automatic Speech Recognition (ASR) technology has improved, but it is still imperfect in many settings. To evaluate the usefulness of ASR-based captions for Deaf or Hard of Hearing (DHH) users, simply counting the number of errors is insufficient, since some words contribute more to the meaning of the text than others.

We are studying methods for automatically predicting the importance of individual words in a text for DHH users in a captioning context, and we are using these models to develop alternative evaluation metrics for ASR accuracy, to predict how useful ASR-based captions will be for users.
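As a minimal sketch of the idea behind such a metric, the code below charges each recognition error its predicted word importance rather than counting all errors equally. The importance scores here are invented for illustration, whereas the project's models predict them automatically.

```python
# Hypothetical sketch of an importance-weighted alternative to word error
# rate: each misrecognized word costs its predicted importance, so errors
# on content-critical words dominate the score.

def weighted_error(reference, importance, errors):
    """Fraction of the reference's total importance lost to ASR errors.

    reference  -- list of reference words
    importance -- predicted importance score per reference word (0..1)
    errors     -- indices of reference words the recognizer got wrong
    """
    total = sum(importance)
    lost = sum(importance[i] for i in errors)
    return lost / total if total else 0.0

reference  = ["the", "appointment", "has", "been", "moved", "to", "monday"]
importance = [0.05,  0.90,          0.10,  0.10,   0.60,    0.05,  0.95]
# "appointment" misrecognized as "appoint mint" -> reference index 1 wrong:
print(f"plain error rate:     {1 / len(reference):.2f}")                        # 0.14
print(f"importance-weighted:  {weighted_error(reference, importance, [1]):.2f}")  # 0.33
```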


Funding Support

Matt Huenerfauth and Michael Stinson, PIs. September 2015 to August 2017. “Creating the Next Generation of Live-Captioning Technologies.” Internal Seed Research Funding, Office of the President, National Technical Institute for the Deaf, Rochester Institute of Technology.

Matt Huenerfauth, PI. Start-Up Research Funding, Golisano College of Computing and Information Sciences, Rochester Institute of Technology.

 

Animation of a virtual human with images of ASL handshapes.

Creating Linguistic Stimuli for ASL Research

Animated virtual humans can produce a wide variety of subtle performances of American Sign Language, including minor variations in handshape, location, orientation, or movement. This technology can produce stimuli for display in experimental studies with ASL signers, to study ASL linguistics.


This project is joint work between Matt Huenerfauth and colleagues at NTID.

 

An image of a human performing sign language.

Methodologies for DHH User Research

We have conducted a variety of methodological research on the most effective ways to structure empirical evaluation studies of technology with Deaf and Hard of Hearing (DHH) users.

This research has included the creation of standard stimuli and question items for studies with ASL animation technology, analysis of the relationship between user demographics and responses to question items, the use of eye-tracking in studies with DHH users, and the creation of American Sign Language versions of standard usability evaluation instruments.


This research is conducted by Matt Huenerfauth and his students.

 

Visualization of various speech parameters using scatterplots or graphs.

Improving the Usability of Resources for Speech Language Therapists

This project investigates the usability and utility of resources available to speech language therapists. By understanding the usability of existing resources, we design tools that give insight into the varied language characteristics of diverse individuals with non-fluent aphasia.


This project is conducted by Vicki Hanson and her students.

 

Image of computer programming code, with one light highlighted.

Tools for Blind Programmers

This project investigates the difficulties blind computer programmers face when navigating software code. By examining the tools these programmers currently use when moving through code, and studying the work-arounds many of them adopt to make technologies work for them, we look for ways to improve this experience with new technologies.

 

Screenshot of an animation of a virtual human signer

Facial Expression for Animations of ASL

We are investigating techniques for producing linguistically accurate facial expressions for animations of American Sign Language; this would make these animations easier to understand and more effective at conveying information -- thereby improving the accessibility of online information for people who are deaf.


Funding Support

Matt Huenerfauth, PI. July 2011 to December 2015. “Generating Accurate Understandable Animations of American Sign Language.” National Science Foundation, CISE Directorate, IIS Division. Amount: $338,005. (Collaborative research, linked to corresponding NSF research grants to Carol Neidle, PI, Boston University, for $385,957, and to Dimitris Metaxas, PI, Rutgers University, for $469,996.)


This project is joint work with researchers at Boston University and Rutgers University.

 

Screenshot of a website for requesting accessibility accommodations

Requesting Accessibility Services

RIT’s Department of Access Services enables students to request services for classroom accessibility. This project has re-designed the service-request website to improve the user experience.

 

Diagram of a white-cane user standing holding a cane, with a smartphone device strapped to their upper arm.

Developing an Objective Method to Facilitate the Situation Awareness of Blind Travelers

Current methods for evaluating the Orientation Assistive Technology (OAT) that aids blind travelers indoors rely on performance metrics. When enhancing such systems, evaluators conduct qualitative studies to learn where to focus their efforts. This project developed an objective method, based on measuring travelers' situation awareness, for evaluating such navigation technologies.


This project has been completed. It was conducted by Stephanie Ludi and her students.

 

Comprehension Questions for a Text Readability Detection Test.

Predicting English Text Readability for Users

This project has investigated the use of computational linguistic technologies to identify whether textual information would meet the special needs of users with specific literacy impairments.

In research conducted prior to 2012, we investigated text-analysis tools for adults with intellectual disabilities, developing a state-of-the-art predictive model of readability based on discourse, syntactic, semantic, and other linguistic features.

In current work, we are investigating technologies for a wider variety of users.
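A minimal sketch of feature-based readability prediction appears below, assuming a simple linear model over surface features. The real model described above used richer discourse, syntactic, and semantic features; these counts and weights are illustrative placeholders only.

```python
# Hypothetical sketch: score text difficulty from simple surface features.

import re

def surface_features(text):
    """Extract a few surface-level readability features from raw text."""
    words = re.findall(r"[A-Za-z']+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return {
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "avg_word_len": sum(len(w) for w in words) / max(len(words), 1),
        "long_word_ratio": sum(len(w) > 6 for w in words) / max(len(words), 1),
    }

# Invented weights standing in for coefficients a trained regressor
# (fit on texts with known difficulty levels) would supply:
WEIGHTS = {"avg_sentence_len": 0.4, "avg_word_len": 1.1, "long_word_ratio": 6.0}

def difficulty_score(text):
    """Higher score = predicted harder text, under the toy weights above."""
    feats = surface_features(text)
    return sum(WEIGHTS[name] * value for name, value in feats.items())

print(difficulty_score("The cat sat on the mat."))
print(difficulty_score("Computational linguistic technologies identify textual complexity."))
```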


This project is conducted by Matt Huenerfauth and his students.

 

Screenshot of a video of a human signer and an animation of a virtual human.

Eye-Tracking to Predict User Performance

Computer users may benefit from user-interfaces that can predict whether the user is struggling with a task, based on an analysis of the user's eye-movement behaviors. This project is investigating how to conduct precise experiments for measuring eye movements and user task performance -- relationships between these variables can be examined using machine-learning techniques to produce predictive models for adaptive user-interfaces.

An important branch of this research has investigated whether eye-tracking technology can be used as a complementary or alternative method of evaluation for animations of sign language, by examining the eye-movements of native signers who view these animations to detect when they may be more difficult to understand.
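As a rough sketch of the modeling step, the code below summarizes a trial's fixations into a few features and fits a classifier, using scikit-learn's LogisticRegression as a stand-in for the machine-learning techniques mentioned above. The feature set and the tiny hand-made dataset are illustrative assumptions.

```python
# Hypothetical sketch: predict whether a user struggled with content,
# from summary features of their eye-movement fixations.

from sklearn.linear_model import LogisticRegression

def trial_features(fixations):
    """Summarize a trial's fixations: (duration_ms, on_target) records."""
    total = sum(d for d, _ in fixations)
    on_target = sum(d for d, hit in fixations if hit)
    return [
        total / len(fixations),  # mean fixation duration
        on_target / total,       # proportion of time fixating the content
        len(fixations),          # fixation count
    ]

# Toy training trials: label 1 = user later failed a comprehension question.
trials = [
    ([(220, True), (180, True), (250, True)], 0),
    ([(300, True), (280, True), (260, True), (240, True)], 0),
    ([(120, False), (140, True), (110, False), (130, False)], 1),
    ([(100, False), (90, False), (150, True), (95, False), (105, False)], 1),
]
X = [trial_features(f) for f, _ in trials]
y = [label for _, label in trials]

model = LogisticRegression().fit(X, y)
new_trial = [(115, False), (125, False), (140, True)]
print("predicted struggling:", bool(model.predict([trial_features(new_trial)])[0]))
```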


This project is conducted by Matt Huenerfauth and his students.

 

An image of a human face with gridwork overlaid, and an image of a virtual human face, showing a grid-like mesh of its structure.

ASL Animation Tools & Technologies

The goal of this research is to develop technologies to generate animations of a virtual human character performing American Sign Language. The funding sources have supported various animation programming platforms that underlie research systems being developed and evaluated at the laboratory.

In current work, we are investigating how to create tools that enable researchers to build dictionaries of animations of individual signs and to efficiently assemble them to produce sentences and longer passages.
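A minimal sketch of the dictionary-and-assembly idea appears below, assuming each sign is stored as a named sequence of keyframes and sentences are built by concatenating entries with short interpolated transitions. All glosses, values, and names are illustrative placeholders.

```python
# Hypothetical sketch: assemble a sentence animation from a dictionary of
# individual sign animations, inserting interpolated transitions between signs.

SIGN_DICTIONARY = {
    # sign gloss -> list of keyframes (here, just a single hand-height value)
    "STORE": [0.4, 0.6, 0.5],
    "I":     [0.2, 0.3],
    "GO":    [0.5, 0.8, 0.6],
}

def transition(a, b, steps=2):
    """Linearly interpolate between the end of one sign and the next."""
    return [a + (b - a) * (i + 1) / (steps + 1) for i in range(steps)]

def assemble(glosses):
    """Concatenate dictionary entries into one animation timeline."""
    timeline = []
    for gloss in glosses:
        frames = SIGN_DICTIONARY[gloss]
        if timeline:
            timeline += transition(timeline[-1], frames[0])
        timeline += frames
    return timeline

# A toy ASL gloss sequence:
print(assemble(["STORE", "I", "GO"]))
```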


This project is conducted by Matt Huenerfauth and his students.

Want to get involved?
