
Research

This page contains five sections: Lab and Research Center, Brief Overview of Huenerfauth's Research, Video about ASL Animation Research, Publications, and Presentations and Guest Lectures.


LATLab and CAIR: Linguistic and Assistive Technologies Laboratory and Center for Accessibility and Inclusion Research

Matt Huenerfauth directs the Linguistic and Assistive Technologies Laboratory (LATLab) and co-directs the Center for Accessibility and Inclusion Research (CAIR, pronounced "care"), whose websites contain up-to-date information about his research projects.

Brief Overview of Huenerfauth's Research

Matt Huenerfauth's research is in the areas of human-computer interaction, natural language processing, and accessibility for people with disabilities. He is the editor-in-chief of the ACM Transactions on Accessible Computing, the leading journal in the field of computer accessibility, and he has secured over $2.5 million in research funding, including a National Science Foundation CAREER Award in 2008.

Huenerfauth's team of over 20 researchers includes both hearing and deaf students. With an overarching theme of studying the design and evaluation of linguistic technologies to benefit people who are deaf or hard-of-hearing, his laboratory is actively pursuing a variety of projects:

Animations of American Sign Language (ASL)

[Image: An animated character performing sign language.]

Many members of the Deaf Community prefer to receive information in the form of American Sign Language (ASL). In addition, standardized testing has revealed that many deaf adults in the U.S. have lower levels of English literacy; therefore, providing ASL on websites can make information and services more accessible. Unfortunately, video recordings of human signers are difficult to update when information changes, and there is no way to support just-in-time generation of website content from a user request. Software is needed that can automatically synthesize understandable animations of a virtual human performing ASL, based on an easy-to-update script as input. The challenge is for this software to select the details of such animations so that they are linguistically accurate, understandable, and acceptable to users. By modeling the way that humans move during ASL (from motion-capture recordings his team has collected), their technology can produce more realistic animations of ASL, which they evaluate in studies with deaf participants. Prior funded projects (including collaborations with Rutgers and Boston University) have investigated the synthesis of ASL verb signs and grammatical facial expressions; in current work, his team is studying how to automatically predict appropriate speed and timing of signing during ASL sentences.
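Several of the publications below compare time-series of motion or facial-expression features using dynamic time warping (DTW), which aligns two recordings that trace the same shape at different speeds. As a rough illustration of the idea (the one-dimensional features and absolute-difference cost here are simplifying assumptions, not the lab's actual feature set):

```python
# Minimal dynamic time warping (DTW) sketch: compares two time-series of
# hypothetical motion features (e.g. one coordinate of a facial landmark
# over time). A lower cost means more similar shape, regardless of timing.
def dtw_distance(a, b):
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j] = best alignment cost of a[:i] with b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])              # local distance
            cost[i][j] = d + min(cost[i - 1][j],      # skip a frame of a
                                 cost[i][j - 1],      # skip a frame of b
                                 cost[i - 1][j - 1])  # match frames
    return cost[n][m]

# The same trajectory performed at half speed aligns at zero cost:
slow = [0, 0, 1, 1, 2, 2, 3, 3]
fast = [0, 1, 2, 3]
print(dtw_distance(slow, fast))  # 0.0
```

This timing-insensitivity is what makes DTW useful for comparing or selecting exemplar recordings of signing, where two performances of the same expression rarely share the same speed.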

Educational Tools for ASL Students

[Image: A Kinect video image of a student performing sign language.]

In this NSF-funded collaborative project with CUNY researchers, Huenerfauth's lab is creating a tool that allows students who are learning ASL to practice their signing skills by performing ASL in front of a Kinect video camera; the software then automatically provides feedback on their signing, indicating when they have performed specific linguistic elements or common errors.
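The feedback step of such a tool can be thought of as mapping the events a sign-recognition model detects in the student's video to human-readable messages. A minimal sketch of that mapping (the event labels, messages, and data shapes below are invented for illustration, not the project's actual design):

```python
# Hypothetical feedback lookup: event labels a recognition model might emit,
# mapped to messages about linguistic elements or common errors.
FEEDBACK = {
    "yes_no_question_face": "Good: eyebrows raised during the yes/no question.",
    "missing_topic_face": "Check: topic phrases are usually marked with raised eyebrows.",
}

def feedback_messages(detected_events):
    # detected_events: list of (timestamp_in_seconds, event_label) pairs,
    # as might come from per-frame analysis of a Kinect recording.
    return [f"{t:.1f}s: {FEEDBACK[label]}"
            for t, label in detected_events
            if label in FEEDBACK]

print(feedback_messages([(1.2, "yes_no_question_face"), (3.4, "unknown")]))
# ['1.2s: Good: eyebrows raised during the yes/no question.']
```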

Automatic Captioning for Meetings

[Image: A person speaking, with a caption text box below.]

In collaboration with NTID researchers, Huenerfauth's lab is investigating how automatic speech recognition technology could be used to produce captions automatically for one-on-one or small-group meetings between deaf and hearing participants. His team is examining how to improve the accuracy of these captions and how to indicate which words in the output are more trustworthy.
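One way to indicate word trustworthiness is to render the ASR system's word-level confidence scores directly in the caption text. A minimal sketch of that idea (the 0.8 threshold and the bracket styling are illustrative assumptions, not the lab's actual design):

```python
# Hypothetical caption renderer: flags words the speech recognizer is less
# confident about, so a reader knows which parts of the caption to trust.
def render_caption(words, threshold=0.8):
    # words: list of (word, confidence) pairs from an ASR system.
    out = []
    for word, confidence in words:
        out.append(word if confidence >= threshold else f"[{word}?]")
    return " ".join(out)

asr_output = [("please", 0.97), ("meet", 0.92), ("at", 0.95), ("noon", 0.41)]
print(render_caption(asr_output))  # please meet at [noon?]
```

In practice the marking could just as well be color, font weight, or opacity; the point is that uncertainty information already produced by the recognizer is surfaced to the reader rather than discarded.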

Effective Methods of Teaching Accessibility

[Image: A PowerPoint slide.]

In this NSF-funded collaborative project with Vicki Hanson (RIT) and Stephanie Ludi (UNT), Huenerfauth is investigating which methods of teaching accessibility to university computing-degree students are most effective and have a lasting impact. This multi-year study, with longitudinal follow-up, will identify pedagogical methods that are effective at encouraging computing students to consider how to design technology that is accessible for people with disabilities or older adults.



Video about ASL Animation Research

Huenerfauth's prior university (CUNY) produced a video about his laboratory's research on sign language animation; you can view the video on YouTube. While his lab has now relocated to the Rochester Institute of Technology, this video gives a nice introduction to this research theme.



Publications

A list of publications and presentations is included below. This section begins with links to online listings of publications and citations. Next, publications are listed for the following years: 2017, 2016, 2015, 2014, 2013, 2012, 2011, 2010, 2009, 2008, 2007, 2006, 2005, 2004, 2003, 2002, and 2001.

Online Listings of Publications and Citations

You can view the Google Scholar Author page for Matt Huenerfauth, the ACM Digital Library Author Profile for Matt Huenerfauth, or the ORCID profile for Matt Huenerfauth.


2017

Hernisa Kacorri, Matt Huenerfauth, Sarah Ebling, Kasmira Patel, Kellie Menzies, Mackenzie Willard. 2017. "Regression Analysis of Demographic and Technology Experience Factors Influencing Acceptance of Sign Language Animation." ACM Transactions on Accessible Computing.

Matt Huenerfauth, Elaine Gale, Brian Penly, Sree Pillutla, Mackenzie Willard, Dhananjai Hariharan. 2017. "Evaluation of Language Feedback Methods for Student Videos of American Sign Language." ACM Transactions on Accessible Computing.

Kevin Rathbun, Larwan Berke, Christopher Caulfield, Michael Stinson, Matt Huenerfauth. 2017. “Eye Movements of Deaf and Hard of Hearing Viewers of Automatic Captions.” Journal on Technology and Persons with Disabilities, California State University, Northridge.

Michael Stinson, James Mallory, Lisa Elliot, Donna Easton, and Matt Huenerfauth. 2017. “Field Study of Using Automatic Speech Recognition to Facilitate Communication between Deaf Students and Hearing Customers.” NTID Scholarship Symposium, National Technical Institute for the Deaf, Rochester, NY, January 12, 2017. http://www.ntid.rit.edu/sites/default/files/pd/program_book_2017.pdf

Lisa Elliot, Michael Stinson, Donna Easton, James Mallory, and Matt Huenerfauth. 2017. “Communication Strategies in the Workplace Survey.” NTID Scholarship Symposium, National Technical Institute for the Deaf, Rochester, NY, January 12, 2017. http://www.ntid.rit.edu/sites/default/files/pd/program_book_2017.pdf

Larwan Berke, Sushant Kafle, Christopher Caulfield, Matt Huenerfauth, and Michael Stinson. 2017. “Making the Best of Imperfect Automatic Speech Recognition for Captioning One-on-One Meetings.” NTID Scholarship Symposium, National Technical Institute for the Deaf, Rochester, NY, January 12, 2017. http://www.ntid.rit.edu/sites/default/files/pd/program_book_2017.pdf

2016

Lisa Elliot, Michael Stinson, James Mallory, Donna Easton, Matt Huenerfauth. 2016. Deaf and Hard of Hearing Individuals’ Perceptions of Communication with Hearing Colleagues in Small Groups. In Proceedings of the 18th Annual SIGACCESS Conference on Computers and Accessibility (ASSETS'16). Reno, Nevada, USA. New York: ACM Press.

Sushant Kafle, Matt Huenerfauth. 2016. “Effect of Speech Recognition Errors on Text Understandability for People who are Deaf or Hard of Hearing.” Proceedings of the 7th Workshop on Speech and Language Processing for Assistive Technologies (SLPAT), INTERSPEECH 2016, San Francisco, CA, USA.

Hernisa Kacorri, Matt Huenerfauth. 2016. “Selecting Exemplar Recordings of American Sign Language Non-Manual Expressions for Animation Synthesis Based on Manual Sign Timing.” Proceedings of the 7th Workshop on Speech and Language Processing for Assistive Technologies (SLPAT), INTERSPEECH 2016, San Francisco, CA, USA.

Hernisa Kacorri and Matt Huenerfauth. 2016. “Continuous Profile Models in ASL Syntactic Facial Expression Synthesis.” Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL '16). Association for Computational Linguistics, Stroudsburg, PA, USA.

Chenyang Zhang, Yingli Tian, Matt Huenerfauth. 2016. “Multi-Modality American Sign Language Recognition.” Proceedings of the IEEE International Conference on Image Processing (ICIP 2016), Phoenix, Arizona, USA.

Matt Huenerfauth, Hernisa Kacorri. 2016. “Eyetracking Metrics Related to Subjective Assessments of ASL Animations.” Journal on Technology and Persons with Disabilities, California State University, Northridge.

Mark Dilsizian, Zhiqiang Tang, Dimitris Metaxas, Matt Huenerfauth, Carol Neidle. 2016. “The Importance of 3D Motion Trajectories for Computer-based Sign Recognition.” Proceedings of the 7th Workshop on the Representation and Processing of Sign Languages: Corpus Mining, The 10th International Conference on Language Resources and Evaluation (LREC 2016), Portoroz, Slovenia.

Hernisa Kacorri, Ali Raza Syed, Matt Huenerfauth, Carol Neidle. 2016. “Centroid-Based Exemplar Selection of ASL Non-Manual Expressions using Multidimensional Dynamic Time Warping and MPEG4 Features.” Proceedings of the 7th Workshop on the Representation and Processing of Sign Languages: Corpus Mining, The 10th International Conference on Language Resources and Evaluation (LREC 2016), Portoroz, Slovenia.
[Unofficial Preprint PDF]

2015

Matt Huenerfauth, Elaine Gale, Brian Penly, Mackenzie Willard, Dhananjai Hariharan. 2015. “Designing Tools to Facilitate Students Learning American Sign Language.” Effective Access Technologies Conference, Rochester, New York, USA. November 10, 2015. Poster Presentation.
Finalist for Best Poster Award 2015

Hernisa Kacorri, Matt Huenerfauth, Sarah Ebling, Kasmira Patel, Mackenzie Willard, Kellie Menzies. 2015. “Measuring Participant Characteristics that Relate to Sign Language Technology Acceptance.” Effective Access Technologies Conference, Rochester, New York, USA. November 10, 2015. Poster Presentation.

Hernisa Kacorri, Matt Huenerfauth, Sarah Ebling, Kasmira Patel, Mackenzie Willard. 2015. Demographic and Experiential Factors Influencing Acceptance of Sign Language Animation by Deaf Users. In Proceedings of the 17th Annual SIGACCESS Conference on Computers and Accessibility (ASSETS'15). Lisbon, Portugal. New York: ACM Press.
[Available on ACM Digital Library]

Matt Huenerfauth, Elaine Gale, Brian Penly, Mackenzie Willard, Dhananjai Hariharan. 2015. Comparing Methods of Displaying Language Feedback for Student Videos of American Sign Language. In Proceedings of the 17th Annual SIGACCESS Conference on Computers and Accessibility (ASSETS'15). Lisbon, Portugal. New York: ACM Press.
[Available on ACM Digital Library]

Matt Huenerfauth, Pengfei Lu, Hernisa Kacorri. 2015. Synthesizing and Evaluating Animations of American Sign Language Verbs Modeled from Motion-Capture Data. Proceedings of the 6th Workshop on Speech and Language Processing for Assistive Technologies (SLPAT), INTERSPEECH 2015, Dresden, Germany.
[Available on the ACL Anthology]

Hernisa Kacorri, Matt Huenerfauth. 2015. Evaluating a Dynamic Time Warping Based Scoring Algorithm for Facial Expressions in ASL Animations. Proceedings of the 6th Workshop on Speech and Language Processing for Assistive Technologies (SLPAT), INTERSPEECH 2015, Dresden, Germany.
[Available on the ACL Anthology]

Sarah Ebling, Matt Huenerfauth. 2015. Bridging the gap between sign language machine translation and sign language animation using sequence classification. Proceedings of the 6th Workshop on Speech and Language Processing for Assistive Technologies (SLPAT), INTERSPEECH 2015, Dresden, Germany.
[Available on the ACL Anthology]

Hernisa Kacorri, Matt Huenerfauth. 2015. Comparison of Finite-Repertoire and Data-Driven Facial Expressions for Sign Language Avatars. Universal Access in Human-Computer Interaction, Access to Interaction. Lecture Notes in Computer Science. Volume 9176, pp. 393-403. Switzerland: Springer International Publishing.
[Available from Springer]

Matt Huenerfauth, Hernisa Kacorri. 2015. Augmenting EMBR Virtual Human Animation System with MPEG-4 Controls for Producing ASL Facial Expressions. The Fifth International Workshop on Sign Language Translation and Avatar Technology (SLTAT), Paris, France, April 9-10, 2015.
[Official PDF]

Matt Huenerfauth, Hernisa Kacorri. 2015. Best Practices for Conducting Evaluations of Sign Language Animation. Journal on Technology and Persons with Disabilities, Volume 3, September 2015, California State University, Northridge.
[Available Online Open-Access]

2014

Hernisa Kacorri, Matt Huenerfauth. 2014. Implementation and evaluation of animation controls sufficient for conveying ASL facial expressions. In Proceedings of The 16th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS'14). Rochester, New York, USA. New York: ACM Press.
[Available on the ACM Digital Library]

Matt Huenerfauth. 2014. Learning to Generate Understandable Animations of American Sign Language. In Proceedings of the 2nd Annual Effective Access Technologies Conference, Rochester, NY, USA, June 2014. Rochester Institute of Technology.
[Available from ScholarWorks] [Official PDF]

Hernisa Kacorri, Allen Harper, Matt Huenerfauth. 2014. Measuring the Perception of Facial Expressions in American Sign Language Animations with Eye Tracking. Universal Access in Human-Computer Interaction. Lecture Notes in Computer Science, Volume 8516, pp. 549-559. Switzerland: Springer International Publishing.
[Available from Springer] [Unofficial Preprint PDF]

Matt Huenerfauth, Hernisa Kacorri. 2014. Release of Experimental Stimuli and Questions for Evaluating Facial Expressions in Animations of American Sign Language. Proceedings of the 6th Workshop on the Representation and Processing of Sign Languages: Beyond the Manual Channel, The 9th International Conference on Language Resources and Evaluation (LREC 2014), Reykjavik, Iceland.
[Unofficial Preprint PDF] [Available from Workshop Website]

Pengfei Lu, Matt Huenerfauth. 2014. Collecting and Evaluating the CUNY ASL Corpus for Research on American Sign Language Animation. Computer Speech & Language. Volume 28, Issue 3, May 2014, Pages 812–831. Elsevier. doi:10.1016/j.csl.2013.10.004
[Available from Science Direct] [Unofficial Preprint PDF]

2013

Hernisa Kacorri, Pengfei Lu, Matt Huenerfauth. 2013. Effect of Displaying Human Videos During an Evaluation Study of American Sign Language Animation. ACM Transactions on Accessible Computing. Volume 5, Issue 2, Article 4 (October 2013), 31 pages. DOI=10.1145/2517038
[Available on ACM Digital Library.] [Unofficial Preprint PDF.]

Hernisa Kacorri, Allen Harper, Matt Huenerfauth. 2013. Comparing Native Signers Perception of American Sign Language Animations and Videos via Eye Tracking. In Proceedings of the 15th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS '13). ACM, New York, NY, USA, Article 9, 8 pages. DOI=10.1145/2513383.2513441
[Available on ACM Digital Library.] [Unofficial Preprint PDF.]

Hernisa Kacorri, Pengfei Lu, Matt Huenerfauth. 2013. “Evaluating Facial Expressions in American Sign Language Animations for Accessible Online Information.” Universal Access in Human-Computer Interaction. Design Methods, Tools, and Interaction Techniques for eInclusion, Lecture Notes in Computer Science Volume 8009, 2013, pp 510-519.
[Available on Springerlink] [Unofficial Preprint PDF.]

2012

Matt Huenerfauth, Pengfei Lu. 2012. “Effect of Spatial Reference and Verb Inflection on the Usability of American Sign Language Animations.” Universal Access in the Information Society: Volume 11, Issue 2 (June 2012), pages 169-184. doi: 10.1007/s10209-011-0247-7.
[Unofficial Preprint PDF.] [Available on Springerlink.]

Pengfei Lu, Matt Huenerfauth. 2012. “Learning a Vector-Based Model of American Sign Language Inflecting Verbs from Motion-Capture Data.” Proceedings of the Third Workshop on Speech and Language Processing for Assistive Technologies (SLPAT), The 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2012), Montreal, Quebec, Canada. East Stroudsburg, PA: Association for Computational Linguistics.
[Available on the ACL Anthology] [Unofficial Preprint PDF.]

Pengfei Lu, Matt Huenerfauth. 2012. “CUNY American Sign Language Motion-Capture Corpus: First Release.” Proceedings of the 5th Workshop on the Representation and Processing of Sign Languages: Interactions between Corpus and Lexicon, The 8th International Conference on Language Resources and Evaluation (LREC 2012), Istanbul, Turkey.
[Adobe Acrobat PDF.]

2011

Pengfei Lu, Matt Huenerfauth. 2011. “Synthesizing American Sign Language Spatially Inflected Verbs from Motion-Capture Data.” The Second International Workshop on Sign Language Translation and Avatar Technology (SLTAT), The 13th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS 2011), Dundee, Scotland, United Kingdom.
[Adobe Acrobat PDF.]

Matt Huenerfauth, Pengfei Lu and Andrew Rosenberg. 2011. “Evaluating Importance of Facial Expression in American Sign Language and Pidgin Signed English Animations.” The 13th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS 2011), Dundee, Scotland, United Kingdom. New York: ACM Press.
[Please use download link on the LATLab website to access this paper.]

Pengfei Lu, Matt Huenerfauth. 2011. “Collecting an American Sign Language Corpus through the Participation of Native Signers.” Universal Access in Human-Computer Interaction. Applications and Services Lecture Notes in Computer Science Volume 6768, 2011, pp 81-90.
[Available on Springerlink.] [Unofficial Preprint PDF.]

Pengfei Lu, Matt Huenerfauth. 2011. “Data-Driven Synthesis of Spatially Inflected Verbs for American Sign Language Animation.” ACM Transactions on Accessible Computing. Volume 4 Issue 1, November 2011. New York: ACM Press. 29 pages.
[Please use download link on the LATLab website to access this paper.]

2010

Martin Jansche, Lijun Feng, Matt Huenerfauth. 2010. “Reading Difficulty in Adults with Intellectual Disabilities: Analysis with a Hierarchical Latent Trait Model.” The 12th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS 2010), Poster Session, Orlando, Florida, USA. New York: ACM Press, pp. 277-278.
[Please use download link on the LATLab website to access this paper.]

Matt Huenerfauth, Pengfei Lu. 2010. “Modeling and Synthesizing Spatially Inflected Verbs for American Sign Language Animations.” In Proceedings of The 12th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS 2010), Orlando, Florida, USA. New York: ACM Press.
[Please use download link on the LATLab website to access this paper.]

Lijun Feng, Martin Jansche, Matt Huenerfauth, Noémie Elhadad. 2010. “A Comparison of Features for Automatic Readability Assessment.” In Proceedings of The 23rd International Conference on Computational Linguistics (COLING 2010), Beijing, China.
[Available on ACM Digital Library] [Also on: ACL Anthology.]

Matt Huenerfauth, Pengfei Lu. 2010. “Accurate and Accessible Motion-Capture Glove Calibration for Sign Language Data Collection.” ACM Transactions on Accessible Computing, Volume 3, Number 1, Article 2. New York: ACM Press. 32 pages.
[Please use download link on the LATLab website to access this paper.]

Matt Huenerfauth. 2010. “Participation of High School and Undergraduate Students who are Deaf in Research on American Sign Language Animation.” ACM SIGACCESS Accessibility and Computing newsletter. New York: ACM Press. Issue 97 (June 2010).
[Please use download link on the LATLab website to access this paper.]

Pengfei Lu, Matt Huenerfauth. 2010. “Collecting a Motion-Capture Corpus of American Sign Language for Data-Driven Generation Research,” Proceedings of the First Workshop on Speech and Language Processing for Assistive Technologies (SLPAT), Human Language Technologies: The 11th Annual Conference of the North American Chapter of the Association for Computational Linguistics (HLT-NAACL 2010), Los Angeles, CA, USA. East Stroudsburg, PA: Association for Computational Linguistics.
[Available on ACM Digital Library] [Also on: ACL Anthology.]

Matt Huenerfauth, Pengfei Lu. 2010. “Eliciting Spatial Reference for a Motion-Capture Corpus of American Sign Language Discourse,” Proceedings of the Fourth Workshop on the Representation and Processing of Signed Languages: Corpora and Sign Language Technologies, The 7th International Conference on Language Resources and Evaluation (LREC 2010), Valletta, Malta.
[Adobe Acrobat PDF]

Matt Huenerfauth. 2010. “Representing American Sign Language Classifier Predicates Using Spatially Parameterized Planning Templates.” In M.T. Banich and D. Caccamise (eds), Generalization of Knowledge: Multidisciplinary Perspectives. New York: Psychology Press.
[Available from Psychology Press.] [Available on Amazon.] [Available on iBooks.]

2009

Matt Huenerfauth, Lijun Feng, Noemie Elhadad. 2009. “Comparing Evaluation Techniques for Text Readability Software for Adults with Intellectual Disabilities.” In Proceedings of the 11th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS 2009), Pittsburgh, Pennsylvania, USA.
[Please use download link on the LATLab website to access this paper.]

Pengfei Lu, Matt Huenerfauth. 2009. “Accessible Motion-Capture Glove Calibration Protocol for Recording Sign Language Data from Deaf Subjects.” In Proceedings of the 11th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS 2009), Pittsburgh, Pennsylvania, USA.
[Please use download link on the LATLab website to access this paper.]

Matt Huenerfauth. 2009. “Improving Spatial Reference in American Sign Language Animation through Data Collection from Native ASL Signers.” International Conference on Universal Access in Human-Computer Interaction (UAHCI). San Diego, CA. July 2009. In C. Stephanidis (Ed.), Universal Access in HCI, Part III, HCII 2009, LNCS 5616, pp. 530–539, 2009. Berlin/Heidelberg: Springer-Verlag.
[Available via Springerlink.] [Adobe Acrobat PDF]

Matt Huenerfauth. 2009. “A Linguistically Motivated Model for Speed and Pausing in Animations of American Sign Language.” ACM Transactions on Accessible Computing (journal). Volume 2, Number 2, Article 9.
[Please use download link on the LATLab website to access this paper.]

Lijun Feng, Noemie Elhadad, Matt Huenerfauth. 2009. “Cognitively Motivated Features for Readability Assessment,” Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2009), Athens, Greece.
[Available on ACM Digital Library] [Also on: ACL Anthology.]

Matt Huenerfauth and Vicki L. Hanson.  2009.  Sign Language in the Interface: Access for Deaf Signers.  In C. Stephanidis (Ed.), The Universal Access Handbook. Mahwah, NJ: Lawrence Erlbaum Associates, Inc.
[Adobe Acrobat PDF] [Available from CRC Press.] [CRCnetBase.] [Amazon.]

2008

Matt Huenerfauth. 2008. "Evaluation of a Psycholinguistically Motivated Timing Model for Animations of American Sign Language." The 10th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS 2008), Halifax, Nova Scotia, Canada.
[Please use download link on the LATLab website to access this paper.]

Matt Huenerfauth. 2008. "Spatial, Temporal, and Semantic Models for American Sign Language Generation: Implications for Gesture Generation." International Journal of Semantic Computing. Volume 2, Number 1.
[Available on WorldScientific] [Journal homepage.]

Matt Huenerfauth, Liming Zhou, Erdan Gu and Jan Allbeck. 2008. "Evaluation of American Sign Language Generation by Native ASL Signers." ACM Transactions on Accessible Computing (journal). Volume 1, Number 1, Article 3.
[Please use download link on the LATLab website to access this paper.]

Matt Huenerfauth.  2008.  Generating American Sign Language Animation: Overcoming Misconceptions and Technical Challenges.  Universal Access in the Information Society (journal), Volume 6, Number 4.
[Available on Springerlink.]

2007

Matt Huenerfauth, Liming Zhou, Erdan Gu and Jan Allbeck.  2007.  Evaluating American Sign Language Generation Through the Participation of Native ASL Signers.   Ninth International ACM SIGACCESS Conference on Computers and Accessibility, ASSETS-2007. Tempe, Arizona, USA. October 2007.
Conference Award: ACM SIGACCESS Best Technical Paper Award, 2007.
[Please use download link on the LATLab website to access this paper.]

Matt Huenerfauth, Liming Zhou, Erdan Gu and Jan Allbeck.  2007.  Design and Evaluation of an American Sign Language Generator.  45th Annual Meeting of the Association for Computational Linguistics. Workshop on Embodied Language Processing. Prague, Czech Republic. June 2007.
[Available on ACM Digital Library] [Also: ACL Anthology.]

2006

Matt Huenerfauth.  2006.  Representing Coordination and Non-Coordination in American Sign Language Animations.  Behaviour & Information Technology (journal), Volume 25, Issue 4, Pages 285-295.
[Available from Taylor & Francis]

Matt Huenerfauth.  2006.  Generating American Sign Language Classifier Predicates For English-To-ASL Machine Translation.  Doctoral Dissertation, Computer and Information Science, University of Pennsylvania.
[Adobe Acrobat PDF]

2005

Matt Huenerfauth.  2005.  Representing Coordination and Non-Coordination in an American Sign Language Animation.  The 7th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS 2005), Baltimore, MD, USA.
Conference Award: ACM SIGACCESS Best Technical Paper Award, 2005.
[Please use download link on the LATLab website to access this paper.]

Matt Huenerfauth.  2005.  American Sign Language Spatial Representations for an Accessible User-Interface.  3rd International Conference on Universal Access in Human-Computer Interaction. Las Vegas, NV, USA.
[Adobe Acrobat PDF]

Matt Huenerfauth.  2005.  American Sign Language Generation: Multimodal NLG with Multiple Linguistic Channels.  Student Research Workshop, The 43rd Annual Meeting of the Association for Computational Linguistics. Ann Arbor, MI, USA.
[Available on ACM Digital Library] [Also: ACL Anthology]

Matt Huenerfauth.  2005.  American Sign Language Natural Language Generation and Machine Translation.  ACM SIGACCESS Accessibility and Computing. New York: ACM Press. Issue 81 (January 2005).
[Please use download link on the LATLab website to access this paper.]

2004

Matt Huenerfauth.  2004.  Spatial and Planning Models of ASL Classifier Predicates for Machine Translation.  The 10th International Conference on Theoretical and Methodological Issues in Machine Translation (TMI 2004). Baltimore, MD, USA.
[Adobe Acrobat PDF]

Matt Huenerfauth.  2004.  American Sign Language Natural Language Generation and Machine Translation.  The 6th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS 2004), Doctoral Consortium Presentation and Poster Session. Atlanta, Georgia, USA.
Conference Award: Best Doctoral Candidate, Delivered Closing Plenary Address
Abstract: [Adobe Acrobat PDF]

Matt Huenerfauth.  2004.  Spatial Representation of Classifier Predicates for Machine Translation into American Sign Language.  Workshop on the Representation and Processing of Signed Languages, 4th International Conference on Language Resources and Evaluation (LREC 2004), Lisbon, Portugal.
[Adobe Acrobat PDF]

Matt Huenerfauth.  2004.  A Multi-Path Architecture for Machine Translation of English Text into American Sign Language Animation.  In the proceedings of the Student Workshop at the Human Language Technology conference / North American chapter of the Association for Computational Linguistics annual meeting (HLT-NAACL 2004), Boston, MA, USA.
[Available on ACM Digital Library] [Also: ACL Anthology]

2003

Matt Huenerfauth.  2003.  A Survey and Critique of American Sign Language Natural Language Generation and Machine Translation Systems.  Technical Report MS-CIS-03-32, Computer and Information Science, University of Pennsylvania.
[Adobe Acrobat PDF]

2002

Matt Huenerfauth.  2002.  Design Approaches for Developing User-Interfaces Accessible to Illiterate Users.  Intelligent and Situation-Aware Media and Presentations Workshop. American Association of Artificial Intelligence (AAAI2002) Conference, Edmonton, Alberta, Canada.
[Adobe Acrobat PDF]

Matt Huenerfauth.  2002.  Developing Design Recommendations for Computer Interfaces Accessible to Illiterate Users.  Thesis. Master of Science (MSc). Department of Computer Science. National University of Ireland: University College Dublin. (Páipéar. Máistir Eolaíochta (MSc). Roinn na Ríomheolaíochta. Ollscoile na hÉireann: An Coláiste Ollscoile, Baile Átha Cliath.)
[Adobe Acrobat PDF]

2001

Matt Huenerfauth.  2001.  Development of PeTaLS: Personality Tagged Logical Statistical Generator.  Thesis. Master of Science (MS). Computer and Information Sciences.  University of Delaware.



Presentations and Guest Lectures

(Does not include presentations given at conferences listed above.)

Matt Huenerfauth. November 2016. “Accessibility in U.S. Computing Degrees.” Invited Speaker as part of a panel on embedding accessibility in STEM education, White House Disability Inclusive Technology Summit, Organized by the American Association of People with Disabilities (AAPD) and the White House, Washington, DC.

Matt Huenerfauth. September 2016. “Ethical Inclusion of People with Disabilities through Undergraduate Computing Education.” Cultivating Cultures for Ethical STEM Principal Investigator Meeting, National Science Foundation, Washington, DC, USA, September 15-16, 2016.

Matt Huenerfauth. July 2016. “Accessible Computing Research for Users who are Deaf and Hard of Hearing.” University of Washington Computer Science and Engineering / Microsoft Research Summer Institute, Union, WA, USA.

Matt Huenerfauth. April 2016. “Accessibility in Academia: What’s happening? How can we change?” Invited Speaker, TeachAccess Kickstart Workshop, Yahoo! headquarters, Sunnyvale, CA.

Matt Huenerfauth. November 2015. “Comparing Methods of Providing Feedback for Student Videos of American Sign Language.” Invited Speaker, Language Science Research Mixer, Rochester Institute of Technology, Rochester, NY.

Matt Huenerfauth. September 2015. “Learning to Generate Understandable Animations of American Sign Language.” Invited Speaker, Seminar, Center for Imaging Science, Rochester Institute of Technology, Rochester, NY. https://youtu.be/pcwXQ9WYKh8

Matt Huenerfauth. May 2015. “Learning to Generate Understandable Animations of American Sign Language.” Invited Speaker, Seminar, Ph.D. Program in Computing and Information Sciences, Rochester Institute of Technology, Rochester, NY.

Matt Huenerfauth. April 2015. “Learning to Generate Understandable Animations of American Sign Language.” Invited Speaker, Seminar, Office of the Associate Dean for Research, National Technical Institute for the Deaf, Rochester Institute of Technology, Rochester, NY.

Matt Huenerfauth. April 2015. “Conducting Experiments with People Who are Deaf to Evaluate ASL Technologies.” Invited Speaker, Seminar, SIGCHI Chapter at RIT, Rochester, NY.

Matt Huenerfauth. December 2014. “Learning to Generate Understandable Animations of American Sign Language.” Invited Speaker, Seminar, Department of Computer Science, University of Rochester, Rochester, NY.

Matt Huenerfauth. November 2014. “Learning to Generate Understandable Animations of American Sign Language.” Invited Speaker, Seminar, The Center for Language and Speech Processing at Johns Hopkins University, Baltimore, MD.

Matt Huenerfauth. February 2014. “Learning to Generate Understandable Animations of American Sign Language.” Invited Speaker, School of Communication and Information Sciences, Rutgers University, New Brunswick, NJ.

Matt Huenerfauth. May 2013. “Automatically Generating Understandable Animations of American Sign Language.” Invited Speaker, Colloquium, Graduate Program in Linguistics, The Graduate Center, City University of New York.

Matt Huenerfauth. March 2013. “Automatically Generating Understandable Animations of American Sign Language.” Invited Speaker, Monthly Lecture Series, International Linguistics Association, New York, NY.

Matt Huenerfauth. July 2012. “Automatically Generating Understandable Animations of American Sign Language.” Invited Speaker, Summer Academy Colloquium, Department of Computer Science & Engineering, University of Washington, Seattle, WA.

Matt Huenerfauth. January 2012. “Generating Linguistically Accurate and Understandable Sign Language Animations.” Invited Speaker, Department of Linguistics, Montclair State University, Montclair, NJ, USA.

Matt Huenerfauth. December 2011. “Design, Accessibility, Code: Three Perspectives on the Web. Part 2: Accessibility.” Invited Speaker, “Tech Tuesday” Speaker Series, Center for Teaching and Learning, Queens College, The City University of New York, New York, NY, USA.

Matt Huenerfauth. November 2011. “Learning to Produce Accurate and Understandable Sign Language Animations.” Invited Speaker, Columbia Linguistics Society, Columbia University, New York, NY, USA.

Matt Huenerfauth. October 2011. “Learning to Produce Accurate and Understandable Sign Language Animations.” Invited Speaker, School of Computing, University of Dundee, Scotland, United Kingdom.

Matt Huenerfauth. February 2011. “Linguistic and Assistive Technology for People with Disabilities.” Guest Lecture, Computer Science 87100, “Research at CUNY,” Ph.D. Program in Computer Science, The Graduate School and University Center, The City University of New York, New York, NY, USA.

Matt Huenerfauth. January 2011. “Cyclic Data-Driven Research on American Sign Language Animation.” Invited Keynote Speaker, International Workshop on Sign Language Translation and Avatar Technology (SLTAT), Federal Ministry of Labor and Social Affairs, Berlin, Germany.

Matt Huenerfauth. November 2010. “Experimental HCI Research with People with Disabilities: Case studies from the LATLab at CUNY.” Guest Lecture, Library Sciences 754, “Human Computer Interaction,” Graduate School of Library and Information Sciences, Queens College, The City University of New York, NY, USA.

Matt Huenerfauth. December 2009. “A Motion-Capture Corpus of American Sign Language for Generation Research.” CUNY-NLP Seminar Series, NLP at CUNY: Computational Linguistic Research Community, Graduate Center, The City University of New York, New York, NY, USA.

Matt Huenerfauth. November 2009. “Sign Language Animation: Making Information Accessible for People who are Deaf.” Sigma Xi Scientific Research Society Faculty Research Presentation, Queens College, The City University of New York, Flushing, NY, USA.

Matt Huenerfauth. June 2009. “Generating Animations of American Sign Language Based on Data from Native Signers.” Invited Speaker, The Haskins Laboratories at Yale University, New Haven, CT, USA.

Matt Huenerfauth. February 2009. “A Linguistic Timing Model for Animations of American Sign Language.” Perceptual Science Speaker Series, Center for Cognitive Science (RuCCS) and IGERT: Interdisciplinary Training Program in Perceptual Science, Rutgers University, New Brunswick, NJ, USA.

Matt Huenerfauth. September 2008. “A Linguistically Motivated Model for Speed and Pausing in Animations of American Sign Language.” CUNY Psycholinguistics “Supper” Speaker Series, Graduate Program in Linguistics, The Graduate School and University Center, The City University of New York, New York, NY, USA.

Matt Huenerfauth. March 2008. “ASL Generation and Evaluation of ASL Systems.” Guest Lectures, Computer Science 84010, “Computational Linguistics,” Ph.D. Program in Computer Science and Graduate Program in Linguistics, The Graduate School and University Center, The City University of New York, NY, USA.

Matt Huenerfauth. March 2008. “Linguistic and Assistive Technology for Users with Disabilities.” Guest Lecture, Computer Science 87100, “Research at CUNY,” Ph.D. Program in Computer Science, The Graduate School and University Center, The City University of New York, New York, NY, USA.

Matt Huenerfauth. November 2006. “Assistive Technology for the Deaf: American Sign Language Machine Translation.” Guest Lecture, Computer Science 87100, “Research at CUNY,” Ph.D. Program in Computer Science, The Graduate School and University Center, The City University of New York, New York, NY, USA.

Matt Huenerfauth. October 2006. “Assistive Technology for the Deaf: American Sign Language Machine Translation.” Colloquium, Ph.D. Program in Computer Science, The Graduate School and University Center, The City University of New York, New York, NY, USA.

Matt Huenerfauth. August 2006. “Representing American Sign Language Classifier Predicates Using Spatially Parameterized Planning Templates.” Science of Learning Symposium on Generalization of Knowledge, The Institute of Cognitive Science, University of Colorado, Boulder, CO, USA.

Matt Huenerfauth. April 2006. “Assistive Technology for the Deaf: American Sign Language Machine Translation.” Seminar, Harvard-MIT Division of Health Sciences & Technology and the MIT Department of Electrical Engineering & Computer Science, Cambridge, MA, USA.

Matt Huenerfauth. April 2006. “Assistive Technology for the Deaf: American Sign Language Machine Translation.” Seminar, Center for Language and Speech Processing, Johns Hopkins University, Baltimore, MD, USA.

Matt Huenerfauth. April 2005. “Computational Linguistic Models of American Sign Language Classifier Predicates.” The Second Symposium of the Penn Working Group in Language, University of Pennsylvania.

Matt Huenerfauth. March 2005. “American Sign Language Natural Language Generation and Machine Translation.” Poster Session, Graduate Research Symposium, School of Engineering and Applied Science, University of Pennsylvania.

Matt Huenerfauth. January 2005. “Computers Assisting Deaf Communication.” (A non-technical presentation on assistive technology and ASL machine translation.) Presented at a meeting of the Lionesses Club of Springfield, PA. (A community organization committed to fund-raising for people with disabilities.)

Matt Huenerfauth. April 2004. “Classifier Predicate Representations for an English to American Sign Language Machine Translation System.” Penn Working Group in Language, First Annual Symposium, University of Pennsylvania.

Matt Huenerfauth. January 2004. “Motivating the Design of a Machine Translation System from English to American Sign Language.” Penn Engineering Research Symposium.
Award: Best Graduate Student Presentation

Matt Huenerfauth. August 2003. “Computers Assisting Deaf Communication.” (A non-technical presentation on assistive technology and ASL machine translation.) Presented at a meeting of the Lions Club of Springfield, PA. (A community organization committed to fund-raising for people with disabilities.)