$549K NSF CAREER Award to Study Tech for Blind Parents

I am honored and elated to share that my lab’s research on technologies for blind parents, specifically voice assistants that support reading with their children, is being funded by an NSF CAREER Award. The five-year, $549,858 grant, titled “Advancing Computing for Parents with Vision Impairments,” will primarily fund the work of my talented PhD students, including PhD candidate Kevin Storer, and compensate research participants for their time and expertise. For more details about the work, please see the official abstract below:

Over 25 million American adults with vision impairments have long been unable to participate fully in some of the most important roles in life: parent, spouse, neighbor, and more. While innovations in accessible computing have radically advanced the independence of these individuals, the larger social contexts of interdependence and use are often neglected. For example, optical character recognition, visual crowd work, and text-to-speech technologies enable individual access to print text for the blind, but when a blind parent wants to co-read with their sighted child, their goals go beyond mere access; they want to bond with their child by reading in their own voice and in ways that enhance Braille and print literacy skills for both themself and their child. This project will contribute three novel voice-based technologies that will be freely disseminated so as to have broad impact. It will sustain an ongoing community collaboration between the University of California, Irvine and the Dayle McIntosh Center for the Disabled to teach future software engineers how to create accessible technologies and to provide sighted assistance to the visually impaired population in the greater Los Angeles area. And it will support the careers of people with disabilities, who are underrepresented in STEM.

A growing number of accessible computing scholars argue that the field lacks a fundamental understanding of what “caring for” roles adults with vision impairments occupy, what interaction models are effective, and what accessibility challenges exist. As a result, technologies often fall short of supporting independence for members of this community in that they do not enable full social integration. The project’s dual research aims address this gap by identifying both novel application domains and interaction techniques. The project will conduct a content analysis of user-generated data, coupled with interview data, to answer the questions posed above. Design-based research will then address what novel interaction models can be applied to voice assistants to facilitate parent-child bonding, parent Braille literacy, and child print literacy as a visually impaired parent co-reads with their sighted child. Project outcomes will include a large-scale dataset generated by members of the target community; a taxonomy of untapped application domains and qualitative insights into user needs; and novel interaction models for voice assistants, all of which will combine to form the foundations of a nascent sub-field in accessible computing focused on technologies for interdependence.
