ASSETS21 – Paper Accepted – Deinstitutionalizing Independence

Kevin Storer, my PhD student, and I have been collaborating on the accessibility of co-reading practices between blind parents and their children. We are pleased to share that our theoretical exploration of what it means to do research of this nature, with people with disabilities in domestic settings, has been conditionally accepted at ASSETS 2021. More details and a preprint are coming soon. In the meantime, here’s an abstract:

The meaning of “homes” is complicated for disabled people because of the historical link between (de)institutionalization, housing, and civil rights. But, it is unclear whether and how this history impacts Accessible Computing (AC) research in domestic spaces. We performed Critical Discourse Analysis on 101 AC articles to explore how (de)institutionalization affects domestic AC research. We found (de)institutionalization motivates goals of “independence” for disabled people. Yet, discourses of housing reflected institutional logics which are in tension with “independence”—complicating how goals were set, housing was understood, and design was approached. We outline three discourses of housing in AC and identify parallels to those used to justify institutionalization in the USA. We reflect upon their consequences for AC research. We offer principles derived from the Independent Living Movement as frameworks for challenging institutional conceptions of housing, to open new avenues for more holistic and anti-ableist domestic AC research.

Storer, K., Branham, S.M. “Deinstitutionalizing Independence: Discourses of Disability and Housing in Accessible Computing.” In Proceedings of the ACM SIGACCESS Conference on Computers & Accessibility (ASSETS ’21), Online Virtual Conference, October 18-22, 2021. To appear. (acceptance rate: 29%)

ASSETS21 – Paper Accepted – Image Descriptions and Disability Identity

I am delighted that research led by my PhD student, Emory Edwards, with our collaborators Emily Blank and Michael Gilbert at Google, has been conditionally accepted at ASSETS 2021. More details and a preprint are coming soon. In the meantime, here’s an abstract:

Image accessibility is an established research area in Accessible Computing and a key area of digital accessibility for blind and low vision (BLV) people worldwide. Recent work has delved deeper into the question of how image descriptions should properly reflect the complexities of marginalized identity. However, when real subjects are not available to consult on their preferred identity terminology, as is the case with fictional representations of disability, the issue arises again of how to create accurate and sensitive image descriptions. We worked with 25 participants to assess and iteratively co-design image descriptions for nine fictional representations of people with disabilities. Through twelve focus groups and sixteen follow-up interviews, we discovered five key themes which we present here along with an analysis of the layers of interpretation at work in the production and consumption of image descriptions for fictional representations.

Edwards, E.J., Polster, K.L., Tuason, I., Gilbert, M., Blank, E., Branham, S.M. “‘That’s in the eye of the beholder’: Layers of Interpretation in Image Descriptions for Fictional Representations of People with Disabilities.” In Proceedings of the ACM SIGACCESS Conference on Computers & Accessibility (ASSETS ’21), Online Virtual Conference, October 18-22, 2021. To appear. (acceptance rate: 29%)

$549K NSF CAREER Award to Study Tech for Blind Parents

I am honored and elated to share that my lab’s research on technologies for blind parents––specifically on voice assistants that support reading with their children––is being funded by an NSF CAREER Award. The five-year, $549,858 grant, titled “Advancing Computing for Parents with Vision Impairments,” will primarily go toward funding the work of my talented PhD students, including PhD candidate Kevin Storer, and compensating research participants for their time and expertise. For more details about the work, please see the official abstract below:

Over 25 million American adults with vision impairments have long been unable to participate fully in some of the most important roles in life: parent, spouse, neighbor, and more.  While innovations in accessible computing have radically advanced the independence of these people, the larger social contexts of interdependence and use are often neglected. For example, optical character recognition, visual crowd work, and text-to-speech technologies enable individual access to print text for the blind, but when a blind parent wants to co-read with their sighted child their goals go beyond mere access; they want to bond with their child by reading in their own voice and in ways that enhance Braille and print literacy skills for both themself and their child.  This project will contribute three novel voice-based technologies that will be freely disseminated so as to have broad impact.  It will sustain an ongoing community collaboration between the University of California, Irvine and the Dayle McIntosh Center for the Disabled, to teach future software engineers how to create accessible technologies and provide sighted assistance to the visually impaired population in the greater Los Angeles area. And it will support the careers of people with disabilities, who are underrepresented in STEM.

A growing number of accessible computing scholars argue that the field lacks a fundamental understanding of what “caring for” roles adults with vision impairments occupy, what interaction models are effective, and what accessibility challenges exist. As a result, technologies often fall short of supporting independence for the members of this community in that they do not enable full social integration. The dual aims of this research address this gap by identifying both novel application domains and interaction techniques. The project will conduct a content analysis of user-generated data coupled with interview data to answer the questions posed above. Design-based research will then address what novel interaction models can be applied to voice assistants to facilitate parent-child bonding, parent Braille literacy, and child print literacy as a visually impaired parent co-reads with their sighted child. Project outcomes will include a large-scale dataset generated by members of the target community, a taxonomy of untapped application domains and qualitative insights into user needs, as well as novel interaction models for voice assistants, all of which will combine to constitute the foundations for a nascent sub-field in accessible computing that focuses on technologies for interdependence.

CHI21 – Paper Accepted – Transactional voice assistants

Building on our collaboration with Toyota, my senior PhD student, Ali Abdolrahmani, led this paper on how we can make voice assistants work better for both blind and sighted folks in contexts outside the home. Check out our short video preview, and read the full paper!

Abdolrahmani, A., Gupta, M.H., Vader, M.-L., Kuber, R., Branham, S.M. “Towards More Transactional Voice Assistants: Investigating the Potential for a Multimodal Voice-Activated Indoor Navigation Assistant for Blind and Sighted Travelers.” In Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems (CHI ’21), Online Virtual Conference (originally Yokohama, Japan), May 8-13, 2021. (acceptance rate: 26%)

CHI21 – Paper Accepted – Voice interfaces and childhood literacy

I was fortunate to collaborate with recent UCI graduate Ying Xu from the UCI School of Education on this exciting study of voice-based communication apps targeting children. When we compared recommended adult-child communication patterns for building early literacy skills with those currently available through voice interfaces, we found the latter very much lacking. Check out our video preview and full paper for design recommendations.

Xu, Y., Branham, S.M., Deng, X., Collins, P., Warschauer, M. “Are Current Voice Interfaces Designed to Support Children’s Language Development?” In Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems (CHI ’21), Online Virtual Conference (originally Yokohama, Japan), May 8-13, 2021. (acceptance rate: 26%)

CHI21 – Paper Accepted – Latte: Automating use case testing for accessibility

I was fortunate to collaborate with some of my colleagues in Software Engineering here at UCI on this work led by PhD student Navid Salehnamadi. Latte builds on the pervasive practice of GUI use case testing, but executes those use cases by simulating screen reader (for people who are blind) and switch (for people with limited dexterity) navigation through Android apps. Check out the technical details, how this approach outperforms contemporary methods, and what we learned about the future of accessible app development.

Salehnamadi, N., Alshayban, A., Lin, J.-W., Ahmed, I., Branham, S.M., Malek, S. “Latte: Use-Case and Assistive-Service Driven Automated Accessibility Testing Framework for Android.” In Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems (CHI ’21), Online Virtual Conference (originally Yokohama, Japan), May 8-13, 2021. (acceptance rate: 26%)

Video – 10 min summary of Voice Assistant research

Brews and Brains at UCI is a student-led initiative to support science communication to the general public, a topic near and dear to my heart. So, when they invited me to share my team’s research on voice assistants and people with vision impairments at a local pub, I was all in. This event took place on October 15, 2019. As of December, the work I draw on is or will soon be reported in academic-ese in various venues:

  • Storer, K., Judge, T.K., Branham, S.M. “‘All in the Same Boat’: Tradeoffs of Voice Assistant Ownership for Mixed-Visual-Ability Families.” CHI 2020, forthcoming.
  • Abdolrahmani, A., Storer, K.M., Mukkath Roy, A.R., Kuber, R., Branham, S.M. “Blind Leading the Sighted: Drawing Design Insights from Blind Users Towards More Productivity-Oriented Voice Interfaces.” TACCESS Journal, forthcoming.
  • Branham, S.M., Mukkath Roy, A.R. “Reading Between the Guidelines: How Commercial Voice Assistant Guidelines Hinder Accessibility for Blind Users.” ASSETS 2019.
  • Storer, K., Branham, S.M. “That’s the Way Sighted People Do It: What Blind Parents Can Teach Technology Designers About Co-Reading with Children.” DIS 2019.

The video was fun to make, and I hope you find it fun and accessible to watch. Many thanks to Brews and Brains, who honored my request to caption the video, and who didn’t tease me when I went for a wine glass instead of a stein. :)