Award – Named Among Popular Science’s “Brilliant 10”

In what can only be described as a surreal distinction, Popular Science has generously named me among their “Brilliant 10” early-career innovators in STEM for 2021. As described on their website:

Fresh eyes can change the world, and a world stressed by a pandemic, climate change, and inequity is one more ripe for change than we have ever experienced before. That’s why, after a five-year break, Popular Science is bringing back the Brilliant 10: an annual roster of early-career scientists and engineers developing ingenious approaches to problems across a range of disciplines. To find those innovators, we embarked on a nationwide search, vetting hundreds of researchers from institutions of all stripes and sizes. These thinkers represent our best hopes for navigating the unprecedented challenges of tomorrow—and today.

For more details, check out the spotlight:

$2.85m NSF Award to Broaden STEM Opportunities for Students with Disabilities

I am delighted to be a Co-PI (UCI share: $71,004) on the three-year NSF grant “BPC-AE: AccessComputing Fourth Extension” (Award #2137312). PI Richard Ladner brings together collaborators from his home institution (Sheryl Burgstahler and Amy Ko) as well as a new “Leadership Corps” team (Raja Kushalnagar, Elaine Short, and myself) to expand the impact of the already massively successful AccessComputing program. As part of the Leadership Corps, I will work to strengthen industry partnerships that expand the pipeline for students with disabilities into STEM careers. Can’t wait!

Google’s Material.io blog spotlights our co-designed inclusive image set

The products of an 18-month-long collaboration between my lab and Google designers and researchers are finally seeing the light of day! With a forthcoming ASSETS research article, and now a blog post on Material Design, we are delighted to share our co-developed inclusive design imagery. What sets this collection apart is that it depicts (often excluded) people with disabilities and other marginalized identities, and that every image comes with carefully crafted alt text / image descriptions, so the collection is actually accessible to people with various disabilities.

We are also excited to share that these images will not only be used internally by Google designers to imagine more inclusive technologies; they will also ship on all new Google Chromebooks as accessible, inclusive avatar options at system setup.
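For a concrete sense of what shipping images with descriptions means in practice: on the web, an image description lives in the alt attribute, and in Android-style UI code the analog is a view’s contentDescription, which screen readers like TalkBack announce in place of the image. Here is a minimal, hypothetical sketch; the InclusiveAvatar type and binding function are illustrative, not Google’s actual implementation:

```kotlin
import android.widget.ImageView

// Hypothetical container pairing each avatar image with its co-designed
// description, so the text ships alongside the asset rather than being
// bolted on after the fact.
data class InclusiveAvatar(val drawableRes: Int, val description: String)

fun bindAvatar(view: ImageView, avatar: InclusiveAvatar) {
    view.setImageResource(avatar.drawableRes)
    // Screen readers such as TalkBack announce this text instead of
    // skipping the image or reading a filename.
    view.contentDescription = avatar.description
}
```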

Many thanks are owed to my advisee, PhD candidate Emory Edwards, for leading the team here at UCI. Thanks are also owed to Emily Blank and Michael Gilbert, our collaborators at Google. And, of course, we are deeply thankful to the many people with disabilities who shared their feedback to refine these images and craft alt text.

~$11m Jacobs CERES Award to Study EdTech for Children with Disabilities

PIs Candice Odgers and Gillian Hayes were kind enough to bring me along as one of two Co-Investigators (the other being Stephen Schueller) at UCI in this thrilling new investment from the Jacobs Foundation. The ~$11,000,000, five-year grant establishes CERES, a global project for Connecting the EdTech Research Ecosystem. The generous gift will enable my lab to develop educational technologies that address the needs of children with disabilities from the earliest stages of design. For more about this project, see articles from the LA Times and UCI.

ASSETS21 – Paper Accepted – Deinstitutionalizing Independence

My PhD student Kevin Storer and I have been collaborating on the accessibility of co-reading practices between blind parents and their children. We are pleased to share that our theoretical exploration of what it means to do this kind of research, with people with disabilities in domestic settings, has been conditionally accepted at ASSETS 2021. More details and a preprint are coming soon. In the meantime, here’s the abstract:

The meaning of “homes” is complicated for disabled people because of the historical link between (de)institutionalization, housing, and civil rights. But, it is unclear whether and how this history impacts Accessible Computing (AC) research in domestic spaces. We performed Critical Discourse Analysis on 101 AC articles to explore how (de)institutionalization affects domestic AC research. We found (de)institutionalization motivates goals of “independence” for disabled people. Yet, discourses of housing reflected institutional logics which are in tension with “independence”—complicating how goals were set, housing was understood, and design was approached. We outline three discourses of housing in AC and identify parallels to those used to justify institutionalization in the USA. We reflect upon their consequences for AC research. We offer principles derived from the Independent Living Movement as frameworks for challenging institutional conceptions of housing, to open new avenues for more holistic and anti-ableist domestic AC research.

Storer, K., Branham, S.M. “Deinstitutionalizing Independence: Discourses of Disability and Housing in Accessible Computing.” In Proceedings of the ACM SIGACCESS Conference on Computers & Accessibility (ASSETS ’21), Online Virtual Conference, October 18-22, 2021. (acceptance rate: 29%) To appear.

ASSETS21 – Paper Accepted – Image Descriptions and Disability Identity

I am delighted that research led by my PhD student, Emory Edwards, with our collaborators Emily Blank and Michael Gilbert at Google, has been conditionally accepted at ASSETS 2021. More details and a preprint are coming soon. In the meantime, here’s an abstract:

Image accessibility is an established research area in Accessible Computing and a key area of digital accessibility for blind and low vision (BLV) people worldwide. Recent work has delved deeper into the question of how image descriptions should properly reflect the complexities of marginalized identity. However, when real subjects are not available to consult on their preferred identity terminology, as is the case with fictional representations of disability, the issue arises again of how to create accurate and sensitive image descriptions. We worked with 25 participants to assess and iteratively co-design image descriptions for nine fictional representations of people with disabilities. Through twelve focus groups and sixteen follow-up interviews, we discovered five key themes which we present here along with an analysis of the layers of interpretation at work in the production and consumption of image descriptions for fictional representations.

Edwards, E.J., Polster, K.L., Tuason, I., Blank, E., Gilbert, M., Branham, S.M. “‘That’s in the eye of the beholder’: Layers of Interpretation in Image Descriptions for Fictional Representations of People with Disabilities.” In Proceedings of the ACM SIGACCESS Conference on Computers & Accessibility (ASSETS ’21), Online Virtual Conference, October 18-22, 2021. (acceptance rate: 29%) To appear.

$549K NSF CAREER Award to Study Tech for Blind Parents

I am honored and elated to share that my lab’s research on technologies for blind parents, specifically on voice assistants that support reading with their children, is being funded by an NSF CAREER Award. The $549,858, five-year grant, titled “Advancing Computing for Parents with Vision Impairments,” will primarily go toward funding the work of my talented PhD students, including PhD candidate Kevin Storer, and compensating research participants for their time and expertise. For more details about the work, please see the official abstract below:

Over 25 million American adults with vision impairments have long been unable to participate fully in some of the most important roles in life: parent, spouse, neighbor, and more.  While innovations in accessible computing have radically advanced the independence of these people, the larger social contexts of interdependence and use are often neglected. For example, optical character recognition, visual crowd work, and text-to-speech technologies enable individual access to print text for the blind, but when a blind parent wants to co-read with their sighted child their goals go beyond mere access; they want to bond with their child by reading in their own voice and in ways that enhance Braille and print literacy skills for both themself and their child.  This project will contribute three novel voice-based technologies that will be freely disseminated so as to have broad impact.  It will sustain an ongoing community collaboration between the University of California, Irvine and the Dayle McIntosh Center for the Disabled, to teach future software engineers how to create accessible technologies and provide sighted assistance to the visually impaired population in the greater Los Angeles area. And it will support the careers of people with disabilities, who are underrepresented in STEM.

A growing number of accessible computing scholars argue that the field lacks a fundamental understanding of what “caring for” roles adults with vision impairments occupy, what interaction models are effective, and what accessibility challenges exist. As a result, technologies often fall short of supporting independence for the members of this community in that they do not enable full social integration. The dual research aims of this research address this gap by identifying both novel application domains and interaction techniques.  The project will conduct a content analysis of user-generated data coupled with interview data to answer the questions posed above.  Design-based research will then address what novel interaction models can be applied to voice assistants to facilitate parent-child bonding, parent Braille literacy, and child print literacy as a visually impaired parent co-reads with their sighted child.  Project outcomes will include a large-scale dataset generated by members of the target community, a taxonomy of untapped application domains and qualitative insights into user needs, as well as novel interaction models for voice assistants, all of which will combine to constitute the foundations for a nascent sub-field in accessible computing that focuses on technologies for interdependence.

Video – Innovating the Future of Work with Blind People

Last April, I was delighted to engage in conversation about my lab’s research as part of the HCI and the Future of Work and Wellbeing dialogue series, hosted virtually at Wellesley College. Alongside my PhD students Ali Abdolrahmani, Kevin Storer, and Emory Edwards, I shared what we have learned about the future of work based on the experiences of people who are blind or low vision. The title, abstract, and video recording of our lively dialogue can be found below.

Title: Innovating the Future of Work with Blind People

Abstract: Time and again, when technologists have imagined the future of work, they have done so without consideration of people who are blind. Look no further than the display you are currently reading: the first displays and touchscreens appeared in the 1960s and 70s, while the first screen reader to make them accessible wasn’t invented until 1986. This is not atypical; most technologies are indeed “retrofit” for accessibility, often years or decades after their first introduction. Given this, how exactly do blind people work in the 21st century? What technical barriers do they face, and to what extent are barriers technical as opposed to sociocultural? How do we break the innovate-retrofit cycle, and what role can HCI scholars and practitioners play? For the past 7 years, my research has explored these questions with blind students and collaborators, through qualitative inquiry and participatory design, an approach that, I argue, not only results in accessible technologies from the start, but can also lead to radical innovation that improves work for all. I look forward to engaging these ideas in dialogue with you.

CHI21 – Paper Accepted – Latte: Automating use case testing for accessibility

I was fortunate to collaborate with some of my colleagues in Software Engineering here at UCI on this work, led by PhD student Navid Salehnamadi. Latte builds on the pervasive practice of GUI use case testing, but replays those use cases by simulating how a screen reader user (who is blind) or a switch user (who has limited dexterity) would navigate an Android app. Check out the paper for the technical details, how this approach outperforms contemporary methods, and what we learned about the future of accessible app development.
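For readers unfamiliar with the starting point, here is a minimal sketch of a conventional GUI use case test written with Espresso. The LoginActivity and view IDs are hypothetical, and this is ordinary Espresso code, not Latte’s actual API (see the paper for the real framework):

```kotlin
import androidx.test.espresso.Espresso.onView
import androidx.test.espresso.action.ViewActions.click
import androidx.test.espresso.action.ViewActions.typeText
import androidx.test.espresso.assertion.ViewAssertions.matches
import androidx.test.espresso.matcher.ViewMatchers.isDisplayed
import androidx.test.espresso.matcher.ViewMatchers.withId
import androidx.test.ext.junit.rules.ActivityScenarioRule
import org.junit.Rule
import org.junit.Test

class LoginUseCaseTest {
    // LoginActivity and the R.id.* identifiers below are hypothetical app code.
    @get:Rule
    val activityRule = ActivityScenarioRule(LoginActivity::class.java)

    @Test
    fun userCanLogIn() {
        // A conventional use case test drives the GUI directly by view ID
        // and touch event, so a passing test says nothing about whether a
        // screen reader or switch user could complete the same flow.
        onView(withId(R.id.username_field)).perform(typeText("ada"))
        onView(withId(R.id.login_button)).perform(click())
        onView(withId(R.id.welcome_banner)).check(matches(isDisplayed()))
    }
}
```

Latte’s move is to take a use case like this one and execute it through assistive services such as TalkBack (linear screen reader navigation) or Switch Access, surfacing accessibility failures that direct-touch tests never encounter.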

Salehnamadi, N., Alshayban, A., Lin, J.-W., Ahmed, I., Branham, S.M., Malek, S. “Latte: Use-Case and Assistive-Service Driven Automated Accessibility Testing Framework for Android.” In Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems (CHI ’21), Online Virtual Conference (originally Yokohama, Japan), May 8-13, 2021. (acceptance rate: 26%)

CHI21 – Paper Accepted – Voice interfaces and childhood literacy

I was fortunate to collaborate with recent UCI graduate Ying Xu, from the UCI School of Education, on this exciting study of voice-based communication apps targeting children. When we compared recommended adult-child communication patterns for building early literacy skills with those currently supported by voice interfaces, we found the latter very much lacking. Check out our video preview and full paper for design recommendations.

Xu, Y., Branham, S.M., Deng, X., Collins, P., Warschauer, M. “Are Current Voice Interfaces Designed to Support Children’s Language Development?” In Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems (CHI ’21), Online Virtual Conference (originally Yokohama, Japan), May 8-13, 2021. (acceptance rate: 26%)