ASSETS21 – Paper Accepted – Deinstitutionalizing Independence

Kevin Storer, my PhD student, and I have been collaborating on the accessibility of reading practices between blind parents and their children. We are pleased to share that our theoretical exploration of what it means to do research of this nature, with people with disabilities in domestic settings, has been conditionally accepted at ASSETS 2021. More details and a preprint are coming soon. In the meantime, here's the abstract:

The meaning of “homes” is complicated for disabled people because of the historical link between (de)institutionalization, housing, and civil rights. But, it is unclear whether and how this history impacts Accessible Computing (AC) research in domestic spaces. We performed Critical Discourse Analysis on 101 AC articles to explore how (de)institutionalization affects domestic AC research. We found (de)institutionalization motivates goals of “independence” for disabled people. Yet, discourses of housing reflected institutional logics which are in tension with “independence”—complicating how goals were set, housing was understood, and design was approached. We outline three discourses of housing in AC and identify parallels to those used to justify institutionalization in the USA. We reflect upon their consequences for AC research. We offer principles derived from the Independent Living Movement as frameworks for challenging institutional conceptions of housing, to open new avenues for more holistic and anti-ableist domestic AC research.

Storer, K., Branham, S.M. "Deinstitutionalizing Independence: Discourses of Disability and Housing in Accessible Computing." In Proceedings of the ACM SIGACCESS Conference on Computers & Accessibility (ASSETS '21), Online Virtual Conference, October 18-22, 2021. (acceptance rate: 29%) To appear.

ASSETS21 – Paper Accepted – Image Descriptions and Disability Identity

I am delighted that research led by my PhD student, Emory Edwards, with our collaborators Emily Blank and Michael Gilbert at Google, has been conditionally accepted at ASSETS 2021. More details and a preprint are coming soon. In the meantime, here's the abstract:

Image accessibility is an established research area in Accessible Computing and a key area of digital accessibility for blind and low vision (BLV) people worldwide. Recent work has delved deeper into the question of how image descriptions should properly reflect the complexities of marginalized identity. However, when real subjects are not available to consult on their preferred identity terminology, as is the case with fictional representations of disability, the issue arises again of how to create accurate and sensitive image descriptions. We worked with 25 participants to assess and iteratively co-design image descriptions for nine fictional representations of people with disabilities. Through twelve focus groups and sixteen follow-up interviews, we discovered five key themes which we present here along with an analysis of the layers of interpretation at work in the production and consumption of image descriptions for fictional representations.

Edwards, E.J., Polster, K.L., Tuason, I., Gilbert, M., Blank, E., Branham, S.M. "'That's in the eye of the beholder': Layers of Interpretation in Image Descriptions for Fictional Representations of People with Disabilities." In Proceedings of the ACM SIGACCESS Conference on Computers & Accessibility (ASSETS '21), Online Virtual Conference, October 18-22, 2021. (acceptance rate: 29%) To appear.

$549K NSF CAREER Award to Study Tech for Blind Parents

I am honored and elated to share that my lab's research on technologies for blind parents––specifically on voice assistants that support reading with their children––is being funded by an NSF CAREER Award. The five-year, $549,858 grant, titled "Advancing Computing for Parents with Vision Impairments," will primarily go toward funding the work of my talented PhD students, including PhD candidate Kevin Storer, and compensating research participants for their time and expertise. For more details about the work, please see the official abstract below:

Over 25 million American adults with vision impairments have long been unable to participate fully in some of the most important roles in life: parent, spouse, neighbor, and more.  While innovations in accessible computing have radically advanced the independence of these people, the larger social contexts of interdependence and use are often neglected. For example, optical character recognition, visual crowd work, and text-to-speech technologies enable individual access to print text for the blind, but when a blind parent wants to co-read with their sighted child their goals go beyond mere access; they want to bond with their child by reading in their own voice and in ways that enhance Braille and print literacy skills for both themself and their child.  This project will contribute three novel voice-based technologies that will be freely disseminated so as to have broad impact.  It will sustain an ongoing community collaboration between the University of California, Irvine and the Dayle McIntosh Center for the Disabled, to teach future software engineers how to create accessible technologies and provide sighted assistance to the visually impaired population in the greater Los Angeles area. And it will support the careers of people with disabilities, who are underrepresented in STEM.

A growing number of accessible computing scholars argue that the field lacks a fundamental understanding of what “caring for” roles adults with vision impairments occupy, what interaction models are effective, and what accessibility challenges exist. As a result, technologies often fall short of supporting independence for the members of this community in that they do not enable full social integration. The dual research aims of this research address this gap by identifying both novel application domains and interaction techniques.  The project will conduct a content analysis of user-generated data coupled with interview data to answer the questions posed above.  Design-based research will then address what novel interaction models can be applied to voice assistants to facilitate parent-child bonding, parent Braille literacy, and child print literacy as a visually impaired parent co-reads with their sighted child.  Project outcomes will include a large-scale dataset generated by members of the target community, a taxonomy of untapped application domains and qualitative insights into user needs, as well as novel interaction models for voice assistants, all of which will combine to constitute the foundations for a nascent sub-field in accessible computing that focuses on technologies for interdependence.

Video – Innovating the Future of Work with Blind People

Last April, I was delighted to engage in conversation about my lab's research as part of the HCI and the Future of Work and Wellbeing dialogue series, hosted virtually at Wellesley College. My PhD students Ali Abdolrahmani, Kevin Storer, and Emory Edwards joined me to share what we have learned about the future of work from the experiences of people who are blind or low vision. The title, abstract, and video recording of our lively dialogue can be found below.

Title: Innovating the Future of Work with Blind People

Abstract: Time and again, when technologists have imagined the future of work, they have done so without consideration of people who are blind. Look no further than the display you are currently reading: the first displays and touchscreens appeared in the 1960s and 70s, while the first screen reader to make them accessible wasn't invented until 1986. This is not atypical; most technologies are indeed "retrofit" for accessibility, often years or even decades after their first introduction. Given this, how exactly do blind people work in the 21st century? What technical barriers do they face, and to what extent are barriers technical as opposed to sociocultural? How do we break the innovate-retrofit cycle, and what role can HCI scholars and practitioners play? For the past 7 years, my research has explored these questions with blind students and collaborators through qualitative inquiry and participatory design, an approach, I argue, that not only results in accessible technologies from the start, but can also lead to radical innovation that improves work for all. I look forward to engaging these ideas in dialogue with you.

CHI21 – Paper Accepted – Latte: Automating Use Case Testing for Accessibility

I was fortunate to collaborate with colleagues in Software Engineering here at UCI on this work, led by PhD student Navid Salehnamadi. Latte builds on the pervasive practice of GUI use case testing, but instead of driving apps through direct touch, it simulates how screen reader users (who are blind) and switch users (who have limited dexterity) navigate Android apps. Check out the technical details, how this approach outperforms contemporary methods, and what we learn about the future of accessible app development.
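To make the contrast concrete, here is a minimal sketch of a conventional Espresso use case test for a hypothetical note-taking app. Everything app-specific below (MainActivity, the R.id.* view IDs, the use case itself) is made up for illustration, and this is not Latte's API; Latte's contribution, per the paper, is replaying use cases like this one through assistive services such as TalkBack and Switch Access rather than through direct touch.

    // Illustrative only: a standard Espresso use case test that drives the
    // GUI by touching widgets directly. All app identifiers are hypothetical.
    import androidx.test.espresso.Espresso.onView
    import androidx.test.espresso.action.ViewActions.click
    import androidx.test.espresso.action.ViewActions.closeSoftKeyboard
    import androidx.test.espresso.action.ViewActions.typeText
    import androidx.test.espresso.assertion.ViewAssertions.matches
    import androidx.test.espresso.matcher.ViewMatchers.isDisplayed
    import androidx.test.espresso.matcher.ViewMatchers.withId
    import androidx.test.ext.junit.rules.ActivityScenarioRule
    import org.junit.Rule
    import org.junit.Test

    class CreateNoteUseCaseTest {
        @get:Rule
        val activityRule = ActivityScenarioRule(MainActivity::class.java)

        @Test
        fun createNote_viaDirectTouch() {
            // A sighted tester's path through the use case: tap, type, tap.
            onView(withId(R.id.new_note_button)).perform(click())
            onView(withId(R.id.note_body))
                .perform(typeText("Buy milk"), closeSoftKeyboard())
            onView(withId(R.id.save_button)).perform(click())
            // Verify the use case completed.
            onView(withId(R.id.note_list)).check(matches(isDisplayed()))
        }
    }

The intuition is that a test like this can pass even when the same use case is impossible to complete with a screen reader, for example because the save button lacks a content description or cannot be reached by linear navigation; executing the use case through the assistive services themselves surfaces exactly those failures.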

Salehnamadi, N., Alshayban, A., Lin, J.-W., Ahmed, I., Branham, S.M., Malek, S. “Latte: Use-Case and Assistive-Service Driven Automated Accessibility Testing Framework for Android.” In Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems (CHI ’21), Online Virtual Conference (originally Yokohama, Japan), May 8-13, 2021. (acceptance rate: 26%)

CHI21 – Paper Accepted – Voice Interfaces and Childhood Literacy

I was fortunate to collaborate with recent UCI graduate Ying Xu, from the UCI School of Education, on this exciting study of voice-based communication apps targeting children. When we compared the adult-child communication patterns recommended for building early literacy skills against those currently available through voice interfaces, we found the latter very much lacking. Check out our video preview and full paper for design recommendations.

Xu, Y., Branham, S.M., Deng, X., Collins, P., Warschauer, M. “Are Current Voice Interfaces Designed to Support Children’s Language Development?” In Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems (CHI ’21), Online Virtual Conference (originally Yokohama, Japan), May 8-13, 2021. (acceptance rate: 26%)

CHI21 – Paper Accepted – Transactional Voice Assistants

Building on our collaboration with Toyota, my senior PhD student, Ali Abdolrahmani, led this paper on how we can make voice assistants work better for both blind and sighted folks in contexts outside the home. Check out our short video preview, and read the full paper!

Abdolrahmani, A., Gupta, M.H., Vader, M.-L., Kuber, R., Branham, S.M. “Towards More Transactional Voice Assistants: Investigating the Potential for a Multimodal Voice-Activated Indoor Navigation Assistant for Blind and Sighted Travelers.” In Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems (CHI ’21), Online Virtual Conference (originally Yokohama, Japan), May 8-13, 2021. (acceptance rate: 26%)

Blog – Academic Job Market – Pt.1 – Materials

Quick links: Research Statement | Teaching Statement | Diversity Statement | Cover Letter

Going on the academic job market in search of a tenure-track position at a research-focused institution can be scary––it was for me, at least. By the time I got up the nerve, I had a non-linear career path (4 years post-PhD in a teaching-focused position). I had dramatically changed research topics twice (an advisor change in grad school, and again for my postdoc). And I didn't really understand the current landscape of academia in my field, HCI, in part because my advisor never had to navigate it (Steve Harrison came directly from industry and was a Professor of Practice).

Fast forward to 2019. I am very happily seated in an office in Donald Bren Hall at UC Irvine’s Department of Informatics in my second year as an Assistant Professor. And, when I look back, I realize much of that fear was truly unnecessary. I have been collecting stories of other scholars with non-linear paths (mostly through Geraldine Fitzpatrick’s Changing Academic Life podcast, which I highly recommend), and reflecting on what I wish I had known just a couple years ago. So, in this post––which I plan to extend in chunks over time––I will share some of the resources and advice from kind mentors who helped me make it through, as well as some things I would do differently if I could have another go. I hope, wherever you may be on your journey and whatever you ultimately decide, you find parts of this post useful as you plan next steps.

Materials

I benefited immensely from the job materials posted publicly by scholars like Jon Froehlich and Erika Poole, and from materials shared by mentors like Amy Hurst. Don't be shy about poking around the websites of your academic heroes, or even asking them directly for copies of their materials. In the spirit of paying it forward, I am happy to share my:

  • Research Statement
    Notes: I decided to go for a two-page statement, though for a TT research position, longer statements are common. My assumption is that most faculty don’t have time to read more.
  • Teaching Statement
    Notes: I set aside the advice to (1) keep this to one page and (2) focus on my philosophy rather than my practice. Having worked three years as a full-time Lecturer, I had a significant amount of teaching experience under my belt, so I opted to showcase it in two pages, with evidence. Your mileage may vary.
  • Diversity Statement
    Notes: As with my teaching statement, I opted to focus on my practice. Diversity and inclusion are a core part of my identity and the research, teaching, and service I seek out. If this isn’t the case for you, my example may be less useful.
  • Cover Letter
    Notes: The cover letter should be highly tailored for each institution, but it also needs to tell the core story of your research, teaching, and service. In this copy, I’ve removed the bits that were specific to my plans at UCI.

Preview: Rounding Up Job Ads

The next section I write will cover which mailing lists I joined and which websites I scoured, as well as how I managed all of the positions in a spreadsheet. Perhaps the best advice I will give relates to how you can make the job opportunities come to you. :) Stay tuned for this and other sections, including:

  • Getting Feedback on Your Materials
  • Knowing When You’re Ready & the Narrative of “Fit”
  • Preparing for Phone and On-Site Interviews

Video – 10-Minute Summary of Voice Assistant Research

Brews and Brains at UCI is a student-led initiative supporting science communication with the general public, a topic near and dear to my heart. So, when they invited me to share my team's research on voice assistants and people with vision impairments at a local pub, I was all in. The event took place on October 15, 2019. As of December, the work I draw on has been, or soon will be, reported in academic-ese in the following venues:

  • Storer, K., Judge, T.K., Branham, S.M. "'All in the Same Boat': Tradeoffs of Voice Assistant Ownership for Mixed-Visual-Ability Families." CHI 2020. Forthcoming.
  • Abdolrahmani, A., Storer, K.M., Mukkath Roy, A.R., Kuber, R., Branham, S.M. "Blind Leading the Sighted: Drawing Design Insights from Blind Users Towards More Productivity-Oriented Voice Interfaces." TACCESS. Forthcoming.
  • Branham, S.M., Mukkath Roy, A.R. "Reading Between the Guidelines: How Commercial Voice Assistant Guidelines Hinder Accessibility for Blind Users." ASSETS 2019.
  • Storer, K., Branham, S.M. "'That's the Way Sighted People Do It': What Blind Parents Can Teach Technology Designers About Co-Reading with Children." DIS 2019.

This was fun to make, and I hope you find it fun and accessible to watch. Many thanks to Brews and Brains, who honored my request to caption the video, and who didn't tease me when I went for a wine glass instead of a stein. :)