See my CV, my Google Scholar page, and my lab website for more details.
My research sits at the intersection of Human-Computer Interaction and Accessible Computing, primarily focusing on the lived experiences of people who are blind and low vision and the ways technologies undermine and empower their interdependent relations with others in everyday social settings. While much of my research focuses on documenting the basic phenomenology and cultural meanings in these interactions, a portion of my work includes designing, building, and testing novel technologies to improve the social integration of people with vision disabilities. Most recently, my students and I have primarily been conducting work within the following domains.
Blind adults with little to no usable vision have many reading technologies available to them. For example, a screen reader can assist in reading websites and digital documents; OCR-enabled image-to-text apps can assist in reading print materials; and audiobooks can assist in reading books. However, until the research I proposed in my NSF CAREER Award, there had been no scientific exploration of which technologies can assist blind parents in reading books with their children. Our early findings revealed that blind parents want to read picture books with their sighted children, but digital literacy apps for kids are rarely accessible; print-braille books lack image descriptions; and image-to-text AI systems produce inaccurate and incomplete descriptions. Moreover, we found that existing reading technologies lack the intimacy parents desire when reading with children. Through this work, we contributed a framework and the notion of Intimate Assistive Technology, which motivate future technology design that supports both collaborative accessibility (e.g., parent and child can access the material simultaneously) and intimacy (e.g., parent and child can cuddle up while using the technology). Drawing on our lab’s prior work with voice assistants like Alexa, we began to explore the potential of using such a platform to enable more accessible, intimate reading experiences. We currently have a functional prototype of this system, ReadWithUs, which we have deployed in a technology probe study with blind parents. This work is funded by NSF CISE (Award #2048145).
Although totally blind people can read, they are often prevented from doing so because the creators of digital materials fail to include proper metadata. PDF files, for example, are notoriously inaccessible because they require costly software (Adobe Acrobat Pro) and specialized knowledge to produce, and yet this is the primary format in which scientists archive knowledge. Images in digital documents can erect additional barriers when the text alternatives to visual images are poorly constructed or absent altogether. When we examined prior research and authors’ actual practices of creating accessible images, we found that blind people were rarely consulted, and we called for more in situ evaluations of image accessibility with screen reader users. In line with our recommendation, we conducted the first study of digital image accessibility in pictorials, an image-dense publication format prevalent in ACM SIGCHI venues, via an observational study of blind screen reader users. Our study found that pictorials were rife with image accessibility barriers, and our observational approach led us to identify novel classifications for such barriers. Our paper, formatted as a pictorial and itself an exemplar of pictorial accessibility, was recognized with a Best Paper Honorable Mention Award. We are currently working on a follow-up study of image accessibility in more traditional publication formats, using our observational approach to reveal new insights.
Just as authors are often unaware of the need to make digital documents accessible, software developers are often unaware of the need to make software accessible. One consequence is that people who are blind or low vision, especially those in the software industry itself, lack on-the-job access to digital tools. Prior research has documented accessibility issues with tools like IDEs, but our research is the first to consider the accessibility of the many software development meetings that are required of software professionals. Through interviews with dozens of blind and low vision software development professionals, we documented how digital workplace inaccessibility demanded additional labor from blind and low vision individuals, sometimes forcing them to disclose their disability identity and negatively affecting their upward and lateral career mobility. This study earned a Distinguished Paper Award. Currently, we are investigating the do-it-yourself (DIY) software tools that blind and low vision developers make and use at work to overcome accessibility issues, as well as the opportunities for Generative AI technologies to reduce this burden. We have one paper in preparation for a Fall deadline, and another in preparation for submission in the new year. This work is funded by NSF SHF.
The road to becoming a blind or low vision software developer is marked by many hurdles, not least the leap one must take between graduating high school and entering college. Blind and low vision students are several times less likely than sighted peers to attend or graduate from college. Technology barriers play a substantial role in the “leaky pipeline” for such students, especially in computing majors. Yet, prior research has yet to study the social and accessibility challenges faced during this critical transition period. The focus of this research stream was informed by my lab’s prior work, which found that the social science notion of liminality during life transitions can help explain the challenges older adults with vision loss experience as they try to adopt smartphones, and that multiple life transitions can compound and amplify oppression—an idea we refer to as Intersecting Liminality. This led us to ponder technology adoption by blind individuals during other life transitions. When teachers from Irvine Unified School District contacted me to discuss how to prepare their blind and low vision students for the technologies they will use in college, our focus on the high-school-to-college and adolescence-to-adulthood transitions solidified. We are currently conducting a study with blind and low vision computing students who recently made the transition to college, and we are targeting a full paper submission in early 2025. This work is partially funded by Jacobs CERES, NSF CISE (Award #2137312), and internal funding.