Designing more accessible augmented reality for people with visual impairments
New augmented reality technology promises to revolutionize how people get information about the world around them and collaborate virtually. But the AR devices and interfaces commonly portrayed in popular media share a crucial, and overlooked, limitation — they’re all heavily reliant on visual cues. This is an extension of the already very visual web, and if these new technologies aren’t designed with accessibility as a central feature, they risk leaving people with visual impairments behind.
This is the motivation for a new project by Prof. Anhong Guo at the University of Michigan. The work, called “Making Collaborative Augmented Reality Experiences Accessible for People with Visual Impairments,” is supported by the Google Research Scholar Program to ensure that these emerging technologies are “universally accessible and useful.”
The project emphasizes collaborative AR experiences and enabling blind users to interact seamlessly with their peers. Along the way, Guo and his collaborators will explore how all AR systems, collaborative or otherwise, can best interface with blind users, as well as how AR might be used to overcome accessibility limitations in existing collaborative technologies.
“AR content is often primarily visual,” says Guo, “and exposes neither the semantics nor the interfaces that would enable it to be accessed by accessibility services, such as screen readers.”
Prior work in Guo’s lab has analyzed 105 existing mobile AR apps to identify basic, building-block steps that are common to different types of AR experiences. These, Guo says, provide an opportunity to design accessible alternatives that can change AR experiences from the bottom up. The researchers have identified several approaches with the potential to bridge the gap for visually impaired users, including spatial audio and speech that adapt as the physical environment changes, spatially distributed audio markers, and distinct speech styles for physical and digital objects. As they develop these new design approaches, Guo’s group plans to establish AR accessibility guidelines and wrap them into easy-to-use developer toolkits.
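To make two of these approaches concrete, here is a minimal sketch of how distinct speech styles and spatially distributed audio markers might look in a web-based AR setting, using the standard Web Speech and Web Audio APIs. The SceneObject shape, the specific pitch and rate values, and the marker sound are illustrative assumptions, not part of the group’s actual toolkit.

```typescript
// Illustrative sketch (not the group's actual toolkit): marking the
// physical/digital divide with speech style, and placing a spatialized
// audio marker, using the standard Web Speech and Web Audio APIs.

type ObjectKind = "physical" | "digital";

interface SceneObject {
  label: string;                                  // e.g. "coffee mug", "shared annotation"
  kind: ObjectKind;
  position: { x: number; y: number; z: number };  // meters, relative to the listener
}

// Speak a label with a voice style that signals whether the object
// exists in the room or only in the AR layer (pitch/rate values are
// arbitrary choices for illustration).
function announce(obj: SceneObject): void {
  const utterance = new SpeechSynthesisUtterance(obj.label);
  if (obj.kind === "digital") {
    utterance.pitch = 1.4;   // higher, faster voice for virtual content
    utterance.rate = 1.1;
  } else {
    utterance.pitch = 0.9;   // lower, slower voice for physical objects
    utterance.rate = 0.95;
  }
  speechSynthesis.speak(utterance);
}

// Loop an audio cue at an object's position so it can be located by ear;
// PannerNode renders the sound in 3D space relative to the listener.
function placeAudioMarker(
  ctx: AudioContext,
  cue: AudioBuffer,
  obj: SceneObject
): AudioBufferSourceNode {
  const source = ctx.createBufferSource();
  source.buffer = cue;
  source.loop = true;

  const panner = new PannerNode(ctx, {
    panningModel: "HRTF",
    positionX: obj.position.x,
    positionY: obj.position.y,
    positionZ: obj.position.z,
  });

  source.connect(panner).connect(ctx.destination);
  source.start();
  return source;
}
```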
The other main thrust of the work is exploring how AR can make virtual collaboration more accessible, both in existing and emerging applications. The first application Guo’s group is tackling is collaborative document editing. While text editing itself is amenable to audio interfaces, collaborative overlays “create a third dimension” that needs to be interpreted simultaneously, Guo says. With a browser extension called CollabAlly, the group intends to extract the collaborative activity happening in real time, such as which collaborators are viewing the document and the comments and changes they make, and provide an audio representation of it.
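As a rough illustration of that idea, the sketch below shows how a browser content script could watch a collaborative document for newly inserted comments and surface them as speech, using the standard MutationObserver and Web Speech APIs. The .doc-comment selector and the announcement wording are hypothetical placeholders and do not reflect CollabAlly’s actual implementation.

```typescript
// Illustrative sketch (not CollabAlly's actual implementation): a content
// script that announces newly inserted comments in a collaborative
// document as speech. The ".doc-comment" selector is a placeholder for
// whatever markup a given editor uses.

function speak(text: string): void {
  speechSynthesis.speak(new SpeechSynthesisUtterance(text));
}

const observer = new MutationObserver((mutations) => {
  for (const mutation of mutations) {
    for (const node of mutation.addedNodes) {
      if (node instanceof HTMLElement && node.matches(".doc-comment")) {
        // Surface the collaborative event as audio rather than a visual overlay.
        speak(`New comment: ${node.textContent ?? ""}`);
      }
    }
  }
});

// Watch the whole document for inserted comment elements.
observer.observe(document.body, { childList: true, subtree: true });
```

Announcements like these serialize activity that sighted collaborators absorb at a glance, which makes complementary navigation aids important.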
“CollabAlly will also leverage accessible browser dialog popups, voice-guided interactions, and other accessible interaction techniques to allow blind users to easily navigate through the document space,” says Guo.
These two directions culminate in the group’s work on new collaborative AR technologies. This presents the greatest challenges, Guo says, since blind users will need to simultaneously process information about their physical environment, the AR content around them, and continuous changes made by collaborators. The group’s solutions will take advantage of the building-block approach to reconstruct basic interactions in an accessible interface, as well as techniques that let end users customize virtual collaboration spaces based on their own physical environment.