Understanding the Use of a Large Language Model-Powered Guide to Make Virtual Reality Accessible for Blind and Low Vision People

2026-03-10

Human-Computer Interaction · Artificial Intelligence · Emerging Technologies
AI summary

The authors studied how an AI guide can help people who are blind or have low vision use social virtual reality. They built a guide powered by a large language model and tested it with 16 participants in VR environments where members of the research team posed as other users. Participants treated the guide mainly as a tool when alone, but acted more socially toward it when others were around, giving it nicknames, excusing its mistakes because of its appearance, and encouraging the other users to interact with it. The authors offer design recommendations for future AI guides to make VR more accessible for blind and low vision users.

virtual reality · social VR · accessibility · blind and low vision (BLV) · large language model · AI guide · user study · human-computer interaction · assistive technology
Authors
Jazmin Collins, Sharon Y Lin, Tianqi Liu, Andrea Stevenson Won, Shiri Azenkot
Abstract
As social virtual reality (VR) grows more popular, addressing accessibility for blind and low vision (BLV) users is increasingly critical. Researchers have proposed an AI "sighted guide" to help users navigate VR and answer their questions, but it has not been studied with users. To address this gap, we developed a large language model (LLM)-powered guide and studied its use with 16 BLV participants in virtual environments with confederates posing as other users. We found that when alone, participants treated the guide as a tool, but treated it companionably around others, giving it nicknames, rationalizing its mistakes with its appearance, and encouraging confederate-guide interaction. Our work furthers understanding of guides as a versatile method for VR accessibility and presents design recommendations for future guides.