Apple Developing AI-Powered Map for the Visually Impaired

Technology continues to expand what is possible for people with visual impairments, yet navigating daily life independently remains a significant challenge. Complex urban environments and inadequate infrastructure make wayfinding especially difficult. To address these needs, Apple and Columbia University have developed an AI-powered system that lets visually impaired individuals virtually explore streets.
Introducing SceneScout: Virtual Street Exploration

Apple and Columbia University have collaboratively developed a new AI-powered system to help visually impaired or low-vision individuals explore streets more safely and independently. This prototype, named SceneScout, is currently in the research phase and can describe street views using Apple Maps data.
SceneScout aims to reduce the uncertainty that visually impaired and low-vision individuals face when deciding whether to travel through an unfamiliar environment. Whereas most existing tools provide real-time, turn-by-turn directions, the new system lets users learn about their surroundings before setting out.
How SceneScout Works: Two Exploration Modes

The system converts visual data into text descriptions, so users can virtually explore the details they might encounter along a route or the overall character of a specific area. It offers two usage modes (a minimal code sketch of this design follows the list):
- Route Preview Mode: In this mode, users can learn about sidewalk conditions, intersection layouts, and bus stops along a planned route before starting their journey.
- Virtual Exploration Mode: Users describe what they are interested in, and the system narrates the surrounding streets accordingly as they move freely through the imagery.
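Apple and Columbia have not published SceneScout's code, so the sketch below is only a hypothetical reconstruction of the two-mode design described above. The `Mode` enum, `ExplorationRequest` dataclass, and the `generate_description` stub are illustrative names of our own, not Apple's actual implementation.

```python
from dataclasses import dataclass, field
from enum import Enum


class Mode(Enum):
    ROUTE_PREVIEW = "route_preview"               # follow a fixed route step by step
    VIRTUAL_EXPLORATION = "virtual_exploration"   # roam freely, guided by interests


@dataclass
class ExplorationRequest:
    mode: Mode
    panorama_ids: list[str]                       # street-view images along the route or area
    interests: list[str] = field(default_factory=list)  # only used in exploration mode


def generate_description(panorama_id: str, prompt: str) -> str:
    """Stub for the vision-language call (see the GPT-4o sketch in the next section)."""
    return f"[{panorama_id}] {prompt}"


def describe(request: ExplorationRequest) -> list[str]:
    """Turn a sequence of panoramas into accessible text descriptions."""
    if request.mode is Mode.ROUTE_PREVIEW:
        # Route preview: emphasize pedestrian infrastructure along the way.
        prompt = "Describe the sidewalk condition, intersection layout, and bus stops."
    else:
        # Virtual exploration: steer the description toward the user's interests.
        prompt = f"Describe this street, highlighting: {', '.join(request.interests)}."
    return [generate_description(pid, prompt) for pid in request.panorama_ids]


# Example: a free-roaming tour focused on the user's stated interests.
tour = ExplorationRequest(Mode.VIRTUAL_EXPLORATION, ["pano_001"], ["cafes", "benches"])
print(describe(tour))
```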
AI Integration and Initial Test Results
SceneScout pairs a GPT-4o-based AI model with panoramic images obtained from Apple Maps. The system analyzes the visual data and renders it as short, medium, or long text descriptions, and the web interface is designed to be compatible with screen reader technologies.
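Neither the prototype's code nor its prompts are public, but the pipeline described here (panorama in, screen-reader-friendly text out, at a chosen length) can be approximated with the public OpenAI Python client. The model choice, prompt wording, `LENGTH_HINTS` presets, and the idea of passing the panorama by URL are all assumptions for illustration; the real system works with Apple Maps imagery, not arbitrary image URLs.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical presets mirroring the short/medium/long description options.
LENGTH_HINTS = {
    "short": "a single sentence",
    "medium": "about three sentences",
    "long": "a detailed paragraph",
}


def describe_panorama(image_url: str, length: str = "medium") -> str:
    """Ask a GPT-4o model to narrate a street panorama for a blind pedestrian."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": (
                    f"Describe this street view for a blind pedestrian in "
                    f"{LENGTH_HINTS[length]}. Mention sidewalks, crossings, and "
                    "transit stops, and say nothing you cannot actually see."
                )},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }],
    )
    return response.choices[0].message.content
```

The closing instruction in the prompt is the kind of guardrail such a system would need against hallucinated details; as the test results below show, guard instructions alone do not fully prevent them.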
Initial tests involved 10 visually impaired participants, all experienced screen-reader users. They found the Virtual Exploration mode particularly useful, but the results also exposed shortcomings: roughly 72% of the generated descriptions were accurate, while the rest contained errors. For instance, a crossing without an audible signal was sometimes described as having one, and some street signs were misidentified.