Daniel Kish has been blind since the age of one, but he navigates the world with the help of echolocation. He makes small clicking sounds, and the echoes reflected from the surfaces around him let his brain build images of the world, not unlike a bat’s sonar. When neuroscientists imaged his brain while he was echolocating, they found that the visual areas were working much as they would in a sighted person.
Now, Noelle Stiles and Shinsuke Shimojo, from the California Institute of Technology, are trying to take echolocation to the next level with a pair of glasses they have built that converts images into sounds. The results have been published in Scientific Reports.
Stiles and Shimojo asked sighted volunteers to match images of natural textures with the sounds that seemed to fit them best, while blind volunteers felt the textures and chose a matching sound. Those data were fed into an algorithm that produces an intuitive video-to-sound conversion. For instance, dark patches get a low pitch and bright patches a high pitch.
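The paper does not spell out the mapping in this article, but the core idea of such sensory-substitution devices can be sketched with a simple assumed rule: pixel brightness is mapped linearly to pitch, and an image row is scanned left to right, one tone per pixel. The frequency range, scan convention, and function names below are illustrative assumptions, not the researchers' actual algorithm.

```python
import math

# Assumed pitch range: black (brightness 0) -> 200 Hz, white (255) -> 2000 Hz.
LOW_HZ = 200.0
HIGH_HZ = 2000.0

def brightness_to_pitch(brightness: int) -> float:
    """Linearly map an 8-bit brightness value to a frequency in hertz."""
    if not 0 <= brightness <= 255:
        raise ValueError("brightness must be in 0..255")
    return LOW_HZ + (HIGH_HZ - LOW_HZ) * brightness / 255.0

def image_row_to_tones(row, sample_rate=8000, tone_ms=50):
    """Convert one row of pixel brightnesses into a sequence of sine-wave
    samples, scanning left to right and playing one short tone per pixel."""
    samples = []
    samples_per_tone = int(sample_rate * tone_ms / 1000)
    for b in row:
        freq = brightness_to_pitch(b)
        samples.extend(
            math.sin(2 * math.pi * freq * t / sample_rate)
            for t in range(samples_per_tone)
        )
    return samples
```

A row of alternating dark and bright pixels would then sound like a rapid alternation of low and high tones, which is what makes such a mapping learnable without training.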
When blind people who had never encountered the idea before used the device, Stiles and Shimojo found that they matched shapes to sounds as well as people who had been trained on it, scoring about 33% better than chance alone. Crucially, when the researchers reversed the algorithm, a control group of volunteers found the task much harder.
In its current form, the device delivers sound through headphones, which is not ideal because they can block out the ambient noises that blind people rely on. One workaround would be bone conduction, whereby the converted audio is fed to the inner ear through vibrations of the skull.
The device is still in development, but Stiles and Shimojo have high hopes.
For instance, relying on his clicks alone, Kish can cycle on normal roads, climb trees, and do many other things once thought impossible for the blind. A pair of smart glasses could allow him and other blind people to go even further.