Normally, we think of data as visual, something we transform into charts and graphs when we want to see trend lines. But the ear is exquisitely sensitive and has abilities the eye doesn’t.
Lisa Muratori is a professor of physical therapy who works with patients suffering from neurological conditions, like Parkinson’s, that might impair their strides. “Gait is important,” she notes—if you’re walking too slowly or unevenly, you’re more liable to have accidents.
One tricky part of her practice is helping a patient figure out when their gait is drifting away from a stable pattern. Muratori’s solution: Put sensors in their shoes, which creates a terrific stream of data. The numbers show precisely when that walk goes wonky. But how should she show the patients that data? If you’re trying not to fall while wandering down the sidewalk, it’s crazy to peer at a screen.
So Muratori shifted senses, from the eyes to the ears, training patients to listen to their data. She collaborated with Margaret Schedel, a professor of music at Stony Brook University, to design software that picks up when a person’s stride goes off-kilter and alerts them by distorting the sound of an audiobook or music or whatever is playing in their earbuds. That way patients can instantly—and almost subconsciously—perceive errors and correct them. It’s an example of an intriguing new evolution in our big-data world: sonification, expressing data through sound.
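The article doesn't describe the software's internals, but the core idea — watch a stream of gait measurements, detect drift from a stable pattern, and turn the deviation into an audio-distortion level — can be sketched in a few lines. Everything below is a hypothetical illustration, not the researchers' actual implementation: stride intervals, the rolling-baseline comparison, and the tolerance thresholds are all assumptions.

```python
# A minimal sketch of the sonification idea: take a stream of stride
# intervals (seconds between footfalls, from hypothetical shoe sensors),
# compare each new interval to a rolling baseline, and map the deviation
# to a distortion amount (0.0 = clean playback, 1.0 = heavily distorted).
from collections import deque

def distortion_level(interval, baseline, tolerance=0.05, max_dev=0.25):
    """Map a stride interval's relative deviation from baseline to [0, 1]."""
    dev = abs(interval - baseline) / baseline
    if dev <= tolerance:          # within normal variation: leave audio clean
        return 0.0
    # Scale linearly from the tolerance edge up to full distortion.
    return min((dev - tolerance) / (max_dev - tolerance), 1.0)

def monitor(strides, window=10):
    """Yield a distortion level for each stride after a warm-up window."""
    recent = deque(maxlen=window)
    for interval in strides:
        if len(recent) == window:
            baseline = sum(recent) / window
            yield distortion_level(interval, baseline)
        recent.append(interval)

# Steady gait for twelve strides, then one suddenly slow stride:
strides = [1.0] * 12 + [1.3]
levels = list(monitor(strides))   # → [0.0, 0.0, 1.0]
```

In a real system the distortion level would drive an audio effect (pitch shift, gain reduction, filtering) applied to whatever is playing in the earbuds, so the listener feels the error without looking at anything.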
Read More at Wired