Analyzing Data Through Sound
Why isn’t more data analysis done through sound? Hearing is one of the two senses nearly all my electronics engage. The other is sight, which is the main (often the only) way people analyze large data sets. Why not take advantage of the other?
It seems like our hearing is primed to pick up minute changes, just as much as our sight. Blind individuals can learn to navigate the world using only sound. As I sit, hurtling along on this train, I can hear every shake of the car. That’s more than I can say for my feet feeling the shake through the floor.
What are the modalities of sound? In what ways can you convey a quantifiable change?
- Volume
- Pitch
- Interval
- Direction (in stereo output)
- Distortion
- Inflection
- Combination (chords)
What if there were a standardized method of hearing your data? I’m thinking of something akin to a line graph, but conveyed over your headphones. For an extremely simple example, you can map two-dimensional visual space into two dimensions of sound: at its barest, x and y become volume and pitch.
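Here’s a rough sketch of that mapping in Python. Nothing about it is standardized; the data points, frequency range, and note length are placeholder choices, just to make the idea concrete:

```python
import wave
import numpy as np

SAMPLE_RATE = 44100      # samples per second
NOTE_SECONDS = 0.25      # how long each data point sounds

# Hypothetical (x, y) series, both normalized to [0, 1].
points = [(0.2, 0.1), (0.5, 0.3), (0.9, 0.8), (0.4, 0.2), (0.7, 0.9)]

def tone(volume, pitch, seconds=NOTE_SECONDS, rate=SAMPLE_RATE):
    """Render one data point as a sine tone: x -> amplitude, y -> frequency."""
    freq = 220 + pitch * (880 - 220)                  # map y onto 220-880 Hz
    t = np.linspace(0, seconds, int(rate * seconds), endpoint=False)
    return volume * np.sin(2 * np.pi * freq * t)

samples = np.concatenate([tone(x, y) for x, y in points])
pcm = (samples * 32767).astype(np.int16)              # 16-bit PCM

with wave.open("data.wav", "wb") as f:
    f.setnchannels(1)         # mono
    f.setsampwidth(2)         # 2 bytes per sample
    f.setframerate(SAMPLE_RATE)
    f.writeframes(pcm.tobytes())
```

Each point becomes a quarter-second tone, and playing `data.wav` back is the auditory equivalent of tracing the line graph left to right.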
Auditory analysis of data seems ripe for passive work, too. You could listen to an audio stream generated from your site’s live visitors while reading a novel. It’s hard to miss a discordant note or a change in volume, even when your attention is elsewhere.
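As a sketch of what that passive stream might sound like: each visitor event becomes a short beep, with (say) response time mapped to pitch. The event source below is simulated and the mapping is arbitrary; a real version would read from your analytics feed instead.

```python
import random
import time
import numpy as np
import sounddevice as sd    # third-party: pip install sounddevice

SAMPLE_RATE = 44100

def beep(pitch, seconds=0.08, rate=SAMPLE_RATE):
    """A short sine beep; in this sketch, slower requests sound higher."""
    freq = 300 + pitch * 900
    t = np.linspace(0, seconds, int(rate * seconds), endpoint=False)
    return 0.3 * np.sin(2 * np.pi * freq * t)

def fake_visitor_events():
    """Stand-in for a live visitor feed: yields a normalized response time."""
    while True:
        time.sleep(random.uniform(0.2, 1.0))
        yield random.uniform(0.0, 1.0)

for response_time in fake_visitor_events():
    sd.play(beep(response_time), SAMPLE_RATE)
    sd.wait()               # block until the beep finishes
```

A sudden run of high notes, or a stretch of silence, stands out even while you’re deep in the novel.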
Josh Beckman