Evolver.fm has itself evolved; its editor edits Spotify Insights, which explores music and how people listen to it through data, while Evolver.fm focuses mostly on the music hacking scene (Music Hack Day projects and other next-generation ideas). If you know about a music hack or fledgling app we might want to cover, please send us a tip.
The winter break is over, and with it, for many of us, all of that driving. It can be nice. The car is one of the last places where many of us focus on music the way hi-fi enthusiasts used to, seated in the middle of their couches for the perfect signal from each speaker, concentrating on each note.
Driving through some rougher weather a few days ago, I decided to check out the local radio stations, and somehow misheard a Nirvana song, as can happen when listening to a fuzzy station with freezing rain and other road noise in competition, all while listening at a low volume for the benefit of a sleeping backseat passenger.
My brain picked up on just a few elements of the vocals, and somehow reconstructed an entirely different song out of what I was hearing. It only lasted about a minute, until my brain recognized the actual song and started filling in the missing parts correctly.
This has happened to me before, and a quick survey of the car's occupants plus a relative revealed that all three of us have heard songs as completely different songs when they were playing at low volume in a bad listening environment: a car in freezing rain, say, or one without much insulation from normal highway road noise.
If this has happened to you, you know that once you recognize a super-familiar song, your brain adds the parts of the song you can’t hear. As a result, you can actually experience the whole song while not hearing all of the notes.
This started me thinking… Could this be the reason we only ever seem to hear the same 50 or so songs on any given radio station? Because we need to be able to “hear” the songs in order to stay on the station, and we won’t be able to do that with unfamiliar music, because static, road noise, and low listening volumes make it impossible?
Records have been mastered for poor listening environments for years (Phil Spector’s “wall of sound” works on AM radio, for instance), but this is something different: Programming music in part based on the idea that people need to be super-familiar with a song in order to be able to hear it in a noisy listening environment.
Right now in Las Vegas, hordes of audio and auto manufacturers are debuting connected cars, announcing collaborations, and meeting behind closed doors to try to extend streaming to more cars (which is incidentally great news for recording artists and sound recording copyright holders, because AM and FM radio don't pay them royalties in the US). And when they do, they might heed this lesson: When there's lots of background noise in a car (sensors could help detect this, or the car could pass its speed and weather information to the music player), it could pay to program more familiar, rather than obscure, music.
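As a rough illustration of that idea, here's a minimal Python sketch of a noise-aware track picker. Everything in it is hypothetical — the `Track` class, the familiarity scores, the noise estimate, and the 70 dB threshold are all made-up stand-ins, not any real car or streaming API:

```python
# Hypothetical sketch: bias a streaming queue toward familiar tracks
# when the estimated cabin noise is high. All names and numbers here
# are illustrative assumptions, not a real automotive or streaming API.

from dataclasses import dataclass

@dataclass
class Track:
    title: str
    familiarity: float  # 0.0 (never heard) .. 1.0 (super-familiar), e.g. from play counts

def estimate_noise_db(speed_mph: float, raining: bool) -> float:
    """Crude stand-in for a real cabin-noise estimate from car sensors."""
    base = 55.0 + 0.3 * speed_mph        # road and wind noise grow with speed
    return base + (8.0 if raining else 0.0)

def pick_next(tracks: list[Track], speed_mph: float, raining: bool) -> Track:
    """Above ~70 dB, prefer the most familiar track; in a quiet cabin, allow discovery."""
    noise = estimate_noise_db(speed_mph, raining)
    if noise > 70.0:
        return max(tracks, key=lambda t: t.familiarity)
    return min(tracks, key=lambda t: t.familiarity)  # quiet cabin: try something new

queue = [Track("Deep cut", 0.1), Track("Worn-out favorite", 0.95)]
print(pick_next(queue, speed_mph=65, raining=True).title)   # noisy highway in rain: familiar
print(pick_next(queue, speed_mph=20, raining=False).title)  # quiet street: obscure
```

A real system would of course use per-listener familiarity (play counts, skips) rather than a single score, which is exactly the personalization advantage discussed below.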
Of course, “obscure” music means different things to different people; another advantage of personalized streaming over broadcast is that it can play all of the “obscure” stuff that’s super-familiar to you, and you’ll be able to hear it as clearly as the well-worn favorites of most FM radio stations.
Image by Seattle Municipal Archives (Flickr: Car in rain, 1980s) [CC BY 2.0], via Wikimedia Commons