It’s a popular debate among digital music insiders: Which is better at suggesting music, humans or machines?
On the human-powered side, you'll find Pandora and iTunes Genius. The former relies on musicologists, who assign characteristics to songs to create its popular user-customizable artist stations. Apple's iTunes Genius, like Netflix's movie recommendation system, relies on collaborative filtering, an algorithmic approach driven entirely by human listening data: if a lot of people have both the Beatles and Aerosmith in their collections, Genius will recommend music by one to fans of the other.
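To make the Beatles/Aerosmith example concrete, here is a minimal sketch of the co-occurrence idea behind collaborative filtering. This is not Apple's actual Genius algorithm (which is proprietary); the artist libraries and the `recommend` helper are invented for illustration.

```python
from collections import defaultdict

# Toy user libraries (hypothetical data, for illustration only).
collections = [
    {"The Beatles", "Aerosmith", "Led Zeppelin"},
    {"The Beatles", "Aerosmith"},
    {"Aerosmith", "AC/DC"},
    {"The Beatles", "The Rolling Stones"},
]

def recommend(seed, collections, top_n=3):
    """Rank artists by how often they co-occur with `seed`
    in the same user's collection."""
    counts = defaultdict(int)
    for library in collections:
        if seed in library:
            for artist in library - {seed}:
                counts[artist] += 1
    return sorted(counts, key=counts.get, reverse=True)[:top_n]

# Aerosmith co-occurs with the Beatles most often, so it ranks first.
print(recommend("The Beatles", collections))
```

Real systems refine this with normalization (so hugely popular artists don't dominate every list), but the core signal is the same: overlapping human collections.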
Pandora founder Tim Westergren told me at one point that his staffers were able to process about 13,000 songs per month — a number that may have since risen, but still explains why its catalog of hand-picked, human-evaluated music has yet to reach one million songs.
At the other end of the human/machine continuum, you’ll find stuff like The Echo Nest’s audio analysis tool, which parses raw musical data to find similarities between songs based purely on the way they sound. (The Echo Nest publishes Evolver.fm.)
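The machine end of the spectrum can be sketched too. The following is a simplified, hypothetical illustration of acoustic similarity, not The Echo Nest's actual analysis pipeline: each track is reduced to a vector of audio features (the tempo/loudness/energy values here are made up), and tracks are compared by cosine similarity, so songs that sound alike score close to 1.0.

```python
import math

# Hypothetical feature vectors: [tempo (BPM), loudness (dB), energy 0-1].
tracks = {
    "track_a": [120.0, -8.5, 0.80],
    "track_b": [122.0, -9.0, 0.78],  # sounds a lot like track_a
    "track_c": [70.0, -20.0, 0.20],  # slower, quieter, calmer
}

def cosine_similarity(u, v):
    """Cosine of the angle between two feature vectors (1.0 = identical direction)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

sim_ab = cosine_similarity(tracks["track_a"], tracks["track_b"])
sim_ac = cosine_similarity(tracks["track_a"], tracks["track_c"])
print(sim_ab > sim_ac)  # the two similar-sounding tracks score higher
```

No human opinion enters this calculation at all; the similarity comes purely from the signal, which is exactly what distinguishes this approach from Pandora's musicologists.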
EMusic's new Radio service, free to subscribers (the rest of us can listen for up to ten hours), takes both approaches, human and machine, to produce streaming radio playlists. Basically, eMusic's editorial staff, long regarded as some of the most knowledgeable tastemakers in the game, uses The Echo Nest's musical information database to curate music for the service. That way, they can consider including music they don't even know about yet, whether because it was just released, because it's too obscure, or because it's simply not on their minds for whatever reason.
As eMusic CTO Richard Caccappolo put it, this represents “a cure for writer’s block” for eMusic’s editorial staff. Humans are still in control of the eMusic Radio Beta, but they rely on machines — just like most of us do in real life.
To be fair, The Echo Nest's API also harnesses human input from the web, analyzing what people are saying about music online, so the "machine" part of this equation actually incorporates human input too. So eMusic Radio Beta might best be expressed as follows:
eMusic Radio Beta = Humans x (Machines + Humans)