All it would take is a room of user-interface experts experimenting with different models to “push” your music to you in interesting new ways, based on artist similarity, based on your listening history, based on mood, based on time of day, based on time of year — hell, based on the weather and on news feeds. It wouldn’t merely be flashy; it would, I think, profoundly change the way you interact with your music library.
The groundwork has, in a sense, already been laid. Many music services offer tailored experiences: rate a song up and you hear it more often (sometimes similar songs, too); rate it down and you never hear it again.
You could create a personalized system where you identify your mood, or what sort of music you’re interested in, or what color socks you’re wearing…then rate songs up or down. In theory, with time, it could train itself to serve music based on those factors, entirely on your own hardware.
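A toy version of that on-device idea is just bookkeeping: for every (context, song) pair, remember how often you rated up versus down, and prefer songs with the best track record in the current context. Everything here (the class name, the smoothing choice) is a hypothetical sketch, not any real service’s design:

```python
from collections import defaultdict

class MoodRecommender:
    """Toy on-device recommender: learns which songs you rate up
    in which context (mood, time of day, sock color...)."""

    def __init__(self):
        # (context, song) -> [up votes, down votes]
        self.counts = defaultdict(lambda: [0, 0])

    def rate(self, context, song, liked):
        votes = self.counts[(context, song)]
        votes[0 if liked else 1] += 1

    def score(self, context, song):
        ups, downs = self.counts[(context, song)]
        # Laplace smoothing: an unrated song scores a neutral 0.5
        # instead of being banished or glorified on zero evidence.
        return (ups + 1) / (ups + downs + 2)

    def pick(self, context, songs):
        return max(songs, key=lambda s: self.score(context, s))
```

The smoothing matters: without it, one accidental thumbs-down would permanently bury a song, which is exactly the kind of brittleness the simple rate-up/rate-down services suffer from.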
To do it well would be expensive. A central database could combine the data from many people, perhaps weighting it by overall preferences (for instance, I sometimes like The Distillers, so my opinion on when to hear Paul Simon might count for less when serving someone who only listens to light rock).
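That weighting idea is basically user-based collaborative filtering: when predicting whether a song suits you, each other listener’s vote is scaled by how similar their overall taste is to yours. The names and data below are invented for illustration; cosine similarity is just one common choice of taste measure:

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse rating dicts (song -> +1/-1)."""
    common = set(u) & set(v)
    dot = sum(u[s] * v[s] for s in common)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def predicted_rating(me, others, song):
    """Average other listeners' ratings of `song`, weighting each
    by how much their taste overlaps with mine; listeners with no
    overlap (or opposite taste) contribute little or nothing."""
    num = den = 0.0
    for ratings in others:
        if song in ratings:
            w = max(cosine(me, ratings), 0.0)
            num += w * ratings[song]
            den += w
    return num / den if den else 0.0
```

So a fellow Distillers fan who dislikes Paul Simon drags my predicted rating down, while a light-rock listener’s enthusiasm barely registers, because our rating vectors hardly overlap.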
I think the problem is that a truly intelligent, easy-to-use service would take a massive amount of infrastructure. It would have to deal with music at the song, album, and artist level…tying all of those to musical taste and emotional description. It would either have to be built on a sales model, or be done by a company with enough market penetration (read: Google) to mine data at a truly massive scale.