Smart Speakers and The Consumption of Music

In a previous post, in which I announced my new position as Insights Strategist for Universal Music Group, I expressed an interest in how media consumption behavior is evolving as a result of connected devices.

Source: Edison Research

With the dawn of the Internet of Things came smart cars, connected home automation devices, and wearable technologies, among other nifty connected gadgets. While these are all fascinating smart technologies, none has taken hold in US households as much as the smart speaker. In January 2018, smart speakers were still being used by consumers who fell within the “Early Adopters” and “Early Majority” stages of the innovation adoption curve. According to an Adobe Analytics study, almost 50% of US consumers will own a smart speaker after the 2018 holiday season.

The smart speaker revolution is undeniable.

What does this mean for the consumption of music?

In the same Adobe study, 70% of respondents reported using their smart speakers to listen to music, making it the primary activity, followed by checking the weather (64%) and setting alarms/reminders (46%).

In my personal experience, a smart speaker removes much of the friction from listening to music: I don’t need to manually look up an artist, album, song, or genre. There’s a clear consumer pain point being addressed here. However, since most smart speakers don’t have a screen, the results for voice queries have to be much more accurate. When we look up an artist on a search engine or music streaming platform, we’re given several songs or albums to choose from. With no screen to refer to, consumers get the one algorithmically driven result the smart speaker deems most appropriate. That means these devices have one shot to get the customer experience right and pull up the “right” song.
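To make that contrast concrete, here’s a minimal sketch in Python. The track names are real songs, but the ranking scores and the search function are entirely hypothetical stand-ins for whatever a streaming platform actually does under the hood:

```python
from typing import NamedTuple

class Track(NamedTuple):
    title: str
    artist: str
    score: float  # hypothetical relevance score from some ranking model

def search_catalog(query: str) -> list[Track]:
    """Stand-in for a streaming platform's search; returns ranked candidates."""
    # Invented results for the spoken query "play hurt"
    return [
        Track("Hurt", "Nine Inch Nails", 0.91),
        Track("Hurt", "Johnny Cash", 0.89),
        Track("Hurt Somebody", "Noah Kahan", 0.65),
    ]

candidates = search_catalog("play hurt")

# On a screen, the user can scan the ranked list and pick for themselves:
for track in candidates:
    print(f"{track.title} by {track.artist}")

# On a voice-only smart speaker, the device must commit to a single answer:
top_pick = candidates[0]
print(f"Now playing: {top_pick.title} by {top_pick.artist}")
```

Notice how close the top two scores are: on a screen that ambiguity is harmless, but over voice the device silently makes the call for you.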

Keep in mind choice paralysis (there are times when I want to listen to music but feel a little overwhelmed by the vast catalogue of music out in the world). As consumers interact with smart speakers in more intuitive and natural ways than written queries allow, genre and mood queries will play a key role in music consumption. But in today’s melting pot of music genres, how does one categorize a genre-bending band like Gorillaz, for example? In an ethnographic study conducted by Edison Research, a toddler asks Alexa to play “Elsa” and “Frozen”. Beyond the fact that pronunciation is an essential factor for smart speakers to handle (think about how many consumers might be mispronouncing an artist’s name or a lyric), the device has to comprehend that the “Elsa” and “Frozen” prompts mean it should play “Let It Go”. But doesn’t this change if there’s an artist named “Elsa”?
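Here’s a toy illustration of what that resolution step might look like: a lookup table of aliases (character names, film titles) mapping to a canonical track, plus a check for collisions like an artist actually named “Elsa”. The alias table and matching logic are pure assumption on my part, not how Alexa or any real assistant works:

```python
# Hypothetical alias table: spoken phrases -> canonical track.
# Real systems presumably use far richer models; this is just a sketch.
ALIASES = {
    "elsa": "Let It Go (Idina Menzel)",
    "frozen": "Let It Go (Idina Menzel)",
    "let it go": "Let It Go (Idina Menzel)",
}

# Simulate a catalog collision: an artist who is literally named "Elsa".
ARTISTS = {"elsa"}

def resolve(utterance: str) -> str:
    phrase = utterance.lower().strip()
    track_match = ALIASES.get(phrase)
    artist_match = phrase in ARTISTS

    if track_match and artist_match:
        # Ambiguous: with no screen, the device must guess or ask back.
        return f"Did you mean the song {track_match} or the artist '{phrase}'?"
    if track_match:
        return f"Playing {track_match}"
    if artist_match:
        return f"Playing top songs by {phrase}"
    return "Sorry, I couldn't find that."

print(resolve("Frozen"))  # resolves cleanly to the song
print(resolve("Elsa"))    # becomes ambiguous once the artist exists
```

The moment the artist “Elsa” enters the catalog, the once-unambiguous toddler query turns into exactly the kind of one-shot judgment call described above.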

All this means there’s going to be a laser-like focus on getting music metadata right, so the right music can be served up at the right moment.
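To give a feel for why that metadata matters, here’s a hypothetical catalog with invented mood and genre tags, and a mood query run against it. If the tags are missing or wrong, a request like “play something chill” simply can’t be served well:

```python
# Hypothetical catalog entries; all tags are invented for illustration.
catalog = [
    {"title": "Feel Good Inc.", "artist": "Gorillaz",
     "genres": ["alternative", "hip hop", "electronic"],  # genre-bending
     "moods": ["upbeat"]},
    {"title": "Clair de Lune", "artist": "Claude Debussy",
     "genres": ["classical"],
     "moods": ["chill", "calm"]},
]

def play_by_mood(mood: str) -> None:
    """Pick the first track whose mood tags match the spoken request."""
    matches = [t for t in catalog if mood in t["moods"]]
    if matches:
        pick = matches[0]
        print(f"Playing {pick['title']} by {pick['artist']}")
    else:
        print(f"No tracks tagged '{mood}' in the catalog")

play_by_mood("chill")
```

Note that Gorillaz only show up under a mood or genre query if someone tagged them correctly in the first place, which is precisely the categorization problem raised above.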

This is an extremely fascinating time to be alive. Voice is here and seems to be the future.

P.S. While there might be some apprehension among digital immigrants about using smart speakers, isn’t it fascinating to think that the toddler from that Edison study is going to grow up naturally accepting Alexa as a digital assistant?