A Quick Update...

Fumigations suck.

My apartment building’s being fumigated for termites this week. I’m not one to make excuses for myself, but (ok, you can cue the violins here…) because of the necessary preparations for the fumigation, I wasn’t able to write this week’s post. I know that life gets in the way sometimes, but I can’t help but feel a little guilty about not following through with my goal of creating a weekly post. That being the case, instead of promising a post once a week on my blog, I’ll be posting once every two weeks from here on out.

This extra week will also give me a bit more time to write up more fleshed out posts. I have a couple interesting posts in the pipeline covering topics from the UX of Amazon Alexa to Deezer, but again, I need a little more time to produce better content, instead of spewing the content out for the sake of having a post for the week.

All this being said, I thought I’d leave you guys with some thought-provoking figures I came across in Statista’s Digital Economy Compass 2018 report that was published early this year.

When it comes to biometric technology, I would have expected wearables to outpace smartphones until the market’s saturated. However, according to Statista, smartphones will outpace wearables in 2019 in terms of the share of devices that include biometric sensors of some sort.

What are the implications for health and fitness related companies? Clearly there’ll be more consumer data to collect, but, when looking at the evolving digital landscape from the music industry perspective, this type of data provides further opportunities to contextualize music/playlists - an obvious unique selling proposition for major music streaming players like Spotify and fitness-related startups like Studio, which are working towards building an “exciting digital experience far beyond what a traditional treadmill offers”.

How The Music Album Strengthens Brand Relationships

In a recent write-up from Music Business Worldwide, the consumption of albums versus singles was discussed. One idea stood out: the nature of music streaming platforms encourages fans to develop a less-committed relationship with new artists. It’s clear that in today’s attention economy, consumer behavior is shifting towards consuming singles as opposed to entire albums. Looking at the streaming consumption of Drake’s 25-track album Scorpion from earlier this year, 63% of global Spotify streams came from three songs: “God’s Plan”, “In My Feelings” and “Nice for What”.

This brought up an internal debate on how artists can establish a more committed relationship with consumers, turning them into superfans. I hypothesized that the key to driving fandom is emotion.

The Artist As A Brand

Through academic studies on branding, like the one conducted by Robert Heath, David Brandt, Agnes Nairn, and Eivi Lyon, we’ve come to understand that it’s the emotional creativity, and not the rational message, in advertising that builds brand relationships. How does this idea apply to the consumer’s relationship with an artist?

When we look at communication appeal through the lens of Daniel Kahneman’s “Thinking, Fast and Slow”, an emotionally-driven form of communication makes a quicker and greater impact on the System 1 part of our brain, which is capable of making quick decisions based on very little information. Emotions and feelings will always be formed pre-cognitively and pre-attentively before any information processing takes place, as argued by neuroscientist Antonio Damasio. As a result, we can establish that emotions influence us in ways we don’t even realize.

Furthermore, not only is mental engagement greater when there’s a greater level of emotion in the message, but emotion helps drive long-term memory encoding. According to Peter Pynta, “what’s key to turning [a recent memory] into [a] long-term memory is how intensely emotion is experienced. The more intense, the more likely it is to be remembered.”

So, emotions influence us on an unconscious level AND emotions help drive long-term memory encoding? Sounds like emotion-based communication is the way to go for brands…

Being that artists are essentially brands, it’s pretty fascinating to think about the implications of emotional communication to potentially consuming audiences. Let’s take Drake and his long-standing rap beef with Pusha-T earlier this year as an example. Whether or not you think Drake won the rap feud with King Push (imho, Drake got bodied), the novelty of the rap battle and lyrical jabs thrown by both artists drew attention and made us feel some type of way. These emotions not only captured our attention to the feud at hand, but also helped embed this as a long-term memory for many rap fans. At the end of the day, whether it was concerning the lyrical abilities of each of the rappers or the dirt thrown around, the rap battle was also a well-played means to a marketing end, considering that Pusha-T dropped his album DAYTONA a few days before the beef reached its peak with “The Story of Adidon”.

Where the Album comes into play

In our “attention economy”, we face constant cognitive overload from brands in our everyday lives. When it comes to music consumption, things are no different. It’s overwhelming to think about the sheer amount of music available when we log onto our go-to music streaming platforms. Due to this cognitive overload, most of the time the artist selected is the one that’s top of mind.

So, how do artists/brands become top of mind? Well, as mentioned above, it’s through emotion-based communication that we come to recall brands, not only in the short term, but also years from now, since it’s the emotionally-charged message that we recall down the line. Artists can use emotional narratives not only to draw attention and engagement with frontline repertoire (a new album release), but also to leverage emotion to help encode their brand in the consumer’s mind for catalogue engagement years from now.

Source: Tumblr

An album as a body of work, as opposed to releasing solely singles, could serve as a tactic to continue building the brand narrative of the artist and thus drive fans up the fan pyramid. One album that does a great job of building an artist narrative is Kendrick Lamar’s chart-topping, conceptual album Good Kid, M.A.A.D. City. Within the album, Lamar “chronicles [his] experiences in his native Compton and its harsh realities, in a nonlinear narrative. The songs address issues such as economic disenfranchisement, retributive gang violence and downtrodden women, while analyzing their residual effects on individuals and families.” If you’ve gone through any of the experiences narrated by Lamar at any level, the storytelling hits home and you get the feels. Could this level of storytelling have been achieved by solely releasing singles?

I’m aware that some artists are album artists while others are singles artists, but my key message is that an album gives the chance to further build the narrative for the artist. While artists have the ability to relay emotionally-charged messages to potential audiences through marketing-related stunts, an album gives the chance to further nurture that narrative sonically through the artist’s music.

Binaural Beats on Spotify

In my later days of being a Visual Artist (I normally go with the title of “Photographer” for the sake of simplicity, but photography was only one of the mediums I used for conceptual projects), I was pretty fascinated with the concept of synesthesia, which ultimately led to studying binaural beats for a video project. Since then, I’ve been using these tones to help me focus when reading or writing. While the science behind the cognitive effects of binaural beats seems far from settled, and there’s a chance that listening to them has no actual effect on my cognitive processing, I choose to continue listening to them even if it’s just a placebo effect I’m experiencing during mentally-demanding tasks.

On a day when I have to write, I usually hop on YouTube to listen to this specific “beat”, but my curiosity got the best of me and I decided to check if there were any binaural beats on Spotify. It turns out there’s a handful of Spotify-curated ASMR (autonomous sensory meridian response) playlists, one with over 125k followers, containing binaural beats.

Source: Statista

A criticism frequently brought up about Spotify’s business model is the operating expense that comes in the form of royalties the company has to dish out to the three major record labels for licensing their IP on the Spotify platform. In order to decrease the record labels’ market share, which stood at 79% of the share of listening on Spotify in 2017, and as a result decrease operating expenses, the streaming company has been launching strategic initiatives, including the suspected low-royalty ‘fake artists’ fiasco earlier this year.

I would think that binaural beats could provide higher-margin revenue for Spotify. It looks as if there are actual people (artists) uploading these tracks, as seen by the artist of the first track on the ASMR Sleep Sounds playlist, Creative Calm ASMR. However, who’s to say that some of these beats aren’t coming from Spotify themselves? I mean, they’re just tonal frequencies, so how difficult would it be to generate these frequencies and upload them onto the platform?
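For what it’s worth, generating one of these tones is trivial. Here’s a minimal sketch (the carrier and beat frequencies are hypothetical, and this is not how the artists on these playlists necessarily produce their tracks) that renders a binaural beat as a stereo WAV file:

```python
# Minimal sketch: a binaural beat is just two sine tones, one per ear,
# offset by a few hertz. Frequencies and duration below are hypothetical.
import numpy as np
from scipy.io import wavfile

SAMPLE_RATE = 44100      # samples per second
DURATION_S = 180         # ~3 minutes, roughly the track lengths on the playlist
CARRIER_HZ = 200.0       # tone in the left ear
BEAT_HZ = 10.0           # perceived beat = difference between the two ears

t = np.arange(int(SAMPLE_RATE * DURATION_S)) / SAMPLE_RATE
left = np.sin(2 * np.pi * CARRIER_HZ * t)               # 200 Hz, left channel
right = np.sin(2 * np.pi * (CARRIER_HZ + BEAT_HZ) * t)  # 210 Hz, right channel

# Stack the channels side by side and scale to 16-bit PCM.
stereo = np.int16(np.column_stack((left, right)) * 0.5 * 32767)
wavfile.write("binaural_10hz.wav", SAMPLE_RATE, stereo)
```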

Most of the tracks are 2-3 minutes in length, and, based on my own consumption behavior, I normally listen to binaural beats on YouTube for an average of 1-2 hours. That would mean that if I’m listening to a binaural tone for 2 hours on Spotify, I would be looping a track 40 times. What about people that use binaural tones as a sleep aid? If someone sleeps about 7 hours a night, a track would be looped 140 times! With the potential of stealing share of listening via binaural tones, I’m not quite sure why the binaural tones on Spotify aren’t advertised.
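The back-of-the-envelope math behind those numbers, assuming a 3-minute track (the upper end of that range):

```python
# Rough loop counts behind the figures above, assuming a 3-minute track.
TRACK_MINUTES = 3

def loops(hours_listened: float) -> int:
    """How many times a single track repeats over a listening session."""
    return int(hours_listened * 60 / TRACK_MINUTES)

print(loops(2))   # a focus session  -> 40 plays
print(loops(7))   # a night's sleep  -> 140 plays
```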

Test Driving YouTube Music

As I mentioned in a previous post, humans are creatures of habit. Since I feel invested in my premium Spotify account, thanks to the comfort of navigating a user interface I’m familiar with, I figured I’d step out of my comfort zone and give YouTube’s paid music streaming service a try. I was also going into the experience pretty curious about the music recommendations from the streaming service. Given that the Google-owned platform has its own proprietary recommendation algorithm, and understanding its potential for new music discovery for consumers, my curiosity was getting the best of me.

The on-boarding was fairly fluid. Upon installing the app, I was asked to log in using my existing YouTube account information. Resourceful on YouTube Music’s behalf, since this carries over already-established consumption behaviors from your existing YouTube account. The app also had me select artists that I liked, collecting more data on my listening preferences. After setting up, the app presented the main screen (mind you, I usually stream when I’m at the gym, hence the slew of rap artists and hip-hop recommendations). Two things struck me on the main page: the endless personalized “Your Mixtape” playlist and the simplistic approach to the total number of tabs shown at the bottom of the screen. The bottom bar is extremely similar to Spotify’s latest app redesign for users in their paid premium tier, which rolled out after I started my YouTube Music trial run. The main difference is that while YouTube Music has a Hotlist button, Spotify has a Search button in the same middle position; YouTube Music’s search button sits in the upper right corner of the screen.

The hotlist shows a selection of new and trending videos. What I found really convenient was the option of selecting whether you wanted to play the video or just the audio version of the track. Since I primarily stream music at the gym, I’ve been finding Spotify’s vertical videos a bit of a nuisance. It’s great content, but I don’t necessarily want to sift through videos to find the right track for the moment.

When it came to the Search function, one thing I found limiting was the inability to search through a voice query. Knowing how pivotal a role voice interfaces are going to play in the years to come, I was pretty surprised not to find this option available. Other than that, the Search function was easier to navigate than Spotify’s. In similar fashion to Spotify, you scroll down the screen to see the results, whether it’s a song you were looking for, an album, a music video, or a playlist featuring the artist of interest. What I found particularly convenient were the buttons underneath the search text box that let you jump to the section of interest, instead of having to endlessly scroll down the search results.

The last thing I found a little annoying with YouTube Music was the lack of an option to add a track to the queue. One can drag and drop a track to position the song to be played next, but that’s a lot of dragging and dropping if you want to customize an existing playlist.

Now this might be subjective (in fact, I know it is) with a hint of confirmation bias, but I thought the suggested tracks from the YouTube Music streaming service were much more in line with my personal taste, fit the playlist being listened to, and, most importantly, consistently included new artists in the mix. While Spotify hits the first two of those three points, it’s not very successful in introducing me to new artists (at least in my experience). Given all of the music consumption data YouTube/Google has been collecting from me for years, in addition to the slew of artist- and user-generated content on the platform, they might have a leg up on new artist discovery for consumers.

Smart Speakers and The Consumption of Music

In a previous post, in which I announced my new position as Insights Strategist for Universal Music Group, I expressed an interest in the evolving media consumption behavior as a result of connected devices.

Source: Edison Research

With the dawn of the internet of things came the introduction of smart cars, connected home automation devices, and wearable technologies among other nifty connected devices. While these gadgets are all fascinating smart technologies, none have taken hold in US households as much as smart speakers. In January 2018, smart speakers were being used by consumers that fell within the “Early Adopters” and “Early Majority” stages of the innovation curve. According to an Adobe Analytics study, almost 50% of US consumers will own a smart speaker after the 2018 holiday season.

The smart speaker revolution is undeniable.

What does this mean for the consumption of music?

In the same Adobe study mentioned above, 70% of the respondents reported using their smart speakers for music consumption, making it the primary activity, followed by weather forecasts (64%) and alarms/reminders (46%).

In my personal experience, using a smart speaker removes friction when I want to listen to music: I don’t need to manually look up an artist, album, song, genre, etc. There’s a clear consumer pain point being addressed. However, since most smart speakers don’t have a screen, the results for voice queries for music have to be much more accurate. If we look up an artist on a search engine or music streaming platform, we’re given several songs or albums to choose from. With no screen to refer to, consumers are given the one algorithmically-driven result deemed most appropriate by the smart speaker. That means these devices have one shot to get the customer experience right and pull up the “right” song.

Keeping in mind choice paralysis (there are times when I want to listen to music, but feel a little overwhelmed by the vast catalogue of music out in the world), and as consumers interact with smart speakers in much more intuitive and natural ways (as opposed to written queries), genre and mood queries will play a key role in music consumption. But, with the melting pot of music genres, how does one categorize the genre-bending band Gorillaz, for example? In an ethnographic study that Edison Research conducted, a toddler asks Alexa to play “Elsa” and “Frozen”. Besides the fact that pronunciation is an essential factor for smart speakers to deal with (think about how many consumers might be mispronouncing an artist name or lyrics), the smart speaker should comprehend that the “Elsa” and “Frozen” prompt means to play “Let It Go”. But doesn’t this change if there’s an artist named “Elsa”?

All this means that there’s going to be a laser-like focus on getting music metadata right in order to serve up the right music at the right moment.
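To make that concrete, here’s a toy sketch of the kind of alias metadata a voice query has to resolve against. The catalogue, names, and matching rule below are entirely hypothetical, and this is not how any actual assistant works; the point is simply that the “Elsa” prompt only works if someone has mapped it to the right track, and it breaks the moment an artist named Elsa enters the catalogue:

```python
# Toy sketch: resolving loose voice prompts against alias metadata.
# Everything here (catalogue, aliases, matching rule) is hypothetical.
CATALOGUE = [
    {
        "title": "Let It Go",
        "artist": "Idina Menzel",
        "aliases": {"let it go", "frozen", "elsa", "the frozen song"},
    },
    {
        "title": "Midnight",   # made-up track by a made-up artist named Elsa
        "artist": "Elsa",
        "aliases": {"elsa", "midnight"},
    },
]

def resolve(query: str):
    """Return every catalogue entry whose alias set contains the spoken query."""
    q = query.strip().lower()
    return [f'{t["artist"]} - {t["title"]}' for t in CATALOGUE if q in t["aliases"]]

print(resolve("Frozen"))  # ['Idina Menzel - Let It Go']  -> unambiguous
print(resolve("Elsa"))    # two matches -> the assistant has to disambiguate somehow
```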

This is an extremely fascinating time to be alive. Voice is here and seems to be the future.

P.S. While there might be some apprehension from digital immigrants to use smart speakers, isn’t it fascinating to think that the same toddler from above is going to grow up naturally accepting Alexa as a digital assistant?