Faster, Louder, More Explicit: How Music Has Evolved Over the Years (and How My Own Musical Taste Compares)

I have so many questions about this photo shoot. Is JT lost? What is he pondering? Is he looking at his friends as they drive away and abandon him to the horses? What kind of friends abandon their friend to the horses? Isn’t it uncomfortable to kneel on the ground like this? Is being a Man of the Woods really worth having your friends abandon you to the horses?
  • Popularity: The popularity of a track is a value between 0 and 100, with 100 being the most popular. The popularity is calculated by algorithm and is based, for the most part, on the total number of plays the track has had and how recent those plays are.
  • Danceability: Danceability describes how suitable a track is for dancing based on a combination of musical elements including tempo, rhythm stability, beat strength, and overall regularity. A value of 0.0 is least danceable and 1.0 is most danceable.
  • Acousticness: A confidence measure from 0.0 to 1.0 of whether the track is acoustic. 1.0 represents high confidence the track is acoustic.
  • Energy: Energy is a measure from 0.0 to 1.0 and represents a perceptual measure of intensity and activity. Typically, energetic tracks feel fast, loud, and noisy.
  • Tempo: The overall estimated tempo of a track in beats per minute (BPM). In musical terminology, tempo is the speed or pace of a given piece and derives directly from the average beat duration.
  • Loudness: The overall loudness of a track in decibels (dB). Loudness values are averaged across the entire track and are useful for comparing relative loudness of tracks. Values typically range between -60 and 0 dB.
  • Instrumentalness: Predicts whether a track contains no vocals. The closer the instrumentalness value is to 1.0, the greater likelihood the track contains no vocal content.
  • Speechiness: Speechiness detects the presence of spoken words in a track. Values above 0.66 describe tracks that are probably made entirely of spoken words. Values between 0.33 and 0.66 describe tracks that may contain both music and speech, either in sections or layered, including such cases as rap music. Values below 0.33 most likely represent music and other non-speech-like tracks.
  • Valence: A measure from 0.0 to 1.0 describing the musical positiveness conveyed by a track. Tracks with high valence sound more positive (e.g. happy, cheerful, euphoric), while tracks with low valence sound more negative (e.g. sad, depressed, angry).
  • Explicit: Whether or not the track has explicit lyrics.
  • Key: The estimated overall key of the track. Values in this field range from 0 to 11, mapping to pitches using standard Pitch Class notation (e.g. 0 = C, 1 = C♯/D♭, 2 = D, and so on).
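For readers who want to work with these fields programmatically, here is a minimal Python sketch based on the definitions above. The helper names and the speechiness band labels are my own, not Spotify's; only the numeric ranges come from the documentation.

```python
# Illustrative helpers derived from the field definitions above;
# function names and band labels are my own, not from Spotify's API.

# Standard Pitch Class notation: 0 = C, 1 = C#/Db, ..., 11 = B
PITCH_CLASSES = [
    "C", "C#/Db", "D", "D#/Eb", "E", "F",
    "F#/Gb", "G", "G#/Ab", "A", "A#/Bb", "B",
]

def key_to_pitch(key: int) -> str:
    """Map a Spotify key value (0-11) to a pitch-class name."""
    if not 0 <= key <= 11:
        raise ValueError(f"key must be between 0 and 11, got {key}")
    return PITCH_CLASSES[key]

def speechiness_band(score: float) -> str:
    """Bucket a speechiness score using the documented thresholds."""
    if score > 0.66:
        return "mostly spoken word"      # e.g. podcasts, audiobooks
    if score > 0.33:
        return "mixed speech and music"  # e.g. rap
    return "mostly music"

print(key_to_pitch(0))         # C
print(speechiness_band(0.45))  # mixed speech and music
```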
Unfortunately in 2020 all I can think of when looking at this photo is “why is no one wearing masks and staying 6 feet apart?”
EDM concert. See comment above about masks and 6-feet-apart.

Music has gotten more in-your-face (in-your-ears?) over time

Over the past century, music has become a lot less acoustic, higher energy, and somewhat more danceable.

Song length, mood, and key have stayed fairly consistent

  • Don’t Stop Believing
  • Yesterday
  • Let It Go
  • Someone Like You
  • Call Me Maybe
  • Wagon Wheel
  • With or Without You
  • Auld Lang Syne

Popular music: faster, louder, higher energy, electronic-heavy, vocal-heavy

According to Spotify, “popularity is calculated by algorithm and is based, in the most part, on the total number of plays the track has had and how recent those plays are.”

My own music taste begs to differ

Unfortunately for my rich-and-famous dreams, what makes music popular to the world isn’t perfectly aligned with what makes music “popular” to me. Which brings me back to what I started this whole blog post with: how does my music taste compare to my partner’s, and the Spotify world’s?

To wrap it up

As an amateur guitar player, music holds a very special place in my heart. It was a blast to get to dig into this Spotify dataset.

  • In line with people’s preferences, music has become faster, louder, higher-energy, more danceable, more electronic-heavy, more vocal-heavy, and more explicit, especially over the past half-century.
  • If I want to get my songs played on the radio, I’ll need to make some changes to my musical style. In particular, my music needs to become more danceable and more electronic-heavy. My partner basically has no shot. It’s true. He’s too far off.
  • SQL continues to amaze. It’s a powerful yet relatively simple tool for analyzing huge amounts of data. There’s no way I could have done this analysis in one weekend otherwise, and probably no way I could have done it at all: Excel / Google Sheets would very likely have been unable to handle the sheer volume of this dataset.
  • I would really like to build on my (rudimentary) JavaScript skills. I am grateful to Kaggle for supplying the dataset I used for this blog post, but even that massive dataset is still just a portion of all the data Spotify itself holds. Being able to tap into APIs directly would make analyses like this one more robust and comprehensive.
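To give a flavor of the aggregation pattern behind this kind of analysis, here is a hedged sketch run against SQLite from Python. The `tracks` table, its column names, and the sample rows are all illustrative stand-ins, not the actual Kaggle schema or data.

```python
import sqlite3

# Assumed, simplified schema loosely modeled on the Kaggle Spotify
# dataset's audio-feature fields; names and sample rows are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE tracks (
        name TEXT, year INTEGER,
        energy REAL, acousticness REAL, danceability REAL
    )
""")
conn.executemany(
    "INSERT INTO tracks VALUES (?, ?, ?, ?, ?)",
    [
        ("old ballad",  1955, 0.30, 0.90, 0.40),
        ("rock anthem", 1978, 0.70, 0.40, 0.55),
        ("edm banger",  2015, 0.92, 0.05, 0.80),
    ],
)

# Average audio features per decade: the core GROUP BY pattern behind
# "music has gotten faster, louder, higher-energy" charts.
rows = conn.execute("""
    SELECT (year / 10) * 10 AS decade,
           ROUND(AVG(energy), 2)       AS avg_energy,
           ROUND(AVG(acousticness), 2) AS avg_acousticness,
           ROUND(AVG(danceability), 2) AS avg_danceability
    FROM tracks
    GROUP BY decade
    ORDER BY decade
""").fetchall()

for decade, e, a, d in rows:
    print(f"{decade}s: energy={e}, acousticness={a}, danceability={d}")
```

The `(year / 10) * 10` trick relies on SQLite's integer division to bucket years into decades; a warehouse SQL dialect would more likely use `FLOOR(year / 10) * 10` or a date function.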

Annie L. Lin

People & ops leader | data storyteller & nerd