Top Quotes: “This Is What It Sounds Like: What the Music You Love Says About You” — Susan Rogers
Introduction
“The music that delivers the maximum gratification to you is determined by seven influential dimensions of musical listening: authenticity, realism, novelty, melody, lyrics, rhythm, and timbre. Collectively, your natural response to each of these seven dimensions forms a personalized “listener profile” unique to you. Your listener profile determines your thoughts, feelings, and physical responses when you listen to music.”
“The sweet spots on your profile were formed through biology, experience, and happenstance. Your random neural wiring, your exposure to musical culture in the time and place you grew up, and the sheer chance of hearing this record and not that record at crucial moments in your life all shaped the kind of listener you are and influence the kind of music you can fall in love with.”
“The music you respond to most powerfully can reveal those parts of yourself that are the most “you”: those places your mind unerringly returns to when it is daydreaming or fantasizing. Thus, by learning about the qualities of music that match up best with your listener profile, you will not only become a better listener, you’ll become better acquainted with your innermost nature. Perhaps one reason we prize our own notion of musical authenticity is that our conscious experience of authenticity is rooted in the brain network that embodies our self-image.
Be it records or romantic partners, we fall in love with the ones who make us feel like our best and truest self.”
“For listeners whose brains have diminished connections between their reward network and their musical networks, the outcome can be musical anhedonia: an absence of pleasurable responses to music.
Musical anhedonia affects an estimated 5 to 10 percent of the population. Individuals with this condition generate a normal amount of dopamine activity in response to art, food, money, and other types of stimuli – just not to music.”
“As you listened to each record, what did you see in your mind’s eye? Maybe you envisioned a story unfolding that featured yourself, the vocalist, or make-believe characters. According to research I conducted with my coauthor, about 19 percent of people picture a story based on the lyrics when they listen to their favorite music. Or did you picture musicians performing the song — a band playing onstage, in a studio, or in a video? Around 17 percent of people visualize the performers. Or did you imagine that you were singing or performing the music? (About 11 percent of listeners.) Perhaps scenery unrelated to the lyrics took shape in your mind — a river, a mountain, a planet? (About 3 percent.) Or things you would like to build or create? (About 6 percent.) You may have seen imaginary worlds, such as in science-fiction movies (about 9 percent). Or did you see patterns of colors or shapes that didn’t represent anything specific? Just over 1 percent of music listeners see abstract shapes and colors. If “Born on the Bayou” or “The Grid” happened to be familiar to you, hearing them again may have prompted you to recollect scenes from a time in your life when you listened to them. Indeed, the most common form of visualization while listening to music is autobiographical memories (25 percent of listeners).
Or maybe you are a member of the roughly 9 percent of the population who don’t see any mental images while listening to music?”
“These days, abstract records — records that mostly or entirely feature computer-controlled, machine-based sounds rather than humans playing acoustic (including electronically amplified) instruments — have rapidly come to dominate the global musical soundscape. By this definition, nearly all the Billboard No. 1 hits in 2021 were highly abstract. The only exceptions were Adele’s “Easy on Me,” with its sparse piano and kick drum arrangement, and Mariah Carey’s enduring holiday favorite “All I Want for Christmas Is You,” recorded in 1994, when realism still dominated the airwaves. The acoustic guitar in Lil Nas X’s “MONTERO (Call Me by Your Name)” and the piano in Olivia Rodrigo’s “drivers license” are typical of today’s abstract records: a lone traditional acoustic instrument is enveloped by sampled and machine-generated sounds.”
“When I was coming up in the music industry in the 1970s and ’80s, the [high fidelity] engineer’s craft still consisted of selecting the right equipment, materials, and techniques to re-create a musical performance so faithfully that the listener could imagine that she was sitting directly in front of the band.”
“While the advent of photography caused some painters to question their aims, it also opened a gateway to a new creative universe. If you couldn’t paint the world as it was, then you were free to paint the world as it could be — or even paint things that had no resemblance to the physical world at all.”
“Previously, only elite recording studios possessed the necessary equipment for ultra-high fidelity, and only established musicians with major record label contracts could afford the exorbitant rates to work in those rooms. Likewise, top-grade musical instruments and A-list session players were only available to artists endowed with enough record-company cash to afford them. The limitations of our tools and personnel forced most record makers to work from the materials to the vision. Our task was to make the best record we could from whatever equipment, engineers, and performers we could get our hands on. If we wanted a Hammond B3 organ and the studio didn’t have one, we rewrote the part.
Today’s record makers can work from the vision to the materials. Practically any sound you can think of is at your disposal or easily created. For the cost of a laptop computer, you can amass a library of sounds and recording software with a level of fidelity that once cost thousands of dollars a day.”
“Contemporary producers use DAWs to fabricate tones, noises, melodies, and rhythms that don’t necessarily originate with people performing them on instruments. Digital sound designers can conjure up audio chimeras: synthetic waveforms that can blend the sounds of a guitar and a trombone, for instance, or a parakeet and a frog, or a man and a baby. When you listen to these digital apparitions, it is more challenging to visualize them because you are unsure of what you are hearing. And when you can’t identify an object — as when viewing abstract art — your mind begins to explore highly inventive possibilities. This can deepen your immersion in the experience.”
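To make the idea concrete, here is a minimal sketch (mine, not the book’s) of one way such a “chimera” can be built: synthesize two caricatured timbres, then interpolate their magnitude spectra. The harmonic weightings and function names are illustrative assumptions, not a recipe professional sound designers actually follow.

```python
# Toy "audio chimera": blend two synthetic tones by interpolating their
# spectra. Illustrative only -- the harmonic weightings below are crude
# caricatures of "guitar-like" and "trombone-like" timbres.
import numpy as np

SR = 44100                                   # sample rate in Hz
t = np.linspace(0, 1.0, SR, endpoint=False)  # one second of time points

def harmonic_tone(f0, weights):
    """Sum of harmonics of f0; `weights` shapes the (toy) timbre."""
    return sum(w * np.sin(2 * np.pi * f0 * (k + 1) * t)
               for k, w in enumerate(weights))

guitar_ish   = harmonic_tone(220, [1.0, 0.5, 0.33, 0.25, 0.2])  # brighter
trombone_ish = harmonic_tone(220, [1.0, 0.8, 0.6, 0.1, 0.05])   # darker

def morph(a, b, mix):
    """Blend the magnitude spectra of a and b (mix=0 -> all a, 1 -> all b)."""
    A, B = np.fft.rfft(a), np.fft.rfft(b)
    mags = (1 - mix) * np.abs(A) + mix * np.abs(B)  # interpolated brightness
    phases = np.angle(A)                            # borrow one phase track
    return np.fft.irfft(mags * np.exp(1j * phases), n=len(a))

chimera = morph(guitar_ish, trombone_ish, mix=0.5)  # halfway between the two
```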
“If Ariana Grande misses a note (rare but it happens), her producers can grab the miscued B-flat on the computer screen and turn it into a B. In the past, artists and producers would often let mistakes go by to foster realistic emotional authenticity.”
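The retuning itself is simple arithmetic: in equal temperament each semitone is a frequency ratio of 2^(1/12) ≈ 1.0595, so raising a B-flat to a B means raising every frequency in the note by about 5.9 percent. The sketch below is my own illustration (not the actual tooling Grande’s producers use) via naive resampling, which also shortens the note; commercial pitch correctors use time-preserving algorithms instead.

```python
# One equal-tempered semitone is a frequency ratio of 2**(1/12) ~ 1.0595.
# Naive sketch: retune a miscued B-flat (466.16 Hz) up to B (493.88 Hz)
# by resampling. Real pitch correctors also preserve the note's duration.
import numpy as np

SEMITONE = 2 ** (1 / 12)
print(466.16 * SEMITONE)            # ~493.88 Hz: B-flat 4 becomes B4

def naive_pitch_up(note, ratio=SEMITONE):
    """Resample a 1-D sample array to play back `ratio` times faster.
    Raises the pitch by `ratio`, but also shortens the note -- which is
    why production tools use fancier time-preserving algorithms."""
    old_idx = np.arange(len(note))
    new_idx = np.arange(0, len(note), ratio)  # step through samples faster
    return np.interp(new_idx, old_idx, note)

SR = 44100
t = np.linspace(0, 0.5, SR // 2, endpoint=False)
b_flat = np.sin(2 * np.pi * 466.16 * t)    # the "miscued" note
b_natural = naive_pitch_up(b_flat)         # now ~493.88 Hz, slightly shorter
```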
“The vast majority of today’s records are technically perfect ... but physically impossible to perform live. Or, at least, impossible to perform the way they sound on the record.
These days, my students often hear any “coloring outside the lines” as a sign of unacceptable sloppiness and a lack of effort. According to many young listeners, if it can be fixed, it should be. Consider one of the simplest performance gestures of all: breathing. Compare the wonderfully prominent inhales and exhales heard on Regina Spektor’s vocal performance of “Eet” with the almost total absence of them on Ariana Grande’s “7 Rings,” a track that makes me tense every time I listen to it because I’m constantly thinking, “Breathe, woman, breathe!””
“According to the rap pioneer Carlton Ridenhour, a.k.a. Chuck D, hip-hop was invented in 1973 when Clive Campbell, a.k.a. DJ Kool Herc, threw a “back to school jam” for his little sister in the basement of their home in the Bronx. Using his turntables, Herc invented a way of looping drum sections on vinyl records to create prolonged drum breaks for dancers. During one of these drum breaks, Campbell’s friend Coke La Rock grabbed a mic and began calling out friends’ names and rapping improvised lyrics over the beats.”
“As I continued through childhood and adolescence, I was rewarded not only emotionally and intellectually for exploring new forms of music but socially as well. In my early twenties, I went to a King Crimson concert with friends. Although the band’s elaborate prog rock was not “my music,” afterward my reward system said to my risk-aversion system, I told you so! This was fun! The feel-good neurotransmitters released from being with excited friends while listening to unfamiliar music reinforced my growing tendency to be open-minded to different musical styles.
On the other hand, I will never forget the first time I tried Vegemite, a malty food spread made from discarded yeast. It was the equivalent of tripping over a rock and doing a face plant. Stunningly unpleasant. A few other youthful excursions into culinary novelty occurred in less-than-ideal social circumstances and resulted in similar reactions. Now it was my risk-aversion system’s chance to say, I warned you! Don’t ever do that again! Perhaps it was simply the luck of the draw, but I was learning through experience to play it safe when it came to food, even while my musical risk-taking was reaping rewards.
Maybe if my early food experiences had exposed me to exotic dishes that I had naturally enjoyed, or if I had tried unfamiliar cuisine in settings where I received warm social reinforcement, an appetite for epicurean adventure might have taken hold. But that’s not what happened. As a result, I quickly came to embrace my own limited set of comfortingly familiar dishes.”
“That’s precisely what many listeners experienced while listening to the Rolling Stone poll’s top vote getter: Marvin Gaye’s masterpiece What’s Going On.
His 1971 album is a pained call to halt the social discord that was pulling America apart near the end of the Vietnam era. For many listeners, the title track, “What’s Going On,” delivers high-octane rewards on three of the primary musical dimensions: the song features a heart-wrenching melody, socially poignant lyrics, and an irresistible rhythm that interplay in perfect musical compatibility, thanks to Gaye’s artistry.
The drums and congas are upbeat, jaunty almost. The rhythm guitar joins in with the chord progression — adding tension and anchoring the rhythm section. A saxophone calls out the melodic theme, then steps back and lets Marvin take over in a voice as smooth as the softest suede, singing the opening line: “Mother, mother, there’s too many of you crying.” Then the bass does what bass does best: treads the boards between a rhythmic role and a harmonic role, adding the subtle shade of secondary emotions — worry and hope — in equal measure. Behind it all the strings raise a dominant melodic cry, wordlessly begging us to stop fighting each other. Marvin tells us that “we’ve got to find a way to bring some understanding here today.” The gentle backing vocals let Marvin take the spotlight, but as the record continues, they split into factions, chattering away and abandoning the script, just as a chaotic, diverse society does. Then, redemption!
All the voices come together as the song approaches its end. The strings glide faster, like birds, going higher and higher yet never getting away from the tonal center, a relentless pressure scaling its way upward, toward release. For many listeners, our brain networks handling melody, lyrics, and rhythm all respond in the sympathetic resonance of connection. Your body can sink into the groove and lock onto its inviting rhythms, your heart can get tugged into the deep end of the emotional waters by the melodic strings, and your mind can sadly recognize that fifty years after the record’s release, its lyrical message is still woefully timely.”
“In humans and other animals, producing a sequence of high-pitched, clear tones is common in happy or prosocial contexts, while producing low-pitched, noisy tones is more common in aggressive or threatening contexts. Short calls with ascending pitches have an arousing effect, while long calls with descending pitches are calming. Dissonant melodic intervals, such as the two alternating tones produced by many emergency-response sirens around the world, are fear-inducing.”
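As a toy illustration of the siren example, the sketch below alternates two tones a tritone apart; the tritone is a textbook dissonant interval, though real siren intervals vary by country and the specific frequencies here are my own assumptions.

```python
# Minimal sketch of a two-tone "hi-lo" siren built from a dissonant
# interval. The 440 Hz base and the tritone spacing are illustrative
# choices, not the specification of any real siren.
import numpy as np

SR = 44100
LOW = 440.0                     # arbitrary low tone (A4)
HIGH = LOW * 2 ** (6 / 12)      # a tritone above: ~622 Hz

def siren(cycles=4, tone_len=0.6):
    """Alternate LOW and HIGH tones, `tone_len` seconds each."""
    t = np.linspace(0, tone_len, int(SR * tone_len), endpoint=False)
    low = np.sin(2 * np.pi * LOW * t)
    high = np.sin(2 * np.pi * HIGH * t)
    pair = np.concatenate([low, high])
    return np.concatenate([pair] * cycles)

signal = siren()                # write to a WAV file to hear the alarm effect
```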
“To ensure that our minds can flexibly deal with ambiguities in speech, the speech pathway automatically “forgets” the initial meaning of words through a process known as habituation. (The same process of habituation also makes us gradually stop noticing the smell of squeezed lemons as we cook or the patter of rain on the roof.) Speech habituation accounts for the “Wait, what?!” reaction by helping us reinterpret words when our initial interpretation may have been wrong, such as taking a moment to realize that the newspaper headline “Fireman Helps Dog Bite Victim” doesn’t mean “the fireman enabled the dog to bite the victim” but, rather, “the fireman aided the man bitten by the dog.””
“A few years earlier, her research team had analyzed a half million newborns’ cries gathered from around the world. Wermke made the remarkable discovery that shortly after birth, babies appear to cry in their native tongue. French infants tend to cry with a rising melodic contour. German infants tend to cry with a falling contour. These disparate contours match the disparate prosody of French and German speech.
In the final trimester of pregnancy, a fetus’s auditory system has developed well enough that some muffled sounds are audible as the baby floats in its gestational soup. Brain development at this stage is just complex enough to allow us to learn simple melodic phrasings, such as intonations, which is one reason why newborns prefer the familiar sound of their mother’s voice over the sound of an unfamiliar voice.”
“Instrumental music from France sounds French, just as music from England sounds English. Tonal languages such as Mandarin and Vietnamese (where pitch changes determine the meaning of certain words) tend to have wider intervals and more frequent contour changes than non-tonal languages and so does the music composed by tonal language speakers. Although many music listeners seek out and enjoy melodies from around the world, the melodies of our native tongue sound the most familiar and speak the most passionately to us, because our minds were soaking up these melodic contours before we were even born.
The fact that we have a deep preference for the melodies of the native culture we are born into means that we may be naturally prone to disliking unfamiliar melodies, a fact supported by the empirical observation that people express less affection for what they consider to be “foreign” music. Sadly, this distinction has enabled the military to wield music as a weapon of coercion. Uncooperative Iraqi prisoners held captive during the Iraq War were persuaded to talk to interrogators after long barrages of heavy metal music, such as Metallica’s “Enter Sandman,” alternated with American children’s songs, including the purple dinosaur Barney’s theme, “I Love You.”
While retaking the city of Fallujah, American soldiers blasted their favorite rock and rap music from loudspeakers mounted on gun turrets. One soldier described the tactic as a sonic “smoke bomb,” adding that “our guys have been getting really creative in finding sounds they think would make the enemy upset.” They were not the first Americans to use music as a cudgel. The Panamanian dictator Manuel Noriega, an opera lover, surrendered after his Vatican embassy refuge was blasted for a week with high-volume AC/DC, Mötley Crüe, Led Zeppelin, and other rock music.”
“This is the immense power of lyrics: to enable us to momentarily become someone else.”
“Lyrics provide us with a private dressing room to “try on” the words of others to see if we can fit them onto our own persona.”
“Pop music lyricists usually leave the time, place, and persons unnamed, and this makes it easier for listeners to fit the lyrics into their own personal story. Modern song lyrics don’t typically describe events but are in-the-moment descriptions of ongoing emotions or circumstances.”
“Scientists have used the psychological concept of self-congruity to posit that music listeners prefer artists whose personality traits (gleaned through lyrics and appearance) are assumed to match their own. This idea was supported by a large-scale study published in 2020. The data added to the body of evidence examining music and social identity. Music listeners report that they choose artists and records that reflect and reinforce their personality traits and address their psychological or social needs.”
“Words that express our secret fears and longings resonate most powerfully with us. Whether you are drawn to hallucinogenic scribblings inspired by drug trips, the solemn paeans of grand opera, the sultry ballads of seduction, or gritty “slice of life” street poems, the lyrics you revel in the most reflect who you are, what you value, and — every now and then — who you’d like to be.”
“Patel’s graduate students introduced him to a series of YouTube videos starring a sulfur-crested cockatoo standing just over a foot tall. Snowball wasn’t merely dancing in these videos. The bird was grooving with all the flair and passion of a mohawked B-boy at a street jam. In the earliest video, Snowball is perched regally on the back of a club chair in a modest suburban home. The Backstreet Boys’ “Everybody (Backstreet’s Back)” plays. When the drums disappear from the music for a bar or two (called a breakdown), Snowball stops moving ... only to pick it back up again when the groove resumes. Just like humans on the dance floor, Snowball frequently changes it up: he swings to the right, swings to the left, then lets loose during the loudest parts of the music.
Other videos soon followed, featuring other dance tracks: Queen, Michael Jackson, even German polkas. Snowball grooved to them all. It wasn’t long before Madison Avenue called. Snowball starred in a Taco Bell commercial, doing his best to synchronize to the less-upbeat Rupert Holmes song “Escape (The Piña Colada Song).” The best-selling nature author Sy Montgomery penned Snowball’s story in a children’s book. Soon the little cockatoo became the most celebrated non-human dancer on earth.
Before Snowball’s videos appeared, scientists knew that some animals could imitate human movements, so that if you started bouncing your head to music, for instance, a seal or an elephant might bounce along with you. But the idea of an animal spontaneously finding a personal tactus and choosing dance moves according to its own private notion of what the music called for was, for Patel, as preposterous as “a dog reading a newspaper out loud.””
“Two particular features of Snowball’s dancing caught Patel’s attention for their scientific significance. First, whenever the music sped up or slowed down, Snowball would adjust his movements to stay in time. This is a classic test of synchronization because a dancer must listen for the timing between each beat and anticipate how soon he needs to make his move if he is to land at the same moment the percussion does. Even more impressive, Snowball could stay in time even if the beat dropped out entirely. In order to stay in time during breakdowns, dancers must continue hearing the rhythm in their minds so as to be on beat when the drums reenter. Snowball thus became the first non-sapiens creature to unequivocally demonstrate beat induction.
Entrainment is a passive skill. All it requires is that we move in sync with a simple, evenly timed signal, whether a dripping faucet, a car’s turn signal, or a ticking clock. When the pulse stops, entrainment stops and so do the listener’s movements. Beat induction is an active skill. It requires that our mind create its own tactus out of a complex and ambiguous auditory pattern and maintain this “mental” rhythm when the beats go silent during a breakdown. Snowball handles breakdowns without a hitch, as you can see at the 0:32 mark in his performance to “Everybody (Backstreet’s Back).” He continues to bob in time with the groove in his head. His movements are still in perfect sync when the silence ends and the drums abruptly return.”
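The contrast is easy to sketch in code. Below is a toy model (my illustration, not Patel’s method) of the “active” half: estimate a tactus from the beat onsets you have heard, then keep predicting beats through a silent breakdown, the way Snowball keeps bobbing.

```python
# Toy contrast between entrainment and beat induction, under a simple
# assumption: beats arrive as onset timestamps in seconds. Entrainment
# can only react to onsets it hears; beat induction keeps an internal
# pulse going when the drums drop out during a breakdown.

def induced_beats(heard_onsets, until):
    """Estimate the tactus from heard onsets, then predict beat times
    through silence up to time `until` -- a crude stand-in for the
    "groove in his head"."""
    intervals = [b - a for a, b in zip(heard_onsets, heard_onsets[1:])]
    period = sum(intervals) / len(intervals)   # mean inter-beat interval
    beats = list(heard_onsets)
    while beats[-1] + period <= until:
        beats.append(beats[-1] + period)       # keep pulsing, unheard
    return beats

heard = [0.0, 0.55, 1.10, 1.65, 2.20]          # drums play, then drop out
print(induced_beats(heard, until=4.5))
# Beats continue at ~0.55 s spacing through the "breakdown," landing back
# in sync when the drums return around the 4.4-second mark.
```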
“Although it might have been distressing for Mathieu to learn that he was beat-deaf, it must have been comforting to know that the reason he always seemed to be an “Elaine” on the dance floor wasn’t due to any failure of willpower or lack of physical coordination.
He simply possessed a biological quirk that prevented his brain from organizing beats into a tactus.”
“You can hear reggaeton syncopation in the drum track of “Yo Perreo Sola” by Bad Bunny. Listen to this track and pay attention to how your body chooses to move to it. Compare your tactus for this song with the tactus you felt when you listened to “Stoned and Starving.” You may recognize that these two rhythms seem to direct your body into different kinds of motion: side to side during “Yo Perreo Sola” and up and down during “Stoned and Starving.””
“Our auditory system reaches its peak perceptual ability in our late teens. Rapid growth happens between the ages of eight and eleven, as the brain prepares itself for the upcoming changes that puberty will introduce. Without training, the musical perceptual aptitude of most people plateaus before their teens. Studies show that the performance of eleven-year-olds on music-perception tasks is the same as that of adults who were never trained in music.”
“Humans historically engaged in sexual activity at night, writes psychoacoustician Josh McDermott. The dimness of the forest or cave made it difficult to see your mate’s face and body, so we evolved to have a high erotic regard for the sound of voices in the dark. For their part, female voices change pitch over the course of the menstrual cycle and are therefore considered an “honest” indicator of fertility. Males tend to favor a female voice that is gentle and breathy – think Scarlett Johansson – and they instinctively associate this timbre with femininity. The attractiveness of a woman’s voice (as perceived by men) is a strong predictor of a woman’s sexual promiscuity (measured as the age of her first intercourse, the number of her sexual partners, and the number of times she cheats on a committed partner). As it turns out, women with sexy voices have more sex than women with sexy bodies.”
“The pitch of our voices depends upon the length of our vocal cords. Interestingly, humans are one of the rare species to exhibit sexual dimorphism in our voices. Listen to a horse whinny, a dog bark, or a cat meow. You can’t tell if the vocalizing animal is male or female. Young humans exhibit identical vocal timbres: boys and girls sound alike. But when boys go through puberty, the release of testosterone lengthens their vocal cords and triggers the development of the “laryngeal prominence,” more commonly known as the Adam’s apple. After puberty, the average male voice is pitched an octave lower than the average female’s. This dimorphism makes the human voice gender expressive – we can use an adult’s voice to infer something about their sexual appeal.
Voice training helps singers expand their range and shift seamlessly from their chest voice to their head voice and back again. This takes strength and control, so when we hear a man singing in falsetto, it sends the message that he has an extra gear and the power to command it. Likewise, when we hear a woman sing in a deep chest voice, she is broadcasting that she possesses a strength that most women do not.”
“Deep female voices are especially impressive because males typically have a wider pitch range than do females. It is easier for men to constrict their vocal folds and sing higher than it is for women to lengthen theirs and sing lower, so men can sound feminine more easily than women can sound masculine.”
“Scientists have unearthed another revelation about the mind-wandering network, one even more relevant for explaining why we fall in love with a record. Our brain treats listening to music as a special form of daydreaming. When we’re immersed in the enjoyment of a favorite record, our mind-wandering network lights up like fireworks. This observation helps explain many of the mysteries of our deep bond with music.
When your brain is idling, the contents of your reveries — what the psychologist William James labeled “flights of the mind” — contribute to your conscious conception of your personal self. Whenever you daydream or fantasize, your mind drifts to places that are intimate and private, thinking about what you like, what you need, and what you desire. Thus, when you listen to music you love — music that aligns with your sweet spots — you activate the part of your mind that fuels the deepest currents of your identity.”
“When participants heard music they rated as Dislike, the precuneus stopped communicating with the mind-wandering network and was “connected primarily only to itself.” This is astonishing. It suggests that our brain actively “rejects” disliked records, so when we hear music we don’t enjoy, our brain automatically takes action to prevent those styles from getting integrated into our self-image.
Wilkins and her team made another surprising discovery. They detected significant communication between the auditory networks and the hippocampus (a brain structure involved in memory formation) whenever subjects listened to music they rated as Like or Dislike. That might have led the researchers to conclude that our hippocampus gets activated whenever we listen to any music, but Wilkins noticed that the communication between the auditory networks and the hippocampus was reduced when subjects listened to their “Favorite” music.
Wilkins and her team suggest that when we hear a favorite song, our memory circuits kick into retrieval mode rather than encoding mode and we automatically “play back” memories of people, places, and events that we associate with the song. Wilkins’s intriguing hypothesis is supported by the research Ogi and I did on visualizations during music enjoyment: we found that the most common form of mental visualization that listeners experienced while listening to their favorite records was autobiographical memories.”