Susan Rogers – a recording engineer, producer and music cognition expert – visited the dCS Lina Lounge at CanJam New York earlier this month to discuss the fascinating world of psychoacoustics. Here we share some highlights from her talk, which explored how we process sound and the importance of considering our unique profiles when choosing audio equipment such as headphones and amps…
Susan Rogers started working in music studios in 1978. In the two decades that followed, she went from repairing studio equipment to working with Prince and engineering some of the most iconic albums of the 1980s and 90s.
After forging a successful career in music production, Susan decided to swap studio life for education. She spent eight years studying neuroscience and psychoacoustics before teaching record production and music cognition at Berklee College of Music. Until her recent retirement, she was director of Berklee’s Music Cognition and Perception lab, where she worked on research projects exploring how we process and respond to music.
In 2021, she published a book, This Is What It Sounds Like: What The Music You Love Says About You, which explains how we listen and what makes us fall in love with certain musical works.
We were honoured to have Susan partner with us at CanJam New York this year to deliver two talks for visitors to the Lina listening lounge.
Susan discussed how our brains and bodies respond to sound, how our hearing changes and develops through our lifetimes and how musicians and non-musicians differ in their listening abilities.
She also explained how our physical characteristics shape our listening experience, and why it’s important to consider your unique profile when selecting audio equipment such as headphones and amps. Alongside this, she shared how she came to work for one of the world’s most revered musicians.
Below are some excerpts and highlights from the event.
From Prince to Barenaked Ladies & Berklee
Susan began by describing how she carved out a career in music production, and how she went from mixing multi-million-selling records to studying the intricacies of our auditory system.
She began working in music studios in Los Angeles in 1978. At the time, there were very few women working in music production, and a career in engineering seemed an unlikely, if not impossible, goal. “You just didn’t see women as recording engineers and record producers, and I didn’t hold out hope that I could be anything like that,” she explained.
This led her to pursue an alternative path in audio electronics, an area where she said “gender didn’t matter”. She learned how to fix studio consoles and tape machines, among other equipment, and landed a job as a studio technician working with Crosby, Stills & Nash.
“There was one thing I could do where my gender didn’t matter...repair equipment”
In 1983, she received a call that Prince was looking for a technician – an opportunity that allowed her to move into working on records instead of console repairs. “It was a call that changed my life,” she explained. “He put me in the engineering chair. I worked on Purple Rain, Around the World in a Day, the Parade album, Sign o’ the Times, the Black Album, and all the stuff that we did in between [including films, tours and collaborations with Sheila E.]. I had a great time with him – I left in 1988 exhausted, but we did a lot of good work together,” she added.
Susan moved back to Los Angeles, where she mixed, produced and engineered records for artists including David Byrne, Nil Lara and Jeff Black. In 1998, she co-produced and engineered the Barenaked Ladies’ smash-hit album Stunt, which sold 5 million copies (a huge figure in the pre-streaming era). The commercial success of this project enabled her to step out of the studio and take her career in a different direction: she enrolled as a college freshman aged 44, and went on to receive a degree and doctorate from McGill before teaching at Berklee.
From toddlers to teens: auditory development and the impact of musical training
After outlining her background in music production and neuroscience, Susan turned to how our brains and bodies process and respond to sound, explaining how signals travel from our eardrums to our auditory cortex. Our brains have evolved to prioritise certain types of sound, such as speech, and can effectively filter out sounds we don’t want to hear so we can focus on those we do (in other words, boosting frequencies within a certain range while suppressing those that fall outside it).
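As a loose analogy only – the brain’s filtering is adaptive and nothing like a fixed EQ – the sketch below shows what “boost one band, suppress the rest” looks like in engineering terms. It assumes Python with NumPy and SciPy, and the band edges, gains and test signal are all invented for illustration.

```python
# A rough engineering analogy, not a model of the auditory system:
# emphasise a "speech" band with a fixed EQ and attenuate the rest.
# Band edges, gains and the test signal are all illustrative.
import numpy as np
from scipy import signal

fs = 44_100                      # sample rate in Hz
rng = np.random.default_rng(0)
t = np.arange(fs) / fs           # one second of time stamps

# Test signal: a 1kHz tone (inside the band) buried in broadband noise
x = np.sin(2 * np.pi * 1000 * t) + rng.normal(scale=0.5, size=fs)

# Band-pass roughly covering a core "speech" range (assumed 300Hz-3kHz)
sos = signal.butter(4, [300, 3000], btype="bandpass", fs=fs, output="sos")
in_band = signal.sosfilt(sos, x)

# Boost what falls inside the band, suppress the (approximate) remainder
boost, cut = 10 ** (6 / 20), 10 ** (-12 / 20)  # +6dB and -12dB
y = boost * in_band + cut * (x - in_band)      # ignores filter phase
```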
She then explained how our hearing changes and develops throughout our lifetime. Young children, for example, have less refined hearing than teenagers and young adults – “they don’t have good high frequency resolution” – and our hearing continues to improve throughout childhood, peaking around college age.
Our auditory system continues developing past toddlerhood, but for most people its underlying circuitry is fully formed by the time we hit adolescence.
“By age 12, it has to stop because your body is getting ready for puberty,” said Susan. “This means that unless we’re taking music lessons, our auditory processing circuitry, all of our nuclei, our auditory nerve bundle and our auditory cortex is done [developing] by age 12.”
Analytic vs synthetic: How musicians hear differently to non-musicians
Whilst most people’s auditory systems finish developing before puberty, this is not the case for those who receive musical training.
As Susan explained to CanJam attendees, the auditory processing path of trained musicians continues to evolve, bringing physical changes that affect our music processing capabilities. “The nuclei get fatter and thicker, and the auditory nerves, your wiring, grow what’s called dendritic spines – more little branches.” These developments allow musicians to become better and faster at processing the subtle differences between sounds.
This, in turn, enables musicians to listen in a different way from non-musicians. Citing a test she used to conduct with students at Berklee (which you can try out online), Susan explained how musicians are capable of listening analytically – meaning they can hear different frequency components individually. Non-musicians, however, can only listen synthetically – meaning they focus on the ‘global whole’, or complete sum of parts.
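To make the distinction concrete, here is a toy stimulus (not Susan’s actual Berklee test, and the frequencies are illustrative): a one-second chord of two pure tones, written to a WAV file using Python’s standard library plus NumPy. An analytic listener can pick out the two component pitches; a synthetic listener hears one fused sound.

```python
# Toy two-tone stimulus (hypothetical; not the test from Susan's talk).
import wave
import numpy as np

fs = 44_100                            # sample rate in Hz
t = np.arange(fs) / fs                 # one second of time stamps
# Two pure tones a perfect fifth apart (3:2 frequency ratio)
tone = 0.4 * np.sin(2 * np.pi * 440 * t) + 0.4 * np.sin(2 * np.pi * 660 * t)
pcm = (tone * 32767).astype(np.int16)  # 16-bit PCM

with wave.open("two_partials.wav", "wb") as f:
    f.setnchannels(1)                  # mono
    f.setsampwidth(2)                  # 2 bytes per sample = 16-bit
    f.setframerate(fs)
    f.writeframes(pcm.tobytes())
```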
Through her work at Berklee, Susan found that although she couldn’t process certain subtleties in a musical work in the way her analytically listening students could – she lacked the neural infrastructure required to do so – listening synthetically allowed her to assess the complete sound of a recording or musical work without getting “bogged down” in minute details.
Reflecting on these different listening modes, she said audio manufacturers developing products should carry out testing with both musicians and non-musicians (something that is done extensively at dCS).
Head-related transfer function: how physical characteristics affect our listening experience
After discussing the effects of musical training on auditory development, Susan explained how our unique physical characteristics – the shape and size of our head, the distance between our ears and the time it takes sound to travel along the auditory processing path – also affect how we process sound. These characteristics determine how much our ears boost or cut sound at certain frequencies, said Susan, with the result varying by as much as 15dB between listeners (a phenomenon captured by the head-related transfer function).
These differences are most prominent at frequencies between 4kHz and 8kHz – but this natural filtering is bypassed when listening with headphones. When we wear headphones, Susan explained, pressure waves are delivered straight to our auditory canal. As a result, headphone manufacturers have to introduce their own filters to compensate for the filtering that would otherwise take place when the pressure waves we perceive as sound hit the outer ear, or pinna.
This, in turn, means that headphones can sound very different to different people.
“Suppose your ears boost 5kHz [frequencies] and cut 4kHz, and your friend’s ears boost 3kHz and cut 6kHz – putting on the exact same [pair of headphones] is going to sound very different to you, because you just eliminated the filter that you’ve worn and listened through your whole life,” she explained.
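To see how two such “ear filters” diverge, here is a minimal sketch in Python with NumPy and SciPy. The centre frequencies, gains and Q values are hypothetical, loosely echoing the numbers in Susan’s example, and a real head-related transfer function is a far more complex, direction-dependent response. It models each listener’s ear as a pair of standard peaking EQ filters and prints the resulting boost or cut at a few probe frequencies.

```python
# Hypothetical "ear filters" modelled as RBJ audio-EQ-cookbook peaking
# biquads. All numbers below are illustrative, not measured HRTF data.
import numpy as np
from scipy import signal

def peaking_eq(f0, gain_db, q, fs):
    """Return (b, a) coefficients for a peaking EQ centred at f0 Hz."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

fs = 44_100
# (centre frequency Hz, gain dB, Q) per listener, echoing Susan's example
listeners = {
    "you":         [(5000, +5, 2), (4000, -5, 2)],
    "your friend": [(3000, +5, 2), (6000, -5, 2)],
}

for name, bands in listeners.items():
    h_total = np.ones(2048, dtype=complex)
    for f0, gain_db, q in bands:
        b, a = peaking_eq(f0, gain_db, q, fs)
        f, h = signal.freqz(b, a, worN=2048, fs=fs)
        h_total *= h
    db = 20 * np.log10(np.abs(h_total))
    for probe in (3000, 4000, 5000, 6000):
        i = np.argmin(np.abs(f - probe))
        print(f"{name}: {probe} Hz -> {db[i]:+.1f} dB")
```

The same headphone signal passed through these two responses would differ by several dB in exactly the region Susan highlights, which is why the same pair can sound bright to one listener and dull to another.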
Finding the right audio equipment for your profile
With this in mind, Susan recommended that listeners take their individual characteristics into account when choosing headphones, and spend time listening to find the pair best suited to their unique profile – just as they would when shopping for eyeglasses.
“When you’re choosing headphones, don’t just go by the reviews or the price tag or what famous engineer uses them,” she said, adding: “Listen to them … consider your filter, consider there can be as much as 15dB of boosting and cutting in that crucial range from 1kHz up to about 10kHz … and that your ears are accustomed to your unique input stage, to your unique filter.”
Hearing health and safe listening
Susan rounded off her talk by reflecting on some of the factors that can negatively impact our hearing, and explaining what this means for how we listen.
She explained how chronic stress can cause hearing loss, as our brain diverts resources away from our auditory system to other parts of our body, and how our diet, sleeping habits and cortisol levels can all affect our ability to process sounds. She also talked through some of the hearing tests that are available, such as auditory brainstem response (ABR) tests and screenings that gauge the health of our outer hair cells (which tend to decline as we age, limiting our ability to hear softer sounds such as speech).
In addition, she discussed some of the risks that listeners should be aware of if using headphones for extended periods of time, highlighting findings gathered from a project she conducted with students at Berklee, which aimed to identify which musicians were most at risk of hearing loss.
Susan gathered data from hundreds of students and was surprised to find that it was not drummers or guitar players who were most at risk but horn players and singers.
“Be careful with your headphones: love them, use them, rely on them, know them, cherish them, get the good ones … but mind their usage”
“When I looked at the data, one thing became clear: it’s not the instrument itself that’s causing noise-induced hearing loss, but the proximity of the instrument to your ears,” she explained. “Drummers have their cymbals at least an arm’s length away, the guitar player’s [typically] on the other side of the room from his amp, but horn players have the bell of a trumpet right next to their ear.”
“It’s the proximity of that pressure wave, the source of that pressure wave to your eardrum,” she added, “so be careful with your headphones: love them, use them, rely on them, know them, cherish them, get the good ones … but mind their usage.”
Whilst excessive use of headphones (for example, among those who have to spend several hours a day listening for work) can have a negative impact on our hearing, Susan also highlighted that it’s good to expose our auditory systems to a moderate amount of stress – ideally, through a mix of headphone and loudspeaker listening.
“A moderate amount of stress for our nervous systems is good for it… People who live in the quietest environments tend to have bad hearing, because they’re not stimulating [their nervous system], so moderate stimulation, especially if we’re working in music, is good,” she added.
Asked how much listening is too much, she recommended that listeners pay attention to their body and brain’s response, adding: “I think the best way to consider it is to consider your own body – you can feel when it’s getting fatigued.”