The sound of language

UNIVERSITY PARK, Pa. — Sitting in a soundproof booth at the Center for Language Science (CLS), graduate student Caitlin Ting grabs a pair of full-size headphones, stretches them over her head and adjusts the soft cushion pads over her ears. As two individual sounds resonate through her headset (bum bum…BUM BUM) — with each iteration growing higher in pitch — a computer program asks her to determine which of the two sound intervals is higher. She correctly clicks on the second option and proceeds to the next question.

Ting is participating in a computerized musical training study co-led by Evan Bradley, assistant professor of psychology at Penn State Brandywine, and Janet Van Hell, professor of psychology and linguistics and co-director of the CLS at Penn State University Park. Both Bradley and Van Hell hope the study will help them better understand how musical training affects the brain and whether it can change the ability to perceive other sounds, like language.

So far, 30 participants have taken part in the ongoing study, which involves listening and responding to speech and musical sounds for one hour per day, over four consecutive days. Music majors and professional musicians are excluded from the study.

Because data are collected in real time and synced to the cloud, Bradley is able to view individual participant reports remotely from his office at Brandywine, track participants’ progress and measure how well they are doing.

Although it’s early in the process, he’s already seeing a positive correlation between musical training and improved language ability.

“It’s kind of remarkable that after only four hours of musical training most participants saw an improvement in language perception, albeit a small improvement in a particular aspect of the task,” said Bradley.

Of course, taking music lessons can't teach you to speak a foreign language. But early results of the study show that the training improves listening skills, including sound discrimination, allowing listeners to process speech and voices more accurately, particularly when learning Mandarin or other “tonal” languages.

Tonal languages — those in which pitch patterns change the meanings of words — are common throughout East Asia, sub-Saharan Africa and among indigenous languages of the Americas. Variations in tone can sound musical and are often difficult to grasp for speakers of English and other non-tonal languages. While non-tonal languages use pitch mainly for intonation and emphasis rather than word meaning, tonal languages such as Cantonese and Vietnamese can convey as many as six different meanings from a single word through as many as six different tones.

As part of the study, Bradley developed tonal language listening tests to gauge understanding of pitch differences. Using a free and open-source software package written in the Python programming language, he processed pre-existing recordings of Mandarin speakers to create sound files of the word ma (pronounced “mah”) spoken in four different tones. Study participants are asked to identify the type of tone they are listening to by choosing the distinctive shape or contour — flat, rising, dipping or falling — that best matches the shape of the pitch.
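The article does not describe the actual software, but the contour-matching logic behind the task can be sketched in a few lines of Python. The sketch below is purely illustrative: it assumes a pitch track has already been extracted from a recording as a list of fundamental-frequency values in hertz, and the function name and thresholds are hypothetical, not taken from Bradley's tool.

```python
def classify_tone(f0, threshold=10):
    """Label a pitch track as flat, rising, dipping or falling,
    mirroring the four Mandarin tone contours of the syllable 'ma'.

    f0: sequence of fundamental-frequency values (Hz) over time.
    threshold: minimum pitch movement (Hz) counted as a real change
               (an illustrative value, not an empirical one).
    """
    start, end = f0[0], f0[-1]
    lowest = min(f0)

    # Tone 1: the pitch stays essentially level.
    if max(f0) - lowest < threshold:
        return "flat"
    # Tone 3: the pitch drops below both endpoints, then recovers.
    if lowest < start - threshold and lowest < end - threshold:
        return "dipping"
    # Tone 2 vs. Tone 4: net direction of the contour.
    return "rising" if end > start else "falling"
```

For example, `classify_tone([200, 160, 210])` returns `"dipping"`, the contour of third-tone ma ("horse"), while `classify_tone([220, 200, 180])` returns `"falling"`. A real system would first estimate the pitch track from audio with a pitch-detection algorithm, which this sketch leaves out.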

According to Bradley, learning differences in pitch and tone could have significant implications in our understanding of the similarities between processing language and music and how our brains use sound to perceive language. Since different tones on the word “ma” can be the difference between saying “mother” or “horse,” distinguishing between pitch variations can also spare one from making an embarrassing remark.

“If we can use music to change how we represent sound in the very early parts of our auditory systems, which include parts of the brain stem and the primary auditory cortex that represent basic information about sound, that will benefit anything that relies on perceiving that sound,” said Bradley.

In the future, Bradley hopes to further customize the study to run different variations of the training to see if he can design musical exercises that help tune people’s ears specifically to learn certain languages.

For now though, he and Van Hell are continuing the collaboration between their respective campuses and will collect additional data from the study to determine if the benefits of musical training extend beyond simply being able to carry a tune.


Last Updated February 23, 2016