Babies listen to much more speech than music at home

Speech and music are the dominant elements of a baby’s listening environment. While previous research has shown that speech plays a critical role in children’s language development, less is known about the music that babies listen to.

A new study from the University of Washington, published May 21 in Developmental Science, is the first to compare the amount of music and speech that children hear at home across infancy. The results showed that babies hear more spoken language than music, and that the gap widens as they grow.

“We wanted to get a snapshot of what’s happening in babies’ home environments,” said corresponding author Christina Zhao, a research assistant professor of speech and hearing sciences at the University of Washington. “Many studies have looked at how many words babies hear at home and have shown that the amount of speech directed at the baby is what matters for language development. We realized we know nothing about what kind of music babies are listening to, and how it compares to speech.”

The researchers analyzed a data set of daylong audio recordings collected at home from English-learning infants at ages 6, 10, 14, 18 and 24 months. At all ages, babies were exposed to more music from electronic devices than from in-person sources; for speech, the pattern was reversed. And while the percentage of speech intended for the babies increased significantly over time, for music it stayed the same.

“We were surprised by how little music there was in these recordings,” said Zhao, who is also director of the Lab for Early Auditory Perception (LEAP), housed at the Institute for Learning & Brain Sciences (I-LABS). “Most of the music was not intended for babies. We can imagine these are songs playing in the background or on the car radio. A lot of it is just ambient.”

This differs from the highly engaging, multisensory, movement-oriented music intervention that Zhao and her team had previously implemented in laboratory settings. During those sessions, music played while babies were given instruments, and the researchers taught caregivers how to synchronize their babies’ movements with the music. A control group of babies, meanwhile, came to the lab just to play.

“We did this twice,” Zhao said. “Both times, we saw the same result: the music intervention enhanced babies’ neural responses to speech sounds. That got us thinking about what happens in the real world. This study is the first step toward that deeper, more important question.”

Previous studies examining musical input in infants’ environments have relied heavily on qualitative and quantitative parent reports, but parents tend to overestimate how much they talk or sing to their children.

This study bridges that gap by analyzing daylong audio recordings made with Language Environment Analysis (LENA) recording devices. The recordings, originally collected for a separate study, documented the babies’ natural sound environments for up to 16 hours a day over two days at each recording age.

The researchers then crowdsourced the annotation of the LENA data through the citizen science platform Zooniverse. Volunteers were asked to determine whether a given clip contained speech or music. When speech or music was identified, they were asked whether it came from an in-person or an electronic source. Finally, they judged whether the speech or music was intended for a baby.
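To make that three-step annotation scheme concrete, here is a minimal Python sketch of the judgments volunteers made for each clip. The names used here (ClipAnnotation, Source, the example clip ID) are hypothetical illustrations for readers, not the study’s actual Zooniverse data format.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Source(Enum):
    """Where the sound came from, per the article's description."""
    IN_PERSON = "in-person"
    ELECTRONIC = "electronic"

@dataclass
class ClipAnnotation:
    """One volunteer's judgments about a single audio clip (hypothetical schema)."""
    clip_id: str
    has_speech: bool
    has_music: bool
    # Answered only when the corresponding sound type was identified:
    speech_source: Optional[Source] = None
    music_source: Optional[Source] = None
    speech_for_infant: Optional[bool] = None  # "Was the speech intended for the baby?"
    music_for_infant: Optional[bool] = None   # "Was the music intended for the baby?"

# Example: background music from a car radio, not directed at the baby
clip = ClipAnnotation(
    clip_id="clip_0001",
    has_speech=False,
    has_music=True,
    music_source=Source.ELECTRONIC,
    music_for_infant=False,
)
print(clip)
```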

Because this research included a limited sample, the researchers are now interested in expanding their data set to determine whether the results generalize to different cultures and populations. A follow-up study will examine the same kind of LENA recordings from babies in Latino families. And because audio recordings lack context, the researchers also want to know when musical moments occur in babies’ lives.

“We are curious to see whether music input correlates with any later developmental milestones in these babies,” Zhao said. “We know that speech input is highly correlated with later language skills. In our data, speech input and music input are not correlated, so it’s not as if a family that tends to talk more also has more music. We are trying to see whether music contributes independently to certain aspects of development.”

Other co-authors were Lindsay Hippe, a former UW honors thesis student and incoming master’s student in speech-language pathology clinical research; Victoria Hennessy, a LEAP research assistant and lab manager; and Naja Ferjan Ramírez, an assistant professor of linguistics and adjunct research professor at I-LABS. The study was funded by the National Institute on Deafness and Other Communication Disorders of the National Institutes of Health.