Matches in SemOpenAlex for { <https://semopenalex.org/work/W807449959> ?p ?o ?g. }
Showing items 1 to 71 of 71, with 100 items per page.
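The listing below can be reproduced programmatically against the SemOpenAlex SPARQL service. A minimal sketch in Python follows; the endpoint URL is an assumption (adjust it to wherever the service is actually hosted), and the named-graph variable ?g from the query above is omitted for simplicity.

    import requests

    # Assumed public SPARQL endpoint for SemOpenAlex; adjust if hosted elsewhere.
    ENDPOINT = "https://semopenalex.org/sparql"

    # Same subject and ?p/?o variables as the query shown above; ?g is dropped.
    QUERY = """
    SELECT ?p ?o WHERE {
      <https://semopenalex.org/work/W807449959> ?p ?o .
    }
    """

    response = requests.get(
        ENDPOINT,
        params={"query": QUERY},
        headers={"Accept": "application/sparql-results+json"},
        timeout=30,
    )
    response.raise_for_status()

    # Standard SPARQL 1.1 JSON results: one binding per matched triple.
    for binding in response.json()["results"]["bindings"]:
        print(binding["p"]["value"], binding["o"]["value"])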
- W807449959 endingPage "33" @default.
- W807449959 startingPage "6" @default.
- W807449959 abstract "You have accessThe ASHA LeaderFeature1 Mar 2007What Do Children Hear? How Auditory Maturation Affects Speech Perception Lynne Werner Lynne Werner Google Scholar More articles by this author https://doi.org/10.1044/leader.FTR1.12042007.6 SectionsAbout ToolsAdd to favorites ShareFacebookTwitterLinked In Auditory development is a prolonged process, despite the precocious development of the inner ear. Audiologists know that infants don’t respond to sound at the low intensities to which adults will respond. What hearing scientists have learned in 25 years of studying the development of hearing in infants and children is that youngsters’ immature thresholds in the sound booth reflect immature hearing, not just immature responses. These immaturities limit infants’ ability not only to detect a tone, but also to hear and to learn from sound in real environments. Moreover, the process of auditory development continues well into the school years, as children become more selective and more flexible in the way that they process sound. Clinicians implicitly understand that infants and children hear differently from adults, and this understanding shapes their interactions with infants and children. Research in auditory development has broader implications for clinical and educational practice—as well as public policy—as professionals work to reduce noise levels in homes and in schools and raise awareness of the effect of competing sound on infants’ and children’s ability to process speech. Auditory Development Auditory development progresses through three stages. During the first stage, the ability of the auditory system to encode sound precisely becomes mature. This stage lasts from full-term birth to about 6 months of age, and involves maturation of the middle ear and of the brainstem auditory pathways. During the second stage, from 6 months to about 5 years of age, the ability to focus on or select one feature of sound matures. During the third stage, from 6 years into adolescence, the ability to use different sound features flexibly under changing listening conditions matures. Both the second and third stages involve maturation of auditory cortex and central processing. Stage 1: Maturation of Sound Coding Newborns’ impressive ability to discriminate between speech sounds, to recognize voices, and even to recognize their native speech has been well-documented. Clearly, infants come into postnatal life ready to listen to sound and to learn from it. This process likely begins before birth. However, studies that have tested newborns’ discrimination of changes in the details of speech suggest that their representations of sound are coarser than adults’ in some ways. For example, they are more likely to notice a change in a syllable if the vowel changes rather than a consonant (Bertoncini, Bijeljac-Babic, Jusczyk, Kennedy, & Mehler, 1988). From examining very basic auditory abilities, researchers know that young infants’ thresholds for detecting sound are higher than adults’ and that their ability to separate or discriminate sounds of different frequencies is immature, particularly at frequencies above 3000 Hz than at lower frequencies (e.g., Olsho, Koch, Carter, Halpin, & Spetner, 1988; Olsho, Koch, & Halpin, 1987). Studies of the acoustical response of the ear of young infants point to the middle ear as a source of immature thresholds in quiet. 
The middle ear of an infant is less efficient than that of an adult in transmitting sound to the inner ear (Keefe, Bulen, Arehart, & Burns, 1993). The efficiency of high-frequency sound transmission through the middle ear improves considerably in the first year of life, with smaller progressive improvements across the frequency range of hearing continuing well into childhood (Okabe, Tanaka, Hamada, Miura, & Funai, 1988). Interestingly, the inner ear seems to be mature in newborns. Nonetheless, electrophysiological measures show a broader neural response to high-frequency sounds, matching the results of behavioral studies of infants (e.g., Abdala & Folsom, 1995). Furthermore, transmission time of the neural response through the brainstem auditory pathway is correlated with young infants’ ability to detect a high-frequency sound (Werner, Folsom, & Mancl, 1994). Limitations in these basic auditory abilities would be expected to limit the precision with which a young infant can represent a complex sound such as speech. Researchers speculate that one reason adults speak more slowly, more clearly, and at a higher intensity to infants is to compensate for infants’ immature hearing.

Stage 2: Maturation of Selective Listening and Discovering New Details in Sound

By the time an infant is 6 months old, middle ear efficiency has improved and the transmission of information through the brainstem seems mature. However, behavioral tests of hearing still find higher response thresholds, in quiet and in noise, for infants at this age and, in fact, for children up to 4 years old (e.g., Schneider & Trehub, 1992). A small part of this immature sound detection may be due to simple inattentiveness, or infants’ not being on task at all times during the test. However, most of the difference seems to result from the way infants listen to sound. Whereas adults focus on the frequencies they expect will allow them to identify a sound, infants tend to listen in a broadband way: they listen to all frequencies rather than selecting the most informative ones. This difference is demonstrated in a simple task in which infants and adults learn to respond to a tone in noise (Bargones & Werner, 1994). On a large majority of the trials, the tone is presented at one “expected” frequency, but on some trials, a tone at a different, “unexpected” frequency is presented. Adults tend not to hear the unexpected frequencies, while infants detect the expected and unexpected frequencies equally well. The interpretation of this result is that infants listen for a broad range of frequencies, while adults listen only to the frequency at which they expect the signal to be presented. Could it be that infants just don’t form expectations about sound as adults do? Infants’ performance in other tasks suggests that they not only form expectations but also direct their attention to increase their sensitivity to sound under some conditions. For example, when a short burst of noise cues the listener that the target sound is about to occur, both infants and adults detect the target sound better when it occurs at the expected time rather than at a slightly earlier- or later-than-expected time (Parrish & Werner, 2004). This finding suggests that infants learn that the sound they are supposed to detect usually occurs at a specific time and that they listen for the sound at that time but not at other times. It also means that infants have the capacity to listen selectively under certain conditions.
If infants can direct their attention to a particular time, why don’t they direct it to a particular frequency? Researchers speculate that it would be maladaptive for infants to listen selectively to a sound like speech, in which the important frequencies change depending on the speaker, the context, the language, and other factors. It may be more sensible for infants to continue to listen broadly to speech until considerable listening experience in many situations allows them to learn where the important speech cues occur. In fact, research suggests that adults learning a second language have difficulty in part because they listen to the aspects of speech they have learned to listen to in their native language, while ignoring cues in other frequency ranges that are important for the second language (e.g., Best, McRoberts, & Sithole, 1988). One consequence of infants’ broadband listening is that it is difficult for them to separate target sounds from competing sounds. Adults can have problems separating a target from competing sounds when the competing sounds change over time, and infants also have special difficulty when the competing sounds vary. For infants, though, just having a competing sound in the background seems to make it difficult to hear a target. For example, the presence of a competing sound, even one that is far from the target sound in frequency, increases infants’ threshold for the target sound (L. J. Leibold & Werner, 2006). This susceptibility to interference from competing sounds appears to continue until children are 4 or 5 years old (L. Leibold & Neff, in press). This finding implies that learning about sound will be more difficult for infants and preschool children in noisy environments and in those with several competing sources of sound. Research under way in several laboratories is attempting to determine whether infants and children can use some of the strategies that adults use to separate target and competing sounds. The development of selective listening involves not only picking out one sound among several, but also listening to the details in complex sounds such as speech. In a series of studies, Nittrouer (2006) has shown that young children tend to make decisions about the identity of a syllable or a word on the basis of global acoustic differences rather than fine acoustic details. Nittrouer’s findings are consistent with the idea that children do not focus on specific frequencies. Apparently, it is only with years of exposure under a variety of conditions that children notice the details in speech.

Stage 3: Maturation of Perceptual Flexibility

By school age, children appear to have mastered selective listening: they are no longer as influenced by background sounds as younger children are, and they appear to focus on informative aspects of sound. School-aged children are still less consistent than adults in the way they categorize speech sounds, however, and researchers can still identify listening conditions that are more difficult for school-aged children than for adults. Children are less consistent than adults in identifying speech sounds because, once they have discovered the multiple redundant acoustic differences between sounds, they have trouble when all of those differences are not available to them.
For example, Hazan and Barrett (2000) found that when they synthetically altered syllables so that the syllables were distinguished by only one acoustic cue, 6-year-old children were much less consistent in identifying the syllables than when multiple acoustic cues were available. Older children and adults were as consistent at categorizing the syllables with one cue as with multiple cues. Similarly, in the presence of noise or reverberation, some speech cues may be difficult to hear because of masking or distortion, while others remain usable. Under such conditions, adults can switch to the more reliable cue, while children apparently cannot. Finally, speech perception may be a relatively automatic process for young adults, based on years of practice. For school-aged children, however, perceiving speech in difficult listening situations may be less automatic, requiring greater attention and the allocation of more processing resources. Any additional demands on attention may be impossible for children to manage. Wightman and Kistler (2005) recently showed that adults could separate two voices presented to one ear, and that their ability to do so was little affected by yet another voice presented to the opposite ear. Children ages 6–9, in contrast, could separate the two voices in one ear fairly well, but their performance deteriorated markedly when another voice was added to the opposite ear. One explanation of this result is that the adults had sufficient processing resources to block out the voice in the opposite ear, while the children required so much effort to separate the original two voices that they had no processing resources left to block out the third voice.

Implications of Auditory Development

Developmental studies of infants and young children are beginning to explain why difficult listening conditions are nearly always more challenging for children than for adults. Early in infancy, fundamental auditory processes limit infants’ ability to represent the fine acoustic details in the sounds they hear. However, even after the auditory system is able to represent those details, infants and preschool children do not appear to use all the details available to them. It is as if the system remains unselective during this stage of development so that children will learn to use the appropriate acoustic information even though the frequencies at which it will occur are uncertain. Finally, school-aged children seem to have the acoustic details available to them, and they are able to attend to those details. Auditory development in this final stage involves learning to use different details flexibly as listening conditions change and acquiring the practice needed to make speech perception an automatic process. The results of these studies have implications in many realms. For the audiologist, they suggest that infants and children with hearing impairment need to hear the broadest possible range of frequencies to learn how to understand speech most effectively. For the speech-language pathologist, they suggest that children may not always hear all of the acoustic details in speech, even when those details are available to them. For those charged with designing the environments in which infants and children live and learn, they underscore the importance of reducing the levels of noise and reverberation to optimize auditory learning.

References

Abdala, C., & Folsom, R. C. (1995). Frequency contribution to the click-evoked auditory brain stem response in human adults and infants. Journal of the Acoustical Society of America, 97(4), 2394–2404.
Bargones, J. Y., & Werner, L. A. (1994). Adults listen selectively; infants do not. Psychological Science, 5(3), 170–174.
Bertoncini, J., Bijeljac-Babic, R., Jusczyk, P. W., Kennedy, L. J., & Mehler, J. (1988). An investigation of young infants’ perceptual representations of speech sounds. Journal of Experimental Psychology: General, 117(1), 21–33.
Best, C. T., McRoberts, G. W., & Sithole, N. M. (1988). Examination of perceptual reorganization for nonnative speech contrasts: Zulu click discrimination by English-speaking adults and infants. Journal of Experimental Psychology: Human Perception and Performance, 14(3), 345–360.
Hazan, V., & Barrett, S. (2000). The development of phonemic categorization in children aged 6–12. Journal of Phonetics, 28(4), 377–396.
Keefe, D. H., Bulen, J. C., Arehart, K. H., & Burns, E. M. (1993). Ear-canal impedance and reflection coefficient in human infants and adults. Journal of the Acoustical Society of America, 94, 2617–2638.
Leibold, L., & Neff, D. L. (in press). Effects of masker-spectral variability and masker fringes in children and adults. Journal of the Acoustical Society of America.
Leibold, L. J., & Werner, L. A. (2006). Effect of masker-frequency variability on the detection performance of infants and adults. Journal of the Acoustical Society of America, 119(6), 3960–3970.
Nittrouer, S. (2006). Children hear the forest (L). Journal of the Acoustical Society of America, 120(4), 1799–1802.
Okabe, K. S., Tanaka, S., Hamada, H., Miura, T., & Funai, H. (1988). Acoustic impedance measured on normal ears of children. Journal of the Acoustical Society of Japan, 9, 287–294.
Olsho, L. W., Koch, E. G., Carter, E. A., Halpin, C. F., & Spetner, N. B. (1988). Pure-tone sensitivity of human infants. Journal of the Acoustical Society of America, 84(4), 1316–1324.
Olsho, L. W., Koch, E. G., & Halpin, C. F. (1987). Level and age effects in infant frequency discrimination. Journal of the Acoustical Society of America, 82, 454–464.
Parrish, H. K., & Werner, L. A. (2004). Listening windows in infants and adults. Paper presented at the American Auditory Society, Scottsdale, AZ.
Saffran, J., Werker, J., & Werner, L. A. (2006). The infant’s auditory world: Hearing, speech and the beginnings of language. In W. Damon, R. M. Lerner, D. Kuhn, & R. S. Siegler (Eds.), Handbook of child psychology: Vol. 2. Cognition, perception, and language (6th ed.). New York: Wiley.
Schneider, B. A., & Trehub, S. E. (1992). Sources of developmental change in auditory sensitivity. In L. A. Werner & E. W. Rubel (Eds.), Developmental psychoacoustics (pp. 3–46). Washington, DC: American Psychological Association.
Werner, L. A., Folsom, R. C., & Mancl, L. R. (1994). The relationship between auditory brainstem response latencies and behavioral thresholds in normal hearing infants and adults. Hearing Research, 77, 88–98.
Wightman, F. L., & Kistler, D. J. (2005). Informational masking of speech in children: Effects of ipsilateral and contralateral distracters. Journal of the Acoustical Society of America, 118(5), 3164–3176.

Author Notes

Lynne Werner is professor of speech and hearing sciences at the University of Washington in Seattle. Her research focuses on the development of hearing and listening in infants. Contact her by e-mail at [email protected].

© 2007 American Speech-Language-Hearing Association" @default.
- W807449959 created "2016-06-24" @default.
- W807449959 creator A5007672871 @default.
- W807449959 date "2007-03-01" @default.
- W807449959 modified "2023-09-26" @default.
- W807449959 title "What Do Children Hear? How Auditory Maturation Affects Speech Perception" @default.
- W807449959 cites W1970824760 @default.
- W807449959 cites W1981338937 @default.
- W807449959 cites W1995422333 @default.
- W807449959 cites W2010428176 @default.
- W807449959 cites W2023393433 @default.
- W807449959 cites W2037458383 @default.
- W807449959 cites W2043956553 @default.
- W807449959 cites W2074464338 @default.
- W807449959 cites W2124532644 @default.
- W807449959 cites W2154095608 @default.
- W807449959 cites W2168100340 @default.
- W807449959 cites W2169800988 @default.
- W807449959 cites W2485569465 @default.
- W807449959 cites W4243012563 @default.
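The cites entries above list only opaque work identifiers. A follow-up query can resolve each cited work to a readable title. A sketch under the same assumptions as the earlier snippet; because this listing abbreviates predicates to local names ("cites", "title"), the query matches predicates by IRI suffix instead of assuming exact SemOpenAlex namespaces.

    import requests

    ENDPOINT = "https://semopenalex.org/sparql"  # assumed, as in the earlier sketch

    # Match predicates by local-name suffix, since full IRIs are not shown here.
    QUERY = """
    SELECT ?cited ?title WHERE {
      <https://semopenalex.org/work/W807449959> ?cites ?cited .
      FILTER(STRENDS(STR(?cites), "cites"))
      ?cited ?titleProp ?title .
      FILTER(STRENDS(STR(?titleProp), "title"))
    }
    """

    rows = requests.get(
        ENDPOINT,
        params={"query": QUERY},
        headers={"Accept": "application/sparql-results+json"},
        timeout=30,
    ).json()["results"]["bindings"]

    for row in rows:
        print(row["cited"]["value"], "-", row["title"]["value"])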
- W807449959 doi "https://doi.org/10.1044/leader.ftr1.12042007.6" @default.
- W807449959 hasPublicationYear "2007" @default.
- W807449959 type Work @default.
- W807449959 sameAs 807449959 @default.
- W807449959 citedByCount "9" @default.
- W807449959 countsByYear W8074499592015 @default.
- W807449959 countsByYear W8074499592016 @default.
- W807449959 countsByYear W8074499592021 @default.
- W807449959 countsByYear W8074499592022 @default.
- W807449959 countsByYear W8074499592023 @default.
- W807449959 crossrefType "journal-article" @default.
- W807449959 hasAuthorship W807449959A5007672871 @default.
- W807449959 hasConcept C15744967 @default.
- W807449959 hasConcept C169760540 @default.
- W807449959 hasConcept C180747234 @default.
- W807449959 hasConcept C26760741 @default.
- W807449959 hasConcept C3020799230 @default.
- W807449959 hasConcept C46312422 @default.
- W807449959 hasConcept C548259974 @default.
- W807449959 hasConcept C71924100 @default.
- W807449959 hasConcept C99209842 @default.
- W807449959 hasConceptScore W807449959C15744967 @default.
- W807449959 hasConceptScore W807449959C169760540 @default.
- W807449959 hasConceptScore W807449959C180747234 @default.
- W807449959 hasConceptScore W807449959C26760741 @default.
- W807449959 hasConceptScore W807449959C3020799230 @default.
- W807449959 hasConceptScore W807449959C46312422 @default.
- W807449959 hasConceptScore W807449959C548259974 @default.
- W807449959 hasConceptScore W807449959C71924100 @default.
- W807449959 hasConceptScore W807449959C99209842 @default.
- W807449959 hasIssue "4" @default.
- W807449959 hasLocation W8074499591 @default.
- W807449959 hasOpenAccess W807449959 @default.
- W807449959 hasPrimaryLocation W8074499591 @default.
- W807449959 hasRelatedWork W1967136428 @default.
- W807449959 hasRelatedWork W1994638219 @default.
- W807449959 hasRelatedWork W2117160415 @default.
- W807449959 hasRelatedWork W2133935771 @default.
- W807449959 hasRelatedWork W2139934183 @default.
- W807449959 hasRelatedWork W2377201310 @default.
- W807449959 hasRelatedWork W2440689688 @default.
- W807449959 hasRelatedWork W2530107864 @default.
- W807449959 hasRelatedWork W3096244078 @default.
- W807449959 hasRelatedWork W4251290435 @default.
- W807449959 hasVolume "12" @default.
- W807449959 isParatext "false" @default.
- W807449959 isRetracted "false" @default.
- W807449959 magId "807449959" @default.
- W807449959 workType "article" @default.