Matching Items (5)

Description

Music training is associated with measurable physiologic changes in the auditory pathway. Benefits of music training have also been demonstrated in the areas of working memory, auditory attention, and speech perception in noise. The purpose of this study was to determine whether long-term auditory experience secondary to music training enhances the ability to detect, learn, and recall new words.

Participants consisted of 20 young adult musicians and 20 age-matched non-musicians. In addition to completing word recognition and non-word detection tasks, each participant learned 10 nonsense words in a rapid word-learning task. All tasks were completed in quiet and in multi-talker babble. Next-day retention of the learned words was examined in isolation and in context. Cortical auditory evoked responses to vowel stimuli were recorded to obtain latencies and amplitudes for the N1, P2, and P3a components. Performance was compared across groups and listening conditions. Correlations between the behavioral tasks and the cortical auditory evoked responses were also examined.
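As a rough illustration of these analyses, the sketch below compares the two groups on one behavioral measure and correlates one cortical measure with behavioral performance. All variable names and values are placeholders, not the study's data.

```python
# Hedged sketch of the group comparison and brain-behavior correlation analyses.
# Assumes word-recognition scores (% correct) and P2 latencies (ms) have already
# been tabulated per participant; the arrays below are illustrative placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
musicians = rng.normal(88, 5, 20)        # word recognition in quiet, musicians
non_musicians = rng.normal(88, 5, 20)    # word recognition in quiet, non-musicians

# Independent-samples comparison of the two groups on one behavioral task
t, p = stats.ttest_ind(musicians, non_musicians)
print(f"group comparison: t = {t:.2f}, p = {p:.3f}")

# Correlation between P2 latency in quiet and word recognition in quiet,
# collapsed across groups
p2_latency = rng.normal(180, 15, 40)     # placeholder P2 latencies (ms)
word_rec = np.concatenate([musicians, non_musicians])
r, p = stats.pearsonr(p2_latency, word_rec)
print(f"P2 latency vs. word recognition: r = {r:.2f}, p = {p:.3f}")
```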

No differences were found between groups (musicians vs. non-musicians) on any of the behavioral tasks, nor did the groups differ in cortical auditory evoked potential (CAEP) latencies or amplitudes, with the exception of P2 latencies, which were significantly longer in musicians than in non-musicians. Performance was significantly poorer in babble than in quiet on word recognition and non-word detection, but not on word learning, learned-word retention, or learned-word detection. CAEP latencies collapsed across groups were significantly longer, and amplitudes significantly smaller, in babble than in quiet. P2 latencies in quiet were positively correlated with word recognition in quiet, while P3a latencies in babble were positively correlated with word recognition and learned-word detection in babble. No other significant correlations were observed between CAEPs and performance on the behavioral tasks.

These results indicated that, for young normal-hearing adults, auditory experience resulting from long-term music training did not provide an advantage for learning new information in either favorable (quiet) or unfavorable (babble) listening conditions. Results of the present study suggest that the relationship between music training and the strength of cortical auditory evoked responses may be more complex or too weak to be observed in this population.
Contributors: Stewart, Elizabeth (Author) / Pittman, Andrea (Thesis advisor) / Cone, Barbara (Committee member) / Zhou, Yi (Committee member) / Arizona State University (Publisher)
Created: 2017
Description

The purpose of the present study was to determine if vocabulary knowledge is related to degree of hearing loss. A 50-question multiple-choice vocabulary test comprised of old and new words was administered to 43 adults with hearing loss (19 to 92 years old) and 51 adults with normal hearing (20 to 40 years old). Degree of hearing loss ranged from mild to moderately-severe as determined by bilateral pure-tone thresholds. Education levels ranged from some high school to graduate degrees. It was predicted that knowledge of new words would decrease with increasing hearing loss, whereas knowledge of old words would be unaffected. The Test of Contemporary Vocabulary (TCV) was developed for this study and contained words with old and new definitions. The vocabulary scores were subjected to repeated-measures ANOVA with definition type (old and new) as the within-subjects factor. Hearing level and education were between-subjects factors, while age was entered as a covariate. The results revealed no main effect of age or education level, while a significant main effect of hearing level was observed. Specifically, performance for new words decreased significantly as degree of hearing loss increased. A similar effect was not observed for old words. These results indicate that knowledge of new definitions is inversely related to degree of hearing loss.
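A minimal sketch of this kind of mixed-design analysis is shown below, using the pingouin library and randomly generated placeholder data. For simplicity it includes only the within-subjects factor and a single between-subjects factor, omitting the education factor and age covariate used in the actual analysis; all column names are illustrative.

```python
# Hedged sketch of a mixed-design ANOVA on vocabulary scores: definition type
# (old vs. new) as the within-subjects factor, hearing level as the
# between-subjects factor. Data are random placeholders, not the TCV results.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(1)
n = 94  # 43 listeners with hearing loss + 51 listeners with normal hearing
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n), 2),
    "definition_type": np.tile(["old", "new"], n),
    "hearing_level": np.repeat(rng.choice(["normal", "mild", "moderate"], n), 2),
    "score": rng.integers(10, 26, 2 * n),   # items correct per definition type
})

aov = pg.mixed_anova(data=df, dv="score", within="definition_type",
                     subject="subject", between="hearing_level")
print(aov.round(3))
```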
Contributors: Marzan, Nicole Ann (Author) / Pittman, Andrea (Thesis director) / Azuma, Tamiko (Committee member) / Wexler, Kathryn (Committee member) / Department of Speech and Hearing Science (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description

The purpose of the present study was to determine if an automated speech perception task yields results that are equivalent to a word recognition test used in audiometric evaluations. This was done by testing 51 normal-hearing adults using a traditional word recognition task (NU-6) and an automated Non-Word Detection task. Stimuli for each task were presented in quiet as well as at six signal-to-noise ratios (SNRs) increasing in 3 dB increments (+0 dB, +3 dB, +6 dB, +9 dB, +12 dB, +15 dB). A two one-sided test (TOST) procedure was used to determine equivalency of the two tests. This approach required the scores for both tasks to be arcsine transformed and converted to z-scores in order to calculate the difference in scores across listening conditions. These values were then compared to a predetermined criterion to establish whether equivalency existed. It was expected that the TOST procedure would reveal equivalency between the traditional word recognition task and the automated Non-Word Detection task. The results confirmed that the two tasks differed by no more than 2 test items in any of the listening conditions. Overall, the results indicate that the automated Non-Word Detection task could be used in addition to, or in place of, traditional word recognition tests. In addition, the features of an automated test such as the Non-Word Detection task offer benefits including rapid administration, accurate scoring, and supplemental performance data (e.g., error analyses) beyond those obtained with traditional speech perception measures.
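The sketch below illustrates a paired TOST equivalence check on arcsine-transformed proportion scores. The equivalence margin, the example scores, and the use of one-sample t-tests against the margins are assumptions for illustration; the study's exact criterion and transformation to z-scores may differ.

```python
# Hedged sketch of an equivalence (TOST) check on arcsine-transformed proportion
# scores from the two tasks. Scores and the equivalence margin are illustrative.
import numpy as np
from scipy import stats

def arcsine(p):
    """Variance-stabilizing arcsine-square-root transform of a proportion."""
    return 2.0 * np.arcsin(np.sqrt(p))

rng = np.random.default_rng(2)
n = 51
nu6 = rng.uniform(0.80, 1.00, n)        # NU-6 word recognition, proportion correct
nonword = np.clip(nu6 + rng.normal(0, 0.03, n), 0, 1)  # non-word detection, same listeners

diff = arcsine(nu6) - arcsine(nonword)  # paired differences in transformed units
margin = arcsine(0.90) - arcsine(0.86)  # example margin (~2 items out of 50 near 88%)

# TOST: two one-sided paired t-tests against -margin and +margin
t_low, p_low = stats.ttest_1samp(diff, -margin, alternative="greater")
t_high, p_high = stats.ttest_1samp(diff, margin, alternative="less")
p_tost = max(p_low, p_high)
print(f"TOST p = {p_tost:.3f}; equivalent at alpha = .05: {p_tost < 0.05}")
```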
Contributors: Stahl, Amy Nicole (Author) / Pittman, Andrea (Thesis director) / Boothroyd, Arthur (Committee member) / McBride, Ingrid (Committee member) / School of Human Evolution and Social Change (Contributor) / Department of Speech and Hearing Science (Contributor) / Barrett, The Honors College (Contributor)
Created: 2017-05
Description

The purpose of this study was to explore the effects of word type, phonotactic probability, word frequency, and neighborhood density on the vocabularies of children with mild-to-moderate hearing loss compared to children with normal hearing. This was done by assigning values for these parameters to each test item on the Peabody Picture Vocabulary Test (Version III, Form B) to quantify and characterize the performance of children with hearing loss relative to that of children with normal hearing. It was expected that PPVT IIIB scores would: 1) decrease as the degree of hearing loss increased; 2) increase as a function of age; 3) be more positively related to nouns than to verbs or attributes; 4) be negatively related to phonotactic probability; 5) be negatively related to word frequency; and 6) be negatively related to neighborhood density. All but one of the expected outcomes were observed. PPVT IIIB performance decreased as hearing loss increased and increased with age. Performance for nouns, verbs, and attributes increased with PPVT IIIB performance, whereas neighborhood density decreased. Phonotactic probability was expected to decrease as PPVT IIIB performance increased, but instead it increased due to the confounding effects of word length and the order of words on the test. Age and hearing level were rejected by the multiple regression analyses as contributors to PPVT IIIB performance for the children with hearing loss. Overall, the results indicate that there is a 2-year difference in vocabulary age between children with normal hearing and children with hearing loss, and that this may be due to factors external to the child (such as word frequency and phonotactic probability) rather than the child's age and hearing level. This suggests that children with hearing loss need continued clinical services (amplification) as well as additional support services in school throughout childhood.
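The sketch below shows, with randomly generated placeholder values, how item-level performance might be regressed on lexical characteristics such as word frequency, phonotactic probability, and neighborhood density. It is not the study's actual model, data, or norms.

```python
# Hedged sketch of an item-level regression relating PPVT-IIIB performance to
# lexical characteristics. Predictor values are random placeholders; the real
# analysis assigned values to each test item from published norms.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n_items = 204  # approximate PPVT-III item count, used here only for illustration
items = pd.DataFrame({
    "prop_correct": rng.uniform(0.2, 1.0, n_items),       # proportion of children correct
    "word_frequency": rng.lognormal(2.0, 1.0, n_items),   # occurrences per million
    "phonotactic_prob": rng.uniform(0.0, 0.2, n_items),
    "neighborhood_density": rng.integers(0, 25, n_items),
})

model = smf.ols(
    "prop_correct ~ word_frequency + phonotactic_prob + neighborhood_density",
    data=items,
).fit()
print(model.summary())
```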
Contributors: Latto, Allison Renee (Author) / Pittman, Andrea (Thesis director) / Gray, Shelley (Committee member) / Brinkley, Shara (Committee member) / Barrett, The Honors College (Contributor) / Department of Speech and Hearing Science (Contributor)
Created: 2013-05
Description

Vocal emotion production is important for social interactions in daily life. Previous studies found that pre-lingually deafened cochlear implant (CI) children without residual acoustic hearing had significant deficits in producing pitch cues for vocal emotions as compared to post-lingually deafened CI adults, normal-hearing (NH) children, and NH adults. In light of the importance of residual acoustic hearing for the development of vocal emotion production, this study tested whether pre-lingually deafened CI children with residual acoustic hearing produce pitch cues for vocal emotions similar to those of the other participant groups. Sixteen pre-lingually deafened CI children with residual acoustic hearing, nine post-lingually deafened CI adults with residual acoustic hearing, twelve NH children, and eleven NH adults were asked to produce ten semantically neutral sentences with a happy or a sad emotion. The results showed no significant group effect for either the ratio of mean fundamental frequency (F0) or the ratio of F0 standard deviation between emotions. Instead, CI children showed a significantly greater intensity difference between emotions than CI adults, NH children, and NH adults. In CI children, the aided pure-tone average hearing threshold of the acoustic ear was correlated with the ratio of mean F0 and the ratio of duration between emotions. These results suggest that residual acoustic hearing with low-frequency pitch cues may facilitate the development of vocal emotion production in pre-lingually deafened CI children.
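The acoustic measures described above reduce to simple ratios and differences between emotions. The sketch below computes them from F0 and intensity contours that are assumed to have been extracted already (e.g., with a pitch tracker such as Praat); the arrays are placeholders standing in for one talker's real contours.

```python
# Hedged sketch of the between-emotion acoustic measures: ratio of mean F0,
# ratio of F0 standard deviation, and mean intensity difference.
# Placeholder arrays stand in for extracted F0 (Hz) and intensity (dB) frames.
import numpy as np

rng = np.random.default_rng(4)
f0_happy = rng.normal(260, 40, 500)         # voiced-frame F0 for "happy" sentences
f0_sad = rng.normal(210, 20, 500)           # voiced-frame F0 for "sad" sentences
intensity_happy = rng.normal(72, 3, 500)    # frame intensity, happy
intensity_sad = rng.normal(66, 3, 500)      # frame intensity, sad

mean_f0_ratio = np.mean(f0_happy) / np.mean(f0_sad)                  # ratio of mean F0
f0_sd_ratio = np.std(f0_happy) / np.std(f0_sad)                      # ratio of F0 variability
intensity_diff = np.mean(intensity_happy) - np.mean(intensity_sad)   # dB difference

print(f"mean-F0 ratio (happy/sad): {mean_f0_ratio:.2f}")
print(f"F0-SD ratio (happy/sad):   {f0_sd_ratio:.2f}")
print(f"intensity difference (dB): {intensity_diff:.1f}")
```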

Contributors: Macdonald, Andrina Elizabeth (Author) / Luo, Xin (Thesis director) / Pittman, Andrea (Committee member) / College of Health Solutions (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05