Matching Items (7)
Description
The rise in Traumatic Brain Injury (TBI) cases in recent war history has increased the urgency of research into how veterans are affected by TBIs. The purpose of this study was to evaluate the effects of TBI on speech recognition in noise. The AzBio Sentence Test was completed at signal-to-noise ratios (S/N) from -10 dB to +15 dB for a control group of ten participants and one US military veteran with a history of service-connected TBI. All participants had normal hearing sensitivity, defined as thresholds of 20 dB or better at frequencies from 250-8000 Hz, in addition to having tympanograms within normal limits. Comparison of the data collected on the control group versus the veteran suggested that the veteran performed worse than the majority of the control group on the AzBio Sentence Test. Further research with more participants would be beneficial to our understanding of how veterans with TBI perform on speech recognition tests in the presence of background noise.
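As a hedged illustration of the S/N conditions described above (my sketch, not code from the thesis): presenting speech at a target signal-to-noise ratio amounts to scaling the noise so that the speech-to-noise power ratio matches the target in dB. The function below assumes both signals are lists of samples at the same rate.

```python
import math

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so that 10*log10(P_speech / P_noise) == snr_db,
    then add it to `speech`. Both inputs are equal-rate sample lists;
    `noise` must be at least as long as `speech`."""
    n = noise[:len(speech)]
    p_speech = sum(s * s for s in speech) / len(speech)
    p_noise = sum(x * x for x in n) / len(n)
    # Noise power required for the requested S/N (in dB)
    target_noise_power = p_speech / (10 ** (snr_db / 10))
    gain = math.sqrt(target_noise_power / p_noise)
    return [s + gain * x for s, x in zip(speech, n)]
```

At -10 dB S/N, the hardest condition in the study, the scaled noise carries ten times the power of the speech.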
Contributors: Corvasce, Erica Marie (Author) / Peterson, Kathleen (Thesis director) / Williams, Erica (Committee member) / Azuma, Tamiko (Committee member) / Barrett, The Honors College (Contributor) / Department of Speech and Hearing Science (Contributor)
Created: 2015-05
Description
Working memory and cognitive functions contribute to speech recognition in normal hearing and hearing impaired listeners. In this study, auditory and cognitive functions are measured in young adult normal hearing, elderly normal hearing, and elderly cochlear implant subjects. The effects of age and hearing on the different measures are investigated. The correlations between auditory/cognitive functions and speech/music recognition are examined. The results may demonstrate which factors can better explain the variable performance across elderly cochlear implant users.
Contributors: Kolberg, Courtney Elizabeth (Author) / Luo, Xin (Thesis director) / Azuma, Tamiko (Committee member) / Department of Speech and Hearing Science (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
When listeners hear sentences presented simultaneously, they are better able to discriminate between speakers when there is a difference in fundamental frequency (F0). This paper explores the use of a pulse train vocoder to simulate cochlear implant listening. A pulse train vocoder, rather than a noise or tonal vocoder, was used so that the fundamental frequency (F0) of speech would be well represented. The results of this experiment showed that listeners are able to use F0 information to aid speaker segregation. As expected, recognition performance was poorest when there was no difference in F0 between speakers, and listeners performed better as the difference in F0 increased. The types of errors that listeners made were also analyzed. The results show that when an error was made in identifying the correct word from the target sentence, the response was usually (~60%) a word uttered in the competing sentence.
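To make the vocoder idea concrete, here is a minimal single-channel sketch (my illustration, not the thesis code): the band's amplitude envelope modulates an impulse train at a fixed F0, so the carrier itself conveys the fundamental. A real pulse-train vocoder would use a bandpass filterbank and a time-varying F0 track.

```python
import math

def amplitude_envelope(band, win):
    """Crude envelope: full-wave rectify, then moving-average smooth."""
    rect = [abs(s) for s in band]
    env = []
    for i in range(len(rect)):
        lo, hi = max(0, i - win // 2), min(len(rect), i + win // 2 + 1)
        env.append(sum(rect[lo:hi]) / (hi - lo))
    return env

def vocode_channel(band, f0, sr):
    """Modulate an impulse-train carrier at f0 (Hz) with the band envelope."""
    env = amplitude_envelope(band, win=int(sr * 0.01))  # ~10 ms smoothing
    period = int(round(sr / f0))
    return [e if i % period == 0 else 0.0 for i, e in enumerate(env)]
```

Because the pulses recur at the carrier's period, a listener can recover F0 from the output even though the fine structure of the original band is discarded.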
Contributors: Stanley, Nicole Ernestine (Author) / Yost, William (Thesis director) / Dorman, Michael (Committee member) / Liss, Julie (Committee member) / Barrett, The Honors College (Contributor) / Department of Speech and Hearing Science (Contributor) / Hugh Downs School of Human Communication (Contributor)
Created: 2013-05
Description
The purpose of this study was to explore the effects of word type, phonotactic probability, word frequency, and neighborhood density on the vocabularies of children with mild-to-moderate hearing loss compared to children with normal hearing. This was done by assigning values for these parameters to each test item on the Peabody Picture Vocabulary Test (Version III, Form B) to quantify and characterize the performance of children with hearing loss relative to that of children with normal hearing. It was expected that PPVT IIIB scores would: (1) decrease as the degree of hearing loss increased; (2) increase as a function of age; (3) be more positively related to nouns than to verbs or attributes; (4) be negatively related to phonotactic probability; (5) be negatively related to word frequency; and (6) be negatively related to neighborhood density. All but one of the expected outcomes were observed. PPVT IIIB performance decreased as hearing loss increased and increased with age. Performance for nouns, verbs, and attributes increased with PPVT IIIB performance, whereas neighborhood density decreased. Phonotactic probability was expected to decrease as PPVT IIIB performance increased, but instead it increased, owing to the confounding effects of word length and the order of words on the test. Age and hearing level were rejected by the multiple regression analyses as contributors to PPVT IIIB performance for the children with hearing loss. Overall, the results indicate that there is a 2-year difference in vocabulary age between children with normal hearing and children with hearing loss, and that this may be due to factors external to the child (such as word frequency and phonotactic probability) rather than the child's age and hearing level. This suggests that children with hearing loss need continued clinical services (amplification) as well as additional support services in school throughout childhood.
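For readers unfamiliar with the lexical measures above: neighborhood density is conventionally the number of words that differ from a target by a single phoneme substitution, addition, or deletion. A minimal sketch (my illustration; the phoneme transcriptions and the toy lexicon are hypothetical, not from the study):

```python
def one_edit_apart(a, b):
    """True if phoneme sequences a and b differ by exactly one
    substitution, insertion, or deletion."""
    if a == b:
        return False
    if len(a) > len(b):
        a, b = b, a
    if len(b) - len(a) > 1:
        return False
    i = j = 0
    edited = False
    while i < len(a) and j < len(b):
        if a[i] == b[j]:
            i += 1
            j += 1
        else:
            if edited:
                return False
            edited = True
            if len(a) == len(b):
                i += 1  # substitution consumes a symbol from both
            j += 1      # insertion/deletion consumes only from the longer
    return True

def neighborhood_density(word, lexicon):
    """Count the target's phonological neighbors in the lexicon."""
    return sum(one_edit_apart(word, w) for w in lexicon)
```

For example, "cat" has the neighbors "bat" (substitution), "cats" (addition), and "at" (deletion).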
Contributors: Latto, Allison Renee (Author) / Pittman, Andrea (Thesis director) / Gray, Shelley (Committee member) / Brinkley, Shara (Committee member) / Barrett, The Honors College (Contributor) / Department of Speech and Hearing Science (Contributor)
Created: 2013-05
Description
This creative project is a children's book entitled Sheldon the Shy Tortoise. Accompanying the story is a literature review of the research on childhood shyness. The purpose of the project is to gain a better understanding of shyness in childhood. Topics covered in the literature review include risk factors and causes, negative social and behavioral effects, impact on academics, and treatment options. Using this information, the children's book was written. It aims to be fun for children to read while also providing insight and encouragement regarding some of the problems related to being shy. The story features animal characters and a relatively simple plot so that it is easily understandable by the target audience of late-preschool and early-elementary children. The main character, Sheldon the tortoise, is often physically and metaphorically "stuck in his shell". He wants to participate in social activities but is afraid to do so. Through a series of events and interactions, Sheldon starts to come out of his shell in every sense of the phrase. The book is illustrated using photographs of hand-crocheted stuffed animals representing each of the characters. By incorporating scholarly research into the writing process, the book aims to help children gain an understanding of their shyness and ways to help decrease it, and to help teachers better understand their shy students and some of the unique challenges of working with shy children. This creative project helps convey necessary information to children and families during a critical period of development.
Contributors: Ryan, Amanda (Author) / Hansen, Cory (Thesis director) / Bernstein, Katie (Committee member) / Department of Speech and Hearing Science (Contributor) / Sanford School of Social and Family Dynamics (Contributor) / Department of Psychology (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description

Vocal emotion production is important for social interactions in daily life. Previous studies found that pre-lingually deafened cochlear implant (CI) children without residual acoustic hearing had significant deficits in producing pitch cues for vocal emotions, as compared to post-lingually deafened CI adults, normal-hearing (NH) children, and NH adults. In light of the importance of residual acoustic hearing for the development of vocal emotion production, this study tested whether pre-lingually deafened CI children with residual acoustic hearing produce pitch cues for vocal emotions similar to those of the other participant groups. Sixteen pre-lingually deafened CI children with residual acoustic hearing, nine post-lingually deafened CI adults with residual acoustic hearing, twelve NH children, and eleven NH adults were asked to produce ten semantically neutral sentences in a happy or a sad emotion. The results showed no significant group effect for the ratio of mean fundamental frequency (F0) or the ratio of F0 standard deviation between emotions. Instead, CI children showed a significantly greater intensity difference between emotions than CI adults, NH children, and NH adults. In CI children, the aided pure-tone average hearing threshold of the acoustic ear was correlated with the ratio of mean F0 and the ratio of duration between emotions. These results suggest that residual acoustic hearing with low-frequency pitch cues may facilitate the development of vocal emotion production in pre-lingually deafened CI children.
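The between-emotion ratio measures named above can be sketched as follows (a minimal illustration with made-up F0 tracks, not the study's analysis code; unvoiced frames are assumed to be marked 0):

```python
import math

def voiced(f0_track):
    """Keep only voiced frames of an F0 track (Hz)."""
    return [f for f in f0_track if f > 0]

def mean_f0(f0_track):
    v = voiced(f0_track)
    return sum(v) / len(v)

def f0_sd(f0_track):
    """Population standard deviation of the voiced F0 samples."""
    v = voiced(f0_track)
    m = sum(v) / len(v)
    return math.sqrt(sum((f - m) ** 2 for f in v) / len(v))

def emotion_ratio(happy_track, sad_track, measure):
    """Ratio of an F0 measure between happy and sad productions."""
    return measure(happy_track) / measure(sad_track)
```

A ratio near 1 for `mean_f0` would indicate that a talker does not raise their pitch for the happy production relative to the sad one.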

Contributors: Macdonald, Andrina Elizabeth (Author) / Luo, Xin (Thesis director) / Pittman, Andrea (Committee member) / College of Health Solutions (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description
A cochlear implant (CI) successfully restores hearing sensation to profoundly deaf patients, but its performance is limited by poor spectral resolution. Acoustic CI simulation has been widely used in normal-hearing (NH) listeners to study the effect of spectral resolution on speech perception while avoiding patient-related confounds. It is unclear how speech production may change with the degree of spectral degradation of auditory feedback as experienced by CI users. In this study, a real-time sinewave CI simulation was developed to provide NH subjects with auditory feedback of different spectral resolutions (1, 2, 4, and 8 channels). NH subjects were asked to produce and identify vowels, as well as recognize sentences, while listening to the real-time CI simulation. The results showed that sentence recognition scores with the real-time CI simulation improved with more channels, similar to those with the traditional off-line CI simulation. Perception of a vowel continuum from "HEAD" to "HAD" was near chance with 1, 2, and 4 channels, and greatly improved with 8 channels and full spectrum. The spectral resolution of auditory feedback did not significantly affect any acoustic feature of vowel production (e.g., vowel space area, mean amplitude, or the mean and variability of fundamental and formant frequencies). There was no correlation between vowel production and perception. The lack of effect of auditory feedback spectral resolution on vowel production was likely due to the limited exposure of NH subjects to the CI simulation and the limited frequency ranges covered by its sinewave carriers. Future studies should investigate the effects of various CI processing parameters on speech production using a noise-band CI simulation.
Contributors: Perez Lustre, Sarahi (Author) / Luo, Xin (Thesis director) / Daliri, Ayoub (Committee member) / Division of Teacher Preparation (Contributor) / College of Health Solutions (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05