Matching Items (4)
Description
The rise in Traumatic Brain Injury (TBI) cases in recent war history has increased the urgency of research into how veterans are affected by TBIs. The purpose of this study was to evaluate the effects of TBI on speech recognition in noise. The AzBio Sentence Test was completed at signal-to-noise ratios (S/N) from -10 dB to +15 dB by a control group of ten participants and one US military veteran with a history of service-connected TBI. All participants had normal hearing sensitivity, defined as thresholds of 20 dB or better at frequencies from 250 to 8000 Hz, as well as tympanograms within normal limits. Comparison of the veteran's data with the control group's suggested that the veteran performed worse than the majority of the control group on the AzBio Sentence Test. Further research with more participants would improve our understanding of how veterans with TBI perform on speech recognition tests in the presence of background noise.
Contributors: Corvasce, Erica Marie (Author) / Peterson, Kathleen (Thesis director) / Williams, Erica (Committee member) / Azuma, Tamiko (Committee member) / Barrett, The Honors College (Contributor) / Department of Speech and Hearing Science (Contributor)
Created: 2015-05
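The abstract above compares one listener's sentence recognition against a small control group across a range of S/N conditions. The sketch below shows one simple way such a comparison could be summarized, by expressing the individual's score as a z-score relative to the control mean at each S/N. All scores, and the 5 dB step between conditions, are invented placeholders for illustration, not data or analysis from the study.

```python
# Illustrative sketch only: compare one listener's AzBio percent-correct scores
# against a small control group at each signal-to-noise ratio (S/N).
# All values below are invented placeholders, not data from the study.
import statistics

snrs = [-10, -5, 0, 5, 10, 15]  # dB S/N; the abstract gives -10 to +15 dB, step size assumed

# Hypothetical percent-correct scores for ten control listeners at each S/N
control_scores = {
    -10: [12, 15, 10, 18, 14, 11, 16, 13, 17, 12],
    -5: [35, 40, 38, 42, 36, 39, 41, 37, 44, 38],
    0: [70, 74, 68, 72, 75, 69, 73, 71, 76, 70],
    5: [88, 90, 86, 91, 89, 87, 92, 88, 90, 89],
    10: [96, 97, 95, 98, 96, 97, 95, 96, 98, 97],
    15: [99, 100, 98, 99, 100, 99, 98, 100, 99, 99],
}
veteran_scores = {-10: 5, -5: 25, 0: 55, 5: 78, 10: 90, 15: 96}  # hypothetical individual

for snr in snrs:
    mean = statistics.mean(control_scores[snr])
    sd = statistics.stdev(control_scores[snr])
    z = (veteran_scores[snr] - mean) / sd
    flag = "below control range" if z < -2 else ""
    print(f"{snr:+3d} dB S/N: control mean {mean:.1f}% (SD {sd:.1f}), "
          f"individual {veteran_scores[snr]}%, z = {z:+.2f} {flag}")
```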
Description
Working memory and cognitive functions contribute to speech recognition in normal-hearing and hearing-impaired listeners. In this study, auditory and cognitive functions are measured in young adult normal-hearing, elderly normal-hearing, and elderly cochlear implant subjects. The effects of age and hearing on the different measures are investigated, and the correlations between auditory/cognitive functions and speech/music recognition are examined. The results may demonstrate which factors best explain the variable performance across elderly cochlear implant users.
Contributors: Kolberg, Courtney Elizabeth (Author) / Luo, Xin (Thesis director) / Azuma, Tamiko (Committee member) / Department of Speech and Hearing Science (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
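The abstract above mentions examining correlations between auditory/cognitive measures and speech/music recognition. As a minimal illustration of that kind of analysis (not the study's actual variables or data), the sketch below computes Pearson correlations with SciPy on invented scores.

```python
# Illustrative sketch only: Pearson correlations between a cognitive measure and
# speech/music recognition scores. All values are invented placeholders.
from scipy.stats import pearsonr

working_memory = [4, 6, 5, 7, 3, 8, 6, 5, 7, 4]                 # hypothetical span scores
speech_recognition = [52, 70, 61, 78, 45, 85, 72, 60, 80, 50]   # hypothetical % correct
music_recognition = [40, 62, 55, 70, 38, 77, 65, 54, 72, 44]    # hypothetical % correct

for name, scores in [("speech", speech_recognition), ("music", music_recognition)]:
    r, p = pearsonr(working_memory, scores)
    print(f"working memory vs {name} recognition: r = {r:.2f}, p = {p:.3f}")
```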
Description
The purpose of the present study was to determine whether an automated speech perception task yields results equivalent to those of a word recognition test used in audiometric evaluations. This was done by testing 51 normal-hearing adults on a traditional word recognition task (NU-6) and an automated Non-Word Detection task. Stimuli for each task were presented in quiet as well as at six signal-to-noise ratios (SNRs) increasing in 3 dB increments (+0 dB, +3 dB, +6 dB, +9 dB, +12 dB, +15 dB). A two one-sided test (TOST) procedure was used to determine the equivalency of the two tests. This approach required the scores for both tasks to be arcsine transformed and converted to z-scores so that the difference in scores could be calculated across listening conditions. These values were then compared to a predetermined criterion to establish whether equivalency existed. It was expected that the TOST procedure would reveal equivalency between the traditional word recognition task and the automated Non-Word Detection task. The results confirmed that the two tasks differed by no more than 2 test items in any of the listening conditions. Overall, the results indicate that the automated Non-Word Detection task could be used in addition to, or in place of, traditional word recognition tests. In addition, an automated test such as the Non-Word Detection task offers further benefits, including rapid administration, accurate scoring, and supplemental performance data (e.g., error analyses) beyond those obtained with traditional speech perception measures.
Contributors: Stahl, Amy Nicole (Author) / Pittman, Andrea (Thesis director) / Boothroyd, Arthur (Committee member) / McBride, Ingrid (Committee member) / School of Human Evolution and Social Change (Contributor) / Department of Speech and Hearing Science (Contributor) / Barrett, The Honors College (Contributor)
Created: 2017-05
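The abstract above describes a two one-sided test (TOST) on arcsine-transformed scores compared against a predetermined criterion. The sketch below shows a generic paired TOST for equivalence on arcsine-transformed proportions; it is not a reconstruction of the study's exact procedure, and the scores and the 0.20-radian equivalence bound are assumptions made only for illustration.

```python
# Illustrative sketch of a paired two one-sided test (TOST) for equivalence on
# arcsine-transformed proportion-correct scores. Generic procedure, not the
# study's exact analysis; data and equivalence bound are invented placeholders.
import numpy as np
from scipy import stats

def arcsine_transform(p):
    """Variance-stabilizing transform for proportions (result in radians)."""
    return 2 * np.arcsin(np.sqrt(np.asarray(p, dtype=float)))

def paired_tost(x, y, bound):
    """Two one-sided paired t-tests against equivalence bounds (-bound, +bound)."""
    diff = arcsine_transform(x) - arcsine_transform(y)
    n = len(diff)
    se = diff.std(ddof=1) / np.sqrt(n)
    t_lower = (diff.mean() + bound) / se           # H0: mean difference <= -bound
    t_upper = (diff.mean() - bound) / se           # H0: mean difference >= +bound
    p_lower = 1 - stats.t.cdf(t_lower, df=n - 1)
    p_upper = stats.t.cdf(t_upper, df=n - 1)
    return max(p_lower, p_upper)                   # equivalence if both tests reject

# Hypothetical proportion-correct scores for the same listeners on two tasks
nu6 = [0.92, 0.88, 0.95, 0.90, 0.85, 0.93, 0.89, 0.91]
nonword = [0.90, 0.89, 0.93, 0.92, 0.84, 0.94, 0.88, 0.90]

p = paired_tost(nu6, nonword, bound=0.20)          # 0.20 rad bound is assumed
print(f"TOST p-value: {p:.3f} -> {'equivalent' if p < 0.05 else 'not shown equivalent'}")
```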
Description
Orofacial Myofunctional Disorder (OMD) is defined by ASHA (2023) as “abnormal movement patterns of the face and mouth.” OMD leads to anterior carriage of the tongue, open-mouth posture, mouth breathing, and a tongue-thrust swallow. Dentalization errors on /s/ and /z/ are also known to be caused by a low and forward position of the tongue (Wadsworth, Maui, & Stevens, 1998). This study used the OMES-E protocol to identify 10 of 40 participants with OMD. A cut-off of below 80% accuracy for the production of /s/ and /z/ classified 6 of the 40 participants as having speech errors. A correlation was then run between speech score and OMD classification; it was not significant. This raises the question of why some people with OMD have moderate to severe speech errors on /s/ and /z/ while others with OMD do not. This study aims to explore this question beyond the motor modality. Using an auditory perception paradigm, the first and second formants of the vowel /ɛ/ were shifted to approximate /æ/, and participants' responses and compensations to these shifts were recorded in real time. Results of this perceptual test may suggest that perceptual/compensatory differences explain why some people in the OMD population have speech errors and some do not.
Contributors: DeOrio, Sophia (Author) / Weinhold, Juliet (Thesis director) / Bruce, Laurel (Committee member) / Barrett, The Honors College (Contributor) / School of Public Affairs (Contributor) / College of Health Solutions (Contributor) / Sanford School of Social and Family Dynamics (Contributor)
Created: 2023-12
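The last abstract describes shifting the first and second formants of /ɛ/ toward /æ/ in an auditory perception paradigm. The sketch below only illustrates how perturbed formant targets might be computed by interpolating between the two vowels; the formant values are rough textbook approximations and the interpolation scheme is an assumption, not the study's real-time perturbation system or compensation analysis.

```python
# Illustrative sketch only: compute perturbed F1/F2 targets that move the vowel
# /ɛ/ a given fraction of the way toward /æ/. Formant values are rough textbook
# approximations, not measurements or parameters from the study.
EPSILON = {"F1": 580.0, "F2": 1800.0}   # approximate /ɛ/ formants (Hz), assumed
ASH = {"F1": 720.0, "F2": 1700.0}       # approximate /æ/ formants (Hz), assumed

def shifted_formants(shift_fraction):
    """Interpolate from /ɛ/ toward /æ/ (0 = no shift, 1 = full /æ/ target)."""
    return {f: EPSILON[f] + shift_fraction * (ASH[f] - EPSILON[f]) for f in EPSILON}

for frac in (0.0, 0.25, 0.5, 1.0):
    f = shifted_formants(frac)
    print(f"shift {frac:.2f}: F1 = {f['F1']:.0f} Hz, F2 = {f['F2']:.0f} Hz")
```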