Matching Items (5)

Description
The purpose of this study was to examine swallowing patterns using ultrasound technology subsequent to the implementation of two therapeutic interventions. Baseline swallow patterns were compared to swallows after implementation of therapeutic interventions common to both feeding therapy (FT) and orofacial myofunctional therapy (OMT). The interventions consisted of stimulation of the tongue with a Z-Vibe and tongue pops. Changes in swallowing patterns are described, and similarities of interventions across the two professions are discussed. Ultrasound research on swallowing is sparse despite its potential clinical application in both professions. This study outlines a protocol for using a hand-held ultrasound probe and reinforces a particular protocol described in the literature. Real-time ultrasound recordings of swallows were made for 19 adult female subjects. Participants with orofacial myofunctional disorder were compared to a group with typical swallowing, and differences in swallowing patterns are described. Three stages of the oral phase of the swallow were assigned based on ultrasonic observation of tongue shape. Analysis involved the total duration of the swallow, the duration of the three stages relative to the total duration, and the number of swallows required to clear the bolus from the oral cavity. No significant effects of either intervention were found. Swallowing patterns showed a general trend toward faster total duration subsequent to each intervention. An unexpected finding showed significant changes in the relationship between the bolus preparation stage and the bolus transportation stage when comparing the group classified as having a single swallow and the group classified as having multiple swallows.
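As a rough illustration of the duration measures named in the abstract (total swallow duration, each oral-phase stage as a proportion of the total, and single versus multiple swallows per bolus), a minimal sketch follows; the data structure, field names, and example values are hypothetical and are not the study's analysis pipeline.

```python
# Minimal sketch of the duration measures described above. All names and
# example values are hypothetical, not the study's data.
from dataclasses import dataclass

@dataclass
class SwallowTrial:
    stage_durations_s: list   # durations of the three oral-phase stages, in seconds
    n_swallows: int           # swallows needed to clear the bolus

def summarize(trial: SwallowTrial) -> dict:
    total = sum(trial.stage_durations_s)
    return {
        "total_duration_s": total,
        "stage_proportions": [d / total for d in trial.stage_durations_s],
        "multiple_swallows": trial.n_swallows > 1,
    }

# Example: a single-swallow trial with preparation, transportation, and
# clearance stages of 0.8 s, 0.4 s, and 0.3 s (illustrative values).
print(summarize(SwallowTrial([0.8, 0.4, 0.3], n_swallows=1)))
```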
Contributors: Mckay, Michelle Diane (Author) / Weinhold, Juliet (Thesis director) / Scherer, Nancy (Committee member) / Department of Speech and Hearing Science (Contributor) / Sanford School of Social and Family Dynamics (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
The increase in Traumatic Brain Injury (TBI) cases in recent war history has heightened the urgency of research on how veterans are affected by TBIs. The purpose of this study was to evaluate the effects of TBI on speech recognition in noise. The AzBio Sentence Test was completed at signal-to-noise ratios (S/N) from -10 dB to +15 dB by a control group of ten participants and one US military veteran with a history of service-connected TBI. All participants had normal hearing sensitivity, defined as thresholds of 20 dB or better at frequencies from 250-8000 Hz, in addition to tympanograms within normal limits. Comparison of the data collected from the control group and the veteran suggested that the veteran performed worse than the majority of the control group on the AzBio Sentence Test. Further research with more participants would be beneficial to our understanding of how veterans with TBI perform on speech recognition tests in the presence of background noise.
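For readers unfamiliar with how conditions such as -10 dB to +15 dB S/N are constructed, the sketch below shows one generic way to scale a masker so that speech is presented at a target signal-to-noise ratio. It is an illustration only, not the AzBio test software, and the function name is an assumption.

```python
# Generic sketch: scale a noise masker so speech is mixed at a target SNR (dB).
# Assumes both signals are 1-D float arrays at the same sample rate and that
# the noise is at least as long as the speech.
import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    noise = noise[: len(speech)]                       # trim masker to the sentence length
    speech_rms = np.sqrt(np.mean(speech ** 2))
    noise_rms = np.sqrt(np.mean(noise ** 2))
    target_noise_rms = speech_rms / (10 ** (snr_db / 20))
    return speech + noise * (target_noise_rms / noise_rms)
```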
Contributors: Corvasce, Erica Marie (Author) / Peterson, Kathleen (Thesis director) / Williams, Erica (Committee member) / Azuma, Tamiko (Committee member) / Barrett, The Honors College (Contributor) / Department of Speech and Hearing Science (Contributor)
Created: 2015-05
Description
Working memory and cognitive functions contribute to speech recognition in normal-hearing and hearing-impaired listeners. In this study, auditory and cognitive functions are measured in young adult normal-hearing, elderly normal-hearing, and elderly cochlear implant subjects. The effects of age and hearing on the different measures are investigated. The correlations between auditory/cognitive functions and speech/music recognition are examined. The results may demonstrate which factors best explain the variable performance across elderly cochlear implant users.
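A minimal sketch of the kind of correlation analysis described above is shown below; the measures and values are made-up placeholders, and the study's actual statistics may differ.

```python
# Illustrative sketch: relate a cognitive measure to a recognition score
# across subjects. The values below are hypothetical, not study data.
from scipy.stats import pearsonr

working_memory_span = [4, 6, 5, 7, 3, 6, 5]             # hypothetical per-subject spans
speech_recognition_pct = [55, 78, 70, 85, 48, 74, 66]   # hypothetical % correct

r, p = pearsonr(working_memory_span, speech_recognition_pct)
print(f"r = {r:.2f}, p = {p:.3f}")
```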
Contributors: Kolberg, Courtney Elizabeth (Author) / Luo, Xin (Thesis director) / Azuma, Tamiko (Committee member) / Department of Speech and Hearing Science (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
Research on /r/ production has previously used formant analysis as the primary acoustic analysis, with particular focus on the low third formant in the speech signal. Prior imaging of speech used X-ray, MRI, and electromagnetic midsagittal articulometer systems. More recently, the signal processing technique of Mel-log spectral plots has been used to study /r/ production in children and female adults. Ultrasound imaging of the tongue has also been used to image the tongue during speech production in both clinical and research settings. The current study attempts to describe /r/ production in three different allophonic contexts: vocalic, prevocalic, and postvocalic positions. Ultrasound analysis, formant analysis, Mel-log spectral plots, and /r/ duration were measured for /r/ productions of 29 adult speakers (10 male, 19 female). Possible relationships among these variables were also explored. Results showed that the amount of superior constriction in the postvocalic /r/ allophone was significantly lower than in the other /r/ allophones. Formant two was significantly lower, and the distance between formants two and three was significantly higher, for the prevocalic /r/ allophone. Vocalic /r/ had the longest average duration, while prevocalic /r/ had the shortest. Signal processing results revealed candidate Mel-bin values for accurate /r/ production for each allophone of /r/. The results indicate that allophones of /r/ can be distinguished based on the different analyses. However, relationships between these analyses remain unclear. Future research is needed to gather more data on /r/ acoustics and articulation and to identify possible relationships among the analyses for /r/ production.
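The formant measures mentioned above (F2, F3, and their distance) are commonly estimated from LPC analysis; the sketch below illustrates that general approach and is not the study's measurement pipeline. The file name, sampling rate, and LPC order are assumptions.

```python
# Sketch of LPC-based formant estimation for a pre-cut /r/ segment.
# A real pipeline would frame the signal, pre-emphasize, and discard
# spurious low-frequency or wide-bandwidth roots; this is simplified.
import numpy as np
import librosa

y, sr = librosa.load("r_segment.wav", sr=10000)    # hypothetical /r/ token, resampled
a = librosa.lpc(y, order=10)                        # LPC polynomial coefficients
roots = [r for r in np.roots(a) if np.imag(r) > 0]  # keep one root per conjugate pair
formants = sorted(np.angle(roots) * sr / (2 * np.pi))
f1, f2, f3 = formants[:3]
print(f"F2 = {f2:.0f} Hz, F3 = {f3:.0f} Hz, F3-F2 = {f3 - f2:.0f} Hz")
```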
Contributors: Hirsch, Megan Elizabeth (Author) / Weinhold, Juliet (Thesis director) / Gardner, Joshua (Committee member) / Department of Speech and Hearing Science (Contributor) / Department of Psychology (Contributor) / Barrett, The Honors College (Contributor)
Created: 2017-05
Description
When listeners hear sentences presented simultaneously, they are better able to discriminate between speakers when there is a difference in fundamental frequency (F0). This paper explores the use of a pulse train vocoder to simulate cochlear implant listening. A pulse train vocoder, rather than a noise or tonal vocoder, was used so that the fundamental frequency (F0) of speech would be well represented. The results of this experiment showed that listeners are able to use the F0 information to aid in speaker segregation. As expected, recognition performance was poorest when there was no difference in F0 between speakers, and listeners performed better as the difference in F0 increased. The types of errors that listeners made were also analyzed. The results show that when an error was made in identifying the correct word from the target sentence, the response was usually (~60% of the time) a word that had been uttered in the competing sentence.
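The sketch below illustrates the general idea of a pulse-train channel vocoder of the kind described above: the envelope of each analysis band modulates a pulse-train carrier at the talker's F0, so voice pitch remains represented in the output. Channel edges, filter design, and the fixed F0 are illustrative assumptions, not the study's parameters.

```python
# Sketch of a pulse-train channel vocoder. Assumes `speech` is a 1-D float
# array sampled at `sr` (e.g., 16 kHz) so that the 6 kHz band edge is valid.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def pulse_train_vocoder(speech, sr, f0_hz=120.0, edges_hz=(100, 400, 1000, 2500, 6000)):
    # Pulse-train carrier: a unit impulse every 1/f0 seconds.
    carrier = np.zeros_like(speech)
    carrier[::max(1, int(round(sr / f0_hz)))] = 1.0

    out = np.zeros_like(speech)
    for lo, hi in zip(edges_hz[:-1], edges_hz[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=sr, output="sos")
        band = sosfiltfilt(sos, speech)
        envelope = np.abs(hilbert(band))                   # band envelope
        out += sosfiltfilt(sos, carrier * envelope)        # envelope-modulated carrier, re-filtered
    return out / (np.max(np.abs(out)) + 1e-12)             # peak-normalize
```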
Contributors: Stanley, Nicole Ernestine (Author) / Yost, William (Thesis director) / Dorman, Michael (Committee member) / Liss, Julie (Committee member) / Barrett, The Honors College (Contributor) / Department of Speech and Hearing Science (Contributor) / Hugh Downs School of Human Communication (Contributor)
Created: 2013-05