Description
This study evaluated whether the Story Champs intervention is effective for bilingual kindergarten children whose native language is Spanish. Previous research by Spencer and Slocum (2010) found that monolingual, English-speaking participants made significant gains in narrative retelling after intervention. The present study implemented the intervention in two languages and examined its effects after ten sessions. Results indicate that some children benefited from the intervention and that outcomes varied across languages.
Contributors: Fernandez, Olga E (Author) / Restrepo, Laida (Thesis director) / Mesa, Carol (Committee member) / Barrett, The Honors College (Contributor) / Department of Speech and Hearing Science (Contributor) / School of International Letters and Cultures (Contributor)
Created: 2014-05
Description
In this study, the Bark transform and the Lobanov method were used to normalize vowel formants in speech produced by persons with dysarthria. The computer classification accuracy of these normalized data was then compared with the accuracy of human perceptual classification of the actual vowels, and the results were analyzed to determine whether the two techniques correlated with the human data.
Contributors: Jones, Hanna Vanessa (Author) / Liss, Julie (Thesis director) / Dorman, Michael (Committee member) / Borrie, Stephanie (Committee member) / Barrett, The Honors College (Contributor) / Department of Speech and Hearing Science (Contributor) / Department of English (Contributor) / Speech and Hearing Science (Contributor)
Created: 2013-05
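The two normalizations named in this abstract are standard and can be sketched briefly: the Bark transform warps frequency onto an auditory scale (Traunmüller's 1990 approximation below), and the Lobanov method z-scores each formant within a speaker. A minimal illustration, assuming formants are supplied as a tokens-by-formants array; the sample values are invented, not drawn from the study:

```python
import numpy as np

def bark(f_hz):
    """Traunmüller (1990) approximation of the Bark scale."""
    return 26.81 * f_hz / (1960.0 + f_hz) - 0.53

def lobanov(formants):
    """Lobanov normalization: z-score each formant column
    within a single speaker (rows = vowel tokens, cols = F1..Fn)."""
    f = np.asarray(formants, dtype=float)
    return (f - f.mean(axis=0)) / f.std(axis=0)

# Hypothetical F1/F2 values (Hz) for one speaker's vowel tokens
speaker = np.array([[300.0, 2300.0], [700.0, 1200.0], [500.0, 1700.0]])
z = lobanov(speaker)          # speaker-normalized formants
b = bark(speaker)             # Bark-scaled formants
```

After Lobanov normalization each formant column has zero mean and unit variance for that speaker, which removes vocal-tract-size differences before classification.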
Description
Research on /r/ production has previously used formant analysis as the primary acoustic analysis, with particular focus on the low third formant in the speech signal. Prior imaging of speech used X-ray, MRI, and electromagnetic midsagittal articulometer systems. More recently, the signal processing technique of Mel-log spectral plots has been used to study /r/ production in children and adult females, and ultrasound has been used to image the tongue during speech production in both clinical and research settings. The current study describes /r/ production in three allophonic contexts: vocalic, prevocalic, and postvocalic positions. Ultrasound analysis, formant analysis, Mel-log spectral plots, and /r/ duration were measured for /r/ produced by 29 adult speakers (10 male, 19 female), and possible relationships among these variables were explored. Results showed that the amount of superior constriction in the postvocalic /r/ allophone was significantly lower than in the other /r/ allophones. The second formant was significantly lower, and the distance between the second and third formants significantly higher, for the prevocalic /r/ allophone. Vocalic /r/ had the longest average duration, while prevocalic /r/ had the shortest. Signal processing results revealed candidate Mel-bin values for accurate /r/ production for each allophone. The results indicate that allophones of /r/ can be distinguished based on the different analyses; however, relationships between the analyses remain unclear, and future research is needed to gather more data on /r/ acoustics and articulation.
Contributors: Hirsch, Megan Elizabeth (Author) / Weinhold, Juliet (Thesis director) / Gardner, Joshua (Committee member) / Department of Speech and Hearing Science (Contributor) / Department of Psychology (Contributor) / Barrett, The Honors College (Contributor)
Created: 2017-05
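The abstract does not specify how the Mel-log spectral plots or "Mel-bin" values were computed; one common construction is to average the log-magnitude spectrum of a windowed frame within equal-width bands on the mel scale. A minimal sketch under that assumption (the function name `mel_log_spectrum`, the band count, and all parameter values are illustrative, not taken from the study):

```python
import numpy as np

def hz_to_mel(f_hz):
    """Standard mel-scale mapping (O'Shaughnessy)."""
    return 2595.0 * np.log10(1.0 + f_hz / 700.0)

def mel_log_spectrum(frame, sr, n_bins=20):
    """Average log-magnitude of one Hann-windowed frame
    within equal-width mel bands ("Mel bins")."""
    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    # Band edges equally spaced in mel; assign each FFT bin to a band
    edges = np.linspace(0.0, hz_to_mel(sr / 2.0), n_bins + 1)
    band = np.digitize(hz_to_mel(freqs), edges) - 1
    return np.array([np.log(spec[band == b].mean() + 1e-12)
                     for b in range(n_bins)])

# Illustrative input: a 1 kHz tone sampled at 16 kHz
sr = 16000
t = np.arange(512) / sr
mel_bins = mel_log_spectrum(np.sin(2 * np.pi * 1000 * t), sr)
```

With this construction, a narrow spectral peak (such as a lowered third formant) concentrates energy in a small number of mel bands, which is the kind of pattern a "candidate Mel-bin value" analysis could pick out.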
Description
Student to Student: A Guide to Anatomy is an anatomy guide written by students, for students. Its focus is on teaching the anatomy of the heart, lungs, nose, ears, and throat in a manner that isn't overpowering or stress-inducing. Daniel and I have taken numerous anatomy courses and understand what it takes to succeed in these classes. We found that the anatomy books recommended for these courses are often overwhelming, offering far more information than is needed, which renders them nearly useless for a college student who just wants to learn the essentials. Why would a student even pick one up if they can't find what they need to learn? With that in mind, our goal was to create a comprehensive, easy-to-understand, and easy-to-follow guide to the heart, lungs, and ENT (ear, nose, throat). We know what information is vital for test day, and we wanted to highlight these key concepts and ideas in our guide. Spending just 60 to 90 minutes studying our guide should help any student with their studying needs, whether they have medical school aspirations or simply want to pass the class. We aren't experts, but we know what strategies and methods can help even the most confused students learn. Our guide can also serve as an introductory resource to our respective majors (Daniel: Biology; Charles: Speech and Hearing) for students who are undecided on what they want to do. In the future, Daniel and I would like to see more students creating similar guides and adding to the "Student to Student" title with their own works. After all, who better to teach students than the students who know what it takes?
Contributors: Kennedy, Charles (Co-author) / McDermand, Daniel (Co-author) / Kingsbury, Jeffrey (Thesis director) / Washo-Krupps, Delon (Committee member) / Department of Speech and Hearing Science (Contributor) / School of Life Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2017-05
Description
Hearing and vision are two senses that most individuals use on a daily basis, and the simultaneous presentation of competing visual and auditory stimuli often affects our sensory perception. Vision is commonly believed to dominate audition in spatial localization tasks. Recent work suggests that visual information can influence auditory localization when the sound emanates from a physical location or from a phantom location generated through stereophony (so-called "summing localization"). The present study investigates the role of cross-modal fusion in an auditory localization task. The experiments have two aims: (1) to reveal the extent of fusion between auditory and visual stimuli, and (2) to investigate how fusion correlates with the amount of visual bias a subject experiences. We found that fusion often occurred when the light flash and "summing localization" stimuli were presented from the same hemifield. However, little correlation was observed between the magnitude of visual bias and the extent of perceived fusion between light and sound stimuli; in some cases, subjects reported distinct locations for light and sound and still experienced visual capture.
Contributors: Balderas, Leslie Ann (Author) / Zhou, Yi (Thesis director) / Yost, William (Committee member) / Department of Speech and Hearing Science (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05