Description
This study evaluated whether the Story Champs intervention is effective for bilingual kindergarten children who speak Spanish as their native language. Previous research by Spencer and Slocum (2010) found that monolingual, English-speaking participants made significant gains in narrative retelling after intervention. This study implemented the intervention in two languages and examined its effects after ten sessions. Results indicate that some children benefited from the intervention and that outcomes varied across languages.
Contributors: Fernandez, Olga E (Author) / Restrepo, Laida (Thesis director) / Mesa, Carol (Committee member) / Barrett, The Honors College (Contributor) / Department of Speech and Hearing Science (Contributor) / School of International Letters and Cultures (Contributor)
Created: 2014-05
Description
In this study, the Bark transform and the Lobanov method were used to normalize vowel formants in speech produced by persons with dysarthria. The computer classification accuracy of these normalized data was then compared to the accuracy of human perceptual classification of the actual vowels, to determine whether the two normalization techniques produced results that correlated with the human data.
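Both normalization methods named above have standard closed forms. The sketch below shows the Traunmüller (1990) frequency-to-Bark conversion and Lobanov's per-speaker z-scoring; the formant values are illustrative, and this is a plausible reading of the procedures rather than the study's exact pipeline.

```python
import numpy as np

def bark_transform(f_hz):
    """Traunmuller (1990) frequency-to-Bark conversion."""
    return 26.81 * f_hz / (1960.0 + f_hz) - 0.53

def lobanov_normalize(values):
    """Z-score one formant's values within a single speaker
    (Lobanov, 1971), removing speaker-specific vocal-tract scale."""
    values = np.asarray(values, dtype=float)
    return (values - values.mean()) / values.std()

# Illustrative F1 measurements (Hz) from one speaker's vowel tokens.
f1 = np.array([480.0, 620.0, 710.0, 530.0])
print(bark_transform(f1))     # perceptual Bark scale
print(lobanov_normalize(f1))  # speaker-relative z-scores
```

The Bark transform maps absolute frequency onto a perceptual scale, while the Lobanov method standardizes within speaker, which is why both are candidates for reducing talker variability before classification.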
Contributors: Jones, Hanna Vanessa (Author) / Liss, Julie (Thesis director) / Dorman, Michael (Committee member) / Borrie, Stephanie (Committee member) / Barrett, The Honors College (Contributor) / Department of Speech and Hearing Science (Contributor) / Department of English (Contributor) / Speech and Hearing Science (Contributor)
Created: 2013-05
Description
Research on /r/ production has previously used formant analysis as the primary acoustic analysis, with particular focus on the low third formant in the speech signal. Prior imaging of speech used X-ray, MRI, and electromagnetic midsagittal articulometer systems. More recently, the signal processing technique of Mel-log spectral plots has been used to study /r/ production in children and female adults. Ultrasound also has been used to image the tongue during speech production in both clinical and research settings. The current study describes /r/ production in three allophonic contexts: vocalic, prevocalic, and postvocalic positions. Ultrasound analysis, formant analysis, Mel-log spectral plots, and /r/ duration were measured for /r/ production in 29 adult speakers (10 male, 19 female), and possible relationships between these variables were explored. Results showed that the amount of superior constriction in the postvocalic /r/ allophone was significantly lower than in the other /r/ allophones. Formant two was significantly lower, and the distance between formants two and three significantly higher, for the prevocalic /r/ allophone. Vocalic /r/ had the longest average duration, while prevocalic /r/ had the shortest. Signal processing results revealed candidate Mel-bin values for accurate /r/ production for each allophone of /r/. The results indicate that allophones of /r/ can be distinguished based on the different analyses; however, relationships between these analyses remain unclear. Future research is needed to gather more data on /r/ acoustics and articulation and to identify possible relationships among the analyses of /r/ production.
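The formant measures reported above (a low F3, and the F2-F3 distance) are commonly estimated from the signal via linear predictive coding. A minimal sketch of that general approach follows; the function name, file name, and parameter choices are illustrative assumptions, not the study's pipeline.

```python
import numpy as np
import librosa

def estimate_formants(y, sr, order=12, n_formants=3):
    """Rough formant estimates (Hz) from the roots of an LPC
    polynomial fit to a vowel-like segment y sampled at sr Hz."""
    a = librosa.lpc(y, order=order)           # LPC coefficients, a[0] == 1
    roots = np.roots(a)
    roots = roots[np.imag(roots) > 0]         # one root per conjugate pair
    freqs = np.angle(roots) * sr / (2.0 * np.pi)
    freqs = np.sort(freqs[freqs > 90.0])      # discard near-DC roots
    return freqs[:n_formants]

# Hypothetical usage on a recorded /r/ token (file name is illustrative):
# y, sr = librosa.load("r_token.wav", sr=11025, mono=True)
# f1, f2, f3 = estimate_formants(y, sr)
# print(f"F3 = {f3:.0f} Hz, F2-F3 distance = {f3 - f2:.0f} Hz")
```

A modest sample rate keeps the LPC order small, which makes the root-to-formant mapping more stable for low formants like F3.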
Contributors: Hirsch, Megan Elizabeth (Author) / Weinhold, Juliet (Thesis director) / Gardner, Joshua (Committee member) / Department of Speech and Hearing Science (Contributor) / Department of Psychology (Contributor) / Barrett, The Honors College (Contributor)
Created: 2017-05
Description
Student to Student: A Guide to Anatomy is an anatomy guide written by students, for students. Its focus is on teaching the anatomy of the heart, lungs, nose, ears, and throat in a manner that isn't overpowering or stress-inducing. Daniel and I have taken numerous anatomy courses and fully understand what it takes to succeed in these classes. We found that the anatomy books recommended for these courses are often completely overwhelming, offering far more information than is needed. This renders them nearly useless for a college student who just wants to learn the essentials. Why would a student even pick one up if they can't find what they need to learn? With that in mind, our goal was to create a comprehensive, easy-to-understand, and easy-to-follow guide to the heart, lungs, and ENT (ear, nose, throat). We know what information is vital for test day, and we wanted to highlight these key concepts and ideas in our guide. Spending just 60 to 90 minutes studying our guide should help any student with their studying needs, whether they have medical school aspirations or simply want to pass the class. We aren't experts, but we know what strategies and methods can help even the most confused students learn. Our guide can also be used as an introductory resource to our respective majors (Daniel: Biology; Charles: Speech and Hearing) for students who are undecided on what they want to do. In the future, Daniel and I would like to see more students creating similar guides and adding to the "Student to Student" title with their own works. After all, who better to teach students than the students who know what it takes?
Contributors: Kennedy, Charles (Co-author) / McDermand, Daniel (Co-author) / Kingsbury, Jeffrey (Thesis director) / Washo-Krupps, Delon (Committee member) / Department of Speech and Hearing Science (Contributor) / School of Life Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2017-05
Description
The objective of this study was to analyze the auditory feedback system and the pitch-shift reflex in relation to vibrato. Eleven subjects (8 female, 3 male) without speech, hearing, or neurological disorders participated. Compensation magnitude, adaptation magnitude, relative response phase, and passive and active perception were recorded while the subjects received auditory feedback perturbed by phasic amplitude and F0 modulation ("vibrato"). "Tremolo," or phasic amplitude modulation alone, was used as a control. Significant correlation was found between the ability to perceive vibrato and tremolo in active trials and the ability to perceive them in passive trials (p = 0.01). Passive perception thresholds were lower (more sensitive) than active thresholds (p < 0.01). Adaptation trials showed significant modulation magnitude for vibrato (p = 0.031) but not for tremolo, and the two conditions differed significantly (p < 0.01). There was significant phase change for both tremolo and vibrato, but the vibrato phase change was greater, nearly 180° (p < 0.01). In the compensation trials, the modulation change from control to vibrato trials was significantly greater than the change from control to tremolo (p = 0.01), and vibrato and tremolo had significantly different average phase change (p < 0.01). It can be concluded that the auditory feedback system attempts to cancel dynamic pitch perturbations by responding out of phase with them, and that similar systems are used to adapt and to compensate to vibrato. Despite the auditory feedback system's online monitoring, passive perception was still better than active perception, possibly because it required only one task (perceiving) rather than two (perceiving and producing). The pitch-shift reflex tracks the sensitivity of the auditory feedback system, as shown by the greater perception of vibrato relative to tremolo.
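The two stimulus types contrasted above can be sketched in a few lines: vibrato modulates F0 sinusoidally (with some amplitude modulation), while the tremolo control modulates amplitude only. The rates and depths below are illustrative assumptions, not the study's parameters.

```python
import numpy as np

SR = 44100                          # sample rate (Hz)
t = np.arange(int(SR * 2.0)) / SR   # 2-second stimulus

def vibrato(f0=220.0, rate=5.0, depth_cents=50.0):
    """Tone with phasic F0 modulation: the phase is the running
    integral of the modulated instantaneous frequency."""
    semitones = (depth_cents / 100.0) * np.sin(2 * np.pi * rate * t)
    inst_f = f0 * 2.0 ** (semitones / 12.0)
    phase = 2 * np.pi * np.cumsum(inst_f) / SR
    return np.sin(phase)

def tremolo(f0=220.0, rate=5.0, depth=0.3):
    """Control tone: amplitude modulation only, F0 fixed."""
    envelope = 1.0 + depth * np.sin(2 * np.pi * rate * t)
    return envelope * np.sin(2 * np.pi * f0 * t)

stimuli = {"vibrato": vibrato(), "tremolo": tremolo()}
```

Integrating the instantaneous frequency (rather than modulating the phase argument directly) keeps the F0 trajectory exactly sinusoidal, which matters when the perturbation depth must be controlled.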
Contributors: Higgins, Alexis Brittany (Author) / Daliri, Ayoub (Thesis director) / Liss, Julie (Committee member) / Luo, Xin (Committee member) / School of Life Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
During speech, the brain constantly monitors speech output through the auditory feedback loop to ensure correct and accurate speech. If the speech signal is experimentally perturbed during speaking, the brain compensates by changing speech output in the direction opposite the perturbation. In this study, we designed an experiment that examined compensatory responses to unexpected vowel perturbations during speech. We applied two types of perturbations. In one condition, the vowel /ɛ/ was perturbed toward the vowel /æ/ by simultaneously shifting both the first formant (F1) and the second formant (F2) at 3 different levels (0.5 = small, 1 = medium, and 1.5 = large shifts). In the other condition, the vowel /ɛ/ was perturbed by shifting F1 alone at the same 3 levels. Our results showed a significant perturbation-type effect, with participants compensating more in response to perturbations that shifted /ɛ/ toward /æ/. In addition, we found a significant level effect, with compensatory responses to level 0.5 being significantly smaller than those to levels 1 and 1.5, regardless of the perturbation pathway; responses to levels 1 and 1.5 did not differ. Overall, our results highlight the importance of the auditory feedback loop during speech production and show that the brain is more sensitive to auditory errors that change a vowel category (e.g., /ɛ/ to /æ/).
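One way to read the graded shifts is as scaled steps along the vector between the two vowels in (F1, F2) space. A hedged sketch of that arithmetic follows; the /ɛ/ and /æ/ formant values are textbook-style approximations, not the study's measurements.

```python
import numpy as np

# Approximate adult-male vowel targets (Hz); illustrative only.
EH = np.array([580.0, 1800.0])   # /E/ as (F1, F2)
AE = np.array([690.0, 1660.0])   # /ae/ as (F1, F2)

def perturbed_target(level, f1_only=False):
    """Shift /E/ toward /ae/ by `level` times the between-vowel
    difference (0.5 = small, 1.0 = medium, 1.5 = large)."""
    step = AE - EH
    if f1_only:
        step = step * np.array([1.0, 0.0])  # second condition: F1 alone
    return EH + level * step

for level in (0.5, 1.0, 1.5):
    f1, f2 = perturbed_target(level)
    print(f"level {level}: F1 = {f1:.0f} Hz, F2 = {f2:.0f} Hz")
```

Note that level 1.5 overshoots the /æ/ target, which is what makes the largest perturbation a clear category change rather than an ambiguous intermediate vowel.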
Contributors: Fitzgerald, Lacee (Author) / Daliri, Ayoub (Thesis director) / Rogalsky, Corianne (Committee member) / College of Health Solutions (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
Hearing and vision are two senses that most individuals use on a daily basis. The simultaneous presentation of competing visual and auditory stimuli often affects our sensory perception. Vision is commonly held to dominate audition in spatial localization tasks. Recent work suggests that visual information can influence auditory localization when the sound emanates from a physical location or from a phantom location generated through stereophony (so-called "summing localization"). The present study investigates the role of cross-modal fusion in an auditory localization task. The experiments had two aims: (1) to reveal the extent of fusion between auditory and visual stimuli and (2) to investigate how fusion correlates with the amount of visual bias a subject experiences. We found that fusion often occurred when the light flash and the "summing localization" stimuli were presented from the same hemifield. However, little correlation was observed between the magnitude of visual bias and the extent of perceived fusion between light and sound stimuli. In some cases, subjects reported distinct locations for light and sound and still experienced visual capture.
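A "summing localization" stimulus amounts to playing the same signal from two loudspeakers with an interchannel level difference, so the listener hears a phantom source between them. A minimal constant-power panning sketch is below; the function and parameter names are illustrative, not the study's apparatus.

```python
import numpy as np

def summing_pan(mono, pan):
    """Two-loudspeaker rendering of a phantom ("summing
    localization") source. pan: -1.0 = fully left, 0.0 = midline,
    +1.0 = fully right; constant-power gains keep loudness steady."""
    theta = (pan + 1.0) * np.pi / 4.0        # map pan to 0..pi/2
    left = np.cos(theta) * mono
    right = np.sin(theta) * mono
    return np.stack([left, right], axis=-1)  # shape: (n_samples, 2)

# Illustrative: a 500 Hz tone whose phantom image sits halfway right.
sr = 44100
t = np.arange(sr) / sr
stereo = summing_pan(np.sin(2 * np.pi * 500.0 * t), pan=0.5)
```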
Contributors: Balderas, Leslie Ann (Author) / Zhou, Yi (Thesis director) / Yost, William (Committee member) / Department of Speech and Hearing Science (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description

The distinctions between the neural resources supporting speech and music comprehension have long been studied using contexts like aphasia and amusia, and neuroimaging in control subjects. While many models have emerged to describe the different networks recruited in response to speech and music stimuli, many questions remain, especially regarding left-hemispheric strokes that disrupt typical speech-processing networks and how musical training might affect the networks recruited for speech after a stroke. Our study explores these questions. We collected task-based functional MRI data from 12 subjects who had previously experienced a left-hemispheric stroke. Subjects listened to blocks of spoken sentences and novel piano melodies during scanning so we could examine differences in brain activation in response to speech and music. We hypothesized that speech stimuli would activate right frontal regions, and that music stimuli would activate right superior temporal regions more than speech (neither finding seen in previous studies of control subjects), as a result of functional changes following the left-hemispheric stroke, particularly the loss of function in the left temporal lobe. We also hypothesized that music stimuli would produce stronger activation in right temporal cortex for participants with musical training than for those without. Our results indicate that speech stimuli, compared to rest, activated the anterior superior temporal gyrus bilaterally and the right inferior frontal lobe. Music stimuli, compared to rest, did not produce bilateral activation, instead activating only the right middle temporal gyrus. When the group analysis was performed with music experience as a covariate, we found that musical training did not affect activations to music stimuli specifically, but there was greater right-hemisphere activation in several regions in response to speech stimuli as a function of more years of musical training. These results agree with our hypotheses regarding functional changes in the brain but conflict with our hypothesis about musical expertise. Overall, the study has generated interesting starting points for further exploration of how musical neural resources may be recruited for speech processing after damage to typical language networks.

Contributors: Karthigeyan, Vishnu R (Author) / Rogalsky, Corianne (Thesis director) / Daliri, Ayoub (Committee member) / Harrington Bioengineering Program (Contributor) / School of Life Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

The purpose of this longitudinal study was to predict /r/ acquisition using acoustic signal processing. Nineteen children aged 5-7 with inaccurate /r/ were followed until they turned 8 or acquired /r/, whichever came first. Acoustic and descriptive data from 14 participants were analyzed; the remaining 5 children continued to be followed. The study analyzed differences in spectral energy in the baseline acoustic signals of participants who eventually acquired /r/ compared to those who did not. Results indicated significant between-group differences in the baseline signals for vocalic and postvocalic /r/, suggesting that the acquisition of certain allophones may be predictable. Participants' articulatory changes during the progression of acquisition were also analyzed spectrally. A retrospective analysis described the pattern in which /r/ allophones were acquired, proposing that vocalic /r/ and the postvocalic variant of consonantal /r/ may be acquired prior to prevocalic /r/, and that /r/ followed by low vowels may be acquired before /r/ followed by high vowels, although individual variation exists.
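The between-group spectral-energy comparison described above could, under one plausible reading, be computed as per-band log-Mel energies averaged over each baseline token and then contrasted across groups. The sketch below assumes that reading; the function, paths, and band count are illustrative, not the study's method.

```python
import numpy as np
import librosa

def mel_band_energies(y, sr, n_mels=20):
    """Mean log-Mel energy per band for one /r/ token; per-band
    values can then be compared between acquirers and non-acquirers."""
    S = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    return librosa.power_to_db(S).mean(axis=1)   # shape: (n_mels,)

# Hypothetical usage (paths are illustrative):
# grp_a = [mel_band_energies(*librosa.load(p)) for p in acquired_paths]
# grp_b = [mel_band_energies(*librosa.load(p)) for p in not_acquired_paths]
# group_diff = np.mean(grp_a, axis=0) - np.mean(grp_b, axis=0)
```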

Contributors: Conger, Sarah Grace (Author) / Weinhold, Juliet (Thesis director) / Daliri, Ayoub (Committee member) / Bruce, Laurel (Committee member) / College of Health Solutions (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

Over the past couple of years, attention to the prevalence of hate speech and misinformation on the internet has increased. Lawmakers feel that repealing or reforming Section 230 of the Communications Decency Act is the way to go, given that the law has historically shielded companies from liability. In this podcast series, I explain what Section 230 is, how it affects us, and what changes are being proposed. In doing so, I wish to shed light on how the problems of the internet are not solely in the hands of social media giants and a 26-word law, but of all the users who make up our global community.

Contributors: Avi, Pratyush (Author) / Schmidt, Peter (Thesis director) / Voorhees, Matthew (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05