Matching Items (7)

Description
Language acquisition is a phenomenon we all experience, and though it is well studied, many questions remain regarding the neural bases of language. Whether in a hearing speaker or a Deaf signer, spoken and signed language acquisition (with eventual proficiency) develop similarly and share common neural networks. While signed language and spoken language engage completely different sensory modalities (visual-manual versus the more common auditory-oromotor), both languages share grammatical structures and contain syntactic intricacies innate to all languages. Thus, studies of multi-modal bilingualism (e.g., a native English speaker learning American Sign Language) can lead to a better understanding of the neurobiology of second language acquisition, and of language more broadly. For example, can the well-developed visual-spatial processing networks in English speakers support grammatical processing in sign language, which relies heavily on location and movement? The present study furthers the understanding of the neural correlates of second language acquisition by studying late L2 normal-hearing learners of American Sign Language (ASL). Twenty English-speaking ASU students enrolled in advanced American Sign Language coursework participated in our functional Magnetic Resonance Imaging (fMRI) study. The aim was to identify the brain networks engaged in syntactic processing of ASL sentences in late L2 ASL learners. While many studies have addressed the neurobiology of acquiring a second spoken language, no previous study to our knowledge has examined the brain networks supporting syntactic processing in bimodal bilinguals. We examined the brain networks engaged while perceiving ASL sentences compared to ASL word lists, as well as written English sentences and word lists. We hypothesized that our findings in late bimodal bilinguals would largely coincide with the unimodal bilingual literature, but with a few notable differences, including additional attention networks being engaged by ASL processing. Our results suggest that there is a high degree of overlap in sentence processing networks for ASL and English. There are also important differences with regard to the recruitment of speech comprehension, visual-spatial, and domain-general brain networks. Our findings suggest that well-known sentence comprehension and syntactic processing regions for spoken languages are flexible and modality-independent.
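To make the sentence-versus-word-list comparison the abstract describes concrete, here is a minimal Python sketch of a voxelwise GLM contrast. The design matrix, data, and condition names are placeholders for illustration only; the study's actual fMRI pipeline is not specified in the abstract.

```python
import numpy as np

# Hypothetical 2x2 design: language (ASL, English) x stimulus (sentences, word lists).
conditions = ["asl_sentences", "asl_words", "eng_sentences", "eng_words"]

rng = np.random.default_rng(0)
n_scans, n_voxels = 200, 5000
X = rng.random((n_scans, len(conditions)))    # placeholder design matrix (condition regressors)
Y = rng.standard_normal((n_scans, n_voxels))  # placeholder BOLD time series, one column per voxel

# Fit the GLM Y = X @ B + E by ordinary least squares.
B, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Sentence-structure contrast within ASL: +1 for ASL sentences, -1 for ASL word lists.
c = np.array([1.0, -1.0, 0.0, 0.0])
contrast_map = c @ B  # one effect estimate per voxel
```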
Contributors: Mickelsen, Soren Brooks (Co-author) / Johnson, Lisa (Co-author) / Rogalsky, Corianne (Thesis director) / Azuma, Tamiko (Committee member) / Howard, Pamela (Committee member) / Department of Speech and Hearing Science (Contributor) / School of Human Evolution and Social Change (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
The rise in Traumatic Brain Injury (TBI) cases in recent war history has increased the urgency of research on how veterans are affected by TBIs. The purpose of this study was to evaluate the effects of TBI on speech recognition in noise. The AzBio Sentence Test was completed at signal-to-noise ratios (S/N) from -10 dB to +15 dB by a control group of ten participants and by one US military veteran with a history of service-connected TBI. All participants had normal hearing sensitivity, defined as thresholds of 20 dB or better at frequencies from 250 to 8000 Hz, in addition to tympanograms within normal limits. Comparison of the data collected from the control group and the veteran suggested that the veteran performed worse than the majority of the control group on the AzBio Sentence Test. Further research with more participants would be beneficial to our understanding of how veterans with TBI perform on speech recognition tests in the presence of background noise.
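The abstract specifies testing at signal-to-noise ratios from -10 to +15 dB. As an illustration of how a target S/N is typically set when mixing speech and noise, here is a short Python sketch; the mixing function and the 5 dB step size are assumptions, not details from the study.

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale the noise so the speech-to-noise power ratio equals snr_db, then mix."""
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    # SNR(dB) = 10 * log10(p_speech / p_noise_scaled); solve for the noise scale factor.
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + scale * noise

# Conditions from -10 to +15 dB S/N; the 5 dB step is assumed for illustration.
snr_conditions = list(range(-10, 16, 5))  # [-10, -5, 0, 5, 10, 15]
```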
Contributors: Corvasce, Erica Marie (Author) / Peterson, Kathleen (Thesis director) / Williams, Erica (Committee member) / Azuma, Tamiko (Committee member) / Barrett, The Honors College (Contributor) / Department of Speech and Hearing Science (Contributor)
Created: 2015-05
Description
Working memory and cognitive functions contribute to speech recognition in normal hearing and hearing impaired listeners. In this study, auditory and cognitive functions are measured in young adult normal hearing, elderly normal hearing, and elderly cochlear implant subjects. The effects of age and hearing on the different measures are investigated. The correlations between auditory/cognitive functions and speech/music recognition are examined. The results may demonstrate which factors can better explain the variable performance across elderly cochlear implant users.
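As a rough sketch of the correlation analysis the abstract mentions (auditory/cognitive measures versus speech/music recognition), the following Python snippet computes a Pearson correlation on placeholder data; the variable names, values, and group size are illustrative only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_subjects = 15                                   # placeholder group size
working_memory = rng.normal(50, 10, n_subjects)   # placeholder cognitive scores
speech_in_noise = rng.normal(70, 15, n_subjects)  # placeholder % correct speech recognition

# Pearson correlation between a cognitive measure and speech recognition performance.
r, p = stats.pearsonr(working_memory, speech_in_noise)
print(f"working memory vs. speech recognition: r = {r:.2f}, p = {p:.3f}")
```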
Contributors: Kolberg, Courtney Elizabeth (Author) / Luo, Xin (Thesis director) / Azuma, Tamiko (Committee member) / Department of Speech and Hearing Science (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
The main goal of this project was to create a curriculum aimed at fourth-grade students. This curriculum was intended to introduce them to different forms of communication and to teach them the skills, attitudes, behaviors, and knowledge that would enable them to communicate and interact better with a wide range of people with different communication styles. American Sign Language was used in this curriculum as an example of an alternative communication method. The project included developing the teaching materials and lessons that made up the curriculum; the curriculum was then implemented with 11 fourth-grade students.
Contributors: Stosz, Julia Taylor (Author) / Jordan, Michelle (Thesis director) / Howard, Pamela (Committee member) / Boxwell, Pamela (Committee member) / Barrett, The Honors College (Contributor) / Department of Speech and Hearing Science (Contributor)
Created: 2014-05
Description
When listeners hear sentences presented simultaneously, they are better able to discriminate between speakers when there is a difference in fundamental frequency (F0). This paper explores the use of a pulse train vocoder to simulate cochlear implant listening. A pulse train vocoder, rather than a noise or tonal vocoder, was used so that the F0 of speech would be well represented. The results of this experiment showed that listeners are able to use F0 information to aid in speaker segregation. As expected, recognition performance was poorest when there was no difference in F0 between speakers, and listeners performed better as the difference in F0 increased. The types of errors that listeners made were also analyzed. The results show that when an error was made in identifying the correct word from the target sentence, the response was usually (~60%) a word uttered in the competing sentence.
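To make the vocoding approach concrete, here is a simplified Python sketch of a pulse-train vocoder: per-band temporal envelopes extracted from the speech modulate a pulse-train carrier at the talker's F0, rather than a noise or tone carrier. The channel count, band edges, fixed F0, and filter choices are assumptions for illustration; a real cochlear implant simulation would track F0 over time and use the study's actual parameters.

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def pulse_train_vocode(speech, fs, f0, band_edges):
    """Toy pulse-train vocoder: band envelopes modulate an F0-rate pulse train.
    A fixed f0 is assumed here for simplicity."""
    # Carrier: a periodic pulse train at the fundamental frequency.
    carrier = np.zeros(len(speech))
    carrier[::int(round(fs / f0))] = 1.0

    vocoded = np.zeros(len(speech))
    for lo, hi in zip(band_edges[:-1], band_edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfilt(sos, speech)
        envelope = np.abs(hilbert(band))             # temporal envelope of this analysis band
        vocoded += sosfilt(sos, carrier * envelope)  # modulated carrier, refiltered into the band
    return vocoded

# Example: 4-channel vocoder at a 16 kHz sampling rate with a 120 Hz pulse rate.
fs = 16000
speech = np.random.randn(fs)  # placeholder one-second signal
out = pulse_train_vocode(speech, fs, f0=120.0, band_edges=[100, 500, 1000, 2000, 4000])
```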
Contributors: Stanley, Nicole Ernestine (Author) / Yost, William (Thesis director) / Dorman, Michael (Committee member) / Liss, Julie (Committee member) / Barrett, The Honors College (Contributor) / Department of Speech and Hearing Science (Contributor) / Hugh Downs School of Human Communication (Contributor)
Created: 2013-05
Description
The purpose of this study was to explore the effects of word type, phonotactic probability, word frequency, and neighborhood density on the vocabularies of children with mild-to-moderate hearing loss compared to children with normal hearing. This was done by assigning values for these parameters to each test item on the Peabody Picture Vocabulary Test (Version III, Form B) to quantify and characterize the performance of children with hearing loss relative to that of children with normal hearing. It was expected that PPVT IIIB scores would: (1) decrease as the degree of hearing loss increased; (2) increase as a function of age; (3) be more positively related to nouns than to verbs or attributes; (4) be negatively related to phonotactic probability; (5) be negatively related to word frequency; and (6) be negatively related to neighborhood density. All but one of the expected outcomes was observed. PPVT IIIB performance decreased as hearing loss increased and increased with age. Performance for nouns, verbs, and attributes increased with PPVT IIIB performance, whereas neighborhood density decreased. Phonotactic probability was expected to decrease as PPVT IIIB performance increased, but instead it increased, due to the confounding effects of word length and the order of words on the test. Age and hearing level were rejected by the multiple regression analyses as contributors to PPVT IIIB performance for the children with hearing loss. Overall, the results indicate that there is a 2-year difference in vocabulary age between children with normal hearing and children with hearing loss, and that this difference may be due to factors external to the child (such as word frequency and phonotactic probability) rather than to the child's age and hearing level. This suggests that children with hearing loss need continued clinical services (amplification) as well as additional support services in school throughout childhood.
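As an illustration of the multiple regression analysis the abstract mentions, the following Python sketch regresses per-item accuracy on the three lexical predictors; all data, ranges, and variable names are placeholders, not values from the study.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n_items = 100  # placeholder: one row per PPVT test item

# Hypothetical item-level lexical predictors, analogous to those in the study.
phonotactic_prob = rng.uniform(0.0, 1.0, n_items)
log_word_frequency = rng.uniform(0.0, 6.0, n_items)
neighborhood_density = rng.integers(0, 25, n_items).astype(float)
prop_correct = rng.uniform(0.0, 1.0, n_items)  # placeholder per-item accuracy

# Multiple regression of item accuracy on the lexical predictors.
X = sm.add_constant(np.column_stack(
    [phonotactic_prob, log_word_frequency, neighborhood_density]))
fit = sm.OLS(prop_correct, X).fit()
print(fit.summary())
```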
Contributors: Latto, Allison Renee (Author) / Pittman, Andrea (Thesis director) / Gray, Shelley (Committee member) / Brinkley, Shara (Committee member) / Barrett, The Honors College (Contributor) / Department of Speech and Hearing Science (Contributor)
Created: 2013-05
Description
This creative project is a children's book entitled Sheldon the Shy Tortoise. Accompanying the story is a literature review of the research on childhood shyness. The purpose of the project is to gain a better understanding of shyness in childhood. Topics covered in the literature review include risk factors and causes, negative social and behavioral effects, impact on academics, and treatment options. Using this information, the children's book was written. It aims to be fun for children to read while also providing insight and encouragement regarding some of the problems related to being shy. The story features animal characters and a relatively simple plot so that it is easily understood by the target audience of late-preschool and early-elementary children. The main character, Sheldon the tortoise, is often physically and metaphorically "stuck in his shell". He wants to participate in social activities but is afraid to do so. Through a series of events and interactions, Sheldon starts to come out of his shell in every sense of the phrase. The book is illustrated using photographs of hand-crocheted stuffed animals representing each of the characters. Because scholarly research was incorporated into the writing process, children will hopefully be able to gain an understanding of their shyness and ways to help decrease it, and teachers should be better able to understand their shy students and the unique challenges of working with shy children. This creative project helps convey necessary information to children and families during a critical period of development.
Contributors: Ryan, Amanda (Author) / Hansen, Cory (Thesis director) / Bernstein, Katie (Committee member) / Department of Speech and Hearing Science (Contributor) / Sanford School of Social and Family Dynamics (Contributor) / Department of Psychology (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05