Matching Items (5)
Description

The purpose of the present study was to determine if vocabulary knowledge is related to degree of hearing loss. A 50-question multiple-choice vocabulary test composed of old and new words was administered to 43 adults with hearing loss (19 to 92 years old) and 51 adults with normal hearing (20 to 40 years old). Degree of hearing loss ranged from mild to moderately severe as determined by bilateral pure-tone thresholds. Education levels ranged from some high school to graduate degrees. It was predicted that knowledge of new words would decrease with increasing hearing loss, whereas knowledge of old words would be unaffected. The Test of Contemporary Vocabulary (TCV) was developed for this study and contained words with old and new definitions. The vocabulary scores were subjected to repeated-measures ANOVA with definition type (old and new) as the within-subjects factor. Hearing level and education were between-subjects factors, while age was entered as a covariate. The results revealed no main effect of age or education level, while a significant main effect of hearing level was observed. Specifically, performance for new words decreased significantly as degree of hearing loss increased. A similar effect was not observed for old words. These results indicate that knowledge of new definitions is inversely related to degree of hearing loss.
Contributors: Marzan, Nicole Ann (Author) / Pittman, Andrea (Thesis director) / Azuma, Tamiko (Committee member) / Wexler, Kathryn (Committee member) / Department of Speech and Hearing Science (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description

Language acquisition is a phenomenon we all experience, and though it is well studied, many questions remain regarding the neural bases of language. Whether in a hearing speaker or a Deaf signer, spoken and signed language acquisition (with eventual proficiency) develop similarly and share common neural networks. While signed language and spoken language engage completely different sensory modalities (visual-manual versus the more common auditory-oromotor), both languages share grammatical structures and contain syntactic intricacies innate to all languages. Thus, studies of multi-modal bilingualism (e.g., a native English speaker learning American Sign Language) can lead to a better understanding of the neurobiology of second language acquisition, and of language more broadly. For example, can the well-developed visual-spatial processing networks in English speakers support grammatical processing in sign language, which relies heavily on location and movement? The present study furthers the understanding of the neural correlates of second language acquisition by studying late L2 normal-hearing learners of American Sign Language (ASL). Twenty English-speaking ASU students enrolled in advanced American Sign Language coursework participated in our functional Magnetic Resonance Imaging (fMRI) study. The aim was to identify the brain networks engaged in syntactic processing of ASL sentences in late L2 ASL learners. While many studies have addressed the neurobiology of acquiring a second spoken language, no previous study to our knowledge has examined the brain networks supporting syntactic processing in bimodal bilinguals. We examined the brain networks engaged while perceiving ASL sentences compared to ASL word lists, as well as written English sentences and word lists.
We hypothesized that our findings in late bimodal bilinguals would largely coincide with the unimodal bilingual literature, but with a few notable differences, including additional attention networks engaged by ASL processing. Our results suggest that there is a high degree of overlap in sentence processing networks for ASL and English. There are also important differences with regard to the recruitment of speech comprehension, visual-spatial, and domain-general brain networks. Our findings suggest that well-known sentence comprehension and syntactic processing regions for spoken languages are flexible and modality-independent.
Contributors: Mickelsen, Soren Brooks (Co-author) / Johnson, Lisa (Co-author) / Rogalsky, Corianne (Thesis director) / Azuma, Tamiko (Committee member) / Howard, Pamela (Committee member) / Department of Speech and Hearing Science (Contributor) / School of Human Evolution and Social Change (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description

The increase of Traumatic Brain Injury (TBI) cases in recent war history has increased the urgency of research regarding how veterans are affected by TBIs. The purpose of this study was to evaluate the effects of TBI on speech recognition in noise. The AzBio Sentence Test was completed at signal-to-noise ratios (S/N) from -10 dB to +15 dB for a control group of ten participants and one US military veteran with a history of service-connected TBI. All participants had normal hearing sensitivity, defined as thresholds of 20 dB or better at frequencies from 250 to 8000 Hz, in addition to having tympanograms within normal limits. Comparison of the data from the control group and the veteran suggested that the veteran performed worse than the majority of the control group on the AzBio Sentence Test. Further research with more participants would be beneficial to our understanding of how veterans with TBI perform on speech recognition tests in the presence of background noise.
Contributors: Corvasce, Erica Marie (Author) / Peterson, Kathleen (Thesis director) / Williams, Erica (Committee member) / Azuma, Tamiko (Committee member) / Barrett, The Honors College (Contributor) / Department of Speech and Hearing Science (Contributor)
Created: 2015-05
Description

The main goal of this project was to create a curriculum aimed at fourth-grade students. This curriculum was intended to introduce them to different forms of communication and to teach them the skills, attitudes, behaviors, and knowledge that would enable them to communicate and interact better with a wide range of people with different communication styles. American Sign Language was used in this curriculum as an example of an alternative communication method. The project included developing the teaching materials and lessons that made up the curriculum; the curriculum was then implemented with 11 fourth-grade students.
Contributors: Stosz, Julia Taylor (Author) / Jordan, Michelle (Thesis director) / Howard, Pamela (Committee member) / Boxwell, Pamela (Committee member) / Barrett, The Honors College (Contributor) / Department of Speech and Hearing Science (Contributor)
Created: 2014-05
Description

The purpose of this study was to explore the effects of word type, phonotactic probability, word frequency, and neighborhood density on the vocabularies of children with mild-to-moderate hearing loss compared to children with normal hearing. This was done by assigning values for these parameters to each test item on the Peabody Picture Vocabulary Test (Version III, Form B) to quantify and characterize the performance of children with hearing loss relative to that of children with normal hearing. It was expected that PPVT IIIB scores would: 1) decrease as the degree of hearing loss increased; 2) increase as a function of age; 3) be more positively related to nouns than to verbs or attributes; 4) be negatively related to phonotactic probability; 5) be negatively related to word frequency; and 6) be negatively related to neighborhood density. All but one of the expected outcomes were observed. PPVT IIIB performance decreased as hearing loss increased, and increased with age. Performance for nouns, verbs, and attributes increased with PPVT IIIB performance, whereas neighborhood density decreased. Phonotactic probability was expected to decrease as PPVT IIIB performance increased, but instead it increased due to the confounding effects of word length and the order of words on the test. Age and hearing level were rejected by the multiple regression analyses as contributors to PPVT IIIB performance for the children with hearing loss. Overall, the results indicate that there is a 2-year difference in vocabulary age between children with normal hearing and children with hearing loss, and that this may be due to factors external to the child (such as word frequency and phonotactic probability) rather than the child's age and hearing level. This suggests that children with hearing loss need continued clinical services (amplification) as well as additional support services in school throughout childhood.
Contributors: Latto, Allison Renee (Author) / Pittman, Andrea (Thesis director) / Gray, Shelley (Committee member) / Brinkley, Shara (Committee member) / Barrett, The Honors College (Contributor) / Department of Speech and Hearing Science (Contributor)
Created: 2013-05