This collection includes both ASU Theses and Dissertations, submitted by graduate students, and Barrett, The Honors College theses, submitted by undergraduate students.

Description

The purpose of the present study was to determine if vocabulary knowledge is related to degree of hearing loss. A 50-question multiple-choice vocabulary test composed of old and new words was administered to 43 adults with hearing loss (19 to 92 years old) and 51 adults with normal hearing (20 to 40 years old). Degree of hearing loss ranged from mild to moderately severe as determined by bilateral pure-tone thresholds. Education levels ranged from some high school to graduate degrees. It was predicted that knowledge of new words would decrease with increasing hearing loss, whereas knowledge of old words would be unaffected. The Test of Contemporary Vocabulary (TCV) was developed for this study and contained words with old and new definitions. The vocabulary scores were subjected to a repeated-measures ANOVA with definition type (old and new) as the within-subjects factor. Hearing level and education were between-subjects factors, while age was entered as a covariate. The results revealed no main effect of age or education level, while a significant main effect of hearing level was observed. Specifically, performance for new words decreased significantly as degree of hearing loss increased. A similar effect was not observed for old words. These results indicate that knowledge of new definitions is inversely related to degree of hearing loss.
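
As an illustration of the analysis described in this abstract, the sketch below fits a comparable model in Python with statsmodels. The file name, column names, and the mixed-model formulation (a random intercept per subject standing in for the repeated-measures machinery) are assumptions for illustration, not the author's actual analysis code.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per subject per definition type.
# Assumed columns: score, definition_type, hearing_level, education, age, subject_id.
df = pd.read_csv("tcv_scores_long.csv")

# Mixed-effects approximation of the analysis described above: definition type is
# the within-subjects factor, hearing level and education are between-subjects
# factors, age is a covariate, and a random intercept per subject accounts for
# the repeated measures.
model = smf.mixedlm(
    "score ~ C(definition_type) * C(hearing_level) + C(education) + age",
    data=df,
    groups=df["subject_id"],
)
print(model.fit().summary())
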
Contributors: Marzan, Nicole Ann (Author) / Pittman, Andrea (Thesis director) / Azuma, Tamiko (Committee member) / Wexler, Kathryn (Committee member) / Department of Speech and Hearing Science (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description

Language acquisition is a phenomenon we all experience, and though it is well studied, many questions remain regarding the neural bases of language. Whether for a hearing speaker or a Deaf signer, spoken and signed language acquisition (with eventual proficiency) develop similarly and share common neural networks. While signed language and spoken language engage completely different sensory modalities (visual-manual versus the more common auditory-oromotor), both languages share grammatical structures and contain syntactic intricacies innate to all languages. Thus, studies of multi-modal bilingualism (e.g., a native English speaker learning American Sign Language) can lead to a better understanding of the neurobiology of second language acquisition, and of language more broadly. For example, can the well-developed visual-spatial processing networks in English speakers support grammatical processing in sign language, which relies heavily on location and movement? The present study furthers the understanding of the neural correlates of second language acquisition by studying late L2 normal-hearing learners of American Sign Language (ASL). Twenty English-speaking ASU students enrolled in advanced American Sign Language coursework participated in our functional Magnetic Resonance Imaging (fMRI) study. The aim was to identify the brain networks engaged in syntactic processing of ASL sentences in late L2 ASL learners. While many studies have addressed the neurobiology of acquiring a second spoken language, no previous study to our knowledge has examined the brain networks supporting syntactic processing in bimodal bilinguals. We examined the brain networks engaged while perceiving ASL sentences compared to ASL word lists, as well as written English sentences and word lists. We hypothesized that our findings in late bimodal bilinguals would largely coincide with the unimodal bilingual literature, but with a few notable differences, including additional attention networks being engaged by ASL processing. Our results suggest that there is a high degree of overlap in sentence processing networks for ASL and English. There are also important differences with regard to the recruitment of speech comprehension, visual-spatial, and domain-general brain networks. Our findings suggest that well-known sentence comprehension and syntactic processing regions for spoken languages are flexible and modality-independent.
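
As a rough illustration of the contrast-based comparison described in this abstract, the sketch below sets up sentence-versus-word-list contrasts with nilearn. The file names, condition labels, and acquisition parameters are assumptions for illustration, not the study's actual pipeline.

import pandas as pd
from nilearn.glm.first_level import FirstLevelModel

# Hypothetical file names; the events table is assumed to have onset, duration,
# and trial_type columns using the four condition labels referenced below.
bold_img = "sub-01_task-lang_bold.nii.gz"
events = pd.read_table("sub-01_task-lang_events.tsv")

# Fit a standard first-level GLM to the run (illustrative acquisition parameters).
model = FirstLevelModel(t_r=2.0, hrf_model="spm", smoothing_fwhm=6.0)
model = model.fit(bold_img, events=events)

# Sentence-minus-word-list contrasts for each modality.
asl_syntax_map = model.compute_contrast("asl_sentence - asl_wordlist")
eng_syntax_map = model.compute_contrast("eng_sentence - eng_wordlist")
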
Contributors: Mickelsen, Soren Brooks (Co-author) / Johnson, Lisa (Co-author) / Rogalsky, Corianne (Thesis director) / Azuma, Tamiko (Committee member) / Howard, Pamela (Committee member) / Department of Speech and Hearing Science (Contributor) / School of Human Evolution and Social Change (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description

The rise in Traumatic Brain Injury (TBI) cases in recent war history has increased the urgency of research regarding how veterans are affected by TBIs. The purpose of this study was to evaluate the effects of TBI on speech recognition in noise. The AzBio Sentence Test was completed at signal-to-noise ratios (S/N) from -10 dB to +15 dB by a control group of ten participants and one US military veteran with a history of service-connected TBI. All participants had normal hearing sensitivity, defined as thresholds of 20 dB or better at frequencies from 250-8000 Hz, in addition to having tympanograms within normal limits. Comparison of the data collected on the control group versus the veteran suggested that the veteran performed worse than the majority of the control group on the AzBio Sentence Test. Further research with more participants would be beneficial to our understanding of how veterans with TBI perform on speech recognition tests in the presence of background noise.
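
For readers unfamiliar with signal-to-noise ratio conditions, the sketch below shows one common way to mix sentence material with noise at a target S/N. The function and the 5-dB step size are illustrative assumptions, not the presentation software used in the study.

import numpy as np

def mix_at_snr(speech, noise, snr_db):
    # Scale the noise so the speech-to-noise level difference equals snr_db,
    # then add the two signals (both assumed to be 1-D arrays at the same
    # sample rate and the same length).
    speech_rms = np.sqrt(np.mean(speech ** 2))
    noise_rms = np.sqrt(np.mean(noise ** 2))
    target_noise_rms = speech_rms / (10 ** (snr_db / 20))
    return speech + noise * (target_noise_rms / noise_rms)

# Illustrative set of conditions spanning the range used in the study.
snr_conditions = list(range(-10, 16, 5))  # -10, -5, 0, +5, +10, +15 dB
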
Contributors: Corvasce, Erica Marie (Author) / Peterson, Kathleen (Thesis director) / Williams, Erica (Committee member) / Azuma, Tamiko (Committee member) / Barrett, The Honors College (Contributor) / Department of Speech and Hearing Science (Contributor)
Created: 2015-05
Description

Working memory and cognitive functions contribute to speech recognition in normal-hearing and hearing-impaired listeners. In this study, auditory and cognitive functions are measured in young adult normal-hearing, elderly normal-hearing, and elderly cochlear implant subjects. The effects of age and hearing on the different measures are investigated. The correlations between auditory/cognitive functions and speech/music recognition are examined. The results may demonstrate which factors can better explain the variable performance across elderly cochlear implant users.
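
A minimal sketch of the correlational analysis mentioned in this abstract, assuming a per-subject table with the placeholder column names shown; the measures and file name are not the study's actual variables.

import pandas as pd
from scipy.stats import pearsonr

# Hypothetical per-subject data with assumed column names.
df = pd.read_csv("ci_subject_measures.csv")

predictors = ["digit_span", "stroop_score", "spectral_resolution"]
outcomes = ["speech_recognition", "music_recognition"]

# Correlate each auditory/cognitive measure with each recognition score.
for p_col in predictors:
    for o_col in outcomes:
        r, p = pearsonr(df[p_col], df[o_col])
        print(f"{p_col} vs {o_col}: r = {r:.2f}, p = {p:.3f}")
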
Contributors: Kolberg, Courtney Elizabeth (Author) / Luo, Xin (Thesis director) / Azuma, Tamiko (Committee member) / Department of Speech and Hearing Science (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description

The purpose of this paper is to evaluate the effectiveness of a craft book used for stimulation therapy on the phonetic sounds /ŋ/, /r/, /s/, /ʃ/, /tʃ/, and /θ/. The book is specifically geared toward children who do not qualify for speech remediation services but who may be at risk of a speech sound disorder. Four children, with ages ranging from 4;3 to 7;6, participated in the study. The study lasted four weeks, during which data were collected weekly via Likert-scale surveys along with two conversational speech samples. The speech samples were phonetically transcribed, showing minimal differences before and after use of the craft book. Data from the surveys give insight into the children's favorite crafts, the level of difficulty of each craft, and the likelihood that the craft book would be used as part of a remediation program. The study had limitations in sample size, duration, and number of craft activities. Future revisions should increase the number of crafts available per chapter and incorporate an educational component for parents into the introduction.
Contributors: Kolaz, Chloe Ann (Author) / Weinhold, Juliet (Thesis director) / Azuma, Tamiko (Committee member) / Barrett, The Honors College (Contributor) / Department of Speech and Hearing Science (Contributor) / T. Denny Sanford School of Social and Family Dynamics (Contributor)
Created: 2014-05
Description

To localize different sound sources in an environment, the auditory system analyzes the acoustic properties of sounds reaching the ears to determine the location of each source. Successful sound localization is important for improving signal detection and speech intelligibility in a noisy environment. Sound localization is not a uni-sensory experience and can be influenced by visual information (e.g., the ventriloquist effect). Vision provides context and organizes auditory space for the auditory system. This investigation tracked the eye movements of human subjects using a non-invasive eye-tracking system and evaluated the impact of visual stimulation on the localization of a phantom sound source generated through timing-based stereophony. It was hypothesized that gaze movement could reveal the way in which visual stimulation (LED lights) shifts the perception of a sound source. However, the results show that subjects do not always move their gaze toward the light, even when they experience strong visual capture. On average, the gaze direction indicates the perceived sound location both with and without light stimulation.
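
As background on the phantom-source method mentioned in this abstract, the sketch below builds a two-channel stimulus whose perceived location is shifted by an inter-channel time offset (timing-based stereophony). The delay value, sample rate, and function name are illustrative assumptions, not the study's actual stimulus code.

import numpy as np

def phantom_source(signal, delay_ms, fs=44100):
    # Timing-based stereophony: delaying one channel shifts the perceived
    # (phantom) source toward the leading loudspeaker.
    delay_samples = int(round(delay_ms * fs / 1000))
    lead = np.concatenate([signal, np.zeros(delay_samples)])
    lag = np.concatenate([np.zeros(delay_samples), signal])
    return np.column_stack([lead, lag])  # shape: (n_samples, 2)

# Example: a 0.5 ms lead on the left channel pulls the phantom image leftward.
tone = np.sin(2 * np.pi * 500 * np.arange(0, 0.2, 1 / 44100))
stimulus = phantom_source(tone, delay_ms=0.5)
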
Contributors: Flores, Nancy Gloria (Author) / Zhou, Yi (Thesis director) / Azuma, Tamiko (Committee member) / Department of Speech and Hearing Science (Contributor) / School of International Letters and Cultures (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05