Description
Frequency effects favoring high print-frequency words have been observed in frequency judgment memory tasks. Healthy young adults performed frequency judgment tasks; one group performed the task alone, while another group performed it while alternating attention to a secondary task (mathematical equations). Performance was assessed by accuracy (correct and error responses) and reaction times, which were analyzed in terms of memory load (task condition), number of repetitions, the effect of high vs. low print frequency, and correlations with working memory span. Multinomial tree analyses were also completed to investigate source vs. item memory; they revealed a mirror effect in episodic memory experiments (source memory) but a frequency advantage in span tasks (item memory). Interestingly, we did not observe an advantage for high working memory span individuals in frequency judgments, even when participants split their attention during the dual task (similar to a complex span task). We concluded that both the amount of attentional resources allocated and prior experience with an item affect how it is stored in memory.
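As a rough illustration of the accuracy and reaction-time breakdown described above, the following Python sketch aggregates hypothetical trial-level data by task condition and print frequency; the file name and column names (correct, rt_ms, print_frequency) are assumptions, not the study's actual materials.

```python
# A minimal sketch, assuming a trial-level table with hypothetical columns.
import pandas as pd
from scipy import stats

trials = pd.read_csv("frequency_judgments.csv")  # hypothetical file

# Mean accuracy and RT per participant, split by task condition
# (single vs. dual) and print frequency (high vs. low).
cells = (trials
         .groupby(["participant", "condition", "print_frequency"])
         .agg(accuracy=("correct", "mean"), mean_rt=("rt_ms", "mean"))
         .reset_index())

# Paired comparison of high- vs. low-frequency accuracy within participants.
wide = cells.pivot_table(index="participant", columns="print_frequency",
                         values="accuracy")
t, p = stats.ttest_rel(wide["high"], wide["low"])
print(f"high vs. low print frequency: t = {t:.2f}, p = {p:.3f}")
```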
Contributors: Peterson, Megan Paige (Author) / Azuma, Tamiko (Thesis advisor) / Gray, Shelley (Committee member) / Liss, Julie (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Everyday speech communication typically takes place face-to-face. Accordingly, the task of perceiving speech is a multisensory phenomenon involving both auditory and visual information. The current investigation examines how visual information influences recognition of dysarthric speech. It also explores whether the influence of visual information depends upon age. Forty adults participated in the study, which measured intelligibility (percent words correct) of dysarthric speech in auditory versus audiovisual conditions. Participants were then separated into two groups, older adults (ages 47 to 68) and young adults (ages 19 to 36), to examine the influence of age. Findings revealed that all participants, regardless of age, improved their ability to recognize dysarthric speech when visual speech was added to the auditory signal. The magnitude of this benefit, however, was greater for older adults than for younger adults. These results inform our understanding of how visual speech information influences understanding of dysarthric speech.
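The intelligibility metric and group comparison described here can be sketched as follows; this is a minimal illustration with hypothetical file, column, and condition labels (auditory, audiovisual), not the study's actual analysis code.

```python
# A sketch of percent-words-correct scoring and the audiovisual-benefit
# comparison between age groups, under assumed column names.
import pandas as pd
from scipy import stats

scores = pd.read_csv("transcriptions.csv")  # hypothetical: one row per phrase
scores["pwc"] = 100 * scores["words_correct"] / scores["words_total"]

# Audiovisual benefit = AV intelligibility minus auditory-only intelligibility.
by_cond = scores.pivot_table(index=["participant", "age_group"],
                             columns="condition", values="pwc").reset_index()
by_cond["av_benefit"] = by_cond["audiovisual"] - by_cond["auditory"]

older = by_cond.loc[by_cond["age_group"] == "older", "av_benefit"]
young = by_cond.loc[by_cond["age_group"] == "young", "av_benefit"]
t, p = stats.ttest_ind(older, young)
print(f"AV benefit, older vs. young: t = {t:.2f}, p = {p:.3f}")
```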
Contributors: Fall, Elizabeth (Author) / Liss, Julie (Thesis advisor) / Berisha, Visar (Committee member) / Gray, Shelley (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
A multitude of individuals across the globe suffer from hearing loss, and that number continues to grow. Cochlear implants, while having limitations, provide electrical input that enables users to "hear" and more fully interact socially with their environment. There has been a clinical shift toward bilateral placement of implants and toward bimodal placement of a hearing aid in the contralateral ear when residual hearing is present. However, there is potentially more to subsequent speech perception for bilateral and bimodal cochlear implant users than the electric and acoustic input received via these modalities. For normal listeners, vision plays a role; Rosenblum (2005) points out that it is a key feature of an integrated perceptual process. Logically, cochlear implant users should also benefit from integrated visual input. The question is how exactly vision provides benefit to bilateral and bimodal users. Eight bilateral and five bimodal participants received randomized experimental phrases, previously generated by Liss et al. (1998), in auditory and audiovisual conditions. The participants recorded their perception of the input. Data were subsequently analyzed for percent words correct, consonant errors, and lexical boundary error types. Overall, vision was found to improve speech perception for bilateral and bimodal cochlear implant participants. Each group experienced a significant increase in percent words correct when visual input was added. With vision, bilateral participants reduced consonant place errors and demonstrated increased use of the syllabic stress cues used in lexical segmentation. These results suggest that vision may benefit bilateral cochlear implant users by granting access to place information and by augmenting cues for syllabic stress in the absence of acoustic input. On the other hand, vision did not provide the bimodal participants significantly increased access to place and stress cues, so the exact mechanism by which bimodal implant users improved speech perception with the addition of vision remains unknown. These results point to the complexities of audiovisual integration during speech perception and the need for continued research regarding the benefit vision provides to bilateral and bimodal cochlear implant users.
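The lexical boundary error (LBE) analysis mentioned above is conventionally summarized as a contingency table of error type (boundary insertion vs. deletion) by syllable stress (strong vs. weak); in this line of work, reliance on stress cues for segmentation tends to show up as insertions before strong syllables and deletions before weak ones. A minimal sketch, assuming errors were already coded into a hypothetical table:

```python
# Tally coded lexical boundary errors into the standard 2x2 proportions table.
# Column names and code labels are assumptions.
import pandas as pd

lbes = pd.read_csv("lbe_codes.csv")  # hypothetical: one coded error per row
# error_type in {"insert", "delete"}; syllable in {"strong", "weak"}
table = pd.crosstab(lbes["error_type"], lbes["syllable"], normalize="all")
print(table.round(3))
```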
Contributors: Ludwig, Cimarron (Author) / Liss, Julie (Thesis advisor) / Dorman, Michael (Committee member) / Azuma, Tamiko (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Language and music are fundamentally entwined within human culture. The two domains share similar properties, including rhythm, acoustic complexity, and hierarchical structure. Although language and music have commonalities, abilities in these two domains have been found to dissociate after brain damage, leaving unanswered questions about their interconnectedness, including whether one domain can support the other when damage occurs. Evidence bearing on this question exists for speech production: musical pitch and rhythm are employed in Melodic Intonation Therapy to improve expressive language recovery, but little is known about the effects of music on the recovery of speech perception and receptive language. This research is among the first to address the effects of music on speech perception. Two groups of participants, an older adult group (n = 24; M = 71.63 yrs) and a younger adult group (n = 50; M = 21.88 yrs), took part in the study. A native female speaker of Standard American English created four types of stimuli: pseudoword sentences of normal speech, simultaneous music-speech, rhythmic speech, and music-primed speech. The stimuli were presented binaurally, and participants were instructed to repeat what they heard following a 15-second delay. Results were analyzed using standard parametric techniques. Musical priming of speech, but not simultaneous synchronized music and speech, facilitated speech perception in both the younger and older adult groups; this effect may be driven by rhythmic information. The younger adults outperformed the older adults in all conditions. Because the speech perception task relied heavily on working memory, and working memory is known to decline with age, participants also completed a working memory task to be used as a covariate in analyses of differences across stimulus types and age groups. Working memory ability correlated with speech perception performance, but the age-related performance differences remained significant once working memory differences were taken into account. These results provide new avenues for facilitating speech perception in stroke patients and shed light upon the underlying mechanisms of Melodic Intonation Therapy for speech production.
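A minimal sketch of the covariate analysis described above, modeling speech perception scores by stimulus type and age group with working memory span as a covariate; the file and column names are assumptions, not the study's actual data:

```python
# ANCOVA-style model: stimulus type x age group, controlling for WM span.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("speech_perception.csv")  # hypothetical long-format table
model = smf.ols("score ~ C(stimulus_type) * C(age_group) + wm_span",
                data=df).fit()
# Type II ANOVA table: age and stimulus effects net of working memory span.
print(sm.stats.anova_lm(model, typ=2))
```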
Contributors: LaCroix, Arianna (Author) / Rogalsky, Corianne (Thesis advisor) / Gray, Shelley (Committee member) / Liss, Julie (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Often termed the "gold standard" in the differential diagnosis of dysarthria, the etiology-based Mayo Clinic classification approach has been used nearly exclusively by clinicians since the early 1970s. However, this descriptive method results in a distinct overlap of perceptual features across various etiologies, limiting the clinical utility of such a system for differential diagnosis. Acoustic analysis may provide a more objective means of improving the overall reliability of classification (Guerra & Lovely, 2003). The following paper investigates the potential use of a taxonomical approach to dysarthria. The purpose of this study was to identify a set of acoustic correlates of the perceptual dimensions used to group similarly sounding speakers with dysarthria, irrespective of disease etiology. The present study utilized a free classification auditory perceptual task to identify a set of salient speech characteristics displayed by speakers with varying dysarthria types and perceived by listeners; the results were then analyzed using multidimensional scaling (MDS), correlation analysis, and cluster analysis. In addition, discriminant function analysis (DFA) was conducted to establish the feasibility of using the dimensions underlying perceptual similarity in dysarthria to classify speakers into both listener-derived clusters and etiology-based categories. The hypothesis was as follows: because of the presumed predictive link between the acoustic correlates and listener-derived clusters, the DFA classification results should resemble the perceptual clusters more closely than the etiology-based (Mayo system) classifications. The MDS revealed three dimensions, which were significantly correlated with 1) metrics capturing rate and rhythm, 2) intelligibility, and 3) all of the long-term average spectrum metrics in the 8000 Hz band, which has been linked to degree of phonemic distinctiveness (Utianski et al., 2012). A qualitative examination of listener notes supported the MDS and correlation results, with listeners overwhelmingly referring to speaking rate/rhythm, intelligibility, and articulatory precision while participating in the free classification task. Additionally, the acoustic correlates revealed by the MDS and subjected to DFA indeed predicted listener group classification. These results support the use of acoustic measurement as representative of listener perception, and they represent the first phase in supporting a perceptually relevant taxonomy of dysarthria.
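The MDS, correlation, and DFA pipeline described above can be sketched as follows, with scikit-learn's LinearDiscriminantAnalysis standing in for the DFA; the input files, array shapes, and labels are assumptions, not the study's actual data:

```python
# Perceptual MDS from listener groupings, acoustic correlation, and
# cross-validated discriminant classification into listener clusters.
import numpy as np
from scipy import stats
from sklearn.manifold import MDS
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# dissim: speaker-by-speaker dissimilarities from the free classification
# task (e.g., 1 - proportion of listeners who grouped two speakers together).
dissim = np.load("dissimilarity.npy")        # hypothetical file
coords = MDS(n_components=3, dissimilarity="precomputed",
             random_state=0).fit_transform(dissim)

# acoustics: speakers x acoustic metrics; correlate each metric with each
# perceptual dimension to identify its acoustic correlates.
acoustics = np.load("acoustic_metrics.npy")  # hypothetical file
r, p = stats.pearsonr(coords[:, 0], acoustics[:, 0])  # e.g., dim 1 vs. rate

# DFA from acoustic correlates to the listener-derived clusters.
clusters = np.load("listener_clusters.npy")  # hypothetical labels
acc = cross_val_score(LinearDiscriminantAnalysis(), acoustics, clusters, cv=5)
print(f"classification into listener clusters: {acc.mean():.2%}")
```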
Contributors: Norton, Rebecca (Author) / Liss, Julie (Thesis advisor) / Azuma, Tamiko (Committee member) / Ingram, David (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Military veterans have a significantly higher incidence of mild traumatic brain injury (mTBI), depression, and post-traumatic stress disorder (PTSD) compared to civilians. Military veterans also represent a rapidly growing subgroup of college students, due in part to the robust, financially incentivizing educational benefits of the Post-9/11 GI Bill. The overlapping, cognitively impacting symptoms of service-related conditions, combined with the underreporting of mTBI and psychiatric conditions, make accurate assessment of cognitive performance in military veterans challenging. Recent research provides conflicting findings on cognitive performance patterns in military veterans. The purpose of this study was to determine whether service-related conditions and self-assessments predict performance on complex working memory and executive function tasks for military veteran college students. Sixty-one military veteran college students attending classes at Arizona State University campuses completed clinical neuropsychological tasks and experimental working memory and executive function tasks. The results revealed that a history of mTBI significantly predicted poorer performance in verbal working memory and decision-making. Depression significantly predicted poorer performance in executive function related to serial updating. In contrast, the commonly used clinical neuropsychological tasks were not sensitive to service-related conditions, including mTBI, PTSD, and depression. The differing performance patterns observed between the clinical tasks and the more complex experimental tasks suggest that researchers and clinicians should use tests that sufficiently tax verbal working memory and executive function when evaluating the subtle, higher-order cognitive deficits associated with mTBI and depression.
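A minimal sketch of the predictive analysis described above, regressing experimental task performance on service-related conditions; all variable and file names are hypothetical:

```python
# Does self-reported condition status predict experimental task performance?
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("veteran_cognition.csv")  # hypothetical table
model = smf.ols("verbal_wm_score ~ mtbi_history + ptsd_severity"
                " + depression_score", data=df).fit()
print(model.summary())  # e.g., coefficient on mtbi_history
```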
Contributors: Gallagher, Karen Louise (Author) / Azuma, Tamiko (Thesis advisor) / Liss, Julie (Committee member) / Lavoie, Michael (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
With a growing number of adults with autism spectrum disorder (ASD), more and more research has been conducted on majority-male cohorts with ASD spanning youth, adolescence, and, in some cases, older age. Currently, males make up the majority of individuals diagnosed with ASD; however, recent research suggests that the gender gap is closing due to more advanced screening and a better understanding of how females with ASD present their symptoms. Little research has been published on the neurocognitive differences between older adults with ASD and neurotypical (NT) counterparts, and none has specifically addressed older women with ASD. This study utilized neuroimaging and neuropsychological tests to examine differences by diagnosis and sex across four distinct groups: older men with ASD, older women with ASD, older NT men, and older NT women. In each group, hippocampal size (via FreeSurfer) was analyzed for differences as well as for correlations with neuropsychological tests. Participants (ASD female, n = 12; NT female, n = 14; ASD male, n = 30; NT male, n = 22) were similar in age, IQ, and education. The results indicated that the ASD group as a whole performed worse on executive functioning tasks (Wisconsin Card Sorting Test, Trail Making Test) and memory-related tasks (Rey Auditory Verbal Learning Test, Wechsler Memory Scale: Visual Reproduction) compared to the NT group. Interactions of sex by diagnosis approached significance only for WCST non-perseverative errors, with women with ASD performing worse than NT women and no group difference between men. Effect sizes between the female groups (ASD female vs. NT female) were more than double those of the male groups (ASD male vs. NT male) for all WCST and AVLT measures. Participants with ASD had significantly smaller right hippocampal volumes than NT participants. In addition, all older women showed larger hippocampal volumes, when corrected for total intracranial volume (TIV), compared to all older men. Overall, NT females showed significant correlations between their hippocampal volumes and all neuropsychological tests, whereas no other group showed significant correlations. These results suggest a tighter coupling between hippocampal size and cognition in NT females than in NT males or either sex with ASD. This study promotes further understanding of the neuropsychological differences between older men and women, both with and without ASD. Further research is needed on a larger sample of older women with and without ASD.
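A sketch of the TIV correction and brain-behavior correlations described above, assuming a hypothetical table of FreeSurfer-derived volumes; the simple ratio correction shown here is only one of several ways the study could have adjusted for head size:

```python
# Volume correction and within-group brain-behavior correlation.
import pandas as pd
from scipy import stats

df = pd.read_csv("hippocampus.csv")  # hypothetical: one row per participant
# Simple ratio correction of hippocampal volume for total intracranial volume.
df["hc_corrected"] = (df["left_hc_mm3"] + df["right_hc_mm3"]) / df["tiv_mm3"]

# Brain-behavior correlation within one group, e.g., NT females and AVLT.
nt_f = df[(df["group"] == "NT") & (df["sex"] == "F")]
r, p = stats.pearsonr(nt_f["hc_corrected"], nt_f["avlt_total"])
print(f"NT females, corrected volume vs. AVLT: r = {r:.2f}, p = {p:.3f}")
```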
Contributors: Webb, Christen Len (Author) / Braden, B. Blair (Thesis advisor) / Azuma, Tamiko (Committee member) / Dixon, Maria (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Cognitive deficits often accompany language impairments post-stroke. Past research has focused on working memory in aphasia, but attention is largely underexplored. Therefore, this dissertation first quantifies attention deficits post-stroke before investigating whether preserved cognitive abilities, including attention, can improve auditory sentence comprehension post-stroke. In Experiment 1a, three components of attention (alerting, orienting, executive control) were measured in persons with aphasia and matched controls using visual and auditory versions of the well-studied Attention Network Test. Experiment 1b then explored the neural resources supporting each component of attention in the visual and auditory modalities in chronic stroke participants. The results from Experiment 1a indicate that alerting, orienting, and executive control are uniquely affected by presentation modality. The lesion-symptom mapping results from Experiment 1b associated the left angular gyrus with visual executive control, the left supramarginal gyrus with auditory alerting, and Broca’s area (pars opercularis) with auditory orienting attention post-stroke. Overall, these findings indicate that perceptual modality may impact the lateralization of some aspects of attention; auditory attention may thus be more susceptible to impairment after a left hemisphere stroke.
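The three Attention Network Test components named above are conventionally computed as reaction-time difference scores (alerting: no cue minus double cue; orienting: center cue minus spatial cue; executive control: incongruent minus congruent flankers). A minimal sketch with hypothetical trial-level column names:

```python
# Standard ANT network scores from correct-trial mean RTs per participant.
import pandas as pd

trials = pd.read_csv("ant_trials.csv")  # hypothetical file
correct = trials[trials["correct"] == 1]

# Mean RT per participant for each cue and flanker condition.
cue_rt = correct.pivot_table(index="participant", columns="cue",
                             values="rt_ms", aggfunc="mean")
flank_rt = correct.pivot_table(index="participant", columns="flanker",
                               values="rt_ms", aggfunc="mean")

alerting = cue_rt["none"] - cue_rt["double"]      # benefit of a warning cue
orienting = cue_rt["center"] - cue_rt["spatial"]  # benefit of spatial info
executive = flank_rt["incongruent"] - flank_rt["congruent"]  # conflict cost
```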

Prosody, the rhythm and pitch changes associated with spoken language, may improve spoken language comprehension in persons with aphasia by recruiting intact cognitive abilities (e.g., attention and working memory) and their associated non-lesioned brain regions post-stroke. Therefore, Experiment 2 explored the relationship between cognition, two unique prosody manipulations, lesion location, and auditory sentence comprehension in persons with chronic stroke and matched controls. The combined results from Experiments 2a and 2b indicate that stroke participants with better auditory orienting attention and a specific left fronto-parietal network intact had greater comprehension of sentences spoken with sentence prosody. For list prosody, participants with deficits in auditory executive control and/or short-term memory, but with the left angular gyrus and globus pallidus relatively intact, demonstrated better comprehension. Overall, the results from Experiment 2 indicate that following a left hemisphere stroke, individuals need good auditory attention and an intact left fronto-parietal network to benefit from typical sentence prosody; when cognitive deficits are present and this fronto-parietal network is damaged, list prosody may be more beneficial.
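The lesion-symptom mapping referenced in these experiments is typically a voxel-wise comparison of behavior between patients with and without damage at each voxel. A minimal sketch under that assumption, with hypothetical inputs:

```python
# Voxel-based lesion-symptom mapping: mass-univariate t-tests across voxels.
import numpy as np
from scipy import stats

lesions = np.load("lesion_masks.npy")    # patients x voxels, 0/1 (hypothetical)
behavior = np.load("comprehension.npy")  # one score per patient (hypothetical)

tvals = np.full(lesions.shape[1], np.nan)
for v in range(lesions.shape[1]):
    hit = behavior[lesions[:, v] == 1]
    spared = behavior[lesions[:, v] == 0]
    if hit.size >= 5 and spared.size >= 5:  # skip rarely lesioned voxels
        tvals[v] = stats.ttest_ind(spared, hit).statistic
# Large positive t-values mark voxels where damage predicts worse comprehension
# (multiple-comparison correction omitted in this sketch).
```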
Contributors: LaCroix, Arianna (Author) / Rogalsky, Corianne (Thesis advisor) / Azuma, Tamiko (Committee member) / Braden, B. Blair (Committee member) / Liss, Julie (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
This pilot study evaluated whether Story Champs and Puente de Cuentos helped bilingual preschoolers increase their use of emotional terms and their ability to tell stories. Participants included 10 Spanish-English bilingual preschoolers. The intervention was conducted in 9 sessions over 3 days, with the Test of Narrative Retell used to measure results. The results did not show significant gains in either emotional term usage or storytelling ability, but, for a pilot study, they were promising.
Contributors: Sato, Leslie Mariko (Author) / Restrepo, Maria (Thesis director) / Dixon, Maria (Committee member) / Department of Speech and Hearing Science (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
Speech intelligibility measures how well a speaker can be understood by a listener. Traditional measures of intelligibility, such as word accuracy, are not sufficient to reveal the reasons for intelligibility degradation. This dissertation investigates the underlying sources of intelligibility degradation from the perspectives of both the speaker and the listener. Segmental phoneme errors and suprasegmental lexical boundary errors are developed to reveal the perceptual strategies of the listener. A comprehensive set of automated acoustic measures is developed to quantify variations in the acoustic signal along three perceptual dimensions: articulation, prosody, and vocal quality. The developed measures were validated on a dysarthric speech dataset spanning a range of severities. Multiple regression analysis is employed to show that the developed measures can predict perceptual ratings reliably. The relationship between the acoustic measures and the listening errors is investigated to show the interaction between speech production and perception. The hypothesis is that the segmental phoneme errors are mainly caused by imprecise articulation, while the suprasegmental lexical boundary errors are due to unreliable phonemic information as well as abnormal rhythm and prosody patterns. To test the hypothesis, within-speaker variations were simulated in different speaking modes. Significant changes were detected in both the acoustic signals and the listening errors. Results of the regression analysis support the hypothesis by showing that changes in the articulation-related acoustic features are important in predicting changes in phoneme errors, while changes in both the articulation- and prosody-related features are important in predicting changes in lexical boundary errors. Moreover, significant correlation was achieved in the cross-validation experiment, indicating that it is possible to predict intelligibility variations from the acoustic signal.
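A sketch of the cross-validation experiment described above, predicting intelligibility from the automated acoustic measures and correlating predictions with observed scores; the array names and the simple linear model are assumptions:

```python
# Cross-validated prediction of perceptual intelligibility from acoustics.
import numpy as np
from scipy import stats
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict

X = np.load("acoustic_features.npy")  # speakers x measures (hypothetical)
y = np.load("intelligibility.npy")    # perceptual score per speaker (hypothetical)

# Predict each speaker's score from folds that exclude that speaker,
# then correlate predictions with the observed perceptual scores.
pred = cross_val_predict(LinearRegression(), X, y, cv=5)
r, p = stats.pearsonr(y, pred)
print(f"cross-validated prediction: r = {r:.2f}, p = {p:.3f}")
```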
Contributors: Jiao, Yishan (Author) / Berisha, Visar (Thesis advisor) / Liss, Julie (Thesis advisor) / Zhou, Yi (Committee member) / Arizona State University (Publisher)
Created: 2019