Matching Items (8)
Description
Working memory (WM) and attention deficits have been well documented in individuals with aphasia (IWA) (e.g., Caspari et al., 1998; Erickson et al., 1996; Tseng et al., 1993; Wright et al., 2003). Research into these cognitive domains has spurred a theoretical shift in how aphasia is conceptualized, from a purely linguistic disorder to a cognitive-information-processing account. Language deficits experienced by IWA may result from WM impairments or from an inability to allocate cognitive effort to the task at hand. However, how language itself affects performance on these tasks has rarely been investigated, and a more direct measure of the effort invested in language tasks is needed. Heart rate variability (HRV) is a physiological measure of cognitive workload that has been used to index effort in neurologically intact participants. The objectives of the study were: (1) determining the feasibility of using HRV as a measure of the effort IWA invest in verbal compared with spatial WM tasks; (2) comparing participants' performance on verbal and spatial WM tasks; and (3) determining the relationship among performance, perceived task difficulty, and HRV across verbal and spatial tasks. Eleven IWA and 21 age- and education-matched controls completed verbal and spatial n-back tasks at three difficulty levels. Difficulty ratings were obtained before and after each task. Results indicated that spatial WM was relatively preserved compared with verbal WM in the aphasia group. Additionally, the aphasia group was better at rating task difficulty after completing the tasks than at estimating it beforehand. Significant baseline-task differences in HRV were found for both groups. Relationships between HRV and performance and between HRV and task difficulty were non-significant. Results suggest that WM performance deficits in aphasia may be driven primarily by the underlying language deficit. Baseline-task differences in HRV indicate that effort is being allocated to the tasks. Difficulty ratings indicate that IWA may underestimate task demands for both verbal and spatial stimuli. However, the extent to which difficulty ratings reflect the effort allocated remains unclear. Additional research is necessary to further quantify the amount of effort IWA allocate to verbal and non-verbal tasks.
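As background for the HRV measure described above, the following is a minimal sketch of how heart rate variability is commonly quantified from inter-beat (RR) intervals using standard time-domain indices (RMSSD, SDNN); it is not the author's analysis pipeline, and the function name and sample RR values are hypothetical.

```python
import numpy as np

def hrv_metrics(rr_intervals_ms):
    """Compute two common time-domain HRV indices from RR intervals (ms).

    RMSSD: root mean square of successive differences (vagally mediated HRV).
    SDNN:  standard deviation of all RR intervals.
    """
    rr = np.asarray(rr_intervals_ms, dtype=float)
    diffs = np.diff(rr)                      # successive differences between beats
    rmssd = np.sqrt(np.mean(diffs ** 2))
    sdnn = np.std(rr, ddof=1)
    return {"RMSSD": rmssd, "SDNN": sdnn}

# Hypothetical RR series (ms) recorded at baseline and during a 2-back task;
# lower RMSSD during the task is typically read as higher cognitive workload.
baseline_rr = [812, 795, 830, 805, 790, 820, 815, 800]
task_rr     = [760, 755, 770, 752, 765, 758, 762, 750]

print("baseline:", hrv_metrics(baseline_rr))
print("2-back:  ", hrv_metrics(task_rr))
```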
Contributors: Christensen, Stephanie Cotton (Author) / Wright, Heather H. (Thesis advisor) / Ross, Katherine B. (Committee member) / Allen, John J. B. (Committee member) / Katz, Richard C. (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Cognitive deficits often accompany language impairments post-stroke. Past research has focused on working memory in aphasia, but attention remains largely underexplored. Therefore, this dissertation will first quantify attention deficits post-stroke before investigating whether preserved cognitive abilities, including attention, can improve auditory sentence comprehension post-stroke. In Experiment 1a, three components of attention (alerting, orienting, executive control) were measured in persons with aphasia and matched controls using visual and auditory versions of the well-studied Attention Network Test. Experiment 1b then explored the neural resources supporting each component of attention in the visual and auditory modalities in chronic stroke participants. The results from Experiment 1a indicate that alerting, orienting, and executive control are each uniquely affected by presentation modality. The lesion-symptom mapping results from Experiment 1b associated the left angular gyrus with visual executive control, the left supramarginal gyrus with auditory alerting, and Broca’s area (pars opercularis) with auditory orienting attention post-stroke. Overall, these findings indicate that perceptual modality may affect the lateralization of some aspects of attention; thus, auditory attention may be more susceptible to impairment after a left hemisphere stroke.
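For readers unfamiliar with how the Attention Network Test yields separate alerting, orienting, and executive control scores, the sketch below shows the standard subtraction logic applied to condition-mean reaction times; the RT values are illustrative and are not data from this dissertation.

```python
# Standard Attention Network Test (ANT) subtraction scores.
# Mean reaction times (ms) per cue/flanker condition; values are hypothetical.
mean_rt = {
    "no_cue": 620.0,
    "double_cue": 585.0,
    "center_cue": 590.0,
    "spatial_cue": 555.0,
    "congruent": 560.0,
    "incongruent": 655.0,
}

alerting = mean_rt["no_cue"] - mean_rt["double_cue"]        # benefit of a temporal warning
orienting = mean_rt["center_cue"] - mean_rt["spatial_cue"]  # benefit of spatial information
executive = mean_rt["incongruent"] - mean_rt["congruent"]   # cost of resolving conflict

print(f"alerting:  {alerting:.0f} ms")
print(f"orienting: {orienting:.0f} ms")
print(f"executive: {executive:.0f} ms")
```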

Prosody, the rhythm and pitch changes associated with spoken language, may improve spoken language comprehension in persons with aphasia by recruiting intact cognitive abilities (e.g., attention and working memory) and their associated non-lesioned brain regions post-stroke. Therefore, Experiment 2 explored the relationship between cognition, two distinct prosody manipulations, lesion location, and auditory sentence comprehension in persons with chronic stroke and matched controls. The combined results from Experiments 2a and 2b indicate that stroke participants with better auditory orienting attention and a specific left fronto-parietal network intact had greater comprehension of sentences spoken with sentence prosody. Participants with deficits in auditory executive control and/or short-term memory, but with the left angular gyrus and globus pallidus relatively intact, demonstrated better comprehension of sentences spoken with list prosody. Overall, the results from Experiment 2 indicate that, following a left hemisphere stroke, individuals need good auditory attention and an intact left fronto-parietal network to benefit from typical sentence prosody; when cognitive deficits are present and this fronto-parietal network is damaged, list prosody may be more beneficial.
Contributors: LaCroix, Arianna (Author) / Rogalsky, Corianne (Thesis advisor) / Azuma, Tamiko (Committee member) / Braden, B. Blair (Committee member) / Liss, Julie (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Previous studies have found that the detection of near-threshold stimuli is reduced immediately before movement and throughout movement production. This attenuation is thought to arise from an internal forward model that processes an efference copy of the motor command and generates a prediction used to cancel the resulting sensory feedback. Currently, there are no published accounts of the perception of tactile signals at the lips during both speech planning and production. In this study, we measured the responsiveness of the somatosensory system during speech planning using light electrical stimulation below the lower lip, comparing perception across intermixed speaking and silent-reading conditions. Participants judged whether a constant near-threshold electrical stimulus (subject-specific intensity, detected 85% of the time at rest) was present at different time points relative to an initial visual cue. In the speaking condition, participants overtly produced target words shown on a computer monitor. In the reading condition, participants read the same target words silently to themselves without any movement or sound. We found that detection of the stimulus was attenuated in the speaking condition while remaining at a constant level close to the perceptual threshold throughout the silent-reading condition. Attenuation was strongest during speech production and was also evident during the planning period just before speech onset. These results demonstrate a significant decrease in the responsiveness of the somatosensory system during speech production, as well as in the milliseconds before speech is produced, which has implications for disorders such as stuttering and schizophrenia in which somatosensory deficits are pronounced.
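To make the detection measure concrete, here is a minimal sketch of how detection rates for a near-threshold stimulus might be tabulated by condition and time window; the trial records and field names are hypothetical and this is not the study's actual analysis code.

```python
from collections import defaultdict

# Each trial: (condition, time_window, detected); values are hypothetical.
trials = [
    ("speaking", "planning", False), ("speaking", "planning", True),
    ("speaking", "production", False), ("speaking", "production", False),
    ("reading", "planning", True), ("reading", "planning", True),
    ("reading", "production", True), ("reading", "production", False),
]

counts = defaultdict(lambda: [0, 0])   # (condition, window) -> [detected, total]
for condition, window, detected in trials:
    counts[(condition, window)][0] += int(detected)
    counts[(condition, window)][1] += 1

for (condition, window), (hits, total) in sorted(counts.items()):
    print(f"{condition:9s} {window:10s} detection rate = {hits / total:.2f}")
```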
Contributors: Mcguffin, Brianna Jean (Author) / Daliri, Ayoub (Thesis director) / Liss, Julie (Committee member) / Department of Psychology (Contributor) / School of Life Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
Previous research has shown that a loud acoustic stimulus can trigger an individual's prepared movement plan. This movement response is referred to as a startle-evoked movement (SEM). SEM has been observed in stroke survivors, where results have shown that SEM enhances single-joint movements that are usually performed with difficulty. While the presence of SEM in the stroke survivor population advances scientific understanding of movement capabilities following a stroke, published studies of the SEM phenomenon have examined only single-joint movements. The ability of SEM to generate multi-joint movements is understudied, which limits SEM as a potential therapy tool. To apply SEM as a therapy tool, however, the biomechanics of the arm in multi-joint movement planning and execution must be better understood. Thus, the objective of our study was to evaluate whether SEM could elicit multi-joint reaching movements that were accurate in an unrestrained, two-dimensional workspace. Data were collected from ten subjects with no previous neck, arm, or brain injury. Each subject performed a reaching task to five targets that were equally spaced in a semicircle to create a two-dimensional workspace. The subject reached to each target following a sequence of two non-startling acoustic cues: "Get Ready" and "Go". A loud acoustic stimulus was randomly substituted for the "Go" cue. We hypothesized that SEM is accessible and accurate for unrestricted multi-joint reaching tasks in a functional workspace and is therefore independent of movement direction. Our results show that SEM is possible in all five target directions. The probability of evoking SEM and the movement kinematics (i.e., total movement time, linear deviation, average velocity) did not differ statistically across targets. Thus, we conclude that SEM is possible in a functional workspace and does not depend on where arm stability is maximized. Moreover, coordinated preparation and storage of a multi-joint movement is indeed possible.
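As a rough illustration of the kinematic measures reported above (total movement time, linear deviation, average velocity), the sketch below computes them from a sampled two-dimensional hand path; the sampling rate and trajectory are invented for the example.

```python
import numpy as np

def reach_kinematics(xy, fs=100.0):
    """Compute simple kinematic measures from a 2D hand path.

    xy: (n_samples, 2) array of hand positions (cm), start to target.
    fs: sampling rate (Hz).
    """
    xy = np.asarray(xy, dtype=float)
    movement_time = (len(xy) - 1) / fs                      # s
    step_lengths = np.linalg.norm(np.diff(xy, axis=0), axis=1)
    path_length = step_lengths.sum()                        # cm
    straight_line = np.linalg.norm(xy[-1] - xy[0])          # cm
    # Linear deviation: max perpendicular distance from the straight start-target line.
    direction = (xy[-1] - xy[0]) / straight_line
    rel = xy - xy[0]
    perp = rel - np.outer(rel @ direction, direction)
    linear_deviation = np.linalg.norm(perp, axis=1).max()   # cm
    average_velocity = path_length / movement_time          # cm/s
    return movement_time, linear_deviation, average_velocity

# Hypothetical slightly curved reach sampled at 100 Hz.
t = np.linspace(0, 1, 101)
path = np.column_stack([20 * t, 2 * np.sin(np.pi * t)])
print(reach_kinematics(path))
```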
Contributors: Ossanna, Meilin Ryan (Author) / Honeycutt, Claire (Thesis director) / Schaefer, Sydney (Committee member) / Harrington Bioengineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-12
Description

The distinctions between the neural resources supporting speech and music comprehension have long been studied through conditions such as aphasia and amusia and through neuroimaging of control subjects. While many models have emerged to describe the different networks recruited in response to speech and music stimuli, many questions remain, especially regarding left-hemisphere strokes that disrupt typical speech-processing brain networks and how musical training might affect the brain networks recruited for speech after a stroke. This study explores several of these questions. We collected task-based functional MRI data from 12 subjects who had previously experienced a left-hemisphere stroke. Subjects listened to blocks of spoken sentences and novel piano melodies during scanning so that we could examine differences in brain activation in response to speech and music. We hypothesized that speech stimuli would activate right frontal regions and that music stimuli would activate right superior temporal regions more than speech (neither finding has been reported in previous studies of control subjects), as a result of functional changes in the brain following the left-hemisphere stroke, particularly the loss of function in the left temporal lobe. We also hypothesized that music stimuli would produce stronger activation in the right temporal cortex for participants with musical training than for those without. Our results indicate that speech stimuli, compared to rest, activated the anterior superior temporal gyrus bilaterally as well as the right inferior frontal lobe. Music stimuli, compared to rest, did not produce bilateral activation, but rather activated only the right middle temporal gyrus. When the group analysis was performed with musical experience as a covariate, we found that musical training did not affect activation to music stimuli specifically, but there was greater right-hemisphere activation in several regions in response to speech stimuli as a function of years of musical training. The results agree with our hypotheses regarding functional changes in the brain but conflict with our hypothesis about musical expertise. Overall, the study provides interesting starting points for further exploration of how musical neural resources may be recruited for speech processing after damage to typical language networks.
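As a sketch of how a block-design fMRI analysis of this kind is typically set up, the code below builds speech and music block regressors by convolving boxcar functions with a canonical double-gamma HRF; the TR, block durations, and onsets are generic placeholder values, not the study's actual design or software.

```python
import numpy as np
from scipy.stats import gamma

TR = 2.0                          # seconds per volume (hypothetical)
n_vols = 150
block_len = 20.0                  # seconds per stimulus block (hypothetical)
speech_onsets = [20, 100, 180]    # block onset times in seconds (hypothetical)
music_onsets = [60, 140, 220]

def hrf(t):
    """Canonical double-gamma haemodynamic response function."""
    return gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0

def block_regressor(onsets, duration, n_vols, tr, dt=0.1):
    """Boxcar over the blocks, convolved with the HRF, sampled once per TR."""
    t_hi = np.arange(0, n_vols * tr, dt)          # high-resolution time grid
    boxcar = np.zeros_like(t_hi)
    for onset in onsets:
        boxcar[(t_hi >= onset) & (t_hi < onset + duration)] = 1.0
    conv = np.convolve(boxcar, hrf(np.arange(0, 30, dt)))[: len(t_hi)] * dt
    step = int(round(tr / dt))
    return conv[::step]

speech_reg = block_regressor(speech_onsets, block_len, n_vols, TR)
music_reg = block_regressor(music_onsets, block_len, n_vols, TR)
design = np.column_stack([speech_reg, music_reg, np.ones(n_vols)])  # plus intercept
print(design.shape)               # (150, 3): one row per volume
```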

Contributors: Karthigeyan, Vishnu R (Author) / Rogalsky, Corianne (Thesis director) / Daliri, Ayoub (Committee member) / Harrington Bioengineering Program (Contributor) / School of Life Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description
Previous research on impulse control disorders, which are common in adults diagnosed with Parkinson’s disease, was reviewed to determine possible links to impulse control disorders in adults with aphasia. Aphasia is a disorder, often caused by stroke, that can affect speech and language both receptively and expressively. Impulse control disorders (ICDs) (e.g., pathological gambling, hypersexuality, compulsive eating and shopping) can have drastic consequences and can cause harm to the affected individual as well as their caregivers and family. This study sought to identify whether symptoms of ICDs are prevalent in adults with aphasia by using self-report surveys and a Go/No-Go impulsivity computer task. The findings indicate that some measures of impulsivity are significantly heightened in adults who have had a stroke compared with healthy same-age peers and that these differences may be best captured by the self-report surveys. Despite a large body of literature on stroke and quality of life, impulse control in this population has remained largely unexplored. Further investigation of the prevalence of impulse control disorders in adults with aphasia is warranted; this study is a step toward understanding how aphasia and stroke affect the quality of life of those impacted.
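To make the Go/No-Go measure concrete, the sketch below scores a set of trials for commission errors (responses on no-go trials), omission errors, and mean go reaction time; the trial data and field names are hypothetical rather than taken from this study.

```python
# Score a Go/No-Go task; each trial records its type, whether the participant
# responded, and the reaction time in ms (None if no response). Data are hypothetical.
trials = [
    {"type": "go", "responded": True, "rt": 410},
    {"type": "go", "responded": True, "rt": 385},
    {"type": "go", "responded": False, "rt": None},
    {"type": "nogo", "responded": True, "rt": 350},   # commission error (impulsive response)
    {"type": "nogo", "responded": False, "rt": None},
    {"type": "nogo", "responded": False, "rt": None},
]

go = [t for t in trials if t["type"] == "go"]
nogo = [t for t in trials if t["type"] == "nogo"]

commission_rate = sum(t["responded"] for t in nogo) / len(nogo)   # failures to inhibit
omission_rate = sum(not t["responded"] for t in go) / len(go)     # missed go trials
go_rts = [t["rt"] for t in go if t["responded"]]
mean_go_rt = sum(go_rts) / len(go_rts)

print(f"commission error rate: {commission_rate:.2f}")
print(f"omission error rate:   {omission_rate:.2f}")
print(f"mean go RT:            {mean_go_rt:.0f} ms")
```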
Contributors: Majors, Madilyn (Author) / Rogalsky, Corianne (Thesis advisor) / Trueba, Elizabeth (Committee member) / Azuma, Tamiko (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
Stroke is the leading cause of long-term disability in the U.S., with up to 60% of strokes causing speech loss. Individuals with severe stroke, who require the most frequent, intense speech therapy, often cannot adhere to treatments due to high cost and low success rates. Therefore, the ability to make functionally significant changes in individuals with severe post-stroke aphasia remains a key challenge for the rehabilitation community. This dissertation aimed to evaluate the efficacy of Startle Adjuvant Rehabilitation Therapy (START), a tele-enabled, low-cost treatment, in improving quality of life and speech in individuals with moderate-to-severe stroke. START is the exposure to startling acoustic stimuli during the practice of motor tasks in individuals with stroke. START increases the speed and intensity of practice in severely impaired post-stroke reaching, eliciting muscle activity 2-3 times higher than maximum voluntary contraction. Voluntary reaching distance, onset, and final accuracy increased after a session of START, suggesting a rehabilitative effect. However, START has not been evaluated for impaired speech. The objective of this study was to determine whether impaired speech can be elicited by startling acoustic stimuli and whether three days of START training can improve clinical measures of moderate-to-severe post-stroke aphasia and apraxia of speech. This dissertation evaluated START in 42 individuals with post-stroke speech impairment via telehealth in a Phase 0 clinical trial. Results suggest that impaired speech can be elicited by startling acoustic stimuli and that START benefits individuals with moderate-to-severe post-stroke impairments in both linguistic and motor speech domains. This fills an important gap in aphasia care, as many speech therapies remain ineffective and financially inaccessible for patients with severe deficits. START is effective, remotely delivered, and may serve as an affordable adjuvant to traditional therapy for those who have poor access to quality care.
Contributors: Swann, Zoe Elisabeth (Author) / Honeycutt, Claire F (Thesis advisor) / Daliri, Ayoub (Committee member) / Rogalsky, Corianne (Committee member) / Liss, Julie (Committee member) / Schaefer, Sydney (Committee member) / Arizona State University (Publisher)
Created: 2022
Description

Aphasia is an acquired speech-language disorder caused by post-stroke damage to the left hemisphere of the brain. Treating the resulting speech production impairments can be challenging for clinicians because language recovery after stroke is highly variable and lesion size does not predict language outcome (Lazar et al., 2008). Adequate integration between the sensory and motor systems is also critical for many aspects of fluent speech and for correcting speech errors. The present study investigates how delayed-auditory-feedback paradigms, which alter the time scale of sensorimotor interactions in speech, might be useful in characterizing the speech production impairments of individuals with aphasia. To this end, six individuals with aphasia and nine age-matched control subjects were presented with delayed auditory feedback at four different delay intervals during a sentence-reading task. The aphasia group generated more errors than controls in three of the four linguistic categories measured across all delay lengths, but there was no significant main effect of delay and no interaction between group and delay. Acoustic analyses revealed variability within both the control and aphasia groups on all phoneme types. For example, the individual with conduction aphasia showed significantly larger amplitudes at all delays and significantly longer durations at no delay, with these differences diminishing as the delay period increased. Overall, this study suggests that the effects of delayed auditory feedback vary across individuals with aphasia and provides a foundation for future testing of individuals with varying aphasia types and severity levels.
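As an illustration of how a delayed-auditory-feedback manipulation works at the signal level, the sketch below shifts a recorded speech signal by a fixed delay, as would be done before routing it back to the speaker's headphones; the sample rate, delay values, and signal are hypothetical and this is not the study's stimulus-delivery code.

```python
import numpy as np

def delay_feedback(signal, delay_ms, fs):
    """Return the signal delayed by delay_ms, padded with silence at the start.

    This mimics the core of a delayed-auditory-feedback (DAF) paradigm:
    the speaker hears their own voice delay_ms later than they produce it.
    """
    delay_samples = int(round(delay_ms / 1000.0 * fs))
    return np.concatenate([np.zeros(delay_samples), signal])[: len(signal)]

fs = 16000                                  # Hz (hypothetical)
t = np.arange(0, 1.0, 1.0 / fs)
voice = 0.1 * np.sin(2 * np.pi * 120 * t)   # stand-in for a recorded utterance

for delay_ms in (0, 50, 100, 200):          # example delay intervals (hypothetical)
    feedback = delay_feedback(voice, delay_ms, fs)
    print(f"{delay_ms:3d} ms delay -> first nonzero sample at index "
          f"{int(np.argmax(np.abs(feedback) > 0))}")
```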

Contributors: Pettijohn, Madilyn (Author) / Rogalsky, Corianne (Thesis director) / Daliri, Ayoub (Committee member) / Barrett, The Honors College (Contributor) / College of Health Solutions (Contributor)
Created: 2022-05