Description
The basal ganglia are four sub-cortical nuclei associated with motor control and reward learning. They are part of numerous larger, mostly segregated loops in which the basal ganglia receive inputs from specific regions of cortex. Converging on these inputs are dopaminergic neurons that alter their firing based on received and/or predicted rewarding outcomes of a behavior. The basal ganglia's output feeds through the thalamus back to the areas of cortex where the loop originated. Understanding the dynamic interactions between the various parts of these loops is critical to understanding the basal ganglia's role in motor control and reward-based learning. This work developed several experimental techniques that can be applied to further study basal ganglia function. The first technique used micro-volume injections of low-concentration muscimol to decrease the firing rates of recorded neurons in a limited area of cortex in rats. Afterwards, an artificial cerebrospinal fluid flush was injected to rapidly eliminate the muscimol's effects. This technique contained the effects of muscimol to a volume of approximately 1 mm radius and limited the duration of the drug effect to less than one hour. It could be used to temporarily perturb a small portion of the loops involving the basal ganglia and then observe how these effects propagate to other connected regions. The second part applied self-organizing maps (SOMs) to find temporal patterns in neural firing rates that are independent of behavior. The frequency distribution of detected patterns across these maps can then be used to determine whether changes in neural activity are occurring over time. The final technique focused on the role of the basal ganglia in reward learning. A new conditioning technique was created to increase the occurrence of selected patterns of neural activity without utilizing any external reward or behavior. A pattern of neural activity in the cortex of rats was selected using an SOM.
The pattern was then reinforced by pairing it with electrical stimulation of the medial forebrain bundle, triggering dopamine release in the basal ganglia. Ultimately, this technique proved unsuccessful, possibly due to poor selection of the patterns being reinforced.
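As an illustration of the SOM-based pattern detection described above, here is a minimal NumPy sketch of training a self-organizing map on windows of binned firing rates and tallying how often each map unit is the best match. The grid size, learning schedule, and Poisson-generated "rates" are all hypothetical stand-ins; the abstract does not specify the dissertation's actual implementation or parameters.

```python
import numpy as np

def train_som(data, grid=(8, 8), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Train a minimal self-organizing map on the rows of `data`."""
    rng = np.random.default_rng(seed)
    h, w = grid
    n_units, dim = h * w, data.shape[1]
    weights = rng.normal(size=(n_units, dim))
    # Grid coordinates of each unit, for the neighborhood function.
    coords = np.array([(i, j) for i in range(h) for j in range(w)], float)
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            t = step / n_steps
            lr = lr0 * (1 - t)                 # decaying learning rate
            sigma = sigma0 * (1 - t) + 0.5     # shrinking neighborhood
            bmu = np.argmin(((weights - x) ** 2).sum(axis=1))  # best-matching unit
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            g = np.exp(-d2 / (2 * sigma ** 2))
            weights += lr * g[:, None] * (x - weights)
            step += 1
    return weights, coords

# Hypothetical data: each row is a short window of binned firing rates.
rates = np.random.default_rng(1).poisson(5.0, size=(500, 20)).astype(float)
weights, coords = train_som(rates)
# Map each window to its best-matching unit; the histogram of unit counts
# is the "frequency distribution of detected patterns" the abstract refers to.
bmus = np.argmin(((rates[:, None, :] - weights[None]) ** 2).sum(-1), axis=1)
counts = np.bincount(bmus, minlength=len(weights))
```

Comparing this histogram across recording sessions is one way to detect slow changes in neural activity independent of behavior.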
Contributors: Baldwin, Nathan Aaron (Author) / Helms Tillery, Stephen I (Thesis advisor) / Castaneda, Edward (Committee member) / Buneo, Christopher A (Committee member) / Muthuswamy, Jitendran (Committee member) / Si, Jennie (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
An accurate sense of upper limb position is crucial to reaching movements, where sensory information about upper limb position and target location is combined to specify critical features of the movement plan. This dissertation was dedicated to studying the mechanisms by which the brain estimates limb position in space and the consequences of misestimating limb position for movements. Two independent but related studies were performed. The first involved characterizing the neural mechanisms of limb position estimation in the non-human primate brain. Single-unit recordings were obtained in area 5 of the posterior parietal cortex in order to examine the role of this area in estimating limb position based on visual and somatic signals (proprioceptive, efference copy). When examined individually, many area 5 neurons were tuned to the position of the limb in the workspace, but very few neurons were modulated by visual feedback. At the population level, however, decoding of limb position was somewhat more accurate when visual feedback was provided. These findings support a role for area 5 in limb position estimation but also suggest that visual signals regarding limb position are only weakly represented in this area, and only at the population level. The second part of this dissertation focused on the consequences of misestimation of limb position for movement production. It is well known that limb movements are inherently variable. This variability could be the result of noise arising at one or more stages of movement production. Here we used biomechanical modeling and simulation techniques to characterize movement variability resulting from noise in estimating limb position ('sensing noise') and in planning required movement vectors ('planning noise'), and compared that to the variability expected due to noise in movement execution.
We found that the effects of sensing and planning related noise on movement variability were dependent upon both the planned movement direction and the initial configuration of the arm and were different in many respects from the effects of execution noise.
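The sensing/planning/execution decomposition above can be illustrated with a toy Monte Carlo simulation. This sketch uses a 2-D point-mass reach with isotropic Gaussian noise injected at each stage, which is far simpler than the biomechanical arm model the dissertation actually used; the target, trial count, and noise magnitudes are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 2000
start = np.array([0.0, 0.0])
target = np.array([10.0, 10.0])   # hypothetical target, in cm

def simulate(sense_sd, plan_sd, exec_sd):
    """Endpoints under noise in sensing hand position, planning the
    movement vector, and executing the movement (all isotropic here)."""
    sensed = start + rng.normal(0, sense_sd, (n_trials, 2))          # sensing noise
    plan = (target - sensed) + rng.normal(0, plan_sd, (n_trials, 2)) # planning noise
    # The movement is executed from the TRUE hand position, so a sensing
    # error shifts the endpoint by the full misestimate.
    return start + plan + rng.normal(0, exec_sd, (n_trials, 2))      # execution noise

for label, sds in [("sensing", (0.5, 0.0, 0.0)),
                   ("planning", (0.0, 0.5, 0.0)),
                   ("execution", (0.0, 0.0, 0.5))]:
    cov = np.cov(simulate(*sds).T)
    print(label, np.sqrt(np.trace(cov)))  # total endpoint spread per source
```

Replacing the isotropic noise with direction- or configuration-dependent covariances is what lets a model like the dissertation's distinguish the signatures of the three noise sources.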
Contributors: Shi, Ying (Author) / Buneo, Christopher A (Thesis advisor) / Helms Tillery, Stephen (Committee member) / Santello, Marco (Committee member) / He, Jiping (Committee member) / Santos, Veronica (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
The ability to plan, execute, and control goal-oriented reaching and grasping movements is among the most essential functions of the brain. Yet these movements are inherently variable, a result of the noise pervading the neural signals underlying sensorimotor processing. The specific influences and interactions of these noise processes remain unclear. Thus, several studies have been performed to elucidate the role and influence of sensorimotor noise on movement variability. The first study focuses on sensory integration and movement planning across the reaching workspace. An experiment was designed to examine the relative contributions of vision and proprioception to movement planning by measuring the rotation of the initial movement direction induced by a perturbation of the visual feedback prior to movement onset. The results suggest that the contribution of vision was relatively consistent across the evaluated workspace depths; however, differences in the influence of vision between the vertical and lateral axes indicate that additional factors beyond vision and proprioception influence movement planning of 3-dimensional movements. While the first study investigated the role of noise in sensorimotor integration, the second and third studies investigated the relative influence of sensorimotor noise on reaching performance. Specifically, they evaluate how the characteristics of neural processing that underlie movement planning and execution manifest in movement variability during natural reaching. Subjects performed reaching movements with and without visual feedback throughout the movement, and the patterns of endpoint variability were compared across movement directions. The results of these studies suggest a primary role of visual feedback noise in shaping patterns of variability and in determining the relative influence of planning- and execution-related noise sources.
The final work considers a computational approach to characterizing how sensorimotor processes interact to shape movement variability. A model of multi-modal feedback control was developed to simulate the interaction of planning and execution noise on reaching variability. The model predictions suggest that anisotropic properties of feedback noise significantly affect the relative influence of planning and execution noise on patterns of reaching variability.
Contributors: Apker, Gregory Allen (Author) / Buneo, Christopher A (Thesis advisor) / Helms Tillery, Stephen (Committee member) / Santello, Marco (Committee member) / Santos, Veronica (Committee member) / Si, Jennie (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Sound localization can be difficult in a reverberant environment. Fortunately, listeners can utilize various perceptual compensatory mechanisms to increase the reliability of sound localization when provided with ambiguous physical evidence. For example, the directional information of echoes can be perceptually suppressed by the direct sound to achieve a single, fused auditory event in a process called the precedence effect (Litovsky et al., 1999). Visual cues also influence sound localization through a phenomenon known as the ventriloquist effect. It is classically demonstrated by a puppeteer who speaks without visible lip movements while moving the mouth of a puppet synchronously with his/her speech (Gelder and Bertelson, 2003). If the ventriloquist is successful, sound will be “captured” by vision and be perceived to be originating at the location of the puppet. This thesis investigates the influence of vision on the spatial localization of audio-visual stimuli. Two types of stereophonic phantom sound sources, created by modulating the inter-stimulus time interval (ISI) or the level difference between two loudspeakers, were used as auditory stimuli. Participants seated in a sound-attenuated room indicated their perceived locations of either the ISI or the level-difference stimuli in free-field conditions. The results showed that light cues influenced auditory spatial perception to a greater extent for the ISI stimuli than for the level-difference stimuli. A binaural signal analysis further revealed that the greater visual bias for the ISI phantom sound sources was correlated with the increasingly ambiguous binaural cues of the ISI signals. This finding suggests that when sound localization cues are unreliable, perceptual decisions become increasingly biased towards vision for finding a sound source. These results support the cue saliency theory underlying cross-modal bias and extend this theory to include stereophonic phantom sound sources.
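For the level-difference stimuli, the perceived azimuth of a stereophonic phantom source is often modeled with the classic tangent panning law. The sketch below is a generic textbook model, not the thesis's own analysis; the speaker angle and gains are hypothetical.

```python
import numpy as np

def tangent_law_azimuth(g_left, g_right, speaker_angle_deg=30.0):
    """Perceived azimuth (degrees, positive toward the left speaker) of a
    stereo phantom source under the tangent panning law,
    tan(phi)/tan(phi0) = (gL - gR)/(gL + gR), with speakers at +/- phi0."""
    phi0 = np.radians(speaker_angle_deg)
    ratio = (g_left - g_right) / (g_left + g_right)
    return float(np.degrees(np.arctan(ratio * np.tan(phi0))))

# Equal gains place the phantom source at the midline; all gain to one
# speaker collapses the phantom source onto that speaker.
mid = tangent_law_azimuth(1.0, 1.0)    # 0 degrees
left = tangent_law_azimuth(1.0, 0.0)   # +30 degrees, i.e. at the left speaker
```

Level differences map smoothly onto azimuth under this law, whereas ISI-based phantom sources produce the more ambiguous binaural cues the abstract describes.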
Contributors: Montagne, Christopher (Author) / Zhou, Yi (Thesis advisor) / Buneo, Christopher A (Thesis advisor) / Yost, William A. (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Understanding where our bodies are in space is imperative for motor control, particularly for actions such as goal-directed reaching. Multisensory integration is crucial for reducing uncertainty in arm position estimates. This dissertation examines time- and frequency-domain correlates of visual-proprioceptive integration during an arm-position maintenance task. Neural recordings were obtained from two different cortical areas as non-human primates performed a center-out reaching task in a virtual reality environment. Following a reach, animals maintained the end-point position of their arm under unimodal (proprioception only) and bimodal (proprioception and vision) conditions. In both areas, time-domain and multi-taper spectral analysis methods were used to quantify changes in the spiking, local field potential (LFP), and spike-field coherence during arm-position maintenance.

In both areas, individual neurons were classified based on the spectrum of their spiking patterns. A large proportion of cells in the superior parietal lobule (SPL) exhibited sensory-condition-specific oscillatory spiking in the beta (13-30 Hz) frequency band. Cells in the inferior parietal lobule (IPL) typically showed a more diverse mix of oscillatory and refractory spiking patterns during the task in response to changing sensory condition. Contrary to the assumptions made in many modeling studies, none of the cells in the SPL or IPL exhibited Poisson spiking statistics.

Evoked LFPs in both areas exhibited greater effects of target location than of visual condition, though the evoked responses in the preferred reach direction were generally suppressed in the bimodal condition relative to the unimodal condition. Significant effects of target location on evoked responses were observed during the movement period of the task as well.

In the frequency domain, LFP power in both cortical areas was enhanced in the beta band during the position-estimation epoch of the task, indicating that LFP beta oscillations may be important for maintaining the ongoing state. This was particularly evident at the population level, with a clear increase in alpha and beta power. Differences in spectral power between conditions also became apparent at the population level, with power during bimodal trials suppressed relative to unimodal trials. The spike-field coherence results were inconclusive in both the SPL and IPL, with no clear correlation between the incidence of beta oscillations and significant beta coherence.
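A minimal sketch of the multi-taper spectral estimation mentioned above, using SciPy's DPSS (Slepian) tapers to estimate LFP power and locate a beta-band peak. The sampling rate, time-bandwidth product, and synthetic "LFP" are hypothetical stand-ins for the recorded data.

```python
import numpy as np
from scipy.signal.windows import dpss

def multitaper_psd(x, fs, nw=3.0):
    """Multi-taper PSD estimate: average the periodograms of the signal
    multiplied by K = 2*NW - 1 orthogonal DPSS (Slepian) tapers."""
    n = len(x)
    k = int(2 * nw) - 1
    tapers = dpss(n, nw, Kmax=k)                       # shape (k, n)
    spectra = np.abs(np.fft.rfft(tapers * x[None, :], axis=1)) ** 2
    psd = spectra.mean(axis=0) / fs                    # average across tapers
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, psd

# Hypothetical 1 s "LFP" snippet: a 20 Hz beta-band oscillation in noise.
fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
lfp = np.sin(2 * np.pi * 20 * t) + 0.5 * np.random.default_rng(0).normal(size=t.size)
freqs, psd = multitaper_psd(lfp, fs)
peak_hz = freqs[np.argmax(psd)]   # lands in the beta (13-30 Hz) band
```

Averaging over several orthogonal tapers trades a small amount of frequency resolution for a large reduction in estimator variance, which matters for short epochs like the position-estimation window here.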
Contributors: VanGilder, Paul (Author) / Buneo, Christopher A (Thesis advisor) / Helms-Tillery, Stephen (Committee member) / Santello, Marco (Committee member) / Muthuswamy, Jit (Committee member) / Foldes, Stephen (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
The activation of the primary motor cortex (M1) is common in speech perception tasks that involve difficult listening conditions. Although the challenge of recognizing and discriminating non-native speech sounds appears to be an instantiation of listening under difficult circumstances, it is still unknown whether M1 recruitment facilitates second-language speech perception. The purpose of this study was to investigate the role of M1 associated with speech motor centers in processing acoustic inputs in the native (L1) and second language (L2), using repetitive transcranial magnetic stimulation (rTMS) to selectively alter neural activity in M1. Thirty-six healthy English/Spanish bilingual subjects participated in the experiment. Performance on a listening word-to-picture matching task was measured before and after real- and sham-rTMS to the M1 region associated with the orbicularis oris (lip muscle). Vowel space area (VSA), obtained from recordings of participants reading a passage in L2 before and after real-rTMS, was calculated to determine its utility as an rTMS aftereffect measure. There was high variability in the aftereffect of the rTMS protocol to the lip muscle among the participants. Approximately 50% of participants showed an inhibitory effect of rTMS, evidenced by smaller motor evoked potential (MEP) areas, whereas the other 50% had a facilitatory effect, with larger MEPs. This suggests that rTMS has a complex influence on M1 excitability, and that relying on grand-average results can obscure important individual differences in rTMS physiological and functional outcomes. Evidence of motor support for word recognition in the L2 was found. Participants showing an inhibitory aftereffect of rTMS on M1 produced slower and less accurate responses in the L2 task, whereas those showing a facilitatory aftereffect of rTMS on M1 produced more accurate responses in L2.
In contrast, no effect of rTMS was found on the L1, where accuracy and speed were very similar after sham- and real-rTMS. The L2 VSA measure was indicative of the aftereffect of rTMS to M1 associated with speech production, supporting its utility as an rTMS aftereffect measure. This result revealed an interesting and novel relation between cerebral motor cortex activation and speech measures.
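Vowel space area is commonly computed as the area of the polygon formed by the corner vowels in (F1, F2) formant space. The sketch below uses the shoelace formula; the formant values are generic textbook averages, not this study's measurements.

```python
import numpy as np

def vowel_space_area(formants):
    """Area of the polygon traced by the corner vowels in (F1, F2) space,
    computed with the shoelace formula. `formants` is an ordered list of
    (F1, F2) pairs going around the polygon."""
    pts = np.asarray(formants, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

# Generic textbook mean formant values (Hz) for four English corner vowels.
corners = [(270, 2290),   # /i/
           (660, 1720),   # /ae/
           (730, 1090),   # /a/
           (300, 870)]    # /u/
vsa_hz2 = vowel_space_area(corners)   # area in Hz^2
```

A shrinking VSA indicates centralized, less distinct vowels, which is why it can serve as a sensitive articulatory read-out of an rTMS aftereffect on speech motor cortex.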
Contributors: Barragan, Beatriz (Author) / Liss, Julie (Thesis advisor) / Berisha, Visar (Committee member) / Rogalsky, Corianne (Committee member) / Restrepo, Adelaida (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
With a growing number of adults with autism spectrum disorder (ASD), more and more research has been conducted on majority-male cohorts with ASD spanning youth, adolescence, and, in some cases, older age. Currently, males make up the majority of individuals diagnosed with ASD; however, recent research suggests that the gender gap is closing due to more advanced screening and a better understanding of how females with ASD present their symptoms. Little research has been published on the neurocognitive differences that exist between older adults with ASD and neurotypical (NT) counterparts, and none has specifically addressed older women with ASD. This study utilized neuroimaging and neuropsychological tests to examine differences by diagnosis and sex across four distinct groups: older men with ASD, older women with ASD, older NT men, and older NT women. In each group, hippocampal size (via FreeSurfer) was analyzed for differences as well as for correlations with neuropsychological tests. Participants (ASD female, n = 12; NT female, n = 14; ASD male, n = 30; NT male, n = 22) were similar in age, IQ, and education. The results of the study indicated that the ASD group as a whole performed worse on executive functioning tasks (Wisconsin Card Sorting Test, Trail Making Test) and memory-related tasks (Rey Auditory Verbal Learning Test, Wechsler Memory Scale: Visual Reproduction) than the NT group. Interactions of sex by diagnosis approached significance only for WCST non-perseverative errors, with women with ASD performing worse than NT women but no group difference between men. Effect sizes between the female groups (ASD female vs. NT female) were more than double those of the male groups (ASD male vs. NT male) for all WCST and AVLT measures. Participants with ASD had significantly smaller right hippocampal volumes than NT participants.
In addition, older women showed larger hippocampal volumes than older men when corrected for total intracranial volume (TIV). Overall, NT females showed significant correlations between hippocampal volume and all neuropsychological tests, whereas no other group showed significant correlations. These results suggest a tighter coupling between hippocampal size and cognition in NT females than in NT males and in both sexes with ASD. This study promotes further understanding of the neuropsychological differences between older men and women, both with and without ASD. Further research is needed on a larger sample of older women with and without ASD.
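The TIV correction mentioned above is often implemented with the residual (covariate-adjustment) method: regress the structure's volume on TIV across the sample and analyze the residuals. A sketch with hypothetical volumes follows; the abstract does not specify which correction method the study actually used.

```python
import numpy as np

def tiv_residual_correct(volume, tiv):
    """Residual-method TIV correction: remove the linear dependence of a
    structure's volume on total intracranial volume across the sample."""
    slope, _intercept = np.polyfit(tiv, volume, 1)
    # Subtracting slope * (TIV - mean TIV) keeps the sample mean intact
    # while making the adjusted volumes uncorrelated with TIV.
    return volume - slope * (tiv - tiv.mean())

# Hypothetical sample: hippocampal volume (cm^3) scales with TIV (cm^3).
rng = np.random.default_rng(0)
tiv = rng.normal(1500.0, 120.0, size=60)
hippo = 0.003 * tiv + rng.normal(0.0, 0.1, size=60)
hippo_adj = tiv_residual_correct(hippo, tiv)
```

Because women on average have smaller TIV than men, comparing raw hippocampal volumes across sexes without such a correction can reverse the direction of a group difference.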
Contributors: Webb, Christen Len (Author) / Braden, B. Blair (Thesis advisor) / Azuma, Tamiko (Committee member) / Dixon, Maria (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Language acquisition is a phenomenon we all experience, and though it is well studied, many questions remain regarding the neural bases of language. Whether in a hearing speaker or a Deaf signer, spoken and signed language acquisition (with eventual proficiency) develop similarly and share common neural networks. While signed language and spoken language engage completely different sensory modalities (visual-manual versus the more common auditory-oromotor), both languages share grammatical structures and contain syntactic intricacies innate to all languages. Thus, studies of multi-modal bilingualism (e.g., a native English speaker learning American Sign Language) can lead to a better understanding of the neurobiology of second language acquisition, and of language more broadly. For example, can the well-developed visual-spatial processing networks in English speakers support grammatical processing in sign language, which relies heavily on location and movement? The present study furthers the understanding of the neural correlates of second language acquisition by studying normal-hearing late L2 learners of American Sign Language (ASL). Twenty English-speaking ASU students enrolled in advanced American Sign Language coursework participated in our functional magnetic resonance imaging (fMRI) study. The aim was to identify the brain networks engaged in syntactic processing of ASL sentences in late L2 ASL learners. While many studies have addressed the neurobiology of acquiring a second spoken language, no previous study to our knowledge has examined the brain networks supporting syntactic processing in bimodal bilinguals. We examined the brain networks engaged while perceiving ASL sentences compared to ASL word lists, as well as written English sentences and word lists.
We hypothesized that our findings in late bimodal bilinguals would largely coincide with the unimodal bilingual literature, but with a few notable differences, including additional attention networks being engaged by ASL processing. Our results suggest that there is a high degree of overlap in sentence processing networks for ASL and English. There are also important differences regarding the recruitment of speech comprehension, visual-spatial, and domain-general brain networks. Our findings suggest that well-known sentence comprehension and syntactic processing regions for spoken languages are flexible and modality-independent.
Contributors: Mickelsen, Soren Brooks (Co-author) / Johnson, Lisa (Co-author) / Rogalsky, Corianne (Thesis director) / Azuma, Tamiko (Committee member) / Howard, Pamela (Committee member) / Department of Speech and Hearing Science (Contributor) / School of Human Evolution and Social Change (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
The International Dyslexia Association defines dyslexia as a learning disorder characterized by poor spelling, decoding, and word recognition abilities. There is still no known cause of dyslexia, although it is a very common disability that affects 1 in 10 people. Previous fMRI and MRI research in dyslexia has explored the neural correlates of hemispheric lateralization and phonemic awareness. The present study investigated the underlying neurobiology of five adults with dyslexia compared to age- and sex-matched control subjects using structural and functional magnetic resonance imaging. All subjects completed a large battery of behavioral tasks as part of a larger study and underwent functional and structural MRI acquisition. These data were collected and preprocessed at the University of Washington. Analyses focused on examining the neural correlates of hemispheric lateralization, letter reversal mistakes, reduced processing speed, and phonemic awareness. There were no significant findings of hemispheric differences between subjects with dyslexia and controls. The subject making the largest number of letter reversal errors showed deactivation in the cerebellum during the fMRI language task. Cerebellar white matter volume and premotor cortex surface area were largest in the individual with the slowest tapping reaction time. Phonemic decoding efficiency was highly correlated with neural activation in the primary motor cortex during the fMRI motor task (r = 0.6). Findings from the present study suggest that brain regions involved in motor control, such as the cerebellum, premotor cortex, and primary motor cortex, may have a larger role in dyslexia than previously considered. Future studies are needed to further distinguish the role of the cerebellum and other motor regions in relation to motor control and language processing deficits related to dyslexia.
Contributors: Houlihan, Chloe Carissa Prince (Author) / Rogalsky, Corianne (Thesis director) / Peter, Beate (Committee member) / Harrington Bioengineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-12
Description
Audiovisual (AV) integration is a fundamental component of face-to-face communication. Visual cues generally aid auditory comprehension of communicative intent through our innate ability to “fuse” auditory and visual information. However, our capacity for multisensory integration can be affected by damage to the brain. Previous neuroimaging studies have identified the superior temporal sulcus (STS) as the center for AV integration, while others suggest inferior frontal and motor regions. However, few studies have analyzed the effect of stroke or other brain damage on multisensory integration in humans. The present study examines the effect of lesion location on auditory and AV speech perception through behavioral and structural imaging methodologies in 41 left-hemisphere participants with chronic focal cerebral damage. Participants completed two behavioral tasks of speech perception: an auditory speech perception task and a classic McGurk paradigm measuring congruent (auditory and visual stimuli match) and incongruent (auditory and visual stimuli do not match, creating a “fused” percept of a novel stimulus) AV speech perception. Overall, participants performed well above chance on both tasks. Voxel-based lesion symptom mapping (VLSM) across all 41 participants identified several regions as critical for speech perception depending on trial type. Heschl's gyrus and the supramarginal gyrus were identified as critical for auditory speech perception, the basal ganglia were critical for speech perception in AV congruent trials, and the middle temporal gyrus/STS were critical in AV incongruent trials. VLSM analyses of the AV incongruent trials were used to further clarify the origin of “errors”, i.e., lack of fusion.
Auditory capture (auditory stimulus) responses were attributed to visual processing deficits caused by lesions in the posterior temporal lobe, whereas visual capture (visual stimulus) responses were attributed to lesions in the anterior temporal cortex, including the temporal pole, which is widely considered to be an amodal semantic hub. The implication of anterior temporal regions in AV integration is novel and warrants further study. The behavioral and VLSM results are discussed in relation to previous neuroimaging and case-study evidence; broadly, our findings coincide with previous work indicating that multisensory superior temporal cortex, not frontal motor circuits, is critical for AV integration.
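At its core, VLSM is a mass-univariate comparison: at each voxel, patients with a lesion there are compared to patients without one on the behavioral score. Below is a minimal sketch with simulated data; the lesion maps, scores, and the "critical" voxel are all synthetic, and a real VLSM analysis additionally requires multiple-comparison correction, which is omitted here.

```python
import numpy as np
from scipy.stats import ttest_ind

def vlsm(lesions, scores, min_n=2):
    """Mass-univariate VLSM: at each voxel, an independent-samples t-test
    compares the behavioral scores of patients with vs. without a lesion."""
    n_vox = lesions.shape[1]
    tvals = np.full(n_vox, np.nan)
    pvals = np.full(n_vox, np.nan)
    for v in range(n_vox):
        hit = lesions[:, v].astype(bool)
        # Skip voxels lesioned in too few (or spared in too few) patients.
        if hit.sum() < min_n or (~hit).sum() < min_n:
            continue
        t, p = ttest_ind(scores[hit], scores[~hit])
        tvals[v], pvals[v] = t, p
    return tvals, pvals

# Synthetic data: 41 patients, 100 voxels; a lesion at voxel 10 is made
# to lower the behavioral score, mimicking a "critical" region.
rng = np.random.default_rng(0)
lesions = rng.random((41, 100)) < 0.3
scores = rng.normal(80.0, 5.0, 41) - 15.0 * lesions[:, 10]
tvals, pvals = vlsm(lesions, scores)
```

Voxels where the lesioned group scores reliably lower (large negative t) are the ones flagged as critical for the behavior, which is how regions like Heschl's gyrus and the STS emerge from this analysis.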
Contributors: Cai, Julia (Author) / Rogalsky, Corianne (Thesis advisor) / Azuma, Tamiko (Committee member) / Liss, Julie (Committee member) / Arizona State University (Publisher)
Created: 2017