Matching Items (13)
Description
The brain uses the somatosensory system to interact with the environment and control movements. Additionally, many movement disorders are associated with deficits in the somatosensory system. Thus, understanding the somatosensory system is essential for developing treatments for movement disorders. Previous studies have extensively examined the role of the somatosensory system in controlling the lower and upper extremities; however, little is known about the contributions of the orofacial somatosensory system. The overall goal of this study was to determine factors that influence the sensitivity of the orofacial somatosensory system. To measure the somatosensory system's sensitivity, transcutaneous electrical current stimulation was applied to the skin overlying the trigeminal nerve on the lower portion of the face. After applying stimulation, participants' sensitivity was determined through the detection of the electrical stimuli (i.e., perceptual threshold). The data analysis focused on the impact of (1) stimulation parameters, (2) electrode placement, and (3) motor tasks on the perceptual threshold. The results showed that, as expected, stimulation parameters (such as stimulation frequency and duration) influenced perceptual thresholds. However, electrode placement (left vs. right side of the face) and motor tasks (lip contraction vs. rest) did not influence perceptual thresholds. Overall, these findings have important implications for designing and developing therapeutic neuromodulation techniques based on trigeminal nerve stimulation.
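The abstract does not specify how the perceptual threshold was estimated; as an illustration only, detection thresholds of this kind are often obtained with an adaptive staircase. A minimal sketch, in which the function name, step size, and simulated observer are all hypothetical:

```python
def staircase_threshold(detects, start=2.0, step=0.2, reversals_needed=6):
    """Simple 1-up/1-down staircase: lower the stimulus intensity after a
    detection, raise it after a miss, and estimate the threshold as the
    mean of the intensities at which the direction of change reversed.
    `detects(intensity)` returns True if the stimulus was detected."""
    intensity, direction = start, -1
    reversal_points = []
    while len(reversal_points) < reversals_needed:
        new_direction = -1 if detects(intensity) else +1
        if new_direction != direction:
            reversal_points.append(intensity)
            direction = new_direction
        intensity = max(0.0, intensity + new_direction * step)
    return sum(reversal_points) / len(reversal_points)

# Simulated noiseless observer with a true threshold of 1.0 (arbitrary units)
estimate = staircase_threshold(lambda i: i >= 1.0)
```

With a noiseless observer the staircase oscillates one step above and below the true threshold, so the estimate lands between the two reversal intensities.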
Contributors: Khoury, Maya Elie (Author) / Daliri, Ayoub (Thesis advisor) / Patten, Jake (Committee member) / Liss, Julie (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
Stroke is the leading cause of long-term disability in the U.S., with up to 60% of strokes causing speech loss. Individuals with severe stroke, who require the most frequent, intense speech therapy, often cannot adhere to treatments due to high cost and low success rates. Therefore, the ability to make functionally significant changes in individuals with severe post-stroke aphasia remains a key challenge for the rehabilitation community. This dissertation aimed to evaluate the efficacy of Startle Adjuvant Rehabilitation Therapy (START), a tele-enabled, low-cost treatment, to improve quality of life and speech in individuals with moderate-to-severe stroke. START is the exposure to startling acoustic stimuli during practice of motor tasks in individuals with stroke. START increases the speed and intensity of practice in severely impaired post-stroke reaching, eliciting muscle activity 2-3 times higher than maximum voluntary contraction. Voluntary reaching distance, onset, and final accuracy increased after a session of START, suggesting a rehabilitative effect. However, START has not been evaluated during impaired speech. The objective of this study is to determine whether impaired speech can be elicited by startling acoustic stimuli, and whether three days of START training can enhance clinical measures of moderate-to-severe post-stroke aphasia and apraxia of speech. This dissertation evaluates START in 42 individuals with post-stroke speech impairment via telehealth in a Phase 0 clinical trial. Results suggest that impaired speech can be elicited by startling acoustic stimuli and that START benefits individuals with moderate-to-severe post-stroke impairments in both linguistic and motor speech domains. This fills an important gap in aphasia care, as many speech therapies remain ineffective and financially inaccessible for patients with severe deficits.
START is effective, remotely delivered, and may serve as an affordable adjuvant to traditional therapy for those who have poor access to quality care.
Contributors: Swann, Zoe Elisabeth (Author) / Honeycutt, Claire F (Thesis advisor) / Daliri, Ayoub (Committee member) / Rogalsky, Corianne (Committee member) / Liss, Julie (Committee member) / Schaefer, Sydney (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
Multisensory integration is the process by which information from different sensory modalities is integrated by the nervous system. This process is important not only from a basic science perspective but also for translational reasons, e.g., for the development of closed-loop neural prosthetic systems. A mixed virtual reality platform was developed to study the neural mechanisms of multisensory integration for the upper limb during motor planning. The platform allows for selection of different arms and manipulation of the locations of physical and virtual target cues in the environment. The system was tested with two non-human primates (NHP) trained to reach to multiple virtual targets. Arm kinematic data as well as neural spiking data from primary motor (M1) and dorsal premotor cortex (PMd) were collected. The task involved manipulating visual information about initial arm position by rendering the virtual avatar arm in either its actual position (veridical (V) condition) or in a different shifted (e.g., small vs large shifts) position (perturbed (P) condition) prior to movement. Tactile feedback was modulated in blocks by placing or removing the physical start cue on the table (tactile (T), and no-tactile (NT) conditions, respectively). Behaviorally, errors in initial movement direction were larger when the physical start cue was absent. Slightly larger directional errors were found in the P condition compared to the V condition for some movement directions. Both effects were consistent with the idea that erroneous or reduced information about initial hand location led to movement direction-dependent reach planning errors. Neural correlates of these behavioral effects were probed using population decoding techniques. For small shifts in the visual position of the arm, no differences in decoding accuracy between the T and NT conditions were observed in either M1 or PMd. 
However, for larger visual shifts, decoding accuracy decreased in the NT condition, but only in PMd. Thus, activity in PMd, but not M1, may reflect the uncertainty in reach planning that results when sensory cues regarding initial hand position are erroneous or absent.
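The abstract does not name the specific population decoding technique used; as one illustrative possibility, decoding accuracy for movement direction can be computed with a leave-one-out nearest-centroid decoder over population firing-rate vectors. All data, names, and tuning values below are synthetic assumptions:

```python
import math
import random

random.seed(0)

def nearest_centroid_accuracy(trials):
    """Leave-one-out decoding of movement direction from population firing
    rates: classify each held-out trial by the closest class centroid
    computed from the remaining trials."""
    correct = 0
    for i, (rates, label) in enumerate(trials):
        sums, counts = {}, {}
        for j, (r, lab) in enumerate(trials):
            if j == i:
                continue  # hold out the test trial
            s = sums.setdefault(lab, [0.0] * len(r))
            for k, v in enumerate(r):
                s[k] += v
            counts[lab] = counts.get(lab, 0) + 1
        centroids = {lab: [v / counts[lab] for v in s] for lab, s in sums.items()}
        pred = min(centroids, key=lambda lab: math.dist(rates, centroids[lab]))
        correct += pred == label
    return correct / len(trials)

# Synthetic data: 2 reach directions x 20 trials, 5 neurons, low noise
tuning = {0: [10, 2, 8, 1, 5], 1: [2, 9, 1, 7, 4]}
trials = [([m + random.gauss(0, 1) for m in tuning[d]], d)
          for d in (0, 1) for _ in range(20)]
acc = nearest_centroid_accuracy(trials)
```

Comparing `acc` across conditions (e.g., tactile vs. no-tactile blocks) is the kind of contrast the decoding analysis above describes.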
Contributors: Phataraphruk, Preyaporn Kris (Author) / Buneo, Christopher A (Thesis advisor) / Zhou, Yi (Committee member) / Helms Tillery, Steve (Committee member) / Greger, Bradley (Committee member) / Santello, Marco (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
Diffusion Tensor Imaging (DTI) may be used to understand brain differences in Parkinson's Disease (PD). Within the last couple of decades there has been an explosion of development in neuroimaging techniques. Today, it is possible to monitor, with little delay, where the brain needs blood during a specific task, as with functional Magnetic Resonance Imaging (fMRI). It is also possible to track and visualize where, and at which orientation, water molecules in the brain are moving, as in DTI. Data on diseases such as PD have grown considerably, and it is now known that people with PD can be assessed with cognitive tests in combination with neuroimaging to determine whether they have cognitive decline in addition to any decline in motor ability. The Montreal Cognitive Assessment (MoCA), Modified Semantic Fluency Test (MSF), and Mini-Mental State Exam (MMSE) are the primary tools, often combined with fMRI or DTI, for diagnosing whether people with PD also have a mild cognitive impairment (MCI). The current thesis explored a cohort of PD patients classified based on their MoCA, MSF, and Lexical Fluency (LF) scores. The results indicate specific brain differences depending on whether PD patients were low or high scorers on LF and MoCA. The current study's findings add to the existing literature suggesting that DTI may be more sensitive in detecting differences based on clinical scores.
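As background for the DTI analysis, the most widely used voxel-wise DTI measure, fractional anisotropy (FA), is computed from the three eigenvalues of the diffusion tensor. A minimal sketch (the eigenvalues below are illustrative, not the study's data):

```python
import math

def fractional_anisotropy(l1, l2, l3):
    """Fractional anisotropy from the three diffusion-tensor eigenvalues:
    FA = sqrt(3/2) * ||lambda - mean(lambda)|| / ||lambda||, ranging from
    0 (isotropic diffusion) to 1 (diffusion along a single axis)."""
    mean = (l1 + l2 + l3) / 3
    num = math.sqrt((l1 - mean) ** 2 + (l2 - mean) ** 2 + (l3 - mean) ** 2)
    den = math.sqrt(l1 ** 2 + l2 ** 2 + l3 ** 2)
    return math.sqrt(1.5) * num / den

fa_isotropic = fractional_anisotropy(1.0, 1.0, 1.0)  # equal eigenvalues -> 0
fa_fiber = fractional_anisotropy(1.7, 0.3, 0.3)      # e.g., coherent white matter
```

Group differences in FA along white-matter tracts are the kind of "brain differences" a DTI comparison between low and high scorers would probe.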
Contributors: Andrade, Eric (Author) / Ofori, Edward (Thesis advisor) / Zhou, Yi (Committee member) / Liss, Julie (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
The ability to detect and correct errors during and after speech production is essential for maintaining accuracy and avoiding disruption in communication. Thus, it is crucial to understand the basic mechanisms underlying how the speech-motor system evaluates different errors and correspondingly corrects them. This study aims to explore the impact of three different features of errors, introduced by formant perturbations, on corrective and adaptive responses: (1) magnitude of errors, (2) direction of errors, and (3) extent of exposure to errors. Participants were asked to produce the vowel /ε/ in the context of consonant-vowel-consonant words. Participant-specific formant perturbations were applied at three magnitudes (0.5, 1, and 1.5) along the /ε-æ/ line, in two directions: a simultaneous F1-F2 shift (i.e., a shift in the /ε-æ/ direction) and a shift to outside the vowel space. Perturbations were applied randomly in a compensation paradigm, so each perturbed trial was preceded and succeeded by several unperturbed trials. It was observed that (1) corrective and adaptive responses were larger for larger-magnitude errors, (2) corrective and adaptive responses were larger for errors in the /ε-æ/ direction, (3) corrective and adaptive responses were generally in the /ε-ɪ/ direction regardless of perturbation direction and magnitude, and (4) corrective responses were larger for perturbations in the earlier trials of the experiment.
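A participant-specific shift along the /ε-æ/ line can be sketched as linear interpolation (or extrapolation, for magnitude 1.5) between the two vowels' formants. The formant values below are hypothetical, not the study's measurements:

```python
def perturb_formants(f_eps, f_ae, magnitude):
    """Shift produced (F1, F2) formants along the participant-specific
    /eps/-/ae/ line by `magnitude` times the inter-vowel distance.
    magnitude 1.0 lands on /ae/; 1.5 overshoots past it."""
    return tuple(fe + magnitude * (fa - fe) for fe, fa in zip(f_eps, f_ae))

eps = (580.0, 1800.0)  # hypothetical /eps/ (F1, F2) in Hz
ae = (700.0, 1700.0)   # hypothetical /ae/ (F1, F2) in Hz
shifted = perturb_formants(eps, ae, 1.5)
```

With these values, the magnitude-1.5 perturbation raises F1 by 180 Hz and lowers F2 by 150 Hz, i.e., past /æ/ along the same line.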
Contributors: Sreedhar, Anuradha Jyothi (Author) / Daliri, Ayoub (Thesis advisor) / Rogalsky, Corianne (Committee member) / Zhou, Yi (Committee member) / Arizona State University (Publisher)
Created: 2024
Description
Sound localization can be difficult in a reverberant environment. Fortunately, listeners can utilize various perceptual compensatory mechanisms to increase the reliability of sound localization when provided with ambiguous physical evidence. For example, the directional information of echoes can be perceptually suppressed by the direct sound to achieve a single, fused auditory event in a process called the precedence effect (Litovsky et al., 1999). Visual cues also influence sound localization through a phenomenon known as the ventriloquist effect. It is classically demonstrated by a puppeteer who speaks without visible lip movements while moving the mouth of a puppet synchronously with his/her speech (Gelder and Bertelson, 2003). If the ventriloquist is successful, sound will be "captured" by vision and be perceived to originate at the location of the puppet. This thesis investigates the influence of vision on the spatial localization of audio-visual stimuli. Two types of stereophonic phantom sound sources, created by modulating either the inter-stimulus time interval (ISI) or the level difference between two loudspeakers, were used as auditory stimuli. Participants seated in a sound-attenuated room indicated the perceived locations of these ISI and level-difference stimuli under free-field conditions. The results showed that the light cues influenced auditory spatial perception to a greater extent for the ISI stimuli than for the level-difference stimuli. A binaural signal analysis further revealed that the greater visual bias for the ISI phantom sound sources was correlated with the increasingly ambiguous binaural cues of the ISI signals. This finding suggests that when sound localization cues are unreliable, perceptual decisions become increasingly biased toward vision for finding a sound source. These results support the cue saliency theory underlying cross-modal bias and extend this theory to include stereophonic phantom sound sources.
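The cue saliency account is often formalized as reliability-weighted (maximum-likelihood) cue combination: each modality's estimate is weighted by its inverse variance. A minimal sketch with illustrative variances showing how an ambiguous auditory cue produces visual capture (the function and values are not from the thesis):

```python
def fuse_cues(auditory_deg, sigma_a, visual_deg, sigma_v):
    """Reliability-weighted fusion of auditory and visual location
    estimates: each cue is weighted by its inverse variance, so an
    ambiguous auditory cue (large sigma_a) yields a percept pulled
    toward the visual cue."""
    w_a = 1 / sigma_a ** 2
    w_v = 1 / sigma_v ** 2
    return (w_a * auditory_deg + w_v * visual_deg) / (w_a + w_v)

# Reliable auditory cue: percept stays between light (0 deg) and sound (10 deg)
near_sound = fuse_cues(10.0, sigma_a=2.0, visual_deg=0.0, sigma_v=2.0)
# Ambiguous ISI-like cue: percept is captured by the light at 0 deg
near_light = fuse_cues(10.0, sigma_a=8.0, visual_deg=0.0, sigma_v=2.0)
```

The second call illustrates the thesis's central finding: as the binaural cue becomes less reliable, the fused percept shifts toward the visual stimulus.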
Contributors: Montagne, Christopher (Author) / Zhou, Yi (Thesis advisor) / Buneo, Christopher A (Thesis advisor) / Yost, William A. (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
The activation of the primary motor cortex (M1) is common in speech perception tasks that involve difficult listening conditions. Although the challenge of recognizing and discriminating non-native speech sounds appears to be an instantiation of listening under difficult circumstances, it is still unknown whether M1 recruitment facilitates second language speech perception. The purpose of this study was to investigate the role of M1 associated with speech motor centers in processing acoustic inputs in the native (L1) and second language (L2), using repetitive Transcranial Magnetic Stimulation (rTMS) to selectively alter neural activity in M1. Thirty-six healthy English/Spanish bilingual subjects participated in the experiment. Performance on a listening word-to-picture matching task was measured before and after real- and sham-rTMS to the orbicularis oris (lip muscle) associated M1. Vowel Space Area (VSA), obtained from recordings of participants reading a passage in L2 before and after real-rTMS, was calculated to determine its utility as an rTMS aftereffect measure. There was high variability in the aftereffect of the rTMS protocol to the lip muscle among the participants. Approximately 50% of participants showed an inhibitory effect of rTMS, evidenced by smaller motor evoked potential (MEP) areas, whereas the other 50% had a facilitatory effect, with larger MEPs. This suggests that rTMS has a complex influence on M1 excitability, and that relying on grand-average results can obscure important individual differences in rTMS physiological and functional outcomes. Evidence of motor support to word recognition in the L2 was found. Participants showing an inhibitory aftereffect of rTMS on M1 produced slower and less accurate responses in the L2 task, whereas those showing a facilitatory aftereffect of rTMS on M1 produced more accurate responses in L2.
In contrast, no effect of rTMS was found on the L1, where accuracy and speed were very similar after sham- and real-rTMS. The L2 VSA measure was indicative of the aftereffect of rTMS to M1 associated with speech production, supporting its utility as an rTMS aftereffect measure. This result revealed an interesting and novel relation between cerebral motor cortex activation and speech measures.
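VSA is conventionally computed as the area of the polygon spanned by corner vowels in F1-F2 space; a minimal sketch using the shoelace formula (the corner-vowel formant values below are hypothetical, not the study's):

```python
def vowel_space_area(corners):
    """Area (in Hz^2) of the polygon spanned by corner-vowel (F1, F2)
    points, via the shoelace formula; points must be listed in order
    around the polygon."""
    n = len(corners)
    s = 0.0
    for i in range(n):
        x1, y1 = corners[i]
        x2, y2 = corners[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2

# Hypothetical corner vowels /i/, /ae/, /a/, /u/ as (F1, F2) pairs in Hz
vsa = vowel_space_area([(300, 2300), (700, 1700), (750, 1100), (350, 900)])
```

A pre-/post-rTMS change in this area is the kind of contraction or expansion of the vowel space that the VSA aftereffect measure captures.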
Contributors: Barragan, Beatriz (Author) / Liss, Julie (Thesis advisor) / Berisha, Visar (Committee member) / Rogalsky, Corianne (Committee member) / Restrepo, Adelaida (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Audiovisual (AV) integration is a fundamental component of face-to-face communication. Visual cues generally aid auditory comprehension of communicative intent through our innate ability to "fuse" auditory and visual information. However, our ability for multisensory integration can be affected by damage to the brain. Previous neuroimaging studies have indicated the superior temporal sulcus (STS) as the center for AV integration, while others suggest inferior frontal and motor regions. However, few studies have analyzed the effect of stroke or other brain damage on multisensory integration in humans. The present study examines the effect of lesion location on auditory and AV speech perception through behavioral and structural imaging methodologies in 41 left-hemisphere participants with chronic focal cerebral damage. Participants completed two behavioral tasks of speech perception: an auditory speech perception task and a classic McGurk paradigm measuring congruent (auditory and visual stimuli match) and incongruent (auditory and visual stimuli do not match, creating a "fused" percept of a novel stimulus) AV speech perception. Overall, participants performed well above chance on both tasks. Voxel-based lesion symptom mapping (VLSM) across all 41 participants identified several regions as critical for speech perception depending on trial type. Heschl's gyrus and the supramarginal gyrus were identified as critical for auditory speech perception, the basal ganglia were critical for speech perception in AV congruent trials, and the middle temporal gyrus/STS were critical in AV incongruent trials. VLSM analyses of the AV incongruent trials were used to further clarify the origin of "errors," i.e., a lack of fusion.
Auditory capture (auditory stimulus) responses were attributed to visual processing deficits caused by lesions in the posterior temporal lobe, whereas visual capture (visual stimulus) responses were attributed to lesions in the anterior temporal cortex, including the temporal pole, which is widely considered to be an amodal semantic hub. The implication of anterior temporal regions in AV integration is novel and warrants further study. The behavioral and VLSM results are discussed in relation to previous neuroimaging and case-study evidence; broadly, our findings coincide with previous work indicating that multisensory superior temporal cortex, not frontal motor circuits, are critical for AV integration.
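Scoring incongruent McGurk trials reduces to labeling each response as fusion, auditory capture, visual capture, or other, as in the analysis above. A minimal sketch (the stimulus and percept labels are illustrative, using the classic auditory /ba/ + visual /ga/ pairing):

```python
def classify_mcgurk_response(response, auditory, visual, fused):
    """Label a response to an incongruent McGurk trial: reporting the
    fused percept indicates AV integration; reporting the auditory or
    visual stimulus alone indicates capture by that modality."""
    if response == fused:
        return "fusion"
    if response == auditory:
        return "auditory capture"
    if response == visual:
        return "visual capture"
    return "other"

label = classify_mcgurk_response("da", auditory="ba", visual="ga", fused="da")
```

Tallying these labels per participant yields the auditory-capture and visual-capture rates that the VLSM error analysis relates to lesion location.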
Contributors: Cai, Julia (Author) / Rogalsky, Corianne (Thesis advisor) / Azuma, Tamiko (Committee member) / Liss, Julie (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Neuron models that behave like their biological counterparts are essential for computational neuroscience. Reduced neuron models, which abstract away biological mechanisms in the interest of speed and interpretability, have received much attention due to their utility in large-scale simulations of the brain, but little care has been taken to ensure that these models exhibit behaviors that closely resemble real neurons.
In order to improve the verisimilitude of these reduced neuron models, I developed an optimizer that uses genetic algorithms to align model behaviors with those observed in experiments.
I verified that this optimizer was able to recover model parameters given only observed physiological data; however, I also found that reduced models nonetheless had limited ability to reproduce all observed behaviors, and that this varied by cell type and desired behavior.
These challenges can partly be surmounted by carefully designing the set of physiological features that guide the optimization. In summary, I found evidence that reduced neuron model optimization has the potential to produce faithful reduced models for only a limited range of neuron types.
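The genetic-algorithm optimizer itself is not specified in the abstract; a toy sketch of the general approach, evolving parameter vectors to minimize the distance between simulated and observed physiological features (the fitness function, bounds, and "true" parameters below are made up for illustration):

```python
import random

random.seed(1)

def fit_parameters(feature_error, bounds, pop=30, gens=40, mut=0.1):
    """Toy genetic algorithm: keep the best half of the population as
    parents each generation, and fill the rest with averaged-parent
    children plus Gaussian mutation scaled to each parameter's range.
    `feature_error(params)` returns a scalar error to minimize."""
    def rand_ind():
        return [random.uniform(lo, hi) for lo, hi in bounds]
    population = [rand_ind() for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=feature_error)
        parents = population[: pop // 2]
        children = []
        while len(children) < pop - len(parents):
            a, b = random.sample(parents, 2)
            child = [(x + y) / 2 + random.gauss(0, mut * (hi - lo))
                     for x, y, (lo, hi) in zip(a, b, bounds)]
            children.append(child)
        population = parents + children
    return min(population, key=feature_error)

# Recover hypothetical "membrane" parameters with true values (0.5, -65.0)
true = (0.5, -65.0)
err = lambda p: (p[0] - true[0]) ** 2 + ((p[1] - true[1]) / 10) ** 2
best = fit_parameters(err, bounds=[(0.0, 1.0), (-90.0, -40.0)])
```

In the actual workflow, `feature_error` would compare features extracted from a model simulation (spike rate, adaptation, etc.) against the same features measured from a real neuron.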
Contributors: Jarvis, Russell Jarrod (Author) / Crook, Sharon M (Thesis advisor) / Gerkin, Richard C (Thesis advisor) / Zhou, Yi (Committee member) / Abbas, James J (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
It is increasingly common to see machine learning techniques applied in conjunction with computational modeling for data-driven research in neuroscience. Such applications include using machine learning for model development, particularly for optimization of parameters based on electrophysiological constraints. Alternatively, machine learning can be used to validate and enhance techniques for experimental data analysis, or to analyze model simulation data in large-scale modeling studies, which is the approach I apply here. I use simulations of biophysically realistic cortical neuron models to supplement a common feature-based technique for analysis of electrophysiological signals. I leverage these simulated electrophysiological signals to perform feature selection that provides an improved method for neuron-type classification. Additionally, I validate an unsupervised approach that extends this improved feature selection to discover signatures associated with neuron morphologies, in effect performing in vivo histology. The result is a simulation-based discovery of the underlying synaptic conditions responsible for patterns of extracellular signatures that can be applied to understand both simulation and experimental data. I also use unsupervised learning techniques to identify common channel mechanisms underlying the electrophysiological behaviors of cortical neuron models. This work relies on an open-source database containing a large number of computational models of cortical neurons. I perform a quantitative data-driven analysis of these previously published ion channel and neuron models that uses information shared across models, as opposed to information limited to individual models. The result is a simulation-based discovery of model sub-types at two spatial scales, mapping functional relationships from the activation/inactivation properties of channel-family model sub-types to the electrophysiological properties of cortical neuron model sub-types.
Further, the combination of unsupervised learning techniques and parameter visualizations serve to integrate characterizations of model electrophysiological behavior across scales.
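The abstract does not name the specific unsupervised technique; as one illustrative possibility, grouping channel models into sub-types could use k-means over channel parameters such as half-activation voltage and slope. A minimal sketch with synthetic parameter vectors (all values are made up):

```python
import math
import random

random.seed(2)

def kmeans(points, k, iters=50):
    """Minimal k-means: alternately assign each parameter vector to its
    nearest centroid and recompute centroids as cluster means."""
    centroids = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda i: math.dist(p, centroids[i]))
            clusters[j].append(p)
        centroids = [
            [sum(dim) / len(c) for dim in zip(*c)] if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids, clusters

# Two synthetic channel sub-types: (half-activation voltage mV, slope mV)
pts = ([[random.gauss(-40, 2), random.gauss(5, 0.5)] for _ in range(30)] +
       [[random.gauss(-20, 2), random.gauss(10, 0.5)] for _ in range(30)])
centroids, clusters = kmeans(pts, 2)
```

The recovered centroids correspond to the channel-family "sub-types" whose activation/inactivation properties would then be mapped to neuron-model electrophysiology.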
Contributors: Haynes, Reuben (Author) / Crook, Sharon M (Thesis advisor) / Gerkin, Richard C (Committee member) / Zhou, Yi (Committee member) / Baer, Steven (Committee member) / Armbruster, Hans D (Committee member) / Arizona State University (Publisher)
Created: 2020