Matching Items (13)

Item 153939
Description
Sound localization can be difficult in a reverberant environment. Fortunately, listeners can utilize various perceptual compensatory mechanisms to increase the reliability of sound localization when provided with ambiguous physical evidence. For example, the directional information of echoes can be perceptually suppressed by the direct sound to achieve a single, fused auditory event in a process called the precedence effect (Litovsky et al., 1999). Visual cues also influence sound localization through a phenomenon known as the ventriloquist effect. It is classically demonstrated by a puppeteer who speaks without visible lip movements while moving the mouth of a puppet synchronously with his or her speech (Gelder and Bertelson, 2003). If the ventriloquist is successful, the sound will be “captured” by vision and perceived as originating at the location of the puppet. This thesis investigates the influence of vision on the spatial localization of audio-visual stimuli. Two types of stereophonic phantom sound sources, created by modulating the inter-stimulus time interval (ISI) or the level difference between two loudspeakers, were used as auditory stimuli. Participants seated in a sound-attenuated room indicated the perceived locations of either the ISI or the level-difference stimuli under free-field conditions. The results showed that light cues influenced auditory spatial perception to a greater extent for the ISI stimuli than for the level-difference stimuli. A binaural signal analysis further revealed that the greater visual bias for the ISI phantom sound sources was correlated with the increasingly ambiguous binaural cues of the ISI signals. This finding suggests that when sound localization cues are unreliable, perceptual decisions become increasingly biased towards vision for finding a sound source. These results support the cue saliency theory underlying cross-modal bias and extend this theory to include stereophonic phantom sound sources.
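As a concrete illustration of the two stimulus manipulations described above, the following sketch (not from the thesis) generates a stereophonic click pair with either an inter-stimulus time interval or a level difference between the two loudspeaker channels; the sample rate, click duration, and parameter values are assumptions for illustration only.

```python
# Illustrative sketch: two ways of creating a stereophonic phantom source
# from a pair of loudspeakers -- an inter-stimulus time interval (ISI)
# between the channels, or a level difference. All parameter values here
# are assumptions, not the stimulus values used in the thesis.
import numpy as np

FS = 44_100  # sample rate in Hz (assumed)

def click(duration_ms=1.0):
    """A brief rectangular click used as the source signal."""
    return np.ones(int(FS * duration_ms / 1000.0))

def isi_stimulus(isi_ms):
    """Identical clicks in both channels, one delayed by the ISI."""
    delay = int(FS * isi_ms / 1000.0)
    c = click()
    left = np.concatenate([c, np.zeros(delay)])
    right = np.concatenate([np.zeros(delay), c])
    return left, right

def level_difference_stimulus(level_db):
    """Simultaneous clicks whose levels differ by `level_db` decibels."""
    c = click()
    return c, c * 10 ** (-level_db / 20.0)  # attenuate the right channel

left, right = isi_stimulus(isi_ms=0.5)          # phantom source pulled left
left2, right2 = level_difference_stimulus(6.0)  # phantom source pulled left
```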
Contributors: Montagne, Christopher (Author) / Zhou, Yi (Thesis advisor) / Buneo, Christopher A. (Thesis advisor) / Yost, William A. (Committee member) / Arizona State University (Publisher)
Created: 2015
Item 156139
Description
Exome sequencing was used to identify novel variants linked to amyotrophic lateral sclerosis (ALS) in a family without mutations in genes previously linked to ALS. An F115C mutation in the gene MATR3 was identified, and further examination of other ALS kindreds identified an additional three mutations in MATR3: S85C, P154S, and T622A. Matrin 3 is an RNA/DNA-binding protein and a component of the nuclear matrix. Matrin 3 interacts with TDP-43, a protein that is both mutated in some forms of ALS and found in pathological inclusions in most ALS patients. Matrin 3 pathology, including mislocalization and rare cytoplasmic inclusions, was identified in spinal cord tissue from a patient carrying a mutation in Matrin 3 as well as from sporadic ALS patients. In an effort to determine the mechanism of Matrin 3-linked ALS, the protein interactomes of wild-type and ALS-linked mutant Matrin 3 were examined. Immunoprecipitation followed by mass spectrometry experiments were performed using NSC-34 cells expressing human wild-type or mutant Matrin 3. Gene ontology analysis identified a novel role for Matrin 3 in mRNA transport centered on proteins in the TRanscription and EXport (TREX) complex, known to function in mRNA biogenesis and nuclear export. ALS-linked mutations in Matrin 3 led to its re-distribution within the nucleus, decreased co-localization with endogenous Matrin 3, and increased co-localization with specific TREX components. Expression of disease-causing Matrin 3 mutations led to nuclear export defects of both global mRNA and, more specifically, the mRNAs of TDP-43 and FUS. Our findings identify ALS-causing mutations in the gene MATR3, as well as a potential pathogenic mechanism attributable to MATR3 mutations, and further link cellular transport defects to ALS.
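For readers unfamiliar with how a gene ontology analysis of an interactome is typically scored, the sketch below shows one common approach, a hypergeometric over-representation test for a single GO term. It is a generic illustration with placeholder counts, not the pipeline or data used in this work.

```python
# Hypothetical over-representation test for one GO term (e.g., "mRNA export")
# among proteins co-immunoprecipitated with Matrin 3. All counts are
# placeholders chosen for illustration, not results from the study.
from scipy.stats import hypergeom

background = 20_000  # proteins in the background proteome (assumed)
annotated = 150      # background proteins annotated with the GO term (assumed)
pulled_down = 60     # proteins identified in the interactome (assumed)
overlap = 9          # interactome proteins carrying the GO term (assumed)

# Probability of seeing at least `overlap` annotated proteins by chance
p_value = hypergeom.sf(overlap - 1, background, annotated, pulled_down)
print(f"enrichment p-value for the GO term: {p_value:.2e}")
```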
Contributors: Boehringer, Ashley (Author) / Bowser, Robert (Thesis advisor) / Liss, Julie (Committee member) / Jensen, Kendall (Committee member) / Ladha, Shafeeq (Committee member) / Arizona State University (Publisher)
Created: 2018
Item 156177
Description
The activation of the primary motor cortex (M1) is common in speech perception tasks that involve difficult listening conditions. Although the challenge of recognizing and discriminating non-native speech sounds appears to be an instantiation of listening under difficult circumstances, it is still unknown whether M1 recruitment facilitates second-language speech perception. The purpose of this study was to investigate the role of M1 regions associated with speech motor control in processing acoustic inputs in the native (L1) and second (L2) languages, using repetitive transcranial magnetic stimulation (rTMS) to selectively alter neural activity in M1. Thirty-six healthy English/Spanish bilingual subjects participated in the experiment. Performance on a listening word-to-picture matching task was measured before and after real- and sham-rTMS to the M1 representation of the orbicularis oris (lip muscle). Vowel space area (VSA), obtained from recordings of participants reading a passage in the L2 before and after real-rTMS, was calculated to determine its utility as an rTMS aftereffect measure. There was high variability among participants in the aftereffect of the rTMS protocol to the lip-muscle M1. Approximately 50% of participants showed an inhibitory effect of rTMS, evidenced by smaller motor evoked potential (MEP) areas, whereas the other 50% showed a facilitatory effect, with larger MEPs. This suggests that rTMS has a complex influence on M1 excitability, and that relying on grand-average results can obscure important individual differences in the physiological and functional outcomes of rTMS. Evidence of motor support for word recognition in the L2 was found. Participants showing an inhibitory aftereffect of rTMS on M1 produced slower and less accurate responses in the L2 task, whereas those showing a facilitatory aftereffect produced more accurate responses in the L2. In contrast, no effect of rTMS was found in the L1, where accuracy and speed were very similar after sham- and real-rTMS. The L2 VSA measure was indicative of the aftereffect of rTMS on the M1 representation associated with speech production, supporting its utility as an rTMS aftereffect measure. This result revealed an interesting and novel relation between motor cortex activation and speech measures.
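Vowel space area is commonly computed as the area of the polygon spanned by the corner vowels in F1/F2 space. The sketch below shows that calculation under that assumption; the formant values are textbook-style placeholders, and the thesis may have computed VSA differently.

```python
# Hypothetical vowel space area (VSA): area of the quadrilateral formed by
# the corner vowels in F1/F2 space, via the shoelace formula. Formant values
# are placeholders, not measurements from the study's participants.
def vowel_space_area(formants):
    """formants: (F1, F2) pairs in Hz, ordered around the polygon."""
    area = 0.0
    for i in range(len(formants)):
        f1_a, f2_a = formants[i]
        f1_b, f2_b = formants[(i + 1) % len(formants)]
        area += f1_a * f2_b - f1_b * f2_a
    return abs(area) / 2.0

# Corner vowels /i/, /ae/, /a/, /u/ with placeholder formants (Hz).
corners = [(270, 2290), (660, 1720), (730, 1090), (300, 870)]
print(f"VSA ~ {vowel_space_area(corners):,.0f} Hz^2")
```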
Contributors: Barragan, Beatriz (Author) / Liss, Julie (Thesis advisor) / Berisha, Visar (Committee member) / Rogalsky, Corianne (Committee member) / Restrepo, Adelaida (Committee member) / Arizona State University (Publisher)
Created: 2018
Item 137282
Description
A previous study demonstrated that learning to lift an object is context-based and that, in the presence of both memory and visual cues, the sensorimotor memory acquired while manipulating an object in one context interferes with performance of the same task in the presence of visual information about a different context (Fu et al., 2012).
The purpose of this study was to determine whether the primary motor cortex (M1) plays a role in sensorimotor memory. It was hypothesized that temporary disruption of M1 after learning to minimize tilt while lifting an ‘L’-shaped object would negatively affect the retention of sensorimotor memory and thus reduce interference between the memory acquired in one context and the visual cues used to perform the same task in a different context.
Significant learning was shown in blocks 1, 2, and 4. In block 3, subjects displayed a non-significant amount of learning; however, it cannot be concluded that there was full interference in block 3. Therefore, three effects were examined in the statistical analysis: the main effect of block, the main effect of trial, and the block-by-trial interaction. The block effect had a p-value of 0.001 and the trial effect a p-value of less than 0.001, both indicating that learning occurred. The block-by-trial interaction was also significant (p = 0.002), indicating interaction between sensorimotor memories. Based on these results, interference was present in all blocks, but not enough to justify the use of TMS to reduce interference, because there was only a partial reduction of interference relative to the control experiment. The time delay between context switches may be the issue. By reducing the time delay between blocks 2 and 3 from 10 minutes to 5 minutes, I hope to see significant learning occur from the first trial to the second trial.
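To make the reported block, trial, and block-by-trial effects concrete, the sketch below runs a simplified two-way ANOVA on synthetic data. It ignores the repeated-measures structure of the actual experiment, and the variable names and values are assumptions, not the study's data.

```python
# Simplified two-way ANOVA with a block x trial interaction, mirroring the
# block, trial, and block*trial effects reported above. Synthetic data only;
# the actual analysis would model the within-subject (repeated) structure.
import numpy as np
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
data = pd.DataFrame({
    "block": np.repeat([1, 2, 3, 4], 24),     # 4 blocks x 24 lifts
    "trial": np.tile(np.arange(1, 9), 12),    # trial number within block
    "tilt": rng.normal(5.0, 1.0, size=96),    # placeholder tilt outcome
})

model = ols("tilt ~ C(block) * C(trial)", data=data).fit()
print(anova_lm(model, typ=2))  # main effects and the block x trial interaction
```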
Contributors: Hasan, Salman Bashir (Author) / Santello, Marco (Thesis director) / Kleim, Jeffrey (Committee member) / Helms Tillery, Stephen (Committee member) / Barrett, The Honors College (Contributor) / W. P. Carey School of Business (Contributor) / Harrington Bioengineering Program (Contributor)
Created: 2014-05
Item 154197
Description
Studies in Second Language Acquisition and Neurolinguistics have argued that adult learners, when dealing with certain phonological features of the L2, such as segmental and suprasegmental ones, face problems of articulatory placement (Esling, 2006; Abercrombie, 1967) and somatosensory stimulation (Guenther, Ghosh, & Tourville, 2006; Waldron, 2010). These studies have argued that adult phonological acquisition is a complex matter that needs to be informed by a specialized sensorimotor theory of speech acquisition. They further suggest that traditional pronunciation pedagogy needs to be enhanced by an approach to learning that offers learners fundamental and practical sensorimotor tools to advance the quality of L2 speech acquisition.

This foundational study designs a sensorimotor approach to pronunciation pedagogy and tests its effect on the L2 speech of five adult (late) learners of American English. Throughout an eight-week classroom experiment, participants from different first-language backgrounds received instruction on Articulatory Settings (Honickman, 1964) and the sensorimotor mechanism of speech acquisition (Waldron, 2010; Guenther et al., 2006). In addition, they attended five adapted lessons of the Feldenkrais technique (Feldenkrais, 1972) designed to develop sensorimotor awareness of the vocal apparatus and improve the quality of L2 speech movement. I hypothesized that such sensorimotor learning triggers overall positive changes in the way L2 learners engage their speech articulators for the L2 and that, over time, they develop better pronunciation.

After approximately eight hours of intervention, analysis of the results shows participants’ improvement in speech rate, degree of accentedness, and speaking confidence, but mixed changes in word intelligibility and vowel space area. Albeit not statistically significant (p > .05), these results suggest that such a sensorimotor approach to L2 phonological acquisition warrants further consideration and investigation for use in the L2 classroom.
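As an illustration of the kind of pre/post comparison summarized above, the sketch below runs a paired t-test on a single outcome for five participants. The values are invented placeholders; with n = 5 such a test is expectedly underpowered, and the thesis' actual statistical procedure may have differed.

```python
# Hypothetical pre/post comparison of one outcome (e.g., speech rate in
# syllables per second) for five participants, using a paired t-test.
# All values are invented placeholders, not data from the study.
from scipy.stats import ttest_rel

pre_rate = [3.1, 2.8, 3.4, 2.9, 3.0]   # before the intervention (assumed)
post_rate = [3.4, 2.7, 3.6, 2.9, 3.3]  # after ~8 hours of intervention (assumed)

t_stat, p_value = ttest_rel(post_rate, pre_rate)
print(f"paired t = {t_stat:.2f}, p = {p_value:.3f}")
```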
Contributors: Lima, J. Alberto S., Jr. (Author) / Pruitt, Kathryn (Thesis advisor) / Gelderen, Elly van (Thesis advisor) / Liss, Julie (Committee member) / James, Mark (Committee member) / Arizona State University (Publisher)
Created: 2015
Item 155273
Description
Audiovisual (AV) integration is a fundamental component of face-to-face communication. Visual cues generally aid auditory comprehension of communicative intent through our innate ability to “fuse” auditory and visual information. However, our ability for multisensory integration can be affected by damage to the brain. Previous neuroimaging studies have identified the superior temporal sulcus (STS) as the center of AV integration, while others suggest inferior frontal and motor regions. However, few studies have analyzed the effect of stroke or other brain damage on multisensory integration in humans. The present study examines the effect of lesion location on auditory and AV speech perception through behavioral and structural imaging methodologies in 41 participants with chronic focal left-hemisphere damage. Participants completed two behavioral tasks of speech perception: an auditory speech perception task and a classic McGurk paradigm measuring congruent (auditory and visual stimuli match) and incongruent (auditory and visual stimuli do not match, creating a “fused” percept of a novel stimulus) AV speech perception. Overall, participants performed well above chance on both tasks. Voxel-based lesion symptom mapping (VLSM) across all 41 participants identified several regions as critical for speech perception depending on trial type. Heschl’s gyrus and the supramarginal gyrus were identified as critical for auditory speech perception, the basal ganglia were critical for speech perception in AV congruent trials, and the middle temporal gyrus/STS were critical in AV incongruent trials. VLSM analyses of the AV incongruent trials were used to further clarify the origin of “errors”, i.e., a lack of fusion. Auditory capture (auditory stimulus) responses were attributed to visual processing deficits caused by lesions in the posterior temporal lobe, whereas visual capture (visual stimulus) responses were attributed to lesions in the anterior temporal cortex, including the temporal pole, which is widely considered to be an amodal semantic hub. The implication of anterior temporal regions in AV integration is novel and warrants further study. The behavioral and VLSM results are discussed in relation to previous neuroimaging and case-study evidence; broadly, our findings coincide with previous work indicating that the multisensory superior temporal cortex, not frontal motor circuits, is critical for AV integration.
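The core of a VLSM analysis is a mass-univariate comparison at each voxel between participants whose lesions cover that voxel and those whose lesions do not. The toy sketch below illustrates that idea on synthetic data; a real analysis would add lesion-overlap thresholds, covariates, and multiple-comparison correction.

```python
# Toy voxel-based lesion symptom mapping (VLSM): at each voxel, compare
# behavioral scores of participants with vs. without a lesion at that voxel.
# Lesion masks and scores are synthetic, for illustration only.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
n_participants, n_voxels = 41, 500
lesioned = rng.random((n_participants, n_voxels)) < 0.2  # True = voxel lesioned
scores = rng.normal(80.0, 10.0, size=n_participants)     # e.g., percent correct

t_map = np.full(n_voxels, np.nan)
for v in range(n_voxels):
    with_lesion = scores[lesioned[:, v]]
    without_lesion = scores[~lesioned[:, v]]
    if len(with_lesion) >= 5:                             # minimum-overlap rule
        t_map[v], _ = ttest_ind(without_lesion, with_lesion)

print("voxels tested:", int(np.sum(~np.isnan(t_map))))
```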
Contributors: Cai, Julia (Author) / Rogalsky, Corianne (Thesis advisor) / Azuma, Tamiko (Committee member) / Liss, Julie (Committee member) / Arizona State University (Publisher)
Created: 2017
Item 187872
Description
Multisensory integration is the process by which information from different sensory modalities is integrated by the nervous system. This process is important not only from a basic science perspective but also for translational reasons, e.g., for the development of closed-loop neural prosthetic systems. A mixed virtual reality platform was developed to study the neural mechanisms of multisensory integration for the upper limb during motor planning. The platform allows for selection of different arms and manipulation of the locations of physical and virtual target cues in the environment. The system was tested with two non-human primates (NHPs) trained to reach to multiple virtual targets. Arm kinematic data as well as neural spiking data from the primary motor cortex (M1) and dorsal premotor cortex (PMd) were collected. The task involved manipulating visual information about initial arm position by rendering the virtual avatar arm either in its actual position (veridical (V) condition) or in a shifted position (perturbed (P) condition; small or large shifts) prior to movement. Tactile feedback was modulated in blocks by placing or removing the physical start cue on the table (tactile (T) and no-tactile (NT) conditions, respectively). Behaviorally, errors in initial movement direction were larger when the physical start cue was absent. Slightly larger directional errors were found in the P condition than in the V condition for some movement directions. Both effects were consistent with the idea that erroneous or reduced information about initial hand location led to movement direction-dependent reach planning errors. Neural correlates of these behavioral effects were probed using population decoding techniques. For small shifts in the visual position of the arm, no differences in decoding accuracy between the T and NT conditions were observed in either M1 or PMd. However, for larger visual shifts, decoding accuracy decreased in the NT condition, but only in PMd. Thus, activity in PMd, but not M1, may reflect the uncertainty in reach planning that results when sensory cues regarding initial hand position are erroneous or absent.
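To illustrate the population-decoding step mentioned above, the sketch below classifies the reach target from trial-by-trial spike counts with a cross-validated linear discriminant. Both the synthetic data and the choice of decoder are assumptions for illustration; the dissertation's decoding method may have differed.

```python
# Toy population decoding: predict the reach target from spike counts across
# a pseudo-population of neurons, scored by cross-validated accuracy.
# Synthetic data; the decoder (LDA) is an illustrative choice.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_trials, n_neurons, n_targets = 200, 50, 4
targets = rng.integers(0, n_targets, size=n_trials)

# Baseline Poisson counts plus a random target-specific "tuning" offset.
tuning = rng.normal(0.0, 2.0, size=(n_targets, n_neurons))
spike_counts = rng.poisson(10.0, size=(n_trials, n_neurons)) + tuning[targets]

accuracy = cross_val_score(LinearDiscriminantAnalysis(),
                           spike_counts, targets, cv=5)
print(f"decoding accuracy {accuracy.mean():.2f} (chance {1 / n_targets:.2f})")
```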
Contributors: Phataraphruk, Preyaporn Kris (Author) / Buneo, Christopher A. (Thesis advisor) / Zhou, Yi (Committee member) / Helms Tillery, Steve (Committee member) / Greger, Bradley (Committee member) / Santello, Marco (Committee member) / Arizona State University (Publisher)
Created: 2023
Item 171522
Description
The brain uses the somatosensory system to interact with the environment and control movements. Additionally, many movement disorders are associated with deficits in the somatosensory system. Thus, understanding the somatosensory system is essential for developing treatments for movement disorders. Previous studies have extensively examined the role of the somatosensory system in controlling the lower and upper extremities; however, little is known about the contributions of the orofacial somatosensory system. The overall goal of this study was to determine factors that influence the sensitivity of the orofacial somatosensory system. To measure the somatosensory system's sensitivity, transcutaneous electrical current stimulation was applied to the skin overlying the trigeminal nerve on the lower portion of the face. Participants' sensitivity was then quantified as their ability to detect the electrical stimuli (i.e., the perceptual threshold). The data analysis focused on the impact of (1) stimulation parameters, (2) electrode placement, and (3) motor tasks on the perceptual threshold. The results showed that, as expected, stimulation parameters (such as stimulation frequency and duration) influenced perceptual thresholds. However, electrode placement (left vs. right side of the face) and motor tasks (lip contraction vs. rest) did not influence perceptual thresholds. Overall, these findings have important implications for designing and developing therapeutic neuromodulation techniques based on trigeminal nerve stimulation.
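A common way to obtain a detection threshold like the one described above is to fit a psychometric function to detection rates across stimulation amplitudes and read off the 50% point. The sketch below does this with placeholder data; the study's exact threshold procedure and stimulus levels are not reproduced here.

```python
# Hypothetical perceptual-threshold estimate: fit a logistic psychometric
# function to detection proportions at several stimulation amplitudes and
# take the amplitude at 50% detection. Amplitudes and data are placeholders.
import numpy as np
from scipy.optimize import curve_fit

def psychometric(x, threshold, slope):
    return 1.0 / (1.0 + np.exp(-slope * (x - threshold)))

amplitude_ma = np.array([0.2, 0.4, 0.6, 0.8, 1.0, 1.2])      # mA (assumed)
p_detected = np.array([0.05, 0.10, 0.40, 0.75, 0.95, 1.00])  # placeholder data

(threshold, slope), _ = curve_fit(psychometric, amplitude_ma, p_detected,
                                  p0=[0.7, 10.0])
print(f"estimated perceptual threshold ~ {threshold:.2f} mA")
```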
Contributors: Khoury, Maya Elie (Author) / Daliri, Ayoub (Thesis advisor) / Patten, Jake (Committee member) / Liss, Julie (Committee member) / Arizona State University (Publisher)
Created: 2022
Item 171445
Description
Stroke is the leading cause of long-term disability in the U.S., with up to 60% of strokes causing speech loss. Individuals with severe stroke, who require the most frequent, intense speech therapy, often cannot adhere to treatments due to high cost and low success rates. Therefore, the ability to make functionally significant changes in individuals with severe post-stroke aphasia remains a key challenge for the rehabilitation community. This dissertation aimed to evaluate the efficacy of Startle Adjuvant Rehabilitation Therapy (START), a tele-enabled, low-cost treatment, to improve quality of life and speech in individuals with severe-to-moderate stroke. START is the exposure to startling acoustic stimuli during practice of motor tasks in individuals with stroke. START increases the speed and intensity of practice in severely impaired post-stroke reaching, eliciting muscle activity 2-3 times higher than maximum voluntary contraction. Voluntary reaching distance, onset, and final accuracy increased after a session of START, suggesting a rehabilitative effect. However, START has not been evaluated during impaired speech. The objective of this study was to determine whether impaired speech can be elicited by startling acoustic stimuli, and whether three days of START training can enhance clinical measures of moderate-to-severe post-stroke aphasia and apraxia of speech. This dissertation evaluates START in 42 individuals with post-stroke speech impairment via telehealth in a Phase 0 clinical trial. Results suggest that impaired speech can be elicited by startling acoustic stimuli and that START benefits individuals with severe-to-moderate post-stroke impairments in both linguistic and motor speech domains. This fills an important gap in aphasia care, as many speech therapies remain ineffective and financially inaccessible for patients with severe deficits. START is effective, remotely delivered, and may serve as an affordable adjuvant to traditional therapy for those who have poor access to quality care.
Contributors: Swann, Zoe Elisabeth (Author) / Honeycutt, Claire F. (Thesis advisor) / Daliri, Ayoub (Committee member) / Rogalsky, Corianne (Committee member) / Liss, Julie (Committee member) / Schaefer, Sydney (Committee member) / Arizona State University (Publisher)
Created: 2022
Item 158812
Description
Neuron models that behave like their biological counterparts are essential for computational neuroscience. Reduced neuron models, which abstract away biological mechanisms in the interest of speed and interpretability, have received much attention due to their utility in large-scale simulations of the brain, but little care has been taken to ensure that these models exhibit behaviors that closely resemble real neurons.
In order to improve the verisimilitude of these reduced neuron models, I developed an optimizer that uses genetic algorithms to align model behaviors with those observed in experiments.
I verified that this optimizer was able to recover model parameters given only observed physiological data; however, I also found that reduced models nonetheless had limited ability to reproduce all observed behaviors, and that this varied by cell type and desired behavior.
These challenges can partly be surmounted by carefully designing the set of physiological features that guide the optimization. In summary, these results suggest that such optimization can produce behaviorally accurate reduced neuron models for only a limited range of neuron types.
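To make the genetic-algorithm approach concrete, the sketch below fits two parameters of a toy leaky integrate-and-fire model so that its firing rate matches a target value. Everything here (the model, the single target feature, and the GA settings) is a simplified stand-in for the actual optimizer, which used richer physiological features and experimental data.

```python
# Toy genetic algorithm: evolve two leaky integrate-and-fire parameters
# (membrane time constant, spike threshold) toward a target firing rate.
# A simplified stand-in for the optimizer described above.
import numpy as np

rng = np.random.default_rng(3)
TARGET_RATE = 40.0                       # Hz; assumed target feature
LOW, HIGH = [5.0, 0.5], [50.0, 1.4]      # parameter bounds (assumed)

def lif_rate(tau_ms, v_thresh, i_inj=1.5, dt=0.1, t_max=1000.0):
    """Firing rate (Hz) of a simple LIF neuron under constant input."""
    v, spikes = 0.0, 0
    for _ in range(int(t_max / dt)):
        v += dt / tau_ms * (-v + i_inj)
        if v >= v_thresh:
            v, spikes = 0.0, spikes + 1
    return spikes / (t_max / 1000.0)

def fitness(params):
    return -abs(lif_rate(*params) - TARGET_RATE)  # closer to target is better

pop = rng.uniform(LOW, HIGH, size=(20, 2))                 # initial population
for _ in range(30):                                        # generations
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-10:]]                # truncation selection
    moms = parents[rng.integers(0, 10, 10)]
    dads = parents[rng.integers(0, 10, 10)]
    mask = rng.random((10, 2)) < 0.5                       # uniform crossover
    children = np.where(mask, moms, dads) + rng.normal(0, 0.2, (10, 2))
    pop = np.vstack([parents, np.clip(children, LOW, HIGH)])

best = max(pop, key=fitness)
print(f"best: tau = {best[0]:.1f} ms, threshold = {best[1]:.2f}, "
      f"rate = {lif_rate(*best):.1f} Hz")
```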
Contributors: Jarvis, Russell Jarrod (Author) / Crook, Sharon M. (Thesis advisor) / Gerkin, Richard C. (Thesis advisor) / Zhou, Yi (Committee member) / Abbas, James J. (Committee member) / Arizona State University (Publisher)
Created: 2020