Matching Items (27)

Does Auditory Feedback Perturbation Influence Categorical Perception of Vowels?

Description

Speech perception and production are bidirectionally related: each influences the other. The purpose of this study was to better understand that relationship. It is known that applying auditory perturbations during speech production causes subjects to alter their productions (e.g., to change their formant frequencies). In other words, previous studies have examined the effects of altered speech perception on speech production. In this study, by contrast, we examined potential effects of speech production on speech perception. Subjects completed a block of a categorical perception task, followed by a block of a speaking or a listening task, followed by another block of the categorical perception task. Subjects completed three blocks of the speaking task and three blocks of the listening task. In the three blocks of a given task (speaking or listening), auditory feedback was 1) normal, 2) altered to be less variable, or 3) altered to be more variable. Unlike previous studies, we used subjects' own speech samples to generate the stimuli for the perception task. For each categorical perception block, we calculated the subject's psychometric function and determined the subject's categorical boundary. The results showed that subjects' perceptual boundaries remained stable across all conditions and blocks. Overall, our results did not provide evidence for effects of speech production on speech perception.
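The psychometric-function step can be sketched in Python. The logistic form, the grid-search fit, and the seven-step continuum below are illustrative assumptions, not the study's actual stimuli or fitting procedure; the categorical boundary falls where the fitted function crosses 50%.

```python
import math

def logistic(x, boundary, slope):
    """Proportion of 'other category' responses at stimulus step x."""
    return 1.0 / (1.0 + math.exp(-(x - boundary) / slope))

def fit_psychometric(steps, proportions):
    """Grid-search fit of a logistic psychometric function.

    Returns (boundary, slope) minimizing squared error; the boundary is
    the stimulus step at which both categories are reported equally often.
    """
    best_boundary, best_slope, best_err = None, None, float("inf")
    for b in range(int(min(steps)) * 10, int(max(steps)) * 10 + 1):
        boundary = b / 10.0
        for s in range(1, 51):          # candidate slopes 0.1 .. 5.0
            slope = s / 10.0
            err = sum((logistic(x, boundary, slope) - p) ** 2
                      for x, p in zip(steps, proportions))
            if err < best_err:
                best_boundary, best_slope, best_err = boundary, slope, err
    return best_boundary, best_slope

# Hypothetical 7-step vowel continuum and per-step response proportions:
steps = [1, 2, 3, 4, 5, 6, 7]
proportions = [0.02, 0.05, 0.20, 0.50, 0.80, 0.95, 0.98]
boundary, slope = fit_psychometric(steps, proportions)
```

A shift of the fitted boundary between pre- and post-task blocks would have been the signature of production influencing perception; here the data are symmetric, so the boundary lands at the continuum midpoint.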

Date Created
2019-05

Dyslexia, Creativity, and Neural Adaptation

Description

Objective: A recent electroencephalogram (EEG) study of adults with dyslexia showed that individuals with dyslexia have diminished auditory sensory gating compared to typical controls. Previous studies of intoxication and its effects on sensory gating and creativity have shown a positive correlation between creativity (divergent-thinking problem solving) and sensory gating deficiency. Prior work has thus established the link between dyslexia and sensory gating deficiency, and between sensory gating deficiency and creativity, but not the link between dyslexia and creativity. This pilot study aims to address this knowledge gap using event-related potentials.

Methods: Two adults with dyslexia and four control adults participated in an auditory gating test using tone pairs. Latencies and amplitudes of the N100 and P200 responses were recorded and analyzed. Participants were also administered the Abbreviated Torrance Test for Adults (ATTA), a test of creative ability designed to evaluate divergent thinking. Results were averaged and compared.

Results: The averaged difference in N100 amplitude between tone 1 and tone 2 was significantly larger in the control group than in the dyslexia group. In particular, one participant with dyslexia who had low scores on a task of rapid word recognition also showed no evidence of gating at the N100 component, whereas the other participant with dyslexia, who had good word recognition scores, showed evidence of intact gating. The averaged difference in P200 amplitude between tone 1 and tone 2 was larger in the dyslexia group than in the control group; however, this difference was too small to be considered significant. The total average ATTA score for the control group was higher than that of the dyslexia group, but the difference was less than one point on a 106-point scale.
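In this paradigm, gating is indexed by the amplitude difference between the responses to the two tones of a pair. A minimal sketch of that computation, with hypothetical microvolt values rather than the study's data:

```python
def gating_difference(tone1_amps, tone2_amps):
    """Mean suppression of the response to the second tone in each pair.

    Larger absolute differences between the tone-1 and tone-2 responses
    indicate stronger sensory gating.
    """
    diffs = [a1 - a2 for a1, a2 in zip(tone1_amps, tone2_amps)]
    return sum(diffs) / len(diffs)

# Hypothetical N100 amplitudes (uV) across three tone pairs.
# A control participant: the tone-2 response is strongly suppressed.
control = gating_difference([-5.0, -4.5, -5.5], [-2.0, -2.5, -1.5])
# A participant with diminished gating: tone-2 responses barely shrink.
dyslexia = gating_difference([-5.0, -4.8, -5.2], [-4.5, -4.9, -4.6])
```

With these invented values, the control difference is larger in magnitude than the dyslexia difference, mirroring the N100 pattern reported above.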

Conclusions: Neural sensory gating occurs approximately 100 ms after the onset of a stimulus and is diminished in adults with dyslexia who also have deficits in rapid word recognition. There is a difference in creativity, in terms of divergent thinking, between those with dyslexia and those without (controls scored higher on average); however, the difference is not significant (less than one point). ATTA scores were also more consistent among participants with dyslexia than among controls.

Date Created
2020-05

Speech Motor Learning Depends on Relevant Auditory Errors

Description

In the past, researchers have studied the elements of speech and how they work together in the human brain. Auditory feedback, an important aid in speech production, provides information to speakers and allows them to determine whether the prediction of their speech matches their actual production. The speech motor system uses auditory goals to detect errors in its auditory output during vowel production, and we learn from discrepancies between our predictions and auditory feedback. In this study, we examined error assessment processes by systematically manipulating the correspondence between speech motor outputs and their auditory consequences during speech production. We conducted a study (n = 14 adults) in which participants' auditory feedback was perturbed to test their learning rate in two conditions. During the trials, participants repeated CVC words and were instructed to prolong the vowel each time. The adaptation trials were used to examine reliance on auditory feedback versus speech prediction by systematically changing the weight of auditory feedback. Participants heard their perturbed feedback through insert earphones in real time. Each speaker's auditory feedback was perturbed according to task-relevant and task-irrelevant errors, and these perturbations were introduced either gradually or suddenly. We found that adaptation was less extensive with task-irrelevant errors, that adaptation did not saturate significantly in the sudden condition, and that adaptation in the task-relevant condition, which we expected to be more extensive and faster, was closer to the rate observed with the task-irrelevant perturbation. Though adjustments are necessary, these findings suggest an efficient way for speakers to rely on auditory feedback more than on their predictions. Furthermore, this research opens the door to future investigations of adaptation in speech and presents implications for clinical applications (e.g., speech therapy).
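The adaptation dynamics described above can be illustrated with a toy error-driven model. The update rule, the learning rates, and the 100 Hz perturbation below are assumptions made for illustration, not the study's model or parameters; the weighting of errors as task-relevant is captured simply as a larger learning rate.

```python
def simulate_adaptation(perturbation, learning_rate, n_trials, target=0.0):
    """Toy error-driven model of auditory-feedback adaptation.

    Each trial, the heard formant is the produced formant plus the
    perturbation; the speaker shifts the next production opposite to
    the auditory error, scaled by the learning rate.
    """
    produced = target
    history = []
    for _ in range(n_trials):
        heard = produced + perturbation    # perturbed auditory feedback
        error = heard - target             # deviation from the auditory goal
        produced -= learning_rate * error  # oppose the error on the next trial
        history.append(produced)
    return history

# Errors weighted as task-relevant -> faster, more complete adaptation:
relevant = simulate_adaptation(perturbation=100.0, learning_rate=0.2, n_trials=50)
# Task-irrelevant errors down-weighted -> slower, less extensive adaptation:
irrelevant = simulate_adaptation(perturbation=100.0, learning_rate=0.05, n_trials=50)
```

Both simulated speakers drift toward fully opposing the perturbation, but the down-weighted condition adapts less within the same number of trials, which is the qualitative contrast the study tested.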

Date Created
2020-05

The Relationship Between the Neural Computations for Speech and Music Perception is Context-Dependent: An Activation Likelihood Estimate Study

Description

The relationship between the neurobiology of speech and music has been investigated for more than a century. There remains no widespread agreement regarding how (or to what extent) music perception utilizes the neural circuitry that is engaged in speech processing, particularly at the cortical level. Prominent models such as Patel's Shared Syntactic Integration Resource Hypothesis (SSIRH) and Koelsch's neurocognitive model of music perception suggest a high degree of overlap, particularly in the frontal lobe, but also perhaps more distinct representations in the temporal lobe with hemispheric asymmetries. The present meta-analysis used activation likelihood estimate analyses to identify the brain regions consistently activated for music as compared to speech across the functional neuroimaging (fMRI and PET) literature. Eighty music and 91 speech neuroimaging studies of healthy adult control subjects were analyzed. Peak activations reported in the music and speech studies were divided into four paradigm categories: passive listening, discrimination tasks, error/anomaly detection tasks, and memory-related tasks. We then compared activation likelihood estimates within each category for music vs. speech, and each music condition with passive listening. We found that listening to music and to speech preferentially activate distinct temporo-parietal bilateral cortical networks. We also found music and speech to have shared resources in the left pars opercularis but speech-specific resources in the left pars triangularis. The extent to which music recruited speech-activated frontal resources was modulated by task. While there are certainly limitations to meta-analysis techniques, particularly regarding sensitivity, this work suggests that the extent of shared resources between speech and music may be task-dependent and highlights the need to consider how task effects may affect conclusions regarding the neurobiology of speech and music.
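At each voxel, an ALE analysis combines the studies' modeled-activation probabilities as the probability that at least one study activates that voxel. A minimal sketch of that union step (the per-study values below are invented; real analyses derive them from Gaussian kernels centered on each study's reported peak coordinates):

```python
def ale_score(modeled_activation_values):
    """Activation likelihood estimate at a single voxel.

    Each input is the voxel's value in one study's modeled-activation
    map (in [0, 1]). The ALE is the union of these probabilities:
    1 minus the probability that no study activates the voxel.
    """
    none_active = 1.0
    for ma in modeled_activation_values:
        none_active *= (1.0 - ma)
    return 1.0 - none_active

# A voxel lying near reported peaks in three hypothetical studies:
score = ale_score([0.5, 0.3, 0.2])
```

Comparing such per-voxel scores between the music and speech study pools (against a null distribution) is what identifies the consistently activated regions reported above.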

Date Created
2015-08-11

Functional MRI Preprocessing in Lesioned Brains: Manual Versus Automated Region of Interest Analysis

Description

Functional magnetic resonance imaging (fMRI) has significant potential in the study and treatment of neurological disorders and stroke. Region of interest (ROI) analysis in such studies allows for testing of strong a priori clinical hypotheses with improved statistical power. A commonly used automated approach to ROI analysis is to spatially normalize each participant's structural brain image to a template brain image and define ROIs using an atlas. However, in studies of individuals with structural brain lesions, such as stroke, the gold standard approach may be to manually hand-draw ROIs on each participant's non-normalized structural brain image. Automated approaches to ROI analysis are faster and more standardized, yet are susceptible to preprocessing error (e.g., normalization error) that can be greater in lesioned brains. The manual approach to ROI analysis has high demands for time and expertise, but may provide a more accurate estimate of brain response. In this study, commonly used automated and manual approaches to ROI analysis were directly compared by reanalyzing data from a previously published hypothesis-driven cognitive fMRI study involving individuals with stroke. The ROI evaluated was the pars opercularis of the inferior frontal gyrus. Significant differences were identified in task-related effect size and percent-activated voxels in this ROI between the automated and manual approaches to ROI analysis. Task interactions, however, were consistent across ROI analysis approaches. These findings support the use of automated approaches to ROI analysis in studies of lesioned brains, provided they employ a task interaction design.
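One of the compared measures, percent-activated voxels within an ROI, can be sketched as below. The flat lists and the statistical threshold are illustrative stand-ins, not the study's actual pipeline (which would operate on NIfTI images):

```python
def percent_activated(roi_mask, stat_map, threshold):
    """Percentage of ROI voxels whose statistic exceeds the threshold.

    roi_mask and stat_map are equal-length flat lists standing in for
    voxel arrays: mask values are 1 inside the ROI and 0 outside.
    """
    in_roi = [s for m, s in zip(roi_mask, stat_map) if m]
    if not in_roi:
        return 0.0
    active = sum(1 for s in in_roi if s > threshold)
    return 100.0 * active / len(in_roi)

mask = [1, 1, 1, 1, 0, 0]
tmap = [3.2, 1.0, 2.9, 0.4, 5.0, 5.0]  # voxels outside the ROI are ignored
pct = percent_activated(mask, tmap, threshold=2.3)
```

The automated-vs-manual comparison amounts to running this same computation with two different `roi_mask` definitions (atlas-derived after normalization vs. hand-drawn in native space) and contrasting the results.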

Date Created
2015-09-25

A Functional and Structural MRI Investigation of the Neural Signatures of Dyslexia in Adults

Description

The International Dyslexia Association defines dyslexia as a learning disorder characterized by poor spelling, decoding, and word recognition abilities. There is still no known cause of dyslexia, although it is a very common disability that affects 1 in 10 people. Previous fMRI and MRI research in dyslexia has explored the neural correlates of hemispheric lateralization and phonemic awareness. The present study investigated the underlying neurobiology of five adults with dyslexia compared to age- and sex-matched control subjects using structural and functional magnetic resonance imaging. All subjects completed a large battery of behavioral tasks as part of a larger study and underwent functional and structural MRI acquisition. These data were collected and preprocessed at the University of Washington. Analyses focused on the neural correlates of hemispheric lateralization, letter reversal mistakes, reduced processing speed, and phonemic awareness. There were no significant hemispheric differences between subjects with dyslexia and controls. The subject who made the largest number of letter reversal errors showed deactivation in the cerebellum during the fMRI language task. Cerebellar white matter volume and premotor cortex surface area were largest in the individual with the slowest tapping reaction time. Phonemic decoding efficiency was highly correlated with neural activation in the primary motor cortex during the fMRI motor task (r = 0.6). Findings from the present study suggest that brain regions involved in motor control, such as the cerebellum, premotor cortex, and primary motor cortex, may have a larger role in dyslexia than previously considered. Future studies are needed to further distinguish the role of the cerebellum and other motor regions in the motor control and language processing deficits related to dyslexia.

Date Created
2016-12

The neurobiology of sentence comprehension: an fMRI study of late American Sign Language acquisition

Description

Language acquisition is a phenomenon we all experience, and though it is well studied, many questions remain regarding the neural bases of language. Whether for a hearing speaker or a Deaf signer, spoken and signed language acquisition (and eventual proficiency) develop similarly and share common neural networks. While signed language and spoken language engage completely different sensory modalities (visual-manual versus the more common auditory-oromotor), both share grammatical structures and contain the syntactic intricacies innate to all languages. Thus, studies of multi-modal bilingualism (e.g., a native English speaker learning American Sign Language) can lead to a better understanding of the neurobiology of second language acquisition, and of language more broadly. For example, can the well-developed visual-spatial processing networks of English speakers support grammatical processing in sign language, which relies heavily on location and movement? The present study furthers understanding of the neural correlates of second language acquisition by studying normal-hearing late L2 learners of American Sign Language (ASL). Twenty English-speaking ASU students enrolled in advanced American Sign Language coursework participated in our functional magnetic resonance imaging (fMRI) study. The aim was to identify the brain networks engaged in syntactic processing of ASL sentences in late L2 ASL learners. While many studies have addressed the neurobiology of acquiring a second spoken language, no previous study to our knowledge has examined the brain networks supporting syntactic processing in bimodal bilinguals. We examined the brain networks engaged while participants perceived ASL sentences compared to ASL word lists, as well as written English sentences and word lists.
We hypothesized that our findings in late bimodal bilinguals would largely coincide with the unimodal bilingual literature, with a few notable differences, including additional attention networks engaged by ASL processing. Our results suggest a high degree of overlap in the sentence processing networks for ASL and English, alongside important differences in the recruitment of speech comprehension, visual-spatial, and domain-general brain networks. Our findings suggest that the well-known sentence comprehension and syntactic processing regions for spoken languages are flexible and modality-independent.

Date Created
2016-05

Is Cognitive Control Reliable? When means are not enough

Description

Most theories of cognitive control assume goal-directed behavior takes the form of a performance monitor-executive function-action loop. Recent theories focus on how a single performance monitoring mechanism recruits executive function; these are dubbed single-process accounts. Namely, the conflict-monitoring hypothesis proposes that a single performance monitoring mechanism, housed in the anterior cingulate cortex, recruits executive functions for top-down control. This top-down control manifests as trial-to-trial micro-adjustments to the speed and accuracy of responses. If these effects are produced by a single performance monitoring mechanism, then the size of these sequential trial-to-trial effects should be correlated across tasks. To this end, we conducted a large-scale (N = 125) individual differences experiment to examine whether two sequential effects, the Gratton effect and the error-related slowing effect, are correlated across a Simon, a Flanker, and a Stroop task. We find weak correlations for these effects across tasks, which is inconsistent with single-process accounts.
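As a rough sketch of the analysis logic, the Gratton effect can be computed per subject and task as the congruency effect following congruent trials minus the congruency effect following incongruent trials, and the per-subject effect sizes then correlated between tasks. The trial sequence and reaction times below are hypothetical, not the experiment's data:

```python
import math

def pearson(xs, ys):
    """Pearson correlation between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def mean(xs):
    return sum(xs) / len(xs)

def gratton_effect(rts, congruency):
    """Sequential congruency effect for one subject in one task.

    rts: reaction times per trial; congruency: 'c' or 'i' per trial.
    Returns the congruency effect after congruent trials minus the
    effect after incongruent trials (larger = stronger sequential
    adjustment).
    """
    buckets = {("c", "c"): [], ("c", "i"): [], ("i", "c"): [], ("i", "i"): []}
    for prev, cur, rt in zip(congruency, congruency[1:], rts[1:]):
        buckets[(prev, cur)].append(rt)
    after_c = mean(buckets[("c", "i")]) - mean(buckets[("c", "c")])
    after_i = mean(buckets[("i", "i")]) - mean(buckets[("i", "c")])
    return after_c - after_i

# Hypothetical single-subject trial sequence (RTs in ms):
g = gratton_effect([400, 410, 480, 470, 420, 490, 430, 415], list("cciicicc"))
```

Under a single-process account, `pearson` applied to the per-subject `gratton_effect` values from, say, the Simon and Stroop tasks should yield a substantial correlation; the study found only weak ones.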

Date Created
2015-12

The effect of corpus callosum agenesis on the communication between cerebral hemispheres

Description

Agenesis of the corpus callosum is a congenital condition in which the corpus callosum fails to develop. This condition can lead to impairments in language processing, epilepsy, and deficits in emotional and social functioning, but many individuals with the condition show none of these impairments. The present study investigated the connectivity of language and sensorimotor networks in an individual with agenesis of the corpus callosum using resting-state fMRI. The individual's results were compared to those of neurotypical control subjects. It was hypothesized that interhemispheric functional connectivity would be weaker than in the control group in bilateral language networks, but that intrahemispheric connectivity, particularly within the sensorimotor network, would be stronger. The results revealed significantly weaker functional connectivity in the individual with agenesis of the corpus callosum within the right ventral stream compared to the control group. There were no other significant inter- or intrahemispheric differences in the functional connectivity of the language and sensorimotor networks. These findings lead us to conclude that the right hemisphere's ventral stream perhaps relies on connections with the left hemisphere's language networks to maintain its typical functionality. The results of this study support the idea that, in the case of corpus callosum agenesis, the right language network may contribute differently to language processes than it does in neurotypical controls.
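Resting-state functional connectivity between two regions is commonly computed as the Fisher z-transformed Pearson correlation of their BOLD time series; we assume that convention here, and the toy time series below are invented for illustration:

```python
import math

def fisher_z(r):
    """Fisher z-transform, so connectivity values can be compared
    across subjects on an approximately normal scale."""
    return 0.5 * math.log((1 + r) / (1 - r))

def connectivity(ts_a, ts_b):
    """Functional connectivity between two ROI time series: the
    Fisher z-transformed Pearson correlation of their BOLD signals."""
    n = len(ts_a)
    ma, mb = sum(ts_a) / n, sum(ts_b) / n
    cov = sum((a - ma) * (b - mb) for a, b in zip(ts_a, ts_b))
    sa = math.sqrt(sum((a - ma) ** 2 for a in ts_a))
    sb = math.sqrt(sum((b - mb) ** 2 for b in ts_b))
    return fisher_z(cov / (sa * sb))

# Toy mean BOLD time series for a left- and a right-hemisphere ROI:
left = [0.1, 0.4, -0.2, 0.3, 0.0, -0.1]
right = [0.2, 0.5, -0.1, 0.2, 0.1, -0.2]
z = connectivity(left, right)
```

The hypotheses above amount to comparing such z-values for interhemispheric ROI pairs (expected lower in the individual) and intrahemispheric pairs (expected higher) against the control group's distribution.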

Date Created
2021-05

Utilizing functional MRI to Analyze Differences in the Brain in Response to Speech and Music Stimuli in Persons with Aphasia

Description

The distinctions between the neural resources supporting speech and music comprehension have long been studied using contexts like aphasia and amusia, as well as neuroimaging in control subjects. While many models have emerged to describe the different networks uniquely recruited in response to speech and music stimuli, many questions remain, especially regarding left-hemispheric strokes that disrupt typical speech-processing brain networks and how musical training might affect the brain networks recruited for speech after a stroke. Our study aims to explore some of these questions. We collected task-based functional MRI data from 12 subjects who had previously experienced a left-hemispheric stroke. Subjects listened to blocks of spoken sentences and novel piano melodies during scanning so that we could examine differences in brain activation in response to speech and music. We hypothesized that speech stimuli would activate right frontal regions, and that music stimuli would activate right superior temporal regions more than speech (neither finding is seen in previous studies of control subjects), as a result of functional changes in the brain following the left-hemispheric stroke, particularly the loss of functionality in the left temporal lobe. We also hypothesized that music stimuli would produce stronger activation in the right temporal cortex in participants with musical training than in those without. Our results indicate that speech stimuli, compared to rest, activated the anterior superior temporal gyrus bilaterally and the right inferior frontal lobe. Music stimuli, compared to rest, did not activate the brain bilaterally, but rather activated only the right middle temporal gyrus.
When the group analysis was performed with music experience as a covariate, we found that musical training did not affect activations to music stimuli specifically, but there was greater right hemisphere activation in several regions in response to speech stimuli as a function of more years of musical training. These results agree with our hypotheses regarding functional changes in the brain, but they conflict with our hypothesis about musical expertise. Overall, the study has generated interesting starting points for further exploration of how musical neural resources may be recruited for speech processing after damage to typical language networks.

Date Created
2021-05