Matching Items (36)
Description

Speech motor learning is important for learning to speak during childhood and for maintaining the speech system throughout adulthood. Motor and auditory cortical regions play crucial roles in speech motor learning. This experiment used transcranial alternating current stimulation, a neurostimulation technique, to influence auditory and motor cortical activity. We used an auditory-motor adaptation task as an experimental model of speech motor learning: subjects repeated words while their auditory feedback was formant-shifted, so that what they heard differed from what they produced. During the adaptation task, subjects received beta (20 Hz), alpha (10 Hz), or sham stimulation applied over the ventral motor cortex, a region involved in planning speech movements. We found that the stimulation did not influence the magnitude of adaptation, and we suggest that limitations of the study may have contributed to these negative results.
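
As a rough illustration of how the magnitude of adaptation in a formant-shift paradigm is often quantified (the abstract does not specify the thesis's exact analysis), the sketch below compares produced F1 during the hold phase against a pre-shift baseline. The trial arrays, formant values, and the assumption of an upward F1 shift are hypothetical.

```python
import numpy as np

def adaptation_magnitude(baseline_f1, hold_f1, shift_direction=+1):
    """Percent change in produced F1 from baseline during the hold phase.

    A response opposing an upward feedback shift (shift_direction=+1)
    appears as a decrease in produced F1; the sign flip below makes
    opposing the perturbation count as positive adaptation.
    """
    baseline = np.mean(baseline_f1)
    change = (np.mean(hold_f1) - baseline) / baseline * 100.0
    return -shift_direction * change

# Hypothetical per-trial mean F1 values (Hz) for one subject.
rng = np.random.default_rng(0)
baseline_trials = rng.normal(550, 10, size=30)   # trials before the shift
hold_trials = rng.normal(535, 10, size=30)       # trials late in the shift phase
print(f"adaptation: {adaptation_magnitude(baseline_trials, hold_trials):.1f}%")
```
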

Contributors: Mannan, Arhum (Author) / Daliri, Ayoub (Thesis director) / Luo, Xin (Committee member) / School of Life Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

The purpose of this longitudinal study was to predict /r/ acquisition using acoustic signal processing. Nineteen children aged 5-7 with inaccurate /r/ were followed until they turned 8 or acquired /r/, whichever came first. Acoustic and descriptive data from 14 participants were analyzed; the remaining 5 children continued to be followed. The study analyzed differences in spectral energy between the baseline acoustic signals of participants who eventually acquired /r/ and those of participants who did not. Results indicated significant differences between groups in the baseline signals for vocalic and postvocalic /r/, suggesting that the acquisition of certain allophones may be predictable. Participants’ articulatory changes during the progression of acquisition were also analyzed spectrally. A retrospective analysis described the pattern in which /r/ allophones were acquired, proposing that vocalic /r/ and the postvocalic variant of consonantal /r/ may be acquired prior to prevocalic /r/, and that /r/ followed by low vowels may be acquired before /r/ followed by high vowels, although individual variation exists.
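
The abstract does not detail the spectral analysis, so the following is only a hypothetical sketch of comparing baseline spectral energy between eventual acquirers and non-acquirers. The band limits, sampling rate, segment arrays, and group sizes are all placeholders.

```python
import numpy as np
from scipy.signal import welch
from scipy.stats import ttest_ind

def band_energy(segment, fs, lo_hz, hi_hz):
    """Spectral power of an /r/ segment summed within a frequency band."""
    freqs, psd = welch(segment, fs=fs, nperseg=1024)
    band = (freqs >= lo_hz) & (freqs <= hi_hz)
    return np.trapz(psd[band], freqs[band])

# Placeholder audio segments: one 1-D array per child (stand-ins for real recordings).
fs = 22050
acquirers = [np.random.randn(fs) for _ in range(7)]
non_acquirers = [np.random.randn(fs) for _ in range(7)]

# Example band roughly covering the F3 region relevant to /r/ (placeholder limits).
a = [band_energy(x, fs, 1500, 2500) for x in acquirers]
b = [band_energy(x, fs, 1500, 2500) for x in non_acquirers]
print(ttest_ind(a, b))
```
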

Contributors: Conger, Sarah Grace (Author) / Weinhold, Juliet (Thesis director) / Daliri, Ayoub (Committee member) / Bruce, Laurel (Committee member) / College of Health Solutions (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

The distinctions between the neural resources supporting speech and music comprehension have long been studied using contexts such as aphasia and amusia, as well as neuroimaging in control subjects. While many models have emerged to describe the different networks uniquely recruited in response to speech and music stimuli, many questions remain, especially regarding left-hemispheric strokes that disrupt typical speech-processing networks and how musical training might affect the brain networks recruited for speech after a stroke. Our study aims to explore some of these questions. We collected task-based functional MRI data from 12 subjects who had previously experienced a left-hemispheric stroke. Subjects listened to blocks of spoken sentences and novel piano melodies during scanning so that we could examine differences in brain activation in response to speech and music. We hypothesized that speech stimuli would activate right frontal regions and that music stimuli would activate right superior temporal regions more than speech (neither finding seen in previous studies of control subjects), as a result of functional changes in the brain following the left-hemispheric stroke, particularly the loss of functionality in the left temporal lobe. We also hypothesized that music stimuli would produce stronger activation in right temporal cortex in participants with musical training than in those without. Our results indicate that speech stimuli, compared to rest, activated the anterior superior temporal gyrus bilaterally and the right inferior frontal lobe. Music stimuli, compared to rest, did not activate the brain bilaterally, but rather activated only the right middle temporal gyrus. When the group analysis was performed with music experience as a covariate, we found that musical training did not affect activations to music stimuli specifically, but there was greater right-hemisphere activation in several regions in response to speech stimuli as a function of more years of musical training. These results agree with our hypotheses regarding functional changes in the brain but conflict with our hypothesis about musical expertise. Overall, the study has generated interesting starting points for further exploration of how musical neural resources may be recruited for speech processing after damage to typical language networks.
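
As a loose illustration of the covariate relationship described above (the study used a group-level fMRI model, not this simplified per-subject version), one could regress each subject's right-hemisphere activation to speech against years of musical training. All values below are made up for illustration.

```python
import numpy as np
from scipy.stats import linregress

# Hypothetical per-subject values: years of musical training and the mean
# speech-vs-rest effect in a right-hemisphere region of interest.
years_training = np.array([0, 0, 1, 2, 3, 5, 6, 8, 10, 12, 14, 15])
rh_speech_beta = np.array([0.1, 0.3, 0.2, 0.4, 0.5, 0.4, 0.7, 0.6, 0.9, 0.8, 1.1, 1.0])

fit = linregress(years_training, rh_speech_beta)
print(f"slope={fit.slope:.3f}, r={fit.rvalue:.2f}, p={fit.pvalue:.3f}")
```
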

Contributors: Karthigeyan, Vishnu R (Author) / Rogalsky, Corianne (Thesis director) / Daliri, Ayoub (Committee member) / Harrington Bioengineering Program (Contributor) / School of Life Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

The brain continuously monitors speech output to detect potential errors between its sensory predictions and the sensory consequences of production (Daliri et al., 2020). When the brain encounters an error, it generates a corrective motor response, usually in the direction opposite the error, to reduce its effect. Previous studies have shown that the type of auditory error received may impact a participant’s corrective response. In this study, we examined whether participants respond differently to categorical and non-categorical errors. We applied two types of perturbation in real time by shifting the first formant (F1) and second formant (F2) at three different magnitudes. In the categorical perturbation condition, the vowel /ɛ/ was shifted toward the vowel /æ/. In the non-categorical perturbation condition, the vowel /ɛ/ was shifted to a sound outside of the vowel quadrilateral (increasing both F1 and F2). Our results showed that participants responded to the categorical perturbation but not to the non-categorical perturbation. Additionally, we found that in the categorical perturbation condition, the magnitude of the response increased as the magnitude of the perturbation increased. Overall, our results suggest that the brain may respond differently to categorical and non-categorical errors and that it is highly attuned to errors in speech.
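
A hypothetical sketch of how the two perturbation types could be constructed in F1–F2 space: the categorical shift moves /ɛ/ a fraction of the way toward /æ/, while the non-categorical shift increases both F1 and F2 away from the vowel space. The formant values and shift magnitudes below are placeholders, not the study's parameters.

```python
import numpy as np

# Placeholder average formants (Hz): [F1, F2] of /ɛ/ and /æ/.
eh = np.array([580.0, 1800.0])
ae = np.array([690.0, 1660.0])

def categorical_shift(magnitude):
    """Shift /ɛ/ a fraction of the way toward /æ/ (within the vowel space)."""
    return eh + magnitude * (ae - eh)

def non_categorical_shift(magnitude):
    """Shift /ɛ/ outward by raising both F1 and F2 (outside the vowel space)."""
    step = magnitude * np.linalg.norm(ae - eh)       # match the shift size
    direction = np.array([1.0, 1.0]) / np.sqrt(2.0)  # up-and-right in F1-F2 space
    return eh + step * direction

for m in (0.5, 1.0, 1.5):                            # three hypothetical magnitudes
    print(m, categorical_shift(m).round(), non_categorical_shift(m).round())
```
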

Contributors: Cincera, Kirsten Michelle (Author) / Daliri, Ayoub (Thesis director) / Azuma, Tamiko (Committee member) / School of Sustainability (Contributor) / College of Health Solutions (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

Objective: A recent electroencephalogram (EEG) study of adults with dyslexia showed that individuals with dyslexia have diminished auditory sensory gating compared to typical controls. Previous studies of intoxication and its effects on sensory gating and creativity have shown a positive correlation between creativity (divergent-thinking problem solving) and sensory gating deficiency. Prior work has thus linked dyslexia to sensory gating deficiency, and sensory gating deficiency to creativity, but has not directly linked dyslexia to creativity. This pilot study aims to address this knowledge gap using event-related potentials.

Methods: Two adults with dyslexia and four control adults participated in an auditory gating test using tone pairs. Latencies and amplitudes of the N100 and P200 responses were recorded and analyzed. Participants were also administered the Abbreviated Torrance Test for Adults (ATTA), a test of creative ability designed to evaluate divergent thinking. Results were averaged and compared.
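
A minimal sketch of the gating measure implied by the Results (the difference in component amplitude between the first and second tone of a pair), assuming averaged ERP waveforms and conventional N100/P200 latency windows. The ERP arrays, sampling rate, and window edges are assumptions, not the study's parameters.

```python
import numpy as np

FS = 1000  # assumed EEG sampling rate (Hz); epochs start at stimulus onset

def component_amplitude(erp, window_ms, polarity):
    """Peak amplitude of an averaged ERP within a latency window.

    polarity=-1 picks the most negative point (N100); +1 the most positive (P200).
    """
    lo, hi = (int(ms * FS / 1000) for ms in window_ms)
    segment = erp[lo:hi]
    return segment.min() if polarity < 0 else segment.max()

def gating_difference(erp_tone1, erp_tone2, window_ms, polarity):
    """Tone-1 minus tone-2 absolute amplitude: larger values mean stronger gating."""
    a1 = component_amplitude(erp_tone1, window_ms, polarity)
    a2 = component_amplitude(erp_tone2, window_ms, polarity)
    return abs(a1) - abs(a2)

# Hypothetical averaged ERPs (microvolts), 600 ms epochs.
erp1 = np.random.randn(600)
erp2 = np.random.randn(600)
print("N100 gating:", gating_difference(erp1, erp2, (80, 150), polarity=-1))
print("P200 gating:", gating_difference(erp1, erp2, (150, 250), polarity=+1))
```
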

Results: The averaged difference in N100 amplitude between tone 1 and tone 2 was significantly larger in the control group than in the dyslexia group. In particular, one participant with dyslexia who had low scores on a task of rapid word recognition also showed no evidence of gating at the N100 component, whereas the other participant with dyslexia, who had good word recognition scores, showed evidence of intact gating. The averaged difference in P200 amplitude between tone 1 and tone 2 was larger in the dyslexia group than in the control group; however, the difference was small enough to be considered insignificant. The total average ATTA score for the control group was higher than that of the dyslexia group, although the difference was less than one point on a 106-point scale.

Conclusions: Neural sensory gating occurs approximately 100 ms after stimulus onset and is diminished in adults with dyslexia who also have deficits in rapid word recognition. There is a difference in creativity, in terms of divergent thinking, between those with dyslexia and controls (controls scored higher on average); however, the difference is not significant (less than one point). Scores in the dyslexia group were also more consistent than those of the controls.
Contributors: Duran, Isaac (Author) / Peter, Beate (Thesis director) / Daliri, Ayoub (Committee member) / Rogalsky, Corianne (Committee member) / School of Life Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Description

The purpose of this study was to explore the relationship between acoustic indicators in speech and the presence of orofacial myofunctional disorder (OMD). This study analyzed the first and second formant frequencies (F1 and F2) of the four corner vowels (/i/, /u/, /æ/, and /ɑ/) in the spontaneous speech of thirty participants. It was predicted that speakers with OMD would have raised F1 and F2 because of habitual low and anterior tongue positioning. The study found no statistically significant differences in the formant frequencies. Further inspection of the total vowel space area suggested that OMD speakers had a smaller, more centralized vowel space. We concluded that further study of the total vowel space area of OMD speakers is warranted.
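
A small sketch of one common way to compute vowel space area from corner-vowel formants: the shoelace formula over the /i/–/æ/–/ɑ/–/u/ quadrilateral in F1–F2 space. The formant values below are placeholders, not data from the study, and the study's own area computation may differ.

```python
import numpy as np

def vowel_space_area(corner_formants):
    """Area (Hz^2) of the polygon formed by corner vowels in F1-F2 space.

    corner_formants: (F1, F2) pairs ordered around the quadrilateral,
    e.g. /i/, /ae/, /ɑ/, /u/. Uses the shoelace formula.
    """
    pts = np.asarray(corner_formants, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

# Placeholder mean formants (Hz) for one speaker: /i/, /ae/, /ɑ/, /u/.
corners = [(300, 2300), (700, 1750), (750, 1100), (350, 900)]
print(f"vowel space area: {vowel_space_area(corners):,.0f} Hz^2")
```

A smaller value from this calculation corresponds to the more centralized vowel space described above.
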
Contributors: Wasson, Sarah Alicia (Co-author) / Weinhold, Juliet (Thesis director) / Daliri, Ayoub (Committee member) / College of Health Solutions (Contributor) / Hugh Downs School of Human Communication (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Description

Cochlear implants (CIs) successfully restore hearing sensation to profoundly deaf patients, but their performance is limited by poor spectral resolution. Acoustic CI simulation has been widely used in normal-hearing (NH) listeners to study the effect of spectral resolution on speech perception while avoiding patient-related confounds. It is unclear how speech production may change with the degree of spectral degradation of auditory feedback as experienced by CI users. In this study, a real-time sinewave CI simulation was developed to provide NH subjects with auditory feedback of different spectral resolutions (1, 2, 4, and 8 channels). NH subjects were asked to produce and identify vowels, as well as recognize sentences, while listening to the real-time CI simulation. The results showed that sentence recognition scores with the real-time CI simulation improved with more channels, similar to those with the traditional offline CI simulation. Perception of a vowel continuum from “HEAD” to “HAD” was near chance with 1, 2, and 4 channels, and greatly improved with 8 channels and full spectrum. The spectral resolution of auditory feedback did not significantly affect any acoustic feature of vowel production (e.g., vowel space area, mean amplitude, or mean and variability of fundamental and formant frequencies). There was no correlation between vowel production and perception. The lack of effect of auditory feedback spectral resolution on vowel production was likely due to the limited exposure of NH subjects to the CI simulation and the limited frequency range covered by the sinewave carriers of the CI simulation. Future studies should investigate the effects of various CI processing parameters on speech production using a noise-band CI simulation.
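
For context, here is a stripped-down offline sinewave vocoder of the general kind that the study's real-time simulation implements. This is not the thesis's implementation: the corner frequencies, envelope cutoff, and filter orders are assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def sinewave_vocode(x, fs, n_channels=4, f_lo=200.0, f_hi=7000.0, env_cutoff=160.0):
    """Offline sinewave CI simulation: band-split the signal, extract per-band
    envelopes, and re-impose them on sine carriers at each band's center frequency."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)      # log-spaced band edges
    t = np.arange(len(x)) / fs
    env_sos = butter(4, env_cutoff, btype="low", fs=fs, output="sos")
    out = np.zeros(len(x))
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        band = sosfiltfilt(band_sos, x)
        envelope = sosfiltfilt(env_sos, np.abs(band))      # rectify + lowpass
        carrier = np.sin(2 * np.pi * np.sqrt(lo * hi) * t) # sine at geometric band center
        out += np.clip(envelope, 0, None) * carrier
    return out / max(np.max(np.abs(out)), 1e-9)            # normalize output level

# Usage with a synthetic 1-second signal standing in for recorded speech.
fs = 16000
speech = np.random.randn(fs)
for n in (1, 2, 4, 8):
    _ = sinewave_vocode(speech, fs, n_channels=n)
```
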
Contributors: Perez Lustre, Sarahi (Author) / Luo, Xin (Thesis director) / Daliri, Ayoub (Committee member) / Division of Teacher Preparation (Contributor) / College of Health Solutions (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description

This longitudinal study aimed to determine whether significant differences existed between the baseline inaccurate signals of the /r/ phoneme for children who eventually acquire /r/ and those who do not. Seventeen participants ages 5-8 who had not acquired /r/ in any of its allophonic contexts were recorded approximately every 3 months from the age of recruitment until they either acquired /r/ in conversation (80% accuracy) or turned eight years old. The recorded audio files were trimmed and labelled using Praat, and signal processing was used to compare initial and final recordings of three allophonic variations of /r/ (vocalic, prevocalic, postvocalic) for each participant. Differences were described using Mel-log spectral plots. For each age group, initial recordings of participants who eventually acquired /r/ were compared to those of participants who did not. Participants who had not acquired /r/ and had yet to turn eight years old were compared according to whether they were perceived to be improving or not improving. Significant differences in Mel-log spectral plots will be discussed, and the implications of baseline differences will be highlighted, specifically with respect to the feasibility of identifying predictive markers for acquisition or non-acquisition of the difficult /r/ phoneme.
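
A rough sketch of producing a Mel-log spectral profile for a labelled /r/ segment so that initial and final recordings can be compared. It leans on librosa and uses placeholder file names and parameters rather than the study's actual pipeline.

```python
import numpy as np
import librosa

def mel_log_profile(path, n_mels=40, fmax=8000):
    """Time-averaged Mel-log spectrum (dB) of an audio file containing one /r/ token."""
    y, sr = librosa.load(path, sr=None)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels, fmax=fmax)
    return librosa.power_to_db(mel).mean(axis=1)

# Hypothetical file names for one participant's first and last recordings.
initial = mel_log_profile("participant01_initial_r.wav")
final = mel_log_profile("participant01_final_r.wav")
print("mean dB change per Mel band:", np.round(final - initial, 1))
```
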
Contributors: Hom, Rachel (Author) / Weinhold, Juliet (Thesis director) / Daliri, Ayoub (Committee member) / College of Health Solutions (Contributor) / School of International Letters and Cultures (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description

Previous research has shown that auditory modulation may be affected by pure-tone stimuli played prior to the onset of speech production. In this experiment, we examined the specificity of the auditory stimulus by implementing congruent and incongruent speech sounds in addition to a non-speech sound. Electroencephalography (EEG) data were recorded from eleven adult subjects in both speaking (speech planning) and silent reading (no speech planning) conditions. Data were analyzed manually as well as with MATLAB code written to combine data sets and calculate auditory modulation (suppression). Results for the P200 component showed that modulation was larger for incongruent stimuli than for congruent stimuli; however, this was not the case for the N100 component. The data for the pure tone could not be analyzed because the intensity of this stimulus was substantially lower than that of the speech stimuli. Overall, the results indicated that the P200 component plays a significant role in processing stimuli and determining their relevance; this result is consistent with the role of the P200 component in high-level analysis of speech and perceptual processing. This experiment is ongoing, and we hope to obtain data from more subjects to support the current findings.
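
A bare-bones sketch of a speaking-induced modulation (suppression) measure of the kind described above: component amplitudes are taken from averaged ERPs in the speaking and silent-reading conditions and then differenced. The epochs, sampling rate, and latency windows are assumptions, not the thesis's MATLAB analysis.

```python
import numpy as np

FS = 1000  # assumed sampling rate (Hz); epochs time-locked to stimulus onset

def peak_amplitude(erp, window_ms, polarity):
    """N100 (polarity=-1) or P200 (polarity=+1) peak within a latency window."""
    lo, hi = (int(ms * FS / 1000) for ms in window_ms)
    return erp[lo:hi].min() if polarity < 0 else erp[lo:hi].max()

def modulation(erp_reading, erp_speaking, window_ms, polarity):
    """Suppression index: reduction in absolute peak amplitude during speech planning."""
    return (abs(peak_amplitude(erp_reading, window_ms, polarity))
            - abs(peak_amplitude(erp_speaking, window_ms, polarity)))

# Hypothetical averaged ERPs (microvolts) for one subject and one stimulus type.
reading = np.random.randn(600)
speaking = np.random.randn(600)
print("N100 modulation:", modulation(reading, speaking, (80, 150), -1))
print("P200 modulation:", modulation(reading, speaking, (150, 250), +1))
```
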
Contributors: Taylor, Megan Kathleen (Author) / Daliri, Ayoub (Thesis director) / Liss, Julie (Committee member) / School of Life Sciences (Contributor) / School of International Letters and Cultures (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Description

In the past, researchers have studied the elements of speech and how they work together in the human brain. Auditory feedback, an important aid in speech production, provides information to speakers and allows them to determine whether the prediction of their speech matches their actual production. The speech motor system uses auditory goals to determine errors in its auditory output during vowel production, and we learn from discrepancies between our predictions and auditory feedback. In this study, we examined error assessment processes by systematically manipulating the correspondence between speech motor outputs and their auditory consequences during speech. We conducted a study (n = 14 adults) in which participants’ auditory feedback was perturbed to test their learning rate in two conditions. During the trials, participants repeated CVC words and were instructed to prolong the vowel each time. The adaptation trials were used to examine reliance on auditory feedback versus speech prediction by systematically changing the weight of auditory feedback. Participants heard their perturbed feedback through insert earphones in real time. Each speaker’s auditory feedback was perturbed with task-relevant or task-irrelevant errors, and these perturbations were introduced either gradually or suddenly. We found that adaptation was less extensive with task-irrelevant errors, that adaptation did not saturate significantly in the sudden condition, and that adaptation in the task-relevant condition, which was expected to be more extensive and faster, was instead closer to the rate of adaptation observed with the task-irrelevant perturbation. Though adjustments are necessary, we found an efficient way for speakers to rely on auditory feedback more than on their predictions. Furthermore, this research opens the door to future investigations of adaptation in speech and has implications for clinical applications (e.g., speech therapy).
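
To make the gradual-versus-sudden manipulation and the notion of an adaptation rate concrete, here is a hypothetical sketch: perturbation schedules are built as a ramp versus a step, and a simple exponential is fit to a simulated adaptation curve. None of the numbers come from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

N_TRIALS, MAX_SHIFT = 120, 0.3        # placeholder trial count and 30% maximum shift

# Perturbation schedules: gradual ramp over the first 60 trials vs. immediate step.
gradual = np.minimum(np.arange(N_TRIALS) / 60.0, 1.0) * MAX_SHIFT
sudden = np.full(N_TRIALS, MAX_SHIFT)

def exp_adaptation(trial, rate, asymptote):
    """Exponential approach toward an adaptation asymptote."""
    return asymptote * (1.0 - np.exp(-rate * trial))

# Simulated adaptation curve (fraction of the shift compensated), for illustration only.
trials = np.arange(N_TRIALS)
observed = exp_adaptation(trials, 0.05, 0.6) + np.random.normal(0, 0.03, N_TRIALS)

(rate, asymptote), _ = curve_fit(exp_adaptation, trials, observed, p0=(0.05, 0.5))
print(f"estimated rate={rate:.3f} per trial, asymptote={asymptote:.2f}")
```
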
Contributors: Lukowiak, Ariana (Author) / Daliri, Ayoub (Thesis director) / Rogalsky, Corianne (Committee member) / Sanford School of Social and Family Dynamics (Contributor) / College of Health Solutions (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05