Matching Items (3)
Description

The brain continuously monitors speech output to detect potential errors between its sensory prediction and its sensory production (Daliri et al., 2020). When the brain encounters an error, it generates a corrective motor response, usually in the opposite direction, to reduce the effect of the error. Previous studies have shown that the type of auditory error received may impact a participant’s corrective response. In this study, we examined whether participants respond differently to categorical or non-categorical errors. We applied two types of perturbation in real-time by shifting the first formant (F1) and second formant (F2) at three different magnitudes. The vowel /ɛ/ was shifted toward the vowel /æ/ in the categorical perturbation condition. In the non-categorical perturbation condition, the vowel /ɛ/ was shifted to a sound outside of the vowel quadrilateral (increasing both F1 and F2). Our results showed that participants responded to the categorical perturbation while they did not respond to the non-categorical perturbation. Additionally, we found that in the categorical perturbation condition, as the magnitude of the perturbation increased, the magnitude of the response increased. Overall, our results suggest that the brain may respond differently to categorical and non-categorical errors, and the brain is highly attuned to errors in speech.
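The two perturbation conditions can be sketched as simple operations on the first two formants. This is an illustrative sketch only: the formant values for /ɛ/ and /æ/ below are rough textbook averages, and the magnitudes and step size are placeholders, not the values used in the study.

```python
# Assumed (illustrative) formant frequencies in Hz; not taken from the study.
E_F1, E_F2 = 580.0, 1800.0    # /ɛ/
AE_F1, AE_F2 = 690.0, 1660.0  # /æ/

def categorical_shift(f1, f2, magnitude):
    """Shift /ɛ/ toward /æ/: interpolate along the F1/F2 line by `magnitude` (0..1)."""
    return (f1 + magnitude * (AE_F1 - f1),
            f2 + magnitude * (AE_F2 - f2))

def non_categorical_shift(f1, f2, magnitude, step=100.0):
    """Raise both F1 and F2, pushing the vowel outside the quadrilateral.
    `step` is an assumed scaling constant, not a value from the study."""
    return (f1 + magnitude * step, f2 + magnitude * step)

# Three perturbation magnitudes, mirroring the three-magnitude design.
for m in (0.25, 0.5, 1.0):
    print(categorical_shift(E_F1, E_F2, m), non_categorical_shift(E_F1, E_F2, m))
```

Note that the categorical shift moves F1 up and F2 down (toward /æ/), while the non-categorical shift moves both formants up, which is what places the perturbed vowel outside the vowel quadrilateral.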

Contributors: Cincera, Kirsten Michelle (Author) / Daliri, Ayoub (Thesis director) / Azuma, Tamiko (Committee member) / School of Sustainability (Contributor) / College of Health Solutions (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description
A cochlear implant (CI) successfully restores hearing sensation to profoundly deaf patients, but its performance is limited by poor spectral resolution. Acoustic CI simulation has been widely used in normal-hearing (NH) listeners to study the effect of spectral resolution on speech perception while avoiding patient-related confounds. It is unclear how speech production may change with the degree of spectral degradation of auditory feedback as experienced by CI users. In this study, a real-time sinewave CI simulation was developed to provide NH subjects with auditory feedback at different spectral resolutions (1, 2, 4, and 8 channels). NH subjects were asked to produce and identify vowels, as well as recognize sentences, while listening to the real-time CI simulation. The results showed that sentence recognition scores with the real-time CI simulation improved with more channels, similar to those with the traditional off-line CI simulation. Perception of a vowel continuum from “HEAD” to “HAD” was near chance with 1, 2, and 4 channels, and greatly improved with 8 channels and full spectrum. The spectral resolution of auditory feedback did not significantly affect any acoustic feature of vowel production (e.g., vowel space area, mean amplitude, mean and variability of fundamental and formant frequencies). There was no correlation between vowel production and perception. The lack of effect of auditory feedback spectral resolution on vowel production was likely due to the limited exposure of NH subjects to the CI simulation and the limited frequency ranges covered by the sinewave carriers of the CI simulation. Future studies should investigate the effects of various CI processing parameters on speech production using a noise-band CI simulation.
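A sinewave CI simulation of the kind described above can be sketched as a channel vocoder: the signal is split into frequency bands, each band's amplitude envelope is extracted, and the envelope modulates a sine tone at that band's centre frequency. This is a minimal sketch, not the simulation used in the study; the band edges, envelope cutoff, and filter orders below are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def sinewave_vocode(x, fs, n_channels=4, lo=200.0, hi=7000.0):
    """Minimal sinewave vocoder: split `x` into `n_channels` log-spaced bands,
    extract each band's amplitude envelope, and use it to modulate a sine
    carrier at the band's geometric centre frequency."""
    edges = np.logspace(np.log10(lo), np.log10(hi), n_channels + 1)
    t = np.arange(len(x)) / fs
    out = np.zeros(len(x))
    for f_lo, f_hi in zip(edges[:-1], edges[1:]):
        # Band-pass filter for this analysis channel.
        b, a = butter(4, [f_lo / (fs / 2), f_hi / (fs / 2)], btype="band")
        band = filtfilt(b, a, x)
        # Envelope via rectification + low-pass (cutoff assumed at 160 Hz).
        be, ae = butter(2, 160.0 / (fs / 2))
        env = np.clip(filtfilt(be, ae, np.abs(band)), 0.0, None)
        fc = np.sqrt(f_lo * f_hi)  # geometric centre frequency of the band
        out += env * np.sin(2 * np.pi * fc * t)
    return out
```

With a single channel almost all spectral detail is discarded, while eight channels retain enough envelope structure across frequency to support vowel and sentence recognition, consistent with the pattern reported above.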
Contributors: Perez Lustre, Sarahi (Author) / Luo, Xin (Thesis director) / Daliri, Ayoub (Committee member) / Division of Teacher Preparation (Contributor) / College of Health Solutions (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
Instrumental music has been used to evoke natural environments and their qualities for centuries, and composers have employed a variety of methods to invoke such sensations in their listeners. When composers and sound teams for video game soundtracks write pieces to accompany in-game settings, they may use a similar set of strategies. The nature of these tracks as accompaniment to an interactive visual medium, and as pieces that must be able to loop indefinitely, leads them to emphasize environment over emotion, and thus draws out or exaggerates these same techniques. This study seeks to understand the relationships between the acoustics of various setting backing tracks and the perceptual qualities of the environments listeners feel they evoke, using the statistical method of multidimensional scaling. The relationships of three perceptual factors (coldness, brightness, wetness) and two acoustic factors (beats-per-minute, spectral envelope slope) are of greatest interest in this study.
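The abstract names multidimensional scaling as its statistical method. As a sketch of the idea, here is a classical (Torgerson) MDS implementation in NumPy: it embeds a matrix of pairwise dissimilarities between stimuli into a low-dimensional space whose axes can then be interpreted against perceptual factors. The study's actual MDS variant and software are not specified here, so this is an illustration of the technique, not the study's pipeline.

```python
import numpy as np

def classical_mds(d, k=2):
    """Classical (Torgerson) MDS: embed an n x n dissimilarity matrix `d`
    into `k` dimensions via double-centring and eigendecomposition."""
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n      # centring matrix
    b = -0.5 * j @ (d ** 2) @ j              # double-centred Gram matrix
    w, v = np.linalg.eigh(b)                 # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:k]            # keep the k largest
    w, v = w[idx], v[:, idx]
    return v * np.sqrt(np.maximum(w, 0.0))   # n x k coordinates
```

Given perceptual dissimilarity ratings between backing tracks, the resulting coordinates can be correlated with acoustic measures such as tempo or spectral envelope slope to interpret each perceptual dimension.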
Contributors: Jackson, Jalen (Author) / Azuma, Tamiko (Thesis director) / Patten, Kristopher (Committee member) / Barrett, The Honors College (Contributor) / Speech & Hearing Science (Contributor)
Created: 2022-05