Matching Items (5)

Description
This research is focused on two separate but related topics. The first uses an electroencephalographic (EEG) brain-computer interface (BCI) to explore the phenomenon of motor learning transfer. The second takes a closer look at the EEG-BCI itself and tests an alternate way of mapping EEG signals into machine commands. We test whether motor learning transfer is more related to the use of shared neural structures between imagery and motor execution or to more generalized cognitive factors. Using an EEG-BCI, we train one group of participants to control the movements of a cursor using embodied motor imagery. A second group is trained to control the cursor using abstract motor imagery. A third control group practices moving the cursor using an arm and finger on a touch screen. We hypothesized that if motor learning transfer is related to the use of shared neural structures, then the embodied motor imagery group would show more learning transfer than the abstract imagery group. If, on the other hand, motor learning transfer results from more general cognitive processes, then the abstract motor imagery group should also demonstrate motor learning transfer to the manual performance of the same task. Our findings support the hypothesis that motor learning transfer is due to the use of shared neural structures between imagery and motor execution of a task. The abstract group showed no motor learning transfer despite being better at EEG-BCI control than the embodied group. The fact that more participants were able to learn EEG-BCI control using abstract imagery suggests that abstract imagery may be more suitable for EEG-BCIs for some disabilities, while embodied imagery may be more suitable for others. In Part 2, EEG data collected in the above experiment were used to train an artificial neural network (ANN) to map EEG signals to machine commands. We found that our open-source ANN, which uses spectrograms generated from SFFTs, is fundamentally different from and in some ways superior to Emotiv's proprietary method. Our use of novel combinations of existing technologies, along with abstract and embodied imagery, facilitates adaptive customization of EEG-BCI control to meet the needs of individual users.
Contributors: da Silva, Flavio J. K. (Author) / McBeath, Michael K. (Thesis advisor) / Helms Tillery, Stephen (Committee member) / Presson, Clark (Committee member) / Sugar, Thomas (Committee member) / Arizona State University (Publisher)
Created: 2013
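
As a rough illustration of the Part 2 pipeline described above, the Python sketch below builds spectrogram features from windowed EEG epochs and trains a small feed-forward network to output discrete commands. It is a minimal sketch only: the sampling rate, channel count, window settings, network size, and the two command labels are illustrative assumptions, not the configuration or the open-source ANN from the dissertation.

```python
# Illustrative sketch only: maps windowed EEG epochs to discrete commands
# via spectrogram features and a small feed-forward network. Channel count,
# sampling rate, window length, and labels are assumptions, not the
# dissertation's actual configuration.
import numpy as np
from scipy.signal import spectrogram
from sklearn.neural_network import MLPClassifier

FS = 128            # assumed EEG sampling rate (Hz)
N_CHANNELS = 14     # assumed headset channel count
EPOCH_SECONDS = 2   # assumed length of one imagery epoch

def epoch_to_features(epoch):
    """Turn one (channels x samples) EEG epoch into a flat spectrogram vector."""
    feats = []
    for ch in epoch:                      # one spectrogram per channel
        _, _, sxx = spectrogram(ch, fs=FS, nperseg=64, noverlap=32)
        feats.append(np.log(sxx + 1e-12).ravel())  # log power, flattened
    return np.concatenate(feats)

# Fake training data standing in for recorded imagery epochs;
# labels 0/1 might correspond to "move left" / "move right" commands.
rng = np.random.default_rng(0)
epochs = rng.standard_normal((200, N_CHANNELS, FS * EPOCH_SECONDS))
labels = rng.integers(0, 2, size=200)

X = np.array([epoch_to_features(e) for e in epochs])
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```
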
Description
This project investigates the gleam-glum effect, a well-replicated phonetic emotion association in which words with the [i] vowel-sound (as in “gleam”) are judged more emotionally positive than words with the [ʌ] vowel-sound (as in “glum”). The effect is observed across different modalities and languages and is moderated by mouth movements relevant to word production. This research presents and tests an articulatory explanation for this association in three experiments. Experiment 1 supported the articulatory explanation by comparing recordings of 71 participants completing an emotional recall task and a word read-aloud task, showing that oral movements were more similar between positive emotional expressions and [i] articulation, and between negative emotional expressions and [ʌ] articulation. Experiment 2 partially supported the explanation with 98 YouTube recordings of natural speech. In Experiment 3, 149 participants judged emotions expressed by a speaker during [i] and [ʌ] articulation. Contradicting the robust phonetic emotion association, participants more frequently judged the speaker’s [ʌ] articulatory movements to be positive emotional expressions and the [i] articulatory movements to be negative emotional expressions. This is likely due to other visual emotional cues unrelated to oral movements, as well as the order of the word lists read by the speaker. Overall, findings from the current project support an articulatory explanation for the gleam-glum effect, which has major implications for language and communication.
Contributors: Yu, Shin-Phing (Author) / McBeath, Michael K. (Thesis advisor) / Glenberg, Arthur M. (Committee member) / Stone, Greg O. (Committee member) / Coza, Aurel (Committee member) / Santello, Marco (Committee member) / Arizona State University (Publisher)
Created: 2023
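
Experiment 1 above rests on quantifying how similar two sets of oral movements are. The Python sketch below shows one generic way such a comparison could be scored, by correlating frame-by-frame mouth-shape measurements; the two features used (mouth width and openness) and the fake data are assumptions for illustration only, not the dissertation's coding scheme or analysis.

```python
# Illustrative sketch: one way to quantify how similar two sets of oral
# movements are, using frame-by-frame mouth-shape measurements. The specific
# features (mouth width and openness per video frame) are assumptions for
# illustration, not the dissertation's coding scheme.
import numpy as np

def movement_similarity(traj_a, traj_b):
    """Mean Pearson correlation across matched feature dimensions."""
    sims = []
    for a, b in zip(traj_a.T, traj_b.T):   # iterate over feature columns
        sims.append(np.corrcoef(a, b)[0, 1])
    return float(np.mean(sims))

# Fake (frames x features) trajectories: columns = mouth width, openness.
rng = np.random.default_rng(1)
smile_expression = rng.standard_normal((60, 2))
ee_articulation = smile_expression + 0.3 * rng.standard_normal((60, 2))  # similar
uh_articulation = rng.standard_normal((60, 2))                           # unrelated

print("positive expression vs [i]:", movement_similarity(smile_expression, ee_articulation))
print("positive expression vs [ʌ]:", movement_similarity(smile_expression, uh_articulation))
```
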
Description
Watanabe, Náñez, and Sasaki (2001) introduced a phenomenon they named “task-irrelevant perceptual learning,” in which near-threshold stimuli that are not essential to a given task can be associatively learned when consistently and concurrently paired with the focal task. The present study employs a visual paired-shapes recognition task, using colored polygon targets as salient attended focal stimuli, with the goal of comparing the increases in perceptual sensitivity observed when near-threshold stimuli are temporally paired in different ways with focal targets. Experiment 1 separated and compared the target-acquisition and target-recognition phases and revealed that sensitivity improved most when the near-threshold motion stimuli were paired with the focal target-acquisition phase. The measures of sensitivity improvement were motion detection, critical flicker fusion threshold (CFFT), and letter-orientation decoding. Experiment 2 tested perceptual learning of near-threshold stimuli when they were offset from the focal stimulus presentation by ±350 ms. Performance improvements in motion detection, CFFT, and decoding were significantly greater for the group in which the near-threshold motion was presented after the focal target. Experiment 3 showed that participants with reading difficulties who were exposed to focal target-acquisition training improved in sensitivity on all visual measures. Experiment 4 tested whether near-threshold stimulus learning occurred cross-modally with auditory stimuli and served as an active control for the first three experiments. Here, a tone was paired with all focal stimuli, but the tone was 1 Hz higher or lower when paired with the targeted focal stimuli associated with recognition. In Experiment 4, there was no improvement in visual sensitivity, but there was significant improvement in tone discrimination. Thus, this study as a whole confirms that pairing near-threshold stimuli with focal stimuli can improve performance, whether in tone discrimination alone or in motion detection, CFFT, and letter decoding. Findings further support the thesis that the act of trying to remember a focal target elicited greater associative learning of a correlated near-threshold stimulus than did the act of recognizing a target. Finally, these findings indicate that we have developed a visual learning paradigm that may help mitigate some of the visual deficits often experienced by individuals with reading disabilities.
Contributors: Holloway, Steven Robert (Author) / McBeath, Michael K. (Thesis advisor) / Macknik, Stephen (Committee member) / Homa, Donald (Committee member) / Náñez, Sr., José E. (Committee member) / Arizona State University (Publisher)
Created: 2016
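
The description above repeatedly refers to gains in perceptual sensitivity. The sketch below computes d′ (d-prime), the standard signal-detection index of sensitivity, from hit and false-alarm counts, purely to make that notion concrete; the dissertation may use a different sensitivity statistic, and the counts shown are hypothetical.

```python
# Illustrative sketch: d' (d-prime), a standard signal-detection index of
# perceptual sensitivity, computed from hit and false-alarm rates. Shown only
# to make "increase in sensitivity" concrete; the counts are hypothetical.
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    z = NormalDist().inv_cdf
    # Small correction keeps rates away from 0 and 1 so z() stays finite.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return z(hit_rate) - z(fa_rate)

# Hypothetical pre- vs post-training motion-detection performance.
print("before training:", round(d_prime(30, 20, 18, 32), 2))
print("after training: ", round(d_prime(42, 8, 10, 40), 2))
```
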
Description
Auditory scene analysis (ASA) is the process through which listeners parse and organize their acoustic environment into relevant auditory objects. ASA functions by exploiting natural regularities in the structure of auditory information. The current study investigates spectral envelope and its contribution to the perception of changes in pitch and loudness. Experiment 1 constructs a perceptual continuum of twelve f0- and intensity-matched vowel phonemes (i.e., a pure timbre manipulation) and reveals spectral envelope as a primary organizational dimension. The extremes of this dimension are [i] (as in “bee”) and [ʌ] (as in “bun”). Experiment 2 measures the strength of the relationship between produced f0 and the previously observed phonetic-pitch continuum at three different levels of phonemic constraint. Scat performances and, to a lesser extent, recorded interviews were found to exhibit changes in accordance with the natural regularity; specifically, f0 changes were correlated with the phoneme pitch-height continuum. The more constrained case of lyrical singing did not exhibit the natural regularity. Experiment 3 investigates participant ratings of pitch and loudness as stimuli vary in f0, intensity, and position along the phonetic-pitch continuum. Psychophysical functions derived from the results reveal that moving from [i] to [ʌ] is equivalent to a 0.38 semitone decrease in f0 and a 0.75 dB decrease in intensity. Experiment 4 examines the potentially functional aspect of the relationship among pitch, loudness, and spectral envelope. Detection thresholds of stimuli in which all three dimensions change congruently (f0 increase, intensity increase, [ʌ] to [i]) or incongruently (no f0 change, intensity increase, [i] to [ʌ]) are compared using an objective version of the method of limits. Congruent changes did not provide a detection benefit over incongruent changes; however, when the contribution of phoneme change was removed, congruent changes did offer a slight detection benefit, as in previous research. While this relationship does not offer a detection benefit at threshold, there is a natural regularity for humans to produce phonemes at higher f0s according to their relative position on the pitch-height continuum. Likewise, humans have a bias to detect pitch and loudness changes in phoneme sweeps in accordance with this natural regularity.
Contributors: Patten, K. Jakob (Author) / McBeath, Michael K. (Thesis advisor) / Amazeen, Eric L. (Committee member) / Glenberg, Arthur W. (Committee member) / Zhou, Yi (Committee member) / Arizona State University (Publisher)
Created: 2017
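
The equivalences reported in Experiment 3 can be translated into ordinary frequency and intensity ratios with standard conversions (a semitone is a factor of 2^(1/12) in frequency; a decibel is 10·log10 of a power ratio). The snippet below performs just that arithmetic on the 0.38 semitone and 0.75 dB figures above; it adds no results beyond those conversions.

```python
# Illustrative arithmetic for the equivalences reported above: a 0.38 semitone
# change corresponds to a frequency ratio of 2**(0.38/12), and a 0.75 dB change
# to a power ratio of 10**(0.75/10). These are textbook conversions only.
semitones = 0.38
freq_ratio = 2 ** (semitones / 12)     # ~1.022, i.e. about a 2.2% change in f0
print(f"0.38 semitones -> frequency ratio {freq_ratio:.4f}")

db = 0.75
power_ratio = 10 ** (db / 10)          # ~1.19, i.e. about a 19% change in power
print(f"0.75 dB -> intensity (power) ratio {power_ratio:.3f}")

# Example: starting from a 220 Hz vowel, the equivalent f0 shift is small.
print(f"220 Hz shifted by 0.38 semitones -> {220 * freq_ratio:.1f} Hz")
```
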
Description
Color perception has been widely studied and well modeled with respect to combining visible electromagnetic frequencies, yet new technology provides the means to better explore and test novel temporal frequency characteristics of color perception. Experiment 1 tests how reliably participants categorize static spectral rainbow colors, which can be a useful tool for efficiently identifying individuals with functional dichromacy, trichromacy, and tetrachromacy. The findings confirm that all individuals discern the four principal opponent-process colors (red, yellow, green, and blue), with normal observers and potential tetrachromats seeing more distinct colors than color-blind individuals. Experiment 2 tests the moving flicker fusion rate of the central electromagnetic frequencies within each color category found in Experiment 1, as a test of the Where system. It then compares this to the maximum temporal processing rate for discriminating the direction of hue change with colors displayed serially, as a test of the What system. The findings confirm respective processing thresholds of about 20 Hz for the Where system and 2-7 Hz for the What system. Experiment 3 tests conditions that optimize false colors based on the spinning Benham's top illusion. Findings indicate that the same four principal colors emerge as in Experiment 1, but at low saturation levels for trichromats, which diminish further for dichromats. Taken together, the three experiments provide an overview of the common categorical boundaries and temporal processing limits of human color vision.
Contributors: Krynen, Richard Chandler (Author) / McBeath, Michael K. (Thesis advisor) / Homa, Donald (Committee member) / Newman, Nathan (Committee member) / Stone, Greg (Committee member) / Arizona State University (Publisher)
Created: 2021
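
The Where and What thresholds reported above are frequencies; converting them to per-cycle durations (period = 1/frequency) can make the comparison more intuitive. The snippet below does only that arithmetic on the ~20 Hz and 2-7 Hz figures.

```python
# Illustrative arithmetic: converting the reported temporal thresholds into
# per-cycle durations. These are simple period = 1/frequency conversions of
# the ~20 Hz (Where) and 2-7 Hz (What) figures above, not new measurements.
def period_ms(hz):
    return 1000.0 / hz

print(f"Where system, ~20 Hz -> about {period_ms(20):.0f} ms per cycle")
print(f"What system,    7 Hz -> about {period_ms(7):.0f} ms per cycle")
print(f"What system,    2 Hz -> about {period_ms(2):.0f} ms per cycle")
```
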