Matching Items (6)
Description
Categories are often defined by rules regarding their features. These rules may be intensely complex, yet with sufficient practice we are often able to learn them. A possible explanation for how we arrive at consistent category judgments despite this difficulty is that we may define complex categories such as chairs, tables, or stairs by understanding the simpler rules defined by potential interactions with these objects. This concept, called grounding, allows for the learning and transfer of complex categorization rules if those rules can be expressed more simply by virtue of meaningful physical interactions. The present experiment tested this hypothesis by having participants complete either a Rule Based (RB) or Information Integration (II) categorization task with instructions to engage with the stimuli in either a non-interactive or interactive fashion. If participants were capable of grounding the categories, which were defined in the II task by a complex visual rule, to a simpler interactive rule, then participants with interactive instructions should outperform participants with non-interactive instructions. Results indicated that physical interaction with stimuli had a marginally beneficial effect on category learning, but this effect seemed most prevalent in participants who were engaged in an II task.
Contributors: Crawford, Thomas (Author) / Homa, Donald (Thesis advisor) / Glenberg, Arthur (Committee member) / McBeath, Michael (Committee member) / Brewer, Gene (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
The purpose of this study was to investigate the effect of partial exemplar experience on category formation and use. Participants had either complete or limited access to the three dimensions that defined the categories, with dimensions drawn from different modalities. The concept of a "crucial dimension" was introduced, and its role in category definition was explained. It was hypothesized that the effects of partial experience are explained not by a shifting of attention between dimensions (Taylor & Ross, 2009) but rather by an increased reliance on prototypical values used to fill in missing information during incomplete experiences. Results indicated that participants (1) do not fill in missing information with prototypical values, (2) integrate information less efficiently between different modalities than within a single modality, and (3) have difficulty learning only when partial experience prevents access to diagnostic information.
Contributors: Crawford, Thomas (Author) / Homa, Donald (Thesis advisor) / McBeath, Michael (Committee member) / Glenberg, Arthur (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
The present study explores the role of motion in the perception of form from dynamic occlusion, employing color to help isolate the contributions of the two visual pathways. Although the cells that respond to color cues in the environment usually feed into the ventral stream, humans can perceive motion based on chromatic cues. The current study used grey, green, and red stimuli to successively limit the amount of information available to the dorsal stream pathway, while providing roughly equal information to the ventral system. Twenty-one participants identified shapes that were presented in grey, green, and red and were defined by dynamic occlusion. The shapes were then presented again in a static condition, in which the maximum occlusions were presented as before but without motion. Results showed an interaction between presentation condition and speed: as the speed of presentation increased, performance in the motion conditions became significantly less accurate than in the static conditions. The grey and green motion conditions crossed static performance at the same speed, whereas the red motion condition crossed at a much slower speed. These data are consistent with a model of neural processing in which the main visual systems share information. Moreover, they support the notion that presenting stimuli in specific colors may help isolate perceptual pathways for scientific investigation. Given the potential for chromatic cues to target specific visual systems during dynamic object recognition, exploring these perceptual parameters may advance our understanding of human visual processing.
Contributors: Holloway, Steven R. (Author) / McBeath, Michael K. (Thesis advisor) / Homa, Donald (Committee member) / Macknik, Stephen L. (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Brain-computer interface (BCI) technology establishes communication between the brain and a computer, allowing users to control devices, machines, or virtual objects using their thoughts. This study investigates optimal conditions for learning to operate this interface. It compares two biofeedback methods, which dictate the relationship between brain activity and the movement of a virtual ball in a target-hitting task. Preliminary results indicate that a method in which the position of the virtual object directly relates to the amplitude of brain signals is most conducive to success. In addition, this research explores learning in the context of neural signals during training with a BCI task. Specifically, it investigates whether subjects can adapt to parameters of the interface without guidance. This experiment prompts subjects to modulate brain signals spectrally, spatially, and temporally, as well as differentially to discriminate between two different targets. However, subjects are given neither knowledge of these desired changes nor instruction on how to move the virtual ball. Preliminary analysis of signal trends suggests that some successful participants are able to adapt brain wave activity in certain pre-specified locations and frequency bands over time in order to achieve control. Future studies will further explore these phenomena, and future BCI projects will be informed by these methods, which offer insight into the creation of more intuitive and reliable BCI technology.
Contributors: Lancaster, Jenessa Mae (Co-author) / Appavu, Brian (Co-author) / Wahnoun, Remy (Co-author, Committee member) / Helms Tillery, Stephen (Thesis director) / Barrett, The Honors College (Contributor) / Harrington Bioengineering Program (Contributor) / Department of Psychology (Contributor)
Created: 2014-05
Description
It is a well-established finding in memory research that spacing or distributing information, as opposed to blocking it all together, results in enhanced memory for the learned material. Recently, researchers have investigated whether this spacing effect also benefits category learning. In a set of experiments, Carvalho and Goldstone (2013) demonstrated that a blocked presentation showed an advantage during learning but that the distributed presentation ultimately yielded better performance on a post-learning transfer test. However, we identified a major methodological issue in that study that we believe contaminates the results in a way that inflates and misrepresents learning levels. The present study aimed to correct this issue and re-examine whether a blocked or distributed presentation enhances the learning and subsequent generalization of categories. We also introduced two shaping variables, category size and distortion level at transfer, in addition to the mode of presentation (blocked versus distributed). Results showed no significant effect of mode of presentation at either the learning or transfer phase, supporting our concern about the previous study. Additional findings showed benefits of learning categories with a greater category size, as well as higher classification accuracy for novel stimuli at lower distortion levels.
Contributors: Jacoby, Victoria Leigh (Author) / Homa, Donald (Thesis director) / Brewer, Gene (Committee member) / Davis, Mary (Committee member) / Department of Psychology (Contributor) / Barrett, The Honors College (Contributor)
Created: 2017-12
Description
Watanabe, Náñez, and Sasaki (2001) introduced a phenomenon they named "task-irrelevant perceptual learning," in which near-threshold stimuli that are not essential to a given task can be associatively learned when consistently and concurrently paired with the focal task. The present study employs a visual paired-shapes recognition task, using colored polygon targets as salient attended focal stimuli, with the goal of comparing the increases in perceptual sensitivity observed when near-threshold stimuli are temporally paired in varying ways with focal targets. Experiment 1 separated and compared the target-acquisition and target-recognition phases and revealed that sensitivity improved most when the near-threshold motion stimuli were paired with the focal target-acquisition phase. The measures of sensitivity improvement were motion detection, critical flicker fusion threshold (CFFT), and letter-orientation decoding. Experiment 2 tested perceptual learning of near-threshold stimuli when they were offset from the focal stimulus presentation by ±350 ms. Performance improvements in motion detection, CFFT, and decoding were significantly greater for the group in which near-threshold motion was presented after the focal target. Experiment 3 showed that participants with reading difficulties who were exposed to focal target-acquisition training improved in sensitivity on all visual measures. Experiment 4 tested whether near-threshold stimulus learning occurred cross-modally with auditory stimuli and served as an active control for the first three experiments. Here, a tone was paired with all focal stimuli, but the tone was 1 Hz higher or lower when paired with the targeted focal stimuli associated with recognition. In Experiment 4, there was no improvement in visual sensitivity, but there was significant improvement in tone discrimination.
Thus, this study as a whole confirms that pairing near-threshold stimuli with focal stimuli can improve performance, whether in tone discrimination alone or in motion detection, CFFT, and letter decoding. The findings further support the thesis that the act of trying to remember a focal target elicits greater associative learning of correlated near-threshold stimuli than the act of recognizing a target. Finally, these findings suggest that we have developed a visual learning paradigm that may help mitigate some of the visual deficits often experienced by individuals with reading disabilities.
Contributors: Holloway, Steven Robert (Author) / McBeath, Michael K. (Thesis advisor) / Macknik, Stephen (Committee member) / Homa, Donald (Committee member) / Náñez, Sr., José E. (Committee member) / Arizona State University (Publisher)
Created: 2016