Matching Items (14)
Description
Most people are experts in some area of information; however, they may not be knowledgeable about other closely related areas. How knowledge is generalized to hierarchically related categories was explored. Past work has found little to no generalization to categories closely related to learned categories. These results do not fit well with other work focusing on attention during and after category learning. The current work attempted to merge these two areas of research by creating a category structure with the best chance to detect generalization. Participants learned order-level bird categories and family-level wading bird categories. Then participants completed multiple measures to test generalization to old wading bird categories, new wading bird categories, owl and raptor categories, and lizard categories. As expected, the generalization measures converged on a single overall pattern of generalization. No generalization was found, except for already learned categories. This pattern fits well with past work on generalization within a hierarchy, but does not fit well with theories of dimensional attention. Reasons why these findings do not match are discussed, as well as directions for future research.
Contributors: Lancaster, Matthew E (Author) / Homa, Donald (Thesis advisor) / Glenberg, Arthur (Committee member) / Chi, Michelene (Committee member) / Brewer, Gene (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Magicians are informal cognitive scientists who regularly test their hypotheses in the real world. As such, they can provide scientists with novel hypotheses for formal psychological research as well as a real-world context in which to study them. One domain where magic can directly inform science is the deployment of attention in time and across modalities. Both magicians and scientists have an incomplete understanding of how attention operates in time, rather than in space. However, magicians have highlighted a set of variables that can create moments of visual attentional suppression, which they call "off-beats," and these variables can speak to modern models of temporal attention. The current research examines two of these variables under conditions ranging from artificial laboratory tasks to the (almost) natural viewing of magic tricks. Across three experiments, I show that the detection of subtle dot probes in a noisy visual display and pieces of sleight of hand in magic tricks can be influenced by the seemingly irrelevant rhythmic qualities of auditory stimuli (cross-modal attentional entrainment) and processes of working memory updating (akin to the attentional blink).
Contributors: Barnhart, Anthony S (Author) / Goldinger, Stephen D. (Thesis advisor) / Glenberg, Arthur M. (Committee member) / Homa, Donald (Committee member) / Simons, Daniel J. (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Categories are often defined by rules regarding their features. These rules may be intensely complex, yet we are often able to learn them with sufficient practice. A possible explanation for how we arrive at consistent category judgments despite this complexity is that we may define complex categories such as chairs, tables, or stairs by understanding the simpler rules defined by potential interactions with these objects. This concept, called grounding, allows for the learning and transfer of complex categorization rules if said rules can be expressed more simply by virtue of meaningful physical interactions. The present experiment tested this hypothesis by having participants engage in either a Rule Based (RB) or Information Integration (II) categorization task with instructions to engage with the stimuli in either a non-interactive or interactive fashion. If participants were capable of grounding the categories, which were defined in the II task with a complex visual rule, to a simpler interactive rule, then participants with interactive instructions should outperform participants with non-interactive instructions. Results indicated that physical interaction with stimuli had a marginally beneficial effect on category learning, but this effect seemed most prevalent in participants who were engaged in an II task.
Contributors: Crawford, Thomas (Author) / Homa, Donald (Thesis advisor) / Glenberg, Arthur (Committee member) / McBeath, Michael (Committee member) / Brewer, Gene (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Previous research has shown that people can implicitly learn repeated visual contexts and use this information when locating relevant items. For example, when people are presented with repeated spatial configurations of distractor items or distractor identities in visual search, they become faster to find target stimuli in these repeated contexts over time (Chun and Jiang, 1998; 1999). Given that people learn these repeated distractor configurations and identities, might they also implicitly encode semantic information about distractors, if this information is predictive of the target location? We investigated this question with a series of visual search experiments using real-world stimuli within a contextual cueing paradigm (Chun and Jiang, 1998). Specifically, we tested whether participants could learn, through experience, that the target images they are searching for are always located near specific categories of distractors, such as food items or animals. We also varied the spatial consistency of target locations, in order to rule out implicit learning of repeated target locations. Results suggest that participants implicitly learned the target-predictive categories of distractors and used this information during search, although these results failed to reach significance. This lack of significance may have been due to the relative simplicity of the search task, however, and several new experiments are proposed to further investigate whether repeated category information can benefit search.
Contributors: Walenchok, Stephen C (Author) / Goldinger, Stephen D (Thesis advisor) / Azuma, Tamiko (Committee member) / Homa, Donald (Committee member) / Hout, Michael C (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Learning and transfer were investigated for a categorical structure in which relevant stimulus information could be mapped without loss from one modality to another. The category space was composed of three non-overlapping, linearly-separable categories. Each stimulus was composed of a sequence of on-off events that varied in duration and number of sub-events (complexity). Categories were learned visually, haptically, or auditorily, and transferred to the same or an alternate modality. The transfer set contained old, new, and prototype stimuli, and subjects made both classification and recognition judgments. The results showed an early learning advantage in the visual modality, with transfer performance varying among the conditions in both classification and recognition. In general, classification accuracy was highest for the category prototype, with false recognition of the category prototype higher in the cross-modality conditions. The results are discussed in terms of current theories in modality transfer, and shed preliminary light on categorical transfer of temporal stimuli.
Contributors: Ferguson, Ryan (Author) / Homa, Donald (Thesis advisor) / Goldinger, Stephen (Committee member) / Glenberg, Arthur (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
The purpose of this study was to investigate the effect of partial exemplar experience on category formation and use. Participants had either complete or limited access to the three dimensions that defined the categories, with the dimensions presented in different modalities. The concept of a "crucial dimension" was introduced and the role it plays in category definition was explained. It was hypothesized that the effects of partial experience are not explained by a shifting of attention between dimensions (Taylor & Ross, 2009) but rather by an increased reliance on prototypical values used to fill in missing information during incomplete experiences. Results indicated that participants (1) do not fill in missing information with prototypical values, (2) integrate information less efficiently between different modalities than within a single modality, and (3) have difficulty learning only when partial experience prevents access to diagnostic information.
Contributors: Crawford, Thomas (Author) / Homa, Donald (Thesis advisor) / McBeath, Michael (Committee member) / Glenberg, Arthur (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
The present study explores the role of motion in the perception of form from dynamic occlusion, employing color to help isolate the contributions of both visual pathways. Although the cells that respond to color cues in the environment usually feed into the ventral stream, humans can perceive motion based on chromatic cues. The current study was designed to use grey, green, and red stimuli to successively limit the amount of information available to the dorsal stream pathway, while providing roughly equal information to the ventral system. Twenty-one participants identified shapes that were presented in grey, green, and red and were defined by dynamic occlusion. The shapes were then presented again in a static condition where the maximum occlusions were presented as before, but without motion. Results showed an interaction between the motion and static conditions in that when the speed of presentation increased, performance in the motion conditions became significantly less accurate than in the static conditions. The grey and green motion conditions crossed static performance at the same point, whereas the red motion condition crossed at a much slower speed. These data are consistent with a model of neural processing in which the main visual systems share information. Moreover, they support the notion that presenting stimuli in specific colors may help isolate perceptual pathways for scientific investigation. Given the potential for chromatic cues to target specific visual systems in the performance of dynamic object recognition, exploring these perceptual parameters may help our understanding of human visual processing.
Contributors: Holloway, Steven R. (Author) / McBeath, Michael K. (Thesis advisor) / Homa, Donald (Committee member) / Macknik, Stephen L. (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Previous research from Rajsic et al. (2015, 2017) suggests that a visual form of confirmation bias arises during visual search for simple stimuli, under certain conditions, wherein people are biased to seek stimuli matching an initial cue color even when this strategy is not optimal. Furthermore, recent research from our lab suggests that varying the prevalence of cue-colored targets does not attenuate the visual confirmation bias, although people still fail to detect rare targets regardless of whether they match the initial cue (Walenchok et al., under review). The present investigation examines the boundary conditions of the visual confirmation bias under conditions of equal, low, and high cued-target frequency. Across experiments, I found that: (1) People are strongly susceptible to the low-prevalence effect, often failing to detect rare targets regardless of whether they match the cue (Wolfe et al., 2005). (2) However, they are still biased to seek cue-colored stimuli, even when such targets are rare. (3) Regardless of target prevalence, people employ strategies when search is made sufficiently burdensome with distributed items and large search sets. These results further support previous findings that the low-prevalence effect arises from a failure to perceive rare items (Hout et al., 2015), while visual confirmation bias is a bias of attentional guidance (Rajsic et al., 2015, 2017).
Contributors: Walenchok, Stephen Charles (Author) / Goldinger, Stephen D (Thesis advisor) / Azuma, Tamiko (Committee member) / Homa, Donald (Committee member) / Hout, Michael C (Committee member) / McClure, Samuel M. (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Previous studies have found that the detection of near-threshold stimuli is decreased immediately before movement and throughout movement production. This has been suggested to occur through the internal forward model processing an efferent copy of the motor command and creating a prediction that is used to cancel out the resulting sensory feedback. Currently, there are no published accounts of the perception of tactile signals for motor tasks and contexts related to the lips during both speech planning and production. In this study, we measured the responsiveness of the somatosensory system during speech planning using light electrical stimulation below the lower lip, comparing perception during mixed speaking and silent reading conditions. Participants were asked to judge whether a constant near-threshold electrical stimulation (subject-specific intensity, 85% detected at rest) was present during different time points relative to an initial visual cue. In the speaking condition, participants overtly produced target words shown on a computer monitor. In the reading condition, participants read the same target words silently to themselves without any movement or sound. We found that detection of the stimulus was attenuated during the speaking condition while remaining at a constant level close to the perceptual threshold throughout the silent reading condition. Perceptual modulation was most intense during speech production and showed some attenuation just prior to speech, during the planning period. This demonstrates that the responsiveness of the somatosensory system decreases significantly during speech production, and even milliseconds before speech is produced, which has implications for disorders such as stuttering and schizophrenia that involve pronounced somatosensory deficits.
Contributors: Mcguffin, Brianna Jean (Author) / Daliri, Ayoub (Thesis director) / Liss, Julie (Committee member) / Department of Psychology (Contributor) / School of Life Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
Previous research has shown that a loud acoustic stimulus can trigger an individual's prepared movement plan. This movement response is referred to as a startle-evoked movement (SEM). SEM has been observed in the stroke survivor population, where results have shown that SEM enhances single-joint movements that are usually performed with difficulty. While the presence of SEM in the stroke survivor population advances scientific understanding of movement capabilities following a stroke, published studies using the SEM phenomenon have examined only one joint. The ability of SEM to generate multi-jointed movements is understudied and consequently limits SEM as a potential therapy tool. In order to apply SEM as a therapy tool, however, the biomechanics of the arm in multi-jointed movement planning and execution must be better understood. Thus, the objective of our study was to evaluate whether SEM could elicit multi-joint reaching movements that were accurate in an unrestrained, two-dimensional workspace. Data were collected from ten subjects with no previous neck, arm, or brain injury. Each subject performed a reaching task to five Targets that were equally spaced in a semi-circle to create a two-dimensional workspace. The subject reached to each Target following a sequence of two non-startling acoustic cues: "Get Ready" and "Go". A loud acoustic stimulus was randomly substituted for the "Go" cue. We hypothesized that SEM is accessible and accurate for unrestricted multi-jointed reaching tasks in a functional workspace and is therefore independent of movement direction. Our results showed that SEM is possible in all five Target directions. The probability of evoking SEM and the movement kinematics (i.e., total movement time, linear deviation, and average velocity) to each Target were not statistically different. Thus, we conclude that SEM is possible in a functional workspace and is not dependent on where arm stability is maximized. Moreover, coordinated preparation and storage of a multi-jointed movement is indeed possible.
Contributors: Ossanna, Meilin Ryan (Author) / Honeycutt, Claire (Thesis director) / Schaefer, Sydney (Committee member) / Harrington Bioengineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-12