Matching Items (7)
Description
When people look for things in their environment, they use a target template - a mental representation of the object they are attempting to locate - to guide their attention around a scene and to assess incoming visual input, determining whether they have found what they are searching for. However, unlike observers in laboratory experiments, searchers in the real world rarely have perfect knowledge of their target's appearance. In five experiments (with nearly 1,000 participants), we examined how the precision of the observer's template affects the ability to conduct visual search. Specifically, we simulated template imprecision in two ways: first, by contaminating our searchers' templates with inaccurate features, and second, by introducing extraneous, unhelpful features to the template. In these experiments we recorded the eye movements of our searchers in order to make inferences about the extent to which attentional guidance and decision-making are hindered by template imprecision. We also examined a third way in which templates may become imprecise: namely, that they may deteriorate over time. Overall, our findings support a dual-function theory of the target template and highlight the importance of examining template precision in future research.
Contributors: Hout, Michael C. (Author) / Goldinger, Stephen D. (Thesis advisor) / Azuma, Tamiko (Committee member) / Homa, Donald (Committee member) / Reichle, Erik (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Recognition memory was investigated for naturalistic dynamic scenes. Although visual recognition for static objects and scenes has been investigated previously and found to be extremely robust in terms of fidelity and retention, visual recognition for dynamic scenes has received much less attention. In four experiments, participants viewed a number of clips from novel films and were then asked to complete a recognition test containing frames from the previously viewed films and difficult foil frames. Recognition performance was good when foils were taken from other parts of the same film (Experiment 1), but degraded greatly when foils were taken from unseen gaps within the viewed footage (Experiments 3 and 4). Removing all non-target frames had a serious effect on recognition performance (Experiment 2). Across all experiments, presenting the films as a random series of clips had no apparent effect on recognition performance. Patterns of accuracy and response latency in Experiments 3 and 4 appear to result from a serial-search process. It is concluded that visual representations of dynamic scenes may be stored as units of events, and that participants' old/new judgments of individual frames were better characterized by a cued-recall paradigm than by traditional recognition judgments.
Contributors: Ferguson, Ryan (Author) / Homa, Donald (Thesis advisor) / Goldinger, Stephen (Committee member) / Glenberg, Arthur (Committee member) / Brewer, Gene (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Previous research has shown that people can implicitly learn repeated visual contexts and use this information when locating relevant items. For example, when people are presented with repeated spatial configurations of distractor items or distractor identities in visual search, they become faster at finding target stimuli in these repeated contexts over time (Chun and Jiang, 1998, 1999). Given that people learn these repeated distractor configurations and identities, might they also implicitly encode semantic information about distractors, if this information is predictive of the target location? We investigated this question in a series of visual search experiments using real-world stimuli within a contextual cueing paradigm (Chun and Jiang, 1998). Specifically, we tested whether participants could learn, through experience, that the target images they were searching for were always located near specific categories of distractors, such as food items or animals. We also varied the spatial consistency of target locations in order to rule out implicit learning of repeated target locations. Results showed trends suggesting that participants implicitly learned the target-predictive categories of distractors and used this information during search, although these trends failed to reach statistical significance. This may have been due to the relative simplicity of the search task, however, and several new experiments are proposed to further investigate whether repeated category information can benefit search.
Contributors: Walenchok, Stephen C. (Author) / Goldinger, Stephen D. (Thesis advisor) / Azuma, Tamiko (Committee member) / Homa, Donald (Committee member) / Hout, Michael C. (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Learning and transfer were investigated for a categorical structure in which relevant stimulus information could be mapped without loss from one modality to another. The category space was composed of three non-overlapping, linearly separable categories. Each stimulus was composed of a sequence of on-off events that varied in duration and number of sub-events (complexity). Categories were learned visually, haptically, or auditorily, and then transferred to the same or an alternate modality. The transfer set contained old, new, and prototype stimuli, and subjects made both classification and recognition judgments. The results showed an early learning advantage in the visual modality, with transfer performance varying among the conditions in both classification and recognition. In general, classification accuracy was highest for the category prototype, and false recognition of the category prototype was higher in the cross-modality conditions. The results are discussed in terms of current theories of modality transfer and shed preliminary light on the categorical transfer of temporal stimuli.
Contributors: Ferguson, Ryan (Author) / Homa, Donald (Thesis advisor) / Goldinger, Stephen (Committee member) / Glenberg, Arthur (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
The purpose of this study was to investigate the effect of partial exemplar experience on category formation and use. Participants had either complete or limited access to the three dimensions, drawn from different modalities, that defined the categories. The concept of a "crucial dimension" was introduced, and its role in category definition was explained. It was hypothesized that the effects of partial experience are explained not by a shifting of attention between dimensions (Taylor and Ross, 2009) but rather by an increased reliance on prototypical values used to fill in missing information during incomplete experiences. Results indicated that participants (1) do not fill in missing information with prototypical values, (2) integrate information less efficiently between different modalities than within a single modality, and (3) have difficulty learning only when partial experience prevents access to diagnostic information.
Contributors: Crawford, Thomas (Author) / Homa, Donald (Thesis advisor) / McBeath, Michael (Committee member) / Glenberg, Arthur (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
The present study explores the role of motion in the perception of form from dynamic occlusion, employing color to help isolate the contributions of the two visual pathways. Although the cells that respond to color cues in the environment usually feed into the ventral stream, humans can perceive motion based on chromatic cues. The current study used grey, green, and red stimuli to successively limit the amount of information available to the dorsal-stream pathway while providing roughly equal information to the ventral system. Twenty-one participants identified shapes that were presented in grey, green, and red and were defined by dynamic occlusion. The shapes were then presented again in a static condition in which the maximum occlusions appeared as before, but without motion. Results showed an interaction between the motion and static conditions: as the speed of presentation increased, performance in the motion conditions became significantly less accurate than in the static conditions. The grey and green motion conditions crossed static performance at the same point, whereas the red motion condition crossed at a much slower speed. These data are consistent with a model of neural processing in which the main visual systems share information. Moreover, they support the notion that presenting stimuli in specific colors may help isolate perceptual pathways for scientific investigation. Given the potential for chromatic cues to target specific visual systems during dynamic object recognition, exploring these perceptual parameters may advance our understanding of human visual processing.
Contributors: Holloway, Steven R. (Author) / McBeath, Michael K. (Thesis advisor) / Homa, Donald (Committee member) / Macknik, Stephen L. (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Emergent processes can roughly be defined as processes that arise spontaneously from interactions without centralized control. People hold many robust misconceptions when explaining emergent process concepts such as natural selection and diffusion, because they lack a proper categorical representation of emergent processes and often misclassify them into the more familiar category of sequential processes. The two kinds of processes can be distinguished by their second-order features, which describe how one interaction relates to another. This study investigated whether teaching emergent second-order features can help people more correctly categorize new processes; it also compared different instructional methods for teaching those features. The prediction was that learning emergent features should help more than learning sequential features, because what most people lack is a representation of emergent processes. Results confirmed this: participants who generated emergent features and received the correct features as feedback were better at distinguishing the two kinds of processes than participants who rewrote second-order sequential features. Another finding was that participants who generated emergent features and then read the correct features as feedback distinguished the processes better than participants who only attempted to generate the emergent features without feedback. Finally, switching the order of instruction by teaching emergent features first and then asking participants to explain the difference between emergent and sequential features produced learning gains equivalent to those of the experimental group that received feedback. These results show that teaching emergent second-order features helps people categorize processes, and they point to efficient ways of teaching those features.
Contributors: Xu, Dongchen (Author) / Chi, Michelene (Thesis advisor) / Homa, Donald (Committee member) / Glenberg, Arthur (Committee member) / Arizona State University (Publisher)
Created: 2015