Matching Items (4)

Description

When people look for things in their environment, they use a target template (a mental representation of the object they are attempting to locate) to guide their attention around a scene and to assess incoming visual input, determining whether they have found what they are searching for. However, unlike participants in laboratory experiments, searchers in the real world rarely have perfect knowledge of their target's appearance. In five experiments (with nearly 1,000 participants), we examined how the precision of the observer's template affects visual search. Specifically, we simulated template imprecision in two ways: first, by contaminating our searchers' templates with inaccurate features, and second, by introducing extraneous, unhelpful features to the template. In those experiments we recorded our searchers' eye movements in order to make inferences about the extent to which attentional guidance and decision-making are hindered by template imprecision. We also examined a third way in which templates may become imprecise; namely, they may deteriorate over time. Overall, our findings support a dual-function theory of the target template and highlight the importance of examining template precision in future research.
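To make the imprecision manipulation concrete, here is a minimal, hypothetical sketch in which a template is a feature vector and imprecision is introduced either by replacing target features with inaccurate values or by appending extraneous ones; the feature representation, noise model, and cosine-similarity guidance score are assumptions for illustration, not the dissertation's actual stimuli or methods.

    # Hypothetical sketch: template imprecision as noise in a feature
    # vector. All dimensions and values here are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)

    def make_template(target, n_wrong=0, n_extra=0):
        """Derive a search template from a target feature vector.

        n_wrong : target features replaced with inaccurate values.
        n_extra : unhelpful (random) features appended to the template.
        """
        template = target.copy()
        wrong = rng.choice(target.size, size=n_wrong, replace=False)
        template[wrong] = rng.normal(size=n_wrong)  # contaminated features
        extras = rng.normal(size=n_extra)           # extraneous features
        return np.concatenate([template, extras])

    def match_score(template, item):
        """Cosine similarity between template and a scene item,
        comparing only the dimensions the item actually has."""
        t = template[: item.size]
        return float(t @ item / (np.linalg.norm(t) * np.linalg.norm(item)))

    target = rng.normal(size=16)
    precise = make_template(target)                    # faithful template
    imprecise = make_template(target, n_wrong=6, n_extra=8)

    print(match_score(precise, target))    # near 1.0: strong guidance
    print(match_score(imprecise, target))  # lower: degraded guidance

On this toy account, a lower match score for the true target would translate into weaker attentional guidance and slower verification, which is the pattern the eye-movement measures were designed to detect.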
Contributors: Hout, Michael C (Author) / Goldinger, Stephen D (Thesis advisor) / Azuma, Tamiko (Committee member) / Homa, Donald (Committee member) / Reichle, Erik (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Recognition memory was investigated for naturalistic dynamic scenes. Although visual recognition for static objects and scenes has been investigated previously and found to be extremely robust in terms of fidelity and retention, visual recognition for dynamic scenes has received much less attention. In four experiments, participants viewed a number of clips from novel films and then completed a recognition test containing frames from the previously viewed films along with difficult foil frames. Recognition performance was good when foils were taken from other parts of the same film (Experiment 1), but degraded greatly when foils were taken from unseen gaps within the viewed footage (Experiments 3 and 4). Removing all non-target frames had a pronounced effect on recognition performance (Experiment 2). Across all experiments, presenting the films as a random series of clips appeared to have no effect on recognition performance. Patterns of accuracy and response latency in Experiments 3 and 4 appear to result from a serial-search process. It is concluded that visual representations of dynamic scenes may be stored as units of events, and that participants' old/new judgments of individual frames were better characterized by a cued-recall paradigm than by traditional recognition judgments.
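Old/new recognition performance of this kind is conventionally summarized with signal-detection sensitivity (d'), separating discrimination ability from response bias. The sketch below shows that standard computation; the hit and false-alarm counts are invented placeholders, not data from these experiments.

    # Illustrative d-prime computation for old/new recognition judgments.
    # The counts below are invented for demonstration only.
    from statistics import NormalDist

    def d_prime(hits, misses, false_alarms, correct_rejections):
        """Sensitivity (d') with a log-linear correction so that
        hit/false-alarm rates of 0 or 1 stay finite."""
        hr = (hits + 0.5) / (hits + misses + 1)
        far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
        z = NormalDist().inv_cdf
        return z(hr) - z(far)

    # Easy case: foils drawn from other parts of the same film.
    print(d_prime(hits=80, misses=20, false_alarms=15, correct_rejections=85))
    # Hard case: foils drawn from unseen gaps within the viewed footage.
    print(d_prime(hits=55, misses=45, false_alarms=50, correct_rejections=50))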
Contributors: Ferguson, Ryan (Author) / Homa, Donald (Thesis advisor) / Goldinger, Stephen (Committee member) / Glenberg, Arthur (Committee member) / Brewer, Gene (Committee member) / Arizona State University (Publisher)
Created: 2014
Description

Previous research has shown that people can implicitly learn repeated visual contexts and use this information when locating relevant items. For example, when people are presented with repeated spatial configurations of distractor items or distractor identities in visual search, they become faster over time at finding target stimuli in these repeated contexts (Chun and Jiang, 1998; 1999). Given that people learn these repeated distractor configurations and identities, might they also implicitly encode semantic information about distractors, if this information is predictive of the target location? We investigated this question with a series of visual search experiments using real-world stimuli within a contextual cueing paradigm (Chun and Jiang, 1998). Specifically, we tested whether participants could learn, through experience, that the target images they searched for were always located near specific categories of distractors, such as food items or animals. We also varied the spatial consistency of target locations in order to rule out implicit learning of repeated target locations. Results suggest that participants implicitly learned the target-predictive categories of distractors and used this information during search, although these effects failed to reach statistical significance. This may have been due to the relative simplicity of the search task, however, and several new experiments are proposed to further investigate whether repeated category information can benefit search.
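The design hinges on the target always appearing near a particular distractor category. A hypothetical trial generator for such a category-predictive contextual cueing display is sketched below; the grid size, category labels, and set size are illustrative assumptions rather than the dissertation's actual parameters.

    # Hypothetical trial generator for a category-predictive contextual
    # cueing design: the distractor nearest the target is forced to come
    # from the predictive category (e.g., food). Parameters are invented.
    import random

    random.seed(1)
    GRID = [(r, c) for r in range(6) for c in range(8)]
    CATEGORIES = ["food", "animal", "tool", "vehicle"]

    def make_trial(predictive_category="food", set_size=12):
        cells = random.sample(GRID, set_size + 1)
        target_cell, distractor_cells = cells[0], cells[1:]
        distractors = [
            {"cell": cell, "category": random.choice(CATEGORIES)}
            for cell in distractor_cells
        ]
        if predictive_category:
            # Make the distractor nearest the target carry the
            # target-predictive category.
            nearest = min(
                distractors,
                key=lambda d: (d["cell"][0] - target_cell[0]) ** 2
                            + (d["cell"][1] - target_cell[1]) ** 2,
            )
            nearest["category"] = predictive_category
        return {"target": target_cell, "distractors": distractors}

    trial = make_trial()
    print(trial["target"], [d["category"] for d in trial["distractors"]])

If searchers pick up this contingency implicitly, response times should fall faster for predictive displays than for displays where category and target location are unrelated.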
Contributors: Walenchok, Stephen C (Author) / Goldinger, Stephen D (Thesis advisor) / Azuma, Tamiko (Committee member) / Homa, Donald (Committee member) / Hout, Michael C (Committee member) / Arizona State University (Publisher)
Created: 2014
Description

The present study explores the role of motion in the perception of form from dynamic occlusion, employing color to help isolate the contributions of the two visual pathways. Although the cells that respond to color cues in the environment usually feed into the ventral stream, humans can perceive motion based on chromatic cues. The current study was designed to use grey, green, and red stimuli to successively limit the amount of information available to the dorsal-stream pathway while providing roughly equal information to the ventral system. Twenty-one participants identified shapes that were presented in grey, green, and red and were defined by dynamic occlusion. The shapes were then presented again in a static condition in which the maximum occlusions were presented as before, but without motion. Results showed an interaction between the motion and static conditions: as the speed of presentation increased, performance in the motion conditions became significantly less accurate than in the static conditions. The grey and green motion conditions crossed static performance at the same point, whereas the red motion condition crossed at a much slower speed. These data are consistent with a model of neural processing in which the main visual systems share information. Moreover, they support the notion that presenting stimuli in specific colors may help isolate perceptual pathways for scientific investigation. Given the potential for chromatic cues to target specific visual systems in dynamic object recognition, exploring these perceptual parameters may advance our understanding of human visual processing.
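The crossover points reported here can be located by finding where the motion-condition accuracy curve falls below the static curve. The sketch below does this by linear interpolation between sampled speeds; the speed and accuracy values are made up for illustration and are not the study's data.

    # Sketch: estimate the presentation speed at which motion-condition
    # accuracy crosses below static-condition accuracy. Values invented.
    import numpy as np

    speeds = np.array([1.0, 2.0, 4.0, 8.0, 16.0])  # presentation speed (a.u.)
    static = np.array([0.90, 0.89, 0.88, 0.87, 0.86])
    motion = np.array([0.95, 0.93, 0.90, 0.80, 0.60])

    diff = motion - static                 # positive while motion wins
    sign_flip = np.where(np.diff(np.sign(diff)) < 0)[0]
    if sign_flip.size:
        i = sign_flip[0]
        # Linear interpolation between the two bracketing speeds.
        frac = diff[i] / (diff[i] - diff[i + 1])
        crossover = speeds[i] + frac * (speeds[i + 1] - speeds[i])
        print(f"motion falls below static near speed {crossover:.2f}")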
Contributors: Holloway, Steven R. (Author) / McBeath, Michael K. (Thesis advisor) / Homa, Donald (Committee member) / Macknik, Stephen L. (Committee member) / Arizona State University (Publisher)
Created: 2011