Description
When people look for things in their environment they use a target template - a mental representation of the object they are attempting to locate - to guide their attention around a scene and to assess incoming visual input to determine if they have found that for which they are searching. However, unlike participants in laboratory experiments, searchers in the real world rarely have perfect knowledge regarding the appearance of their target. In five experiments (with nearly 1,000 participants), we examined how the precision of the observer's template affects their ability to conduct visual search. Specifically, we simulated template imprecision in two ways: first, by contaminating our searchers' templates with inaccurate features, and second, by introducing extraneous features to the template that were unhelpful. In those experiments we recorded the eye movements of our searchers in order to make inferences regarding the extent to which attentional guidance and decision-making are hindered by template imprecision. We also examined a third way in which templates may become imprecise; namely, that they may deteriorate over time. Overall, our findings support a dual-function theory of the target template, and highlight the importance of examining template precision in future research.
Contributors: Hout, Michael C. (Author) / Goldinger, Stephen D. (Thesis advisor) / Azuma, Tamiko (Committee member) / Homa, Donald (Committee member) / Reichle, Erik (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Current theoretical debate, crossing the bounds of memory theory and mental imagery, surrounds the role of eye movements in successful encoding and retrieval. Although the eyes have been shown to revisit previously-viewed locations during retrieval, the functional role of these saccades is not known. Understanding the potential role of eye movements may help address classic questions in recognition memory. Specifically, are episodic traces rich and detailed, characterized by a single strength-driven recognition process, or are they better described by two separate processes, one for vague information and one for the retrieval of detail? Three experiments are reported, in which participants encoded audio-visual information while completing controlled patterns of eye movements. By presenting information in four sources (i.e., voices), assessments of specific and partial source memory were measured at retrieval. Across experiments, participants' eye movements at test were manipulated. Experiment 1 allowed free viewing, Experiment 2 required externally-cued fixations to previously-relevant (or irrelevant) screen locations, and Experiment 3 required externally-cued new or familiar oculomotor patterns to multiple screen locations in succession. Although eye movements were spontaneously reinstated when gaze was unconstrained during retrieval (Experiment 1), externally-cueing participants to re-engage in fixations or oculomotor patterns from encoding (Experiments 2 and 3) did not enhance retrieval. Across all experiments, participants' memories were well-described by signal-detection models of memory. Source retrieval was characterized by a continuous process, with evidence that source retrieval occurred following item memory failures, and additional evidence that participants partially recollected source, in the absence of specific item retrieval. 
Pupillometry provided an unbiased metric by which to compute receiver operating characteristic (ROC) curves, which were consistently curvilinear (but linear in z-space), supporting signal-detection predictions over those from dual-process theories. Implications for theoretical views of memory representations are discussed.
Contributors: Papesh, Megan H. (Author) / Goldinger, Stephen D. (Thesis advisor) / Brewer, Gene A. (Committee member) / Reichle, Erik D. (Committee member) / Homa, Donald (Committee member) / Glenberg, Arthur M. (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
The present study explores the role of motion in the perception of form from dynamic occlusion, employing color to help isolate the contributions of both visual pathways. Although the cells that respond to color cues in the environment usually feed into the ventral stream, humans can perceive motion based on chromatic cues. The current study was designed to use grey, green, and red stimuli to successively limit the amount of information available to the dorsal stream pathway, while providing roughly equal information to the ventral system. Twenty-one participants identified shapes that were presented in grey, green, and red and were defined by dynamic occlusion. The shapes were then presented again in a static condition where the maximum occlusions were presented as before, but without motion. Results showed an interaction between the motion and static conditions in that when the speed of presentation increased, performance in the motion conditions became significantly less accurate than in the static conditions. The grey and green motion conditions crossed static performance at the same point, whereas the red motion condition crossed at a much slower speed. These data are consistent with a model of neural processing in which the main visual systems share information. Moreover, they support the notion that presenting stimuli in specific colors may help isolate perceptual pathways for scientific investigation. Given the potential for chromatic cues to target specific visual systems in the performance of dynamic object recognition, exploring these perceptual parameters may help our understanding of human visual processing.
Contributors: Holloway, Steven R. (Author) / McBeath, Michael K. (Thesis advisor) / Homa, Donald (Committee member) / Macknik, Stephen L. (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Watanabe, Náñez, and Sasaki (2001) introduced a phenomenon they named “task-irrelevant perceptual learning,” in which near-threshold stimuli that are not essential to a given task can be associatively learned when consistently and concurrently paired with the focal task. The present study employs a visual paired-shapes recognition task, using colored polygon targets as salient attended focal stimuli, with the goal of comparing the increases in perceptual sensitivity observed when near-threshold stimuli are temporally paired in varying manners with focal targets. Experiment 1 separated and compared the target-acquisition and target-recognition phases and revealed that sensitivity improved most when the near-threshold motion stimuli were paired with the focal target-acquisition phase. The measures of sensitivity improvement were motion detection, critical flicker fusion threshold (CFFT), and letter-orientation decoding. Experiment 2 tested perceptual learning of near-threshold stimuli when they were offset from the focal stimuli presentation by ±350 ms. Performance improvements in motion detection, CFFT, and decoding were significantly greater for the group in which near-threshold motion was presented after the focal target. Experiment 3 showed that participants with reading difficulties who were exposed to focal target-acquisition training improved in sensitivity in all visual measures. Experiment 4 tested whether near-threshold stimulus learning occurred cross-modally with auditory stimuli and served as an active control for the first, second, and third experiments. Here, a tone was paired with all focal stimuli, but the tone was 1 Hz higher or lower when paired with the targeted focal stimuli associated with recognition. In Experiment 4, there was no improvement in visual sensitivity, but there was significant improvement in tone discrimination.
Thus, this study, as a whole, confirms that pairing near-threshold stimuli with focal stimuli can improve performance either in tone discrimination alone, or in motion detection, CFFT, and letter decoding. Findings further support the thesis that the act of trying to remember a focal target elicited greater associative learning of correlated near-threshold stimuli than the act of recognizing a target. Finally, these findings indicate that we have developed a visual learning paradigm that may potentially mitigate some of the visual deficits that are often experienced by the reading disabled.
Contributors: Holloway, Steven Robert (Author) / McBeath, Michael K. (Thesis advisor) / Macknik, Stephen (Committee member) / Homa, Donald (Committee member) / Náñez, Sr., José E. (Committee member) / Arizona State University (Publisher)
Created: 2016