Matching Items (5)
Description
When people look for things in their environment, they use a target template - a mental representation of the object they are attempting to locate - to guide their attention around a scene and to assess incoming visual input to determine whether they have found what they are searching for. However, unlike observers in laboratory experiments, searchers in the real world rarely have perfect knowledge of their target's appearance. In five experiments (with nearly 1,000 participants), we examined how the precision of the observer's template affects the ability to conduct visual search. Specifically, we simulated template imprecision in two ways: first, by contaminating our searchers' templates with inaccurate features, and second, by introducing extraneous, unhelpful features to the template. In those experiments we recorded the eye movements of our searchers in order to make inferences regarding the extent to which attentional guidance and decision-making are hindered by template imprecision. We also examined a third way in which templates may become imprecise; namely, that they may deteriorate over time. Overall, our findings support a dual-function theory of the target template and highlight the importance of examining template precision in future research.
Contributors: Hout, Michael C (Author) / Goldinger, Stephen D (Thesis advisor) / Azuma, Tamiko (Committee member) / Homa, Donald (Committee member) / Reichle, Erik (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Current research has identified a specific type of visual experience that leads to faster cortical processing: perceptual learning of directional motion. This is important on two levels: first, cortical processing is positively correlated with cognitive functions and inversely related to age, frontal lobe lesions, and some cognitive disorders; second, temporal processing has been shown to be relatively stable over time. To expand on this line of research, we examined the effects of a different but relevant visual experience (i.e., implied motion) on cortical processing. Previous fMRI studies have indicated that static images that imply motion activate area V5, or the middle temporal/medial superior temporal complex (MT/MST+), of the visual cortex, the same brain region that is activated in response to real motion. Therefore, we hypothesized that visual experience of implied motion may parallel the positive relationship between real directional motion and cortical processing. Seven subjects participated in a visual task of implied motion for four days, with a pre- and post-test of cortical processing. The results indicated that performance on implied motion is systematically different from performance on a dot-motion task. Despite individual differences in performance, overall cortical processing increased from day 1 to day 4.
Contributors: Vasefi, Aresh (Author) / Nanez, Jose (Thesis advisor) / Duran, Nicholas (Committee member) / Keil, Thomas J. (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Previous research has shown that people can implicitly learn repeated visual contexts and use this information when locating relevant items. For example, when people are presented with repeated spatial configurations of distractor items or distractor identities in visual search, they become faster to find target stimuli in these repeated contexts over time (Chun & Jiang, 1998, 1999). Given that people learn these repeated distractor configurations and identities, might they also implicitly encode semantic information about distractors, if this information is predictive of the target location? We investigated this question with a series of visual search experiments using real-world stimuli within a contextual cueing paradigm (Chun & Jiang, 1998). Specifically, we tested whether participants could learn, through experience, that the target images they are searching for are always located near specific categories of distractors, such as food items or animals. We also varied the spatial consistency of target locations in order to rule out implicit learning of repeated target locations. Results suggest that participants implicitly learned the target-predictive categories of distractors and used this information during search, although these results failed to reach significance. This lack of significance may have been due to the relative simplicity of the search task, however, and several new experiments are proposed to further investigate whether repeated category information can benefit search.
Contributors: Walenchok, Stephen C (Author) / Goldinger, Stephen D (Thesis advisor) / Azuma, Tamiko (Committee member) / Homa, Donald (Committee member) / Hout, Michael C (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
The label-feedback hypothesis (Lupyan, 2007) proposes that language can modulate low- and high-level visual processing, such as "priming" a visual object. Lupyan and Swingley (2012) found that repeating target names facilitates visual search, resulting in shorter reaction times (RTs) and higher accuracy. However, a design limitation made their results challenging to assess. This study evaluated whether self-directed speech influences target locating (i.e., attentional guidance) or target identification after locating (i.e., decision time), testing whether the label-feedback effect reflects changes in visual attention or some other mechanism (e.g., template maintenance in working memory). Across three experiments, search RTs and eye movements were analyzed from four within-subject conditions: people spoke target names, nonwords, irrelevant (absent) object names, or irrelevant (present) object names. Speaking target names weakly facilitated visual search, but speaking different names strongly inhibited it. The most parsimonious account is that language affects target maintenance during search, rather than visual perception.
Contributors: Hebert, Katherine P (Author) / Goldinger, Stephen D (Thesis advisor) / Rogalsky, Corianne (Committee member) / McClure, Samuel M. (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
The function of the magnocellular-dorsal pathway has been related to reading ability, and visual perceptual learning can effectively increase the function of this neural pathway. Previous research training people with a traditional dot-motion paradigm and with an integrated visual perceptual training "video game" called Ultimeyes pro has shown improvements in reading performance. This study used both paradigms, one per group, to compare their effects on reading ability. We also measured participants' critical flicker fusion threshold (CFFT), which is related to word-decoding ability. Results did not show significant improvement in reading performance within either group, but overall reading speed improved significantly. CFFT improved significantly only among participants trained with Ultimeyes pro. These results support a beneficial effect of visual perceptual learning on reading ability, and they suggest that Ultimeyes pro is more efficient than the traditional dot-motion paradigm and may have greater applied value.
Contributors: Zhou, Tianyou (Author) / Nanez, Jose E (Thesis advisor) / Robles-Sotelo, Elias (Committee member) / Duran, Nicholas (Committee member) / Arizona State University (Publisher)
Created: 2015