Matching Items (6)
Description
Learning and transfer were investigated for a categorical structure in which the relevant stimulus information could be mapped without loss from one modality to another. The category space was composed of three non-overlapping, linearly separable categories. Each stimulus was composed of a sequence of on-off events that varied in duration and number of sub-events (complexity). Categories were learned visually, haptically, or auditorily, and then transferred to the same or an alternate modality. The transfer set contained old, new, and prototype stimuli, and subjects made both classification and recognition judgments. The results showed an early learning advantage in the visual modality, with transfer performance varying among conditions in both classification and recognition. In general, classification accuracy was highest for the category prototype, and false recognition of the category prototype was higher in the cross-modality conditions. The results are discussed in terms of current theories of modality transfer and shed preliminary light on the categorical transfer of temporal stimuli.
Contributors: Ferguson, Ryan (Author) / Homa, Donald (Thesis advisor) / Goldinger, Stephen (Committee member) / Glenberg, Arthur (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
When people look for things in their environment, they use a target template - a mental representation of the object they are attempting to locate - to guide their attention around a scene and to assess incoming visual input, determining whether they have found what they are searching for. However, unlike participants in laboratory experiments, searchers in the real world rarely have perfect knowledge of their target's appearance. In five experiments (with nearly 1,000 participants), we examined how the precision of observers' templates affects their ability to conduct visual search. Specifically, we simulated template imprecision in two ways: first, by contaminating our searchers' templates with inaccurate features, and second, by introducing extraneous features to the template that were unhelpful. In these experiments we recorded the eye movements of our searchers in order to make inferences regarding the extent to which attentional guidance and decision-making are hindered by template imprecision. We also examined a third way in which templates may become imprecise; namely, they may deteriorate over time. Overall, our findings support a dual-function theory of the target template and highlight the importance of examining template precision in future research.
Contributors: Hout, Michael C (Author) / Goldinger, Stephen D (Thesis advisor) / Azuma, Tamiko (Committee member) / Homa, Donald (Committee member) / Reichle, Erik (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Previous research from Rajsic et al. (2015, 2017) suggests that a visual form of confirmation bias arises during visual search for simple stimuli, under certain conditions, wherein people are biased to seek stimuli matching an initial cue color even when this strategy is not optimal. Furthermore, recent research from our lab suggests that varying the prevalence of cue-colored targets does not attenuate this visual confirmation bias, although people still fail to detect rare targets regardless of whether they match the initial cue (Walenchok et al., under review). The present investigation examines the boundary conditions of the visual confirmation bias under equal, low, and high cued-target frequency. Across experiments, I found that: (1) People are strongly susceptible to the low-prevalence effect, often failing to detect rare targets regardless of whether they match the cue (Wolfe et al., 2005). (2) However, they are still biased to seek cue-colored stimuli, even when such targets are rare. (3) Regardless of target prevalence, people employ strategies when search is made sufficiently burdensome with distributed items and large search sets. These results further support previous findings that the low-prevalence effect arises from a failure to perceive rare items (Hout et al., 2015), while the visual confirmation bias is a bias of attentional guidance (Rajsic et al., 2015, 2017).
Contributors: Walenchok, Stephen Charles (Author) / Goldinger, Stephen D (Thesis advisor) / Azuma, Tamiko (Committee member) / Homa, Donald (Committee member) / Hout, Michael C (Committee member) / McClure, Samuel M. (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Previous research has shown that people can implicitly learn repeated visual contexts and use this information when locating relevant items. For example, when people are presented with repeated spatial configurations of distractor items or repeated distractor identities in visual search, they become faster over time at finding target stimuli in these repeated contexts (Chun & Jiang, 1998, 1999). Given that people learn these repeated distractor configurations and identities, might they also implicitly encode semantic information about distractors, if this information is predictive of the target location? We investigated this question with a series of visual search experiments using real-world stimuli within a contextual cueing paradigm (Chun & Jiang, 1998). Specifically, we tested whether participants could learn, through experience, that the target images they are searching for are always located near specific categories of distractors, such as food items or animals. We also varied the spatial consistency of target locations in order to rule out implicit learning of repeated target locations. Results suggest that participants implicitly learned the target-predictive categories of distractors and used this information during search, although these results failed to reach significance. This lack of significance may have been due to the relative simplicity of the search task, however, and several new experiments are proposed to further investigate whether repeated category information can benefit search.
Contributors: Walenchok, Stephen C (Author) / Goldinger, Stephen D (Thesis advisor) / Azuma, Tamiko (Committee member) / Homa, Donald (Committee member) / Hout, Michael C (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
In baseball, the difference between a win and a loss can come down to a single call, such as when an umpire judges a force out at first base, typically by comparing competing auditory and visual inputs: the ball-mitt sound and the foot-on-base sight. Yet, because sound travels through air at only about 1,100 feet per second, fans observing from several hundred feet away receive auditory cues that are delayed by a significant fraction of a second, and thus could conceivably differ systematically in their judgments from the nearby umpire. The current research examines two questions: (1) How reliably, and with what biases, do observers judge the order of visual versus auditory events? (2) Do observers making such order judgments from far away systematically compensate for delays due to the slow speed of sound? It is hypothesized that if any temporal bias occurs, it is in the direction consistent with observers not accounting for the sound delay, such that increasing viewing distance will increase the bias to assume the sound occurred later. It was found that nearby observers are relatively accurate at judging whether a sound occurred before or after a simple visual event (a flash), but exhibit a systematic bias to favor the visual stimulus occurring first (by about 30 msec). In contrast, distant observers did not compensate for the delay due to the speed of sound, such that they systematically favored the visual cue occurring earlier as a function of viewing distance. When observers judged simple visual stimuli in motion relative to the same sound burst, the distance effect varied with the visual clarity of the ball's arrival. In the baseball setting, using a large-screen projection of a baserunner, the distance effect was diminished due to the additional visual cues. In summary, observers generally do not account for the delay of sound due to distance.
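For a rough sense of scale, the auditory delay at a given viewing distance follows directly from the speed of sound; the 300-foot figure below is illustrative, not a distance tested in the study:

    t_delay = d / v ≈ 300 ft / 1,100 ft/s ≈ 0.27 s

An observer roughly 300 feet from first base would thus hear the ball-mitt contact about a quarter of a second after seeing it, an interval nearly an order of magnitude larger than the ~30 msec visual-first bias found for nearby observers.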
Contributors: Krynen, R. Chandler (Author) / McBeath, Michael (Thesis advisor) / Homa, Donald (Committee member) / Gray, Robert (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Color perception has been widely studied and well modeled with respect to combining visible electromagnetic frequencies, yet new technology provides the means to better explore and test novel temporal-frequency characteristics of color perception. Experiment 1 tests how reliably participants categorize static spectral rainbow colors, a task that can serve as an efficient tool for identifying individuals with functional dichromacy, trichromacy, and tetrachromacy. The findings confirm that all individuals discern the four principal opponent-process colors (red, yellow, green, and blue), with normal and potential tetrachromats seeing more distinct colors than color-blind individuals. Experiment 2 tests the moving flicker-fusion rate for the central electromagnetic frequency within each color category found in Experiment 1, as a test of the Where system, and then compares this to the maximum temporal processing rate for discriminating the direction of hue change when colors are displayed serially, as a test of the What system. The findings confirm respective processing thresholds of about 20 Hz for the Where system and 2-7 Hz for the What system. Experiment 3 tests conditions that optimize the false colors produced by the spinning Benham's top illusion. Findings indicate that the same four principal colors emerge as in Experiment 1, but at low saturation levels for trichromats, which diminish further for dichromats. Taken together, the three experiments provide an overview of the common categorical boundaries and temporal processing limits of human color vision.
Contributors: Krynen, Richard Chandler (Author) / McBeath, Michael K (Thesis advisor) / Homa, Donald (Committee member) / Newman, Nathan (Committee member) / Stone, Greg (Committee member) / Arizona State University (Publisher)
Created: 2021