Matching Items (3)
Description
The purpose of this study was to investigate the effect of partial exemplar experience on category formation and use. Participants had either complete or limited access to the three dimensions that defined the categories, with dimensions presented within different modalities. The concept of a "crucial dimension" was introduced, and the role it plays in category definition was explained. It was hypothesized that the effects of partial experience are not explained by a shifting of attention between dimensions (Taylor & Ross, 2009) but rather by an increased reliance on prototypical values used to fill in missing information during incomplete experiences. Results indicated that participants (1) do not fill in missing information with prototypical values, (2) integrate information less efficiently between different modalities than within a single modality, and (3) have difficulty learning only when partial experience prevents access to diagnostic information.
ContributorsCrawford, Thomas (Author) / Homa, Donald (Thesis advisor) / McBeath, Michael (Committee member) / Glenberg, Arthur (Committee member) / Arizona State University (Publisher)
Created2011
Description
Previous research has shown that people can implicitly learn repeated visual contexts and use this information when locating relevant items. For example, when people are presented with repeated spatial configurations of distractor items or distractor identities in visual search, they become faster to find target stimuli in these repeated contexts over time (Chun and Jiang, 1998; 1999). Given that people learn these repeated distractor configurations and identities, might they also implicitly encode semantic information about distractors, if this information is predictive of the target location? We investigated this question with a series of visual search experiments using real-world stimuli within a contextual cueing paradigm (Chun and Jiang, 1998). Specifically, we tested whether participants could learn, through experience, that the target images they are searching for are always located near specific categories of distractors, such as food items or animals. We also varied the spatial consistency of target locations, in order to rule out implicit learning of repeated target locations. Results suggest that participants implicitly learned the target-predictive categories of distractors and used this information during search, although these results failed to reach significance. This lack of significance may have been due to the relative simplicity of the search task, however, and several new experiments are proposed to further investigate whether repeated category information can benefit search.
ContributorsWalenchok, Stephen C (Author) / Goldinger, Stephen D (Thesis advisor) / Azuma, Tamiko (Committee member) / Homa, Donald (Committee member) / Hout, Michael C (Committee member) / Arizona State University (Publisher)
Created2014
Description
In baseball, the difference between a win and a loss can come down to a single call, such as when an umpire judges a force out at first base, typically by comparing the competing auditory and visual inputs of the ball-mitt sound and the foot-on-base sight. Yet, because sound travels through air at only about 1,100 feet per second, fans observing from several hundred feet away receive auditory cues that are delayed by a significant fraction of a second, and thus could conceivably differ systematically in their judgments from the nearby umpire. The current research examines two questions: (1) How reliably, and with what biases, do observers judge the order of visual versus auditory events? (2) Do observers making such order judgments from far away systematically compensate for delays due to the slow speed of sound? It was hypothesized that any temporal bias would be in the direction consistent with observers not accounting for the sound delay, such that increasing viewing distance would increase the bias to assume the sound occurred later. It was found that nearby observers are relatively accurate at judging whether a sound occurred before or after a simple visual event (a flash), but exhibit a systematic bias to favor visual stimuli occurring first (by about 30 msec). In contrast, distant observers did not compensate for the delay of the speed of sound, such that they systematically favored the visual cue occurring earlier as a function of viewing distance. When observers judged simple visual stimuli in motion relative to the same sound burst, the distance effect occurred as a function of the visual clarity of the ball arriving. In the baseball setting, using a large-screen projection of a baserunner, a diminished distance effect occurred due to the additional visual cues. In summary, observers generally do not account for the delay of sound due to distance.
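As a back-of-the-envelope sketch of the delay magnitudes this abstract describes (assuming the stated ~1,100 ft/s speed of sound; the specific distances below are illustrative, not taken from the study):

```python
SPEED_OF_SOUND_FT_PER_S = 1100.0  # approximate speed of sound in air, per the abstract


def sound_delay_ms(distance_ft: float) -> float:
    """Time for sound to travel distance_ft, in milliseconds."""
    return distance_ft / SPEED_OF_SOUND_FT_PER_S * 1000.0


# Illustrative viewing distances: behind the plate, infield seats, outfield seats.
for d in (50, 200, 400):
    print(f"{d:>3} ft: ~{sound_delay_ms(d):.0f} ms auditory delay")
```

At a few hundred feet the delay reaches several hundred milliseconds, an order of magnitude larger than the ~30 msec visual-first bias reported for nearby observers, which is why uncompensated sound delay could plausibly shift distant observers' order judgments.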
ContributorsKrynen, R. Chandler (Author) / McBeath, Michael (Thesis advisor) / Homa, Donald (Committee member) / Gray, Robert (Committee member) / Arizona State University (Publisher)
Created2017