Matching Items (5)
Description

Three experiments used a spatial serial conditioning paradigm to assess the effectiveness of spatially informative conditioned stimuli in eliciting tracking behavior in pigeons. The experimental paradigm consisted of the simultaneous presentation of 2 key lights (CS2 and CTRL), followed by another key light (CS1), followed by food (the unconditioned stimulus or US). CS2 and CTRL were presented in 2 of 3 possible locations, randomly assigned; CS1 was always presented in the same location as CS2. CS2 was designed to signal the spatial, but not the temporal, locus of CS1; CS1 signaled the temporal locus of the US. In Experiment 1, differential pecking on CS2 was observed even when CS2 was present throughout the interval between CS1s, but only in a minority of pigeons. A control condition verified that pecking on CS2 was not due to temporal proximity between CS2 and the US. Experiment 2 demonstrated the reversibility of spatial conditioning between CS2 and CTRL. Asymptotic performance never involved tracking CTRL more than CS2 for any of the 16 pigeons. It is inferred that pigeons learned the spatial association between CS2 and CS1, and that temporal contingency facilitated its expression as tracking behavior. In Experiment 3, with pigeons responding to a touchscreen monitor, differential responding to CS2 was observed only when CS2 disambiguated the location of a randomly located CS1. When the presentation location of CS1 was held constant, no differences in responding to CS2 or CTRL were observed.
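
To make the trial structure concrete, here is a minimal sketch of how trials in such a paradigm could be generated. The location names, the three-key layout, and all identifiers are illustrative assumptions, not parameters taken from the experiments:

```python
import random

LOCATIONS = ["left", "center", "right"]  # assumed three-key layout

def generate_trial():
    """One trial of the paradigm as described: CS2 and CTRL appear
    simultaneously at 2 of 3 randomly assigned locations; CS1 then
    appears at the CS2 location; food (US) follows CS1."""
    cs2_loc, ctrl_loc = random.sample(LOCATIONS, 2)  # distinct locations
    return [
        {"event": "CS2+CTRL", "CS2": cs2_loc, "CTRL": ctrl_loc},
        {"event": "CS1", "location": cs2_loc},  # same location as CS2
        {"event": "US"},                        # food delivery
    ]

for _ in range(3):
    print(generate_trial())
```

The key constraint the sketch captures is that CS2 is spatially, but not temporally, informative: its location predicts where CS1 will appear, while only CS1 predicts when food arrives.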
Contributors: Mazur, Gabriela (Author) / Sanabria, Federico (Thesis advisor) / Killeen, Peter R. (Committee member) / Robles-Sotelo, Elias (Committee member) / Ho Chen Cheung, Timothy (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

There have been conflicting accounts of whether animation facilitates learning from instructional media, with animations at best no better than, and sometimes worse than, static images. Procedural motor learning represents one of the few areas in which animations have been shown to be facilitative. These studies examine the effects of instructional media (animation vs. static), rotation (facing vs. over the shoulder), and spatial ability (low vs. high) on two procedural motor tasks: knot tying and endoscope reprocessing. Results indicate that across all conditions in which participants engaged in procedural motor learning tasks, performance was significantly better with animations than with static images. Further, under some circumstances, performance was better when the rotation of the instructional media did not require participants to perform a mental rotation. A Media × Rotation interaction suggests that media that was animated and did not require mental rotation led to the best performance. Individual spatial ability influenced total steps correct and total number of errors in the knot-tying task, but this was not observed in the endoscope task. These findings have implications for the design of instructional media for procedural motor tasks and provide strong support for the use of animations in this context.
Contributors: Garland, T. B. (Author) / Sanchez, Chris A. (Thesis advisor) / Cooke, Nancy J. (Committee member) / Branaghan, Russel (Committee member) / Arizona State University (Publisher)
Created: 2012
Description

The medical field is constantly looking for technological solutions to reduce user error and improve procedures. As a potential solution for healthcare environments, Augmented Reality (AR) has received increasing attention in the past few decades due to advances in computing capabilities, lower cost, and better displays (Sauer, Khamene, Bascle, Vogt, & Rubino, 2002). Augmented Reality, as defined in Ronald Azuma’s initial survey of AR, combines virtual and real-world environments in three dimensions and in real time (Azuma, 1997). Because visualization displays used in AR are subject to human physiological and cognitive constraints, any new system must improve on previous methods and be designed with human abilities in mind (Drascic & Milgram, 1996; Kruijff, Swan, & Feiner, 2010; Ziv, Wolpe, Small, & Glick, 2006). Based on promising findings from aviation and driving (Liu & Wen, 2004; Sojourner & Antin, 1990; Ververs & Wickens, 1998), this study examines whether the spatial proximity afforded by a head-mounted display or alternative heads-up display benefits attentional performance in a simulated routine medical task. Additionally, the present study explores how tasks of varying relatedness affect attentional performance when presented at different spatial distances.
Contributors: del Rio, Richard A. (Author) / Branaghan, Russell (Thesis advisor) / Gray, Rob (Committee member) / Chiou, Erin (Committee member) / Arizona State University (Publisher)
Created: 2017
Description

We experience spatial separation and temporal asynchrony between visual and haptic information in many virtual-reality, augmented-reality, or teleoperation systems. Three studies were conducted to examine the spatial and temporal characteristics of multisensory integration. Participants interacted with virtual springs using both visual and haptic senses, and their perception of stiffness and ability to differentiate stiffness were measured. The results revealed that a constant visual delay increased perceived stiffness, while a variable visual delay made participants depend more on haptic sensations in stiffness perception. We also found that participants judged springs as stiffer when they interacted with them at faster speeds, and that interaction speed was positively correlated with stiffness overestimation. In addition, participants could learn an association between visual and haptic inputs despite their spatial separation, resulting in improved typing performance. These results show the limitations of the Maximum-Likelihood Estimation model, suggesting that a Bayesian inference model should be used instead.
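
For context, the Maximum-Likelihood Estimation model named above is usually stated as a reliability-weighted average of the two sensory estimates. The following is the standard textbook form, not equations taken from the thesis, with \hat{s}_V and \hat{s}_H the visual and haptic stiffness estimates and \sigma_V^2, \sigma_H^2 their variances:

```latex
% Standard MLE cue-combination model (textbook form, not from the thesis):
% the combined stiffness estimate is a reliability-weighted average of the
% visual and haptic estimates.
\[
\hat{s}_{VH} = w_V\,\hat{s}_V + w_H\,\hat{s}_H,
\qquad
w_V = \frac{1/\sigma_V^{2}}{1/\sigma_V^{2} + 1/\sigma_H^{2}},
\qquad
w_H = 1 - w_V
\]
\[
\sigma_{VH}^{2} = \frac{\sigma_V^{2}\,\sigma_H^{2}}{\sigma_V^{2} + \sigma_H^{2}}
\;\leq\; \min\bigl(\sigma_V^{2},\,\sigma_H^{2}\bigr)
\]
```

Because the combined variance can never exceed that of the more reliable cue, systematic effects of delay variability and spatial separation like those reported here fall outside the model, which is what motivates the more general Bayesian treatment.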
Contributors: Sim, Sung Hun (Author) / Wu, Bing (Thesis advisor) / Cooke, Nancy J. (Committee member) / Gray, Robert (Committee member) / Branaghan, Russell (Committee member) / Arizona State University (Publisher)
Created: 2017
Description

Our ability to estimate the position of our body parts in space, a fundamentally proprioceptive process, is crucial for interacting with the environment and for movement control. For proprioception to support these actions, the Central Nervous System has to rely on a stored internal representation of the body parts in space. However, relatively little is known about this internal representation of arm position. To this end, I developed a method to map proprioceptive estimates of hand location across a 2-D workspace. In this task, I moved each subject's hand to a target location while the subject's eyes were closed. After the hand was returned, subjects opened their eyes and verbally reported the location where their fingertip had been. I then reconstructed and analyzed the spatial structure of the pattern of estimation errors. In the first two experiments, I probed the structure and stability of the pattern of errors by manipulating the hand used and the tactile feedback provided when the hand was at each target location. I found that the resulting pattern of errors was systematically stable across conditions for each subject, subject-specific, and not uniform across the workspace. These findings suggest that the observed pattern of errors has been constructed through experience, resulting in a systematically stable internal representation of arm location. Moreover, this representation is continuously calibrated across the workspace. In the next two experiments, I aimed to probe the calibration of this structure. To this end, I used two different perturbation paradigms: 1) a virtual-reality visuomotor adaptation paradigm to induce a local perturbation, and 2) a standard prism adaptation paradigm to induce a global perturbation. I found that the magnitude of the errors significantly increased to a similar extent after each perturbation. This small effect indicates that proprioception is recalibrated to a similar extent regardless of how the perturbation is introduced, suggesting that sensory and motor changes may be two independent processes arising from the perturbation. Moreover, I propose that the internal representation of arm location might be constructed as a global solution that is not capable of local changes.
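
A minimal sketch of the kind of error map described here: given actual target positions and the positions a subject reports, compute the 2-D estimation-error vector at each target. The arrays and the coordinate grid below are invented for illustration, not data from the thesis:

```python
import numpy as np

# Hypothetical numbers for illustration only -- not data from the thesis.
# Actual fingertip target locations (cm) in a 2-D workspace, and the
# locations the subject verbally reported after opening their eyes.
targets = np.array([[10.0, 20.0], [15.0, 25.0], [20.0, 20.0], [25.0, 30.0]])
reports = np.array([[11.5, 18.0], [14.0, 26.5], [22.0, 19.0], [24.0, 32.0]])

errors = reports - targets                    # estimation-error vector per target
magnitudes = np.linalg.norm(errors, axis=1)   # error size at each location

# A subject-specific, stable structure would show up as error vectors that
# repeat across conditions (hand used, tactile feedback) rather than as noise.
for target, error, size in zip(targets, errors, magnitudes):
    print(f"target {target} -> error {error}, magnitude {size:.2f} cm")
```

On this view, a stable, non-uniform map of error vectors across the workspace is the signature of a learned internal representation, which is what the manipulations of hand and tactile feedback were designed to test.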
Contributors: Rincon Gonzalez, Liliana (Author) / Helms Tillery, Stephen I. (Thesis advisor) / Buneo, Christopher A. (Thesis advisor) / Santello, Marco (Committee member) / Santos, Veronica (Committee member) / Kleim, Jeffrey (Committee member) / Arizona State University (Publisher)
Created: 2012