Matching Items (8)
Description
As technology enhances our communication capabilities, the number of distributed teams has risen in both the public and private sectors. There is no doubt that these technological advancements have addressed a need for communication and collaboration among distributed teams. However, is all technology useful for effective collaboration? Are some methods (modalities) of communication more conducive than others to the effective performance and collaboration of distributed teams? Although previous literature identifies some differences between modalities, there is little research on geographically distributed mobile teams (DMTs) performing a collaborative task. To investigate communication and performance in this context, I developed the GeoCog system. This system is a mobile communications and collaboration platform enabling small, distributed teams of three to participate in a variant of the military-inspired game "Capture the Flag". Within the task, teams were given one hour to complete as many "captures" as possible while utilizing resources to the team's advantage. In this experiment, I manipulated the modality of communication across three conditions: text-based messaging only, vocal communication only, and a combination of the two. It was hypothesized that bi-modal communication would yield superior performance compared to either single-modality condition. Results indicated that performance was not affected by modality. Further results, including communication analysis, are discussed within this paper.
Contributors: Champion, Michael (Author) / Cooke, Nancy J. (Thesis advisor) / Shope, Steven (Committee member) / Wu, Bing (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Although current urban search and rescue (USAR) robots are little more than remotely controlled cameras, the end goal is for them to work alongside humans as trusted teammates. Natural language communications and performance data were collected as teams of humans worked to carry out a simulated search and rescue task in an uncertain virtual environment. Conditions were tested emulating a remotely controlled robot versus an intelligent one, and differences in performance, situation awareness, trust, workload, and communications were measured. The intelligent-robot condition resulted in higher levels of performance and operator situation awareness (SA).
Contributors: Bartlett, Cade Earl (Author) / Cooke, Nancy J. (Thesis advisor) / Kambhampati, Subbarao (Committee member) / Wu, Bing (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Documentation from the International Organization for Standardization (ISO) utilizes Fitts' law to determine the usability of traditional input devices such as mice and touchscreens for one- or two-dimensional operations. To test the hypothesis that Fitts' law can be applied to hand/air-gesture-based computing inputs, Fitts' multi-directional target acquisition task was applied to three gesture-based input devices that utilize different technologies, along with two baseline devices, mouse and touchscreen. Three target distances and three target sizes were tested six times in a randomized order, with the five input technologies also presented in randomized order. Data were collected from a total of 81 participants in the within-subjects design study. Participants were instructed to perform the task as quickly and accurately as possible according to traditional Fitts' testing procedures. Movement time, error rate, and throughput for each input technology were calculated.

Additionally, no standards exist for equating user experience with Fitts’ measures such as movement time, throughput, and error count. To test the hypothesis that a user’s experience can be predicted using Fitts’ measures of movement time, throughput and error count, an ease of use rating using a 5-point scale for each input type was collected from each participant. The calculated Mean Opinion Scores (MOS) were regressed on Fitts’ measures of movement time, throughput, and error count to understand the extent to which they can predict a user’s subjective rating.
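The Fitts' measures named in this abstract follow standard definitions; as a minimal sketch (the distance, width, and movement-time values below are illustrative, not the study's data), the Shannon formulation of the index of difficulty and the resulting throughput can be computed as:

```python
import math

def index_of_difficulty(distance, width):
    """Shannon formulation of Fitts' index of difficulty (bits), as used in ISO 9241-9."""
    return math.log2(distance / width + 1)

def throughput(distance, width, movement_time):
    """Throughput in bits/s: index of difficulty divided by mean movement time (s)."""
    return index_of_difficulty(distance, width) / movement_time

# Illustrative values: 256 px target distance, 32 px target width, 0.9 s mean movement time
id_bits = index_of_difficulty(256, 32)  # log2(9) ≈ 3.17 bits
tp = throughput(256, 32, 0.9)
```

In practice ISO-style analyses often use the *effective* target width (derived from the spread of endpoint positions) rather than the nominal width; the sketch above uses nominal values for simplicity.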
Contributors: Burno, Rachael A. (Author) / Wu, Bing (Thesis advisor) / Cooke, Nancy J. (Committee member) / Branaghan, Russell (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
We experience spatial separation and temporal asynchrony between visual and haptic information in many virtual-reality, augmented-reality, or teleoperation systems. Three studies were conducted to examine the spatial and temporal characteristics of multisensory integration. Participants interacted with virtual springs using both visual and haptic senses, and their perception of stiffness and ability to differentiate stiffness were measured. The results revealed that a constant visual delay increased the perceived stiffness, while a variable visual delay made participants depend more on the haptic sensations in stiffness perception. We also found that participants judged stiffness to be higher when they interacted with virtual springs at faster speeds, and that interaction speed was positively correlated with stiffness overestimation. In addition, participants could learn an association between visual and haptic inputs even though the inputs were spatially separated, resulting in improved typing performance. These results show the limitations of the Maximum-Likelihood Estimation model, suggesting that a Bayesian inference model should be used.
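The Maximum-Likelihood Estimation model whose limitations are discussed above is conventionally formalized (e.g., in Ernst and Banks' cue-combination work) as a reliability-weighted average of the visual and haptic estimates; the symbols below follow that convention and are not taken from this thesis:

```latex
% MLE combination of visual (v) and haptic (h) stiffness estimates
\hat{S} = w_v \hat{S}_v + w_h \hat{S}_h, \qquad
w_v = \frac{1/\sigma_v^2}{1/\sigma_v^2 + 1/\sigma_h^2}, \qquad
w_h = 1 - w_v,
\qquad
\sigma_{vh}^2 = \frac{\sigma_v^2 \, \sigma_h^2}{\sigma_v^2 + \sigma_h^2}
```

Under this model the combined variance is always at or below either single-cue variance, with fixed weights set by cue reliability; results such as delay-dependent reweighting are the kind of deviation that motivates the Bayesian alternative noted above.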
Contributors: Sim, Sung Hun (Author) / Wu, Bing (Thesis advisor) / Cooke, Nancy J. (Committee member) / Gray, Robert (Committee member) / Branaghan, Russell (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
ABSTRACT

The present studies investigated the separate effects of two types of visual feedback delay – increased latency and decreased updating rate – on performance – both actual (e.g. response time) and subjective (i.e. rating of perceived input device performance) – in 2-dimensional pointing tasks using a mouse as an input device. The first sub-study examined the effects of increased latency on performance using two separate experiments. In the first experiment the effects of constant latency on performance were tested, wherein participants completed blocks of trials with a constant level of latency. Additionally, after each block, participants rated their subjective experience of the input device performance at each level of latency. The second experiment examined the effects of variable latency on performance, where latency was randomized within blocks of trials.

The second sub-study investigated the effects of decreased updating rates on performance in the same manner as the first: experiment one tested the effect of a constant updating rate on performance as well as subjective rating, and experiment two tested the effect of a variable updating rate on performance. The findings suggest that latency is negatively correlated with both actual performance and subjective ratings of performance, while updating rate is positively correlated with both.
Contributors: Brady, Kyle J. (Author) / Wu, Bing (Thesis advisor) / Hout, Michael C. (Committee member) / Branaghan, Russell (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
As technology advances, so does the concern that the humanlike virtual characters and android robots being created today will fall into the uncanny valley. The current study aims to determine whether uncanny feelings toward modern virtual characters and robots can be significantly affected by the mere exposure effect. Previous research shows that mere exposure can increase positive feelings toward novel stimuli (Zajonc, 1968). It was therefore predicted that virtual characters and robots possessing uncanny traits would be rated significantly less uncanny after being viewed multiple times.
Contributors: Corral, Christopher (Author) / Song, Hyunjin (Thesis advisor) / Wu, Bing (Committee member) / Kuzel, Michael (Committee member) / Arizona State University (Publisher)
Created: 2014
Description

As gesture interfaces become more mainstream, it is increasingly important to investigate the behavioral characteristics of these interactions – particularly in three-dimensional (3D) space. In this study, Fitts' method was extended to such input technologies, and the applicability of Fitts' law to gesture-based interactions was examined. The experiment included three gesture-based input devices that utilize different techniques to capture user movement, and compared them to conventional input technologies such as touchscreen and mouse. Participants completed a target-acquisition test in which they were instructed to move a cursor from a home location to a spherical target as quickly and accurately as possible. Three distances and three target sizes were tested six times in a randomized order for all input devices. A total of 81 participants completed all tasks. Movement time, error rate, and throughput were calculated for each input technology. Results showed that mean movement time was highly correlated with the target's index of difficulty for all devices, providing evidence that Fitts' law can be extended and applied to gesture-based devices. Throughputs were significantly lower for the gesture-based devices than for mouse and touchscreen, and as the index of difficulty increased, movement time increased significantly more for the gesture technologies. Error counts were statistically higher for all gesture-based input technologies than for the mouse. In addition, error counts for all inputs were highly correlated with target width, while movement distance showed little impact. Overall, the findings suggest that gesture-based devices can be characterized by Fitts' law in much the same fashion as conventional 1D or 2D devices.
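The linear relationship between movement time and index of difficulty reported above is Fitts' law, MT = a + b·ID; it can be fit per device by ordinary least squares. A minimal sketch with hypothetical data (not the study's measurements):

```python
def fit_fitts_law(ids, movement_times):
    """Least-squares fit of Fitts' law MT = a + b*ID; returns (intercept a, slope b)."""
    n = len(ids)
    mean_id = sum(ids) / n
    mean_mt = sum(movement_times) / n
    # Slope: covariance of (ID, MT) divided by variance of ID
    b = sum((x - mean_id) * (y - mean_mt) for x, y in zip(ids, movement_times)) \
        / sum((x - mean_id) ** 2 for x in ids)
    a = mean_mt - b * mean_id
    return a, b

# Hypothetical per-condition means: index of difficulty (bits) vs. mean movement time (s)
ids = [1.0, 2.0, 3.0, 4.0]
mts = [0.35, 0.55, 0.75, 0.95]
a, b = fit_fitts_law(ids, mts)  # this toy data is exactly linear: a ≈ 0.15, b ≈ 0.20
```

A steeper slope b (seconds per bit) corresponds to the larger movement-time growth the study observed for gesture devices at higher indices of difficulty; 1/b is another common throughput estimate.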

Contributors: Burno, Rachael A. (Author) / Wu, Bing (Author) / Doherty, Rina (Author) / Colett, Hannah (Author) / Elnaggar, Rania (Author) / Ira A. Fulton School of Engineering (Contributor)
Created: 2015-10-23
Description

This paper describes a novel method for displaying data obtained by three-dimensional medical imaging, by which the position and orientation of a freely movable screen are optically tracked and used in real time to select the current slice from the data set for presentation. With this method, which we call a “freely moving in-situ medical image”, the screen and imaged data are registered to a common coordinate system in space external to the user, at adjustable scale, and are available for free exploration. The three-dimensional image data occupy empty space, as if an invisible patient is being sliced by the moving screen. A behavioral study using real computed tomography lung vessel data established the superiority of the in situ display over a control condition with the same free exploration, but displaying data on a fixed screen (ex situ), with respect to accuracy in the task of tracing along a vessel and reporting spatial relations between vessel structures. A “freely moving in-situ medical image” display appears from these measures to promote spatial navigation and understanding of medical data.

Contributors: Shukla, Gaurav (Author) / Klatzky, Roberta L. (Author) / Wu, Bing (Author) / Wang, Bo (Author) / Galeotti, John (Author) / Chapmann, Brian (Author) / Stetten, George (Author) / New College of Interdisciplinary Arts and Sciences (Contributor)
Created: 2017-08-23