Description
As technology enhances our communication capabilities, the number of distributed teams has risen in both public and private sectors. There is no doubt that these technological advancements have addressed a need for communication and collaboration among distributed teams. However, is all technology useful for effective collaboration? Are some methods (modalities) of communication more conducive than others to effective performance and collaboration in distributed teams? Although previous literature identifies some differences between modalities, there is little research on geographically distributed mobile teams (DMTs) performing a collaborative task. To investigate communication and performance in this context, I developed the GeoCog system. This system is a mobile communications and collaboration platform enabling small, distributed teams of three to participate in a variant of the military-inspired game, "Capture the Flag". Within the task, teams were given one hour to complete as many "captures" as possible while utilizing resources to the advantage of the team. In this experiment, I manipulated the modality of communication across three conditions: text-based messaging only, vocal communication only, and a combination of the two. It was hypothesized that bi-modal communication would yield superior performance compared to either single-modality condition. Results indicated that performance was not affected by modality. Further results, including communication analysis, are discussed within this paper.
Contributors: Champion, Michael (Author) / Cooke, Nancy J. (Thesis advisor) / Shope, Steven (Committee member) / Wu, Bing (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Although current urban search and rescue (USAR) robots are little more than remotely controlled cameras, the end goal is for them to work alongside humans as trusted teammates. Natural language communications and performance data were collected as a team of humans worked to carry out a simulated search and rescue task in an uncertain virtual environment. Conditions were tested emulating a remotely controlled robot versus an intelligent one. Differences in performance, situation awareness, trust, workload, and communications were measured. The intelligent-robot condition resulted in higher levels of performance and operator situation awareness (SA).
Contributors: Bartlett, Cade Earl (Author) / Cooke, Nancy J. (Thesis advisor) / Kambhampati, Subbarao (Committee member) / Wu, Bing (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
The International Organization for Standardization (ISO) documentation utilizes Fitts' law to determine the usability of traditional input devices such as the mouse and touchscreen for one- or two-dimensional operations. To test the hypothesis that Fitts' law can be applied to hand/air-gesture-based computing inputs, Fitts' multi-directional target acquisition task was applied to three gesture-based input devices that utilize different technologies and to two baseline devices, the mouse and the touchscreen. Three target distances and three target sizes were tested six times in a randomized order, with the five input technologies also presented in randomized order. A total of 81 participants' data were collected for the within-subjects design study. Participants were instructed to perform the task as quickly and accurately as possible according to traditional Fitts' testing procedures. Movement time, error rate, and throughput for each input technology were calculated.

Additionally, no standards exist for equating user experience with Fitts’ measures such as movement time, throughput, and error count. To test the hypothesis that a user’s experience can be predicted using Fitts’ measures of movement time, throughput and error count, an ease of use rating using a 5-point scale for each input type was collected from each participant. The calculated Mean Opinion Scores (MOS) were regressed on Fitts’ measures of movement time, throughput, and error count to understand the extent to which they can predict a user’s subjective rating.
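The throughput measure described above is conventionally derived from Fitts' index of difficulty. A minimal sketch of that standard (Shannon-formulation) calculation, not the thesis's actual analysis code; the function and variable names are illustrative:

```python
import math

def fitts_throughput(distance, width, movement_time):
    """Throughput in bits/s using the Shannon formulation.

    distance: center-to-center distance to the target (D)
    width: target width along the movement axis (W)
    movement_time: mean movement time in seconds (MT)
    """
    # Index of difficulty: ID = log2(D / W + 1), in bits
    index_of_difficulty = math.log2(distance / width + 1)
    # Throughput: TP = ID / MT, in bits per second
    return index_of_difficulty / movement_time

# Example: a 7-unit reach to a 1-unit target completed in 1 s
# gives ID = log2(8) = 3 bits, so TP = 3 bits/s.
print(fitts_throughput(7, 1, 1))
```

Larger distances and smaller targets raise the index of difficulty, so a device that keeps movement time low on hard targets scores a higher throughput.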
Contributors: Burno, Rachael A. (Author) / Wu, Bing (Thesis advisor) / Cooke, Nancy J. (Committee member) / Branaghan, Russell (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
We experience spatial separation and temporal asynchrony between visual and haptic information in many virtual-reality, augmented-reality, and teleoperation systems. Three studies were conducted to examine the spatial and temporal characteristics of multisensory integration. Participants interacted with virtual springs using both visual and haptic senses, and their perception of stiffness and ability to differentiate stiffness were measured. The results revealed that a constant visual delay increased the perceived stiffness, while a variable visual delay made participants depend more on the haptic sensations in stiffness perception. We also found that participants judged springs as stiffer when they interacted with them at faster speeds, and that interaction speed was positively correlated with stiffness overestimation. In addition, participants could learn an association between visual and haptic inputs despite the fact that they were spatially separated, resulting in improved typing performance. These results show the limitations of the Maximum-Likelihood Estimation model, suggesting that a Bayesian inference model should be used instead.
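The Maximum-Likelihood Estimation model whose limitations are discussed here combines cues by weighting each sensory estimate by its reliability (inverse variance). A minimal sketch of that standard formulation, with hypothetical numbers rather than the study's data:

```python
def mle_combine(est_visual, var_visual, est_haptic, var_haptic):
    """Reliability-weighted (MLE) fusion of a visual and a haptic estimate.

    Each cue's weight is its inverse variance; the fused estimate has a
    lower variance than either cue alone.
    """
    w_v = 1.0 / var_visual
    w_h = 1.0 / var_haptic
    combined_estimate = (w_v * est_visual + w_h * est_haptic) / (w_v + w_h)
    combined_variance = 1.0 / (w_v + w_h)
    return combined_estimate, combined_variance

# Equally reliable cues: the fused estimate is the midpoint,
# and its variance is halved.
print(mle_combine(10.0, 1.0, 20.0, 1.0))  # (15.0, 0.5)
```

Under this model, a delayed or spatially offset visual cue should simply be down-weighted; the findings above, where delay and separation changed perception in ways the weights alone cannot capture, motivate the move to a fuller Bayesian inference model.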
Contributors: Sim, Sung Hun (Author) / Wu, Bing (Thesis advisor) / Cooke, Nancy J. (Committee member) / Gray, Robert (Committee member) / Branaghan, Russell (Committee member) / Arizona State University (Publisher)
Created: 2017