Matching Items (4)
Description
The objective of this project was to evaluate human factors-based cognitive aids for endoscope reprocessing. The project stems from recent failures in reprocessing (cleaning) endoscopes, which have contributed to the spread of harmful bacterial and viral agents between patients. Three themes were found to represent the majority of problems: 1) lack of visibility (parts and tools were difficult to identify), 2) high memory demands, and 3) insufficient user feedback. In an effort to improve completion rates and eliminate errors, cognitive aids were designed using human factors principles to replace the existing manufacturer visual aids. A usability test was then conducted comparing the endoscope reprocessing performance of novices using the standard manufacturer-provided visual aids with that of novices using the new cognitive aids. Participants successfully completed 87.1% of the reprocessing procedure in the experimental condition with the cognitive aids, compared to 46.3% in the control condition using only the existing support materials. Twenty-five of sixty subtasks showed significant improvement in completion rates. When given a cognitive aid designed with human factors principles, participants were able to complete the reprocessing task more successfully, resulting in an endoscope that was more likely to be safe for patient use.
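As a rough illustration of how per-subtask completion rates in two conditions could be compared, the sketch below runs Fisher's exact test over hypothetical counts; the choice of test, the subtask names, and all numbers are assumptions for illustration, not the thesis's actual analysis or data.

```python
# Illustrative sketch (not the thesis's analysis): compare per-subtask
# completion counts between the control (manufacturer aids) and
# experimental (human factors cognitive aids) conditions.
from scipy.stats import fisher_exact

# Hypothetical (completed, failed) counts per subtask for each condition.
subtask_counts = {
    "leak_test":     {"control": (5, 7), "experimental": (11, 1)},
    "channel_brush": {"control": (6, 6), "experimental": (10, 2)},
    "valve_removal": {"control": (9, 3), "experimental": (12, 0)},
}

ALPHA = 0.05
significant = []
for name, counts in subtask_counts.items():
    table = [list(counts["experimental"]), list(counts["control"])]
    _, p = fisher_exact(table, alternative="two-sided")
    if p < ALPHA:
        significant.append((name, p))

print(f"{len(significant)} of {len(subtask_counts)} subtasks improved significantly")
for name, p in significant:
    print(f"  {name}: p = {p:.3f}")
```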
Contributors: Jolly, Jonathan D. (Author) / Branaghan, Russell J. (Thesis advisor) / Cooke, Nancy J. (Committee member) / Sanchez, Christopher (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
This thesis describes a synthetic task environment, CyberCog, created for the purposes of 1) understanding and measuring individual and team situation awareness in the context of a cyber security defense task and 2) providing a context for evaluating algorithms, visualizations, and other interventions intended to improve cyber situation awareness. CyberCog provides an interactive environment for conducting human-in-the-loop experiments in which participants perform the tasks of a cyber security defense analyst in response to a cyber-attack scenario. CyberCog generates the performance measures and interaction logs needed for measuring individual and team cyber situation awareness. Moreover, the CyberCog environment provides good experimental control for conducting effective situation awareness studies while retaining realism in the scenario and in the tasks performed.
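To make concrete the kind of interaction logs such an environment produces, here is a minimal sketch of an event record that a CyberCog-style testbed might emit for later situation-awareness analysis; the field names and event types are illustrative assumptions, not CyberCog's actual schema.

```python
# Minimal sketch of interaction logging for a human-in-the-loop cyber
# defense experiment; field names are assumptions, not CyberCog's API.
import csv
import time
from dataclasses import dataclass, asdict

@dataclass
class InteractionEvent:
    timestamp: float      # seconds since experiment start
    participant_id: str   # individual analyst
    team_id: str          # team the analyst belongs to
    event: str            # e.g. "alert_opened", "alert_escalated", "chat_sent"
    alert_id: str         # scenario artifact the action refers to

def write_log(events, path="interaction_log.csv"):
    """Persist events so per-individual and per-team measures can be derived later."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(asdict(events[0]).keys()))
        writer.writeheader()
        writer.writerows(asdict(e) for e in events)

# Example: two analysts on one team acting on the same alert.
t0 = time.time()
events = [
    InteractionEvent(time.time() - t0, "P01", "T1", "alert_opened", "A-17"),
    InteractionEvent(time.time() - t0, "P02", "T1", "chat_sent", "A-17"),
]
write_log(events)
```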
Contributors: Rajivan, Prashanth (Author) / Femiani, John (Thesis advisor) / Cooke, Nancy J. (Thesis advisor) / Lindquist, Timothy (Committee member) / Gary, Kevin (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Reasoning about the activities of cyber threat actors is critical to defending against cyber attacks. However, this task is difficult for a variety of reasons. In simple terms, it is difficult to determine who the attacker is, what the attacker's goals are, and how they will carry out their attacks. These three questions essentially entail understanding the attacker's use of deception, the capabilities available, and the intent of launching the attack. These three issues are highly inter-related. If an adversary can hide their intent, they can better deceive a defender. If an adversary's capabilities are not well understood, then determining their goals becomes difficult, as the defender is uncertain whether the adversary has the necessary tools to accomplish them. However, the understanding of these aspects is also mutually supportive: with a clear picture of capabilities, intent can be better deciphered, and with an understanding of intent and capabilities, a defender may be able to see through deception schemes. In this dissertation, I present three pieces of work to tackle these questions and obtain a better understanding of cyber threats. First, we introduce a new reasoning framework to address deception. We evaluate the framework by building a dataset from a DEFCON capture-the-flag exercise to identify the person or group responsible for a cyber attack, and we demonstrate that the framework not only handles cases of deception but also provides transparent decision making in identifying the threat actor. The second task uses a cognitive learning model to determine the intent, that is, the goals of the threat actor on the target system. The third task looks at understanding the capabilities of threat actors to target systems by identifying at-risk systems from hacker discussions on darkweb websites. To achieve this task we gather discussions from more than 300 darkweb websites relating to malicious hacking.
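As a hedged sketch of the general shape of the third task, identifying at-risk systems from hacker discussions can be framed as supervised text classification; the toy posts, labels, and model below are assumptions for illustration only, not the dissertation's models or its darkweb corpus.

```python
# Illustrative sketch only: flag forum posts that discuss exploiting a
# specific system, as a stand-in for at-risk system identification.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled posts: 1 = discusses exploiting a specific system.
posts = [
    "selling working exploit for the latest CMS plugin, DM for price",
    "anyone have a bypass for the new endpoint protection agent?",
    "great tutorial on setting up a home media server",
    "looking for cheap VPS hosting recommendations",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

new_post = "fresh 0day against the popular router firmware, tested and working"
risk = model.predict_proba([new_post])[0, 1]
print(f"probability the post signals an at-risk system: {risk:.2f}")
```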
Contributors: Nunes, Eric (Author) / Shakarian, Paulo (Thesis advisor) / Ahn, Gail-Joon (Committee member) / Baral, Chitta (Committee member) / Cooke, Nancy J. (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Cyber threats are growing in number and sophistication, making it important to continually study and improve all dimensions of cyber defense. Human teamwork in cyber defense analysis has been overlooked even though it has been identified as an important predictor of cyber defense performance. Also, to detect advanced forms of threats, effective information sharing and collaboration between cyber defense analysts become imperative. Therefore, through this dissertation work, I took a cognitive engineering approach to investigate and improve cyber defense teamwork. The approach involved investigating a plausible team-level bias called the information pooling bias in cyber defense analyst teams conducting the detection task that is part of forensics analysis, through human-in-the-loop experimentation. The approach also involved developing agent-based models based on the experimental results to explore the cognitive underpinnings of this bias in human analysts. A prototype collaborative visualization tool was developed by considering the plausible cognitive limitations contributing to the bias, to investigate whether a cognitive engineering-driven visualization tool can help mitigate the bias in comparison to off-the-shelf tools. It was found that participant teams conducting collaborative detection tasks as part of forensics analysis experience the information pooling bias, which affects their performance. Results indicate that cognitively friendly visualizations can help mitigate the effect of this bias in cyber defense analysts. Agent-based modeling produced insights on the internal cognitive processes that might be contributing to this bias, which could be leveraged in building future visualizations. This work has multiple implications, including the development of new knowledge about the science of cyber defense teamwork, a demonstration of the advantage of developing tools using a cognitive engineering approach, a demonstration of the advantage of using a hybrid cognitive engineering methodology to study teams in general, and finally, a demonstration of the effect of effective teamwork on cyber defense performance.
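For intuition about the information pooling bias that the agent-based models explore, the minimal sketch below shows agents whose commonly held information is disproportionately likely to surface in discussion; the agent counts, weighting rule, and parameters are illustrative assumptions, not the dissertation's model.

```python
# Minimal agent-based sketch of the information pooling bias (hidden-profile
# style); not the dissertation's actual model. Items held by more agents are
# proportionally more likely to be mentioned, so uniquely held items tend to
# stay unpooled.
import random

random.seed(1)
N_AGENTS, N_SHARED, UNIQUE_PER_AGENT, TURNS = 3, 6, 2, 20

# Each agent knows the shared items plus a few items only it holds.
shared_items = {f"shared_{i}" for i in range(N_SHARED)}
agents = [
    shared_items | {f"unique_{a}_{i}" for i in range(UNIQUE_PER_AGENT)}
    for a in range(N_AGENTS)
]

pooled = set()  # information mentioned in the team discussion so far
for _ in range(TURNS):
    speaker = random.choice(agents)
    items = sorted(speaker)
    # An item's chance of being mentioned scales with how many agents hold it.
    weights = [sum(item in agent for agent in agents) for item in items]
    pooled.add(random.choices(items, weights=weights)[0])

unique_total = N_AGENTS * UNIQUE_PER_AGENT
unique_pooled = sum(1 for item in pooled if item.startswith("unique"))
print(f"shared items pooled: {len(pooled & shared_items)}/{N_SHARED}")
print(f"unique items pooled: {unique_pooled}/{unique_total}")
```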
Contributors: Rajivan, Prashanth (Author) / Cooke, Nancy J. (Thesis advisor) / Ahn, Gail-Joon (Committee member) / Janssen, Marcus (Committee member) / Arizona State University (Publisher)
Created: 2014