Matching Items (132)
Description
Artificial Intelligence (AI) systems have achieved outstanding performance and have been found to be better than humans at various tasks, such as sentiment analysis and face recognition. However, the majority of these state-of-the-art AI systems use complex Deep Learning (DL) methods, which present challenges for human experts to design and evaluate such models with respect to privacy, fairness, and robustness. Recent examination of DL models reveals that representations may include information that could lead to privacy violations, unfairness, and robustness issues. This results in AI systems that are potentially untrustworthy from a socio-technical standpoint. Trustworthiness in AI is defined by a set of model properties such as non-discriminatory bias, protection of users’ sensitive attributes, and lawful decision-making. The characteristics of trustworthy AI can be grouped into three categories: Reliability, Resiliency, and Responsibility. Past research has shown that the successful integration of an AI model depends on its trustworthiness. Thus, it is crucial for organizations and researchers to build trustworthy AI systems to facilitate the seamless integration and adoption of intelligent technologies. The main issue with existing AI systems is that they are primarily trained to improve technical measures such as accuracy on a specific task but are not considerate of socio-technical measures. The aim of this dissertation is to propose methods for improving the trustworthiness of AI systems through representation learning. DL models’ representations contain information about a given input and can be used for tasks such as detecting fake news on social media or predicting the sentiment of a review. The findings of this dissertation significantly expand the scope of trustworthy AI research and establish a new paradigm for modifying data representations to balance the properties of trustworthy AI.
Specifically, this research investigates multiple techniques such as reinforcement learning for understanding trustworthiness in users’ privacy, fairness, and robustness in classification tasks like cyberbullying detection and fake news detection. Since most social measures in trustworthy AI cannot be used to fine-tune or train an AI model directly, the main contribution of this dissertation lies in using reinforcement learning to alter an AI system’s behavior based on non-differentiable social measures.
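The key idea that a reward signal need not be differentiable can be sketched with a score-function (REINFORCE-style) policy-gradient update. This is an illustrative toy, not the dissertation's actual model: the names `fairness_reward`, `policy_prob`, and `reinforce_step` are hypothetical, and the hard rule standing in for a social measure is invented for the sketch.

```python
# Toy sketch: a policy-gradient (REINFORCE) update driven by a
# NON-differentiable reward. The gradient of the expected reward is
# estimated as reward * d(log pi)/d(theta), so the reward function is
# only ever *called*, never differentiated.
import math
import random

random.seed(0)

def fairness_reward(action: int) -> float:
    """Hypothetical non-differentiable social measure: a hard,
    rule-based audit that back-propagation cannot pass through."""
    return 1.0 if action == 1 else 0.0

def policy_prob(theta: float) -> float:
    """Probability of choosing action 1 under a Bernoulli(sigmoid) policy."""
    return 1.0 / (1.0 + math.exp(-theta))

def reinforce_step(theta: float, lr: float = 0.5) -> float:
    """One REINFORCE update on a single sampled action."""
    p = policy_prob(theta)
    action = 1 if random.random() < p else 0
    r = fairness_reward(action)
    # d/dtheta log pi(action | theta) for a Bernoulli(sigmoid(theta)) policy
    grad_log_pi = (1.0 - p) if action == 1 else -p
    return theta + lr * r * grad_log_pi

theta = 0.0
for _ in range(200):
    theta = reinforce_step(theta)
print(round(policy_prob(theta), 2))  # policy now strongly prefers the rewarded action
```

The point of the sketch is the update rule: because the reward multiplies the log-probability gradient rather than being differentiated itself, any black-box social measure (a fairness audit, a privacy check) can steer the policy.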
ContributorsMosallanezhad, Ahmadreza (Author) / Liu, Huan (Thesis advisor) / Mancenido, Michelle (Thesis advisor) / Doupe, Adam (Committee member) / Maciejewski, Ross (Committee member) / Arizona State University (Publisher)
Created2023
Description
As deception in cyberspace becomes more dynamic, research in this area should also take a dynamic approach to battling deception and false information. Research has previously shown that people are no better than chance at detecting deception. Deceptive information in cyberspace, specifically on social media, is not exempt from this pitfall. Current practices in social media rely on users to detect false information and use appropriate discretion when deciding to share information online. This is ineffective and will predictably end with users being unable to discern true from false information at all, as deceptive information becomes more difficult to distinguish from true information. To proactively combat inaccurate and deceptive information on social media, research must be conducted to understand not only the interaction effects of false content and user characteristics, but also the user behavior that stems from this interaction. This study investigated the effects of confirmation bias and susceptibility to deception on an individual’s choice to share information, specifically to understand how these factors relate to the sharing of false controversial information.
ContributorsChinzi, Ashley (Author) / Cooke, Nancy J. (Thesis advisor) / Chiou, Erin (Committee member) / Becker, David V (Committee member) / Arizona State University (Publisher)
Created2019
Description
Student pilots are the future of aviation, and one of the biggest problems they face as new pilots is fatigue. A survey was sent out asking student pilots whether they were fatigued, whether they attributed their sleep loss to flight training, school work, work outside of school, and social obligations, and how they spend their time on those activities. The survey was given to aviation students at the Arizona State University (ASU) Polytechnic Campus. ASU student pilots were found to be fatigued through a single-sample t-test. Additional t-tests were done on each of the questions that asked student pilots how flight training, school work, work outside of school, and social obligations affect their sleep loss. Flight training and school work were found to be contributing to student pilots’ sleep loss, while work outside of school and social obligations were not. Student pilots’ tendency to use a planner or calendar was not found to be significant, nor was planning during the week when they will do assignments or study for exams, nor making lists of assignments and their due dates. The t-tests also found that student pilots are neutral on whether good time management skills would help increase the amount of sleep they get.
ContributorsHarris, Mariah Jean (Author) / Cooke, Nancy J. (Thesis advisor) / Nullmeyer, Robert (Thesis advisor) / Gray, Robert (Committee member) / Arizona State University (Publisher)
Created2019
Description
The prospects of commercially available autonomous vehicles are surely tantalizing; however, the implementation of these vehicles and their strain on the social dynamics between motorists and pedestrians remain unknown. Questions concerning how autonomous vehicles will communicate safety and intent to pedestrians remain largely unanswered. This study examines the efficacy of various proposed technologies for bridging the communication gap between self-driving cars and pedestrians. Displays utilizing words like “safe” and “danger” seem to be effective in communicating with pedestrians and other road users. Future research should attempt to study different external notification interfaces in real-life settings to more accurately gauge pedestrian responses.
ContributorsMuqolli, Endrit (Author) / Cooke, Nancy J. (Thesis advisor) / Chiou, Erin (Committee member) / Gray, Rob (Committee member) / Arizona State University (Publisher)
Created2019
Description
Human-robot interaction has expanded immensely within dynamic environments. The goals of human-robot interaction are to increase productivity, efficiency, and safety. In order for the integration of human-robot interaction to be seamless and effective, humans must be willing to trust the capabilities of assistive robots. A major priority for human-robot interaction research should be to understand how human dyads have historically been effective within a joint-task setting; this will help ensure that the same goals can be met in human-robot settings. The aim of the present study was to examine human dyads and the effects of an unexpected interruption. Participants’ interpersonal and individual levels of trust were studied in order to draw appropriate conclusions. Seventeen undergraduate- and graduate-level dyads were recruited from Arizona State University and assigned to either a surprise condition or a baseline condition. Participants individually took two surveys to capture their dispositional and individual levels of trust. The findings showed that participants’ levels of interpersonal trust were average. Surprisingly, participants in the surprise condition showed moderate to high levels of dyad trust afterwards, suggesting that participants became more reliant on their partners when interrupted by a surprising event. Future studies can apply this knowledge to human-robot interaction, in order to mimic the seamless team interaction shown in historically effective dyads, specifically human team interaction.
ContributorsShaw, Alexandra Luann (Author) / Chiou, Erin (Thesis advisor) / Cooke, Nancy J. (Committee member) / Craig, Scotty (Committee member) / Arizona State University (Publisher)
Created2019
Description
Coastal areas are susceptible to man-made disasters, such as oil spills, which not only have a dreadful impact on the lives of coastal communities and businesses but also have lasting and hazardous consequences. The United States coastal areas, especially the Gulf of Mexico, have witnessed devastating oil spills of varied sizes and durations that resulted in major economic and ecological losses. These disasters affected the oil, housing, forestry, tourism, and fishing industries, with overall costs exceeding billions of dollars (Baade et al. (2007); Smith et al. (2011)). Extensive research has been done with respect to oil spill simulation techniques, spatial optimization models, and innovative strategies to deal with spill response and planning efforts. However, most of the research in those areas has been done independently, leaving a conceptual void between them.

In the following work, this thesis presents a Spatial Decision Support System (SDSS), which efficiently integrates the independent facets of spill modeling techniques and spatial optimization to enable officials to investigate and explore the various options to clean up an offshore oil spill and make a more informed decision. This thesis utilizes the Blowout and Spill Occurrence Model (BLOSOM), developed by Sim et al. (2015), to simulate hypothetical oil spill scenarios, followed by the Oil Spill Cleanup and Operational Model (OSCOM), developed by Grubesic et al. (2017), to spatially optimize the response efforts. The results of this combination are visualized in the SDSS, featuring geographical maps, so the boat ramps from which the response should be launched can be easily identified along with the amount of oil that hits the shore, thereby visualizing the intensity of the impact of the spill in the coastal areas for various cleanup targets.
ContributorsPydi Medini, Prannoy Chandra (Author) / Maciejewski, Ross (Thesis advisor) / Grubesic, Anthony (Committee member) / Sefair, Jorge (Committee member) / Arizona State University (Publisher)
Created2018
Description
When looking at drawings of graphs, questions about graph density, community structures, local clustering, and other graph properties may be of critical importance for analysis. While graph layout algorithms have focused on minimizing edge crossings, symmetry, and other such layout properties, not much is known about how these algorithms relate to a user’s ability to perceive graph properties for a given layout. This study applies previously established methodologies for perceptual analysis to identify which graph drawing layout will help the user best perceive a particular graph property. A large-scale (n = 588) crowdsourced experiment is conducted to investigate whether the perception of two graph properties (graph density and average local clustering coefficient) can be modeled using Weber’s law. Three graph layout algorithms from three representative classes (Force Directed - FD, Circular, and Multi-Dimensional Scaling - MDS) are studied, and the results of this experiment establish the precision of judgment for these graph layouts and properties. The findings demonstrate that the perception of graph density can be modeled with Weber’s law. Furthermore, the perception of the average clustering coefficient can be modeled as an inverse of Weber’s law, and the MDS layout showed a significantly different precision of judgment than the FD layout.
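The Weber's-law model referenced in the abstract can be sketched in a few lines. This is an illustrative toy, not the study's analysis code; the function names and the Weber fraction of 0.1 are invented for the example.

```python
# Weber's law: the just-noticeable difference (JND) in a stimulus grows
# linearly with its base intensity, so the Weber fraction k = JND / I
# stays constant across intensities.
def weber_fraction(base_intensity: float, jnd: float) -> float:
    """Weber fraction k = JND / base intensity."""
    return jnd / base_intensity

def predicted_jnd(base_intensity: float, k: float) -> float:
    """Smallest detectable change at a given base, under Weber's law."""
    return k * base_intensity

# Hypothetical graph-density judgment: if k is about 0.1, a viewer needs
# roughly 10% more edges before two drawings look reliably different.
k = weber_fraction(base_intensity=0.20, jnd=0.02)   # k is about 0.1
print(round(predicted_jnd(base_intensity=0.40, k=k), 3))  # -> 0.04
```

Fitting such a constant k to judgment data is what it means to say a property "can be modeled using Weber's law"; an inverse relationship, as reported for the clustering coefficient, means the detectable change shrinks as the base value grows.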
ContributorsSoni, Utkarsh (Author) / Maciejewski, Ross (Thesis advisor) / Kobourov, Stephen (Committee member) / Sefair, Jorge (Committee member) / Arizona State University (Publisher)
Created2018
Description
Reasoning about the activities of cyber threat actors is critical to defending against cyber attacks. However, this task is difficult for a variety of reasons. In simple terms, it is difficult to determine who the attacker is, what the attacker’s goals are, and how they will carry out their attacks. These three questions essentially entail understanding the attacker’s use of deception, the capabilities available, and the intent of launching the attack. These three issues are highly inter-related. If an adversary can hide their intent, they can better deceive a defender. If an adversary’s capabilities are not well understood, then determining their goals becomes difficult, as the defender is uncertain whether they have the necessary tools to accomplish them. However, the understanding of these aspects is also mutually supportive. If we have a clear picture of capabilities, intent can better be deciphered. If we understand intent and capabilities, a defender may be able to see through deception schemes.

In this dissertation, I present three pieces of work that tackle these questions to obtain a better understanding of cyber threats. First, we introduce a new reasoning framework to address deception. We evaluate the framework by building a dataset from a DEFCON capture-the-flag exercise to identify the person or group responsible for a cyber attack. We demonstrate that the framework not only handles cases of deception but also provides transparent decision making in identifying the threat actor. The second task uses a cognitive learning model to determine the intent, i.e., the goals of the threat actor on the target system. The third task looks at understanding the capabilities of threat actors to target systems by identifying at-risk systems from hacker discussions on darkweb websites. To achieve this task, we gather discussions from more than 300 darkweb websites relating to malicious hacking.
ContributorsNunes, Eric (Author) / Shakarian, Paulo (Thesis advisor) / Ahn, Gail-Joon (Committee member) / Baral, Chitta (Committee member) / Cooke, Nancy J. (Committee member) / Arizona State University (Publisher)
Created2018
Description
In the last few years, there has been a tremendous increase in the use of big data. Most of this data is hard to understand because of its size and dimensions. The importance of this problem is emphasized by the fact that the Big Data Research and Development Initiative was announced by the United States administration in 2012 to address problems faced by the government. Various states and cities in the US gather spatial data about incidents like police calls for service.

Querying large amounts of data may raise many questions. For example, when we look at arithmetic relationships between queries in heterogeneous data, there are many differences. How can we explain what factors account for these differences? If we define the observation as an arithmetic relationship between queries, this kind of problem can be approached through aggravation or intervention. Aggravation views the value of our observation for different sets of tuples, while intervention looks at the value of the observation after removing sets of tuples. We call the predicates that represent these tuples explanations. Observations by themselves have limited importance. For example, if we observe a large number of taxi trips in a specific area, we might ask: Why are there so many trips here? Explanations attempt to answer these kinds of questions.

While aggravation and intervention are designed for non-spatial data, we propose a new approach for explaining spatially heterogeneous data. Our approach expands on aggravation and intervention while using spatial partitioning/clustering to improve explanations for spatial data. The proposed approach was evaluated against a real-world taxi dataset as well as a synthetic disease-outbreak dataset, and was found to outperform aggravation in precision and recall while outperforming intervention in precision.
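The intervention/aggravation distinction described above can be made concrete with a toy example. This sketch is illustrative only, not the thesis's implementation: the taxi-trip data, the `observation` aggregate, and the predicate are all invented.

```python
# Toy sketch: an observation is an aggregate over tuples; an explanation
# is a predicate over tuples. Intervention recomputes the observation
# after REMOVING the matching tuples; aggravation recomputes it over
# ONLY the matching tuples.
from typing import Callable, Dict, List

Row = Dict[str, object]

def observation(tuples: List[Row]) -> float:
    """Example observation: total number of taxi trips."""
    return float(sum(t["trips"] for t in tuples))

def intervention(tuples: List[Row], predicate: Callable[[Row], bool]) -> float:
    """Observation value after removing tuples matching the predicate."""
    return observation([t for t in tuples if not predicate(t)])

def aggravation(tuples: List[Row], predicate: Callable[[Row], bool]) -> float:
    """Observation value restricted to tuples matching the predicate."""
    return observation([t for t in tuples if predicate(t)])

data = [
    {"zone": "airport", "trips": 90},
    {"zone": "downtown", "trips": 40},
    {"zone": "suburb", "trips": 10},
]
is_airport = lambda t: t["zone"] == "airport"

print(intervention(data, is_airport))  # 50.0: trips left once airport rows are removed
print(aggravation(data, is_airport))   # 90.0: trips within the airport rows alone
```

A predicate that moves the observation sharply under intervention (here, removing the airport rows drops the total from 140 to 50) is a strong candidate explanation; the spatial extension in the thesis ranks such predicates over spatial partitions rather than arbitrary tuple sets.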
ContributorsTahir, Anique (Author) / Elsayed, Mohamed (Thesis advisor) / Hsiao, Ihan (Committee member) / Maciejewski, Ross (Committee member) / Arizona State University (Publisher)
Created2018
Description
Reading partners’ actions correctly is essential for successful coordination, but interpretation does not always reflect reality. Attribution biases, such as self-serving and correspondence biases, lead people to misinterpret their partners’ actions and falsely assign blame after an unexpected event. These biases thus further influence people’s trust in their partners, including machine partners. The increasing capabilities and complexity of machines allow them to work physically with humans. However, these improvements may interfere with people’s ability to accurately calibrate trust in machines and their capabilities, which requires an understanding of attribution biases’ effect on human-machine coordination. Specifically, the current thesis explores how the development of trust in a partner is influenced by attribution biases and people’s assignment of blame for a negative outcome. This study can also suggest how a machine partner should be designed to react to environmental disturbances and report the appropriate level of information about external conditions.
ContributorsHsiung, Chi-Ping (M.S.) (Author) / Chiou, Erin (Thesis advisor) / Cooke, Nancy J. (Thesis advisor) / Zhang, Wenlong (Committee member) / Arizona State University (Publisher)
Created2019