Description
Artificial intelligence (AI) has the potential to drive us towards a future in which all of humanity flourishes. It also comes with substantial risks of oppression and calamity. For example, social media platforms have knowingly and surreptitiously promoted harmful content, e.g., rampant disinformation and hate speech. Machine learning algorithms designed for combating hate speech were also found to be biased against underrepresented and disadvantaged groups. In response, researchers and organizations have been working to publish principles and regulations for the responsible use of AI. However, these conceptual principles also need to be turned into actionable algorithms to materialize AI for good. The broad aim of my research is to design AI systems that responsibly serve users and develop applications with social impact. This dissertation seeks to develop algorithmic solutions for Socially Responsible AI (SRAI), a systematic framework encompassing responsible AI principles and algorithms, and the responsible use of AI. In particular, it first introduces an interdisciplinary definition of SRAI and the AI responsibility pyramid, in which four types of AI responsibilities are described. It then elucidates the purpose of SRAI: how to bridge the conceptual definitions and responsible AI practice through three human-centered operations, namely to Protect and Inform users, and to Prevent negative consequences. These operations are illustrated in the social media domain, given that social media has revolutionized how people live but has also contributed to the rise of many societal issues. The three representative tasks for these dimensions are cyberbullying detection, disinformation detection and dissemination, and unintended bias mitigation. The means of SRAI is to develop responsible AI algorithms. Many issues (e.g., discrimination and generalization) can arise when AI systems are trained to improve accuracy without knowing the underlying causal mechanism. Causal inference, therefore, is intrinsically related to understanding and resolving these challenging issues in AI. As a result, this dissertation also seeks to gain an in-depth understanding of AI by looking into the precise relationships between causes and effects. For illustration, it introduces a recent work that applies deep learning to estimating causal effects and shows that causal learning algorithms can outperform traditional methods.
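To make the last point concrete, the sketch below estimates an average treatment effect with a simple two-model (T-learner) approach on synthetic data; the data, network sizes, and learner choice are illustrative assumptions, not the dissertation's actual causal-learning model.

```python
# Hedged sketch: T-learner estimate of the average treatment effect (ATE).
# All data and model choices here are hypothetical stand-ins.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic observational data: covariates X, binary treatment T, outcome Y.
n = 2000
X = rng.normal(size=(n, 5))
T = (rng.random(n) < 1 / (1 + np.exp(-X[:, 0]))).astype(int)  # confounded treatment
Y = 2.0 * T + X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=n)  # true effect = 2

# Fit one outcome model per treatment arm (the "T-learner").
m1 = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0).fit(X[T == 1], Y[T == 1])
m0 = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0).fit(X[T == 0], Y[T == 0])

# ATE = mean difference of the two arms' predictions over all units.
ate = np.mean(m1.predict(X) - m0.predict(X))
print(f"Estimated ATE: {ate:.2f} (ground truth: 2.0)")
```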
ContributorsCheng, Lu (Author) / Liu, Huan (Thesis advisor) / Varshney, Kush R. (Committee member) / Silva, Yasin N. (Committee member) / Wu, Carole-Jean (Committee member) / Candan, Kasim S. (Committee member) / Arizona State University (Publisher)
Created2022
Description
The retinotopic map, the mapping between visual inputs on the retina and neuronal activation in the visual areas of the brain, is one of the central topics in visual neuroscience. For human observers, the map is typically obtained by analyzing functional magnetic resonance imaging (fMRI) signals of cortical responses to slowly moving visual stimuli on the retina. Biological evidence shows that retinotopic mapping is topology-preserving (topological), i.e., it preserves neighboring relationships after processing in the brain, within each visual region. Unfortunately, due to the limited spatial resolution and signal-to-noise ratio of fMRI, state-of-the-art retinotopic maps are not topological. The aim of this work was to model the topology-preserving condition mathematically, correct non-topological retinotopic maps with numerical methods, and improve the quality of retinotopic maps. Imposing the topological condition benefits several applications. With topological retinotopic maps, one may gain better insight into human retinotopic maps, including better quantification of the cortical magnification factor, more precise descriptions of retinotopic maps, and potentially better examination methods in ophthalmology clinics.
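One common way to state the topology-preserving condition mathematically, given below as a general formulation rather than necessarily the exact one used in this work, is to require the retinotopic map to be orientation-preserving, i.e., to have a positive Jacobian determinant everywhere, or equivalently a Beltrami coefficient of magnitude less than one:

```latex
% Hedged sketch of a topology-preserving (diffeomorphic) condition for a
% planar retinotopic map f : V \subset \mathbb{R}^2 \to \mathbb{R}^2;
% the dissertation's exact formulation may differ.
\[
  \det J_f(x) \;=\;
  \frac{\partial f_1}{\partial x_1}\frac{\partial f_2}{\partial x_2}
  - \frac{\partial f_1}{\partial x_2}\frac{\partial f_2}{\partial x_1} \;>\; 0
  \quad \text{for all } x \in V,
\]
% or, viewing f as a map of the complex plane with Beltrami coefficient \mu,
\[
  \mu(f) \;=\; \frac{\partial f / \partial \bar{z}}{\partial f / \partial z},
  \qquad \lvert \mu(f) \rvert < 1 .
\]
```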
ContributorsTu, Yanshuai (Author) / Wang, Yalin (Thesis advisor) / Lu, Zhong-Lin (Committee member) / Crook, Sharon (Committee member) / Yang, Yezhou (Committee member) / Zhang, Yu (Committee member) / Arizona State University (Publisher)
Created2022
Description
Omnipresent data, a growing number of network devices, and evolving attack techniques have been challenging organizations’ security defenses over the past decade. With humongous volumes of logs generated by those network devices, looking for patterns of malicious activities and identifying them in time is growing beyond the capabilities of their defense systems. Deep Learning, a subset of Machine Learning (ML) and Artificial Intelligence (AI), fills in this gap with its ability to learn from huge amounts of data and to improve its performance as the data it learns from increases. In this dissertation, I bring forward security issues pertaining to two top threats that most organizations fear, Advanced Persistent Threat (APT) and Distributed Denial of Service (DDoS), along with deep learning models built towards addressing those security issues. First, I present a deep learning model, APT Detection, capable of detecting anomalous activities in a system. Evaluation of this model demonstrates how it can contribute to early detection of an APT attack, with an Area Under the Curve (AUC) of up to 91% on a Receiver Operating Characteristic (ROC) curve. Second, I present DAPT2020, a first-of-its-kind dataset capturing an APT attack exploiting web and system vulnerabilities in an emulated organization’s production network. Evaluation of the dataset using well-known machine learning models demonstrates the need for better deep learning models to detect APT attacks. I then present DAPT2021, a semi-synthetic dataset capturing an APT attack exploiting human vulnerabilities, alongside two less-skilled attacks. By emulating the normal behavior of the employees in a set target organization, DAPT2021 has been created to enable researchers to study the causations and correlations among the captured data, information that is much needed to detect an underlying threat early. Finally, I present a distributed defense framework, SmartDefense, that can detect and mitigate over 90% of DDoS traffic at the source and over 97.5% of the remaining DDoS traffic at the Internet Service Provider’s (ISP’s) edge network. Evaluation of this work shows how, by using attributes sent by the customer edge network, SmartDefense can further help ISPs prevent up to 51.95% of the DDoS traffic from reaching the destination.
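As a rough illustration of the kind of ROC/AUC evaluation described above, and not the dissertation's APT Detection model itself, the following sketch scores anomalous activity with a stand-in detector on synthetic log features and computes the AUC; all data and parameters are hypothetical.

```python
# Hedged sketch: scoring anomalous activity and computing ROC AUC.
# The data, features, and detector are stand-ins, not the APT Detection model.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic "log feature" vectors: mostly benign traffic plus a few anomalies.
benign = rng.normal(loc=0.0, scale=1.0, size=(1900, 10))
malicious = rng.normal(loc=3.0, scale=1.0, size=(100, 10))
X = np.vstack([benign, malicious])
y = np.concatenate([np.zeros(len(benign)), np.ones(len(malicious))])  # 1 = attack

# Train an unsupervised anomaly detector on the benign baseline only.
detector = IsolationForest(random_state=0).fit(benign)

# Higher score = more anomalous, so negate sklearn's "normality" score.
anomaly_score = -detector.score_samples(X)
print(f"ROC AUC: {roc_auc_score(y, anomaly_score):.3f}")
```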
ContributorsMyneni, Sowmya (Author) / Xue, Guoliang (Thesis advisor) / Doupe, Adam (Committee member) / Li, Baoxin (Committee member) / Baral, Chitta (Committee member) / Arizona State University (Publisher)
Created2022
Description
Deep neural network-based methods have been shown to achieve outstanding performance on object detection and classification tasks. Deep neural networks follow the "deeper model with deeper confidence" belief to gain higher recognition accuracy. However, reducing these networks' computational costs remains a challenge, which impedes their deployment on embedded devices. For instance, the intersection management of Connected Autonomous Vehicles (CAVs) requires running computationally intensive object recognition algorithms on low-power traffic cameras. This dissertation aims to study the effect of a dynamic hardware and software approach to address this issue. Characteristics of real-world applications can facilitate this dynamic adjustment and reduce the computation. Specifically, this dissertation starts with a dynamic hardware approach that adjusts itself based on the difficulty of the input and extracts deeper features if needed. Next, an adaptive learning mechanism is studied that uses features extracted from previous inputs to improve system performance. Finally, a system (ARGOS) is proposed and evaluated that can run on embedded systems while maintaining the desired accuracy. This system adopts shallow features at inference time but can switch to deep features if higher accuracy is required. To improve performance, ARGOS distills the temporal knowledge from deep features to the shallow system. Moreover, ARGOS further reduces computation by focusing on regions of interest. Response time and mean average precision are used to evaluate the proposed ARGOS system.
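The shallow-versus-deep switching described above can be sketched as a confidence-gated early exit: run a cheap model first and fall back to a deeper model only when its confidence is low. The models, threshold, and input below are placeholders and do not reproduce the actual ARGOS system.

```python
# A minimal sketch of confidence-gated dynamic inference, assuming a
# cheap "shallow" model and an expensive "deep" model. Untrained
# placeholder networks are used here purely to show the control flow.
import torch
import torch.nn.functional as F
from torchvision import models

shallow = models.mobilenet_v3_small(weights=None).eval()  # cheap path
deep = models.resnet50(weights=None).eval()               # expensive path

CONFIDENCE_THRESHOLD = 0.8  # hypothetical operating point

@torch.no_grad()
def classify(image: torch.Tensor) -> tuple[int, str]:
    """Return (predicted class, which path produced it)."""
    probs = F.softmax(shallow(image.unsqueeze(0)), dim=1)
    conf, pred = probs.max(dim=1)
    if conf.item() >= CONFIDENCE_THRESHOLD:
        return pred.item(), "shallow"        # cheap path was confident enough
    probs = F.softmax(deep(image.unsqueeze(0)), dim=1)
    return int(probs.argmax(dim=1)), "deep"  # escalate to the deeper model

# Example: a random 224x224 RGB tensor standing in for a traffic-camera frame.
print(classify(torch.rand(3, 224, 224)))
```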
ContributorsFarhadi, Mohammad (Author) / Yang, Yezhou (Thesis advisor) / Vrudhula, Sarma (Committee member) / Wu, Carole-Jean (Committee member) / Ren, Yi (Committee member) / Arizona State University (Publisher)
Created2022
Description
Stress is one of the critical factors in daily life, as it has a profound impact on performance at work and on decision-making processes. With the development of IoT technology, smart wearables can handle diverse operations, including networking and recording biometric signals. It has also become easier for individual users to self-detect stress with recorded data, since these wearables as well as their accompanying smartphones now have data processing capability. Edge computing on such devices enables real-time feedback and, in turn, preemptive identification of reactions to stress. This can provide an opportunity to prevent more severe consequences that might result if stress is left unaddressed. From a system perspective, leveraging edge computing saves resources such as network bandwidth and reduces latency, since data are processed in proximity to their source. It can also strengthen privacy by performing stress prediction on local devices without transferring personal information to the public cloud. This thesis presents a framework for real-time stress prediction using Fitbit and machine learning with support from cloud computing. Fitbit is a wearable tracker that records biometric measurements using optical sensors on the wrist. It also provides developers with platforms to design custom applications. I developed an application for the Fitbit and the user’s accompanying mobile device to collect heart rate fluctuations and corresponding stress levels entered by users. I also established a dataset collected from police cadets during their academy training program. Machine learning classifiers for stress prediction are built using classic models and TensorFlow in the cloud. Lastly, the classifiers are optimized using model compression techniques for deployment on smartphones, and I analyze how efficiently stress prediction can be performed on the edge.
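A minimal sketch of the last two steps, training a small TensorFlow classifier on heart-rate windows and compressing it for on-device inference, is shown below; the feature layout, labels, and architecture are hypothetical stand-ins rather than the thesis's actual dataset or models.

```python
# Hedged sketch: a small stress classifier in TensorFlow plus post-training
# quantization for edge deployment. Data, features, and architecture are
# hypothetical, not the thesis's Fitbit dataset or models.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)

# Hypothetical features: 60 heart-rate samples per window; label 1 = stressed.
X = rng.normal(loc=70, scale=10, size=(1000, 60)).astype("float32")
y = (X.mean(axis=1) > 72).astype("int32")  # toy labeling rule

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(60,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

# Post-training quantization: shrink the model for the phone/wearable edge.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()
print(f"Compressed model size: {len(tflite_model) / 1024:.1f} KiB")
```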
ContributorsSim, Sang-Hun (Author) / Zhao, Ming (Thesis advisor) / Roberts, Nicole (Committee member) / Zou, Jia (Committee member) / Arizona State University (Publisher)
Created2022
Description

In the age of growing technology, Computer Science (CS) professionals have come into high demand. However, despite this demand, there are not enough computer scientists to fill these roles. The current demographic of computer scientists consists mainly of white men. This apparent gender gap must be addressed to promote diversity and inclusivity in a career that requires high creativity and innovation. To understand what reinforces gender stereotypes and the gender gap within CS, survey and interview data were collected from both male and female senior students studying CS and students who have left the CS program at Arizona State University. Students were asked what experiences either diminished or reinforced their sense of belonging in this field, as well as other questions related to their involvement in CS. Interview and survey data reveal that a lack of representation within courses, as well as a lack of peer support, are key factors that influence the involvement and retention of students in CS, especially women. This data was used to identify key factors that influence retention and what can be done to remedy the growing deficit of professionals in this field.

ContributorsKent, Victoria (Author) / Kappes, Janelle (Thesis director) / Forrest, Stephanie (Committee member) / Richa, Andrea (Committee member) / Barrett, The Honors College (Contributor) / School of Life Sciences (Contributor)
Created2023-05
Description

In 2018, Google researchers published the BERT (Bidirectional Encoder Representations from Transformers) model, which has since served as a starting point for hundreds of NLP (Natural Language Processing) experiments and other derivative models. BERT was trained on masked-language modelling and next-sentence prediction, but its capabilities extend to more common NLP tasks, such as language inference and text classification. Naralytics is a company that seeks to use natural language to categorize users, based on the text they create, into multiple categories, which is a modified version of classification. However, the texts that Naralytics seeks to draw from exceed the 512-token maximum input length that BERT supports, so this report discusses research into multiple BERT derivatives that seek to address this problem and then implements a solution that addresses the concerns attached to this kind of model.
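One common workaround for the 512-token limit, shown here as a generic sketch rather than necessarily the approach the report implements, is to split a long document into overlapping 512-token windows, classify each window, and aggregate the per-window predictions:

```python
# A minimal sketch of sliding-window classification for text longer than
# BERT's 512-token limit. The base checkpoint and label count are
# placeholders; a fine-tuned model would be used in practice.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "bert-base-uncased"  # hypothetical; the classifier head here is untrained
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=3).eval()

@torch.no_grad()
def classify_long_text(text: str) -> int:
    """Split into overlapping 512-token windows and average class probabilities."""
    enc = tokenizer(
        text,
        truncation=True,
        max_length=512,
        stride=128,                      # overlap between consecutive windows
        return_overflowing_tokens=True,  # emit one encoded row per window
        padding=True,
        return_tensors="pt",
    )
    enc.pop("overflow_to_sample_mapping")  # bookkeeping field, not a model input
    logits = model(**enc).logits           # one row of logits per window
    return int(torch.softmax(logits, dim=-1).mean(dim=0).argmax())

print(classify_long_text("example user text " * 600))
```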

ContributorsNgo, Nicholas (Author) / Carter, Lynn (Thesis director) / Lee, Gyou-Re (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor) / Economics Program in CLAS (Contributor)
Created2023-05
Description

With the extreme strides taken in physics in the early twentieth century, one of the biggest questions on the minds of scientists was what this new branch of quantum physics could be used for. The twentieth century saw the rise of computers as devices that significantly aided in calculations and performing algorithms. Because of the incredible success of computers and all of the groundbreaking possibilities they afforded, research into using quantum mechanics for these systems was proposed. Although only theoretical at the time, it was found that a computer able to leverage quantum mechanics would be far superior to any classical machine. This sparked a wave of research interest and funding in this exciting new field. General-use quantum computers have the potential to disrupt countless industries and fields of study, including physics, medicine, engineering, cryptography, finance, meteorology, climatology, and more. The supremacy of quantum computers has not yet been reached, but the continued funding and research into this new technology ensure that one day humanity will be able to unlock the full potential of quantum computing.

ContributorsEaton, Jacob (Author) / Foy, Joseph (Thesis director) / Hines, Taylor (Committee member) / Barrett, The Honors College (Contributor) / Mechanical and Aerospace Engineering Program (Contributor)
Created2023-05
Description

The Oasis app is a self-appraisal tool that helps potential or current problem gamblers take control of their habits by providing periodic check-in notifications during a gambling session and allowing users to see their progress over time. Oasis is backed by substantial background research on addiction intervention methods, especially in the field of self-appraisal messaging, and applies this messaging in a familiar mobile notification form that can effectively change users’ behavior. User feedback was collected and used to improve the app, and the results show a promising tool that could help those who need it in the future.

ContributorsBlunt, Thomas (Author) / Meuth, Ryan (Thesis director) / McDaniel, Troy (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created2023-05
Description

The last few years have marked immense growth in the development of digital twins as developers continue to devise strategies to ensure their virtual devices replicate their physical twins’ actions in a real-time virtual environment. The complexity and predictability of these environments can be the deciding factor for adequately testing a digital twin. Over the last year, a digital twin has been in development for a capstone project at Arizona State University: CIA Research Labs - Mechanical Systems in Virtual Environments. The virtual device was initially designed for a fixed environment with obstacles known ahead of time. Because the device was expected only to traverse set environments, it was unknown how it would handle being driven in an environment with more randomized and unexpected obstacles. For this paper, the device was test-driven in the original environment and in environments with various levels of randomization to see how usable and durable the digital twin is despite being built only for environments with expected object locations. Using the results of the trial runs and the number of obstacles the device failed to avoid, this research allowed the creators of the digital twin to understand how reliable its controls are when trained only for fixed terrains.

ContributorsSassone, Skylar (Author) / Carter, Lynn (Thesis director) / Lewis, John (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created2023-05