This collection includes most of the ASU Theses and Dissertations from 2011 to the present. ASU Theses and Dissertations are available in downloadable PDF format; however, a small percentage of items are under embargo. Information about the dissertations/theses includes degree information, committee members, an abstract, and supporting data or media.

In addition to the electronic theses found in the ASU Digital Repository, ASU Theses and Dissertations can be found in the ASU Library Catalog.

Dissertations and Theses granted by Arizona State University are archived and made available through a joint effort of the ASU Graduate College and the ASU Libraries. For more information or questions about this collection, visit the Digital Repository ETD Library Guide or contact the ASU Graduate College at gradformat@asu.edu.

Description
LPMLN is a recent probabilistic logic programming language which combines Answer Set Programming (ASP) and Markov Logic. It is a proper extension of answer set programs that allows for reasoning about uncertainty using weighted rules under the stable model semantics, with a weight scheme adopted from Markov Logic. LPMLN has been shown to be related to several formalisms from the knowledge representation (KR) side, such as ASP and P-log, and from the statistical relational learning (SRL) side, such as Markov Logic Networks (MLN), ProbLog, and Pearl’s causal models (PCM). Formalisms like ASP, P-log, ProbLog, MLN, and PCM have all been shown to be embeddable in LPMLN, which demonstrates the expressivity of the language. Interestingly, LPMLN has also been shown to be reducible to ASP and MLN, which is not only theoretically interesting but also practically important from a computational point of view, in that the reductions yield ways to compute LPMLN programs using ASP and MLN solvers. Additionally, the reductions allow users to compute other formalisms that can be reduced to LPMLN.

This thesis realizes two implementations of LPMLN based on the reductions from LPMLN to ASP and from LPMLN to MLN. It first presents an implementation called LPMLN2ASP that uses standard ASP solvers to compute MAP inference via weak constraints, and marginal and conditional probabilities via stable model enumeration. Next, another implementation called LPMLN2MLN is presented that applies completion to the tight fragment of LPMLN programs and uses MLN solvers to compute MAP inference as well as marginal and conditional probabilities. The computation using ASP solvers yields exact inference, as opposed to the approximate inference obtained with MLN solvers. Using these implementations, the usefulness of LPMLN for computing other formalisms is demonstrated by reducing them to LPMLN. The thesis also shows how the implementations outperform the native solvers of some of these formalisms on certain domains. The implementations make use of current state-of-the-art solving technologies in ASP and MLN, and therefore benefit from any theoretical and practical advances in those technologies, thereby also benefiting the computation of other formalisms that can be reduced to LPMLN. Furthermore, the implementations allow certain SRL formalisms to be computed by ASP solvers, and certain KR formalisms to be computed by MLN solvers.
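As a rough illustration of the marginal computation that stable-model enumeration enables, the following Python sketch normalizes exp(weight) over enumerated stable models under LPMLN's log-linear scheme. It is not the LPMLN2ASP code; the models, atoms, and weights are illustrative, and the sketch assumes the solver has already returned each stable model together with the total weight of the soft rules it satisfies.

```python
import math
from typing import List, Set, Tuple

Model = Tuple[Set[str], float]  # (stable model as a set of atoms, total satisfied weight)

def probabilities(models: List[Model]) -> List[Tuple[Set[str], float]]:
    """Normalize exp(weight) over the enumerated stable models (log-linear scheme)."""
    weights = [(atoms, math.exp(w)) for atoms, w in models]
    z = sum(u for _, u in weights)  # partition function over stable models
    return [(atoms, u / z) for atoms, u in weights]

def marginal(models: List[Model], query: str) -> float:
    """Marginal probability of `query`: total probability of models containing it."""
    return sum(p for atoms, p in probabilities(models) if query in atoms)

# Toy example with two stable models of a weighted program (weights are illustrative).
models = [({"bird(jo)", "flies(jo)"}, 2.0),
          ({"bird(jo)"}, 1.0)]
print(marginal(models, "flies(jo)"))  # e^2 / (e^2 + e^1) ~= 0.73
```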
Contributors: Talsania, Samidh (Author) / Lee, Joohyung (Thesis advisor, Committee member) / Baral, Chitta (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Answer Set Programming (ASP) is one of the main formalisms in Knowledge Representation (KR) and is widely applied across a large number of applications. While ASP is effective on Boolean decision problems, it has difficulty expressing quantitative uncertainty and probability in a natural way.

Logic Programs under the answer set semantics and Markov Logic Network (LPMLN) is a recent extension of answer set programs that overcomes the deterministic nature of ASP by adopting the log-linear weight scheme of Markov Logic. This thesis investigates the relationships between LPMLN and two other extensions of ASP: weak constraints, which express a quantitative preference among answer sets, and P-log, which incorporates probabilistic uncertainty. The studied relationships show how different extensions of answer set programs are related to each other, and how they are related to formalisms in Statistical Relational Learning, such as ProbLog and MLN, which have been shown to be closely related to LPMLN. The studied relationships compare the properties of the involved languages and provide ways to compute one language using an implementation of another.

This thesis first presents a translation of LPMLN into programs with weak constraints. The translation allows for computing the most probable stable models (i.e., MAP estimates) or the probability distribution of LPMLN programs using standard ASP solvers, so that the well-developed techniques in ASP can be utilized. This result can be extended to other formalisms, such as Markov Logic, ProbLog, and Pearl’s Causal Models, which are shown to be translatable into LPMLN.
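The general shape of this translation can be sketched as follows. This is a simplified rendering that glosses over LPMLN's stable-model condition on the set of satisfied soft rules w_i : R_i; it is not the thesis's exact formulation.

```latex
% Log-linear weight scheme over (probabilistic) stable models I of \Pi:
W_\Pi(I) = \exp\Big(\sum_{w_i : R_i \in \Pi,\; I \models R_i} w_i\Big),
\qquad
P_\Pi(I) = \frac{W_\Pi(I)}{\sum_J W_\Pi(J)} .

% MAP inference maximizes the total weight of satisfied soft rules, which is
% what the weak-constraint encoding asks an ASP optimizer to do:
\operatorname*{arg\,max}_I \; P_\Pi(I)
  \;=\; \operatorname*{arg\,max}_I \sum_{w_i : R_i \in \Pi,\; I \models R_i} w_i .
```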

This thesis also presents a translation of P-log into LPMLN. The translation shows how the probabilistic nonmonotonicity of P-log (the ability of the reasoner to revise its probabilistic model as a result of new information) can be represented in LPMLN, which yields a way to compute P-log using standard ASP solvers or MLN solvers.
Contributors: Yang, Zhun (Author) / Lee, Joohyung (Thesis advisor) / Baral, Chitta (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
The omnipresent data, the growing number of network devices, and evolving attack techniques have been challenging organizations’ security defenses over the past decade. With the humongous volumes of logs generated by those network devices, looking for patterns of malicious activity and identifying them in time is growing beyond the capabilities of their defense systems. Deep Learning, a subset of Machine Learning (ML) and Artificial Intelligence (AI), fills this gap with its ability to learn from huge amounts of data and to improve its performance as the data it learns from increases. In this dissertation, I bring forward security issues pertaining to two top threats that most organizations fear, Advanced Persistent Threat (APT) and Distributed Denial of Service (DDoS), along with deep learning models built to address those security issues. First, I present a deep learning model, APT Detection, capable of detecting anomalous activities in a system. Evaluation of this model demonstrates how it can contribute to early detection of an APT attack, with an Area Under the Curve (AUC) of up to 91% on a Receiver Operating Characteristic (ROC) curve. Second, I present DAPT2020, a first-of-its-kind dataset capturing an APT attack exploiting web and system vulnerabilities in an emulated organization’s production network. Evaluation of the dataset using well-known machine learning models demonstrates the need for better deep learning models to detect APT attacks. I then present DAPT2021, a semi-synthetic dataset capturing an APT attack exploiting human vulnerabilities, alongside two less-skilled attacks. By emulating the normal behavior of the employees in a target organization, DAPT2021 has been created to enable researchers to study the causations and correlations among the captured data, much-needed information for detecting an underlying threat early. Finally, I present a distributed defense framework, SmartDefense, that can detect and mitigate over 90% of DDoS traffic at the source and over 97.5% of the remaining DDoS traffic at the Internet Service Provider’s (ISP’s) edge network. Evaluation of this work shows how, by using attributes sent by the customer edge network, SmartDefense can further help ISPs prevent up to 51.95% of the DDoS traffic from reaching the destination.
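For context on the AUC/ROC figures quoted above, the snippet below shows one standard way such an evaluation is computed from detector scores using scikit-learn; it is not the dissertation's evaluation code, and the labels and scores are illustrative.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Illustrative ground truth (1 = malicious, 0 = benign) and detector scores;
# a real evaluation would use scores produced on held-out log data.
y_true = np.array([0, 0, 1, 0, 1, 1, 0, 1])
scores = np.array([0.10, 0.30, 0.80, 0.60, 0.65, 0.90, 0.40, 0.55])

auc = roc_auc_score(y_true, scores)              # area under the ROC curve
fpr, tpr, thresholds = roc_curve(y_true, scores) # points on the ROC curve
print(f"AUC = {auc:.2f}")
```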
Contributors: Myneni, Sowmya (Author) / Xue, Guoliang (Thesis advisor) / Doupe, Adam (Committee member) / Li, Baoxin (Committee member) / Baral, Chitta (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
Artificial Intelligence, as the hottest research topic nowadays, is mostly driven by data. There is no doubt that data is king in the age of AI. However, natural high-quality data is precious and rare. In order to obtain enough eligible data to support AI tasks, data processing is almost always required. To make matters worse, data preprocessing tasks are often dull and heavy, requiring substantial human labor. Statistics show that 70%-80% of data scientists' time is spent on the data integration process. Among various reasons, schema changes that commonly occur in data warehouses are one significant obstacle that impedes the automation of the end-to-end data integration process. Traditional data integration applications rely on data processing operators such as join, union, and aggregation. Those operators are fragile and can be easily broken by schema changes. Whenever schema changes happen, the data integration applications require human labor to resolve the resulting interruptions and downtime. Industry, as well as data scientists, needs a new mechanism to handle schema changes in data integration tasks. This work proposes a new direction for data integration applications based on deep learning models. The data integration problem is defined in the scenario of integrating tabular-format data under natural schema changes, using a cell-based data abstraction. In addition, data augmentation and adversarial learning are investigated to boost the model's robustness to schema changes. The approach is tested on two real-world data integration scenarios, and the results demonstrate the effectiveness of the proposed approach.
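To make the cell-based data abstraction mentioned above concrete, the sketch below flattens tabular records into schema-agnostic (row, attribute, value) cells, so that a renamed or added column changes the cell contents rather than breaking a fixed operator pipeline. This is an illustrative Python sketch, not the thesis's implementation, and all names are hypothetical.

```python
from typing import Any, Dict, Iterable, List, Tuple

Cell = Tuple[int, str, Any]  # (row id, attribute name, value)

def to_cells(rows: Iterable[Dict[str, Any]]) -> List[Cell]:
    """Flatten tabular records into schema-agnostic cells."""
    cells: List[Cell] = []
    for row_id, row in enumerate(rows):
        for attribute, value in row.items():
            cells.append((row_id, attribute, value))
    return cells

# Two snapshots of the "same" table before and after a schema change
# (a column is renamed and a new column appears); both flatten uniformly.
v1 = [{"cust_name": "Ada", "amount": 10}]
v2 = [{"customer": "Ada", "amount": 10, "channel": "web"}]
print(to_cells(v1))  # [(0, 'cust_name', 'Ada'), (0, 'amount', 10)]
print(to_cells(v2))  # [(0, 'customer', 'Ada'), (0, 'amount', 10), (0, 'channel', 'web')]
```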
Contributors: Wang, Zijie (Author) / Zou, Jia (Thesis advisor) / Baral, Chitta (Committee member) / Candan, K. Selcuk (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
Enabling robots to physically engage with their environment in a safe and efficient manner is an essential step towards human-robot interaction. To date, robots usually operate as pre-programmed workers that blindly execute tasks in highly structured environments crafted by skilled engineers. Changing the robots’ behavior to cover new duties or handle variability is an expensive, complex, and time-consuming process. However, with the advent of more complex sensors and algorithms, overcoming these limitations comes within reach. This work proposes innovations in artificial intelligence, language understanding, and multimodal integration to enable next-generation grasping and manipulation capabilities in autonomous robots. The underlying thesis is that multimodal observations and instructions can drastically expand the responsiveness and dexterity of robot manipulators. Natural language, in particular, can be used to enable intuitive, bidirectional communication between a human user and the machine. To this end, this work presents a system that learns context-aware robot control policies from multimodal human demonstrations. Among the main contributions presented are techniques for (a) collecting demonstrations in an efficient and intuitive fashion, (b) methods for leveraging physical contact with the environment and objects, (c) the incorporation of natural language to understand context, and (d) the generation of robust robot control policies. The presented approach and systems are evaluated in multiple grasping and manipulation settings ranging from dexterous manipulation to pick-and-place, as well as contact-rich bimanual insertion tasks. Moreover, the usability of these innovations, especially when utilizing human task demonstrations and communication interfaces, is evaluated in several human-subject studies.
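As a minimal sketch of what learning control policies from multimodal human demonstrations can look like in code, the PyTorch snippet below performs one behavior-cloning step on a policy that fuses visual, language, and proprioceptive features. It is an illustrative sketch only; the architecture, feature dimensions, and training details are assumptions, not the system presented in the dissertation.

```python
import torch
import torch.nn as nn

class MultimodalPolicy(nn.Module):
    """Illustrative policy: fuse image, language, and robot-state features
    and regress the next action (behavior cloning from demonstrations)."""
    def __init__(self, img_dim=512, lang_dim=256, state_dim=7, action_dim=7):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(img_dim + lang_dim + state_dim, 256), nn.ReLU(),
            nn.Linear(256, action_dim),
        )

    def forward(self, img_feat, lang_feat, state):
        return self.net(torch.cat([img_feat, lang_feat, state], dim=-1))

policy = MultimodalPolicy()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)

# One training step on a (synthetic) batch of 32 demonstration frames.
img_feat, lang_feat = torch.randn(32, 512), torch.randn(32, 256)
state, expert_action = torch.randn(32, 7), torch.randn(32, 7)
loss = nn.functional.mse_loss(policy(img_feat, lang_feat, state), expert_action)
optimizer.zero_grad(); loss.backward(); optimizer.step()
```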
Contributors: Stepputtis, Simon (Author) / Ben Amor, Heni (Thesis advisor) / Baral, Chitta (Committee member) / Yang, Yezhou (Committee member) / Lee, Stefan (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
Multimodal reasoning is one of the most interesting research fields because of the ability to interact with systems and the explainability of the models' behavior. Traditional multimodal research problems do not focus on complex commonsense reasoning (such as physical interactions). Although real-world objects have physical properties associated with them, many of these properties (such as mass and coefficient of friction) are not captured directly by the imaging pipeline. Videos often capture objects, their motion, and the interactions between different objects, and these properties can be estimated by utilizing cues from relative object motion and the dynamics introduced by collisions. This thesis introduces a new video question-answering task for reasoning about the implicit physical properties of objects in a scene from videos. For this task, I introduce a dataset, CRIPP-VQA (Counterfactual Reasoning about Implicit Physical Properties - Video Question Answering), which contains videos of objects in motion, annotated with hypothetical/counterfactual questions about the effect of actions (such as removing, adding, or replacing objects), questions about planning (choosing actions to perform to reach a particular goal), as well as descriptive questions about the visible properties of objects. Further, I benchmark the performance of existing video question-answering models on two test settings of CRIPP-VQA: an i.i.d. setting and an out-of-distribution setting that contains objects with values of mass, coefficient of friction, and initial velocity not seen in the training distribution. Experiments reveal a surprising and significant performance gap between answering questions about implicit properties (the focus of this thesis) and explicit properties (the focus of prior work) of objects.
Contributors: Patel, Maitreya Jitendra (Author) / Yang, Yezhou (Thesis advisor) / Baral, Chitta (Committee member) / Lee, Kookjin (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
T-cells are an integral component of the immune system, enabling the body to distinguish between pathogens and the self. The primary mechanism that enables this is their T-cell receptors (TCRs), which bind to antigen epitopes foreign to the body. This detection mechanism allows the T-cell to determine when an immune response is necessary. The computational prediction of TCR-epitope binding is important to researchers both for medical applications and for furthering their understanding of the biological mechanisms that impact immunity. Models that have been developed for this purpose fail to account for the interrelationships between amino acids and demonstrate poor out-of-sample performance. Small changes to the amino acids in these protein sequences can drastically change their structure and function. In recent years, attention-based deep learning models have shown success in their ability to learn rich contextual representations of data. To capture the contextual biological relationships between the amino acids, a multi-head self-attention model was created to predict the binding affinity between given TCR and epitope sequences. By learning the structural nuances of the sequences, this model is able to improve upon existing models and grant insights into the underlying mechanisms that impact binding.
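The snippet below is a stripped-down PyTorch sketch of the general idea described above: embed the amino acids of a concatenated TCR-epitope pair, apply multi-head self-attention over the residues, and predict a binding probability. It is not the thesis's model; the vocabulary size, dimensions, and pooling choice are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class BindingClassifier(nn.Module):
    """Illustrative sketch: self-attention over a concatenated TCR-epitope
    amino-acid sequence, followed by a binary binding prediction."""
    def __init__(self, vocab_size=21, d_model=64, n_heads=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model, padding_idx=0)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.classifier = nn.Linear(d_model, 1)

    def forward(self, tokens):              # tokens: (batch, seq_len) residue ids
        x = self.embed(tokens)
        x, _ = self.attn(x, x, x)           # contextual relationships between residues
        pooled = x.mean(dim=1)              # simple mean pooling over the sequence
        return torch.sigmoid(self.classifier(pooled)).squeeze(-1)

model = BindingClassifier()
pairs = torch.randint(1, 21, (8, 40))       # 8 synthetic concatenated sequence pairs
print(model(pairs).shape)                   # torch.Size([8]) binding probabilities
```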
Contributors: Cai, Michael Ray (Author) / Lee, Heewook (Thesis advisor) / Bang, Seojin (Committee member) / Baral, Chitta (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
With the rise in social media usage and rapid communication, the proliferation of misinformation and fake news has become a pressing concern. The detection of multimodal fake news requires careful consideration of both image and textual semantics, with proper alignment of the embedding space. Automated fake news detection has gained significant attention in recent years. Existing research has focused on either capturing cross-modal inconsistency information or leveraging the complementary information within image-text pairs. However, the potential of powerful cross-modal contrastive learning methods and effective modality mixing remains an open question. This thesis proposes a novel two-leg, single-tower architecture equipped with self-attention mechanisms and a custom contrastive loss to efficiently aggregate multimodal features. Furthermore, pretraining and fine-tuning are employed on the custom transformer model to classify fake news on the popular Twitter multimodal fake news dataset. The experimental results demonstrate the efficacy and robustness of the proposed approach, offering promising advancements in multimodal fake news detection research.
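As background on the contrastive component mentioned above, the snippet below sketches a standard symmetric InfoNCE-style image-text contrastive loss in PyTorch. The thesis uses its own custom contrastive loss and a two-leg single-tower architecture, neither of which is reproduced here; this is only a generic illustration of the cross-modal contrastive learning idea, with illustrative dimensions.

```python
import torch
import torch.nn.functional as F

def infonce_loss(img_emb: torch.Tensor, txt_emb: torch.Tensor, temperature: float = 0.07):
    """Symmetric contrastive loss: matched image-text pairs (the diagonal of the
    similarity matrix) are pulled together, mismatched pairs are pushed apart."""
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature          # (batch, batch) similarities
    targets = torch.arange(logits.size(0), device=logits.device)
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2

# Toy batch of 16 paired image and text embeddings.
loss = infonce_loss(torch.randn(16, 256), torch.randn(16, 256))
```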
Contributors: Lakhanpal, Sanyam (Author) / Lee, Kookjin (Thesis advisor) / Baral, Chitta (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
One of the challenges in Artificial Intelligence (AI) is to integrate fast, automatic, and intuitive System-1 thinking with slow, deliberate, and logical System-2 thinking. While deep learning approaches excel at perception tasks for System-1, their reasoning capabilities for System-2 are limited. Besides, deep learning approaches are usually data-hungry, have difficulty making use of explicit knowledge, and struggle with interpretability and justification. This dissertation presents three neuro-symbolic AI approaches that integrate neural networks (NNs) with symbolic AI methods to address these issues. The first approach presented in this dissertation is NeurASP, which combines NNs with Answer Set Programming (ASP), a logic programming formalism. NeurASP provides an effective way to integrate sub-symbolic and symbolic computation by treating NN outputs as probability distributions over atomic facts in ASP. The explicit knowledge encoded in ASP corrects mistakes in NN outputs and allows for better training with less data. To avoid NeurASP's bottleneck in symbolic computation, this dissertation presents Constraint Loss via Straight-Through Estimators (CL-STE). CL-STE provides a systematic way to compile discrete logical constraints into a loss function over discretized NN outputs and scales significantly better than state-of-the-art neuro-symbolic methods. This dissertation also presents a finding from applying CL-STE to Transformers: Transformers can be extended with recurrence to enhance their power for multi-step reasoning, and such a Recurrent Transformer can be applied straightforwardly to visual constraint reasoning problems while successfully addressing the symbol grounding problem. Lastly, this dissertation addresses the limitation of pre-trained Large Language Models (LLMs) on multi-step logical reasoning problems with a dual-process neuro-symbolic reasoning system called LLM+ASP, in which an LLM (e.g., GPT-3) serves as a highly effective few-shot semantic parser that turns natural language sentences into a logical form that can be used as input to ASP. LLM+ASP achieves state-of-the-art performance on several textual reasoning benchmarks and can handle robot planning tasks that an LLM alone fails to solve.
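The core trick behind the straight-through estimator, hard discrete decisions in the forward pass with identity gradients in the backward pass, can be sketched in a few lines of PyTorch. The constraint used below ("exactly one of two atoms holds") and all tensor shapes are illustrative assumptions, not the dissertation's CL-STE implementation.

```python
import torch

def straight_through_binarize(probs: torch.Tensor) -> torch.Tensor:
    """Forward: hard 0/1 decisions. Backward: identity gradient through `probs`
    (the straight-through estimator)."""
    hard = (probs > 0.5).float()
    return hard + probs - probs.detach()

logits = torch.randn(4, 2, requires_grad=True)         # NN outputs for two atoms per example
hard = straight_through_binarize(torch.sigmoid(logits))

# Illustrative constraint loss: penalize violations of "exactly one atom is true",
# evaluated on the discretized outputs but still differentiable w.r.t. the logits.
loss = ((hard.sum(dim=1) - 1.0) ** 2).mean()
loss.backward()
print(logits.grad is not None)                          # True: gradients flow through
```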
Contributors: Yang, Zhun (Author) / Lee, Joohyung (Thesis advisor) / Baral, Chitta (Committee member) / Li, Baoxin (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
An important objective of AI is to understand real-world observations and build up interactive communication with people. The ability to interpret and react to perception highlights the necessity of developing such systems across both the Vision (V) and Language (L) modalities. Although there have been massive efforts on various VL tasks, e.g., Image/Video Captioning, Visual Question Answering, and Textual Grounding, very few of them focus on building VL models with increased efficiency under real-world scenarios. The main focus of this dissertation is to comprehensively investigate the largely uncharted area of efficient VL learning, aiming to build lightweight, data-efficient, and real-world-applicable VL models. The studies proposed in this dissertation take three primary aspects into account when it comes to efficient VL. 1) Data Efficiency: collecting task-specific annotations is prohibitively expensive, so manual labeling is not always attainable. Techniques are developed to assist VL learning from implicit supervision, i.e., in a weakly-supervised fashion. 2) Continuing from that, efficient representation learning is further explored with increased scalability, leveraging a large image-text corpus without task-specific annotations. In particular, the knowledge distillation technique is studied for generic representation learning and proves to bring substantial performance gains over the regular representation learning schema. 3) Architectural Efficiency: deploying VL models on edge devices is notoriously challenging due to their cumbersome architectures. To further extend these advancements to the real world, a novel efficient VL architecture is designed to tackle the inference bottleneck and the inconvenient two-stage training. Extensive discussions have been conducted on several critical aspects that prominently influence the performance of compact VL models.
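As a pointer to the knowledge distillation technique mentioned under the second aspect, the snippet below shows the standard temperature-scaled distillation objective in PyTorch: the student matches the teacher's softened predictions. It is a generic sketch, not the dissertation's distillation setup; the temperature and logit shapes are illustrative.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 4.0) -> torch.Tensor:
    """KL divergence between temperature-softened teacher and student predictions."""
    t = temperature
    soft_teacher = F.softmax(teacher_logits / t, dim=-1)
    log_student = F.log_softmax(student_logits / t, dim=-1)
    # The t*t factor keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * (t * t)

# Toy batch: 8 examples, 100-way predictions from a teacher and a student.
loss = distillation_loss(torch.randn(8, 100), torch.randn(8, 100))
```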
Contributors: Fang, Zhiyuan (Author) / Yang, Yezhou (Thesis advisor) / Baral, Chitta (Committee member) / Liu, Huan (Committee member) / Liu, Zicheng (Committee member) / Arizona State University (Publisher)
Created: 2022