Matching Items (119)
Description
A defense-by-randomization framework is proposed as an effective defense mechanism against different types of adversarial attacks on neural networks. Experiments were conducted by selecting combinations of differently constructed image classification neural networks to observe which combinations, when applied to this framework, were most effective in maximizing classification accuracy. Furthermore, the reasons why particular combinations were more effective than others are explored.
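As a rough illustration of the randomization idea described above, the sketch below picks one classifier at random from a pool of differently constructed models for each query, so an adversary cannot tailor a perturbation to a single fixed network. The pool and the callable model interface are assumptions for illustration, not the thesis's implementation.

```python
import random

def randomized_predict(models, x):
    """Classify input x with a classifier sampled uniformly from the pool.

    `models` is a hypothetical list of callables (e.g., CNNs with different
    architectures or training seeds); the defense draws a fresh model per query.
    """
    model = random.choice(models)
    return model(x)

# Hypothetical usage:
# prediction = randomized_predict([cnn_a, cnn_b, cnn_c], image_tensor)
```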
Contributors: Mazboudi, Yassine Ahmad (Author) / Yang, Yezhou (Thesis director) / Ren, Yi (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Economics Program in CLAS (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
In the last decade, a large variety of algorithms have been developed for use in object tracking, environment mapping, and object classification. It is often difficult for beginners to fully predict the constraints that multirotors place on machine vision algorithms. The purpose of this paper is to explain some of the types of algorithms that can be applied to these aerial systems, why the constraints for these algorithms exist, and what could be done to mitigate them. This paper provides a summary of the processes involved in a popular filter-based tracking algorithm called MOSSE (Minimum Output Sum of Squared Error) and a particular implementation of SLAM (Simultaneous Localization and Mapping) called LSD SLAM.
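As a rough sketch of the MOSSE filter discussed above, the following NumPy code trains a correlation filter in the frequency domain from grayscale patches and locates the response peak in a new patch. The patch format and parameters are assumptions for illustration, not the implementation covered in the paper.

```python
import numpy as np

def gaussian_response(shape, sigma=2.0):
    """Desired correlation output: a Gaussian peak centered on the patch."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((ys - h // 2) ** 2 + (xs - w // 2) ** 2) / (2 * sigma ** 2))

def mosse_train(patches, sigma=2.0, eps=1e-5):
    """Solve for the filter that maps each training patch to the Gaussian peak."""
    G = np.fft.fft2(gaussian_response(patches[0].shape, sigma))
    A = np.zeros_like(G)
    B = np.zeros_like(G)
    for p in patches:
        F = np.fft.fft2(p)
        A += G * np.conj(F)   # numerator:   sum of G_i * conj(F_i)
        B += F * np.conj(F)   # denominator: sum of F_i * conj(F_i)
    return A / (B + eps)      # H* in the MOSSE formulation

def mosse_detect(H_conj, patch):
    """Correlate a new patch with the filter; the peak locates the target."""
    response = np.real(np.fft.ifft2(H_conj * np.fft.fft2(patch)))
    return np.unravel_index(np.argmax(response), response.shape)
```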
Contributors: Van Hazel, Colton (Author) / Zhang, Wenlong (Thesis director) / Yang, Yezhou (Committee member) / Engineering Programs (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Description
Propaganda bots are malicious bots on Twitter that spread divisive opinions and support political accounts. This project focuses on detecting propaganda bots on Twitter using machine learning. After observing patterns among the followers of propaganda accounts on Twitter, I determined that I could train algorithms to detect these bots. The paper describes my process of training classifiers and using them to build a user-facing server that performs prediction functions automatically. The learning goals of the project centered on learning a form of machine learning architecture: I needed to learn how to handle and maintain large datasets for training, develop a server that could execute these functions on command, and design a full-stack system covering every aspect of a user-facing server that executes predictions using the classifiers I built.
Throughout the project, I set a number of learning goals by which to judge its success. I needed to learn the supporting libraries that would help me design this system, how to use the Twitter API, and how to build the surrounding infrastructure that would allow me to collect large amounts of data for machine learning. I also needed to become familiar with common machine learning libraries in Python in order to create the algorithms and pipelines that make predictions from Twitter data.
This paper details the steps and decisions involved in collecting this data and applying it to machine learning algorithms. I determined how to create labelled data from pre-existing Botometer ratings and the confidence levels required to label data for training. I used the scikit-learn library to build classifiers that best detect these bots, and applied a number of pre-processing routines, including natural language processing and data analysis techniques, to refine their precision. I eventually moved to remotely hosted versions of the system on Amazon Web Services (AWS) instances to collect larger amounts of data and train more advanced classifiers. This led to my final implementation of a user-facing server, hosted on AWS and interfacing with Gmail's IMAP server.
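A minimal sketch of the kind of scikit-learn pipeline described above is shown below; the TF-IDF features, logistic regression classifier, and Botometer confidence thresholds are illustrative assumptions rather than the project's exact configuration.

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

BOT_THRESHOLD = 0.8     # hypothetical cutoffs on Botometer scores
HUMAN_THRESHOLD = 0.2

def label_from_botometer(score):
    """Keep only high-confidence accounts when building training labels."""
    if score >= BOT_THRESHOLD:
        return 1        # treated as a propaganda bot
    if score <= HUMAN_THRESHOLD:
        return 0        # treated as a genuine account
    return None         # ambiguous; dropped from the training set

pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(max_features=5000, stop_words="english")),
    ("clf", LogisticRegression(max_iter=1000)),
])

# Hypothetical usage with parallel lists of tweet texts and Botometer scores:
# labels = [label_from_botometer(s) for s in scores]
# pairs = [(t, y) for t, y in zip(texts, labels) if y is not None]
# pipeline.fit([t for t, _ in pairs], [y for _, y in pairs])
```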
The current and planned future development of this system is laid out, including more advanced classifiers, better data analysis, migration to third-party Twitter data collection systems, and new user features. I conclude by detailing what I have learned from this exercise and what I hope to continue working on.
Contributors: Peterson, Austin (Author) / Yang, Yezhou (Thesis director) / Sadasivam, Aadhavan (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
This project aspires to develop an AI capable of playing on a variety of maps in a Risk-like board game. While AI has been successfully applied to many other board games, such as Chess and Go, most research is confined to a single board and is inflexible to topological changes. Further, almost all of these games are played on a rectangular grid. In contrast, this project develops an AI player, referred to as GG-net, to play the online strategy game Warzone, which is based on the classic board game Risk and is played on a wide variety of irregularly shaped maps. Prior research has struggled to create an effective AI for Risk-like games due to the immense branching factor; the most successful attempts tended to rely on manually restricting the set of actions the AI considered while also engineering useful features for the AI to consider. GG-net uses no human knowledge, but rather a genetic algorithm combined with a graph neural network. Together, these methods allow GG-net to perform competitively across a multitude of maps. GG-net outperformed the built-in rule-based AI by 413 Elo (representing an 80.7% chance of winning) and an approach based on AlphaZero using graph neural networks by 304 Elo (representing a 74.2% chance of winning). This advantage holds across both seen and unseen maps. GG-net appears to be a strong opponent on small and medium maps; however, on large maps with hundreds of territories, inefficiencies in GG-net become more significant and it struggles against the rule-based approach. Overall, GG-net was able to learn the game and generalize across maps of a similar size, although further work is required for it to become more successful on large maps.
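The following sketch illustrates the evolutionary loop implied above, treating the graph neural network's weights as a flat parameter vector that is selected and mutated according to a fitness signal; the population sizes, mutation scheme, and fitness function are placeholders, not GG-net's actual training code.

```python
import numpy as np

def evolve(fitness_fn, n_params, pop_size=50, generations=100,
           elite_frac=0.2, mutation_std=0.1, seed=0):
    """Evolve a population of parameter vectors toward higher fitness."""
    rng = np.random.default_rng(seed)
    population = rng.normal(0.0, 1.0, size=(pop_size, n_params))
    n_elite = max(1, int(elite_frac * pop_size))
    for _ in range(generations):
        scores = np.array([fitness_fn(p) for p in population])
        elite = population[np.argsort(scores)[-n_elite:]]          # keep the best
        parents = elite[rng.integers(0, n_elite, size=pop_size)]   # resample elites
        population = parents + rng.normal(0.0, mutation_std, parents.shape)
        population[:n_elite] = elite                               # elitism
    final_scores = [fitness_fn(p) for p in population]
    return population[int(np.argmax(final_scores))]

# `fitness_fn` would, hypothetically, load the parameters into the graph neural
# network policy and return its win rate over a batch of simulated Warzone games.
```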
Contributors: Bauer, Andrew (Author) / Yang, Yezhou (Thesis director) / Harrison, Blake (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2022-05
Description
How can a machine be taught to understand natural language? This question is a long-standing challenge in Artificial Intelligence, and several tasks have been designed to measure progress on it. Question Answering is one such task: it evaluates a machine's ability to understand natural language by having it read a passage of text or an image and answer comprehension questions. In recent years, the development of transformer-based language models and large-scale human-annotated datasets has led to remarkable progress in the field of question answering. However, fully supervised question answering systems exhibit several weaknesses, such as poor generalization to unseen out-of-distribution domains, sensitivity to linguistic style differences in questions, and vulnerability to adversarial samples. This thesis proposes implicitly supervised question answering systems trained using knowledge acquisition from external knowledge sources and new learning methods that provide inductive biases for learning question answering. In particular, the following research projects are discussed: (1) Knowledge acquisition methods: these include semantic and abductive information retrieval for seeking missing knowledge, a method to represent unstructured text corpora as a knowledge graph, and the construction of a knowledge base for implicit commonsense reasoning. (2) Learning methods: these include Knowledge Triplet Learning, a method over knowledge graphs; Test-Time Learning, a method to generalize to an unseen out-of-distribution context; WeaQA, a method to learn visual question answering from image captions without strong supervision; WeaSel, a weakly supervised method for relative spatial reasoning; and a new paradigm for unsupervised natural language inference. These methods potentially provide a new research direction for overcoming the pitfalls of direct supervision.
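For contrast with the implicitly supervised methods above, the snippet below shows the fully supervised extractive question-answering setup using the Hugging Face transformers pipeline; the checkpoint name is an assumed public model and none of this is the thesis's own code.

```python
from transformers import pipeline

# A span-extraction QA model reads a passage and returns an answer span.
qa = pipeline("question-answering",
              model="distilbert-base-cased-distilled-squad")

result = qa(
    question="What does the machine read before answering?",
    context="A question answering system reads a passage of text or an image "
            "and answers comprehension questions about it.",
)
print(result["answer"], result["score"])
```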
Contributors: Banerjee, Pratyay (Author) / Baral, Chitta (Thesis advisor) / Yang, Yezhou (Committee member) / Blanco, Eduardo (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created: 2022
Description

Molecular pathology makes use of estimates of tumor content (tumor percentage) for pre-analytic and analytic purposes in molecular oncology testing, such as massively parallel sequencing or next-generation sequencing (NGS). Assessment of sample acceptability, accurate quantitation of variants, assessment of copy number changes (among other applications), and determination of specimen viability for testing (since many assays require a minimum tumor content to report variants at the limit of detection) may all be improved with more accurate and reproducible estimates of tumor content. Currently, tumor percentages of samples submitted for molecular testing are estimated by pathologists through visual examination of Hematoxylin and Eosin (H&E) stained tissue slides under the microscope. These estimations can be automated, expedited, and rendered more accurate by applying machine learning methods to digital whole slide images (WSI).
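One plausible patch-based formulation of the task described above is sketched below: tile the whole slide image, classify each tile as tumor or non-tumor, and report the fraction of tumor tiles as the estimated tumor percentage. The tile size and the `classify_tile` model are hypothetical placeholders, not the project's pipeline.

```python
import numpy as np

TILE = 256  # assumed tile edge length in pixels

def estimate_tumor_percentage(wsi, classify_tile):
    """wsi: H x W x 3 image array; classify_tile: callable returning 1 for tumor."""
    h, w = wsi.shape[:2]
    predictions = []
    for y in range(0, h - TILE + 1, TILE):
        for x in range(0, w - TILE + 1, TILE):
            predictions.append(classify_tile(wsi[y:y + TILE, x:x + TILE]))
    return 100.0 * float(np.mean(predictions)) if predictions else 0.0
```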

Contributors: Cirelli, Claire (Author) / Yang, Yezhou (Thesis director) / Yalim, Jason (Committee member) / Velu, Priya (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2022-05
Description
It is not merely an aggregation of static entities that a video clip carries, but also a variety of interactions and relations among these entities. Challenges remain for a video captioning system to generate natural language descriptions that focus on the prominent interest and align with latent aspects beyond direct observation. This work presents a Commonsense knowledge Anchored Video cAptioNing (CAVAN) approach. CAVAN exploits inferential commonsense knowledge to assist the training of a video captioning model with a novel paradigm for sentence-level semantic alignment. Specifically, each training caption is complemented by querying the generic knowledge atlas ATOMIC to form a commonsense-caption entailment corpus. A BERT-based language entailment model trained on this corpus then serves as a commonsense discriminator for the training of the video captioning model, penalizing the model for generating semantically misaligned captions. Extensive empirical evaluations on the MSR-VTT, V2C and VATEX datasets show that CAVAN consistently improves the quality of generations and achieves a higher keyword hit rate. Experimental results with ablations validate the effectiveness of CAVAN and reveal that the use of commonsense knowledge contributes to video caption generation.
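The sketch below shows how a BERT-style entailment model can score whether a generated caption is entailed by a commonsense premise, as in the discriminator role described above. The off-the-shelf MNLI checkpoint and the entailment label index are assumptions for illustration; CAVAN trains its own entailment model on the commonsense-caption corpus.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL = "textattack/bert-base-uncased-MNLI"   # assumed stand-in checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)

def entailment_score(commonsense_premise, generated_caption):
    """Return the model's probability that the caption is entailed by the premise."""
    inputs = tokenizer(commonsense_premise, generated_caption,
                       return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = torch.softmax(model(**inputs).logits, dim=-1)
    return probs[0, -1].item()   # assumes the last class index is "entailment"
```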
Contributors: Shao, Huiliang (Author) / Yang, Yezhou (Thesis advisor) / Jayasuriya, Suren (Committee member) / Xiao, Chaowei (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
There has been an explosion in the amount of data on the internet because of modern technology – especially image data – as a consequence of exponential growth in the number of cameras in the world, from extensive surveillance camera systems to billions of people walking around with smartphones that come with built-in cameras. With this sudden increase in the accessibility of cameras, most of the data captured by these devices ends up on the internet. Researchers soon took advantage of this data by creating large-scale datasets. However, generating a dataset – let alone a large-scale one – requires many man-hours. This work presents an algorithm that makes use of optical flow and feature matching, along with localization outputs from a Mask R-CNN, to generate large-scale vehicle datasets without much human supervision. Additionally, this work proposes a novel multi-view vehicle dataset (MVVdb) of 500 vehicles, which is also generated using the aforementioned algorithm. There are various research problems in computer vision that can leverage a multi-view dataset, e.g., 3D pose estimation and 3D object detection. A multi-view vehicle dataset can also be used for 2D-image-to-3D-shape prediction, generation of 3D vehicle models, and even more robust vehicle make and model recognition. In this work, a ResNet is trained on the multi-view vehicle dataset to perform vehicle re-identification, which is fundamentally similar to the vehicle make and model recognition problem – also showcasing the usability of the MVVdb dataset.
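As a rough sketch of the optical-flow and feature-matching building blocks mentioned above, the OpenCV calls below compute dense flow between consecutive frames and match ORB keypoints across them; the Mask R-CNN localization step is omitted, and this is illustrative rather than the thesis's pipeline.

```python
import cv2

def flow_and_matches(prev_bgr, curr_bgr):
    """Dense optical flow plus ORB feature matches between two frames."""
    prev_gray = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY)

    # Per-pixel motion vectors between the two frames.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

    # Keypoint matching to associate the same vehicle across frames.
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    return flow, matches
```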
Contributors: Guha, Anubhab (Author) / Yang, Yezhou (Thesis advisor) / Lu, Duo (Committee member) / Banerjee, Ayan (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
The need for robust verification and validation of automated vehicles (AVs) to ensure driving safety grows more urgent as increasing numbers of AVs are allowed to operate on open roads. To address this need, AV developers can present a safety case to regulators and the public that provides an evidence-based justification of their assertion that an AV is safe to operate on open roads. This work describes the development of a scenario-based testing methodology that contributes to this safety case. A high-level definition of this test selection and scoring methodology (TSSM) is first presented, along with an outline of its scope and key ideas. This is followed by a literature review that details the current state of the art in AV testing, including the driving performance metrics and equations that provide a basis for the TSSM. A chart-based method for quantifying an AV's operational design domain (ODD) and behavioral competency portfolio is then described that provides the foundation for a scenario generation and filtration process. After outlining a method for the AV to progress through increasingly robust test methods based on its current technology readiness level (TRL), the generation and filtration of two sets of scenarios by the TSSM are described: a standardized set that can be used to compare the performance of vehicles with identical ODD and behavioral competency portfolios, and a set containing high-relevance scenarios that is partially randomized to ensure test integrity. A related framework for incorporating testing on open roads is subsequently specified. An equation for an overall AV driving performance score is then defined that quantifies the aggregate performance of the AV across all generated scenarios. The TSSM continues according to an iterative process, which includes a method for exploring edge and corner scenarios, until a stopping condition is achieved. Two proofs of concept are provided: a demonstration of the ability of the TSSM to pare scenarios from a preexisting database, and an example ODD and behavioral competency portfolio specification form. Finally, this work concludes by evaluating the TSSM and its proofs of concept and outlining possible future work on the methodology.
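As a loose illustration of score aggregation across generated scenarios, the sketch below computes a relevance-weighted average of per-scenario scores; this weighting scheme is an assumption for illustration only and is not the TSSM's actual overall performance equation, which is defined in the thesis.

```python
def overall_performance_score(scenario_results):
    """scenario_results: iterable of (score, relevance_weight) pairs."""
    results = list(scenario_results)
    total_weight = sum(w for _, w in results)
    if total_weight == 0:
        return 0.0
    return sum(s * w for s, w in results) / total_weight

# Hypothetical usage:
# score = overall_performance_score([(0.92, 1.0), (0.78, 2.5), (0.85, 1.2)])
```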
Contributors: O'Malley, Gavin (Author) / Wishart, Jeffrey (Thesis advisor) / Zhao, Junfeng (Thesis advisor) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2023