Matching Items (61)

Description

Intelligent transportation systems (ITS) are a boon to modern-day road infrastructure. They support traffic monitoring, road safety improvement, congestion reduction, and other traffic management tasks. For an ITS, roadside perception with cameras, LiDAR, and radar sensors is key. Among various roadside perception technologies, vehicle keypoint detection is a fundamental problem, which involves detecting and localizing specific points on a vehicle, such as the headlights, wheels, and taillights. These keypoints can be used to track the movement and orientation of vehicles. However, vehicle keypoint detection faces several challenges, such as variation in vehicle models and shapes, occlusion in traffic scenarios, and the influence of weather and changing lighting conditions. More importantly, existing traffic perception datasets for keypoint detection are mainly limited to the frontal view, with sensors mounted on the ego vehicles. These datasets are not designed for traffic monitoring cameras mounted on roadside poles. Roadside cameras offer a significant advantage because they cover a much larger distance with a wider field of view across many different traffic scenes, but such a dataset is usually expensive to construct. In this research, I present SKOPE3D: Synthetic Keypoint Perception 3D dataset, a one-of-a-kind synthetic perception dataset generated using a simulator from the roadside perspective. It provides 2D bounding boxes, 3D bounding boxes, tracking IDs, and 33 keypoints for each vehicle in the scene. The dataset consists of 25K frames spanning 28 scenes with over 150K vehicles and 4.9M keypoints. A baseline Keypoint R-CNN model is trained on the dataset and thoroughly evaluated on the test set. The experiments demonstrate the utility of the synthetic dataset and the transferability of knowledge between synthetic and real-world data.
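
To make the baseline concrete, here is a minimal, hedged sketch of a Keypoint R-CNN setup for 33 vehicle keypoints using recent torchvision; it is not the thesis code, and the image size, bounding box, and keypoint values are placeholders standing in for SKOPE3D annotations, which are assumed to provide boxes, labels, and (x, y, visibility) keypoints per vehicle.

```python
# Illustrative sketch only: a Keypoint R-CNN baseline configured for 33 vehicle
# keypoints, using recent torchvision (>= 0.13). Not the thesis implementation.
import torch
import torchvision

def build_vehicle_keypoint_model(num_keypoints: int = 33):
    # Two classes: background + vehicle; no pretrained weights are downloaded.
    return torchvision.models.detection.keypointrcnn_resnet50_fpn(
        weights=None, weights_backbone=None,
        num_classes=2, num_keypoints=num_keypoints,
    )

model = build_vehicle_keypoint_model()
model.train()

# One synthetic training example in the target format torchvision expects.
image = torch.rand(3, 720, 1280)
kp_xy = torch.rand(1, 33, 2) * 300.0 + torch.tensor([100.0, 200.0])  # inside the box
kp_vis = torch.ones(1, 33, 1)                                        # all visible
target = {
    "boxes": torch.tensor([[100.0, 200.0, 400.0, 500.0]]),  # [x1, y1, x2, y2]
    "labels": torch.tensor([1]),                             # vehicle class
    "keypoints": torch.cat([kp_xy, kp_vis], dim=2),          # (x, y, visibility)
}
losses = model([image], [target])    # dict of RPN, box, and keypoint losses
print(sum(losses.values()))
```

In practice, a dataset class would yield real frames and annotations in this same target format.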
Contributors: Pahadia, Himanshu (Author) / Yang, Yezhou (Thesis advisor) / Lu, Duo (Committee member) / Farhadi Bajestani, Mohammad (Committee member) / Arizona State University (Publisher)
Created: 2023
Description

In recent years, there has been a growing emphasis on developing automated systems to enhance traffic safety, particularly in the detection of dilemma zones (DZ) at intersections. This study focuses on the automated detection of DZs at roundabouts using trajectory forecasting, presenting an advanced system with perception capabilities. The system utilizes a modular, graph-structured recurrent model that predicts the trajectories of various agents, accounting for agent dynamics and incorporating heterogeneous data such as semantic maps. This enables the system to facilitate traffic management decision-making and improve overall intersection safety. To assess the system's performance, a real-world dataset of traffic roundabout intersections was employed. The experimental results demonstrate that our Superpowered Trajectron++ system exhibits high accuracy in detecting DZ events, with a false positive rate of approximately 10%. Furthermore, the system has the remarkable ability to anticipate and identify dilemma events before they occur, enabling it to provide timely instructions to vehicles. These instructions serve as guidance, determining whether vehicles should come to a halt or continue moving through the intersection, thereby enhancing safety and minimizing potential conflicts. In summary, the development of automated systems for detecting DZs represents an important advancement in traffic safety. The Superpowered Trajectron++ system, with its trajectory forecasting capabilities and incorporation of diverse data sources, showcases improved accuracy in identifying DZ events and can effectively guide vehicles in making informed decisions at roundabout intersections.
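
As a loose illustration only (not the Superpowered Trajectron++ implementation), the sketch below shows one way sampled trajectory forecasts could be turned into a dilemma-zone decision: flag a DZ event when enough forecast samples place the vehicle's predicted arrival at the yield line inside an assumed time window, then issue a stop/go advisory. The DZ window, yield-line location, sampling step, and all helper names are assumptions made for this example.

```python
# Hedged toy example of turning trajectory forecasts into a DZ decision.
import numpy as np

DZ_TIME_WINDOW = (2.5, 5.5)   # seconds-to-yield-line range treated as a DZ (assumed)

def time_to_yield_line(traj_xy, yield_line_y, dt=0.5):
    """Return the first predicted time (s) at which the trajectory crosses the
    yield line, or None if it never does within the horizon."""
    for step, (_, y) in enumerate(traj_xy):
        if y >= yield_line_y:
            return step * dt
    return None

def classify_dz(sampled_trajs, yield_line_y, dt=0.5, threshold=0.5):
    """Flag a DZ event if enough forecast samples put the vehicle in the DZ window."""
    times = [time_to_yield_line(t, yield_line_y, dt) for t in sampled_trajs]
    in_dz = [t is not None and DZ_TIME_WINDOW[0] <= t <= DZ_TIME_WINDOW[1] for t in times]
    dz_probability = np.mean(in_dz)
    advice = "STOP" if dz_probability >= threshold else "GO"
    return dz_probability, advice

# Toy forecast: 20 sampled futures, 12 steps each, vehicle moving toward y = 30 m.
rng = np.random.default_rng(0)
samples = rng.normal(0.0, 0.5, size=(20, 12, 2)) + np.stack(
    [np.zeros(12), np.linspace(0.0, 33.0, 12)], axis=-1
)
print(classify_dz(samples, yield_line_y=30.0))
```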
Contributors: Chelenahalli Satish, Manthan (Author) / Yang, Yezhou (Thesis advisor) / Lu, Duo (Committee member) / Farhadi, Mohammad (Committee member) / Arizona State University (Publisher)
Created: 2023
Description

With the rise in social media usage and rapid communication, the proliferation of misinformation and fake news has become a pressing concern. The detection of multimodal fake news requires careful consideration of both image and textual semantics with proper alignment of the embedding space. Automated fake news detection has gained significant attention in recent years. Existing research has focused on either capturing cross-modal inconsistency information or leveraging the complementary information within image-text pairs. However, the potential of powerful cross-modal contrastive learning methods and effective modality mixing remains an open question. This thesis proposes a novel two-leg single-tower architecture equipped with self-attention mechanisms and a custom contrastive loss to efficiently aggregate multimodal features. Furthermore, pretraining and fine-tuning are employed on the custom transformer model to classify fake news on the popular Twitter multimodal fake news dataset. The experimental results demonstrate the efficacy and robustness of the proposed approach, offering promising advancements in multimodal fake news detection research.
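
For orientation, the snippet below implements a generic symmetric InfoNCE-style image-text contrastive loss of the kind such cross-modal contrastive methods build on; it is a minimal sketch assuming paired, equal-dimension embeddings, not the custom loss proposed in the thesis.

```python
# Generic symmetric image-text contrastive loss (InfoNCE-style), for illustration.
import torch
import torch.nn.functional as F

def image_text_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """img_emb, txt_emb: [batch, dim] embeddings of matched image-text pairs."""
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature        # pairwise similarities
    targets = torch.arange(img_emb.size(0), device=img_emb.device)
    # Matched pairs sit on the diagonal; pull them together, push the rest apart.
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return 0.5 * (loss_i2t + loss_t2i)

loss = image_text_contrastive_loss(torch.randn(8, 256), torch.randn(8, 256))
print(loss.item())
```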
Contributors: Lakhanpal, Sanyam (Author) / Lee, Kookjin (Thesis advisor) / Baral, Chitta (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2023
Description

The increasing availability of data and advances in computation have spurred the development of data-driven approaches for modeling complex dynamical systems. These approaches are based on the idea that the underlying structure of a complex system can be discovered from data using mathematical and computational techniques. They also show promise for addressing the challenges of modeling high-dimensional, nonlinear systems with limited data. In this research expository, the state of the art in data-driven approaches for modeling complex dynamical systems is surveyed in a systematic way. First, the general formulation of data-driven modeling of dynamical systems is discussed. Then, several representative methods in feature engineering and system identification/prediction are reviewed, including recent advances and key challenges.
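
As one small, hedged illustration of the system-identification theme (not any particular method covered in the expository), the sketch below recovers the dynamics of a simulated damped oscillator from trajectory data by regressing numerical derivatives onto a library of candidate features and thresholding small coefficients, in the spirit of sparse-regression identification such as SINDy.

```python
# Toy data-driven identification of x_dot = f(x) from simulated trajectory data.
import numpy as np

# Simulate a damped linear oscillator x_dot = A x to generate "measurement" data.
A = np.array([[0.0, 1.0], [-2.0, -0.3]])
dt, steps = 0.01, 2000
X = np.zeros((steps, 2))
X[0] = [1.0, 0.0]
for k in range(steps - 1):
    X[k + 1] = X[k] + dt * (A @ X[k])

X_dot = np.gradient(X, dt, axis=0)           # numerical derivatives from data

# Candidate feature library: [1, x1, x2, x1^2, x1*x2, x2^2]
x1, x2 = X[:, 0], X[:, 1]
Theta = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x1 * x2, x2**2])

# Least-squares fit, then threshold small coefficients to promote sparsity.
Xi, *_ = np.linalg.lstsq(Theta, X_dot, rcond=None)
Xi[np.abs(Xi) < 0.05] = 0.0
print(Xi)   # the linear-term rows approximately recover A (transposed)
```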
Contributors: Shi, Wenlong (Author) / Ren, Yi (Thesis advisor) / Hong, Qijun (Committee member) / Jiao, Yang (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2022
Description

In this thesis, I present two new datasets and a modification to the existing models in the form of a novel attention mechanism for Natural Language Inference (NLI). The new datasets have been carefully synthesized from various existing corpora released for different tasks.

The task of NLI is to determine whether a sentence, referred to as the “Hypothesis”, can be true given that another sentence, referred to as the “Premise”, is true. In other words, the task is to identify whether the “Premise” entails, contradicts or remains neutral with regard to the “Hypothesis”. NLI is a precursor to solving many Natural Language Processing (NLP) tasks such as Question Answering and Semantic Search. For example, in Question Answering systems, the question is paraphrased to form a declarative statement which is treated as the hypothesis, the options are treated as the premise, and the option with the maximum entailment score is considered the answer. Given these applications, the importance of having a strong NLI system cannot be overstated.

Many large-scale datasets and models have been released in order to advance the field of NLI. While all of these models achieve good accuracy on the test sets of the datasets they were trained on, they fail to capture a basic understanding of “Entities” and “Roles”. They often make the mistake of inferring “John went to the market.” from “Peter went to the market.”, failing to capture the notion of “Entities”. In other cases, these models do not understand the difference in the “Roles” played by the same entities in the “Premise” and “Hypothesis” sentences and end up wrongly inferring “Peter drove John to the stadium.” from “John drove Peter to the stadium.”

The lack of understanding of “Roles” can be attributed to the lack of such examples in the existing datasets. The existing models’ failure to capture the notion of “Entities”, however, is not due only to the lack of such examples in the existing NLI datasets; it can also be attributed to the strict use of vector similarity in the “word-to-word” attention mechanism used in the existing architectures.

To overcome these issues, I present two new datasets to help NLI systems capture the notions of “Entities” and “Roles”. The “NER Changed” (NC) dataset and the “Role-Switched” (RS) dataset contain examples of Premise-Hypothesis pairs that require an understanding of “Entities” and “Roles”, respectively, in order to make correct inferences. This work shows how the existing architectures perform poorly on the “NER Changed” (NC) dataset even after being trained on the new datasets. In order to help the existing architectures understand the notion of “Entities”, this work proposes a modification to the “word-to-word” attention mechanism. Instead of relying on vector similarity alone, the modified architectures learn to incorporate “Symbolic Similarity” as well by using the Named-Entity features of the Premise and Hypothesis sentences. The new modified architectures not only perform significantly better than the unmodified architectures on the “NER Changed” (NC) dataset but also perform just as well on the existing datasets.
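
The following simplified sketch illustrates the general idea of mixing vector similarity with a symbolic, named-entity-based term inside word-to-word attention; the specific weighting and the form of the symbolic term here are assumptions for illustration and do not reproduce the thesis architecture.

```python
# Word-to-word attention that mixes vector similarity with a symbolic NER term.
import torch
import torch.nn.functional as F

def attention_with_symbolic_similarity(prem_vecs, hyp_vecs,
                                       prem_entities, hyp_entities,
                                       alpha=1.0, beta=2.0):
    """prem_vecs: [P, d], hyp_vecs: [H, d]; *_entities: lists of entity strings
    (or None for non-entity tokens), aligned with the token vectors."""
    # Standard vector-similarity term (dot product).
    vector_sim = hyp_vecs @ prem_vecs.t()                       # [H, P]

    # Symbolic term: +1 when both tokens are the same entity mention,
    # -1 when both are entities but different ones, 0 otherwise.
    symbolic = torch.zeros_like(vector_sim)
    for i, h_ent in enumerate(hyp_entities):
        for j, p_ent in enumerate(prem_entities):
            if h_ent is not None and p_ent is not None:
                symbolic[i, j] = 1.0 if h_ent == p_ent else -1.0

    scores = alpha * vector_sim + beta * symbolic
    return F.softmax(scores, dim=-1)                            # [H, P] attention

# Toy example: premise about "Peter" vs hypothesis about "John".
prem = torch.randn(4, 16)
hyp = torch.randn(4, 16)
attn = attention_with_symbolic_similarity(
    prem, hyp,
    prem_entities=["Peter", None, None, None],
    hyp_entities=["John", None, None, None],
)
print(attn.shape)
```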
Contributors: Shrivastava, Ishan (Author) / Baral, Chitta (Thesis advisor) / Anwar, Saadat (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2019
Description

Autonomous systems that are out in the real world today deal with a slew of different data modalities to perform effectively in tasks ranging from robot navigation in complex maneuverable robots to identity verification in simpler static systems. The performance of the system depends heavily on the continuous supply of data from all modalities. These systems can face drastically increased risk with the loss of one or multiple modalities due to adverse scenarios such as hardware malfunction or inimical environmental conditions. This thesis investigates modality hallucination and its efficacy in mitigating the risks posed to the autonomous system. Modality hallucination is proposed as one effective way to ensure consistent modality availability and thereby reduce unfavorable consequences. While there has been a significant research effort in high-to-low dimensional modality hallucination, such as RGB to depth, there is considerably less interest in the other direction (low-to-high dimensional modality prediction). This thesis serves to demonstrate the effectiveness of this low-to-high modality hallucination in reducing the uncertainty in the affected system while also ensuring that the method remains task agnostic.

A deep neural network based encoder-decoder architecture is presented, with evidence of its efficacy, that aggregates multiple fields of view in its encoder blocks to recover the lost information of the affected modality from the extant modality. The hallucination process is implemented by capturing a non-linear mapping between the data modalities, and the learned mapping is used to aid the extant modality in mitigating the risk posed to the system in adverse scenarios that involve modality loss. The results are compared with a well-known generative model built for the task of image translation, as well as an off-the-shelf semantic segmentation architecture re-purposed for hallucination. To validate the practicality of the hallucinated modality, extensive classification and segmentation experiments are conducted on the University of Washington depth image database (UWRGBD) and the New York University database (NYUD), and they demonstrate that hallucination indeed lessens the negative effects of modality loss.
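
A schematic, hedged sketch of such a low-to-high hallucination network is shown below: an encoder-decoder that maps a single-channel depth map to a three-channel RGB estimate, with encoder blocks that aggregate several receptive fields through parallel dilated convolutions. The layer sizes, depth-to-RGB direction, and aggregation scheme are illustrative assumptions rather than the thesis architecture.

```python
# Schematic encoder-decoder for low-to-high modality hallucination (depth -> RGB).
import torch
import torch.nn as nn

class MultiFOVBlock(nn.Module):
    """Aggregate several fields of view with parallel dilated convolutions."""
    def __init__(self, in_ch, out_ch, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(in_ch, out_ch, 3, padding=d, dilation=d) for d in dilations]
        )
        self.fuse = nn.Conv2d(out_ch * len(dilations), out_ch, 1)

    def forward(self, x):
        return torch.relu(self.fuse(torch.cat([b(x) for b in self.branches], dim=1)))

class DepthToRGBHallucinator(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            MultiFOVBlock(1, 32), nn.MaxPool2d(2),
            MultiFOVBlock(32, 64), nn.MaxPool2d(2),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 2, stride=2), nn.Sigmoid(),
        )

    def forward(self, depth):
        return self.decoder(self.encoder(depth))

model = DepthToRGBHallucinator()
fake_depth = torch.rand(2, 1, 64, 64)
print(model(fake_depth).shape)   # torch.Size([2, 3, 64, 64])
```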
Contributors: Gunasekar, Kausic (Author) / Yang, Yezhou (Thesis advisor) / Qiu, Qiang (Committee member) / Amor, Heni Ben (Committee member) / Arizona State University (Publisher)
Created: 2019
Description

Visual navigation is a multi-disciplinary field across computer vision, machine learning and robotics. It is of great significance in both research and industrial applications. An intelligent agent with visual navigation ability will be capable of performing the following tasks: actively exploring environments, distinguishing and localizing a requested target, and approaching the target following acquired strategies. Despite a variety of advances in mobile robotics, enabling an autonomous agent with the above-mentioned abilities is still a challenging and complex task. However, a solution to this task is very likely to accelerate the deployment of assistive robots.

Reinforcement learning is a method that trains an autonomous robot by rewarding desired behaviors, helping it obtain an action policy that maximizes rewards while the robot interacts with the environment. Through trial and error, an agent learns sophisticated and skillful strategies to handle complex tasks in the environment. Humans navigating through environments reason extensively about accessible spaces and the geometry of the environment from a first-person view, figure out the destination, and then move toward it. Inspired by this procedure, this work develops a model that maps from pixels to actions and inherently estimates the target as well as the free-space map. The model has three major constituents: (i) a cognitive mapper that builds a topological free-space map from first-person-view images, (ii) a target recognition network that locates a desired object and (iii) an action policy deep reinforcement learning network. Further, a planner model with a cascade architecture based on multi-scale semantic top-down occupancy map input is proposed.
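
As a toy, hedged sketch of the third constituent only, the snippet below shows a small policy network that fuses an encoded top-down free-space/occupancy map with a target embedding and outputs discrete navigation action logits; the input sizes, action set, and fusion scheme are assumptions for illustration, not the trained model from this work.

```python
# Toy navigation policy: top-down map features + target embedding -> action logits.
import torch
import torch.nn as nn

ACTIONS = ["forward", "turn_left", "turn_right", "stop"]   # assumed action set

class NavigationPolicy(nn.Module):
    def __init__(self, map_channels=2, target_dim=32, num_actions=len(ACTIONS)):
        super().__init__()
        self.map_encoder = nn.Sequential(
            nn.Conv2d(map_channels, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Sequential(
            nn.Linear(32 + target_dim, 64), nn.ReLU(),
            nn.Linear(64, num_actions),
        )

    def forward(self, topdown_map, target_embedding):
        fused = torch.cat([self.map_encoder(topdown_map), target_embedding], dim=1)
        return self.head(fused)                      # action logits

policy = NavigationPolicy()
logits = policy(torch.rand(1, 2, 64, 64), torch.rand(1, 32))
print(ACTIONS[int(logits.argmax(dim=1))])
```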
Contributors: Zheng, Shibin (Author) / Yang, Yezhou (Thesis advisor) / Zhang, Wenlong (Committee member) / Ren, Yi (Committee member) / Arizona State University (Publisher)
Created: 2019
Description

Question answering is a challenging problem and a long-term goal of Artificial Intelligence. Many approaches have been proposed to solve this problem, including end-to-end machine learning systems, Information Retrieval based approaches, and Textual Entailment. Despite being popular, these methods find difficulty in solving problems that require multi-level reasoning and combining independent pieces of knowledge. For example, a question like "What adaptation is necessary in intertidal ecosystems but not in reef ecosystems?" requires the system to consider the qualities, behavior, or features of an organism living in an intertidal ecosystem and compare them with those of an organism in a reef ecosystem to find the answer. The proposed solution addresses a genre of questions based on "Adaptation, Variation and Behavior in Organisms", where various independent sets of knowledge are required for answering questions along with reasoning. This method is implemented using Answer Set Programming and Natural Language Inference (which is based on machine learning) to find which of the given options is most probable to be the answer by matching it with the knowledge base. To evaluate this approach, a dataset of questions and a knowledge base in the domain of "Adaptation, Variation and Behavior in Organisms" is created.
Contributors: Batni, Vaishnavi (Author) / Baral, Chitta (Thesis advisor) / Anwar, Saadat (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2019
Description

Artificial general intelligence consists of many components, one of which is Natural Language Understanding (NLU). One of the applications of NLU is Reading Comprehension, where a system is expected to understand all aspects of a text. Furthermore, understanding natural procedure-describing text that deals with the existence of entities and the effects of actions on these entities, while performing reasoning and inference at the same time, is a particularly difficult task. A recent natural language dataset by the Allen Institute for Artificial Intelligence, ProPara, attempted to address the challenges of determining entity existence and tracking entities in natural text.

As part of this work, an attempt is made to address the ProPara challenge. The Knowledge Representation and Reasoning (KRR) community has developed effective techniques for modeling and reasoning about actions, and similar techniques are used in this work. A system consisting of Inductive Logic Programming (ILP) and Answer Set Programming (ASP) is used to address the challenge; it achieves close to state-of-the-art results and provides an explainable model. An existing semantic role labeling parser is modified and used to parse the dataset.

On analysis of the learnt model, it was found that some of the rules were not generic enough. To overcome this issue, the Proposition Bank dataset is then used to add knowledge in an attempt to generalize the ILP-learnt rules and possibly improve the results.
Contributors: Bhattacharjee, Aurgho (Author) / Baral, Chitta (Thesis advisor) / Yang, Yezhou (Committee member) / Anwar, Saadat (Committee member) / Arizona State University (Publisher)
Created: 2019
Description

In a multi-robot system, locating a team robot is an important issue. If robots can refer to the location of team robots based on information obtained through passive action recognition, without explicit communication, various advantages (e.g., improved security for military purposes) can be obtained. Specifically, when team robots follow the same motion rule based on information about adjacent robots, associations can be found between robot actions. If these associations can be analyzed, they provide clues about remote robots, and using these clues it is possible to infer the positions of remote robots that are outside of the sensor range.

In this thesis, a multi-robot system is constructed using a combination of Thymio II robotic platforms and Raspberry Pi controllers. Robots moving in chain formation take action using motion rules based on information obtained through passive action recognition. To find associations between robots, a regression model is created using a Deep Neural Network (DNN) and Long Short-Term Memory (LSTM), a state-of-the-art technology. The input data of the regression model is divided into historical data, which are consecutive positions of the robot, and observed data, which is information about the observed robot. Historical data is sequence data that is analyzed through the LSTM layer. The accuracy of the regression model designed using a DNN can vary depending on the quantity and quality of the input. In this thesis, three different input situations are assumed for comparison: first, the amount of observed data differs; second, the type of observed data differs; and third, the history length differs. Comparative models are constructed for each case, and prediction accuracy is compared to analyze the effect of input data on the regression model. This exploration validates that these methods from deep learning can reduce the communication demands in the coordinated motion of multi-robot systems.
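
An illustrative, hedged PyTorch version of such a regression model is sketched below: an LSTM encodes the sequence of past positions (historical data), its final hidden state is concatenated with features of the observed robot (observed data), and fully connected layers regress the remote robot's (x, y) position. The dimensions, history length, and feature choices are assumptions for this sketch, not the trained model from the thesis.

```python
# Sketch of an LSTM + DNN regressor for predicting a remote robot's position.
import torch
import torch.nn as nn

class RemoteRobotRegressor(nn.Module):
    def __init__(self, obs_dim=4, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(hidden + obs_dim, 64), nn.ReLU(),
            nn.Linear(64, 2),                       # predicted (x, y)
        )

    def forward(self, history_xy, observed):
        # history_xy: [batch, T, 2] past positions; observed: [batch, obs_dim]
        _, (h_n, _) = self.lstm(history_xy)
        return self.head(torch.cat([h_n[-1], observed], dim=1))

model = RemoteRobotRegressor()
history = torch.rand(8, 10, 2)     # 10 past positions per sample
observed = torch.rand(8, 4)        # features of the observed neighbor robot
pred = model(history, observed)
criterion = nn.MSELoss()
print(pred.shape, criterion(pred, torch.rand(8, 2)).item())
```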
Contributors: Kang, Sehyeok (Author) / Pavlic, Theodore P (Thesis advisor) / Richa, Andréa W. (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2020