This collection includes both ASU Theses and Dissertations, submitted by graduate students, and the Barrett, Honors College theses submitted by undergraduate students. 


Description
The integration of Distributed Energy Resources (DERs), including wind energy and photovoltaic (PV) panels, into power systems increases the potential for events that could lead to outages and cascading failures. This risk is heightened by the limited dynamic information in energy grid datasets, primarily due to the sparse placement of Phasor Measurement Units (PMUs). This data quality issue underscores the need for effective methodologies to manage these challenges. One significant challenge is the data gaps in low-resolution (LR) data from Remote Terminal Units (RTUs) and smart meters, which hinder robust machine learning (ML) applications. To address this, a systematic approach involves preparing data effectively and designing efficient event detection methods, utilizing both the intrinsic physics of power systems and their extrinsic correlations. The process begins by interpolating LR data using high-resolution (HR) data, aiming to create virtual PMUs for improved grid management. Current interpolation methods often overlook extrinsic spatial-temporal correlations and intrinsic governing equations such as Ordinary Differential Equations (ODEs) or Differential Algebraic Equations (DAEs). Physics-Informed Neural Networks (PINNs) are used for this purpose, though they face challenges with limited LR samples. The solution involves exploring the embedding space governed by ODEs/DAEs, generating extrinsic correlations for initial LR data imputation, and enforcing intrinsic physical constraints for refinement. After data preparation, event data dimensions such as spatial, temporal, and measurement categories are recovered in a tensor. To prevent the overfitting common in traditional ML methods, tensor decomposition is used. This technique merges intrinsic and physical information across dimensions, yielding informative and compact feature vectors for efficient feature extraction and learning in event detection. 
Lastly, in grids with insufficient data, knowledge transfer from grids with similar event patterns is a viable solution. This involves optimizing projected and transferred vectors from tensor decomposition to maximize common knowledge utilization across grids. This strategy identifies common features, enhancing the robustness and efficiency of ML event detection models, even in scenarios with limited event data.
ContributorsMa, Zhihao (Author) / Weng, Yang (Thesis advisor) / Wu, Meng (Committee member) / Yu, Hongbin (Committee member) / Matavalam, Amarsagar Reddy Ramapuram (Committee member) / Arizona State University (Publisher)
Created2023
Description
Generative models are deep neural network-based models trained to learn the underlying distribution of a dataset. Once trained, these models can be used to sample novel data points from this distribution. Their impressive capabilities have been demonstrated in various generative tasks, encompassing areas such as image-to-image translation, style transfer, and image editing. One notable application of generative models is data augmentation, which expands and diversifies the training dataset to improve the performance of deep learning models on a downstream task. Generative models can create new samples similar to the original data but with variations and properties that are difficult to capture with traditional data augmentation techniques. However, the quality, diversity, and controllability of the shape and structure of the generated samples are often directly proportional to the size and diversity of the training dataset: a more extensive and diverse training dataset allows the generative model to capture the overall structures present in the data and generate more diverse and realistic-looking samples. In this dissertation, I present innovative methods designed to enhance the robustness and controllability of generative models, drawing upon physics-based, probabilistic, and geometric techniques. These methods improve the generalization and controllability of the generative model without necessarily relying on large training datasets. I enhance the robustness of generative models by integrating classical geometric moments for shape awareness while minimizing trainable parameters. Additionally, I employ non-parametric priors for the generative model's latent space, derived through basic probability and optimization methods, to improve the fidelity of interpolated images. 
I adopt a hybrid approach to address domain-specific challenges with limited data and controllability, combining physics-based rendering with generative models for more realistic results. These approaches are particularly relevant in industrial settings, where the training datasets are small and class imbalance is common. Through extensive experiments on various datasets, I demonstrate the effectiveness of the proposed methods over conventional approaches.
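The classical geometric moments mentioned above can be computed directly from image intensities. Below is a minimal, self-contained sketch of raw moments and the centroid they yield; the helper names and the toy binary image are assumptions for illustration, not the dissertation's implementation.

```python
import numpy as np

def geometric_moment(img, p, q):
    """Raw geometric moment M_pq = sum over pixels of x^p * y^q * img[y, x]."""
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    return float(np.sum((xs ** p) * (ys ** q) * img))

def centroid(img):
    """Intensity centroid (M10/M00, M01/M00) of an image."""
    m00 = geometric_moment(img, 0, 0)
    return geometric_moment(img, 1, 0) / m00, geometric_moment(img, 0, 1) / m00

img = np.zeros((8, 8))
img[2:6, 3:7] = 1.0          # a 4x4 square blob
cx, cy = centroid(img)       # centroid of the blob
```

Higher-order moments taken about this centroid (central moments) are translation-invariant shape descriptors, which is what makes them useful as a shape-awareness signal for a generative model.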
ContributorsSingh, Rajhans (Author) / Turaga, Pavan (Thesis advisor) / Jayasuriya, Suren (Committee member) / Berisha, Visar (Committee member) / Fazli, Pooyan (Committee member) / Arizona State University (Publisher)
Created2023
Description
In the rapidly evolving field of computer vision, propelled by advancements in deep learning, the integration of hardware-software co-design has become crucial to overcome the limitations of traditional imaging systems. This dissertation explores hardware-software co-design in computational imaging, particularly in light transport acquisition and Non-Line-of-Sight (NLOS) imaging. By leveraging projector-camera systems and computational techniques, this thesis addresses critical challenges in imaging complex environments, such as adverse weather conditions, low-light scenarios, and the imaging of reflective or transparent objects. The first contribution of this thesis is the theory, design, and implementation of a slope disparity gating system: a vertically aligned configuration of a synchronized raster-scanning projector and rolling-shutter camera that facilitates selective imaging through disparity-based triangulation. This system introduces a novel, hardware-oriented approach to selective imaging, circumventing the limitations of post-capture processing. The second contribution is the realization of two innovative approaches for spotlight optimization to improve localization and tracking in NLOS imaging. The first approach utilizes radiosity-based optimization to improve 3D localization and object identification in small-scale laboratory settings. The second approach introduces a learning-based illumination network, along with a differentiable renderer and an NLOS estimation network, to optimize human 2D localization and activity recognition. This approach is validated on a large, room-scale scene with complex line-of-sight geometries and occluders. The third contribution is an attention-based neural network for passive NLOS settings where there is no controllable illumination. The thesis demonstrates real-time, dynamic NLOS human tracking while the camera moves on a mobile robotic platform. 
In addition, this thesis contains an appendix featuring temporally consistent relighting for portrait videos with applications in computer graphics and vision.
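The disparity-based triangulation behind slope disparity gating can be sketched numerically. Assuming a rectified, vertically aligned projector-camera pair (the function names and example numbers below are illustrative assumptions, not values from the dissertation), the row offset between the projector's scanline and the camera's rolling-shutter row selects a depth plane via standard triangulation, and the exposure window widens it into a depth slab:

```python
def selected_depth(focal_px, baseline_m, row_offset_px):
    """Depth plane imaged when the camera row trails the projector
    scanline by row_offset_px: triangulation z = f * b / d."""
    return focal_px * baseline_m / row_offset_px

def depth_slab(focal_px, baseline_m, row_offset_px, exposure_rows):
    """Near/far depths admitted when the rolling-shutter exposure
    spans exposure_rows rows around the nominal offset."""
    near = selected_depth(focal_px, baseline_m, row_offset_px + exposure_rows / 2)
    far = selected_depth(focal_px, baseline_m, row_offset_px - exposure_rows / 2)
    return near, far
```

With a 1000 px focal length, 0.2 m baseline, and 40 px row offset, only light returning from near the 5 m plane is captured, which is the hardware-level selectivity the abstract describes.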
ContributorsChandran, Sreenithy (Author) / Jayasuriya, Suren (Thesis advisor) / Turaga, Pavan (Committee member) / Dasarathy, Gautam (Committee member) / Kubo, Hiroyuki (Committee member) / Arizona State University (Publisher)
Created2024
Description
Since the invention of the automobile, engineers have continually designed improvements to provide customers with safer, faster, more reliable, and more comfortable vehicles. Each new generation introduces new technology into mainstream products, and one technology currently being pushed is autonomy. Established manufacturers and small research teams alike have worked for years to make the automobile autonomous, with none able to confidently claim a solution. Within the engineering community there are two schools of thought on this problem: some believe that cameras and computer vision alone are sufficient, while others believe that LiDAR is the solution. The optimal case is to use cameras and LiDARs together to increase reliability and ensure data confidence. Designers are reluctant to use LiDAR systems due to their weight, cost, and complexity; with many moving components, these systems are bulky and contain costly moving parts that eventually need replacement due to their constant motion. The solution is to develop a solid-state LiDAR system, which would resolve all of the issues stated above, and this research goes one level further by investigating a potential prototype for a combined solid-state camera and LiDAR package. Currently, no manufacturer offers a system that contains both a solid-state LiDAR and a solid-state camera with computing capabilities; manufacturers provide either just the camera, just the LiDAR, or just the computation ability. This design also uses commercial off-the-shelf (COTS) parts to increase reproducibility for open-source development and to reduce total manufacturing cost. While keeping costs low, the design keeps its specifications and performance on par with a widely used commercial product, the Velodyne VL50.
ContributorsEltohamy, Gamal (Author) / Yu, Hongbin (Thesis advisor) / Goryll, Michael (Committee member) / Allee, David (Committee member) / Arizona State University (Publisher)
Created2024
Description
In the age of artificial intelligence, Machine Learning (ML) has become a pervasive force, impacting countless aspects of our lives. As ML's influence expands, concerns about its reliability and trustworthiness have intensified, with security and robustness emerging as significant challenges. For instance, it has been demonstrated that slight perturbations to a stop sign can cause ML classifiers to misidentify it as a speed limit sign, raising concerns about whether ML algorithms are suitable for real-world deployment. To tackle these issues, Responsible Machine Learning (Responsible ML) has emerged with a clear mission: to develop secure and robust ML algorithms. This dissertation aims to develop Responsible ML algorithms under real-world constraints. Specifically, recognizing the role of adversarial attacks in exposing security vulnerabilities and robustifying ML methods, it lays the foundation of Responsible ML by outlining a novel taxonomy of adversarial attacks in real-world settings, categorizing them into black-box target-specific and target-agnostic attacks. It then proposes potent adversarial attacks in each category, aiming to achieve both effectiveness and efficiency. Transcending conventional boundaries, it introduces the notion of causality into Responsible ML (a.k.a. Causal Responsible ML), presenting the causal adversarial attack. This represents the first principled framework to explain the transferability of adversarial attacks to unknown models by identifying their common source of vulnerabilities, thereby exposing the pinnacle of threat and vulnerability: conducting successful attacks on any model with no prior knowledge. Finally, acknowledging the surge of Generative AI, this dissertation explores Responsible ML for Generative AI. 
It introduces a novel adversarial attack that unveils the adversarial vulnerabilities of generative models and devises a strong defense mechanism to bolster their robustness against potential attacks.
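As a concrete illustration of the perturbation-based attacks discussed above, the sketch below implements the classic Fast Gradient Sign Method (FGSM) against a toy linear classifier. This is a textbook white-box example for intuition only, not one of the dissertation's proposed black-box or causal attacks, and all names and numbers are assumptions.

```python
import numpy as np

def fgsm(x, grad, eps):
    """Fast Gradient Sign Method: move each input coordinate by eps in
    the direction that increases the loss (sign of the loss gradient)."""
    return x + eps * np.sign(grad)

# Toy linear classifier: predicted label is the sign of w . x
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, 0.1, 0.2])   # clean input; w @ x = 0.2, classified positive
grad = -w                        # gradient of a loss that pushes the score down
x_adv = fgsm(x, grad, eps=0.2)   # bounded perturbation, max 0.2 per coordinate
```

A perturbation of only 0.2 per coordinate is enough to flip the toy classifier's decision, mirroring the stop-sign example in the abstract.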
ContributorsMoraffah, Raha (Author) / Liu, Huan (Thesis advisor) / Yang, Yezhou (Committee member) / Xiao, Chaowei (Committee member) / Turaga, Pavan (Committee member) / Carley, Kathleen (Committee member) / Arizona State University (Publisher)
Created2024
Description
Manipulator motion planning has conventionally been solved using sampling- and optimization-based algorithms that are agnostic to embodiment and environment configurations. However, these algorithms plan on a fixed environment representation approximated using shape primitives, and hence struggle to find solutions in cluttered and dynamic environments. Furthermore, they fail to produce solutions for complex unstructured environments under real-time bounds. Neural Motion Planners (NMPs) are an appealing alternative to algorithmic approaches, as they can leverage parallel computing for planning while incorporating arbitrary environmental constraints directly from raw sensor observations. Contemporary NMPs successfully transfer to different environment variations; however, they fail to generalize across embodiments. This thesis proposes "AnyNMP", a generalist motion planning policy for zero-shot transfer across different robotic manipulators and environments. The policy is conditioned on a semantically segmented 3D point cloud representation of the workspace, enabling implicit sim2real transfer. In the proposed approach, templates are formulated for manipulator kinematics, and ground truth motion plans are collected for over 3 million procedurally sampled robots in randomized environments. The planning pipeline consists of a state validation model for differentiable collision detection and a sampling-based planner for motion generation. AnyNMP has been validated on 5 different commercially available manipulators and showcases successful cross-embodiment planning, achieving an 80% average success rate on baseline benchmarks.
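The state-validation-plus-sampling pipeline described above can be sketched, in heavily simplified 2D form, as a minimal RRT with a straight-line collision checker. The code is an illustrative toy (a point robot, circular obstacles, hypothetical names), not AnyNMP's learned models:

```python
import math
import random

def collision_free(p, q, obstacles, steps=20):
    """State validation: sample the segment p->q and reject it if any
    sample falls inside a circular obstacle (ox, oy, radius)."""
    for i in range(steps + 1):
        t = i / steps
        x = p[0] + t * (q[0] - p[0])
        y = p[1] + t * (q[1] - p[1])
        for ox, oy, r in obstacles:
            if math.hypot(x - ox, y - oy) < r:
                return False
    return True

def rrt(start, goal, obstacles, iters=2000, step=0.3, seed=0):
    """Minimal 2D RRT in a 5x5 workspace: grow a tree of collision-free
    edges toward random samples until the goal becomes reachable."""
    rng = random.Random(seed)
    nodes, parent = [start], {0: None}
    for _ in range(iters):
        sample = goal if rng.random() < 0.1 else (rng.uniform(0, 5), rng.uniform(0, 5))
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        near = nodes[i]
        d = math.dist(near, sample)
        if d == 0:
            continue
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        if collision_free(near, new, obstacles):
            nodes.append(new)
            parent[len(nodes) - 1] = i
            if math.dist(new, goal) < step and collision_free(new, goal, obstacles):
                return True
    return False
```

An NMP replaces both pieces with learned components: the collision checker becomes a differentiable state validation model, and the random sampler becomes a policy conditioned on the segmented point cloud.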
ContributorsRath, Prabin Kumar (Author) / Gopalan, Nakul (Thesis advisor) / Yu, Hongbin (Thesis advisor) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created2024