Matching Items (21)
Description
Advancements in computer vision and machine learning have added a new dimension to remote sensing applications with the aid of imagery analysis techniques. Applications such as autonomous navigation and terrain classification, which make use of image classification techniques, are challenging problems, and research is still being carried out to find better solutions. In this thesis, a novel method is proposed that uses image registration techniques to improve image classification. The method reduces the classification error rate by registering incoming images against previously obtained images before performing classification. The motivation is that images obtained in the same region will not differ significantly in their characteristics; registration therefore produces an image that matches the previously obtained image more closely, enabling better classification. To illustrate that the proposed method works, the naïve Bayes and iterative closest point (ICP) algorithms are used for the classification and registration stages, respectively. The implementation was tested extensively in simulation using synthetic images and on a real-life dataset, the Defense Advanced Research Projects Agency (DARPA) Learning Applied to Ground Robots (LAGR) dataset. The results show that ICP registration does improve naïve Bayes classification, reducing the error rate by an average of about 10% on the synthetic data and about 7% on the actual datasets.
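As context for the registration stage described above, a minimal 2D ICP loop can be sketched as follows. The brute-force nearest-neighbour matching, the iteration count, and the synthetic point sets are illustrative assumptions, not the thesis implementation (which registers full images).

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch/SVD)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, mu_d - R @ mu_s

def icp(src, dst, iters=20):
    """Iteratively match each source point to its nearest target point, then re-align."""
    cur = src.copy()
    for _ in range(iters):
        # brute-force nearest-neighbour correspondences
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    return cur
```

For small initial misalignments, the loop converges to the rigid transform that best overlays the two point sets; the same alternation of correspondence and alignment underlies image-based ICP variants.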
Contributors: Muralidhar, Ashwini (Author) / Saripalli, Srikanth (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Composite materials are increasingly being used in aircraft, automobiles, and other applications due to their high strength-to-weight and stiffness-to-weight ratios. However, the presence of damage, such as delamination or matrix cracks, can significantly compromise the performance of these materials and result in premature failure. Structural components are often manually inspected to detect the presence of damage. This technique, known as schedule-based maintenance, is expensive, time-consuming, and often limited to easily accessible structural elements. There is therefore increasing demand for robust and efficient Structural Health Monitoring (SHM) techniques that can be used for condition-based monitoring, in which structural components are inspected based on damage metrics rather than flight hours. SHM relies on in situ frameworks for detecting early signs of damage in exposed and unexposed structural elements, offering not only a reduced number of schedule-based inspections but also better useful-life estimates. SHM frameworks require the development of sensing technologies, algorithms, and procedures to detect, localize, quantify, characterize, and assess overall damage in aerospace structures so that reliable estimates of the remaining useful life can be made. The use of piezoelectric transducers along with guided Lamb waves has received considerable attention due to the weight, cost, and function of systems based on these elements. The research in this thesis investigates the ability of Lamb waves to detect damage in feature-dense anisotropic composite panels. Most current research negates the effects of experimental variability by performing tests on structurally simple isotropic plates that serve as both baseline and damaged specimens.
However, in actual applications variability cannot be negated, so the effects of complex sample geometries, environmental operating conditions, and variability in material properties must be studied. This research is based on experiments conducted on a single blade-stiffened anisotropic composite panel in which delamination damage caused by impact is localized. The overall goal was a correlative approach that used only the damage feature produced by the delamination as the damage index. This approach was adopted because it offers a simple way to determine the existence and location of damage without conducting a more complex wave propagation analysis or accounting for the geometric complexities of the test specimen. Results showed that even in a complex structure, if the damage feature can be extracted and measured, an appropriate damage index can be associated with it and the location of the damage can be inferred using a dense sensor array. The second experiment studies the effects of temperature on damage detection when one test specimen provides the benchmark data set and another is used for damage data collection. This expands the previous experiment to explore not only the effects of variable temperature but also the effects of high experimental variability. Results show that the damage feature is not only extractable at higher temperatures, but that data from one panel at one temperature can be directly compared to another panel at another temperature for baseline comparison, due to the linearity of the collected data.
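The correlative damage-index idea can be sketched numerically: compare each sensor's recorded waveform against a pristine baseline and report one minus their normalized correlation. The synthetic tone burst, the hypothetical delamination echo, and this particular index definition are illustrative assumptions, not the experimental processing chain used in the thesis.

```python
import numpy as np

def damage_index(baseline, signal):
    """Correlation-based damage index: 0 for identical waveforms,
    approaching 1 as the recorded signal decorrelates from the baseline."""
    b = (baseline - baseline.mean()) / baseline.std()
    s = (signal - signal.mean()) / signal.std()
    return 1.0 - abs(np.dot(b, s) / len(b))

# synthetic 100 kHz guided-wave burst with a Gaussian envelope (illustrative)
t = np.linspace(0, 1e-3, 1000)
tone = np.sin(2 * np.pi * 100e3 * t) * np.exp(-((t - 2e-4) / 1e-4) ** 2)
rng = np.random.default_rng(0)
pristine = tone + 0.01 * rng.normal(size=t.size)
# add a delayed scattered echo standing in for a (hypothetical) delamination
damaged = tone + 0.3 * np.roll(tone, 150) + 0.01 * rng.normal(size=t.size)

di_baseline = damage_index(pristine, pristine)
di_damaged = damage_index(pristine, damaged)
```

In a dense sensor array, an index like this computed per sensor rises fastest near the damage, which is what allows location to be inferred without a full wave-propagation model.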
Contributors: Vizzini, Anthony James, II (Author) / Chattopadhyay, Aditi (Thesis advisor) / Fard, Masoud (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
The exponential rise in unmanned aerial vehicles has created a need for accurate pose estimation even under extreme conditions. Visual odometry (VO) is the estimation of a vehicle's position and orientation from a sequence of images captured by a camera mounted on it. VO offers a cheap and relatively accurate alternative to conventional odometry techniques such as wheel odometry, inertial measurement systems, and the global positioning system (GPS). This thesis implements and analyzes the performance of a two-camera VO method, stereo visual odometry (SVO), in the presence of deterrent factors such as shadows, extremely bright outdoor scenes, and wet conditions. To allow the implementation of VO on any generic vehicle, the porting of the VO algorithm to Android handsets is also discussed. The SVO is implemented in three steps. In the first step, a dense disparity map for the scene is computed using the sum of absolute differences technique for stereo matching on rectified and pre-filtered stereo frames; epipolar geometry is used to simplify the matching problem. The second step involves feature detection and temporal matching: features detected by the Harris corner detector are matched between consecutive frames using the Lucas-Kanade feature tracker. In the third step, the 3D coordinates of the matched features are computed from the disparity map and related to each other by a rotation and translation, computed by least-squares minimization with the aid of singular value decomposition; random sample consensus (RANSAC) is used for outlier rejection. The accuracy of the algorithm is quantified by the final position error, the difference between the final position computed by the SVO algorithm and the final ground-truth position obtained from GPS.
The SVO showed an error of around 1% under normal conditions for a path length of 60 m and around 3% in bright conditions for a path length of 130 m. The algorithm suffered in the presence of shadows and vibrations, with errors of around 15% for path lengths of 20 m and 100 m, respectively.
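The first step, block matching by sum of absolute differences (SAD) along rectified rows, can be sketched as follows. The window size, disparity search range, and synthetic image pair are illustrative assumptions; a real pipeline would add the pre-filtering and sub-pixel refinement the abstract mentions.

```python
import numpy as np

def sad_disparity(left, right, max_disp=16, win=2):
    """Per-pixel disparity by minimising the sum of absolute differences
    along the same row (the epipolar line) of a rectified stereo pair."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(win, h - win):
        for x in range(win + max_disp, w - win):
            patch = left[y - win:y + win + 1, x - win:x + win + 1]
            costs = [np.abs(patch - right[y - win:y + win + 1,
                                          x - d - win:x - d + win + 1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))
    return disp
```

Because the pair is rectified, the search is one-dimensional: each left patch is compared only against horizontally shifted patches in the same row of the right image, which is exactly the simplification epipolar geometry buys.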
Contributors: Dhar, Anchit (Author) / Saripalli, Srikanth (Thesis advisor) / Li, Baoxin (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created: 2010
Description
Non-Destructive Testing (NDT) is integral to preserving the structural health of materials. Techniques that fall under the NDT category are able to evaluate integrity and condition of a material without permanently altering any property of the material. Additionally, they can typically be used while the material is in active use instead of needing downtime for inspection.
The two general categories of structural health monitoring (SHM) systems are passive and active monitoring. Active SHM systems inject energy to monitor the health of a structure (such as sound waves in ultrasonics), while passive systems do not. As such, passive SHM tends to be more desirable: a system could be permanently fixed to a critical location, passively accepting signals until it records a damage event, then localizing and characterizing the damage. This is the goal of acoustic emissions testing.
When certain types of damage occur, such as matrix cracking or delamination in composites, the corresponding release of energy creates sound waves, or acoustic emissions, that propagate through the material. Acoustic sensors fixed to the surface can pick up data in both the time and frequency domains of the wave. With proper data analysis, a time of arrival (TOA) can be calculated for each sensor, allowing for localization of the damage event; the frequency data can be used to characterize the damage.
In traditional acoustic emissions testing, the TOA combined with wave velocity and information about signal attenuation in the material is used to localize events. However, for complex geometries or anisotropic materials (such as carbon fibre composites), velocity and attenuation can vary wildly with direction. In these cases, localization can instead be based on the differences in time of arrival for each sensor pair. This technique, called Delta T mapping, is the main focus of this study.
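The arrival-time-difference idea can be illustrated with a minimal numeric sketch: grid-search the source location that best reproduces the measured TOA differences for every sensor pair. The sensor layout, single wave speed, and search grid below are illustrative assumptions; actual Delta T mapping replaces the assumed velocity model with empirically trained arrival-time-difference maps.

```python
import numpy as np

def locate_event(sensors, toas, speed, grid_pts=101, extent=1.0):
    """Grid-search the source location minimising the mismatch between
    measured and predicted arrival-time differences for all sensor pairs."""
    xs = np.linspace(0.0, extent, grid_pts)
    pairs = [(i, j) for i in range(len(sensors)) for j in range(i + 1, len(sensors))]
    best, best_err = None, np.inf
    for gx in xs:
        for gy in xs:
            # predicted arrival times from this candidate source
            t = np.hypot(sensors[:, 0] - gx, sensors[:, 1] - gy) / speed
            err = sum(((t[i] - t[j]) - (toas[i] - toas[j])) ** 2 for i, j in pairs)
            if err < best_err:
                best, best_err = (gx, gy), err
    return best
```

Because only pairwise differences enter the cost, the (unknown) absolute emission time cancels out, which is the key property that makes TOA-difference localization practical.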
Contributors: Briggs, Nathaniel (Author) / Chattopadhyay, Aditi (Thesis director) / Papandreou-Suppappola, Antonia (Committee member) / Skinner, Travis (Committee member) / Mechanical and Aerospace Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
Multi-segment manipulators and mobile robot collectives are examples of multi-agent robotic systems, in which each segment or robot can be considered an agent. Fundamental motion control problems for such systems include the stabilization of one or more agents to target configurations or trajectories while preventing inter-agent collisions, agent collisions with obstacles, and deadlocks. Despite extensive research on these control problems, there are still challenges in designing controllers that (1) are scalable with the number of agents; (2) have theoretical guarantees on collision-free agent navigation; and (3) can be used when the states of the agents and the environment are only partially observable. Existing centralized and distributed control architectures have limited scalability due to their computational complexity and communication requirements, while decentralized control architectures are often effective only under impractical assumptions that do not hold in real-world implementations. The main objective of this dissertation is to develop and evaluate decentralized approaches for multi-agent motion control that enable agents to use their onboard sensors and computational resources to decide how to move through their environment, with limited or absent inter-agent communication and external supervision. Specifically, control approaches are designed for multi-segment manipulators and mobile robot collectives to achieve position and pose (position and orientation) stabilization, trajectory tracking, and collision and deadlock avoidance. These control approaches are validated in both simulations and physical experiments to show that they can be implemented in real-time while remaining computationally tractable. First, kinematic controllers are proposed for position stabilization and trajectory tracking control of two- or three-dimensional hyper-redundant multi-segment manipulators. 
Next, robust and gradient-based feedback controllers are presented for individual holonomic and nonholonomic mobile robots that achieve position stabilization, trajectory tracking control, and obstacle avoidance. Then, nonlinear Model Predictive Control methods are developed for collision-free, deadlock-free pose stabilization and trajectory tracking control of multiple nonholonomic mobile robots in known and unknown environments with obstacles, both static and dynamic. Finally, a feedforward proportional-derivative controller is defined for collision-free velocity tracking of a moving ground target by multiple unmanned aerial vehicles.
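The gradient-based obstacle avoidance mentioned above can be sketched, for a single holonomic robot, as descent on an attractive-plus-repulsive potential. The gains, influence radius, toy obstacle, and integration step are illustrative assumptions; the dissertation's controllers (including the MPC methods) are considerably more sophisticated.

```python
import numpy as np

def potential_step(pos, goal, obstacle, dt=0.05, k_att=1.0, k_rep=0.2, influence=0.6):
    """One step of a gradient-based (potential-field) controller for a holonomic
    robot: descend an attractive quadratic potential toward the goal, plus a
    repulsive potential active within `influence` of the obstacle."""
    v = k_att * (goal - pos)                                   # attractive gradient
    d = np.linalg.norm(pos - obstacle)
    if d < influence:                                          # repulsive gradient
        v += k_rep * (1.0 / d - 1.0 / influence) / d**2 * (pos - obstacle) / d
    return pos + dt * v

pos = np.array([0.0, 0.0])
goal = np.array([2.0, 0.0])
obstacle = np.array([1.0, 0.15])                               # slightly off the direct path
min_clearance = np.inf
for _ in range(400):
    pos = potential_step(pos, goal, obstacle)
    min_clearance = min(min_clearance, np.linalg.norm(pos - obstacle))
```

The well-known weakness of this sketch, local minima when the obstacle sits exactly between start and goal, is one motivation for the predictive and deadlock-aware formulations the dissertation develops.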
Contributors: Salimi Lafmejani, Amir (Author) / Berman, Spring (Thesis advisor) / Tsakalis, Konstantinos (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Marvi, Hamidreza (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
Few-layer black phosphorus (FLBP) is one of the most important two-dimensional (2D) materials due to its strongly layer-dependent quantized band structure, which leads to wavelength-tunable optical and electrical properties. This thesis focuses on the preparation of stable, high-quality FLBP, the characterization of its optical properties, and device applications. Part I presents an approach to preparing high-quality, stable FLBP samples by combining O2 plasma etching, boron nitride (BN) sandwiching, and subsequent rapid thermal annealing (RTA). This strategy has produced FLBP samples with a record-long lifetime, with 80% of the photoluminescence (PL) intensity remaining after 7 months. The improved material quality allows a more definitive relationship to be established between layer number and PL energies. Part II presents the study of oxygen incorporation in FLBP. Natural oxidation in air is dominated by the formation of interstitial and dangling oxygen. Real-time PL and Raman spectroscopy show that continuous laser excitation breaks the bonds of interstitial oxygen, and the freed oxygen atoms can diffuse around or form dangling oxygen under mild heating; RTA at 450 °C converts interstitial oxygen into dangling oxygen more thoroughly. Such oxygen-containing samples show optical properties similar to those of pristine BP samples, and their bandgap increases with the concentration of incorporated oxygen. Part III investigates the nature of the emission from the prepared samples. Power- and temperature-dependent measurements demonstrate that the PL is dominated by excitons and trions, with a combined share larger than 80% at room temperature. These measurements yield trion and exciton binding energies for 2-, 3-, and 4-layer BP of around 33, 23, and 15 meV for trions and 297, 276, and 179 meV for excitons at 77 K, respectively.
Part IV presents an initial exploration of device applications of such FLBP samples. Coupling between photonic crystal cavity (PCC) modes and the FLBP emission is realized by integrating the prepared sandwich structure onto a 2D PCC. Electroluminescence has also been achieved by integrating such materials onto interdigital electrodes driven by alternating electric fields.
Contributors: Li, Dongying (Author) / Ning, Cun-Zheng (Thesis advisor) / Vasileska, Dragica (Committee member) / Lai, Ying-Cheng (Committee member) / Yu, Hongbin (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
Predicting nonlinear dynamical systems has been a long-standing challenge in science. The field is currently witnessing a revolution with the advent of machine learning methods, while the analysis of dynamics in various nonlinear complex systems remains crucial. Guided by these directions, I conduct the following studies. Predicting critical transitions and transient states in nonlinear dynamics is a complex problem. I developed a solution called parameter-aware reservoir computing, which uses machine learning to track how system dynamics change with a driving parameter. I show that the transition point can be accurately predicted from training data collected only in a sustained functioning regime before the transition. Notably, the method can also predict whether the system will enter a transient state, the distribution of transient lifetimes, and their average before a final collapse, all of which are crucial for management. I then introduce a machine-learning-based digital twin, built on reservoir computing, for monitoring and predicting the evolution of externally driven nonlinear dynamical systems. Extensive tests on models from optics, ecology, and climate verify the approach's effectiveness. The digital twins can extrapolate unknown system dynamics, continually forecast and monitor under non-stationary external driving, infer hidden variables, adapt to different driving waveforms, and extrapolate bifurcation behaviors across varying system sizes. Integrating engineered gene circuits into host cells poses a significant challenge in synthetic biology due to circuit-host interactions such as growth feedback. I conducted systematic studies on hundreds of circuit structures exhibiting various functionalities, identified a comprehensive categorization of growth-induced failures, and discerned three dynamical mechanisms behind these circuit failures.
Moreover, my comprehensive computations reveal a scaling law between circuit robustness and the intensity of growth feedback, and identify a class of circuits with optimal robustness. Chimera states, a symmetry-breaking phenomenon in oscillator networks, traditionally have transient lifetimes that grow exponentially with system size. However, my research on high-dimensional oscillators leads to the discovery of "short-lived" chimera states, whose lifetime increases logarithmically with system size and decreases logarithmically with the strength of random perturbations, indicating a unique fragility. To understand these states, I use a transverse stability analysis supported by simulations.
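As background for the reservoir computing used throughout this work, a minimal echo state network can be sketched: a fixed random recurrent reservoir driven by the input, with only a linear (ridge-regression) readout trained. The reservoir size, spectral radius, washout length, and sine-wave task below are illustrative assumptions; the dissertation's parameter-aware variant additionally feeds the driving parameter into the reservoir.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 200                                            # reservoir size (illustrative)
W = rng.normal(size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))    # rescale to spectral radius 0.9
w_in = rng.uniform(-0.5, 0.5, size=N)              # fixed random input weights

u = np.sin(0.1 * np.arange(3000))                  # driving signal to be predicted
x = np.zeros(N)
states = []
for t in range(len(u) - 1):
    x = np.tanh(W @ x + w_in * u[t])               # reservoir update
    states.append(x.copy())

X = np.array(states[200:])                         # drop the transient (washout)
Y = u[201:]                                        # one-step-ahead targets
# train only the linear readout, by ridge regression
w_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ Y)
pred = X @ w_out
rmse = np.sqrt(np.mean((pred - Y) ** 2))
```

Only `w_out` is learned; the recurrent weights stay random and fixed, which is what makes reservoir training a cheap linear regression rather than backpropagation through time.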
Contributors: Kong, Lingwei (Author) / Lai, Ying-Cheng (Thesis advisor) / Tian, Xiaojun (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Alkhateeb, Ahmed (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
With the advent of advanced analysis tools and access to related published data, it is becoming more difficult for data owners to suppress private information in published data while still providing useful information. This dual problem of providing useful, accurate information while protecting it has been challenging, especially in healthcare. Data owners lack an automated resource that provides layers of protection on a published dataset along with validated statistical measures of usability. Differential privacy (DP) has gained considerable attention in the past few years as a solution to this dual problem. DP is a statistical anonymity model that can protect data from adversarial observation while still supporting the intended usage. This dissertation introduces a novel DP protection mechanism called Inexact Data Cloning (IDC), which simultaneously protects and preserves information in published data while conveying the intent of the source data. IDC preserves the privacy of the records by converting the raw data records into clonesets. The clonesets then pass through a classifier that removes potentially compromising clonesets, retaining only good inexact clonesets. The IDC mechanism depends on a set of privacy protection metrics, the differential privacy protection metrics (DPPM), which represent the overall protection level. IDC uses two novel performance values, the differential privacy protection score (DPPS) and the clone classifier selection percentage (CCSP), to estimate the privacy level of protected data. In support of using IDC as a viable data security product, a software tool-chain prototype, the differential privacy protection architecture (DPPA), was developed around the IDC security mechanism. DPPA acts as a hub that facilitates a market for DP data security mechanisms.
DPPA incorporates standalone IDC mechanisms and provides automation, IDC-protected published datasets, and statistically verified IDC dataset diagnostic reports. DPPA currently runs functional and operational benchmark processes that quantify the DP protection of a given published dataset. The DPPA tool was recently used to test a couple of health datasets, and the results further validate the feasibility of the IDC mechanism.
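IDC is this dissertation's own mechanism; as background, the classical Laplace mechanism illustrates the ε-differential-privacy guarantee that such mechanisms target. The toy age list, predicate, and ε value below are illustrative assumptions, not part of the IDC design.

```python
import numpy as np

def laplace_count(values, predicate, epsilon, rng):
    """Release a count query under epsilon-differential privacy.
    A count has L1 sensitivity 1 (adding or removing one record changes it
    by at most 1), so Laplace noise of scale 1/epsilon satisfies epsilon-DP."""
    true_count = sum(1 for v in values if predicate(v))
    return true_count + rng.laplace(scale=1.0 / epsilon)

rng = np.random.default_rng(7)
ages = [34, 45, 29, 61, 70, 52, 48, 33]          # hypothetical records
noisy = laplace_count(ages, lambda a: a >= 50, epsilon=0.5, rng=rng)
```

The released value is unbiased (the noise has zero mean), so repeated independent releases would average toward the true count, which is exactly why privacy budgets limit how often a query may be answered.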
Contributors: Thomas, Zelpha (Author) / Bliss, Daniel W (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Banerjee, Ayan (Committee member) / Shrivastava, Aviral (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
Disentangling latent spaces is an important research direction in the interpretability of unsupervised machine learning. Several recent works using deep learning are very effective at producing disentangled representations. However, in the unsupervised setting, there is no way to pre-specify which part of the latent space captures specific factors of variation. While this is generally a hard problem because of the non-existence of analytical expressions to capture these variations, certain factors, like geometric transforms, can be expressed analytically. Furthermore, in existing frameworks, the disentangled values are not interpretable. The focus of this work is to disentangle these geometric factors of variation (which turn out to be nuisance factors for many applications) from the semantic content of the signal in an interpretable manner, which in turn makes the features more discriminative. Experiments demonstrate the modularity of the approach with respect to other disentangling strategies, as well as its efficacy on multiple one-dimensional (1D) and two-dimensional (2D) datasets.
Contributors: Koneripalli Seetharam, Kaushik (Author) / Turaga, Pavan (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Jayasuriya, Suren (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
The research on the topology and dynamics of complex networks is one of the most active areas in complex systems science. The goals are to structure our understanding of real-world social, economic, technological, and biological systems in terms of networks consisting of a large number of interacting units, and to develop corresponding detection, prediction, and control strategies. In this highly interdisciplinary field, my research concentrates on universal estimation schemes, physical controllability, and the mechanisms behind extreme events and cascading failure in complex networked systems.

Revealing the underlying structure and dynamics of complex networked systems from observed data, without any specific prior information, is of fundamental importance to science, engineering, and society. We articulate a Markov-network-based model, the sparse dynamical Boltzmann machine (SDBM), as a universal estimator of network structure and approximator of dynamics, based on techniques including compressive sensing and the K-means algorithm. It recovers the network structure of the original system and predicts its short-term and even long-term dynamical behavior for a large variety of representative dynamical processes on model and real-world complex networks.
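The compressive-sensing step underlying such estimators can be illustrated by recovering a sparse coefficient vector from underdetermined linear measurements. The problem sizes, sparsity level, and the ISTA solver below are illustrative assumptions; the SDBM's actual estimator is considerably more elaborate.

```python
import numpy as np

def ista(A, y, lam=0.001, iters=20000):
    """Iterative soft-thresholding (ISTA) for the lasso problem
    min_x 0.5*||Ax - y||^2 + lam*||x||_1, a workhorse of compressive sensing."""
    L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - (A.T @ (A @ x - y)) / L            # gradient step on the smooth part
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)   # soft shrinkage
    return x

rng = np.random.default_rng(3)
n, m = 100, 40                                     # unknowns, measurements (m < n)
A = rng.normal(size=(m, n)) / np.sqrt(m)           # random Gaussian sensing matrix
x_true = np.zeros(n)
x_true[[5, 40, 77]] = [1.5, -2.0, 1.0]             # 3-sparse ground truth
y = A @ x_true
x_hat = ista(A, y)
```

With far fewer measurements than unknowns, the L1 penalty singles out the sparse solution; in network reconstruction, the recovered support plays the role of the estimated links.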

One of the most challenging problems in complex dynamical systems is to control complex networks. Upon finding that the energy required to approach a target state with reasonable precision is often unbearably large, and that the energy of controlling a set of networks with similar structural properties follows a fat-tail distribution, we identify fundamental structural "short boards" that play a dominant role in the enormous energy requirement, offer a theoretical interpretation for the fat-tail distribution, and propose simple strategies to significantly reduce the energy.
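For linear network dynamics, the control energy discussed above can be computed from the controllability Gramian: the minimum energy to reach a target state grows with the Gramian-inverse quadratic form. The 3-node directed chain driven only at its head is an illustrative toy, not a network from the dissertation.

```python
import numpy as np

def control_energy(A, B, x_target, T=10):
    """Minimum energy to drive x(0)=0 to x_target in T steps of
    x(t+1) = A x(t) + B u(t):  E = x_target^T W^{-1} x_target,
    with W the T-step controllability Gramian sum A^t B B^T (A^T)^t."""
    n = A.shape[0]
    W = np.zeros((n, n))
    Ak = np.eye(n)
    for _ in range(T):
        W += Ak @ B @ B.T @ Ak.T
        Ak = A @ Ak
    return float(x_target @ np.linalg.solve(W, x_target))

# a 3-node directed chain, actuated only at node 0 (illustrative)
A = np.array([[0.5, 0.0, 0.0],
              [0.4, 0.5, 0.0],
              [0.0, 0.4, 0.5]])
B = np.array([[1.0], [0.0], [0.0]])
e_near = control_energy(A, B, np.array([1.0, 0.0, 0.0]))   # target at the driven node
e_far = control_energy(A, B, np.array([0.0, 0.0, 1.0]))    # target at the chain's far end
```

States far from the driver node cost far more energy to reach, which is the kind of structural "short board" effect described above.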

Extreme events and cascading failure, a type of collective behavior in complex networked systems, often have catastrophic consequences. Utilizing transportation and evolutionary game dynamics as prototypical settings, we investigate the emergence of extreme events in simplex complex networks, mobile ad-hoc networks, and multi-layer interdependent networks. A striking resonance-like phenomenon and the emergence of global-scale cascading breakdown are discovered. We derive analytic theories to understand the mechanism of control at a quantitative level and articulate cost-effective control schemes to significantly suppress extreme events and the cascading process.
Contributors: Chen, Yuzhong (Author) / Lai, Ying-Cheng (Thesis advisor) / Spanias, Andreas (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Ying, Lei (Committee member) / Arizona State University (Publisher)
Created: 2016