Matching Items (1,136)

Description
Detecting areas of change between two synthetic aperture radar (SAR) images of the same scene, taken at different times, is generally performed using two approaches. Non-coherent change detection is performed using the sample variance ratio detector and displays good performance in detecting areas of significant change. Coherent change detection can be implemented using the classical coherence estimator, which does better at detecting subtle changes, like vehicle tracks. A two-stage detector was proposed by Cha et al., where the sample variance ratio forms the first stage and the second stage comprises Berger's alternative coherence estimator.

A modification to the first stage of the two-stage detector is proposed in this study, which significantly simplifies the analysis of this detector. Cha et al. used a heuristic approach to determine the thresholds for this two-stage detector. In this study, the probability density function for the modified two-stage detector is derived, and using this probability density function, an approach for determining the thresholds for this two-dimensional detection problem is proposed. The proposed method of threshold selection reveals an interesting behavior of the two-stage detector. With the help of theoretical receiver operating characteristic analysis, it is shown that the two-stage detector gives better detection performance than the other three detectors. However, Berger's estimator proves to be a simpler alternative, since it performs only slightly worse than the two-stage detector. All four detectors have also been implemented on a SAR data set, and it is shown that the two-stage detector and Berger's estimator generate images in which the areas showing change are easily visible.
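As a point of reference for the two single-stage detectors named above, the sketch below computes the sample variance ratio and the classical coherence estimate over a sliding window of co-registered complex SAR images; the window size and the small regularizing constant are illustrative assumptions, and the two-stage detector and Berger's estimator studied in the thesis are not reproduced here.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sample_variance_ratio(img1, img2, win=5):
    """Non-coherent change statistic: ratio of local average intensities
    (sample variances for zero-mean complex clutter) over a sliding window.
    Values far from 1 flag areas of significant change."""
    i1 = uniform_filter(np.abs(img1) ** 2, win)
    i2 = uniform_filter(np.abs(img2) ** 2, win)
    return i1 / np.maximum(i2, 1e-12)

def classical_coherence(img1, img2, win=5):
    """Classical sample coherence magnitude between two co-registered complex
    SAR images; low coherence flags subtle changes such as vehicle tracks."""
    cross = img1 * np.conj(img2)
    num = uniform_filter(cross.real, win) + 1j * uniform_filter(cross.imag, win)
    den = np.sqrt(uniform_filter(np.abs(img1) ** 2, win)
                  * uniform_filter(np.abs(img2) ** 2, win))
    return np.abs(num) / np.maximum(den, 1e-12)
```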
Contributors: Bondre, Akshay Sunil (Author) / Richmond, Christ D. (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Bliss, Daniel W. (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
This work is concerned with how best to reconstruct images from limited angle tomographic measurements. An introduction to tomography and to limited angle tomography is provided, along with a brief overview of the many fields to which this work may contribute.

The traditional tomographic image reconstruction approach involves Fourier domain representations. The classic Filtered Back Projection algorithm will be discussed and used for comparison throughout the work. Bayesian statistics and information entropy considerations will be described. The Maximum Entropy reconstruction method will be derived and its performance in limited angular measurement scenarios will be examined.

Many new approaches become available once the reconstruction problem is placed within an algebraic form of Ax=b in which the measurement geometry and instrument response are defined as the matrix A, the measured object as the column vector x, and the resulting measurements by b. It is straightforward to invert A. However, for the limited angle measurement scenarios of interest in this work, the inversion is highly underconstrained and has an infinite number of possible solutions x consistent with the measurements b in a high dimensional space.

The algebraic formulation leads to the need for high-performing regularization approaches, which add constraints based on prior information about what is being measured. These are constraints beyond the measurement matrix A, added with the goal of selecting the best image from this vast uncertainty space. It is established within this work that developing satisfactory regularization techniques is all but impossible except for the simplest pathological cases. There remains a need to capture the "character" of the objects being measured.
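To make the algebraic viewpoint concrete, the sketch below selects a single solution from the underconstrained system Ax = b by adding a simple Tikhonov penalty; the operator, data, and regularization weight are placeholders, and the thesis pursues learned, object-specific priors rather than this generic energy penalty.

```python
import numpy as np

def tikhonov_reconstruct(A, b, lam=1e-2):
    """Solve min_x ||Ax - b||^2 + lam * ||x||^2, one simple way of picking
    a single image out of the underconstrained solution set."""
    n = A.shape[1]
    # Normal equations: (A^T A + lam * I) x = A^T b
    lhs = A.T @ A + lam * np.eye(n)
    rhs = A.T @ b
    return np.linalg.solve(lhs, rhs)

# Toy usage with a random limited-angle-like operator (placeholder data)
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))   # 50 measurements, 100 unknowns
x_true = rng.standard_normal(100)
b = A @ x_true
x_hat = tikhonov_reconstruct(A, b, lam=1e-1)
```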

The novel result of this effort will be in developing a reconstruction approach that will match whatever reconstruction approach has proven best for the types of objects being measured given full angular coverage. However, when confronted with limited angle tomographic situations or early in a series of measurements, the approach will rely on a prior understanding of the "character" of the objects measured. This understanding will be learned by a parallel Deep Neural Network from examples.
Contributors: Dallmann, Nicholas A. (Author) / Tsakalis, Konstantinos (Thesis advisor) / Hardgrove, Craig (Committee member) / Rodriguez, Armando (Committee member) / Si, Jennie (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
Aortic aneurysms and dissections are life-threatening conditions addressed by replacing damaged sections of the aorta. Blood circulation must be halted to facilitate repairs. Ischemia places the body, especially the brain, at risk of damage. Deep hypothermic circulatory arrest (DHCA) is employed to protect patients and provide time for surgeons to complete repairs, on the basis that reducing body temperature suppresses the metabolic rate. Supplementary surgical techniques can be employed to reinforce the brain's protection and increase the duration for which circulation can be suspended. Even then, protection is not completely guaranteed. A medical condition that can arise early in recovery is postoperative delirium, which is correlated with poor long-term outcomes. This study develops a methodology to intraoperatively monitor neurophysiology through electroencephalography (EEG) and anticipate postoperative delirium. The earliest opportunity to detect complications through EEG is immediately following DHCA, during warming. The first observable electrophysiological activity after complete suppression is a phenomenon known as burst suppression, which is related to the brain's metabolic state and recovery of nominal neurological function. A metric termed the burst suppression duty cycle (BSDC) is developed to characterize the changing electrophysiological dynamics. Predictions of postoperative delirium incidence are made by identifying deviations in the way these dynamics evolve. Sixteen cases are examined in this study. Accurate predictions can be made, with on average 89.74% of cases correctly classified when burst suppression concludes and 78.10% when burst suppression begins. The best-case receiver operating characteristic curve has an area under its convex hull of 0.8988, whereas the worst-case area under the hull is 0.7889. These results demonstrate the feasibility of monitoring BSDC to anticipate postoperative delirium during burst suppression. They also motivate further analysis to identify footprints of causal mechanisms of neural injury within BSDC. Being able to raise warning signs of postoperative delirium early provides an opportunity to intervene and potentially avert neurological complications. Doing so would improve the success rate and quality of life after surgery.
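A minimal sketch of how a burst suppression duty cycle could be tracked over time from an EEG trace, assuming a simple amplitude threshold to label suppressed samples; the threshold, window length, and labeling rule are illustrative assumptions rather than the procedure used in the study.

```python
import numpy as np

def burst_suppression_duty_cycle(eeg, fs, amp_thresh=5.0, win_sec=60.0):
    """Fraction of each sliding window spent in suppression (near-flat EEG).

    eeg        : 1-D EEG samples (microvolts)
    fs         : sampling rate (Hz)
    amp_thresh : amplitude below which a sample counts as suppressed
    win_sec    : window length over which the duty cycle is computed
    """
    suppressed = (np.abs(eeg) < amp_thresh).astype(float)
    win = int(win_sec * fs)
    kernel = np.ones(win) / win
    # Running mean of the suppression indicator = duty cycle over time
    return np.convolve(suppressed, kernel, mode="same")
```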
Contributors: Ma, Owen (Author) / Bliss, Daniel W. (Thesis advisor) / Berisha, Visar (Committee member) / Kosut, Oliver (Committee member) / Brewer, Gene (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
Energy is one of the wheels on which the modern world runs. Therefore, standards and limits have been devised to maintain the stability and reliability of the power grid. This research presents a simple methodology for increasing the amount of Inverter-based Renewable Generation (IRG), also known as Inverter-based Resources (IBR), that considers the voltage and frequency limits specified by the Western Electricity Coordinating Council (WECC) Transmission Planning (TPL) criteria, as well as the tie line power flow limits between the area under study and its neighbors under contingency conditions. A WECC power flow and dynamic file is analyzed and modified in this research to demonstrate the performance of the methodology. GE's Positive Sequence Load Flow (PSLF) software is used to conduct this research, and Python is used to analyze the output data.

The thesis explains in detail how the system with 11% IRG operated before any adjustments (addition of IRG) were made and what procedures were modified to make the system run correctly. The adjustments made to the dynamic models are also explained in depth to give a clearer picture of how each adjustment affects system performance. A list of proposed IRG units, along with their locations, was provided by SRP, a power utility in Arizona, to be integrated into the power flow and dynamic files. In the process of finding the maximum IRG penetration threshold, three sensitivities were also considered, namely, momentary cessation due to low voltages, transmission versus distribution connected solar generation, and stalling of induction motors. Finally, the thesis discusses how the system reacts to the aforementioned modifications and how the IRG penetration threshold is adjusted with regard to the different sensitivities applied to the system.
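The Python post-processing step mentioned above might look something like the sketch below, which screens exported time-domain results for voltage and frequency excursions; the limit values, array layout, and pass/fail rule are assumptions for illustration and do not reproduce the full WECC TPL criteria or the tie-line checks.

```python
import numpy as np

# Illustrative planning limits (assumed values, not the exact WECC TPL criteria)
V_MIN, V_MAX = 0.90, 1.10   # bus voltage, per unit
F_MIN, F_MAX = 59.6, 60.4   # system frequency, Hz

def passes_limits(bus_voltages, frequency):
    """Return True if a simulated contingency stays within the assumed limits.

    bus_voltages : array of shape (num_samples, num_buses), per unit
    frequency    : array of shape (num_samples,), Hz
    """
    v_ok = np.all((bus_voltages >= V_MIN) & (bus_voltages <= V_MAX))
    f_ok = np.all((frequency >= F_MIN) & (frequency <= F_MAX))
    return bool(v_ok and f_ok)
```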
Contributors: Albhrani, Hashem A M H S (Author) / Pal, Anamitra (Thesis advisor) / Holbert, Keith E. (Committee member) / Ayyanar, Raja (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
Reliable operation of modern power systems is ensured by an intelligent cyber layer that monitors and controls the physical system. Data collection and transmission are achieved by the supervisory control and data acquisition (SCADA) system, and data processing is performed by the energy management system (EMS). In recent decades, the development of phasor measurement units (PMUs) has enabled wide-area real-time monitoring and control. However, both SCADA-based and PMU-based cyber layers are prone to cyber attacks that can impact system operation and lead to severe physical consequences.

This dissertation studies false data injection (FDI) attacks that are unobservable to bad data detectors (BDDs). Prior work has shown that an attacker-defender bi-level linear program (ADBLP) can be used to determine the worst-case consequences of FDI attacks aiming to maximize the physical power flow on a target line. However, those results were only demonstrated on small systems, under the assumption that they are operated with DC optimal power flow (OPF). This dissertation is divided into four parts to thoroughly understand the consequences of these attacks and to develop countermeasures.
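For background on why such attacks evade the BDD, the sketch below shows the standard construction of an unobservable FDI attack under a linearized (DC) measurement model; the matrix and vectors are placeholder values, and the dissertation embeds this idea in an attacker-defender bi-level program rather than using the bare construction.

```python
import numpy as np

def unobservable_attack(H, c):
    """Under a DC model z = H*theta + e, adding a = H*c to the measurements
    shifts the estimated state by c while leaving the residual unchanged,
    so the attack passes a residual-based bad data detector."""
    return H @ c

# Placeholder example: 4 measurements, 2 states, attacker shifts state 2 by 0.1
H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, -1.0],
              [0.5, 0.5]])
c = np.array([0.0, 0.1])               # desired state perturbation
z = np.array([1.0, 0.9, 0.1, 0.95])    # nominal measurements (made up)
z_attacked = z + unobservable_attack(H, c)
```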

The first part focuses on evaluating the vulnerability of large-scale power systems to FDI attacks. The solution technique introduced in prior work to solve the ADBLP is intractable on large-scale systems due to the large number of binary variables. Four new computationally efficient algorithms are presented to solve this problem.

The second part studies the vulnerability of N-1 reliable power systems operated by state-of-the-art EMSs commonly used in practice, specifically real-time contingency analysis (RTCA) and security-constrained economic dispatch (SCED). An ADBLP is formulated with detailed assumptions on the attacker's knowledge and on system operations.

The third part considers FDI attacks on PMU measurements that have strong temporal correlations due to high data rate. It is shown that predictive filters can detect suddenly injected attacks, but not gradually ramping attacks.

The last part proposes a machine learning-based attack detection framework that consists of a support vector regression (SVR) load predictor, which predicts loads by exploiting both spatial and temporal correlations, and a subsequent support vector machine (SVM) attack detector that determines whether an attack is present.
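A minimal sketch of this two-stage pipeline using scikit-learn; the lagged-load features, attack magnitude, and labels are placeholder constructions, and the actual framework exploits spatial as well as temporal load correlations in a more structured way.

```python
import numpy as np
from sklearn.svm import SVR, SVC

rng = np.random.default_rng(0)

# Placeholder history: 4 lagged load samples as features, next load as target
X_hist = rng.uniform(0.8, 1.2, size=(500, 4))
y_hist = X_hist.mean(axis=1) + 0.01 * rng.standard_normal(500)
load_predictor = SVR(kernel="rbf").fit(X_hist, y_hist)

# Build a labeled set: half of the received loads are falsified by a bias
y_received = y_hist.copy()
labels = np.zeros(500, dtype=int)
labels[250:] = 1
y_received[250:] += 0.15          # placeholder attack magnitude

# The SVM operates on the prediction residual: large residuals suggest an attack
residual = (y_received - load_predictor.predict(X_hist)).reshape(-1, 1)
attack_detector = SVC(kernel="rbf").fit(residual, labels)
flags = attack_detector.predict(residual[:5])   # 0 = normal, 1 = flagged
```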
Contributors: Chu, Zhigang (Author) / Kosut, Oliver (Thesis advisor) / Sankar, Lalitha (Committee member) / Scaglione, Anna (Committee member) / Pal, Anamitra (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
Precursors of carbon fibers include rayon, pitch, and polyacrylonitrile fibers that can be heat-treated to obtain high-strength or high-modulus carbon fibers. Among them, polyacrylonitrile has been used most frequently due to its low viscosity for easy processing and excellent performance for high-end applications. To further explore polyacrylonitrile-based fibers as better precursors, in this study, carbon nanofillers were introduced into the polymer matrix to examine their reinforcement effects and their influence on carbon fiber performance. Two-dimensional graphene nanoplatelets were mainly used for the polymer reinforcement, and one-dimensional carbon nanotubes were also incorporated into polyacrylonitrile as a comparison. Dry-jet wet spinning was used to fabricate the composite fibers. Hot-stage drawing and heat treatment were used to evolve the physical microstructures and molecular morphologies of the precursor and carbon fibers. Compared to the traditionally used random dispersions, selective placement of nanofillers was effective in improving composite fiber properties and enhancing the mechanical and functional behaviors of the carbon fibers. The precise positioning of reinforcement fillers within the polymer layers was enabled by the in-house developed spinneret used for fiber spinning. The preferential alignment of graphitic planes contributed to mechanical and functional behaviors superior to those of dispersed nanoparticles in polyacrylonitrile composites. The high in-plane modulus of graphene and its induction of polyacrylonitrile molecular carbonization/graphitization were the motivation for selectively placing graphene nanoplatelets between polyacrylonitrile layers. Mechanical, thermal, and electrical properties were characterized, together with scanning electron microscopy imaging. Applications such as volatile organic compound sensing and pressure sensing were demonstrated.
Contributors: Franklin, Rahul Joseph (Author) / Song, Kenan (Thesis advisor) / Jiao, Yang (Thesis advisor) / Liu, Yongming (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
With the rapid advancement in technologies related to renewable energies such as solar, wind, fuel cells, and many more, there is a definite need for new power converting methods involving data-driven methodology. Having adequate information is crucial for any innovative idea to fructify; accordingly, moving away from traditional methodologies is the most practical way of giving birth to new ideas. While working on a DC-DC buck converter, the input voltages considered for running the simulations are varied for research purposes. The critical aspect of the new data-driven methodology is to propose a machine learning algorithm. In this design, by solving for the inductor value and the power switching losses, the parameters can be obtained while keeping the input-to-output ratio as close to the required value as necessary. Thus, implementing machine learning algorithms alongside the traditional design of a non-isolated buck converter determines the optimal outcome for the inductor value and the power loss, which is achieved by combining a DC-DC converter with a data-driven methodology.

The present thesis investigates the different outcomes from machine learning algorithms in comparison with the dynamic equations. Specifically, the thesis focuses on the DC-DC buck converter. In order to determine the most effective way of keeping the system in a steady state, buck converter circuits with different parameters have been simulated.

At present, artificial intelligence plays a vital role in power system control and theory. Consequently, in this thesis, the approximation error estimation has been analyzed in a DC-DC buck converter model, with specific consideration of machine learning tools that can help detect and calculate the difference in terms of error. These tools, called models, are used to analyze the collected data. In the present thesis, models such as K-nearest neighbors (K-NN), specifically the weighted K-nearest neighbor (WKNN), are utilized for the machine learning algorithm. The machine learning concept introduced in the present thesis lays down the foundation for future research in this area, enabling further research on efficient ways to improve power electronic devices with reduced power switching losses and optimal inductor values.
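To connect the two ingredients named above, the sketch below pairs the standard continuous-conduction-mode inductor sizing relation with a weighted K-nearest-neighbor regressor fit on a synthetic design sweep; the parameter ranges, ripple current, and training data are illustrative assumptions, not values from the thesis.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def buck_inductor(v_in, v_out, f_sw, delta_i):
    """Standard continuous-conduction-mode sizing:
    L = V_out * (V_in - V_out) / (V_in * f_sw * delta_i_L)."""
    return v_out * (v_in - v_out) / (v_in * f_sw * delta_i)

# Placeholder design sweep: features are (V_in, V_out, f_sw), target is L
rng = np.random.default_rng(0)
v_in = rng.uniform(10, 40, 200)
v_out = rng.uniform(3, 9, 200)
f_sw = rng.uniform(100e3, 500e3, 200)
X = np.column_stack([v_in, v_out, f_sw])
y = buck_inductor(v_in, v_out, f_sw, delta_i=0.3)   # assumed 0.3 A ripple

# Weighted KNN: closer design points contribute more to the prediction
wknn = KNeighborsRegressor(n_neighbors=5, weights="distance").fit(X, y)
L_est = wknn.predict([[24.0, 5.0, 200e3]])   # estimated inductance (H)
```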
Contributors: Alsalem, Hamad (Author) / Weng, Yang (Thesis advisor) / Lei, Qin (Committee member) / Kozicki, Michael (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
The problem of multiple object tracking seeks to jointly estimate the time-varying cardinality and the trajectory of each object. Numerous challenges are encountered in tracking multiple objects, including a time-varying number of measurements, varying constraints, and changing environmental conditions. In this thesis, the proposed statistical methods integrate the use of physics-based models with Bayesian nonparametric methods to address the main challenges in a tracking problem. In particular, Bayesian nonparametric methods are exploited to efficiently and robustly infer object identity and learn time-dependent cardinality; together with Bayesian inference methods, they are also used to associate measurements with objects and estimate the trajectories of objects. These methods differ fundamentally from current methods, which are mainly based on random finite set theory.

The first contribution proposes dependent nonparametric models such as the dependent Dirichlet process and the dependent Pitman-Yor process to capture the inherent time-dependency in the problem at hand. These processes are used as priors for object state distributions to learn dependent information between previous and current time steps. Markov chain Monte Carlo sampling methods exploit the learned information to sample from posterior distributions and update the estimated object parameters.
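For readers unfamiliar with these priors, the sketch below draws object labels from a plain Chinese restaurant process, the exchangeable building block underlying Dirichlet process mixtures; it is a deliberately simplified stand-in and does not implement the dependent Dirichlet process or dependent Pitman-Yor constructions developed in the thesis.

```python
import numpy as np

def crp_assignments(n, alpha=1.0, rng=None):
    """Draw cluster (object) labels for n measurements from a Chinese
    restaurant process with concentration alpha: each new measurement joins
    an existing object with probability proportional to its size, or starts
    a new object with probability proportional to alpha."""
    rng = rng or np.random.default_rng()
    labels, counts = [0], [1]
    for _ in range(1, n):
        probs = np.array(counts + [alpha], dtype=float)
        probs /= probs.sum()
        k = rng.choice(len(probs), p=probs)
        if k == len(counts):
            counts.append(1)          # a new object enters the scene
        else:
            counts[k] += 1
        labels.append(k)
    return np.array(labels)
```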

The second contribution proposes a novel, robust, and fast nonparametric approach based on a diffusion process over infinite random trees to infer information on object cardinality and trajectory. This method follows the hierarchy induced by objects entering and leaving a scene and the time-dependency between unknown object parameters. Markov chain Monte Carlo sampling methods integrate the prior distributions over the infinite random trees with time-dependent diffusion processes to update object states.

The third contribution develops the use of hierarchical models to form a prior for statistically dependent measurements in a single object tracking setup. Dependency among the sensor measurements provides extra information, which is incorporated to achieve optimal tracking performance. The hierarchical Dirichlet process, used as a prior, provides the required flexibility for inference. A Bayesian tracker is integrated with the hierarchical Dirichlet process prior to accurately estimate the object trajectory.

The fourth contribution proposes an approach to model both the multiple dependent objects and multiple dependent measurements. This approach integrates the dependent Dirichlet process modeling over the dependent object with the hierarchical Dirichlet process modeling of the measurements to fully capture the dependency among both object and measurements. Bayesian nonparametric models can successfully associate each measurement to the corresponding object and exploit dependency among them to more accurately infer the trajectory of objects. Markov chain Monte Carlo methods amalgamate the dependent Dirichlet process with the hierarchical Dirichlet process to infer the object identity and object cardinality.

Simulations are exploited to demonstrate the improvement in multiple object tracking performance when compared to approaches that are developed based on random finite set theory.
Contributors: Moraffah, Bahman (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Bliss, Daniel W. (Committee member) / Richmond, Christ D. (Committee member) / Dasarathy, Gautam (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Modern communication systems are progressively moving towards all-digital transmitters (ADTs) due to their high efficiency and potentially large frequency range. While significant work has been done on individual blocks within the ADT, there are few to no full system designs at this point in time. The goal of this work is to provide a set of novel block architectures that allow for greater cohesion between the various ADT blocks. Furthermore, the design of these architectures is expected to focus on the practicalities of system design, such as regulatory compliance, which to date has largely been neglected by the academic community. Among these techniques are a novel upconverted phase modulation, polyphase harmonic cancellation, and process, voltage, and temperature (PVT) invariant delta-sigma phase interpolation. It will be shown in this work that the implementation of the aforementioned architectures allows ADTs to be designed with state-of-the-art size, power, and accuracy levels, all while maintaining PVT insensitivity. Due to the significant performance enhancement over previously published works, this work presents the first feasible ADT architecture suitable for widespread commercial deployment.
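As a generic illustration of one of the named techniques, the sketch below implements a first-order error-feedback (delta-sigma style) mapping of a fine phase command onto coarse phase-interpolator codes; the resolution and behavioral model are assumptions, not the PVT-invariant circuit developed in the dissertation.

```python
import numpy as np

def delta_sigma_phase(fine_phase, coarse_levels=16):
    """First-order delta-sigma modulation of a fine phase command
    (0..1 of a period) onto coarse interpolator codes; the quantization
    error is carried forward so it averages out (is noise-shaped) over time."""
    acc = 0.0
    codes = []
    for p in fine_phase:
        acc += p * coarse_levels
        code = int(np.floor(acc))      # coarse code actually applied
        acc -= code                    # residual error fed back to next step
        codes.append(code % coarse_levels)
    return np.array(codes)

# A constant fine phase of 0.3 periods dithers between neighboring coarse codes
codes = delta_sigma_phase(np.full(32, 0.3))
```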
Contributors: Grout, Kevin Samuel (Author) / Kitchen, Jennifer N. (Thesis advisor) / Khalil, Waleed (Committee member) / Bakkaloglu, Bertan (Committee member) / Aberle, James T., 1961- (Committee member) / Garrity, Douglas (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
This dissertation covers three primary topics and relates them in context. High-frequency transformer design, microgrid modeling and control, and converter design as it pertains to the other topics are each investigated, establishing a summary of the state of the art at the intersection of the three as a baseline. The culminating work produced by the confluence of these topics is a novel modular solid-state transformer (SST) design, featuring an array of dual active bridge (DAB) converters, each of which contains an optimized high-frequency transformer, and an array of grid-forming inverters (GFIs) suitable for centralized control in a microgrid environment. While no hardware was produced for this design, detailed modeling and simulation have been completed, and the results are contextualized by rigorous analysis and comparison with results from published literature. The main contributions are best presented by topic area. For transformers, contributions include collation and presentation of the best-known methods of minimum-loss high-frequency transformer design and analysis, descriptions of the implementation of these methods in a unified design script along with access to an example of such a script, and the derivation and presentation of novel tools for the analysis of multi-winding and multi-frequency transformers. For microgrid modeling and control, contributions include the modeling and simulation validation of the GFI and SST designs via state-space modeling in a multi-scale simulation framework, as well as demonstration of stable and effective participation of these models in a centralized control scheme under phase imbalance. For converters, the SST design, analysis, and simulation are the primary contributions, though several novel derivations and analysis tools are also presented for the asymmetric half bridge and the DAB.
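For context on the dual active bridge stage, the sketch below evaluates the ideal single-phase-shift power transfer relation; the voltages, turns ratio, switching frequency, and leakage inductance are placeholder values rather than the ones designed in the dissertation.

```python
import numpy as np

def dab_power(v1, v2, n, f_sw, l_lk, phi):
    """Ideal single-phase-shift DAB power transfer:
    P = n * V1 * V2 * phi * (1 - |phi|/pi) / (2 * pi * f_sw * L_lk),
    with phi the bridge-to-bridge phase shift in radians."""
    return n * v1 * v2 * phi * (1 - abs(phi) / np.pi) / (2 * np.pi * f_sw * l_lk)

# Placeholder operating point: 400 V / 400 V, 1:1 turns, 50 kHz, 40 uH leakage
p = dab_power(v1=400.0, v2=400.0, n=1.0, f_sw=50e3, l_lk=40e-6, phi=np.pi / 6)
```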
Contributors: Mongrain, Robert Scott (Author) / Ayyanar, Raja (Thesis advisor) / Pan, George (Committee member) / Qin, Jiangchao (Committee member) / Lei, Qin (Committee member) / Arizona State University (Publisher)
Created: 2019