Matching Items (22)

Description
An embedded HVDC system is a dc link with at least two of its ends physically connected within a single synchronous ac network. The thesis reviews previous work on embedded HVDC, proposes a dynamic embedded HVDC model implemented in the PSCAD program, and compares transient stability performance among AC, DC, and embedded HVDC configurations. The test results indicate that installing the embedded HVDC largely improves the transient stability performance of the AC network. The thesis therefore designs a novel frequency control topology for the embedded HVDC. According to the dynamic performance test results, equipping the embedded HVDC system with this frequency control improves the system's transient stability further.
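
The frequency control topology itself is not detailed in the abstract; as a rough illustration of the general idea, the sketch below implements a generic frequency-droop supplementary controller that modulates the dc power order of the link. All names, gains, and limits are invented assumptions, not the thesis's design.

```python
# Hypothetical sketch of a supplementary frequency controller for an
# embedded HVDC link: the dc power order is modulated in proportion to the
# measured ac frequency deviation (a droop law). The thesis's exact control
# topology is not reproduced here; gains and limits are illustrative.

def hvdc_frequency_control(f_measured, p_order_nominal,
                           f_nominal=60.0, droop_gain=50.0,
                           p_min=0.0, p_max=2.0):
    """Return a modulated dc power order (per unit) given ac frequency (Hz)."""
    delta_f = f_measured - f_nominal              # frequency deviation (Hz)
    p_order = p_order_nominal - droop_gain * delta_f / f_nominal
    return min(max(p_order, p_min), p_max)        # respect converter limits

# Example: a 0.1 Hz under-frequency event raises the dc power order,
# relieving the stressed ac path.
print(hvdc_frequency_control(59.9, p_order_nominal=1.0))
```
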
Contributors: Yu, Jicheng (Author) / Karady, George G. (Thesis advisor) / Hui, Yu (Committee member) / Holbert, Keith E. (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Cyber-physical systems (CPS) are emerging as the underpinning technology for major industries in the 21st century. This dissertation focuses on two fundamental issues in cyber-physical systems: network interdependence and information dynamics. It consists of the following two main thrusts.

The first thrust is targeted at understanding the impact of network interdependence. It is shown that a cyber-physical system built upon multiple interdependent networks is more vulnerable to attacks, since node failures in one network may result in failures in the other network, causing a cascade of failures that could potentially lead to the collapse of the entire infrastructure. There is thus a need to develop a new network science for modeling and quantifying cascading failures in multiple interdependent networks, and to develop network management algorithms that improve network robustness and ensure overall network reliability against cascading failures. To enhance system robustness, a "regular" allocation strategy is proposed that yields better resistance against cascading failures compared with all possible existing strategies. Furthermore, in view of the load-redistribution feature of many physical infrastructure networks, e.g., power grids, a CPS model is developed in which the threshold model and the giant-connected-component model capture node failures in the physical infrastructure network and the cyber network, respectively.

The second thrust is centered on the information dynamics in CPS. One speculation is that interconnections across multiple networks can facilitate information diffusion, since information propagation in one network can trigger further spread in the other. With this insight, a theoretical framework is developed to analyze information epidemics across multiple interconnecting networks, and it is shown that the conjoining of networks can dramatically speed up message diffusion. Along a different avenue, many cyber-physical systems rely on wireless networks, which offer platforms for information exchange. To optimize the QoS of wireless networks, there is a need for high-throughput, low-complexity scheduling algorithms to control link dynamics. To that end, distributed link-scheduling algorithms are explored for multi-hop MIMO networks, and two CSMA algorithms are devised under the continuous-time and discrete-time models, respectively.
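
As a rough illustration of the cascading-failure mechanism described above, the following sketch simulates a giant-connected-component cascade between two one-to-one interdependent random networks: a node survives only if it lies in the giant component of its own network and its counterpart is still alive. The topologies, sizes, and attack fraction are illustrative assumptions, not the dissertation's models.

```python
# Minimal cascade simulation over two interdependent networks.
import random
import networkx as nx

def giant_component_nodes(g):
    if g.number_of_nodes() == 0:
        return set()
    return max(nx.connected_components(g), key=len)

def cascade(net_a, net_b, attack_fraction=0.4, seed=0):
    random.seed(seed)
    n = net_a.number_of_nodes()
    attacked = set(random.sample(sorted(net_a.nodes()),
                                 int(attack_fraction * n)))
    alive = set(net_a.nodes()) - attacked
    while True:
        # Survivors must sit in the giant component of each network,
        # and depend on a surviving counterpart in the other network.
        alive_a = giant_component_nodes(net_a.subgraph(alive))
        alive_b = giant_component_nodes(net_b.subgraph(alive_a))
        if alive_b == alive:
            return alive                      # cascade has converged
        alive = alive_b

a = nx.erdos_renyi_graph(1000, 0.004, seed=1)
b = nx.erdos_renyi_graph(1000, 0.004, seed=2)
print(len(cascade(a, b)), "nodes survive the cascade")
```
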
Contributors: Qian, Dajun (Author) / Zhang, Junshan (Thesis advisor) / Ying, Lei (Committee member) / Zhang, Yanchao (Committee member) / Cochran, Douglas (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Signal processing techniques have been used extensively in many engineering problems, and in recent years their application has extended to non-traditional research fields such as biological systems. Many of these applications require extraction of a signal or parameter of interest from degraded measurements. One such application is mass spectrometry immunoassay (MSIA), which has become one of the primary biomarker discovery techniques. MSIA analyzes protein molecules as potential biomarkers using time-of-flight mass spectrometry (TOF-MS). Peak detection in TOF-MS is important for biomarker analysis and many other MS-related applications. Though many peak detection algorithms exist, most are based on heuristic models. One way of detecting signal peaks is to deploy stochastic models of the signal and noise observations. The likelihood ratio test (LRT) detector, based on the Neyman-Pearson (NP) lemma, is a uniformly most powerful test for decision making in the form of a hypothesis test. The primary goal of this dissertation is to develop signal and noise models for electrospray ionization (ESI) TOF-MS data. A new method is proposed for developing the signal model by employing first-principles calculations based on device physics and molecular properties. The noise model is developed by analyzing MS data from careful experiments on the ESI mass spectrometer. A non-flat baseline in MS data is common, and the reasons behind its formation have not been fully understood. A new signal model explaining the presence of the baseline is proposed, though detailed experiments are needed to further substantiate the model assumptions. Signal detection schemes based on these signal and noise models are proposed, and a maximum likelihood (ML) method is introduced for estimating the signal peak amplitudes. The performance of the detection methods and the ML estimator is evaluated with Monte Carlo simulation, which shows promising results. An application of these methods is proposed for fractional abundance calculation in biomarker analysis, which is mathematically robust and fundamentally different from current algorithms. Biomarker panels for type 2 diabetes and cardiovascular disease are analyzed using existing MS analysis algorithms. Finally, a support vector machine based multi-classification algorithm is developed for evaluating the biomarkers' effectiveness in discriminating type 2 diabetes and cardiovascular disease, and is shown to perform better than a linear discriminant analysis based classifier.
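
To make the detection setup concrete, here is a hedged sketch of an NP/LRT peak detector and ML amplitude estimator under a simple assumed model: a known Gaussian peak shape in additive white Gaussian noise, for which the LRT reduces to a matched-filter statistic. The dissertation's device-physics signal model and baseline model are not reproduced; all shapes and numbers below are illustrative.

```python
# NP/LRT peak detection and ML amplitude estimation, assumed linear-Gaussian model.
import numpy as np
from scipy import stats

def np_peak_detector(x, s, sigma, p_fa=1e-3):
    """Return (detected, amplitude_ml) for one windowed spectrum segment x."""
    # Matched-filter (LRT) statistic; under H0 it is standard normal.
    t = s @ x / (sigma * np.linalg.norm(s))
    threshold = stats.norm.isf(p_fa)          # NP threshold for the given P_FA
    # ML amplitude estimate under the linear-Gaussian model: a_hat = s'x / s's.
    a_hat = s @ x / (s @ s)
    return t > threshold, a_hat

rng = np.random.default_rng(0)
n = np.arange(64)
s = np.exp(-0.5 * ((n - 32) / 3.0) ** 2)      # assumed Gaussian peak shape
x = 0.8 * s + rng.normal(0.0, 0.2, size=64)   # true amplitude 0.8, sigma 0.2
print(np_peak_detector(x, s, sigma=0.2))
```
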
Contributors: Buddi, Sai (Author) / Taylor, Thomas (Thesis advisor) / Cochran, Douglas (Thesis advisor) / Nelson, Randall (Committee member) / Duman, Tolga (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
This thesis describes an approach to system identification based on compressive sensing and demonstrates its efficacy on a challenging classical benchmark: a single-input, multiple-output (SIMO) mechanical system consisting of an inverted pendulum on a cart. Due to its inherent nonlinearity and unstable behavior, very few techniques currently exist that are capable of identifying this system. The challenge in identification also lies in the coupled behavior of the system and in the difficulty of obtaining the full-range dynamics. The differential equations describing the system dynamics are determined from measurements of the system's input-output behavior. These equations are assumed to consist of the superposition, with unknown weights, of a small number of terms drawn from a large library of nonlinear terms. Under this assumption, compressed sensing allows the constituent library elements and their corresponding weights to be identified by decomposing a time-series signal of the system's outputs into a sparse superposition of corresponding time-series signals produced by the library components. The most popular techniques for nonlinear system identification entail the use of artificial neural networks (ANNs), which require a large number of measurements of the input and output data at high sampling frequencies. The method developed in this project requires very few samples, and the accuracy of reconstruction is extremely high. Furthermore, this method yields the ordinary differential equation (ODE) of the system explicitly, in contrast to some ANN approaches that produce only a trained network, which might lose fidelity under a change of initial conditions or when facing an input that was not used during training. This technique is expected to be of value in the identification of complex dynamic systems encountered in fields as diverse as biology, computation, statistics, mechanics, and electrical engineering.
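
The following minimal sketch illustrates the library-based sparse identification idea on a simpler stand-in system (a damped pendulum rather than the inverted pendulum on a cart): the state derivative is regressed against a library of candidate nonlinear terms, and an l1-penalized solver recovers the few active terms and their weights. The library contents and the choice of solver are assumptions for illustration.

```python
# Sparse identification of an ODE from a library of candidate terms.
import numpy as np
from sklearn.linear_model import Lasso

# Simulate theta'' = -sin(theta) - 0.1 * omega and record omega_dot samples.
dt, steps = 0.01, 2000
theta, omega = 1.0, 0.0
states, derivs = [], []
for _ in range(steps):
    omega_dot = -np.sin(theta) - 0.1 * omega
    states.append((theta, omega))
    derivs.append(omega_dot)
    theta += dt * omega
    omega += dt * omega_dot

th, om = np.array(states).T
# Candidate library: constant, linear, and a few nonlinear terms.
library = np.column_stack([np.ones_like(th), th, om, th * om,
                           np.sin(th), np.cos(th), th ** 2, om ** 2])
names = ["1", "th", "om", "th*om", "sin(th)", "cos(th)", "th^2", "om^2"]

model = Lasso(alpha=1e-3, fit_intercept=False).fit(library, np.array(derivs))
for name, w in zip(names, model.coef_):
    if abs(w) > 1e-2:
        # Active terms should be dominated by sin(th) and om; collinearity
        # between th and sin(th) for small angles may split some weight.
        print(f"{name}: {w:+.3f}")
```
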
Contributors: Naik, Manjish Arvind (Author) / Cochran, Douglas (Thesis advisor) / Kovvali, Narayan (Committee member) / Kawski, Matthias (Committee member) / Platte, Rodrigo (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Many products undergo several stages of testing, ranging from tests on individual components to end-item tests. Additionally, these products may be further "tested" via customer or field use. The later failure of a delivered product may in some cases be due to circumstances that have no correlation with the product's inherent quality. At times, however, there may be cues in the upstream test data that, if detected, could serve to predict the likelihood of downstream failure or performance degradation induced by product use or environmental stresses. This study explores the use of downstream factory test data or product field reliability data to infer data mining or pattern recognition criteria on manufacturing process or upstream test data by means of support vector machines (SVMs), in order to provide reliability prediction models. In concert with a risk/benefit analysis, these models can be utilized to drive improvement of the product or, at minimum, to screen units and thereby improve the reliability of the product delivered to the customer. Such models can aid reliability risk assessment based on detectable correlations between product test performance and the sources of supply, test stands, or other factors related to product manufacture. To enhance the usefulness of the SVM, or hyperplane classifier, within this context, L-moments and the Western Electric Company (WECO) rules are used to augment or replace the native process or test data used as inputs to the classifier. As part of this research, a generalizable binary classification methodology was developed that can be used to design and implement predictors of end-item field failure or downstream product performance based on upstream test data composed of single-parameter, time-series, or multivariate real-valued data. Additionally, the methodology provides input parameter weighting factors that have proved useful in failure analysis and root-cause investigations as indicators of which of several upstream product parameters have the greater influence on downstream failure outcomes.
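
As a concrete illustration of this pipeline, the sketch below trains a linear SVM on synthetic upstream test data and reads the hyperplane coefficients as per-parameter influence weights. The data, feature set, and regularization constant are invented, and the L-moment/WECO feature augmentation step is omitted for brevity.

```python
# Linear SVM mapping upstream test parameters to a downstream pass/fail label.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
n = 500
upstream = rng.normal(size=(n, 4))            # four upstream test parameters
# Downstream failure driven mostly by parameters 0 and 2 (assumed ground truth).
fail = (0.9 * upstream[:, 0] + 0.7 * upstream[:, 2]
        + 0.3 * rng.normal(size=n)) > 1.0

clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0))
clf.fit(upstream, fail)
# Hyperplane coefficients as influence indicators for root-cause analysis.
weights = clf.named_steps["linearsvc"].coef_.ravel()
print("parameter influence weights:", np.round(weights, 3))
```
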
Contributors: Mosley, James (Author) / Morrell, Darryl (Committee member) / Cochran, Douglas (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Roberts, Chell (Committee member) / Spanias, Andreas (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
This dissertation describes a novel, low-cost strategy of using particle streak (track) images for accurate micro-channel velocity field mapping. It is shown that 2-dimensional, 2-component fields can be efficiently obtained using the spatial variation of particle track lengths in micro-channels. The velocity field is a critical performance feature of many microfluidic devices. Since un-modeled micro-scale physics often frustrates principled design methodologies, particle-based velocity field estimation is an essential design and validation tool. Current technologies that achieve this goal use particle constellation correlation strategies and rely heavily on costly, high-speed imaging hardware. The proposed image/video processing based method achieves comparable accuracy for a fraction of the cost. In the context of micro-channel velocimetry, the usability of particle streaks has so far been poorly studied; their use has remained restricted mostly to bulk flow measurements and occasional ad hoc uses in microfluidics. This work takes a second look at particle streak lengths, approximately 15 years after their first use for micro-channel velocimetry, and shows that they can be used efficiently. Particle tracks in steady, smooth microfluidic flows are mathematically modeled, and a framework for using experimentally observed particle track lengths for local velocity field estimation is introduced, followed by algorithm implementation and quantitative verification. Experimental considerations and image processing techniques that can facilitate the proposed methods are also discussed in this dissertation. The unavailability of benchmarked particle track image data motivated the implementation of a simulation framework capable of generating exposure-time-controlled particle track image sequences for given velocity vector fields. This dissertation also describes this work and shows that arbitrary velocity fields designed in computational fluid dynamics software tools can be used to obtain such images. Apart from aiding gold-standard data generation, such images are useful for quick microfluidic flow field visualization and help improve device designs.
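
The core relation behind streak-based velocimetry is simple: in a steady, smooth flow, the local speed is approximately the observed streak length divided by the camera exposure time. A minimal sketch under that assumption follows, with streak endpoints taken as given from upstream image processing (not shown); the example coordinates and exposure time are invented.

```python
# Local speed from streak length and exposure time.
import numpy as np

def streak_speeds(endpoints, exposure_time):
    """endpoints: array of shape (n, 2, 2) holding (start, end) pixel
    coordinates per streak; returns speeds in pixels per second."""
    starts, ends = endpoints[:, 0, :], endpoints[:, 1, :]
    lengths = np.linalg.norm(ends - starts, axis=1)   # streak length (px)
    return lengths / exposure_time

# Two example streaks imaged with a 5 ms exposure (assumed values).
# A magnification factor would convert px/s to physical units.
tracks = np.array([[[10.0, 12.0], [18.0, 12.5]],
                   [[40.0, 30.0], [43.0, 29.0]]])
print(streak_speeds(tracks, exposure_time=5e-3), "px/s")
```
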
Contributors: Mahanti, Prasun (Author) / Cochran, Douglas (Thesis advisor) / Taylor, Thomas (Thesis advisor) / Hayes, Mark (Committee member) / Zhang, Junshan (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
This dissertation considers two kinds of two-hop multiple-input multiple-output (MIMO) relay networks with beamforming (BF). First, "one-way" amplify-and-forward (AF) and decode-and-forward (DF) MIMO BF relay networks are considered, in which the relay amplifies or decodes the received signal from the source and forwards it to the destination, and all nodes beamform with multiple antennas to obtain performance gains with reduced power consumption. A direct link from source to destination is included in the performance analysis, and novel systematic upper and lower bounds on the average bit or symbol error rates (BERs or SERs) are proposed. Second, "two-way" AF MIMO BF relay networks are investigated, in which two sources exchange their data through a relay to improve spectral efficiency relative to one-way relay networks. A novel unified performance analysis is carried out for five different relaying schemes using two, three, and four time slots, in terms of the sum-BER (the sum of the two BERs at the sources), for two-way relay networks with and without direct links. For both kinds of relay networks, when any node beamforms simultaneously to two nodes (i.e., from source to relay and destination in one-way relay networks, and from relay to both sources in two-way relay networks), the selection of the BF coefficients at the beamforming node becomes a challenging problem, since it must balance the needs of both receiving nodes. Although this "BF optimization" is performed for BER, SER, and sum-BER in this dissertation, the optimal BF coefficients are not only difficult to implement; because they cannot be expressed in closed form, they also do not lend themselves to performance analysis. Therefore, the performance of the optimal schemes is characterized through bounds, and suboptimal schemes such as strong-path BF, which beamforms to the stronger of the two links based on their received signal-to-noise ratios (SNRs), are analyzed for BER or SER for the first time. Since different channel state information (CSI) assumptions at the source, relay, and destination yield different error performance, various CSI assumptions are also considered.
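
As a small illustration of the strong-path rule, the sketch below picks maximum-ratio-transmission (MRT) weights toward whichever of two MISO links currently has the larger received SNR. The Rayleigh channel model and antenna count are illustrative assumptions, and the full bound derivations are of course not reproduced here.

```python
# Strong-path beamforming: MRT toward the stronger of two MISO links.
import numpy as np

def strong_path_bf(h1, h2, noise1=1.0, noise2=1.0):
    """Pick unit-norm MRT weights toward the stronger of links h1, h2."""
    snr1 = np.linalg.norm(h1) ** 2 / noise1
    snr2 = np.linalg.norm(h2) ** 2 / noise2
    h = h1 if snr1 >= snr2 else h2
    return np.conj(h) / np.linalg.norm(h)

rng = np.random.default_rng(0)
nt = 4                                        # transmit antennas (assumed)
h_sr = (rng.normal(size=nt) + 1j * rng.normal(size=nt)) / np.sqrt(2)
h_sd = (rng.normal(size=nt) + 1j * rng.normal(size=nt)) / np.sqrt(2)
w = strong_path_bf(h_sr, h_sd)
# The chosen link receives the full array gain ||h||^2; the other link
# gets whatever it gets, which is the trade-off the bounds quantify.
print("array gain toward stronger link:",
      max(abs(h_sr @ w) ** 2, abs(h_sd @ w) ** 2))
```
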
Contributors: Kim, Hyunjun (Author) / Tepedelenlioğlu, Cihan (Thesis advisor) / Duman, Tolga M. (Committee member) / Hui, Yu (Committee member) / Zhang, Junshan (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
This thesis examines the application of statistical signal processing approaches to data arising from surveys intended to measure psychological and sociological phenomena underpinning human social dynamics. The use of signal processing methods to analyze signals arising from measurements of social, biological, and other non-traditional phenomena has been an important and growing area of signal processing research over the past decade. Here, we explore the application of statistical modeling and signal processing concepts to data obtained from the Global Group Relations Project, specifically to understand and quantify the effects and interactions of social psychological factors related to intergroup conflicts. We use Bayesian networks, modeled as directed acyclic graphs, to specify prospective models of conditional dependence between social psychological factors and conflict variables, with the significant interactions modeled as conditional probabilities. Since the data are sparse and multi-dimensional, we regress Gaussian mixture models (GMMs) against the data to estimate the conditional probabilities of interest. The parameters of the GMMs are estimated using the expectation-maximization (EM) algorithm. However, the EM algorithm may over-fit due to the high dimensionality and limited observations in this data set; therefore, the Akaike information criterion (AIC) and the Bayesian information criterion (BIC) are used for GMM order estimation. To aid intuitive understanding of the interactions between social variables and intergroup conflicts, we introduce a color-based visualization scheme in which the intensities of colors are proportional to the observed conditional probabilities.
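
The order-selection step described here is standard and easy to illustrate: fit GMMs of increasing order with EM and keep the order that minimizes BIC (AIC is shown as well). A minimal sketch on synthetic two-dimensional data follows; the survey variables themselves are not reproduced.

```python
# GMM order selection via BIC/AIC on synthetic two-component data.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
data = np.vstack([rng.normal([0, 0], 0.5, size=(150, 2)),
                  rng.normal([3, 2], 0.7, size=(100, 2))])

scores = []
for k in range(1, 6):
    gmm = GaussianMixture(n_components=k, random_state=0).fit(data)  # EM fit
    scores.append((k, gmm.bic(data), gmm.aic(data)))

best_k = min(scores, key=lambda row: row[1])[0]   # minimum-BIC order
for k, bic, aic in scores:
    print(f"k={k}: BIC={bic:.1f}, AIC={aic:.1f}")
print("selected order:", best_k)
```
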
Contributors: Liu, Hui (Author) / Taylor, Thomas (Thesis advisor) / Cochran, Douglas (Thesis advisor) / Zhang, Junshan (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Modern measurement schemes for linear dynamical systems are typically designed so that different sensors can be scheduled for use at each time step. To determine which sensors to use, various metrics have been suggested; one such metric is the observability of the system. Observability is a binary condition determining whether a finite number of measurements suffice to recover the initial state. To employ observability for sensor scheduling, however, the binary definition must be expanded so that one can measure how observable a system is with a particular measurement scheme, i.e., one needs a metric of observability. Most methods utilizing an observability metric address sensor selection rather than sensor scheduling. In this dissertation we present a new approach that uses observability for sensor scheduling by employing the condition number of the observability matrix as the metric and using column subset selection to choose which sensors to use at each time step. To this end we use a rank-revealing QR factorization algorithm to select sensors. Several numerical experiments demonstrate the performance of the proposed scheme.
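  
A hedged sketch of the column-subset-selection step: the measurement rows available at a time step are gathered as columns of a matrix, a pivoted (rank-revealing) QR factorization orders them, and the leading pivots give the sensors to use. The system matrices below are random stand-ins, and each step is treated independently for brevity, which simplifies the dissertation's scheme (a fuller version would condition on previously scheduled measurements).

```python
# Sensor scheduling via pivoted QR and the observability-matrix condition number.
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(0)
n, m, k = 4, 6, 2                 # states, candidate sensors, sensors per step
A = rng.normal(size=(n, n)) / np.sqrt(n)     # assumed dynamics matrix
C = rng.normal(size=(m, n))                  # one candidate sensor per row

def select_sensors(A, C, t, k):
    """Pick k sensors at time step t via pivoted QR on (C A^t)^T."""
    rows = C @ np.linalg.matrix_power(A, t)  # measurement rows at step t
    _, _, pivots = qr(rows.T, pivoting=True) # rank-revealing column pivots
    return pivots[:k]

schedule = [select_sensors(A, C, t, k) for t in range(3)]
obs = np.vstack([C[s] @ np.linalg.matrix_power(A, t)
                 for t, s in enumerate(schedule)])
print("schedule:", schedule)
print("condition number of scheduled observability matrix:",
      np.linalg.cond(obs))
```
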
Contributors: Ilkturk, Utku (Author) / Gelb, Anne (Thesis advisor) / Platte, Rodrigo (Thesis advisor) / Cochran, Douglas (Committee member) / Renaut, Rosemary (Committee member) / Armbruster, Dieter (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Spectral congestion is quickly becoming a problem for the telecommunications sector. To alleviate spectral congestion and achieve electromagnetic radio frequency (RF) convergence, communications and radar systems are increasingly encouraged to share bandwidth. In contrast to the traditional spectrum sharing approach of completely isolating radar and communications systems (temporally, spectrally, or spatially), the two systems can be jointly co-designed from the ground up to maximize their joint performance for mutual benefit. To properly characterize and understand cooperative spectrum sharing between radar and communications systems, the fundamental limits on the performance of a cooperative radar-communications system are investigated. To facilitate this investigation, performance metrics are chosen that allow radar and communications to be compared on the same scale: information is chosen as the common metric, and an information-theoretic radar performance metric compatible with the communications data rate, the radar estimation rate, is developed. The estimation rate measures the amount of information learned by illuminating a target. With the estimation rate in hand, standard multi-user communications performance bounds are extended to joint radar-communications users, producing bounds on the performance of a joint radar-communications system. System performance is investigated for variations of the standard spectrum sharing problem defined in this dissertation, and inner bounds on performance are extended to account for continuous radar waveform optimization, multiple radar targets, clutter, phase noise, and radar detection. A detailed interpretation of the estimation rate and a brief discussion of how to use these performance bounds to select an optimal operating point and achieve RF convergence are provided.
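
For a sense of the quantity involved, the sketch below evaluates one commonly used Gaussian-approximation form of the estimation rate, 0.5 * log2(1 + sigma_proc^2 / sigma_est^2) bits per pulse repetition interval, with the estimation variance taken from a CRLB-style delay expression. Both the bound form and all numbers are assumptions for illustration, not values quoted from the dissertation.

```python
# Back-of-the-envelope estimation rate under a Gaussian approximation.
import math

def delay_crlb(snr, rms_bandwidth_hz):
    """Approximate CRLB on time-delay estimation variance (s^2)."""
    return 1.0 / (8.0 * math.pi ** 2 * rms_bandwidth_hz ** 2 * snr)

def estimation_rate(sigma_proc_sq, sigma_est_sq, pri_s):
    """Bits per second learned about the target, Gaussian approximation."""
    return 0.5 * math.log2(1.0 + sigma_proc_sq / sigma_est_sq) / pri_s

snr, b_rms, pri = 100.0, 1e6, 1e-3      # 20 dB SNR, 1 MHz, 1 ms PRI (assumed)
sigma_est_sq = delay_crlb(snr, b_rms)
sigma_proc_sq = (10e-9) ** 2            # 10 ns delay-process std (assumed)
rate = estimation_rate(sigma_proc_sq, sigma_est_sq, pri)
print(f"estimation rate ~ {rate:.1f} bits/s")
```
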
Contributors: Chiriyath, Alex Rajan (Author) / Bliss, Daniel W (Thesis advisor) / Cochran, Douglas (Committee member) / Kosut, Oliver (Committee member) / Richmond, Christ D (Committee member) / Arizona State University (Publisher)
Created: 2018