Matching Items (599)

Description
This dissertation presents a new hybrid fault current limiter (FCL) topology that is primarily intended to protect single-phase power equipment. It can, however, be extended to three-phase systems, with one device protecting each individual phase. In comparison with existing fault current limiter technology, the salient features of the proposed topology are: a) a variable impedance that provides a 50% reduction in prospective fault current; b) a near-instantaneous response time within the first half cycle (1-4 ms); c) the use of semiconductor switches as the commutating switch, which yields reduced leakage current, reduced losses, improved reliability, and a faster switching time (ns-µs); d) zero losses in steady-state operation; e) the use of a Neodymium (NdFeB) permanent magnet as the limiting impedance, which reduces size, cost, and weight and eliminates DC biasing and cooling costs; f) the use of Pulse Width Modulation (PWM) to control the magnitude of the fault current to a user's desired level; and g) an experimental test system developed and tested to prove the concepts of the proposed FCL. The dissertation presents the proposed topology and its working principle, backed up with numerical verifications, simulation results, and hardware implementation results. Conclusions and future work are also presented.
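As a rough numeric illustration of the limiting principle (not taken from the dissertation; all component values below are invented), inserting a series limiting impedance equal to the source impedance halves the fault current:

```python
# Hypothetical single-phase values (illustrative only, not from the dissertation).
v_rms = 240.0      # source voltage, V rms
z_source = 0.24    # source impedance magnitude, ohms
z_limiter = 0.24   # inserted limiting impedance, ohms

i_prospective = v_rms / z_source            # fault current without the FCL
i_limited = v_rms / (z_source + z_limiter)  # fault current with the FCL engaged

print(f"prospective: {i_prospective:.0f} A")
print(f"limited:     {i_limited:.0f} A")
print(f"reduction:   {100 * (1 - i_limited / i_prospective):.0f}%")  # 50% when impedances match
```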
Contributors: Prigmore, Jay (Author) / Karady, George G. (Thesis advisor) / Ayyanar, Raja (Committee member) / Holbert, Keith E. (Committee member) / Hedman, Kory (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Harsh environments have conditions that make collecting scientific data difficult with existing commercial-off-the-shelf technology. Micro Electro Mechanical Systems (MEMS) technology is ideally suited for harsh environment characterization and operation due to the wide range of materials available and a broad array of sensing techniques, while providing small device size, low power consumption, and robustness. There were two main objectives of the research conducted. The first objective was to design, fabricate, and test novel sensors that measure the amount of exposure to ionizing radiation for a wide range of applications, including characterization of harsh environments. Two types of MEMS ionizing radiation dosimeters were developed. The first sensor was a passive radiation-sensitive capacitor-antenna design; the antenna's frequency of peak emission intensity changed as exposure time to radiation increased. The second sensor was a film bulk acoustic-wave resonator, whose resonant frequency decreased with increasing ionizing radiation exposure time. The second objective was to develop MEMS sensor systems that could be deployed to gather scientific data and to use that data to address the following research question: do temperature and/or conductivity predict the appearance of photosynthetic organisms in hot springs? To this end, temperature and electrical conductivity sensor arrays were designed and fabricated based on mature MEMS technology. Electronic circuits and the software interface to the electronics were developed for field data collection. The sensor arrays utilized in the hot springs yielded results that support the hypothesis that temperature plays a key role in determining where the photosynthetic organisms occur. Additionally, a cold-film fluidic flow sensor was developed, which is suitable for near-boiling temperature measurement. Future research should focus on (1) developing a MEMS pH sensor array with integrated temperature, conductivity, and flow sensors to provide multi-dimensional data for scientific study and (2) finding solutions to biofouling and self-calibration, which affect sensor performance over long-term deployment.
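As a hedged sketch of how such a resonator dosimeter reading might be interpreted (the linear calibration and every constant below are assumed for illustration, not taken from the dissertation), a measured resonant-frequency shift can be mapped back to exposure time:

```python
# Hypothetical linear calibration for a film bulk acoustic-wave resonator dosimeter.
F0_HZ = 1.5e9             # assumed unexposed resonant frequency
SENSITIVITY_HZ_PER_S = 2.0e3  # assumed frequency decrease per second of exposure

def exposure_time_from_frequency(f_measured_hz: float) -> float:
    """Estimate ionizing-radiation exposure time from the measured resonant frequency."""
    shift = F0_HZ - f_measured_hz  # resonant frequency decreases with exposure
    return max(shift, 0.0) / SENSITIVITY_HZ_PER_S

print(exposure_time_from_frequency(1.4999e9))  # ~50 s under the assumed calibration
```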
Contributors: Oiler, Jonathon (Author) / Yu, Hongyu (Thesis advisor) / Anbar, Ariel (Committee member) / Hartnett, Hilairy (Committee member) / Scannapieco, Evan (Committee member) / Timmes, Francis (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Adaptive processing and classification of electrocardiogram (ECG) signals are important in eliminating the strenuous process of manually annotating ECG recordings for clinical use. Such algorithms require robust models whose parameters can adequately describe the ECG signals. Although different dynamic statistical models describing ECG signals currently exist, they depend considerably on a priori information and user-specified model parameters. Also, ECG beat morphologies, which vary greatly across patients and disease states, cannot be uniquely characterized by a single model. In this work, sequential Bayesian methods are used to appropriately model ECG signals and adaptively select the corresponding model parameters. An adaptive framework based on a sequential Bayesian tracking method is proposed to adaptively select the cardiac parameters that minimize the estimation error, thus precluding the need for pre-processing. Simulations using real ECG data from the online Physionet database demonstrate the improvement in performance of the proposed algorithm in accurately estimating critical heart disease parameters. In addition, two new approaches to ECG modeling are presented using the interacting multiple model and the sequential Markov chain Monte Carlo technique with adaptive model selection. Both of these methods can adaptively choose between different models for various ECG beat morphologies without requiring prior ECG information, as demonstrated using real ECG signals. A supervised Bayesian maximum-likelihood (ML) classifier uses the estimated model parameters to classify different types of cardiac arrhythmias. However, the unavailability of sufficient representative training data and the large inter-patient variability pose a challenge to existing supervised learning algorithms, resulting in poor classification performance. In addition, recently developed unsupervised learning methods require a priori knowledge of the number of diseases to cluster the ECG data, a number which often evolves over time. In order to address these issues, an adaptive learning ECG classification method that uses Dirichlet process Gaussian mixture models is proposed. This approach does not place any restriction on the number of disease classes, nor does it require any training data. The algorithm is adapted to be patient-specific by labeling or identifying the generated mixtures using the Bayesian ML method, assuming the availability of labeled training data.
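A minimal sketch of the Dirichlet process mixture idea, using scikit-learn's truncated variational approximation; the stand-in feature vectors and all settings below are assumptions for illustration, not the dissertation's pipeline:

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)

# Stand-in for per-beat ECG feature vectors (two beat morphologies).
features = np.vstack([
    rng.normal(loc=0.0, scale=0.3, size=(100, 2)),
    rng.normal(loc=3.0, scale=0.3, size=(100, 2)),
])

# A Dirichlet-process prior over mixture weights prunes unused components,
# so the number of clusters need not be fixed in advance.
dpgmm = BayesianGaussianMixture(
    n_components=10,  # truncation level, not a hard cluster count
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(features)

labels = dpgmm.predict(features)
print("effective clusters:", np.unique(labels).size)  # typically 2 for this data
```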
Contributors: Edla, Shwetha Reddy (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Kovvali, Narayan (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
The medical industry has benefited greatly from electronic integration, resulting in the explosive growth of active medical implants. These devices often treat and monitor chronic health conditions and must operate on very little power. A key part of these medical implants is an ultra-low power two-way wireless communication system, which enables both control of the implant and relay of the information collected. This research has focused on a high performance receiver for medical implant applications. One commonly quoted specification for comparing receivers is the energy required per bit. This metric is useful but incomplete, in that it ignores sensitivity level, bit error rate, and immunity to interferers. This study explores receiver architectures and converges upon a comprehensive solution, which is then used to design and build a system for validation. The direct conversion receiver architecture, implemented for the MICS standard in a 0.18 µm CMOS process, consumes approximately 2 mW and is competitive with published research.
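For a sense of the energy-per-bit metric discussed above, a one-line calculation (the data rate is an assumption for illustration; the dissertation's link parameters may differ):

```python
# Energy per bit = power / data rate.
power_w = 2e-3         # receiver power from the abstract, ~2 mW
data_rate_bps = 400e3  # assumed data rate, bits/s (illustrative, not from the text)

energy_per_bit = power_w / data_rate_bps
print(f"{energy_per_bit * 1e9:.1f} nJ/bit")  # 5.0 nJ/bit under these assumptions
```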
Contributors: Stevens, Mark (Author) / Kiaei, Sayfe (Thesis advisor) / Bakkaloglu, Bertan (Committee member) / Aberle, James T., 1961- (Committee member) / Barnaby, Hugh (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Cyber-physical systems (CPS) are emerging as the underpinning technology for major industries in the 21st century. This dissertation is focused on two fundamental issues in cyber-physical systems: network interdependence and information dynamics. It consists of the following two main thrusts. The first thrust is targeted at understanding the impact of network interdependence. It is shown that a cyber-physical system built upon multiple interdependent networks is more vulnerable to attacks, since node failures in one network may result in failures in the other network, causing a cascade of failures that could potentially lead to the collapse of the entire infrastructure. There is thus a need to develop a new network science for modeling and quantifying cascading failures in multiple interdependent networks, and to develop network management algorithms that improve network robustness and ensure overall network reliability against cascading failures. To enhance system robustness, a "regular" allocation strategy is proposed that yields better resistance against cascading failures than existing strategies. Furthermore, in view of the load redistribution feature in many physical infrastructure networks, e.g., power grids, a CPS model is developed in which the threshold model and the giant connected component model are used to capture node failures in the physical infrastructure network and the cyber network, respectively. The second thrust is centered around information dynamics in the CPS. One speculation is that the interconnections across multiple networks can facilitate information diffusion, since information propagation in one network can trigger further spread in the other network. With this insight, a theoretical framework is developed to analyze information epidemics across multiple interconnecting networks. It is shown that the conjoining among networks can dramatically speed up message diffusion. Along a different avenue, many cyber-physical systems rely on wireless networks, which offer platforms for information exchange. To optimize the QoS of wireless networks, there is a need for high-throughput, low-complexity scheduling algorithms to control link dynamics. To that end, distributed link scheduling algorithms are explored for multi-hop MIMO networks, and two CSMA algorithms are devised under the continuous-time model and the discrete-time model, respectively.
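A minimal sketch of the cascading-failure mechanism on two coupled networks (the topologies, one-to-one coupling, and failure rule are assumed for illustration; this is not the dissertation's exact model):

```python
import networkx as nx

def giant_component(g: nx.Graph) -> set:
    """Nodes in the largest connected component (empty set for an empty graph)."""
    return max(nx.connected_components(g), key=len) if g.number_of_nodes() else set()

# Two interdependent networks with one-to-one coupling:
# node i in A depends on node i in B, and vice versa.
n = 200
A = nx.erdos_renyi_graph(n, 0.03, seed=1)
B = nx.erdos_renyi_graph(n, 0.03, seed=2)

A.remove_nodes_from(range(40))  # initial attack: 20% of A's nodes fail

# Cascade: a node survives only if it sits in its own network's giant
# component and its counterpart in the other network also survives.
while True:
    survivors = giant_component(A) & giant_component(B)
    if survivors == set(A.nodes) == set(B.nodes):
        break
    A = A.subgraph(survivors).copy()
    B = B.subgraph(survivors).copy()

print(f"mutually connected survivors: {A.number_of_nodes()}/{n}")
```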
Contributors: Qian, Dajun (Author) / Zhang, Junshan (Thesis advisor) / Ying, Lei (Committee member) / Zhang, Yanchao (Committee member) / Cochran, Douglas (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Test cost has become a significant portion of device cost and a bottleneck in high volume manufacturing. Increasing integration density and shrinking feature sizes increase test time/cost and reduce observability. Test engineers have to put in a tremendous effort to keep test cost within an acceptable budget. Unfortunately, there is not a single straightforward solution to the problem. The products being tested span several application domains and distinct customer profiles. Some products are required to operate for long periods of time, while others must be optimized for low cost. This multitude of constraints and goals makes it impossible to find a single solution that works for all cases. Hence, test development/optimization is typically design/circuit dependent and even process specific. Therefore, test optimization cannot be performed using a single test approach, but necessitates a diversity of approaches. This work aims at addressing test cost minimization and test quality improvement at various levels. In the first chapter of the work, we investigate pre-silicon strategies, such as design for test and pre-silicon statistical simulation optimization. In the second chapter, we investigate efficient post-silicon test strategies, such as adaptive test, adaptive multi-site test, outlier analysis, and process shift detection/tracking.
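As a hedged sketch of the outlier analysis mentioned above, a median/MAD robust z-score can screen parametric test data (the rule, threshold, and data are assumptions for illustration, not the dissertation's method):

```python
import numpy as np

def robust_outliers(x: np.ndarray, threshold: float = 4.0) -> np.ndarray:
    """Flag measurements more than `threshold` robust standard deviations
    from the lot median, using the median/MAD rule."""
    med = np.median(x)
    mad = np.median(np.abs(x - med))
    z = 0.6745 * (x - med) / mad  # 0.6745 rescales MAD to ~sigma for Gaussian data
    return np.abs(z) > threshold

rng = np.random.default_rng(0)
idd = rng.normal(10.0, 0.2, size=1000)  # stand-in supply-current readings, mA
idd[7] = 13.0                           # one anomalous die
print(np.flatnonzero(robust_outliers(idd)))  # -> [7] under these assumptions
```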
Contributors: Yilmaz, Ender (Author) / Ozev, Sule (Thesis advisor) / Bakkaloglu, Bertan (Committee member) / Cao, Yu (Committee member) / Christen, Jennifer Blain (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Signal processing techniques have been used extensively in many engineering problems, and in recent years their application has extended to non-traditional research fields such as biological systems. Many of these applications require extraction of a signal or parameter of interest from degraded measurements. One such application is mass spectrometry immunoassay (MSIA), which has been one of the primary biomarker discovery techniques. MSIA analyzes protein molecules as potential biomarkers using time of flight mass spectrometry (TOF-MS). Peak detection in TOF-MS is important for biomarker analysis and many other MS-related applications. Though many peak detection algorithms exist, most of them are based on heuristic models. One way of detecting signal peaks is by deploying stochastic models of the signal and noise observations. The likelihood ratio test (LRT) detector, based on the Neyman-Pearson (NP) lemma, is a uniformly most powerful test for decision making in the form of a hypothesis test. The primary goal of this dissertation is to develop signal and noise models for electrospray ionization (ESI) TOF-MS data. A new method is proposed for developing the signal model by employing first-principles calculations based on device physics and molecular properties. The noise model is developed by analyzing MS data from careful experiments in the ESI mass spectrometer. A non-flat baseline in MS data is common, and the reasons behind its formation have not been fully understood. A new signal model explaining the presence of the baseline is proposed, though detailed experiments are needed to further substantiate the model assumptions. Signal detection schemes based on these signal and noise models are proposed. A maximum likelihood (ML) method is introduced for estimating the signal peak amplitudes. The performance of the detection methods and the ML estimation is evaluated with Monte Carlo simulation, which shows promising results. An application of these methods is proposed for fractional abundance calculation in biomarker analysis, which is mathematically robust and fundamentally different from the current algorithms. Biomarker panels for type 2 diabetes and cardiovascular disease are analyzed using existing MS analysis algorithms. Finally, a support vector machine based multi-classification algorithm is developed for evaluating the biomarkers' effectiveness in discriminating type 2 diabetes and cardiovascular diseases, and is shown to perform better than a linear discriminant analysis based classifier.
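A minimal sketch of an NP-lemma LRT detector for a known peak template in white Gaussian noise, where the test reduces to a matched filter (the template, noise level, and false-alarm target are assumed; the dissertation's models come from device physics and measured data):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Known peak template s (a Gaussian-shaped TOF-MS peak, assumed) in AWGN of std sigma.
t = np.arange(64)
s = np.exp(-0.5 * ((t - 32) / 3.0) ** 2)
sigma = 0.5

# For this signal-plus-AWGN model the LRT reduces to a matched filter:
# declare "peak present" when <x, s> exceeds a threshold set by the
# target false-alarm probability (Neyman-Pearson criterion).
p_fa = 1e-3
threshold = sigma * np.linalg.norm(s) * norm.ppf(1 - p_fa)

x_h1 = s + rng.normal(0, sigma, t.size)  # peak present
x_h0 = rng.normal(0, sigma, t.size)      # noise only

for name, x in (("peak present", x_h1), ("noise only", x_h0)):
    stat = float(np.dot(x, s))
    print(name, "->", "detected" if stat > threshold else "not detected")
```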
Contributors: Buddi, Sai (Author) / Taylor, Thomas (Thesis advisor) / Cochran, Douglas (Thesis advisor) / Nelson, Randall (Committee member) / Duman, Tolga (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Effective modeling of high dimensional data is crucial in information processing and machine learning. Classical subspace methods have been very effective in such applications. However, over the past few decades, there has been considerable research towards the development of new modeling paradigms that go beyond subspace methods. This dissertation focuses on the study of sparse models and their interplay with modern machine learning techniques such as manifold, ensemble and graph-based methods, along with their applications in image analysis and recovery. By considering graph relations between data samples while learning sparse models, graph-embedded codes can be obtained for use in unsupervised, supervised and semi-supervised problems. Using experiments on standard datasets, it is demonstrated that the codes obtained from the proposed methods outperform several baseline algorithms. In order to facilitate sparse learning with large-scale data, the paradigm of ensemble sparse coding is proposed, and different strategies for constructing weak base models are developed. Experiments with image recovery and clustering demonstrate that these ensemble models perform better when compared to conventional sparse coding frameworks. When examples from the data manifold are available, manifold constraints can be incorporated with sparse models, and two approaches are proposed to combine sparse coding with manifold projection. The improved performance of the proposed techniques in comparison to sparse coding approaches is demonstrated using several image recovery experiments. In addition to these approaches, some applications may require combining multiple sparse models with different regularizations. In particular, combining an unconstrained sparse model with non-negative sparse coding is important in image analysis, and it poses several algorithmic and theoretical challenges. A convex method and an efficient greedy algorithm for recovering combined representations are proposed. Theoretical guarantees on sparsity thresholds for exact recovery using these algorithms are derived, and recovery performance is demonstrated using simulations on synthetic data. Finally, the problem of non-linear compressive sensing, where the measurement process is carried out in a feature space obtained using non-linear transformations, is considered. An optimized non-linear measurement system is proposed, and improvements in recovery performance are demonstrated in comparison to using random measurements as well as optimized linear measurements.
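A minimal sparse coding sketch using orthogonal matching pursuit over a random dictionary (the dictionary, sparsity level, and data are assumed; the dissertation's contributions add graph, ensemble, and manifold structure on top of such a base model):

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)

# Random overcomplete dictionary D (unit-norm atoms) and a 3-sparse code a.
n_features, n_atoms, k = 64, 128, 3
D = rng.normal(size=(n_features, n_atoms))
D /= np.linalg.norm(D, axis=0)

a_true = np.zeros(n_atoms)
support = rng.choice(n_atoms, size=k, replace=False)
a_true[support] = rng.normal(size=k)
x = D @ a_true  # observed signal synthesized from k atoms

# Greedy sparse coding: pick the k atoms that best explain x.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False).fit(D, x)
print(sorted(np.flatnonzero(omp.coef_)))  # recovered support
print(sorted(support))                    # true support (matches w.h.p. here)
```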
Contributors: Natesan Ramamurthy, Karthikeyan (Author) / Spanias, Andreas (Thesis advisor) / Tsakalis, Konstantinos (Committee member) / Karam, Lina (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Combined heat and power (CHP)-based distributed generation (DG), or distributed energy resources (DERs), are mature options in the present energy market, considered an effective means of promoting energy efficiency. In the urban environment, the electricity, water, and natural gas distribution networks are becoming increasingly interconnected with the growing penetration of CHP-based DG. Subsequently, this emerging interdependence leads to new topics meriting serious consideration: how much CHP-based DG can be accommodated and where to locate these DERs, and, given preexisting constraints, how to quantify the mutual impacts on operating performance between these urban energy distribution networks and the CHP-based DG. The early research work investigated the feasibility of, and design methods for, a residential microgrid system based on the existing electricity, water, and gas infrastructures of a residential community, focusing mainly on economic planning. However, this design method cannot determine the optimal DG sizing and siting for a larger test bed given the available information on energy infrastructures. In this context, a more systematic and generalized approach was needed to solve these problems. In the later study, a model architecture was developed that integrates the urban electricity, water, and gas distribution networks with the CHP-based DG system. The proposed approach addresses the challenge of identifying the optimal sizing and siting of CHP-based DG on these urban energy networks, and the mutual impacts on operating performance are also quantified. For this study, the overall objective is to maximize the electrical output and recovered thermal output of the CHP-based DG units. The electricity, gas, and water system models were developed individually and coupled through the developed CHP-based DG system model. The resulting integrated system model is used to constrain the DG's electrical output and recovered thermal output, which are affected by multiple factors and are thus analyzed in different case studies. The results indicate that the designed typical gas system is capable of supplying sufficient natural gas for normal DG operation, while the present water system cannot support complete recovery of the exhaust heat from the DG units.
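A toy linear program in the spirit of the sizing problem described above (the objective and all constraint values are invented for illustration; the dissertation's integrated network model is far more detailed):

```python
from scipy.optimize import linprog

# Decision variables: electrical output x_e and recovered thermal output x_t (MW)
# of one CHP unit at a candidate site. Maximize x_e + x_t.
c = [-1.0, -1.0]  # linprog minimizes, so negate the objective

# Assumed constraints (all values illustrative):
#   gas supply:    fuel use ~ 2.8*x_e <= 10 MW of gas capacity at the node
#   heat recovery: x_t <= 1.2*x_e      (unit's heat-to-power ratio)
#   water cooling: 0.3*x_t <= 1.0      (available water-network capacity)
A_ub = [[2.8, 0.0],
        [-1.2, 1.0],
        [0.0, 0.3]]
b_ub = [10.0, 0.0, 1.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
x_e, x_t = res.x
print(f"electrical {x_e:.2f} MW, recovered thermal {x_t:.2f} MW")
# Here the water constraint binds before full heat recovery, echoing the
# abstract's finding that the water system limits exhaust-heat recovery.
```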
Contributors: Zhang, Xianjun (Author) / Karady, George G. (Thesis advisor) / Ariaratnam, Samuel T. (Committee member) / Holbert, Keith E. (Committee member) / Si, Jennie (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Asymptotic comparisons of ergodic channel capacity at high and low signal-to-noise ratios (SNRs) are provided for several adaptive transmission schemes over fading channels with general distributions, including optimal power and rate adaptation, rate adaptation only, channel inversion, and its variants. Analysis of the high-SNR pre-log constants of the ergodic capacity reveals the existence of constant capacity difference gaps among the schemes with a pre-log constant of 1. Closed-form expressions for these high-SNR capacity difference gaps are derived; they are proportional to the SNR loss between these schemes in dB scale. The largest of these gaps is found to be between the optimal power and rate adaptation scheme and the channel inversion scheme. Based on these expressions, it is shown that the presence of space diversity or multi-user diversity makes channel inversion arbitrarily close to achieving optimal capacity at high SNR with a sufficiently large number of antennas or users. A low-SNR analysis also reveals that the presence of fading provably always improves capacity at sufficiently low SNR, compared to the additive white Gaussian noise (AWGN) case. Numerical results corroborate the analysis. This dissertation also derives high-SNR asymptotic average error rates over fading channels by relating them to the outage probability, under mild assumptions. The analysis is based on the Tauberian theorem for Laplace-Stieltjes transforms, which is grounded in the notion of regular variation, and applies to a wider range of channel distributions than existing approaches. The theory of regular variation is argued to be the proper mathematical framework for finding sufficient and necessary conditions for outage events to dominate high-SNR error rate performance. It is proved that a diversity order of d and a channel power gain whose cumulative distribution function (CDF) has variation exponent d at 0 imply each other, provided that the instantaneous error rate is upper-bounded by an exponential function of the instantaneous SNR. High-SNR asymptotic average error rates are derived for specific instantaneous error rates. Compared to existing approaches in the literature, the asymptotic expressions here are related to the channel distribution in a much simpler manner and are related to outage more intuitively. The high-SNR asymptotic error rate is also characterized under diversity combining schemes, with the channel power gain of each branch having a regularly varying CDF. Numerical results corroborate the theoretical analysis. Finally, this dissertation studies several problems concerning channel inclusion, a partial ordering between discrete memoryless channels (DMCs) proposed by Shannon. Specifically, majorization-based conditions are derived for channel inclusion between certain DMCs. Furthermore, under general conditions, channel equivalence defined through Shannon ordering is shown to be the same as permutation of input and output symbols. The determination of channel inclusion is considered as a convex optimization problem, and the sparsity of the weights in the representation of the worse DMC in terms of the better one is revealed when channel inclusion holds between two DMCs. To exploit this sparsity, an effective iterative algorithm is established by modifying the orthogonal matching pursuit algorithm. The extension of channel inclusion to continuous channels and its application in ordering phase noises are briefly addressed.
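A quick Monte Carlo sketch of the capacity gap between rate adaptation and zero-outage channel inversion, assuming dual-branch maximal-ratio combining over Rayleigh fading for concreteness (the dissertation treats general fading distributions analytically):

```python
import numpy as np

rng = np.random.default_rng(0)
snr = 10 ** (30.0 / 10)  # average SNR of 30 dB

# Dual-branch MRC over Rayleigh fading: the power gain g is a sum of two
# unit-mean exponentials, so E[1/g] = 1 is finite (it diverges for a single
# branch, which is why diversity matters for channel inversion).
g = rng.exponential(size=(2, 1_000_000)).sum(axis=0)

c_rate_adapt = np.mean(np.log2(1 + snr * g))     # rate adaptation only
c_inversion = np.log2(1 + snr / np.mean(1 / g))  # zero-outage channel inversion

print(f"rate adaptation:   {c_rate_adapt:.3f} bit/s/Hz")
print(f"channel inversion: {c_inversion:.3f} bit/s/Hz")
print(f"gap:               {c_rate_adapt - c_inversion:.3f} bit/s/Hz")  # constant at high SNR
```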
Contributors: Zhang, Yuan (Author) / Tepedelenlioğlu, Cihan (Thesis advisor) / Zhang, Junshan (Committee member) / Reisslein, Martin (Committee member) / Spanias, Andreas (Committee member) / Arizona State University (Publisher)
Created: 2013