Matching Items (195)
Description
The development of a Solid State Transformer (SST) that incorporates a DC-DC multiport converter to integrate both photovoltaic (PV) power generation and battery energy storage is presented in this dissertation. The DC-DC stage is based on a quad-active-bridge (QAB) converter, which provides isolation not only for the load but also for the PV and storage ports. The AC-DC stage is implemented with a pulse-width-modulated (PWM) single-phase rectifier. A unified gyrator-based average model is developed for a general multi-active-bridge (MAB) converter controlled through phase-shift modulation (PSM). Expressions to determine the power rating of the MAB ports are also derived. The gyrator-based average model is applied to the QAB converter for faster simulation of the proposed SST during the control design process, as well as for deriving the state-space representation of the plant. Both linear quadratic regulator (LQR) and single-input-single-output (SISO) controllers are designed for the DC-DC stage. A novel technique that complements the SISO controller by taking into account the cross-coupling characteristics of the QAB converter is also presented herein. Cascaded SISO controllers are designed for the AC-DC stage. The power demanded by the QAB is calculated in the QAB controls and then fed into the rectifier controls in order to minimize the effect of the interaction between the two SST stages. The dynamic performance of the designed control loops based on the proposed control strategies is verified through extensive simulation of the SST average and switching models. The experimental results presented herein show that the transient responses for each control strategy match those from the simulations, thus validating them.
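The phase-shift-modulated power transfer behind a gyrator-based average model can be sketched numerically. Under the standard dual-active-bridge relation P = V1*V2*phi*(pi - |phi|)/(2*pi^2*fs*L), each bridge of a multi-active-bridge converter looks like a gyrator toward every other bridge. The sketch below is illustrative only: the per-pair leakage inductance, switching frequency, voltages, and phase shifts are hypothetical values, not the dissertation's design.

```python
import math

def gyrator_gain(phi, f_s, L):
    """Gyration gain g for one bridge pair under phase-shift modulation.

    From the standard dual-active-bridge power equation
    P = V1*V2*phi*(pi - |phi|) / (2*pi^2*f_s*L), the average current injected
    at one port is proportional to the other port's voltage: I = g * V.
    """
    return phi * (math.pi - abs(phi)) / (2 * math.pi**2 * f_s * L)

def mab_port_currents(voltages, phases, f_s, L):
    """Average DC current at each port of a multi-active-bridge converter.

    voltages: per-port DC voltages (referred to a common winding).
    phases:   per-port phase shifts (rad) relative to a common reference.
    Each port pair exchanges power through an effective inductance L
    (assumed equal for all pairs in this simplified sketch).
    """
    n = len(voltages)
    currents = [0.0] * n
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            # Current drawn at port i due to port j; the gain is odd in the
            # phase difference, so the pairwise power exchange balances.
            g = gyrator_gain(phases[j] - phases[i], f_s, L)
            currents[i] += g * voltages[j]
    return currents
```

Because the gain is an odd function of the phase difference, the pairwise powers cancel and the model conserves power across the ports, which is what makes it useful for fast average-value simulation.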
Contributors: Falcones, Sixifo Daniel (Author) / Ayyanar, Raja (Thesis advisor) / Karady, George G. (Committee member) / Tylavsky, Daniel (Committee member) / Tsakalis, Konstantinos (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Following the success in incorporating perceptual models in audio coding algorithms, their application in other speech/audio processing systems is expanding. In general, all perceptual speech/audio processing algorithms involve minimization of an objective function that directly or indirectly incorporates properties of human perception. This dissertation primarily investigates the problems associated with directly embedding an auditory model in the objective function formulation and proposes possible solutions to overcome the high complexity issues that arise in real-time speech/audio algorithms. Specific problems addressed in this dissertation include: 1) the development of approximate but computationally efficient auditory model implementations that are consistent with the principles of psychoacoustics, and 2) the development of a mapping scheme that allows synthesizing a time/frequency domain representation from its equivalent auditory model output. The first problem is aimed at addressing the high computational complexity involved in solving perceptual objective functions that require repeated application of the auditory model to evaluate different candidate solutions. In this dissertation, frequency-pruning and detector-pruning algorithms are developed that efficiently implement the various auditory model stages. The performance of the pruned model is compared to that of the original auditory model for different types of test signals in the SQAM database. Experimental results indicate only a 4-7% relative error in loudness while attaining up to 80-90% reduction in computational complexity. Similarly, a hybrid algorithm is developed specifically for use with sinusoidal signals; it employs the proposed auditory-pattern-combining technique together with a look-up table that stores representative auditory patterns.
The second problem obtains an estimate of the auditory representation that minimizes a perceptual objective function and transforms the auditory pattern back to its equivalent time/frequency representation. This avoids the repeated application of auditory model stages to test different candidate time/frequency vectors in minimizing perceptual objective functions. In this dissertation, a constrained mapping scheme is developed by linearizing certain auditory model stages that ensures obtaining a time/frequency mapping corresponding to the estimated auditory representation. This paradigm was successfully incorporated in a perceptual speech enhancement algorithm and a sinusoidal component selection task.
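The frequency-pruning idea can be caricatured in a few lines: frequency bins far below the spectral peak contribute negligibly to the excitation pattern, so they can be skipped when the spreading function is applied. The sketch below is a hypothetical simplification; the threshold, the spreading-matrix formulation, and the omitted model stages are assumptions, not the dissertation's implementation.

```python
import numpy as np

def pruned_excitation(power_spectrum, spreading, prune_db=60.0):
    """Excitation pattern with frequency pruning (illustrative sketch).

    Bins more than `prune_db` below the spectral peak are assumed to
    contribute negligibly, so only the surviving bins are propagated
    through the (n_bands, n_bins) spreading matrix. This trades a tiny
    loudness error for a large reduction in per-band computation.
    """
    peak = power_spectrum.max()
    keep = power_spectrum > peak * 10 ** (-prune_db / 10)  # bins worth evaluating
    return spreading[:, keep] @ power_spectrum[keep]
```

For a sparse spectrum (the case the pruning targets), only a handful of columns of the spreading matrix are ever touched, which mirrors the 80-90% complexity reduction reported above in spirit, though not in detail.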
Contributors: Krishnamoorthi, Harish (Author) / Spanias, Andreas (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Tsakalis, Konstantinos (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
The theme of this work is the development of fast numerical algorithms for sparse optimization, as well as their applications in medical imaging and source localization using sensor array processing. Owing to the recently proposed theory of Compressive Sensing (CS), the $\ell_1$ minimization problem has attracted increasing attention for its ability to exploit sparsity. Traditional interior point methods encounter computational difficulties when solving CS applications. In the first part of this work, a fast algorithm based on the augmented Lagrangian method is proposed for solving the large-scale TV-$\ell_1$ regularized inverse problem. Specifically, by taking advantage of the separable structure, the original problem can be approximated via the sum of a series of simple functions with closed-form solutions. A preconditioner for solving the block Toeplitz with Toeplitz block (BTTB) linear system is proposed to accelerate the computation. An in-depth discussion of the rate of convergence and the optimal parameter selection criteria is given. Numerical experiments are used to test the performance and the robustness of the proposed algorithm over a wide range of parameter values. Applications of the algorithm in magnetic resonance (MR) imaging and a comparison with other existing methods are included. The second part of this work is the application of the TV-$\ell_1$ model to source localization using sensor arrays. The array output is reformulated into a sparse waveform via an over-complete basis, and the properties of the $\ell_p$-norm in detecting sparsity are studied. An algorithm is proposed for minimizing the resulting non-convex problem. According to the results of numerical experiments, the proposed algorithm, with the aid of the $\ell_p$-norm, can resolve closely distributed sources with higher accuracy than other existing methods.
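The closed-form subproblem solutions mentioned above are typified by the shrinkage operators that arise when an augmented Lagrangian splitting separates the $\ell_1$ and TV terms from the data-fidelity term. A minimal sketch of these two proximal steps follows; the notation is generic, not the dissertation's exact splitting.

```python
import numpy as np

def soft_threshold(x, tau):
    """Closed-form solution of min_z tau*||z||_1 + 0.5*||z - x||^2,
    the l1 subproblem in a variable-splitting / augmented Lagrangian scheme."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def tv_shrink(gx, gy, tau):
    """Closed-form solution of the isotropic-TV subproblem: each pixelwise
    gradient vector (gx, gy) is shrunk toward zero by tau in magnitude,
    leaving its direction unchanged."""
    mag = np.sqrt(gx**2 + gy**2)
    scale = np.maximum(mag - tau, 0.0) / np.maximum(mag, 1e-12)
    return gx * scale, gy * scale
```

In an alternating scheme, these cheap elementwise updates are interleaved with a linear solve for the image, which is where the BTTB preconditioner mentioned above would enter.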
Contributors: Shen, Wei (Author) / Mittelmann, Hans D. (Thesis advisor) / Renaut, Rosemary A. (Committee member) / Jackiewicz, Zdzislaw (Committee member) / Gelb, Anne (Committee member) / Ringhofer, Christian (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Telomerase is a specialized enzyme that adds telomeric DNA repeats to chromosome ends to counterbalance the progressive telomere shortening that occurs over cell divisions. It has two essential core components: a catalytic telomerase reverse transcriptase protein (TERT) and a telomerase RNA (TR). TERT synthesizes telomeric DNA by reverse transcribing a short template sequence in TR. Unlike TERT, TR is extremely divergent in size, sequence, and structure and has only been identified in three evolutionarily distant groups. The lack of knowledge of TR from important model organisms has been a roadblock to vigorous studies of telomerase regulation. To address this issue, a novel in vitro system combining deep sequencing and a bioinformatics search was developed to discover TRs from new phylogenetic groups. The system has been validated by the successful identification of TR from the echinoderm purple sea urchin Strongylocentrotus purpuratus. The sea urchin TR (spTR) is the first invertebrate TR to be identified and can serve as a model for understanding how the vertebrate TR evolved with vertebrate-specific traits. Using phylogenetic comparative analysis, the secondary structure of spTR was determined. The spTR secondary structure reveals unique sea urchin-specific structural elements as well as homologous structural features shared with TRs from other organisms. This study enhances the understanding of the telomerase mechanism and the evolution of the telomerase RNP. The system that was used to identify telomerase RNA can be employed for the discovery of other TRs, as well as of novel RNAs from other RNP complexes.
Contributors: Li, Yang (Author) / Chen, Julian J.-L. (Thesis advisor) / Yan, Hao (Committee member) / Ghirlanda, Giovanna (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
There is increasing interest in the medical and behavioral health communities in developing effective strategies for the treatment of chronic diseases. Among these lie adaptive interventions, which adjust treatment dosages over time based on participant response. Control engineering offers a broad-based solution framework for optimizing the effectiveness of such interventions. In this thesis, an approach is proposed for developing dynamical models and, subsequently, hybrid model predictive control schemes for assigning optimal dosages of naltrexone, an opioid antagonist, as treatment for a chronic pain condition known as fibromyalgia. System identification techniques are employed to model the dynamics from the daily diary reports completed by participants of a blind naltrexone intervention trial. These self-reports include assessments of outcomes of interest (e.g., general pain symptoms, sleep quality) and additional external variables (disturbances) that affect these outcomes (e.g., stress, anxiety, and mood). Using prediction-error methods, a multi-input model describing the effect of drug, placebo, and other disturbances on the outcomes of interest is developed. This discrete-time model is approximated by a continuous second-order model with a zero, which was found to be adequate to capture the dynamics of this intervention. Data from 40 participants in two clinical trials were analyzed, and participants were classified as responders and non-responders based on the models obtained from system identification. The dynamical models can be used by a model predictive controller for automated dosage selection of naltrexone using feedback/feedforward control actions in the presence of external disturbances. The clinical requirement for categorical (i.e., discrete-valued) drug dosage levels creates a need for hybrid model predictive control (HMPC).
The controller features a multiple degree-of-freedom formulation that enables the user to adjust the speed of setpoint tracking, measured disturbance rejection and unmeasured disturbance rejection independently in the closed loop system. The nominal and robust performance of the proposed control scheme is examined via simulation using system identification models from a representative participant in the naltrexone intervention trial. The controller evaluation described in this thesis gives credibility to the promise and applicability of control engineering principles for optimizing adaptive interventions.
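The categorical-dosage constraint at the heart of an HMPC formulation can be illustrated with a toy receding-horizon controller: because doses must come from a small discrete set, the controller can simply enumerate all dose sequences over a short horizon. The model parameters, dose levels, and quadratic cost below are illustrative assumptions, not the thesis' identified dynamics.

```python
from itertools import product

def hybrid_mpc_dose(x0, target, levels, horizon, a, b):
    """One receding-horizon step of a brute-force hybrid MPC for the scalar
    first-order model x[k+1] = a*x[k] + b*u[k].

    The hybrid (categorical) constraint is that each dose u must come from
    the finite set `levels`. All level sequences over the horizon are
    enumerated, the quadratic tracking cost is evaluated, and only the first
    dose of the cheapest sequence is applied (receding horizon).
    """
    best_cost, best_first = float("inf"), levels[0]
    for seq in product(levels, repeat=horizon):
        x, cost = x0, 0.0
        for u in seq:
            x = a * x + b * u          # simulate the hypothetical dose response
            cost += (x - target) ** 2  # penalize deviation from the setpoint
        if cost < best_cost:
            best_cost, best_first = cost, seq[0]
    return best_first
```

Enumeration is exponential in the horizon, which is exactly why practical HMPC formulations use mixed-integer solvers rather than brute force; the sketch only shows the constraint structure.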
Contributors: Deśapāṇḍe, Sunīla (Author) / Rivera, Daniel E. (Thesis advisor) / Si, Jennie (Committee member) / Tsakalis, Konstantinos (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Nucleosomes are the basic repetitive unit of eukaryotic chromatin and are responsible for packing DNA inside the nucleus of the cell. They consist of a complex of eight histone proteins (two copies each of the four proteins H2A, H2B, H3, and H4) around which 147 base pairs of DNA are wrapped in ~1.67 superhelical turns. Although nucleosomes are stable protein-DNA complexes, they undergo spontaneous conformational changes that occur in an asynchronous fashion. These conformational dynamics, described by the "site-exposure" model, involve the DNA unwrapping from the protein core and exposing itself transiently before wrapping back. Physiologically, this allows regulatory proteins to bind to their target DNA sites during cellular processes like replication, DNA repair, and transcription. Traditional biochemical assays have established the equilibrium constants for the accessibility of various sites along the length of the nucleosomal DNA, from its end to the middle at the dyad axis. Using fluorescence correlation spectroscopy (FCS), we have established the position-dependent rewrapping rates for nucleosomes. We have also used Monte Carlo simulation methods to analyze the applicability of FRET fluctuation spectroscopy to conformational dynamics, specifically motivated by nucleosome dynamics. Another important conformational change involved in cellular processes is the disassembly of the nucleosome into its constituent particles. The exact pathway adopted by nucleosomes is still not clear. We used dual-color fluorescence correlation spectroscopy to study the intermediates during nucleosome disassembly induced by changing ionic strength. Studying the nature and kinetics of nucleosome conformational changes is very important in understanding gene expression. The results from this thesis give a quantitative description of the basic unit of chromatin.
Contributors: Gurunathan, Kaushik (Author) / Levitus, Marcia (Thesis advisor) / Lindsay, Stuart (Committee member) / Woodbury, Neal (Committee member) / Yan, Hao (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
The unique structural features of deoxyribonucleic acid (DNA) that are of considerable biological interest also make it a valuable engineering material. Perhaps the most useful property of DNA for molecular engineering is its ability to self-assemble into predictable, double-helical secondary structures. These interactions are exploited to design a variety of DNA nanostructures, which can be organized into both discrete and periodic structures. This dissertation focuses on studying the dynamic behavior of DNA nanostructure recognition processes. The thermodynamics and kinetics of nanostructure binding are evaluated, with the intention of improving our ability to understand and control their assembly. Presented here are a series of studies toward this goal. First, multi-helical DNA nanostructures were used to investigate how the valency and arrangement of the connections between DNA nanostructures affect super-structure formation. The study revealed that both the number and the relative position of connections play a significant role in the stability of the final assembly. Next, several DNA nanostructures were designed to gain insight into how small changes to the nanostructure scaffolds, intended to vary their conformational flexibility, would affect their association equilibrium. This approach yielded quantitative information about the roles of enthalpy and entropy in the affinity of polyvalent DNA nanostructure interactions, which exhibit an intriguing compensating effect. Finally, a multi-helical DNA nanostructure was used as a model "chip" for the detection of a single-stranded DNA target. The results revealed that the rate constant of hybridization is strongly dominated by a rate-limiting nucleation step.
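The enthalpy-entropy compensation noted above can be made concrete with van 't Hoff thermodynamics: the association constant follows from ΔG = ΔH - TΔS, so a more favorable enthalpy offset by a more unfavorable entropy leaves ΔG, and hence the affinity, nearly unchanged. A minimal sketch follows; the numerical ΔH and ΔS values are hypothetical, not the dissertation's measurements.

```python
import math

R_GAS = 1.987e-3  # gas constant in kcal/(mol*K)

def association_constant(dH, dS, T):
    """Association constant Ka from van 't Hoff thermodynamics.

    dH in kcal/mol, dS in kcal/(mol*K), T in K. Enthalpy-entropy
    compensation shows up directly: pairing a more favorable dH with a
    correspondingly more unfavorable dS leaves dG = dH - T*dS, and thus
    Ka = exp(-dG / (R*T)), nearly unchanged.
    """
    dG = dH - T * dS
    return math.exp(-dG / (R_GAS * T))
```

For instance, (ΔH, ΔS) pairs of (-50, -0.14) and (-60, -0.1736) in these units give almost identical Ka at 298 K, which is the compensating behavior the binding studies quantify.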
Contributors: Nangreave, Jeanette (Author) / Yan, Hao (Thesis advisor) / Liu, Yan (Thesis advisor) / Chen, Julian J.-L. (Committee member) / Seo, Dong Kyun (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
A major goal of synthetic biology is to recapitulate emergent properties of life. Despite a significant body of work, a longstanding question that remains to be answered is how such a complex system arose. In this dissertation, synthetic nucleic acid molecules with alternative sugar-phosphate backbones were investigated as potential ancestors of DNA and RNA. Threose nucleic acid (TNA) is capable of forming stable helical structures with complementary strands of itself and of RNA. This provides a plausible mechanism for genetic information transfer between TNA and RNA; therefore, TNA has been proposed as a potential RNA progenitor. Using molecular evolution, functional sequences were isolated from a pool of random TNA molecules. This implicates a possible chemical framework capable of crosstalk between TNA and RNA, and further shows that heredity and evolution are not limited to the natural genetic system based on ribofuranosyl nucleic acids. Another alternative genetic system, glycerol nucleic acid (GNA), undergoes intrasystem pairing with superior thermal stability compared to that of DNA. Inspired by this property, I demonstrated a minimal nanostructure composed of both left- and right-handed mirror-image GNA. This work suggests that GNA could be useful as a promising orthogonal material in structural DNA nanotechnology.
Contributors: Zhang, Su (Author) / Chaput, John C. (Thesis advisor) / Ghirlanda, Giovanna (Committee member) / Yan, Hao (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
In the late 1960s, Granger published a seminal study on causality in time series, using linear interdependencies and information transfer. Recent developments in the field of information theory have introduced new methods to investigate the transfer of information in dynamical systems. Using concepts from chaos and Markov theory, many of these methods have evolved to capture non-linear relations and information flow between coupled dynamical systems, with applications to fields like biomedical signal processing. This thesis deals with the application of information theory to non-linear multivariate time series and develops measures of information flow to identify significant driver and response (driven) components in networks of coupled sub-systems with variable coupling in strength and direction (uni- or bi-directional) for each connection. Transfer Entropy (TE) is used to quantify pairwise directional information. Four TE-based measures of information flow are proposed, namely TE Outflow (TEO), TE Inflow (TEI), TE Net flow (TEN), and Average TE flow (ATE). First, the reliability of the information flow measures is evaluated on models, with and without noise, and the driver and response sub-systems in these models are identified. Second, these measures are applied to electroencephalographic (EEG) data from two patients with focal epilepsy. The analysis showed dominant directions of information flow between brain sites and identified the epileptogenic focus as the system component that typically has the highest value of the proposed measures (for example, ATE). Statistical tests between pre-seizure (preictal) and post-seizure (postictal) information flow also showed that the focus's driving of the rest of the brain breaks down after seizure onset. The above findings shed light on the function of the epileptogenic focus and on the understanding of ictogenesis.
It is expected that these findings will contribute to the diagnosis of epilepsy, for example through accurate identification of the epileptogenic focus from interictal periods, as well as to the development of better seizure detection, prediction, and control methods, for example by isolating pathologic areas of excessive information flow through electrical stimulation.
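Given a pairwise transfer-entropy matrix, the four flow measures above can be aggregated as simple row, column, and difference statistics. The sketch below is one plausible reading of the definitions; the exact normalizations used in the thesis may differ.

```python
import numpy as np

def te_flow_measures(T):
    """Aggregate information-flow measures from a pairwise TE matrix.

    T[i, j] is the transfer entropy from component i to component j
    (the diagonal is ignored). One plausible set of definitions:
      TEO_i = total outflow from i        (row sum),
      TEI_i = total inflow into i         (column sum),
      TEN_i = net flow out of i           (TEO_i - TEI_i),
      ATE_i = outflow averaged over the other n-1 components.
    A driver (e.g., an epileptogenic focus) would show high TEO/ATE.
    """
    T = np.asarray(T, dtype=float)
    np.fill_diagonal(T, 0.0)
    teo = T.sum(axis=1)
    tei = T.sum(axis=0)
    ten = teo - tei
    ate = teo / (T.shape[0] - 1)
    return teo, tei, ten, ate
```

On a toy 3-node network where node 0 drives the others, node 0 gets the largest TEO, ATE, and positive TEN, matching the "focus as dominant driver" interpretation.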
Contributors: Prasanna, Shashank (Author) / Iasemidis, Leonidas (Thesis advisor) / Tsakalis, Konstantinos (Thesis advisor) / Tepedelenlioğlu, Cihan (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
This study focuses on state estimation of nonlinear discrete-time systems with constraints. Physical processes have inherent constraints on inputs, outputs, states, and disturbances, and these constraints can provide additional information to the estimator when estimating states from the measured output. Recursive filters such as Kalman Filters or Extended Kalman Filters are commonly used in state estimation; however, they do not allow the inclusion of constraints in their formulation. On the other hand, the computational complexity of full information estimation (using all measurements) grows with each iteration and becomes intractable. One way of formulating the recursive state estimation problem with constraints is the Moving Horizon Estimation (MHE) approximation, in which estimates of the states are calculated from the solution of a constrained optimization problem of fixed size. A detailed formulation of this strategy is studied and the properties of this estimation algorithm are discussed in this work. The drawback of the MHE formulation is that an optimization problem must be solved in each iteration, which is computationally intensive. Alternatively, state estimation with constraints can be formulated as an Extended Kalman Filter (EKF) with a projection applied to the estimates: the states are estimated from the measurements using the standard EKF algorithm, and the estimated states are then projected onto the constrained set. A detailed formulation of this estimation strategy is studied and its properties are discussed as well. Both state estimation strategies (MHE and EKF with projection) are tested with examples from the literature. The average estimation time and the sum of squared estimation errors are used to compare the performance of these estimators. Results of the case studies are analyzed and trade-offs are discussed.
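The EKF-with-projection strategy can be sketched for the simplest constrained set, a box, where the Euclidean projection has a closed form (elementwise clipping). This is an illustrative sketch, not the thesis' exact formulation; the measurement model and bounds are hypothetical.

```python
import numpy as np

def project_to_box(x_hat, lower, upper):
    """Euclidean projection of the estimate onto the box [lower, upper],
    the simplest constrained set for which projection is closed-form."""
    return np.clip(x_hat, lower, upper)

def ekf_update_with_projection(x_pred, P_pred, z, H, R, h, lower, upper):
    """Standard EKF measurement update followed by projection onto a box.

    x_pred, P_pred: predicted state and covariance;
    z: measurement; h: measurement function; H: its Jacobian at x_pred;
    R: measurement noise covariance. The unconstrained EKF estimate is
    computed first, then mapped to the nearest feasible point.
    """
    S = H @ P_pred @ H.T + R                      # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)           # Kalman gain
    x_upd = x_pred + K @ (z - h(x_pred))          # unconstrained EKF estimate
    P_upd = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return project_to_box(x_upd, lower, upper), P_upd
```

For general linear constraints the projection becomes a small quadratic program rather than a clip, but the per-step cost remains far below solving the MHE optimization, which is the trade-off the case studies examine.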
Contributors: Joshi, Rakesh (Author) / Tsakalis, Konstantinos (Thesis advisor) / Rodriguez, Armando (Committee member) / Si, Jennie (Committee member) / Arizona State University (Publisher)
Created: 2013