Matching Items (8)
Description
The field of education has benefited immensely from major breakthroughs in technology. The arrival of computers and the internet made student-teacher interaction across different parts of the world viable, extending the educator's reach to hitherto remote corners of the world. The more recent arrival of mobile phones has the potential to provide the next paradigm shift in the way education is conducted: it combines the universal reach and powerful visualization capabilities of the computer with intimacy and portability. Engineering education is a field that can exploit the benefits of mobile devices to enhance learning and spread essential technical know-how to different parts of the world. In this thesis, I present AJDSP, an Android application evolved from JDSP that provides an intuitive and easy-to-use environment for signal processing education. AJDSP is a graphical programming laboratory for digital signal processing developed for the Android platform. It is designed to provide utility both as a supplement to traditional classroom learning and as a tool for self-learning. The architecture of AJDSP is based on the Model-View-Controller paradigm, optimized for the Android platform. An extensive set of function modules covers a wide range of basic signal processing areas such as convolution, the fast Fourier transform, the z-transform, and filter design. The simple and intuitive user interface, inspired by iJDSP, is designed to facilitate ease of navigation and to provide the user with an intimate learning environment. Rich visualizations necessary to understand mathematically intensive signal processing algorithms have been incorporated into the software, along with interactive demonstrations that boost student understanding of concepts such as convolution and the relation between different signal domains. A set of detailed assessments evaluating the application has been conducted with graduate and senior-level undergraduate students.
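As a concrete illustration of the kind of exercise such modules support, here is a minimal sketch (not AJDSP code, just a NumPy analogue of its convolution and FFT modules) showing that time-domain convolution matches frequency-domain multiplication:

```python
import numpy as np

# Linear convolution of a signal x with a filter h, computed two ways:
# directly in the time domain, and via zero-padded DFTs (the convolution
# theorem). Both are core operations in a DSP teaching tool.
x = np.array([1.0, 2.0, 3.0, 4.0])
h = np.array([1.0, -1.0, 0.5])

y_time = np.convolve(x, h)                 # time-domain linear convolution

N = len(x) + len(h) - 1                    # pad length to avoid circular wrap
y_freq = np.fft.ifft(np.fft.fft(x, N) * np.fft.fft(h, N)).real

print(np.allclose(y_time, y_freq))         # True: the two methods agree
```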
Contributors: Ranganath, Suhas (Author) / Spanias, Andreas (Thesis advisor) / Tepedelenlioğlu, Cihan (Committee member) / Tsakalis, Konstantinos (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Quantum computing has the potential to revolutionize the signal-processing field by providing more efficient methods for analyzing signals. This thesis explores the application of quantum computing in signal analysis-synthesis for compression applications. More specifically, the study focuses on two key approaches: the quantum Fourier transform (QFT) and quantum linear prediction (QLP). The research is motivated by the potential advantages offered by quantum computing in massive signal processing tasks, and it presents novel quantum circuit designs for the QFT, quantum autocorrelation, and QLP, enabling signal analysis-synthesis using quantum algorithms. The two approaches are explained as follows. The QFT demonstrates the potential for improved speed in quantum computing compared to classical methods. This thesis focuses on quantum encoding of signals and on designing quantum algorithms for signal analysis-synthesis and signal compression using QFTs. Comparative studies are conducted to evaluate quantum computations for Fourier transform applications, considering signal-to-noise ratio (SNR) results; the effects of qubit precision and quantum noise are also analyzed. The QFT algorithm is also developed in the J-DSP simulation environment, providing hands-on laboratory experiences for signal-processing students, and user-friendly J-DSP simulation programs for QFT-based signal analysis-synthesis using peak picking and psychoacoustic perceptual selection are developed. Further, this research is extended to analyze the autocorrelation of the signal using QFTs and to develop a quantum linear prediction (QLP) algorithm for speech processing applications. QFTs and inverse QFTs (IQFTs) are used to compute the quantum autocorrelation of the signal, and the HHL algorithm is modified and used to compute solutions of the linear equations using quantum computing.
The performance of the QLP algorithm is evaluated for system identification, spectral estimation, and speech analysis-synthesis, and comparisons are made between QLP and classical linear prediction (CLP) results. The results demonstrate the following: effective quantum circuits for accurate QFT-based speech analysis-synthesis, evaluation of performance under quantum noise, the design of accurate quantum autocorrelation, and the development of a modified HHL algorithm for efficient QLP. Overall, this thesis contributes to the research on quantum computing for signal processing applications and provides a foundation for further exploration of quantum algorithms for signal analysis-synthesis.
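Since the QFT on n qubits is mathematically the unitary DFT on 2^n amplitudes, its action can be checked classically. A hedged sketch (illustrative only; the actual work uses quantum circuits, not explicit matrices) comparing a directly constructed QFT matrix against NumPy's matching normalized DFT, with a signal amplitude-encoded into a state vector:

```python
import numpy as np

# Build the n-qubit QFT as an explicit unitary matrix and verify it acts
# as the (unitary-normalized) inverse DFT on an amplitude-encoded signal.
n = 3
dim = 2 ** n
omega = np.exp(2j * np.pi / dim)
qft = np.array([[omega ** (j * k) for k in range(dim)]
                for j in range(dim)]) / np.sqrt(dim)

signal = np.arange(1.0, dim + 1.0)
state = signal / np.linalg.norm(signal)        # amplitude encoding: unit norm

transformed = qft @ state
reference = np.fft.ifft(state) * np.sqrt(dim)  # same sign convention as the QFT

print(np.allclose(transformed, reference))     # True
```

The unitarity of the matrix is what guarantees the encoded signal's norm (total probability) is preserved.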
Contributors: Sharma, Aradhita (Author) / Spanias, Andreas (Thesis advisor) / Tepedelenlioğlu, Cihan (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
In a sensor array, some of the elements may fail due to hardware faults. The resulting missing data can distort the beam pattern or decrease the accuracy of direction-of-arrival (DOA) estimation. Therefore, considerable research has been conducted to develop algorithms that can estimate the missing signal information. Conversely, with such algorithms, array elements can also be selectively turned off while the missing information is successfully recovered, saving power consumption and hardware cost.

Conventional approaches to array element failures are mainly based on interpolation or on sequential learning algorithms. Both rely heavily on prior knowledge, such as information about the failures or a training dataset without missing data. In addition, since most of the existing approaches are developed for DOA estimation, their recovery target is usually the covariance matrix rather than the signal matrix.

In this thesis, a new signal recovery method based on matrix completion (MC) theory is introduced. It aims to directly refill the absent entries of the signal matrix without any prior knowledge. We propose a novel overlapping reshaping method to satisfy the applicability conditions of MC algorithms. Compared to other existing MC-based approaches, the proposed method provides a higher probability of successful recovery. The thesis describes the principle of the algorithms and analyzes the performance of the method. A few application examples with simulation results are also provided.
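A hedged sketch of the general matrix completion idea (iterative singular-value thresholding with the observed entries re-imposed at each step; this is a textbook variant for illustration, not the thesis's overlapping reshaping method):

```python
import numpy as np

# Complete a partially observed low-rank matrix: alternately soft-threshold
# the singular values and re-impose the observed entries.
rng = np.random.default_rng(0)
M = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 20))  # rank 2
mask = rng.random(M.shape) < 0.6          # roughly 60% of entries observed

X = np.where(mask, M, 0.0)                # unobserved entries start at zero
for _ in range(300):
    u, s, vt = np.linalg.svd(X, full_matrices=False)
    X = u @ np.diag(np.maximum(s - 0.1, 0.0)) @ vt  # shrink singular values
    X[mask] = M[mask]                     # data consistency on observed set

rel_err = np.linalg.norm(X - M) / np.linalg.norm(M)
print(rel_err)                            # small: unobserved entries recovered
```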
Contributors: Fan, Jie (Author) / Spanias, Andreas (Thesis advisor) / Tepedelenlioğlu, Cihan (Committee member) / Tsakalis, Konstantinos (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
Increasing interest in individualized treatment strategies for prevention and treatment of health disorders has created a new application domain for dynamic modeling and control. Standard population-level clinical trials, while useful, are not the most suitable vehicle for understanding the dynamics relating dosage changes to patient response. A secondary analysis of intensive longitudinal data from a naltrexone intervention for fibromyalgia, examined in this dissertation, shows the promise of system identification and control. This includes data-centric identification methods such as Model-on-Demand, which are attractive techniques for estimating nonlinear dynamical systems from noisy data. These methods rely on generating a local function approximation using a database of regressors at the current operating point, with the process repeated at every new operating condition. This dissertation examines generating input signals for data-centric system identification by developing a novel framework of geometric distribution of regressors and time-indexed output points, in finite-dimensional space, to generate sufficient support for the estimator. The input signals are generated while imposing “patient-friendly” constraints on the design as a means to operationalize single-subject clinical trials. These optimization-based problem formulations are examined for linear time-invariant systems and block-structured Hammerstein systems, and the results are contrasted with alternative designs based on Weyl's criterion. Numerical solution of the resulting nonconvex optimization problems is proposed through semidefinite programming approaches for polynomial optimization and through nonlinear programming methods. It is shown that useful bounds on the objective function can be calculated through relaxation procedures, and that the data-centric formulations are amenable to sparse polynomial optimization.
In addition, input design problems are formulated for achieving a desired output and a specified input spectrum. Numerical examples illustrate the benefits of the input signal design formulations, including an example of a hypothetical clinical trial using the drug gabapentin. In the final part of the dissertation, the mixed logical dynamical framework for hybrid model predictive control is extended to incorporate a switching time strategy, where decisions are made at some integer multiple of the sample time, and manipulation of only one input at a given sample time among multiple inputs. These considerations are important for clinical use of the algorithm.
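To make the input-design setting concrete, a small sketch (parameters are illustrative assumptions, not taken from the dissertation) of a multisine excitation with the kind of amplitude constraint a “patient-friendly” design would impose:

```python
import numpy as np

# A four-harmonic multisine over one period, scaled so the dose-like input
# never exceeds an amplitude limit of 1.
N, harmonics = 256, [1, 3, 5, 7]
t = np.arange(N)
rng = np.random.default_rng(2)
phases = rng.uniform(0.0, 2.0 * np.pi, len(harmonics))  # randomized phases

u = sum(np.cos(2.0 * np.pi * k * t / N + p)
        for k, p in zip(harmonics, phases))
u = u / np.max(np.abs(u))                 # enforce the |u| <= 1 constraint

print(np.max(np.abs(u)))                  # 1.0 after scaling
```

Randomizing phases while fixing the harmonic set keeps the input spectrum concentrated on the chosen frequencies, which is the spirit of spectrum-specified input design.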
Contributors: Deśapāṇḍe, Sunīla (Author) / Rivera, Daniel E. (Thesis advisor) / Peet, Matthew M. (Committee member) / Si, Jennie (Committee member) / Tsakalis, Konstantinos S. (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
This thesis addresses control design for fixed-wing air-breathing aircraft. Four aircraft with distinct dynamical properties are considered: a scramjet-powered hypersonic (100-foot-long, X-43-like, wedge-shaped) aircraft with flexible modes operating near Mach 8 at 85,000 ft, a NASA HiMAT (Highly Maneuverable Aircraft Technology) F-18 aircraft, a McDonnell Douglas AV-8A Harrier aircraft, and a Vought F-8 Crusader aircraft. A two-input two-output (TITO) longitudinal LTI (linear time-invariant) dynamical model is used for each aircraft, and control design trade studies are conducted for each. Emphasis is placed on the hypersonic vehicle because of its complex nonlinear (unstable, non-minimum-phase, flexible) dynamics and the uncertainty associated with hypersonic flight (Mach $>$ 5, shocks and high temperatures on leading edges). Two plume models are used for the hypersonic vehicle: an old plume model and a new plume model. The old plume model is simple and assumes a typical decaying pressure distribution for the aft nozzle. The new plume model uses Newtonian impact theory and a nonlinear solver to compute the aft nozzle pressure distribution. Multivariable controllers were generated using standard weighted $H_{\infty}$ mixed-sensitivity optimization as well as a new input-disturbance-weighted mixed-sensitivity framework that attempts to achieve good multivariable properties at both the controls (plant inputs) and the errors (plant outputs). Classical inner-outer (PD-PI) structures, both partially centralized and decentralized, were also used. It is shown that while these classical (sometimes partially centralized) PD-PI structures can generate results comparable to the multivariable controllers (e.g., for the hypersonic vehicle, Harrier, and F-8), considerable tuning (iterative optimization) is often essential. This is especially true for the highly coupled hypersonic vehicle, justifying the need for a good multivariable control design tool. Fundamental control design tradeoffs are presented for each aircraft, and comprehensively for the hypersonic aircraft.
In short, the thesis attempts to shed light on when complex controllers are essential and when simple structures are sufficient for achieving control designs with good multivariable loop properties at both the errors (plant outputs) and the controls (plant inputs).
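The mixed-sensitivity quantities at the heart of these designs can be sketched numerically. A toy SISO example (an assumed first-order plant and PI-type controller, not any of the thesis aircraft models) evaluating the weighted sensitivity on a frequency grid:

```python
import numpy as np

# Frequency-gridded sensitivity functions for a toy loop: P is a first-order
# plant, C a PI-type controller, W1 a weight penalizing low-frequency error.
w = np.logspace(-2, 2, 500)
s = 1j * w

P = 1.0 / (s + 1.0)                       # assumed plant
C = 10.0 * (s + 1.0) / s                  # assumed PI-type controller
L = P * C                                 # loop transfer function on the grid

S = 1.0 / (1.0 + L)                       # sensitivity (error response)
T = L / (1.0 + L)                         # complementary sensitivity

W1 = (s / 2.0 + 1.0) / (s + 0.01)         # weight: large at low frequency
peak = np.max(np.abs(W1 * S))             # grid estimate of ||W1 S||_inf

print(np.allclose(S + T, 1.0))            # True: S + T = 1 at all frequencies
print(peak < 1.0)                         # True: the weighted spec is met
```

The identity $S + T = 1$ is why pushing down the error response at the outputs necessarily trades off against the response at the controls, which is the tension the input-disturbance-weighted framework addresses.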
Contributors: Mondal, Kaustav (Author) / Rodriguez, Armando Antonio (Thesis advisor) / Tsakalis, Kostas (Committee member) / Si, Jennie (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Control engineering offers a systematic and efficient approach to optimizing the effectiveness of individually tailored treatment and prevention policies, also known as adaptive or "just-in-time" behavioral interventions. These types of interventions represent promising strategies for addressing many significant public health concerns. This dissertation explores the development of decision algorithms for adaptive sequential behavioral interventions using dynamical systems modeling, control engineering principles, and formal optimization methods. A novel gestational weight gain (GWG) intervention involving multiple intervention components and featuring a pre-defined, clinically relevant set of sequence rules serves as an excellent example of a sequential behavioral intervention; it is examined in detail in this research.

A comprehensive dynamical systems model for the GWG behavioral interventions is developed, demonstrating how to integrate a mechanistic energy balance model with dynamical formulations of behavioral models such as the Theory of Planned Behavior and self-regulation. The self-regulation component is further enhanced with advanced controller formulations; these model-based controller approaches give the user significant flexibility in describing a participant's self-regulatory behavior through the tuning of adjustable controller parameters. The dynamic simulation model demonstrates proof of concept for how self-regulation and adaptive interventions influence GWG, how intra-individual and inter-individual variability play a critical role in determining intervention outcomes, and how decision rules can be evaluated.
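The mechanistic energy-balance core of such a model can be sketched in a few lines (coefficients are illustrative assumptions, not the model's actual parameters):

```python
# A first-order energy-balance update: daily weight change proportional to
# the gap between energy intake and expenditure.
RHO = 7700.0                  # assumed kcal per kg of tissue (illustrative)
weight = 70.0                 # starting weight in kg (illustrative)
for day in range(7):
    intake, expenditure = 2500.0, 2200.0   # kcal/day, assumed constant
    weight += (intake - expenditure) / RHO

print(round(weight, 3))       # 70.273 after one week at +300 kcal/day
```

In the full model, the behavioral components (e.g., self-regulation acting as a feedback controller) would modulate `intake` over time rather than leaving it constant.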

Furthermore, a novel intervention decision paradigm using a hybrid model predictive control framework is developed to generate sequential decision policies in closed loop. Clinical considerations are systematically taken into account through a user-specified dosage sequence table corresponding to the sequence rules, constraints enforcing the adjustment of one input at a time, and a switching time strategy accounting for the difference in frequency between intervention decision points and sampling intervals. Simulation studies illustrate the potential usefulness of the intervention framework.

The final part of the dissertation presents a model scheduling strategy relying on gain scheduling to address nonlinearities in the model, and introduces a cascade filter design for dual-rate control systems to address scenarios with variable sampling rates. These extensions are important for addressing real-life scenarios in the GWG intervention.
Contributors: Dong, Yuwen (Author) / Rivera, Daniel E (Thesis advisor) / Dai, Lenore (Committee member) / Forzani, Erica (Committee member) / Rege, Kaushal (Committee member) / Si, Jennie (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
The availability of data for monitoring and controlling the electrical grid has increased exponentially over the years in both resolution and quantity, leaving a large data footprint. This dissertation is motivated by the need for equivalent representations of grid data in lower-dimensional feature spaces so that machine learning algorithms can be employed for a variety of purposes. To achieve that without sacrificing the interpretability of the results, the dissertation leverages the physics behind power systems, the well-known laws that underlie this man-made infrastructure, and the nature of the underlying stochastic phenomena that define the system operating conditions as the backbone for modeling data from the grid.

The first part of the dissertation introduces a new graph signal processing (GSP) framework for the power grid, Grid-GSP, and applies it to the voltage phasor measurements that characterize the overall system state of the power grid. Concepts from GSP are used in conjunction with known power system models to highlight the low-dimensional structure in the data and to present generative models for voltage phasor measurements. Applications in which the Grid-GSP-based generative models are used, such as identification of graphical communities, network inference, interpolation of missing data, detection of false data injection attacks, and data compression, are explored.
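The GSP machinery underlying Grid-GSP can be illustrated on a toy graph: the graph Fourier transform is the eigenbasis of the graph Laplacian, and smooth (voltage-like) signals concentrate their energy in the low-frequency eigenvectors. A minimal sketch, assuming a hypothetical 4-node path graph rather than any real grid topology:

```python
import numpy as np

# Graph Fourier transform on a 4-node path graph: the Laplacian eigenbasis
# plays the role the DFT basis plays for time signals.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
Lap = np.diag(A.sum(axis=1)) - A          # combinatorial graph Laplacian
eigvals, U = np.linalg.eigh(Lap)          # ascending "graph frequencies"

x = np.array([1.0, 1.1, 1.2, 1.3])        # a smooth signal on the graph
x_hat = U.T @ x                           # graph Fourier transform

print(np.allclose(U @ x_hat, x))          # True: the GFT is invertible
print(int(np.argmax(np.abs(x_hat))))      # 0: energy sits at the lowest frequency
```

This low-pass structure is what makes interpolation of missing samples and compression feasible: only a few GFT coefficients carry most of the signal.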

The second part of the dissertation develops a joint statistical description of solar photovoltaic (PV) power and the outdoor temperature, which can lead to better management of power generation resources so that electricity demand (such as air conditioning) and supply from solar power remain matched in the face of stochasticity. The low-rank structure inherent in solar PV power data is used for forecasting and for detecting partial-shading faults in solar panels.
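The low-rank idea can be sketched with synthetic data: if daily PV profiles share one clear-sky shape, a days-by-hours matrix is essentially rank one and a truncated SVD captures it. A hedged illustration with assumed synthetic profiles, not real PV measurements:

```python
import numpy as np

# A days-by-hours PV matrix built from one clear-sky shape scaled per day is
# exactly rank one, so its leading singular triplet reconstructs it.
hours = np.linspace(0.0, np.pi, 24)
profile = np.sin(hours)                       # idealized daily generation shape
scale = 0.5 + 0.5 * np.random.default_rng(1).random(30)
X = np.outer(scale, profile)                  # 30 days x 24 hours

u, s, vt = np.linalg.svd(X, full_matrices=False)
X1 = s[0] * np.outer(u[:, 0], vt[0])          # rank-1 reconstruction

print(np.allclose(X1, X))                     # True: one component suffices
```

With real data, departures from the dominant component (rather than exact equality) are what flag anomalies such as partial shading.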
Contributors: Ramakrishna, Raksha (Author) / Scaglione, Anna (Thesis advisor) / Cochran, Douglas (Committee member) / Spanias, Andreas (Committee member) / Vittal, Vijay (Committee member) / Zhang, Junshan (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
Over the last decade, deep neural networks, also known as deep learning, combined with large databases and specialized hardware for computation, have made major strides in important areas such as computer vision, computational imaging, and natural language processing. However, such frameworks currently suffer from some drawbacks. For example, it is generally not clear how the architectures should be designed for different applications or how the neural networks behave under different input perturbations, and it is not easy to make the internal representations and parameters more interpretable. In this dissertation, I propose building constraints into the feature maps, parameters, and design of algorithms involving neural networks for applications in low-level vision problems, such as compressive imaging and multi-spectral image fusion, and in high-level inference problems, including activity and face recognition. Depending on the application, such constraints can be used to design architectures that are invariant or robust to certain nuisance factors, more efficient, and, in some cases, more interpretable. Through extensive experiments on real-world datasets, I demonstrate these advantages of the proposed methods over conventional frameworks.
Contributors: Lohit, Suhas Anand (Author) / Turaga, Pavan (Thesis advisor) / Spanias, Andreas (Committee member) / Li, Baoxin (Committee member) / Jayasuriya, Suren (Committee member) / Arizona State University (Publisher)
Created: 2019