Matching Items (10)
Description
In the late 1960s, Granger published a seminal study on causality in time series, using linear interdependencies and information transfer. Recent developments in information theory have introduced new methods to investigate the transfer of information in dynamical systems. Using concepts from chaos and Markov theory, many of these methods have evolved to capture non-linear relations and information flow between coupled dynamical systems, with applications to fields like biomedical signal processing. This thesis deals with the application of information theory to non-linear multivariate time series and develops measures of information flow to identify significant driver and response (driven) components in networks of coupled sub-systems with variable coupling strength and direction (uni- or bi-directional) for each connection. Transfer Entropy (TE) is used to quantify pairwise directional information. Four TE-based measures of information flow are proposed, namely TE Outflow (TEO), TE Inflow (TEI), TE Net flow (TEN), and Average TE flow (ATE). First, the reliability of the information flow measures is evaluated on models with and without noise, and the driver and response sub-systems in these models are identified. Second, these measures are applied to electroencephalographic (EEG) data from two patients with focal epilepsy. The analysis showed dominant directions of information flow between brain sites and identified the epileptogenic focus as the system component that typically attains the highest value of the proposed measures (for example, ATE). Statistical comparisons of pre-seizure (preictal) and post-seizure (postictal) information flow also showed that the driving of the brain by the focus breaks down after seizure onset. These findings shed light on the function of the epileptogenic focus and on the mechanisms of ictogenesis. They are expected to contribute to the diagnosis of epilepsy, for example through accurate identification of the epileptogenic focus from interictal periods, as well as to the development of better seizure detection, prediction, and control methods, for example by isolating pathologic areas of excessive information flow through electrical stimulation.
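
The flow measures named above lend themselves to a compact computation once a pairwise transfer-entropy matrix is available. Below is a minimal sketch, not taken from the thesis: the function name, the toy matrix, and the specific normalizations (for example, ATE taken as the mean outgoing TE per connection) are illustrative assumptions and may differ from the thesis's exact definitions.

```python
# Illustrative sketch (not from the thesis): candidate flow measures computed
# from a pairwise transfer-entropy matrix. Exact definitions may differ.
import numpy as np

def te_flow_measures(te):
    """te[i, j] = transfer entropy from component i to component j (diagonal ignored)."""
    te = np.asarray(te, dtype=float)
    n = te.shape[0]
    off = ~np.eye(n, dtype=bool)                  # exclude self-transfer terms
    teo = np.where(off, te, 0.0).sum(axis=1)      # TE Outflow: total TE sent by each component
    tei = np.where(off, te, 0.0).sum(axis=0)      # TE Inflow: total TE received by each component
    ten = teo - tei                               # TE Net flow: outflow minus inflow
    ate = teo / (n - 1)                           # Average TE flow: mean outgoing TE per connection
    return teo, tei, ten, ate

# Toy example: component 0 drives components 1 and 2.
te = np.array([[0.0, 0.8, 0.7],
               [0.1, 0.0, 0.2],
               [0.1, 0.2, 0.0]])
teo, tei, ten, ate = te_flow_measures(te)
print("candidate driver:", int(np.argmax(ate)))   # expected: 0
```
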
ContributorsPrasanna, Shashank (Author) / Jassemidis, Leonidas (Thesis advisor) / Tsakalis, Konstantinos (Thesis advisor) / Tepedelenlioğlu, Cihan (Committee member) / Arizona State University (Publisher)
Created2011
Description
The field of education has benefited immensely from major breakthroughs in technology. The arrival of computers and the internet made student-teacher interaction across different parts of the world viable, extending the reach of the educator to hitherto remote corners of the world. The more recent arrival of mobile phones has the potential to provide the next paradigm shift in the way education is conducted: it combines the universal reach and powerful visualization capabilities of the computer with intimacy and portability. Engineering education is a field that can exploit the benefits of mobile devices to enhance learning and spread essential technical know-how to different parts of the world. In this thesis, I present AJDSP, an Android application evolved from JDSP that provides an intuitive and easy-to-use environment for signal processing education. AJDSP is a graphical programming laboratory for digital signal processing developed for the Android platform. It is designed to be useful both as a supplement to traditional classroom learning and as a tool for self-learning. The architecture of AJDSP is based on the Model-View-Controller paradigm, optimized for the Android platform. The extensive set of function modules covers a wide range of basic signal processing areas such as convolution, the fast Fourier transform, the z-transform, and filter design. The simple and intuitive user interface, inspired by iJDSP, is designed to facilitate ease of navigation and to provide the user with an intimate learning environment. Rich visualizations necessary to understand mathematically intensive signal processing algorithms have been incorporated into the software. Interactive demonstrations that boost student understanding of concepts like convolution and the relation between different signal domains have also been developed. A set of detailed assessments of the application has been conducted with graduate and senior-level undergraduate students.
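
As a rough illustration of the kind of computation behind a convolution module and a "relation between different signal domains" demonstration, here is a small sketch; it is not AJDSP code (which targets Android), just an assumed numpy example showing that linear convolution in the time domain matches multiplication of zero-padded DFTs.

```python
# Illustrative sketch (not AJDSP code): linear convolution in time equals
# multiplication of zero-padded DFTs in frequency.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])      # input signal (toy values)
h = np.array([0.5, 0.5])                # impulse response (2-point averager)

y_time = np.convolve(x, h)              # direct linear convolution

N = len(x) + len(h) - 1                 # pad to avoid circular wrap-around
y_freq = np.fft.ifft(np.fft.fft(x, N) * np.fft.fft(h, N)).real

assert np.allclose(y_time, y_freq)      # the two domains agree
print(y_time)                           # [0.5 1.5 2.5 3.5 2. ]
```
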
ContributorsRanganath, Suhas (Author) / Spanias, Andreas (Thesis advisor) / Tepedelenlioğlu, Cihan (Committee member) / Tsakalis, Konstantinos (Committee member) / Arizona State University (Publisher)
Created2013
Description
Quantum computing has the potential to revolutionize the signal-processing field by providing more efficient methods for analyzing signals. This thesis explores the application of quantum computing to signal analysis-synthesis for compression applications. More specifically, the study focuses on two key approaches: the quantum Fourier transform (QFT) and quantum linear prediction (QLP). The research is motivated by the potential advantages offered by quantum computing in massive signal processing tasks and presents novel quantum circuit designs for the QFT, quantum autocorrelation, and QLP, enabling signal analysis-synthesis using quantum algorithms. The two approaches are described as follows. The QFT demonstrates the potential for improved speed in quantum computing compared to classical methods. This thesis focuses on quantum encoding of signals and on designing quantum algorithms for signal analysis-synthesis and signal compression using QFTs. Comparative studies are conducted to evaluate quantum computations for Fourier transform applications, considering signal-to-noise-ratio results, and the effects of qubit precision and quantum noise are also analyzed. The QFT algorithm is also developed in the J-DSP simulation environment, providing hands-on laboratory experiences for signal-processing students; user-friendly J-DSP simulation programs on QFT-based signal analysis-synthesis using peak picking and perceptual selection based on psychoacoustics are developed. Further, this research is extended to analyze the autocorrelation of the signal using QFTs and to develop a quantum linear prediction (QLP) algorithm for speech processing applications. QFTs and inverse QFTs are used to compute the quantum autocorrelation of the signal, and the HHL algorithm is modified and used to compute the solutions of linear equations on a quantum computer. The performance of the QLP algorithm is evaluated for system identification, spectral estimation, and speech analysis-synthesis, and comparisons are performed between QLP and CLP results. The results demonstrate the following: effective quantum circuits for accurate QFT-based speech analysis-synthesis, evaluation of performance under quantum noise, design of accurate quantum autocorrelation, and development of a modified HHL algorithm for efficient QLP. Overall, this thesis contributes to research on quantum computing for signal processing applications and provides a foundation for further exploration of quantum algorithms for signal analysis-synthesis.
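
Since the QFT acting on an amplitude-encoded signal is the unitary DFT, a small state-vector sketch can be checked against a classical FFT. This is an illustrative sketch under standard conventions, not the thesis's circuit designs, and it ignores the qubit-precision and quantum-noise effects analyzed in the work.

```python
# Illustrative sketch (not the thesis's circuits): the n-qubit QFT as a unitary
# DFT matrix applied to an amplitude-encoded signal, checked against numpy's FFT.
import numpy as np

def qft_matrix(n_qubits):
    """Unitary matrix of the quantum Fourier transform on n_qubits qubits."""
    N = 2 ** n_qubits
    j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return np.exp(2j * np.pi * j * k / N) / np.sqrt(N)

n = 3
signal = np.random.randn(2 ** n)
state = signal / np.linalg.norm(signal)           # amplitude encoding (unit norm)

qft_state = qft_matrix(n) @ state                 # apply the QFT to the state vector
reference = np.fft.ifft(state) * np.sqrt(2 ** n)  # standard QFT convention = scaled inverse DFT

assert np.allclose(qft_state, reference)
```
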
ContributorsSharma, Aradhita (Author) / Spanias, Andreas (Thesis advisor) / Tepedelenlioğlu, Cihan (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created2023
Description
Statistical methods have been widely used to understand factors in clinical and public health data. Statistical hypothesis tests are procedures for testing pre-stated hypotheses. The development and properties of these procedures, as well as their performance, are based upon certain assumptions. A desirable property of a statistical test is to maintain validity and to perform well even when these assumptions are not met; a test that maintains such properties is called robust. Mathematical models are typically mechanistic frameworks used to study the dynamic interactions between components (mechanisms) of a system, and how these interactions give rise to changes in the behavior (patterns) of the system as a whole over time.

In this thesis, I have developed a study that uses novel techniques to link robust statistical tests and mathematical modeling methods, guided by limited data from developed and developing regions, in order to address pressing clinical and epidemiological questions of interest. The procedure in this study consists of three primary steps, namely, data collection, uncertainty quantification in the data, and linking a dynamic model to the collected data.

The first part of the study focuses on designing, collecting, and summarizing empirical data from the only national survey of hospitals ever conducted regarding patient controlled analgesia (PCA) practices, covering 168 hospitals across 40 states, in order to assess risks before putting patients on PCA. I used statistical relational models and exploratory data analysis to address the question. The risk factors assessed indicate a great concern for the safety of patients from one healthcare institution to another.

In the second part, I quantify the uncertainty associated with data obtained from the James A Lovell Federal Healthcare Center, primarily to study the effect of Benign Prostatic Hypertrophy (BPH) on sleep architecture in patients with Obstructive Sleep Apnea (OSA). Patients with both OSA and BPH demonstrated significant differences in their sleep architecture in comparison to patients without BPH. One way to validate these differences in sleep architecture between the two groups may be to carry out a similar study that evaluates the effect of some other chronic disease on sleep architecture in patients with OSA.

Additionally, I address theoretical statistical questions such as (1) how to estimate the distribution of a variable in order to retest a null hypothesis when the sample size is limited, and (2) how changes in assumptions (such as monotonicity and nonlinearity) translate into changes in the effect of the independent variable on the outcome variable. To address these questions, I use multiple techniques such as Partial Rank Correlation Coefficient (PRCC)-based sensitivity analysis, fractional polynomials, and statistical relational models.

In the third part, my goal was to identify socio-economic and environmental risk factors for Visceral Leishmaniasis (VL) and to use the identified critical factors to develop a mathematical model for understanding VL transmission dynamics when the data are highly underreported. I primarily studied the role of age-specific susceptibility and epidemiological quantities on the dynamics of VL in the Indian state of Bihar. The statistical results provided ideas on the choice of the modeling framework and estimates of the model parameters.

In conclusion, this study addressed three primary theoretical modeling-related questions: (1) How can collected data be analyzed when the sample size is limited, and how do modeling assumptions change the results of data analysis? (2) Is it possible to identify hidden associations, and the nonlinearity of these associations, using such underpowered data? (3) How can statistical models provide a more reasonable structure for a mathematical modeling framework, which can in turn be used to understand the dynamics of the system?
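
As one concrete piece of the toolkit mentioned above, here is a minimal sketch of a PRCC-style sensitivity computation: rank-transform the inputs and output, regress the remaining parameters out of both, and correlate the residuals. The function name, toy data, and regression details are illustrative assumptions rather than the dissertation's implementation.

```python
# Illustrative sketch (not the dissertation's code): partial rank correlation
# coefficient (PRCC) between one model parameter and an output, controlling for
# the remaining parameters.
import numpy as np
from scipy.stats import rankdata

def prcc(params, output, j):
    """PRCC of parameter column j against `output`, given samples
    params (n_samples x n_params) and output (n_samples,)."""
    R = np.column_stack([rankdata(c) for c in params.T])    # rank-transform parameters
    y = rankdata(output)                                    # rank-transform output
    others = np.delete(R, j, axis=1)
    X = np.column_stack([np.ones(len(y)), others])          # intercept + other parameters
    res_j = R[:, j] - X @ np.linalg.lstsq(X, R[:, j], rcond=None)[0]
    res_y = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    return np.corrcoef(res_j, res_y)[0, 1]

# Toy check: the output depends monotonically on parameter 0 only.
rng = np.random.default_rng(0)
P = rng.uniform(size=(500, 3))
out = np.exp(P[:, 0]) + 0.05 * rng.normal(size=500)
print(round(prcc(P, out, 0), 2), round(prcc(P, out, 1), 2))  # ~1.0 and ~0.0
```
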
ContributorsGonzalez, Beverly, 1980- (Author) / Castillo-Chavez, Carlos (Thesis advisor) / Mubayi, Anuj (Thesis advisor) / Nuno, Miriam (Committee member) / Arizona State University (Publisher)
Created2015
Description
For a sensor array, some of its elements may fail to work due to hardware failures. The missing data may then distort the beam pattern or decrease the accuracy of direction-of-arrival (DOA) estimation. Therefore, considerable research has been conducted to develop algorithms that can estimate the missing signal information. Conversely, with such algorithms, array elements can also be selectively turned off while the missing information is recovered, which saves power consumption and hardware cost.

Conventional approaches to array element failures are mainly based on interpolation or sequential learning algorithms. Both rely heavily on prior knowledge, such as information about the failures or a training dataset without missing data. In addition, since most of the existing approaches are developed for DOA estimation, their recovery target is usually the covariance matrix rather than the signal matrix.

In this thesis, a new signal recovery method based on matrix completion (MC) theory is introduced. It aims to directly refill the absent entries of the signal matrix without any prior knowledge. A novel overlapping reshaping method is proposed to satisfy the conditions under which MC algorithms apply. Compared to other existing MC-based approaches, the proposed method provides a higher probability of successful recovery. The thesis describes the principle of the algorithms and analyzes the performance of the method. A few application examples with simulation results are also provided.
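
For context on the matrix-completion step, the sketch below shows a generic soft-impute style loop that fills missing entries of an approximately low-rank matrix by iterative singular-value soft-thresholding. It is not the proposed overlapping reshaping method; the threshold, iteration count, and toy data are assumptions for illustration only.

```python
# Illustrative sketch (not the thesis's method): generic soft-impute matrix
# completion by iterative singular-value soft-thresholding.
import numpy as np

def soft_impute(M, observed, tau=1.0, n_iter=200):
    """M: data matrix with arbitrary values at unobserved entries.
    observed: boolean mask of known entries. Returns a completed estimate."""
    X = np.where(observed, M, 0.0)                    # initialize missing entries to zero
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X_low = (U * np.maximum(s - tau, 0.0)) @ Vt   # soft-threshold the singular values
        X = np.where(observed, M, X_low)              # keep observed entries fixed
    return X

# Toy check: a rank-1 "signal matrix" with roughly 30% of its entries missing.
rng = np.random.default_rng(1)
A = np.outer(rng.normal(size=20), rng.normal(size=15))
mask = rng.uniform(size=A.shape) > 0.3
A_hat = soft_impute(A, mask, tau=0.1)
print(np.linalg.norm((A_hat - A)[~mask]) / np.linalg.norm(A[~mask]))  # small relative error
```
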
ContributorsFan, Jie (Author) / Spanias, Andreas (Thesis advisor) / Tepedelenlioğlu, Cihan (Committee member) / Tsakalis, Konstantinos (Committee member) / Arizona State University (Publisher)
Created2016
Description
Analysis of social networks has the potential to provide insights into a wide range of applications. As datasets continue to grow, a key challenge is the lack of a widely applicable algorithmic framework for the detection of statistically anomalous networks and network properties. Unlike traditional signal processing, where models of truth or empirical verification and background data exist and are often well defined, these features are commonly lacking in social and other networks. Here, a novel algorithmic framework for statistical signal processing for graphs is presented. The framework is based on the analysis of the spectral properties of the residuals matrix. It is applied to the detection of innovation patterns in publication networks, leveraging well-studied empirical knowledge from the history of science. Both the framework itself and the application constitute novel contributions, advancing algorithmic and mathematical techniques for graph-based data and the understanding of how novel scientific research emerges. Results indicate the efficacy of the approach and highlight a number of fruitful future directions.
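
A minimal sketch of the general idea, spectral analysis of a residuals matrix, is given below, assuming a degree-based (modularity-style) null model for the expected adjacency. The null model, rank, threshold, and toy graph are illustrative assumptions and not necessarily those used in the dissertation.

```python
# Illustrative sketch (not the dissertation's framework): flag nodes with large
# weight in the leading eigenvectors of a residuals matrix B = A - E[A].
import numpy as np

def residual_spectrum(A, k=2):
    """A: symmetric adjacency matrix. Returns the top-k eigenpairs of the residuals matrix."""
    d = A.sum(axis=1)
    m2 = d.sum()                              # 2 * number of edges
    B = A - np.outer(d, d) / m2               # residuals w.r.t. degree-based expected edges
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:k]             # largest eigenvalues first
    return w[idx], V[:, idx]

# Toy graph: a small dense clique embedded in a sparse random background.
rng = np.random.default_rng(2)
A = (rng.uniform(size=(60, 60)) < 0.05).astype(float)
A = np.triu(A, 1); A = A + A.T
A[:6, :6] = 1.0; np.fill_diagonal(A, 0.0)     # nodes 0..5 form the embedded clique
w, V = residual_spectrum(A)
scores = np.abs(V[:, 0])
print(np.argsort(scores)[::-1][:6])           # likely recovers nodes 0..5
```
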
ContributorsBliss, Nadya Travinin (Author) / Laubichler, Manfred (Thesis advisor) / Castillo-Chavez, Carlos (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created2015
Description
This work considers the problem of multiple-object detection and tracking in two complex time-varying environments: urban terrain and underwater. Tracking multiple radar targets in urban environments is first investigated by exploiting multipath signal returns; wideband underwater acoustic (UWA) communications channels are estimated using adaptive learning methods; and multiple UWA communications users are detected by designing the transmit signal to match the environment. For the urban environment, a multi-target tracking algorithm is proposed that integrates multipath-to-measurement association with the probability hypothesis density method implemented using particle filtering. The algorithm is designed to track an unknown, time-varying number of targets by extracting information from multiple measurements due to multipath returns in the urban terrain. The path likelihood probability is calculated by considering associations between measurements and multipath returns, and an adaptive clustering algorithm is used to estimate the number of targets and their corresponding parameters. The performance of the proposed algorithm is demonstrated for different multiple-target scenarios and evaluated using the optimal subpattern assignment (OSPA) metric. The underwater environment provides a very challenging communication channel due to its highly time-varying nature, which results in large distortions due to multipath and Doppler scaling, as well as frequency-dependent path loss. A model-based wideband UWA channel estimation algorithm is first proposed to estimate the channel support and the wideband spreading function coefficients. A nonlinear frequency-modulated signaling scheme is then proposed that is matched to the wideband characteristics of the underwater environment. Constraints on the signal parameters are derived to optimally reduce multiple-access interference and the UWA channel effects. The signaling scheme is compared to a code division multiple access (CDMA) scheme to demonstrate its improved bit error rate performance. The overall multi-user communication system performance is finally analyzed by first estimating the UWA channel and then designing the signaling scheme for multiple communications users.
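
For reference, the optimal subpattern assignment (OSPA) metric mentioned above can be sketched in a few lines; the cutoff c and order p below are assumed values, and this is a generic implementation rather than the dissertation's evaluation code.

```python
# Illustrative sketch (not the dissertation's code): the OSPA metric between an
# estimated and a true set of target states, with cutoff c and order p.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def ospa(X, Y, c=10.0, p=2):
    """X, Y: arrays of shape (m, dim) and (n, dim) of estimated and true target states."""
    m, n = len(X), len(Y)
    if m == 0 and n == 0:
        return 0.0
    if m > n:                                   # convention: X is the smaller set
        X, Y, m, n = Y, X, n, m
    D = np.minimum(cdist(X, Y), c) ** p         # cutoff distances between the two point sets
    rows, cols = linear_sum_assignment(D)       # optimal assignment of estimates to truths
    loc_err = D[rows, cols].sum()
    card_err = (c ** p) * (n - m)               # penalty for the cardinality mismatch
    return ((loc_err + card_err) / n) ** (1.0 / p)

truth = np.array([[0.0, 0.0], [5.0, 5.0], [10.0, 0.0]])
estimate = np.array([[0.2, -0.1], [5.3, 4.8]])          # one target missed
print(round(ospa(estimate, truth), 3))
```
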
ContributorsZhou, Meng (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Tepedelenlioğlu, Cihan (Committee member) / Kovvali, Narayan (Committee member) / Berisha, Visar (Committee member) / Arizona State University (Publisher)
Created2014
Description
Peptide microarrays have been used in molecular biology to profile immune responses and develop diagnostic tools. When the microarrays are printed with random peptide sequences, they can be used to identify antigen-antibody binding patterns, or immunosignatures. In this thesis, an advanced signal processing method is proposed to estimate epitope antigen subsequences as well as to identify mimotope antigen subsequences, which mimic the structure of epitopes, from random-sequence peptide microarrays. The method first maps peptide sequences to linear expansions of highly localized one-dimensional (1-D) time-varying signals and uses a time-frequency processing technique to detect recurring patterns in subsequences. This technique is matched to the aforementioned mapping scheme, and it allows for an inherent analysis of how substitutions in the subsequences can affect antibody binding strength. The performance of the proposed method is demonstrated by estimating epitopes and identifying potential mimotopes for eight monoclonal antibody samples.

The proposed mapping is generalized to express information on a protein's sequence location, structure and function onto a highly localized three-dimensional (3-D) Gaussian waveform. In particular, as analysis of protein homology has shown that incorporating different kinds of information into an alignment process can yield more robust alignment results, a pairwise protein structure alignment method is proposed based on a joint similarity measure of multiple mapped protein attributes. The 3-D mapping allocates protein properties into distinct regions in the time-frequency plane in order to simplify the alignment process by including all relevant information into a single, highly customizable waveform. Simulations demonstrate the improved performance of the joint alignment approach to infer relationships between proteins, and they provide information on mutations that cause changes to both the sequence and structure of a protein.

In addition to the biology-based signal processing methods, a statistical method is considered that uses a physics-based model to improve processing performance. In particular, an externally developed physics-based model for sea clutter is examined when detecting a low radar cross-section target in heavy sea clutter. This novel model includes a process that generates random dynamic sea clutter based on the governing physics of water gravity and capillary waves, and a finite-difference time-domain electromagnetics simulation process, based on Maxwell's equations, that propagates the radar signal. A subspace clutter suppression detector is applied to remove dominant clutter eigenmodes, and its improved performance over matched filtering is demonstrated using simulations.
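
The subspace clutter suppression idea can be sketched generically: estimate the dominant clutter eigenmodes from clutter-only training data and project them out of each snapshot before matched filtering. The clutter model, rank, and signatures below are assumptions for illustration, not the thesis's detector or sea-clutter simulation.

```python
# Illustrative sketch (not the thesis's detector): remove dominant clutter
# eigenmodes by projection before applying a matched filter.
import numpy as np

def clutter_projector(training, rank):
    """training: (n_snapshots, n) clutter-only snapshots (rows). Returns the projector
    onto the orthogonal complement of the top-`rank` clutter eigenmodes."""
    R = training.T @ training.conj() / training.shape[0]   # sample clutter covariance
    w, V = np.linalg.eigh(R)
    U = V[:, np.argsort(w)[::-1][:rank]]                   # dominant clutter subspace
    return np.eye(R.shape[0]) - U @ U.conj().T

rng = np.random.default_rng(3)
n = 64
k = np.arange(n)
target = np.exp(2j * np.pi * 0.10 * k)                     # toy target signature
clutter = np.exp(2j * np.pi * 0.02 * k)                    # one dominant clutter mode
training = rng.normal(size=(200, 1)) * clutter + 0.05 * rng.normal(size=(200, n))

P = clutter_projector(training, rank=1)
mf = lambda x: np.abs(target.conj() @ x)                   # matched-filter statistic
clutter_only = 20.0 * clutter
with_target = 20.0 * clutter + 0.5 * target
print(round(mf(clutter_only), 1), round(mf(P @ clutter_only), 1))  # strong clutter response vs ~0
print(round(mf(with_target), 1), round(mf(P @ with_target), 1))    # target term (~0.5*n) survives the projection
```
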
ContributorsO'Donnell, Brian (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Bliss, Daniel (Committee member) / Johnston, Stephen A. (Committee member) / Kovvali, Narayan (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Arizona State University (Publisher)
Created2014
Description
This dissertation explores the impact of environment-dependent risk on disease dynamics within a Lagrangian modeling perspective, where the identity (defined by place of residency) of individuals is preserved throughout the epidemic process. In Chapter Three, the impact of individuals who refuse to be vaccinated is explored. MMR vaccination and birth rate data from the State of California are used to determine the impact of the anti-vaccine movement on the dynamics of growth of the anti-vaccine sub-population. Dissertation results suggest that under realistic California social dynamics scenarios, it is not possible to revert the influence of anti-vaccine contagion. In Chapter Four, the dynamics of Zika virus are explored in two highly distinct idealized environments defined by a parameter that models highly distinctive levels of risk, the result of vector and host density and vector control measures. The underlying assumption is that these two communities are intimately connected due to economics, with the impact of various patterns of mobility being incorporated via the use of residency times. In short, a highly heterogeneous community is defined by its risk of acquiring a Zika infection within one of two "spaces": one lacking access to health services or effective vector control policies (because of a lack of resources, or because the area is ignored due to high levels of crime, or poverty, or both), while low-risk regions are defined as those with access to solid health facilities and where vector control measures are implemented routinely. It was found that the better connected these communities are, that is, when mobility between risk regions is not hampered, the lower the overall two-patch Zika prevalence. Chapter Five focuses on the dynamics of tuberculosis (TB), a communicable disease, also in an idealized high-low risk setup. The impact of mobility within these two highly distinct TB-risk environments on the dynamics and control of the disease is systematically explored. It is found that collaboration and mobility can, under some circumstances, reduce the overall TB burden.
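
A minimal sketch of a Lagrangian (residence-time) two-patch epidemic model of the kind described above is given below; the SIR structure, parameter values, and residence-time matrix are illustrative assumptions, not the dissertation's Zika or TB models.

```python
# Illustrative sketch (not the dissertation's model): a generic two-patch SIR
# system with a residence-time matrix p, where p[i, j] is the fraction of time
# residents of patch i spend in patch j. All parameter values are assumed.
import numpy as np
from scipy.integrate import solve_ivp

beta = np.array([1.2, 0.3])        # transmission rates: high-risk patch 0, low-risk patch 1
gamma = 0.2                        # recovery rate
p = np.array([[0.8, 0.2],          # residence-time (mobility) matrix, rows sum to 1
              [0.1, 0.9]])
N = np.array([1000.0, 1000.0])     # resident population of each patch

def rhs(t, y):
    S, I = y[:2], y[2:4]
    eff_pop = p.T @ N                          # people effectively present in each patch
    risk = beta * (p.T @ I) / eff_pop          # per-patch force of infection where time is spent
    lam = p @ risk                             # force of infection felt by each residency group
    dS = -lam * S
    dI = lam * S - gamma * I
    return np.concatenate([dS, dI])

y0 = np.array([999.0, 1000.0, 1.0, 0.0])       # one initial case among patch-0 residents
sol = solve_ivp(rhs, (0.0, 200.0), y0)
S_final = sol.y[:2, -1]
print("final attack rates:", np.round(1.0 - S_final / N, 2))
```
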
ContributorsMoreno Martínez, Victor Manuel (Author) / Castillo-Chavez, Carlos (Thesis advisor) / Kang, Yun (Committee member) / Mubayi, Anuj (Committee member) / Arizona State University (Publisher)
Created2018
Description
An analysis is presented of a network of distributed receivers encumbered by strong in-band interference. The structure of the information present across such receivers, and how they might collaborate to recover a signal of interest, is studied. Unstructured (random coding) and structured (lattice coding) strategies are investigated for this purpose under an adaptable system model. The asymptotic performance of these strategies is characterized, and algorithms to compute it are developed. A jointly compressed lattice code with proper configuration performs best of all the strategies investigated.
ContributorsChapman, Christian Douglas (Author) / Bliss, Daniel W (Thesis advisor) / Richmond, Christ D (Committee member) / Kosut, Oliver (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Arizona State University (Publisher)
Created2019