Matching Items (6)
Description
Adaptive processing and classification of electrocardiogram (ECG) signals are important in eliminating the strenuous process of manually annotating ECG recordings for clinical use. Such algorithms require robust models whose parameters can adequately describe the ECG signals. Although different dynamic statistical models describing ECG signals currently exist, they depend considerably on a priori information and user-specified model parameters. Also, ECG beat morphologies, which vary greatly across patients and disease states, cannot be uniquely characterized by a single model. In this work, sequential Bayesian methods are used to appropriately model and adaptively select the corresponding model parameters of ECG signals. An adaptive framework based on a sequential Bayesian tracking method is proposed to adaptively select the cardiac parameters that minimize the estimation error, thus precluding the need for pre-processing. Simulations using real ECG data from the online Physionet database demonstrate the improved performance of the proposed algorithm in accurately estimating critical heart disease parameters. In addition, two new approaches to ECG modeling are presented using the interacting multiple model and the sequential Markov chain Monte Carlo technique with adaptive model selection. Both methods can adaptively choose between different models for various ECG beat morphologies without requiring prior ECG information, as demonstrated using real ECG signals.

A supervised Bayesian maximum-likelihood (ML) classifier then uses the estimated model parameters to classify different types of cardiac arrhythmias. However, the lack of sufficient representative training data and the large inter-patient variability pose a challenge to existing supervised learning algorithms, resulting in poor classification performance. In addition, recently developed unsupervised learning methods require a priori knowledge of the number of diseases to cluster the ECG data, a number that often evolves over time. In order to address these issues, an adaptive learning ECG classification method that uses Dirichlet process Gaussian mixture models is proposed. This approach does not place any restriction on the number of disease classes, nor does it require any training data. This algorithm is adapted to be patient-specific by labeling or identifying the generated mixtures using the Bayesian ML method, assuming the availability of labeled training data.
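As a concrete illustration of the nonparametric clustering step, the minimal sketch below clusters per-beat feature vectors with a truncated Dirichlet process Gaussian mixture via scikit-learn's BayesianGaussianMixture. The feature matrix is a random placeholder, not real ECG parameters, and the component cap is an illustrative assumption.

```python
# Sketch: clustering ECG beat feature vectors with a Dirichlet process
# Gaussian mixture, without fixing the number of disease classes upfront.
# `beat_features` is a hypothetical placeholder for per-beat model
# parameters estimated upstream (not real ECG data).
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
beat_features = rng.normal(size=(300, 5))  # 300 beats, 5 parameters each

# Truncated Dirichlet process mixture: n_components is only an upper
# bound; the stick-breaking prior prunes unused components automatically.
dpgmm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    covariance_type="full",
    max_iter=500,
    random_state=0,
)
labels = dpgmm.fit_predict(beat_features)
print("clusters actually used:", np.unique(labels).size)
```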
Contributors: Edla, Shwetha Reddy (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Kovvali, Narayan (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Research on developing new algorithms to improve information on brain functionality and structure is ongoing. Studying neural activity through dipole source localization with electroencephalography (EEG) and magnetoencephalography (MEG) sensor measurements can lead to the diagnosis and treatment of a brain disorder, and can also identify the area of the brain where the disorder originated. Designing advanced localization algorithms that can adapt to environmental changes represents a significant shift from manual diagnosis, which is based on the physician's knowledge and observation, to adaptive and improved brain disorder diagnosis, since these algorithms can track activity that might not be noticed by the human eye. An important consideration for these localization algorithms, however, is to minimize the overall power consumption in order to improve the study and treatment of brain disorders. This thesis considers the problem of estimating the dynamic parameters of neural dipole sources while minimizing the system's overall power consumption; this is achieved by minimizing the number of EEG/MEG measurement sensors without a loss in estimation accuracy. As the EEG/MEG measurement models are related non-linearly to the dipole source locations and moments, these dynamic parameters can be estimated using sequential Monte Carlo methods such as particle filtering. Because a large number of sensors is required to record the EEG/MEG measurements used by the particle filter over long recording periods, a large amount of power is required for storage and transmission. In order to reduce the overall power consumption, two methods are proposed. The first method uses the predicted mean-square estimation error as the performance metric under a maximum power-consumption constraint. The performance metric of the second method uses the distance between the locations of the sensors and the location estimate of the dipole source at the previous time step; this sensor scheduling scheme maximizes the overall signal-to-noise ratio. The performance of both methods is demonstrated using simulated data, and both provide good estimation results with a significant reduction in the number of activated sensors at each time step.
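The second scheduling rule lends itself to a compact sketch: at each time step, activate only the k sensors nearest the previous dipole location estimate. The sensor layout, previous estimate, and k below are illustrative assumptions, not values from the thesis.

```python
# Sketch of the distance-based scheduling rule: activate only the k
# sensors closest to the previous dipole location estimate. The sensor
# layout and estimate are hypothetical placeholders.
import numpy as np

def select_sensors(sensor_positions, prev_estimate, k):
    """Return indices of the k sensors nearest the last location estimate."""
    dists = np.linalg.norm(sensor_positions - prev_estimate, axis=1)
    return np.argsort(dists)[:k]

rng = np.random.default_rng(1)
sensor_positions = rng.uniform(-10, 10, size=(64, 3))  # 64 sensors in 3-D
prev_estimate = np.array([1.0, -2.0, 3.0])             # last dipole estimate
active = select_sensors(sensor_positions, prev_estimate, k=8)
print("active sensors this step:", active)
```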
Contributors: Michael, Stefanos (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Kovvali, Narayan (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Modern measurement schemes for linear dynamical systems are typically designed so that different sensors can be scheduled for use at each time step. To determine which sensors to use, various metrics have been suggested. One such metric is the observability of the system. Observability is a binary condition determining whether a finite number of measurements suffice to recover the initial state. However, to employ observability for sensor scheduling, the binary definition needs to be expanded so that one can measure how observable a system is with a particular measurement scheme, i.e., one needs a metric of observability. Most methods utilizing an observability metric address sensor selection rather than sensor scheduling. In this dissertation, we present a new approach that utilizes observability for sensor scheduling by employing the condition number of the observability matrix as the metric and using column subset selection to create an algorithm that chooses which sensors to use at each time step. To this end, we use a rank-revealing QR factorization algorithm to select sensors. Several numerical experiments demonstrate the performance of the proposed scheme.
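A minimal sketch of the selection step follows, using SciPy's column-pivoted QR as a stand-in for the dissertation's rank-revealing QR factorization. The system matrices are random placeholders, and the condition number of the resulting observability matrix serves as the metric.

```python
# Sketch: pick k sensors by column subset selection, with a column-pivoted
# QR as a stand-in for a rank-revealing QR. Each row of C is one sensor's
# measurement, so pivoting on the columns of C.T selects sensors. The
# matrices A and C are random placeholders.
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(2)
n, m, k = 4, 20, 6   # state dim, candidate sensors, sensors to activate
A = rng.normal(size=(n, n)) / np.sqrt(n)  # state transition (placeholder)
C = rng.normal(size=(m, n))               # one measurement row per sensor

# Column subset selection: the first k pivots of a column-pivoted QR of
# C.T identify k well-conditioned rows of C, i.e. k sensors.
_, _, piv = qr(C.T, pivoting=True)
sensors = np.sort(piv[:k])

# Metric: condition number of the observability matrix built from the
# selected sensors.
Ck = C[sensors]
O = np.vstack([Ck @ np.linalg.matrix_power(A, t) for t in range(n)])
print("selected sensors:", sensors)
print("cond(O) =", np.linalg.cond(O))
```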
Contributors: Ilkturk, Utku (Author) / Gelb, Anne (Thesis advisor) / Platte, Rodrigo (Thesis advisor) / Cochran, Douglas (Committee member) / Renaut, Rosemary (Committee member) / Armbruster, Dieter (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
The Kuramoto model is an archetypal model for studying synchronization in groups of nonidentical oscillators, where each oscillator is imbued with its own natural frequency and coupled to other oscillators through a network of interactions. As the coupling strength increases, there is a bifurcation to complete synchronization, where all oscillators move with the same frequency and show a collective rhythm. Kuramoto-like dynamics are considered a relevant model for instabilities of the AC power grid, which operates in synchrony under standard conditions but, in a state of failure, exhibits segmentation of the grid into desynchronized clusters.

In this dissertation, the minimum coupling strength required to ensure total frequency synchronization in a Kuramoto system, called the critical coupling, is investigated. For coupling strengths below the critical coupling, clusters of oscillators form in which the oscillators within a cluster oscillate, on average, with the same long-term frequency. A unified order-parameter-based approach is developed to create approximations of the critical coupling. Some of the new approximations provide strict lower bounds for the critical coupling. In addition, these approximations allow for predictions of the partially synchronized clusters that emerge in the bifurcation from the synchronized state.

Merging the order parameter approach with graph-theoretical concepts leads to a characterization of this bifurcation as a weighted graph partitioning problem on arbitrary networks, which in turn leads to an optimization problem that can efficiently estimate the partially synchronized clusters. Numerical experiments on random Kuramoto systems show the high accuracy of these methods. An interpretation of the methods in the context of power systems is provided.
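The setting can be made concrete with a short simulation sketch: Euler integration of a networked Kuramoto system and evaluation of the standard order parameter. The network, natural frequencies, and coupling strength below are illustrative assumptions, not values from the dissertation.

```python
# Sketch: Euler integration of the Kuramoto model on a network,
#   dtheta_i/dt = omega_i + K * sum_j A_ij sin(theta_j - theta_i),
# tracking the order parameter r = |(1/N) sum_j exp(i theta_j)|.
# Network, frequencies, and K are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(3)
N = 50
A = (rng.random((N, N)) < 0.2).astype(float)  # random interaction network
A = np.triu(A, 1)
A = A + A.T                                   # symmetric, no self-loops
omega = rng.normal(size=N)                    # natural frequencies
theta = rng.uniform(0, 2 * np.pi, size=N)
K, dt = 0.6, 0.01

for _ in range(20000):
    # entry (i, j) of the phase-difference matrix is theta_j - theta_i
    coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    theta += dt * (omega + K * coupling)

r = np.abs(np.exp(1j * theta).mean())
print(f"order parameter r = {r:.3f}  (r near 1 means full synchronization)")
```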
Contributors: Gilg, Brady (Author) / Armbruster, Dieter (Thesis advisor) / Mittelmann, Hans (Committee member) / Scaglione, Anna (Committee member) / Strogatz, Steven (Committee member) / Welfert, Bruno (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
The main objective of mathematical modeling is to connect mathematics with other scientific fields. Developing predictive models helps in understanding the behavior of biological systems. By testing models, one can relate mathematics to real-world experiments. To validate predictions numerically, one has to compare them with experimental data sets. Mathematical models can be split into two groups: microscopic and macroscopic. Microscopic models describe the motion of so-called agents (e.g., cells, ants) that interact with their surrounding neighbors. At large scales, the interactions among these agents form special structures such as flocking and swarming. One of the key questions is to relate the particular interactions among agents to the overall emerging structures. Macroscopic models are precisely designed to describe the evolution of such large structures. They are usually given as partial differential equations describing the time evolution of a density distribution (instead of tracking each individual agent). For instance, reaction-diffusion equations are used to model glioma cells and to predict tumor growth. This dissertation aims at developing such a framework to better understand the complex behavior of foraging ants and glioma cells.
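As a concrete example of such a macroscopic model, the sketch below integrates a 1-D Fisher-KPP reaction-diffusion equation, u_t = D u_xx + rho u (1 - u), of the kind used for tumor growth. The rates, grid, and boundary conditions are illustrative assumptions, not the dissertation's model.

```python
# Sketch: a 1-D Fisher-KPP reaction-diffusion equation,
#   u_t = D u_xx + rho * u * (1 - u),
# integrated with explicit finite differences. D, rho, grid, and the
# initial seed are illustrative placeholders.
import numpy as np

D, rho = 0.1, 1.0          # diffusion and proliferation rates (placeholders)
nx, dx, dt = 200, 0.5, 0.01
u = np.zeros(nx)
u[nx // 2] = 1.0           # small initial tumor seed in the center

for _ in range(5000):
    # second spatial derivative via centered differences
    u_xx = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    u = u + dt * (D * u_xx + rho * u * (1 - u))
    u[0] = u[-1] = 0.0     # absorbing boundary (illustrative choice)

print("invaded fraction of domain:", float((u > 0.5).mean()))
```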
Contributors: Jamous, Sara Sami (Author) / Motsch, Sebastien (Thesis advisor) / Armbruster, Dieter (Committee member) / Camacho, Erika (Committee member) / Moustaoui, Mohamed (Committee member) / Platte, Rodrigo (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
The marked increase in the inflow of remotely sensed data from satellites has transformed the Earth and Space Sciences into a data-rich domain, creating a rich repository for domain experts to analyze. These observations shed light on a diverse array of disciplines, ranging from monitoring Earth system components to planetary exploration, by highlighting the expected trends and patterns in the data. However, the complexity of these patterns from local to global scales, coupled with the volume of this ever-growing repository, necessitates advanced techniques to sequentially process the datasets and determine the underlying trends. Such techniques essentially model the observations to learn characteristic parameters of the data-generating processes and highlight anomalous planetary surface observations to help domain scientists make informed decisions. The primary challenge in defining such models arises from the spatio-temporal variability of these processes.

This dissertation introduces models of multispectral satellite observations that sequentially learn the expected trend from the data by extracting salient features of planetary surface observations. The main objectives are to learn the temporal variability for modeling dynamic processes and to build representations of features of interest that are learned over the lifespan of an instrument. The estimated model parameters are then exploited to detect anomalies due to changes in land surface reflectance as well as novelties in planetary surface landforms. A model switching approach, designed to account for the rate of time-variability in the land surface, is proposed that allows selection of the best-matched representation given the observations. The estimated parameters are exploited to design a change detector, analyze the separability of change events, and form an expert-guided representation of planetary landforms for prioritizing the retrieval of scientifically relevant observations in both onboard and post-downlink applications.
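A toy sketch of the model-switching idea: candidate models adapted at different rates track a reflectance series, the best-matched model is selected by its recent prediction error, and a change is flagged when even the best model's residual is large. The synthetic series, adaptation rates, and threshold below are all hypothetical, standing in for the dissertation's learned representations.

```python
# Toy sketch of model switching for change detection: exponential
# smoothers at different adaptation rates track a reflectance series;
# the model with the lowest recent prediction error is selected, and a
# change is flagged when even its residual exceeds a threshold. All
# values are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(4)
series = np.concatenate([rng.normal(0.3, 0.02, 200),   # stable land surface
                         rng.normal(0.6, 0.02, 100)])  # abrupt change event

rates = [0.01, 0.1, 0.5]            # slow / medium / fast variability models
states = [series[0]] * len(rates)   # each model's current estimate
errors = [0.0] * len(rates)         # smoothed squared prediction errors

for t, x in enumerate(series[1:], 1):
    resid = [x - s for s in states]
    errors = [0.95 * e + 0.05 * r**2 for e, r in zip(errors, resid)]
    best = int(np.argmin(errors))   # best-matched representation
    if abs(resid[best]) > 0.15:     # hypothetical change threshold
        print(f"t={t}: change flagged (best model rate={rates[best]})")
    states = [s + a * (x - s) for s, a in zip(states, rates)]
```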
Contributors: Chakraborty, Srija (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Christensen, Philip R. (Philip Russel) (Thesis advisor) / Richmond, Christ (Committee member) / Maurer, Alexander (Committee member) / Arizona State University (Publisher)
Created: 2019