Matching Items (9)

Description

The field of education has benefited immensely from major breakthroughs in technology. The arrival of computers and the internet made student-teacher interaction across different parts of the world viable, extending the educator's reach to hitherto remote corners of the world. The recent arrival of mobile phones has the potential to provide the next paradigm shift in the way education is conducted: it combines the universal reach and powerful visualization capabilities of the computer with intimacy and portability. Engineering education is a field that can exploit the benefits of mobile devices to enhance learning and spread essential technical know-how to different parts of the world. In this thesis, I present AJDSP, an Android application evolved from JDSP that provides an intuitive and easy-to-use environment for signal processing education. AJDSP is a graphical programming laboratory for digital signal processing developed for the Android platform. It is designed to be useful both as a supplement to traditional classroom learning and as a tool for self-learning. The architecture of AJDSP is based on the Model-View-Controller paradigm, optimized for the Android platform. The extensive set of function modules covers a wide range of basic signal processing areas such as convolution, the fast Fourier transform, the z-transform, and filter design. The simple and intuitive user interface, inspired by iJDSP, is designed to facilitate ease of navigation and to provide the user with an intimate learning environment. Rich visualizations necessary to understand mathematically intensive signal processing algorithms have been incorporated into the software. Interactive demonstrations that boost student understanding of concepts such as convolution and the relation between different signal domains have also been developed. A set of detailed assessments evaluating the application has been conducted with graduate and senior-level undergraduate students.
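The convolution and FFT modules mentioned above rest on a classical identity: linear convolution in time is multiplication in frequency. A minimal Python sketch of that identity (illustrative only, not AJDSP code; the function name `fft_convolve` is our own):

```python
import numpy as np

def fft_convolve(x, h):
    """Linear convolution via the FFT (circular convolution with zero-padding)."""
    n = len(x) + len(h) - 1          # length of the linear convolution
    N = 1 << (n - 1).bit_length()    # next power of two for an efficient FFT
    X = np.fft.fft(x, N)
    H = np.fft.fft(h, N)
    return np.real(np.fft.ifft(X * H))[:n]

x = [1.0, 2.0, 3.0]
h = [0.5, 0.5]
print(fft_convolve(x, h))            # matches np.convolve(x, h)
```

Zero-padding to at least `len(x) + len(h) - 1` samples is what turns the FFT's circular convolution into the linear convolution a filtering module needs.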
Contributors: Ranganath, Suhas (Author) / Spanias, Andreas (Thesis advisor) / Tepedelenlioğlu, Cihan (Committee member) / Tsakalis, Konstantinos (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Interictal spikes, together with seizures, have been recognized as the two hallmarks of epilepsy, a brain disorder that 1% of the world's population suffers from. Even though the presence of spikes in the brain's electromagnetic activity has diagnostic value, their dynamics are still elusive. An objective of this dissertation was to formulate a mathematical framework within which the dynamics of interictal spikes could be thoroughly investigated. A new epileptic spike detection algorithm was developed by employing data-adaptive morphological filters; its performance compared favorably with others in the literature. A novel measure of spike spatial synchronization was developed and tested on coupled spiking neuron models. Applying this measure to individual epileptic spikes in EEG from patients with temporal lobe epilepsy revealed long-term trends of increasing synchronization between pairs of brain sites before seizures and desynchronization after seizures, within the same patient as well as across patients, thus supporting the hypothesis that seizures may occur to break (reset) abnormal spike synchronization in the brain network. Furthermore, based on these results, a separate spatial analysis of spike rates was conducted that shed light on conflicting results in the literature about the variability of spike rate before and after seizures. The ability to automatically classify seizures into clinical and subclinical was a result of the above findings. A novel method for epileptogenic focus localization from interictal periods based on spike occurrences was also devised, combining concepts from graph theory, such as eigenvector centrality, with the developed spike synchronization measure; it tested very favorably against the gold standard used in clinical practice, focus localization from seizure onset.
Finally, in another application of the resetting of brain dynamics at seizures, it was shown that patients with epileptic seizures (ES) can be differentiated with high accuracy from patients with psychogenic nonepileptic seizures (PNES). The above studies of spike dynamics have elucidated many unknown aspects of ictogenesis and are expected to contribute significantly to further understanding of the basic mechanisms that lead to seizures, and to the diagnosis and treatment of epilepsy.
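The focus-localization method above combines a spike synchronization measure with eigenvector centrality from graph theory. As an illustrative sketch (not the dissertation's code), eigenvector centrality of a synchronization-weighted graph can be computed by power iteration; the weight matrix below is hypothetical:

```python
import numpy as np

def eigenvector_centrality(W, iters=200):
    """Power iteration on a nonnegative symmetric weight matrix W.

    Returns the dominant eigenvector; its largest entry marks the
    most 'central' node -- here, a candidate epileptogenic focus.
    """
    v = np.ones(W.shape[0]) / W.shape[0]
    for _ in range(iters):
        v = W @ v
        v = v / np.linalg.norm(v)
    return v

# Hypothetical pairwise spike-synchronization weights for 4 brain sites;
# site 2 is strongly synchronized with every other site.
W = np.array([[0.0, 0.2, 0.9, 0.1],
              [0.2, 0.0, 0.8, 0.2],
              [0.9, 0.8, 0.0, 0.7],
              [0.1, 0.2, 0.7, 0.0]])
c = eigenvector_centrality(W)
print(int(np.argmax(c)))  # site index with the highest centrality
```

By the Perron-Frobenius theorem the dominant eigenvector of a nonnegative matrix is itself nonnegative, which is what makes its entries interpretable as centrality scores.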
Contributors: Krishnan, Balu (Author) / Iasemidis, Leonidas (Thesis advisor) / Tsakalis, Konstantinos (Committee member) / Spanias, Andreas (Committee member) / Si, Jennie (Committee member) / Arizona State University (Publisher)
Created: 2012
Description

Approximately 1% of the world population suffers from epilepsy. Continuous long-term electroencephalographic (EEG) monitoring is the gold standard for recording epileptic seizures and assisting in the diagnosis and treatment of patients with epilepsy. However, this process still requires that seizures be visually detected and marked by experienced and trained electroencephalographers. The motivation for the development of an automated seizure detection algorithm in this research was to assist physicians in such a laborious, time-consuming and expensive task. Seizures in the EEG vary in duration (seconds to minutes), morphology and severity (clinical to subclinical, occurrence rate) within the same patient and across patients. The task of seizure detection is also made difficult by the presence of movement and other recording artifacts. An early approach to automated seizure detection that utilized both EEG changes and clinical manifestations resulted in a sensitivity of 70-80% and 1 false detection per hour. Approaches based on artificial neural networks have improved detection performance at the cost of requiring algorithm training. Measures of nonlinear dynamics, such as Lyapunov exponents, have been applied successfully to seizure prediction. Within the framework of this MS research, a seizure detection algorithm based on measures of linear and nonlinear dynamics, namely the adaptive short-term maximum Lyapunov exponent (ASTLmax) and the adaptive Teager energy (ATE), was developed and tested. The algorithm was tested on long-term (0.5-11.7 days) continuous EEG recordings from five patients (3 with intracranial and 2 with scalp EEG) and a total of 56 seizures, producing a mean sensitivity of 93% and a mean false positive rate of 0.048 per hour. The developed seizure detection algorithm is data-adaptive, training-free and patient-independent.
It is expected that this algorithm will assist physicians by reducing the time spent detecting seizures, and will lead to faster and more accurate diagnosis, better evaluation of treatment, and possibly better treatments if it is incorporated online and in real time with advanced neuromodulation therapies for epilepsy.
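The adaptive Teager energy measure used above builds on the discrete Teager-Kaiser energy operator. A minimal sketch of the basic (non-adaptive) operator, with a synthetic sinusoid standing in for EEG:

```python
import numpy as np

def teager_energy(x):
    """Discrete Teager-Kaiser energy operator:
    psi[x](n) = x(n)^2 - x(n-1) * x(n+1)."""
    x = np.asarray(x, dtype=float)
    return x[1:-1] ** 2 - x[:-2] * x[2:]

# For a pure sinusoid A*sin(w*n) the operator equals the constant A^2 * sin(w)^2,
# so a jump in its output flags a change in amplitude or frequency.
n = np.arange(1000)
x = np.sin(0.1 * n)
x[500:] *= 5.0                         # abrupt amplitude increase mid-signal
e = teager_energy(x)
print(e[:490].mean(), e[510:].mean())  # energy roughly 25x higher after the change
```

Because the operator tracks both amplitude and frequency changes with only a three-sample window, it reacts quickly to seizure-like bursts, which is what motivates Teager-energy-based detectors.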
Contributors: Venkataraman, Vinay (Author) / Iasemidis, Leonidas (Thesis advisor) / Spanias, Andreas (Thesis advisor) / Tsakalis, Konstantinos (Committee member) / Arizona State University (Publisher)
Created: 2012
Description

The detection and characterization of transients in signals is important in many wide-ranging applications, from computer vision to audio processing. Edge detection in images is typically realized using small, local, discrete convolution kernels, but this is not possible when samples are measured directly in the frequency domain. The concentration factor edge detection method was therefore developed to realize an edge detector directly from spectral data. This thesis explores the possibility of detecting edges from the phase of the spectral data alone, that is, without the magnitude of the sampled spectral data. Prior work has demonstrated that the spectral phase contains particularly important information about underlying features in a signal. Furthermore, the concentration factor method yields some insight into the detection of edges in spectral phase data. An iterative design approach was taken to realize an edge detector using only the spectral phase data, also allowing for the design of an edge detector when phase data are intermittent or corrupted. Problem formulations showing the power of the design approach are given throughout. A post-processing scheme relying on the difference of multiple edge approximations yields a strong edge detector which is shown to be resilient under noisy, intermittent phase data. Lastly, a thresholding technique is applied to give an explicit enhanced edge detector ready to be used. Examples throughout are demonstrated on both signals and images.
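As background for the phase-only detectors explored above, the standard concentration factor method recovers jump locations directly from (full) spectral data. A small sketch on a step function (illustrative only; the grid sizes and the trigonometric factor's normalization are our assumptions, not the thesis's design):

```python
import numpy as np

M, N = 512, 64                               # grid points, Fourier modes
x = -np.pi + 2 * np.pi * np.arange(M) / M
f = np.where(np.abs(x) < 1.0, 1.0, 0.0)      # unit jumps at x = -1 and x = +1

# Fourier coefficients f_hat(k) ~ (1/2pi) * integral of f(x) e^{-ikx} dx
k = np.arange(-N, N + 1)
fhat = (f[None, :] * np.exp(-1j * np.outer(k, x))).sum(axis=1) / M

# Concentration-factor jump approximation (trigonometric factor):
#   S_N[f](x) = i * sum_k sgn(k) * sigma(|k|/N) * f_hat(k) * e^{ikx}
Si_pi = 1.8519370                            # sine integral Si(pi)
sigma = np.pi * np.sin(np.pi * np.abs(k) / N) / Si_pi
S = (1j * np.sign(k)[:, None] * sigma[:, None] * fhat[:, None]
     * np.exp(1j * np.outer(k, x))).sum(axis=0)

edge = x[np.argmax(np.abs(S))]               # strongest detected edge
print(edge)                                  # lands near a true jump at +/-1
```

The concentration sum is designed so that it vanishes away from jumps and peaks at them; the phase-only setting the thesis studies replaces `fhat` above with coefficients of unit magnitude.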
Contributors: Reynolds, Alexander Bryce (Author) / Gelb, Anne (Thesis director) / Cochran, Douglas (Committee member) / Viswanathan, Adityavikram (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description

In a sensor array, some elements may fail due to hardware faults. The resulting missing data may distort the beam pattern or decrease the accuracy of direction-of-arrival (DOA) estimation. Therefore, considerable research has been conducted to develop algorithms that can estimate the missing signal information. Conversely, with such algorithms, array elements can also be selectively turned off while the missing information is recovered, saving power consumption and hardware cost.

Conventional approaches to array element failures are mainly based on interpolation or on sequential learning algorithms. Both rely heavily on prior knowledge, such as information about the failures or a training dataset without missing data. In addition, since most of the existing approaches are developed for DOA estimation, their recovery target is usually the covariance matrix rather than the signal matrix.

In this thesis, a new signal recovery method based on matrix completion (MC) theory is introduced. It aims to refill the absent entries of the signal matrix directly, without any prior knowledge. A novel overlapping reshaping method is proposed to satisfy the applicability conditions of MC algorithms. Compared to other existing MC-based approaches, the proposed method provides a higher probability of successful recovery. The thesis describes the principle of the algorithms and analyzes the performance of the method. A few application examples with simulation results are also provided.
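The overlapping reshaping method itself is not reproduced here, but the underlying matrix completion step can be illustrated with a basic singular-value soft-thresholding scheme on a synthetic low-rank matrix (an illustrative sketch under our own parameter choices, not the proposed algorithm):

```python
import numpy as np

def soft_impute(M, mask, tau=0.5, iters=200):
    """Fill missing entries of a low-rank matrix by iterative
    singular-value soft-thresholding (a basic matrix-completion scheme)."""
    X = np.where(mask, M, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt      # shrink singular values
        X[mask] = M[mask]                            # keep the observed entries
    return X

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 3)) @ rng.standard_normal((3, 20))  # rank-3 matrix
mask = rng.random(A.shape) < 0.7                     # observe ~70% of entries
A_hat = soft_impute(A, mask)
err = np.linalg.norm((A_hat - A)[~mask]) / np.linalg.norm(A[~mask])
print(err)                                           # relative error on the missing entries
```

The shrinkage step is a proxy for nuclear-norm minimization, which is why MC methods need the target matrix to be (approximately) low rank; reshaping the signal matrix to expose such structure is exactly the problem the thesis's overlapping reshaping addresses.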
Contributors: Fan, Jie (Author) / Spanias, Andreas (Thesis advisor) / Tepedelenlioğlu, Cihan (Committee member) / Tsakalis, Konstantinos (Committee member) / Arizona State University (Publisher)
Created: 2016
Description

From time immemorial, epilepsy has persisted as one of the greatest impediments to human life for those stricken by it. The fourth most common neurological disorder, epilepsy causes paroxysmal electrical discharges in the brain that manifest as seizures. Seizures debilitate patients both physically and psychologically. Although not lethal by themselves, they can bring about a total disruption of consciousness which, in hazardous conditions, can lead to fatality. Roughly 1% of the world population suffers from epilepsy, and 30 to 50 new cases per 100,000 people are added annually. Controlling seizures in epileptic patients has therefore become a great medical and, in recent years, engineering challenge.

In this study, the conditions of human seizures are recreated in an animal model of temporal lobe epilepsy. The rodents used in this study are chemically induced to become chronically epileptic. Their electroencephalographic (EEG) data are then recorded and analyzed to detect and predict seizures, with the ultimate goal being the control and complete suppression of seizures.

Two methods, the maximum Lyapunov exponent and Generalized Partial Directed Coherence (GPDC), are applied to the EEG data to extract meaningful information. Their effectiveness for seizure prediction and seizure focus localization has been reported in the literature. This study integrates these measures, with some modifications, to robustly detect seizures, to separately find precursors to them, and consequently to provide stimulation to the epileptic brain of rats in order to suppress seizures. Additionally, open-loop stimulation of various pairs of sites with biphasic currents over differing lengths of time was used to create control efficacy maps. While GPDC indicates the possible location of the focus, control efficacy maps indicate how effective stimulating a certain pair of sites will be.
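The maximum Lyapunov exponent mentioned above quantifies how fast nearby trajectories of a dynamical system diverge. A toy sketch on the logistic map, where the exponent has a known value (illustrative only; estimating it from EEG requires delay-embedding methods far beyond this snippet):

```python
import numpy as np

def lyapunov_logistic(r, n=100000, x0=0.4):
    """Largest Lyapunov exponent of the logistic map x -> r*x*(1-x),
    estimated as the orbit average of log |f'(x)| = log |r*(1-2x)|."""
    x, total = x0, 0.0
    for _ in range(n):
        total += np.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    return total / n

print(lyapunov_logistic(4.0))   # approx ln(2) ~ 0.693: a positive exponent signals chaos
```

A positive exponent marks chaotic dynamics; seizure-detection schemes built on the short-term maximum Lyapunov exponent track drops in this quantity as the EEG becomes more ordered around seizures.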

The results from computations performed on the data are presented and the feasibility of the control problem is discussed. The results show a new, reliable means of seizure detection even in the presence of artifacts in the data. The seizure precursors provide a means of prediction on the order of tens of minutes prior to seizures. Closed-loop stimulation experiments on the epileptic animals, based on these precursors and the control efficacy maps, show a maximum reduction of seizure frequency of 24.26% in one animal and a reduction of seizure length of 51.77% in another. Thus, this study showed that the implementation of these methods can ameliorate seizures in an epileptic patient. It is expected that the new knowledge and experimental techniques will guide future research in the effort to ultimately eliminate seizures in epileptic patients.
Contributors: Shafique, Md Ashfaque Bin (Author) / Tsakalis, Konstantinos (Thesis advisor) / Rodriguez, Armando (Committee member) / Muthuswamy, Jitendran (Committee member) / Spanias, Andreas (Committee member) / Arizona State University (Publisher)
Created: 2016
Description

Quantum computing has the potential to revolutionize the signal processing field by providing more efficient methods for analyzing signals. This thesis explores the application of quantum computing to signal analysis-synthesis for compression applications. More specifically, the study focuses on two key approaches: the quantum Fourier transform (QFT) and quantum linear prediction (QLP). The research is motivated by the potential advantages offered by quantum computing in massive signal processing tasks, and it presents novel quantum circuit designs for the QFT, quantum autocorrelation, and QLP, enabling signal analysis-synthesis using quantum algorithms. The two approaches are as follows. The QFT demonstrates the potential for improved speed over classical methods. This thesis focuses on quantum encoding of signals and on designing quantum algorithms for signal analysis-synthesis and signal compression using QFTs. Comparative studies evaluate quantum computations for Fourier transform applications in terms of signal-to-noise ratio, and the effects of qubit precision and quantum noise are also analyzed. The QFT algorithm is additionally implemented in the J-DSP simulation environment, providing hands-on laboratory experiences for signal processing students, along with user-friendly simulation programs for QFT-based signal analysis-synthesis using peak picking and perceptual selection based on psychoacoustics. Further, this research is extended to analyze the autocorrelation of a signal using QFTs and to develop a quantum linear prediction (QLP) algorithm for speech processing applications. QFTs and inverse QFTs (IQFTs) are used to compute the quantum autocorrelation of the signal, and the HHL algorithm is modified to solve the resulting linear equations on a quantum computer.
The performance of the QLP algorithm is evaluated for system identification, spectral estimation, and speech analysis-synthesis, and QLP results are compared with classical linear prediction (CLP) results. The results demonstrate effective quantum circuits for accurate QFT-based speech analysis-synthesis, an evaluation of performance under quantum noise, the design of accurate quantum autocorrelation, and the development of a modified HHL algorithm for efficient QLP. Overall, this thesis contributes to research on quantum computing for signal processing applications and provides a foundation for further exploration of quantum algorithms for signal analysis-synthesis.
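The QFT at the core of the above work is, up to convention, the unitary inverse discrete Fourier transform. A small numerical sketch of that correspondence (a matrix-level check, not a quantum circuit):

```python
import numpy as np

def qft_matrix(n_qubits):
    """Unitary matrix of the quantum Fourier transform on n qubits:
    F[j, k] = omega^(j*k) / sqrt(N), with omega = exp(2*pi*i / N)."""
    N = 2 ** n_qubits
    j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return np.exp(2j * np.pi * j * k / N) / np.sqrt(N)

F = qft_matrix(3)                        # 8x8 QFT on 3 qubits
# Unitarity: F F^dagger = I, so the QFT preserves quantum state norms.
print(np.allclose(F @ F.conj().T, np.eye(8)))
# Up to normalization and sign convention, the QFT is the inverse DFT:
x = np.arange(8, dtype=complex)
print(np.allclose(F @ x, np.sqrt(8) * np.fft.ifft(x)))
```

A quantum circuit realizes this N x N matrix with only O(n^2) Hadamard and controlled-phase gates on n = log2(N) qubits, which is the source of the speedup the thesis exploits.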
Contributors: Sharma, Aradhita (Author) / Spanias, Andreas (Thesis advisor) / Tepedelenlioğlu, Cihan (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2023
Description

Over the last decade, deep neural networks (deep learning), combined with large databases and specialized hardware for computation, have made major strides in important areas such as computer vision, computational imaging, and natural language processing. However, such frameworks currently suffer from some drawbacks. For example, it is generally not clear how architectures should be designed for different applications or how neural networks behave under different input perturbations, and it is not easy to make the internal representations and parameters more interpretable. In this dissertation, I propose building constraints into the feature maps, parameters, and design of algorithms involving neural networks for applications in low-level vision problems, such as compressive imaging and multi-spectral image fusion, and high-level inference problems, including activity and face recognition. Depending on the application, such constraints can be used to design architectures that are invariant or robust to certain nuisance factors, more efficient, and, in some cases, more interpretable. Through extensive experiments on real-world datasets, I demonstrate these advantages of the proposed methods over conventional frameworks.
Contributors: Lohit, Suhas Anand (Author) / Turaga, Pavan (Thesis advisor) / Spanias, Andreas (Committee member) / Li, Baoxin (Committee member) / Jayasuriya, Suren (Committee member) / Arizona State University (Publisher)
Created: 2019
Description

The availability of data for monitoring and controlling the electrical grid has increased exponentially over the years in both resolution and quantity, leaving a large data footprint. This dissertation is motivated by the need for equivalent representations of grid data in lower-dimensional feature spaces so that machine learning algorithms can be employed for a variety of purposes. To achieve this without sacrificing the interpretability of the results, the dissertation leverages the physics of power systems, the well-known laws that underlie this man-made infrastructure, and the nature of the underlying stochastic phenomena that define the system operating conditions as the backbone for modeling grid data.

The first part of the dissertation introduces a new graph signal processing (GSP) framework for the power grid, Grid-GSP, and applies it to the voltage phasor measurements that characterize the overall system state of the power grid. Concepts from GSP are used in conjunction with known power system models to highlight the low-dimensional structure in the data and to present generative models for voltage phasor measurements. Applications of the Grid-GSP generative models are explored, including identification of graphical communities, network inference, interpolation of missing data, detection of false data injection attacks, and data compression.

The second part of the dissertation develops a joint statistical model of solar photovoltaic (PV) power and outdoor temperature, which can lead to better management of power generation resources so that electricity demand, such as air conditioning, and supply from solar power are always matched in the face of stochasticity. The low-rank structure inherent in solar PV power data is used for forecasting and for detecting partial-shading faults in solar panels.
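Grid-GSP itself is not reproduced here, but the graph signal processing primitive it builds on, projecting a graph signal onto the eigenvectors of the graph Laplacian, can be sketched on a toy 5-bus topology (all values hypothetical):

```python
import numpy as np

# A 5-bus path graph (buses connected in a line) as a toy grid topology.
A = np.zeros((5, 5))
for i in range(4):
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(axis=1)) - A          # graph Laplacian L = D - A

# Graph Fourier transform: expand a graph signal in Laplacian eigenvectors.
eigvals, U = np.linalg.eigh(L)          # ascending eigenvalues = graph "frequencies"
signal = np.array([1.0, 1.1, 1.2, 1.1, 1.0])  # a smooth signal across buses
coeffs = U.T @ signal                   # GFT coefficients

# A smooth signal concentrates its energy in the low graph frequencies,
# which is what enables compression and missing-data interpolation.
energy = coeffs ** 2 / (coeffs ** 2).sum()
print(energy[:2].sum())                 # bulk of the energy in the 2 lowest modes
```

Voltage phasors vary smoothly across electrically close buses, so their GFT spectra are similarly concentrated; that low-dimensional structure is what Grid-GSP's generative models exploit.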
Contributors: Ramakrishna, Raksha (Author) / Scaglione, Anna (Thesis advisor) / Cochran, Douglas (Committee member) / Spanias, Andreas (Committee member) / Vittal, Vijay (Committee member) / Zhang, Junshan (Committee member) / Arizona State University (Publisher)
Created: 2020