Description
Doppler radar can be used to measure respiration and heart rate without contact and through obstacles. In this work, a Doppler radar architecture at 2.4 GHz and a new signal processing algorithm to estimate the respiration and heart rate are presented. The received signal is dominated by transceiver noise, LO phase noise, and clutter, which reduce the signal-to-noise ratio of the desired signal. The proposed architecture and algorithm mitigate these issues and obtain an accurate estimate of the heart and respiration rates. A quadrature low-IF transceiver architecture is adopted to resolve the null-point problem as well as to avoid 1/f noise and DC offset due to mixer-LO coupling. An adaptive clutter cancellation algorithm is used to enhance receiver sensitivity, and a novel Pattern Search in Noise Subspace (PSNS) algorithm is used to estimate respiration and heart rate. PSNS is a modified MUSIC algorithm which uses the phase noise to enhance Doppler shift detection. A prototype system was implemented using off-the-shelf TI and RFMD transceivers, and tests were conducted with eight individuals. The measured results show accurate estimates of the cardiopulmonary signals in low-SNR conditions, and the system has been tested up to a distance of 6 meters.
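
As a concrete illustration of the noise-subspace idea behind PSNS, the sketch below implements a plain MUSIC-style search for respiration and heart-rate tones in a noisy baseband signal. It is a minimal sketch, not the thesis's PSNS algorithm: the sampling rate, embedding dimension, model order, and frequency grids are assumptions made for the example.

```python
import numpy as np

def music_rate_estimate(x, fs, f_grid, model_order=4, embed_dim=64):
    """Estimate the strongest tone in f_grid (Hz) from the real signal x."""
    # Time-delay embedding: stack overlapping snapshots into a data matrix.
    n_snap = len(x) - embed_dim + 1
    X = np.stack([x[i:i + embed_dim] for i in range(n_snap)], axis=1)
    R = X @ X.T / n_snap                          # sample covariance matrix
    eigval, eigvec = np.linalg.eigh(R)            # eigenvalues in ascending order
    En = eigvec[:, :embed_dim - model_order]      # noise-subspace eigenvectors
    n = np.arange(embed_dim)
    pseudo = [1.0 / np.linalg.norm(En.T @ np.exp(2j * np.pi * f / fs * n)) ** 2
              for f in f_grid]                    # MUSIC pseudospectrum
    return f_grid[int(np.argmax(pseudo))]

# Synthetic vital-sign mixture: respiration at 0.3 Hz, heartbeat at 1.2 Hz.
fs = 20.0
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 0.3 * t) + 0.3 * np.sin(2 * np.pi * 1.2 * t)
x = x + 0.5 * rng.standard_normal(len(t))
resp = music_rate_estimate(x, fs, np.linspace(0.1, 0.6, 101))
heart = music_rate_estimate(x, fs, np.linspace(0.8, 2.0, 121))
print(f"respiration ~ {resp:.2f} Hz, heart ~ {heart:.2f} Hz")
```
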
Contributors: Khunti, Hitesh Devshi (Author) / Kiaei, Sayfe (Thesis advisor) / Bakkaloglu, Bertan (Committee member) / Bliss, Daniel (Committee member) / Kitchen, Jennifer (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Electrical neural activity detection and tracking have many applications in medical research and brain-computer interface technologies. In this thesis, we focus on the development of advanced signal processing algorithms to track neural activity and on the mapping of these algorithms onto hardware to enable real-time tracking. At the heart of these algorithms is particle filtering (PF), a sequential Monte Carlo technique used to estimate the unknown parameters of dynamic systems.

First, we analyze the bottlenecks in existing PF algorithms, and we propose a new parallel PF (PPF) algorithm based on the independent Metropolis-Hastings (IMH) algorithm. We show that the proposed PPF-IMH algorithm improves the root mean-squared error (RMSE) estimation performance, and we demonstrate that a parallel implementation of the algorithm results in a significant reduction in inter-processor communication. We apply our implementation on a Xilinx Virtex-5 field programmable gate array (FPGA) platform to demonstrate that, for a one-dimensional problem, the PPF-IMH architecture with four processing elements and 1,000 particles can process input samples at 170 kHz while using less than 5% of the FPGA resources. We also apply the proposed PPF-IMH to waveform-agile sensing to achieve real-time tracking of dynamic targets with good RMSE tracking performance.

We next integrate the PPF-IMH algorithm to track the dynamic parameters in neural sensing when the number of neural dipole sources is known. We analyze the computational complexity of a PF-based method and propose the use of multiple particle filtering (MPF) to reduce the complexity. We demonstrate the improved performance of MPF using numerical simulations with both synthetic and real data. We also propose an FPGA implementation of the MPF algorithm and show that the implementation supports real-time tracking. For the more realistic scenario of automatically estimating an unknown number of time-varying neural dipole sources, we propose a new approach based on the probability hypothesis density filtering (PHDF) algorithm. The PHDF is implemented using particle filtering (PF-PHDF), and it is applied in a closed loop to first estimate the number of dipole sources and then their corresponding amplitude, location and orientation parameters. We demonstrate the improved tracking performance of the proposed PF-PHDF algorithm and map it onto a Xilinx Virtex-5 FPGA platform to show its real-time implementation potential.

Finally, we propose the use of sensor scheduling and compressive sensing techniques to reduce the number of active sensors, and thus the overall power consumption, of electroencephalography (EEG) systems. We propose an efficient sensor scheduling algorithm which adaptively configures EEG sensors at each measurement time interval to reduce the number of sensors needed for accurate tracking. We combine the sensor scheduling method with PF-PHDF and implement the system on an FPGA platform to achieve real-time tracking. We also investigate the sparsity of EEG signals and integrate compressive sensing with PF to estimate neural activity. Simulation results show that both the sensor scheduling and compressive sensing based methods achieve comparable tracking performance with a significantly reduced number of sensors.
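
For readers unfamiliar with IMH-based resampling, here is a minimal sketch of a bootstrap particle filter whose resampling step runs an independent Metropolis-Hastings chain over the weighted particles. The scalar state-space model and all parameters are illustrative assumptions; the thesis's PPF-IMH additionally partitions the particles across processing elements, which this sketch does not attempt.

```python
import numpy as np

rng = np.random.default_rng(0)

def imh_resample(particles, weights):
    """Resample by running an independent MH chain over the weighted particles."""
    n = len(particles)
    out = np.empty(n)
    cur = rng.integers(n)                      # index of the current particle
    for i in range(n):
        prop = rng.integers(n)                 # independent uniform proposal
        if rng.random() < min(1.0, weights[prop] / weights[cur]):
            cur = prop                         # accept the proposed particle
        out[i] = particles[cur]
    return out

# Scalar state-space model: x[k] = 0.9 x[k-1] + v, observed as y[k] = x[k] + w.
n_particles, x_true = 1000, 0.0
particles = rng.standard_normal(n_particles)
for k in range(1, 51):
    x_true = 0.9 * x_true + 0.5 * rng.standard_normal()
    y = x_true + 0.3 * rng.standard_normal()
    particles = 0.9 * particles + 0.5 * rng.standard_normal(n_particles)
    weights = np.exp(-0.5 * ((y - particles) / 0.3) ** 2) + 1e-300
    particles = imh_resample(particles, weights)
    if k % 10 == 0:
        print(f"k={k:2d}  truth={x_true:+.3f}  estimate={particles.mean():+.3f}")
```
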
Contributors: Miao, Lifeng (Author) / Chakrabarti, Chaitali (Thesis advisor) / Papandreou-Suppappola, Antonia (Thesis advisor) / Zhang, Junshan (Committee member) / Bliss, Daniel (Committee member) / Kovvali, Narayan (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
As the number of devices with wireless capabilities and the proximity of these devices to each other increase, better ways to handle the interference they cause need to be explored. It is also important for these devices to keep up with the demand for data rates without compromising industry-established expectations of power consumption and mobility. Current methods of distributing the spectrum among all participants are not expected to cope with demand in the very near future. In this thesis, the effect of employing sophisticated multiple-input, multiple-output (MIMO) systems in this regard is explored. Systems which can make intelligent decisions on transmission-mode usage and power allocation to these modes become relevant in the current scenario, where the need for performance far exceeds the cost expendable on hardware. The effect of adding multiple antennas at either end will be examined, and the capacity of such systems and of networks comprising many such participants will be evaluated. Methods of simulating said networks, and ways to achieve better performance by making intelligent transmission decisions, will be proposed. Finally, a form of access control closer to the physical layer (a 'statistical MAC') and a possible metric to be used for such a MAC are suggested.
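
As a point of reference for the capacity evaluations mentioned above, the following sketch estimates the ergodic capacity of an i.i.d. Rayleigh MIMO channel by Monte Carlo, using the standard log-det formula with equal power allocation across transmit antennas. Antenna counts, SNR, and trial counts are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def ergodic_capacity(n_tx, n_rx, snr_db, n_trials=2000):
    """Average log2 det(I + (SNR/n_tx) H H^H) over Rayleigh channel draws."""
    snr = 10.0 ** (snr_db / 10.0)
    total = 0.0
    for _ in range(n_trials):
        H = (rng.standard_normal((n_rx, n_tx))
             + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2.0)
        M = np.eye(n_rx) + (snr / n_tx) * (H @ H.conj().T)
        total += np.log2(np.linalg.det(M).real)
    return total / n_trials

for n in (1, 2, 4):
    print(f"{n}x{n} MIMO at 10 dB: {ergodic_capacity(n, n, 10):.2f} bit/s/Hz")
```
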
Contributors: Thontadarya, Niranjan (Author) / Bliss, Daniel W (Thesis advisor) / Berisha, Visar (Committee member) / Ying, Lei (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Multiple-input multiple-output (MIMO) systems have gained focus in the last decade due to the benefits they provide in enhancing the quality of communications. On the other hand, full-duplex communication has attracted remarkable attention due to its ability to improve spectral efficiency compared to existing half-duplex systems. Using full-duplex communication in MIMO cooperative networks can provide solutions that outperform existing systems through simultaneous transmission and reception at high data rates.

This thesis considers a full-duplex MIMO relay which amplifies and forwards the received signals between a source and a destination that do not have a line of sight. Full-duplex mode raises the problem of self-interference. Though all the links in the system undergo frequency-flat fading, the end-to-end effective channel is frequency selective. This is due to the imperfect cancellation of the self-interference at the relay; the residual self-interference acts as intersymbol interference at the destination, which is treated by equalization. It also leads to complications in the form of recursive equations needed to determine the input-output relationship of the system.
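
To see where the frequency selectivity comes from, the scalar sketch below propagates a unit impulse through the relay loop: the residual self-interference feeds the relay's own output back into its input, so the effective channel picks up a geometric tail of ISI taps even though every link is flat. The gains, the single residual-interference tap, and the one-sample relay delay are illustrative assumptions.

```python
import numpy as np

h1, h2 = 0.8, 0.9      # flat source->relay and relay->destination gains
g, h_si = 1.5, 0.2     # relay amplification and residual self-interference

def effective_channel(n_taps=8):
    """Impulse response seen at the destination for a unit pulse at the source."""
    taps, relay_out = [], 0.0
    source = [1.0] + [0.0] * (n_taps - 1)            # unit impulse from the source
    for n in range(n_taps):
        relay_in = h1 * source[n] + h_si * relay_out  # desired + leaked output
        relay_out = g * relay_in                      # amplify and forward
        taps.append(h2 * relay_out)
    return np.array(taps)

# Taps decay geometrically as (g * h_si)**k: a flat link became an IIR channel,
# i.e., intersymbol interference that the destination must equalize.
print("effective channel taps:", np.round(effective_channel(), 4))
```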

To overcome this, a signal flow graph approach using Mason's gain formula is proposed, where the effective channel is analyzed with careful attention to every loop and path the signal traverses. This gives a clear understanding of the orders of the polynomials involved in the transfer function, from which the desired conclusions can be drawn. However, the complexity of Mason's gain formula grows with the number of antennas at the relay, which is overcome by the proposed linear-algebraic method. The input-output relationship derived using simple concepts of linear algebra can be generalized to any number of antennas, and its computational complexity is comparatively very low.

For a full-duplex amplify-and-forward MIMO relay system, assuming equalization at the destination, new mechanisms that can compensate for the effect of residual self-interference, namely equal-gain transmission and antenna selection, have been implemented at the relay. Though equal-gain transmission does not perform as well as maximal-ratio transmission, it offers a trade-off between performance and implementation complexity. Using the proposed antenna selection strategy, one pair of transmit-receive antennas at the relay is selected based on the four selection criteria discussed. Outage probability analysis is performed for all the strategies presented, and a detailed comparison is established. Considering a minimum mean-squared error decision-feedback equalizer at the destination, a bound on the outage probability is obtained for the antenna selection case and used for comparisons. As the signal-to-noise ratio increases, a cross-over point is observed when comparing the outage probabilities of the equal-gain transmission and antenna selection techniques; beyond that point antenna selection outperforms equal-gain transmission, which is explained by the reduced residual self-interference of the antenna selection method.
Contributors: Jonnalagadda, Geeta Sankar Kalyan (Author) / Tepedelenlioğlu, Cihan (Thesis advisor) / Bliss, Daniel (Committee member) / Kosut, Oliver (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Object localization is used to determine the location of a device, an important aspect of applications ranging from autonomous driving to augmented reality. Commonly used localization techniques include global positioning systems (GPS), simultaneous localization and mapping (SLAM), and positional tracking, but all of these methodologies have drawbacks, especially in high-traffic indoor or urban environments. Using recent improvements in the field of machine learning, this project proposes a new method of localization using networks with several wireless transceivers, implemented without heavy computational loads or high costs. This project aims to build a proof-of-concept prototype and demonstrate that the proposed technique is feasible and accurate.

Modern communication networks depend heavily upon an estimate of the communication channel, which represents the distortions that a transmitted signal undergoes as it travels to a receiver. A channel can become quite complicated due to signal reflections, delays, and other undesirable effects and, as a result, varies significantly from location to location. This localization system seeks to take advantage of this distinctness by feeding channel information into a machine learning algorithm, which is trained to associate channels with their respective locations. A device in need of localization would then only need to calculate a channel estimate and present it to this algorithm to obtain its location.
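
A toy version of this idea is sketched below: synthetic channel vectors are generated as a smooth function of position, and a k-nearest-neighbor regressor learns the channel-to-location map. The channel model, training grid, and regressor choice are assumptions made for the example, not the project's actual network or dataset.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)

def channel_at(pos, n_sub=16):
    """Fake multipath channel: subcarrier response varies smoothly with position."""
    k = np.arange(n_sub)
    clean = np.cos(0.5 * pos[0] * k) + np.sin(0.3 * pos[1] * k)
    return clean + 0.05 * rng.standard_normal(n_sub)

# Training set: channel fingerprints collected on a grid of known locations.
locs = np.array([[x, y] for x in np.linspace(0, 10, 25)
                 for y in np.linspace(0, 10, 25)])
chans = np.array([channel_at(p) for p in locs])
model = KNeighborsRegressor(n_neighbors=5).fit(chans, locs)

# A device estimates its own channel and asks the model where it is.
true_pos = np.array([3.7, 6.2])
est_pos = model.predict(channel_at(true_pos)[None, :])[0]
print(f"true position {true_pos}, estimated {np.round(est_pos, 2)}")
```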

As an additional step, the effect of location noise is investigated in this report. Once the localization system described above demonstrates promising results, the team shows that the system is robust to noise on its location labels. In doing so, the team establishes that the system could be implemented in a continued-learning environment, in which some user agents report their estimated (noisy) locations over a wireless communication network, so that the model can be deployed without extensive data collection prior to release.
Contributors: Chang, Roger (Co-author) / Kann, Trevor (Co-author) / Alkhateeb, Ahmed (Thesis director) / Bliss, Daniel (Committee member) / Electrical Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Description
Epilepsy affects numerous people around the world and is characterized by recurring seizures, prompting efforts to predict seizures so that precautionary measures may be employed. One promising algorithm extracts features based on spatiotemporal correlations of intracranial electroencephalography signals for use with support vector machines. The robustness of this methodology is tested through a sensitivity analysis. Doing so also provides insight into how to construct more effective feature vectors.
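
A compact sketch of this kind of pipeline is shown below: each windowed multichannel recording is reduced to the upper triangle of its inter-channel correlation matrix and classified with a support vector machine. The synthetic data, window size, and channel count are illustrative assumptions, not the thesis's dataset or tuning.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def corr_features(window):
    """Upper-triangular entries of the inter-channel correlation matrix."""
    C = np.corrcoef(window)                    # channels x channels
    return C[np.triu_indices_from(C, k=1)]

def make_window(coupled, n_ch=8, n_samp=256):
    """Synthetic multichannel window; channels share a source when coupled."""
    base = rng.standard_normal(n_samp)
    mix = 0.8 if coupled else 0.1              # stronger coupling in one class
    return np.array([mix * base + (1 - mix) * rng.standard_normal(n_samp)
                     for _ in range(n_ch)])

labels = np.array([0, 1] * 100)
X = np.array([corr_features(make_window(c)) for c in labels])
clf = SVC(kernel="rbf").fit(X[:150], labels[:150])
print("held-out accuracy:", clf.score(X[150:], labels[150:]))
```
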
Contributors: Ma, Owen (Author) / Bliss, Daniel (Thesis director) / Berisha, Visar (Committee member) / Barrett, The Honors College (Contributor) / Electrical Engineering Program (Contributor)
Created: 2015-05
Description
Information divergence functions, such as the Kullback-Leibler divergence or the Hellinger distance, play a critical role in statistical signal processing and information theory; however, estimating them can be a challenge. Most often, parametric assumptions are made about the two distributions to estimate the divergence of interest. In cases where no parametric model fits the data, non-parametric density estimation is used. In statistical signal processing applications, Gaussianity is usually assumed, since closed-form expressions for common divergence measures have been derived for this family of distributions. Parametric assumptions are preferred when it is known that the data follow the model; however, this is rarely the case in real-world scenarios. Non-parametric density estimators are characterized by a very large number of parameters that have to be tuned with costly cross-validation. In this dissertation we focus on a specific family of non-parametric estimators, called direct estimators, that bypass density estimation completely and directly estimate the quantity of interest from the data. We introduce a new divergence measure, the $D_p$-divergence, that can be estimated directly from samples without parametric assumptions on the distribution. We show that the $D_p$-divergence bounds the binary, cross-domain, and multi-class Bayes error rates and, in certain cases, provides provably tighter bounds than the Hellinger divergence. In addition, we propose a new methodology that allows the experimenter to construct direct estimators for existing divergence measures or to construct new divergence measures with custom properties that are tailored to the application. To examine the practical efficacy of these new methods, we evaluate them in a statistical learning framework on a series of real-world data science problems involving speech-based monitoring of neuro-motor disorders.
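
A common construction for such direct estimators, and one associated with the $D_p$-divergence literature this work builds on, counts cross-sample edges in a minimal spanning tree over the pooled samples (the Friedman-Rafsky statistic). The sketch below is an illustrative implementation of that construction; the data, sample sizes, and exact normalization are assumptions for the example, not code from the dissertation.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import cdist

def dp_divergence(X, Y):
    """MST-based direct estimate of the divergence between samples X and Y."""
    m, n = len(X), len(Y)
    Z = np.vstack([X, Y])
    mst = minimum_spanning_tree(cdist(Z, Z)).tocoo()
    # Count MST edges whose endpoints come from different samples.
    cross = np.sum((mst.row < m) != (mst.col < m))
    return max(0.0, 1.0 - cross * (m + n) / (2.0 * m * n))

rng = np.random.default_rng(0)
same = dp_divergence(rng.standard_normal((200, 2)),
                     rng.standard_normal((200, 2)))
apart = dp_divergence(rng.standard_normal((200, 2)),
                      rng.standard_normal((200, 2)) + 3.0)
print(f"identical distributions: {same:.2f}, well-separated: {apart:.2f}")
```
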
Contributors: Wisler, Alan (Author) / Berisha, Visar (Thesis advisor) / Spanias, Andreas (Thesis advisor) / Liss, Julie (Committee member) / Bliss, Daniel (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Biological and biomedical measurements, when adequately analyzed and processed, can be used to impart quantitative diagnoses during primary health care consultations and improve patient adherence to recommended treatments. For example, neural recordings from neurostimulators implanted in patients with neurological disorders can be analyzed by a physician to adjust detrimental stimulation parameters and improve treatment. As another example, biosequences, such as sequences from peptide microarrays obtained from a biological sample, can potentially provide pre-symptomatic diagnosis of infectious diseases when processed to associate antibodies with specific pathogens or infectious agents. This work proposes advanced statistical signal processing and machine learning methodologies to assess neurostimulation from neural recordings and to extract diagnostic information from biosequences.

For locating specific cognitive and behavioral information in different regions of the brain, neural recordings are processed using sequential Bayesian filtering methods to detect and estimate both the number of neural sources and their corresponding parameters. Time-frequency-based feature selection algorithms are combined with adaptive machine learning approaches to suppress physiological and non-physiological artifacts present in neural recordings. Adaptive processing and unsupervised clustering methods applied to neural recordings are also used to suppress neurostimulation artifacts and to classify among various behavioral tasks in order to assess the level of neurostimulation in patients.
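
As a generic illustration of the unsupervised-clustering step, the sketch below reduces synthetic recording segments to coarse time-frequency (band-power) features and clusters them with k-means to separate artifact-like from signal-like segments. The data, sampling rate, band edges, and two-cluster choice are assumptions made for the example, not the dissertation's pipeline.

```python
import numpy as np
from scipy.signal import spectrogram
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
fs = 250.0                                     # assumed sampling rate (Hz)

def band_features(seg):
    """Log spectral energy in a few coarse bands, a simple T-F feature."""
    f, _, S = spectrogram(seg, fs=fs, nperseg=64)
    power = S.mean(axis=1)
    bands = [(0, 8), (8, 30), (30, 60), (60, 125)]
    return np.log([power[(f >= lo) & (f < hi)].sum() for lo, hi in bands])

# Neural-like segments (narrowband, low frequency) vs. broadband artifacts.
t = np.arange(500) / fs
neural = [np.sin(2 * np.pi * 6 * t) + 0.2 * rng.standard_normal(500)
          for _ in range(30)]
artifact = [2.0 * rng.standard_normal(500) for _ in range(30)]
X = np.array([band_features(s) for s in neural + artifact])
labels = KMeans(n_clusters=2, n_init=10).fit_predict(X)
print("first 30 segments are neural, last 30 artifact:\n", labels)
```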

For pathogen detection and identification, random peptide sequences and their properties are first uniquely mapped to highly-localized signals and their corresponding parameters in the time-frequency plane. Time-frequency signal processing methods are then applied to estimate antigenic determinants or epitope candidates for detecting and identifying potential pathogens.
Contributors: Maurer, Alexander Joseph (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Bliss, Daniel (Committee member) / Chakrabarti, Chaitali (Committee member) / Kovvali, Narayan (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
Radar performance in detecting a target and estimating its parameters can deteriorate rapidly in the presence of heavy clutter. This is because radar measurements due to clutter returns can be falsely detected as if originating from the actual target. Various data association methods and multiple hypothesis filtering approaches have been considered to solve this problem. Such methods, however, can be computationally intensive for real-time radar processing. This work proposes a new approach that is based on the unsupervised clustering of target and clutter detections before target tracking using particle filtering. In particular, Gaussian mixture modeling is first used to separate detections into two distinct Gaussian mixtures. Using eigenvector analysis, the eccentricities of the covariance matrices of the Gaussian mixtures are computed and compared to threshold values that are obtained a priori. The thresholding allows only target detections to be used for target tracking. Simulations demonstrate the performance of the new algorithm and compare it with using k-means for clustering instead of Gaussian mixture modeling.
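
A minimal sketch of this pre-tracking step, under assumed geometry and an assumed threshold: fit a two-component Gaussian mixture to the detections, compute an eccentricity measure from the eigenvalues of each component's covariance, and keep the component whose shape is target-like. The synthetic detection clouds and the 0.9 threshold are illustrative, not values from the thesis.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Target detections: tight, nearly isotropic cloud. Clutter: elongated sheet.
target = 0.5 * rng.standard_normal((80, 2)) + [5.0, 5.0]
clutter = rng.standard_normal((300, 2)) * [8.0, 0.7]
detections = np.vstack([target, clutter])

gmm = GaussianMixture(n_components=2, random_state=0).fit(detections)
assign = gmm.predict(detections)
for k in range(2):
    lam = np.linalg.eigvalsh(gmm.covariances_[k])   # ascending eigenvalues
    ecc = np.sqrt(1.0 - lam[0] / lam[-1])           # 0 = circular, ~1 = line-like
    kind = "target-like" if ecc < 0.9 else "clutter-like"
    print(f"component {k}: eccentricity {ecc:.2f} -> {kind}, "
          f"{np.sum(assign == k)} detections")
```
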
Contributors: Freeman, Matthew Gregory (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Bliss, Daniel (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
This work details the bootstrap estimation of a nonparametric information divergence measure, the Dp divergence, using a power-law model. To address the challenge posed by computing accurate divergence estimates from finite-size data, the bootstrap approach is used in conjunction with a power-law curve to calculate an asymptotic value of the divergence estimator. Monte Carlo estimates of Dp are found for increasing sample sizes, and a power-law fit is used to relate the divergence estimates to sample size. The fit is also used to generate a confidence interval that characterizes the quality of the estimate. We compare the performance of this method with other estimation methods. The calculated divergence is then applied to the binary classification problem: using the inherent relation between divergence measures and classification error rate, an analysis of the Bayes error rate of several data sets is conducted using the asymptotic divergence estimate.
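
The extrapolation step can be sketched as follows: divergence estimates at increasing sample sizes are fit with D(N) = D_inf + c * N**(-alpha), and the fitted D_inf is taken as the asymptotic estimate, with a confidence interval from the fit covariance. The estimates below are simulated from a known power law purely to illustrate the fit; in practice they would come from bootstrap resamples of the Dp estimator.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

def power_law(N, d_inf, c, alpha):
    """Finite-sample bias model: the estimate approaches d_inf as N grows."""
    return d_inf + c * N ** (-alpha)

# Stand-in Monte Carlo divergence estimates at growing sample sizes; in
# practice each value would be a bootstrap average of the Dp estimator.
sizes = np.array([50, 100, 200, 400, 800, 1600, 3200], dtype=float)
true_d = 0.62
estimates = power_law(sizes, true_d, 1.5, 0.5) + 0.01 * rng.standard_normal(7)

popt, pcov = curve_fit(power_law, sizes, estimates, p0=[0.5, 1.0, 0.5])
d_inf, half_width = popt[0], 1.96 * np.sqrt(pcov[0, 0])
print(f"asymptotic estimate {d_inf:.3f} +/- {half_width:.3f} (truth {true_d})")
```
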
Contributors: Kadambi, Pradyumna Sanjay (Author) / Berisha, Visar (Thesis director) / Bliss, Daniel (Committee member) / Electrical Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05