Matching Items (7)
Description
Dynamic channel selection in cognitive radio consists of two main phases. The first phase is spectrum sensing, during which the channels occupied by primary users are detected. The second phase is channel selection, during which the state of the channel to be used by the secondary user is estimated. The existing cognitive radio channel selection literature assumes perfect spectrum sensing. However, this assumption becomes problematic as the noise in the channels increases, resulting in high probabilities of false alarm and missed detection. This thesis proposes a solution to this problem by incorporating the estimated state of channel occupancy into a selection cost function. The problem of optimal single-channel selection in cognitive radio is considered. A unique approach to the channel selection problem is proposed that consists of first using a particle filter to estimate the state of channel occupancy and then using the estimated state with a cost function to select a single channel for transmission. The selection cost function provides a means of assessing the various combinations of unoccupied channels in terms of desirability. By minimizing the expected selection cost over all possible channel occupancy combinations, the optimal hypothesis, which identifies the optimal single channel, is obtained. Several variations of the proposed cost-based channel selection approach are discussed and simulated in a variety of environments, ranging from few to many primary user channels, low to high signal-to-noise ratios, and low to high levels of primary user traffic.
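As a concrete illustration of the selection step, the following minimal Python sketch enumerates all channel occupancy combinations and picks the channel with the smallest expected cost. It assumes the particle filter stage has already produced a posterior occupancy probability per channel, treats channels as independent purely to keep the sketch short, and uses an illustrative collision cost; none of these choices are taken from the thesis itself.

```python
import itertools
import numpy as np

def expected_cost_selection(p_occ, collision_cost=10.0):
    """Select a single channel by minimizing the expected selection cost
    over all 2^N channel occupancy hypotheses.

    p_occ[k] is the posterior probability that channel k is occupied,
    e.g. as estimated by the particle filter stage.  Channels are
    treated as independent here only to keep the sketch short.
    """
    n = len(p_occ)
    best_channel, best_cost = None, np.inf
    for k in range(n):
        cost = 0.0
        for hyp in itertools.product([0, 1], repeat=n):
            # Posterior probability of this occupancy combination.
            prob = np.prod([p_occ[i] if h else 1.0 - p_occ[i]
                            for i, h in enumerate(hyp)])
            # Illustrative cost: colliding with a primary user on the
            # chosen channel is expensive; a free channel costs nothing.
            cost += prob * (collision_cost if hyp[k] else 0.0)
        if cost < best_cost:
            best_channel, best_cost = k, cost
    return best_channel, best_cost

print(expected_cost_selection([0.8, 0.2, 0.55]))   # -> (1, 2.0)
```

Under the independence assumption the expected cost collapses to collision_cost * p_occ[k]; the explicit enumeration is kept only to mirror the hypothesis search over occupancy combinations described above.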
Contributors: Zapp, Joseph (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Kovvali, Narayan (Committee member) / Reisslein, Martin (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
In this thesis, an integrated waveform-agile multi-modal track-before-detect sensing system is investigated and its performance is evaluated using an experimental platform. Adapting asymmetric multi-modal sensing platforms with radio frequency (RF) radar and electro-optical (EO) sensors allows for the integration of complementary information from different sensors. However, there are many challenges to overcome, including tracking low signal-to-noise ratio (SNR) targets, configuring waveforms to optimize tracking performance, and handling statistically dependent measurements. To address some of these challenges, a particle filter (PF) based recursive waveform-agile track-before-detect (TBD) algorithm is developed to avoid the information loss caused by conventional detection in low SNR environments. Furthermore, a waveform-agile selection technique is integrated into the PF-TBD to allow for adaptive waveform configurations. The embedded exponential family (EEF) approach is used to approximate the distributions of parameters of dependent RF and EO measurements and to further improve the target detection rate and tracking performance. The performance of the integrated algorithm is evaluated using real data from three experimental scenarios.
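The following is a minimal Python sketch of one recursive PF-TBD update, in which particles are weighted directly by the raw sensor frame with no thresholded detection step. The random-walk motion model, exponential intensity likelihood, and resampling rule are generic placeholders rather than the thesis's configuration, and the waveform-selection and EEF steps are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def pf_tbd_step(particles, weights, frame, sigma_move=1.0, noise_var=0.25):
    """One recursive track-before-detect update.  Particles carry 2-D
    target positions and are weighted directly by the raw sensor frame,
    avoiding the information loss of a hard detection threshold."""
    # Predict: random-walk motion model as a stand-in for the dynamics.
    particles = particles + rng.normal(0.0, sigma_move, particles.shape)
    # Update: likelihood proportional to raw intensity at each particle.
    rows = np.clip(particles[:, 0].astype(int), 0, frame.shape[0] - 1)
    cols = np.clip(particles[:, 1].astype(int), 0, frame.shape[1] - 1)
    weights = weights * np.exp(frame[rows, cols] / (2.0 * noise_var))
    weights = weights / weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(weights):
        idx = rng.choice(len(weights), size=len(weights), p=weights)
        particles = particles[idx]
        weights = np.full(len(weights), 1.0 / len(weights))
    return particles, weights

# Toy usage: a dim target at pixel (20, 30) in unit-variance noise.
frame = rng.normal(0.0, 1.0, (64, 64))
frame[20, 30] += 2.0                       # low-SNR target energy
particles = rng.uniform(0, 64, (500, 2))
weights = np.full(500, 1.0 / 500)
particles, weights = pf_tbd_step(particles, weights, frame)
```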
Contributors: Liu, Shubo (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Duman, Tolga (Committee member) / Kovvali, Narayan (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Research on developing new algorithms to improve information on brain functionality and structure is ongoing. Studying neural activity through dipole source localization with electroencephalography (EEG) and magnetoencephalography (MEG) sensor measurements can lead to the diagnosis and treatment of a brain disorder and can also identify the area of the brain from where the disorder has originated. Designing advanced localization algorithms that can adapt to environmental changes represents a significant shift from manual diagnosis, which is based on the knowledge and observation of the doctor, to an adaptive and improved brain disorder diagnosis, as these algorithms can track activities that might not be noticed by the human eye. An important consideration for these localization algorithms, however, is to minimize the overall power consumption in order to improve the study and treatment of brain disorders. This thesis considers the problem of estimating the dynamic parameters of neural dipole sources while minimizing the system's overall power consumption; this is achieved by minimizing the number of EEG/MEG measurement sensors without a loss in estimation accuracy. As the EEG/MEG measurement models are related non-linearly to the dipole source locations and moments, these dynamic parameters can be estimated using sequential Monte Carlo methods such as particle filtering. Because a large number of sensors is required to record EEG/MEG measurements for the particle filter over long recording periods, a large amount of power is required for storage and transmission. In order to reduce the overall power consumption, two methods are proposed. The first method uses the predicted mean squared estimation error as the performance metric under a maximum power consumption constraint. The performance metric of the second method uses the distance between the locations of the sensors and the location estimate of the dipole source at the previous time step; this sensor scheduling scheme maximizes the overall signal-to-noise ratio. The performance of both methods is demonstrated using simulated data, and both provide good estimation results with a significant reduction in the number of activated sensors at each time step.
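A toy Python sketch of the second, distance-based scheduling rule is shown below; the sensor layout, array sizes, and choice of k are placeholders for illustration only.

```python
import numpy as np

def schedule_sensors(sensor_xyz, prev_dipole_xyz, k_active):
    """Activate only the k sensors closest to the dipole location
    estimated at the previous time step; since signal strength falls
    off with distance, this favors the highest-SNR measurements."""
    dist = np.linalg.norm(sensor_xyz - prev_dipole_xyz, axis=1)
    return np.argsort(dist)[:k_active]         # indices to power on

# Toy usage: 64 hypothetical sensor sites on a 10 cm-scale head model.
rng = np.random.default_rng(1)
sensors = rng.uniform(-0.1, 0.1, (64, 3))      # positions in meters
active = schedule_sensors(sensors, np.zeros(3), k_active=16)
print(active)                                  # 16 sensors kept on
```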
Contributors: Michael, Stefanos (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Kovvali, Narayan (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
The radar performance of detecting a target and estimating its parameters can deteriorate rapidly in the presence of high clutter. This is because radar measurements due to clutter returns can be falsely detected as if originating from the actual target. Various data association methods and multiple hypothesis filtering approaches have been considered to solve this problem. Such methods, however, can be computationally intensive for real-time radar processing. This work proposes a new approach based on the unsupervised clustering of target and clutter detections before target tracking using particle filtering. In particular, Gaussian mixture modeling is first used to separate detections into two distinct Gaussian mixture components. Using eigenvector analysis, the eccentricities of the covariance matrices of the Gaussian mixture components are computed and compared to threshold values obtained a priori. The thresholding allows only target detections to be used for target tracking. Simulations demonstrate the performance of the new algorithm and compare it with a variant that uses k-means clustering instead of Gaussian mixture modeling.
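A compact sketch of the clustering and thresholding stage appears below, using scikit-learn's GaussianMixture. The eccentricity measure (largest-to-smallest covariance eigenvalue ratio) and the threshold value are illustrative assumptions, not the thesis's calibrated settings.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def keep_target_detections(detections, ecc_threshold=5.0):
    """Fit a 2-component Gaussian mixture to the detections, compute
    each component's eccentricity from the eigenvalues of its
    covariance, and keep only the component that looks target-like
    (compact, low eccentricity).  The threshold is illustrative."""
    gmm = GaussianMixture(n_components=2, random_state=0).fit(detections)
    labels = gmm.predict(detections)
    ecc = []
    for cov in gmm.covariances_:
        eigvals = np.linalg.eigvalsh(cov)       # ascending order
        ecc.append(eigvals[-1] / eigvals[0])
    target = int(np.argmin(ecc))                # tighter component
    if ecc[target] > ecc_threshold:
        return detections[:0]                   # nothing target-like
    return detections[labels == target]

# Toy usage: compact target returns embedded in elongated clutter.
rng = np.random.default_rng(2)
target = rng.normal([5.0, 5.0], 0.3, (40, 2))
clutter = rng.normal([0.0, 0.0], [6.0, 0.5], (200, 2))
kept = keep_target_detections(np.vstack([target, clutter]))
```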
Contributors: Freeman, Matthew Gregory (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Bliss, Daniel (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
The human brain controls a person's actions and reactions. In this study, the main objective is to quantify reaction time to a change in a visual event and to identify the inherent relationship between response time and the corresponding brain activities. Which parts of the human brain are responsible for the reaction time is also of interest. As electroencephalogram (EEG) signals track changes in brain function over time, EEG signals from different locations of the brain are used as indicators of brain activity. Since the different channels correspond to different parts of the brain, identifying the most relevant channels indicates which brain locations are responsible. In this study, response time is estimated using EEG signal features from the time, frequency, and time-frequency domains. Regression-based estimation using the full data-set results in a root mean square error (RMSE) of 99.5 milliseconds and a correlation value of 0.57. However, adding non-EEG features to the existing features gives an RMSE of 101.7 ms and a correlation value of 0.58. The same analysis with a custom data-set provides an RMSE of 135.7 milliseconds and a correlation value of 0.69. Classification-based estimation provides 79% and 72% accuracy for binary and 3-class classification, respectively. Classification of extremes (high vs. low) results in 95% accuracy. By combining recursive feature elimination, tree-based feature importance, and mutual information methods, the important channels and features are isolated based on the best result. As human response time is not solely dependent on brain activities, additional information about the subject is required to improve the reaction time estimation.
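To make the regression pipeline concrete, here is a minimal sketch on synthetic data: band-power features per channel feed a regressor, and RMSE plus correlation are reported as in the metrics above. The data, the alpha-band feature, and the random-forest model are all stand-in assumptions; the thesis's actual features span the time, frequency, and time-frequency domains.

```python
import numpy as np
from scipy.signal import welch
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-ins: 200 trials x 8 channels x 2 s of EEG at 256 Hz,
# with per-trial response times in milliseconds.  Real data replaces these.
rng = np.random.default_rng(0)
fs = 256
eeg = rng.standard_normal((200, 8, 2 * fs))
rt = rng.uniform(300.0, 900.0, 200)

# One illustrative frequency-domain feature: mean alpha-band (8-13 Hz)
# power per channel, estimated with Welch's method.
freqs, psd = welch(eeg, fs=fs, axis=-1)
alpha = (freqs >= 8) & (freqs <= 13)
X = psd[..., alpha].mean(axis=-1)            # shape (trials, channels)

X_tr, X_te, y_tr, y_te = train_test_split(X, rt, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)
rmse = np.sqrt(np.mean((pred - y_te) ** 2))
corr = np.corrcoef(pred, y_te)[0, 1]
print(f"RMSE {rmse:.1f} ms, correlation {corr:.2f}")
```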
Contributors: Chowdhury, Mohammad Samin Nur (Author) / Bliss, Daniel W (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Brewer, Gene (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Disentangling latent spaces is an important research direction in the interpretability of unsupervised machine learning. Several recent works using deep learning are very effective at producing disentangled representations. However, in the unsupervised setting, there is no way to pre-specify which part of the latent space captures specific factors of variation. While this is generally a hard problem because of the non-existence of analytical expressions to capture these variations, there are certain factors, like geometric transforms, that can be expressed analytically. Furthermore, in existing frameworks, the disentangled values are also not interpretable. The focus of this work is to disentangle these geometric factors of variation (which turn out to be nuisance factors for many applications) from the semantic content of the signal in an interpretable manner, which in turn makes the features more discriminative. Experiments are designed to show the modularity of the approach with other disentangling strategies as well as on multiple one-dimensional (1D) and two-dimensional (2D) datasets, clearly indicating the efficacy of the proposed approach.
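As a rough analogue of factoring out an analytically expressible geometric transform, the sketch below estimates in-plane rotation from second-order image moments and removes it, returning an (angle, canonical content) pair. This moment-based canonicalization is only a stand-in for the learned disentangling the thesis describes; the function and the toy image are illustrative.

```python
import numpy as np
from scipy import ndimage

def split_rotation(img):
    """Separate an analytic geometric factor (in-plane rotation) from
    content: estimate the orientation of the image's principal axis
    from second-order moments, rotate it out, and return both pieces."""
    y, x = np.mgrid[:img.shape[0], :img.shape[1]].astype(float)
    mass = img.sum()
    cy, cx = (y * img).sum() / mass, (x * img).sum() / mass
    mu20 = ((x - cx) ** 2 * img).sum() / mass
    mu02 = ((y - cy) ** 2 * img).sum() / mass
    mu11 = ((x - cx) * (y - cy) * img).sum() / mass
    # Principal-axis orientation in degrees (image-coordinate sign).
    angle = 0.5 * np.degrees(np.arctan2(2.0 * mu11, mu20 - mu02))
    # Rotating by the measured orientation aligns the axis horizontally.
    content = ndimage.rotate(img, angle, reshape=False)
    return angle, content

# Toy usage: a tilted bar is canonicalized back to a fixed orientation;
# the recovered angle is ~-25 degrees in ndimage's sign convention.
img = np.zeros((64, 64))
img[28:36, 10:54] = 1.0                    # horizontal bar
tilted = ndimage.rotate(img, 25.0, reshape=False)
angle, content = split_rotation(tilted)
```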
Contributors: Koneripalli Seetharam, Kaushik (Author) / Turaga, Pavan (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Jayasuriya, Suren (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Infants born before 37 weeks of pregnancy are considered preterm. Typically, preterm infants have to be strictly monitored, since they are highly susceptible to health problems like hypoxemia (low blood oxygen level), apnea, respiratory issues, cardiac problems, and neurological problems, as well as an increased chance of long-term health issues such as cerebral palsy, asthma, and sudden infant death syndrome. One of the leading health complications in preterm infants is bradycardia, defined as a slower than expected heart rate, generally below 60 beats per minute. Bradycardia is often accompanied by low oxygen levels and can cause additional long-term health problems in the premature infant. The implementation of a non-parametric method to predict the onset of bradycardia is presented. This method assumes no prior knowledge of the data and uses kernel density estimation to predict the future onset of bradycardia events. The data is preprocessed and then analyzed to detect the peaks in the ECG signals, following which different kernels are implemented to estimate the shared underlying distribution of the data. The performance of the algorithm is evaluated using various metrics, and the computational challenges and methods to overcome them are also discussed.
It is observed that the performance of the algorithm with regard to the kernels used is consistent with the theoretical performance of the kernel as presented in previous work. The theoretical approach has also been automated in this work, and the various implementation challenges have been addressed.
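The following Python sketch shows the general shape of such a non-parametric pipeline: R-peaks are detected in the ECG, RR intervals become instantaneous heart rates, and a Gaussian kernel density estimate yields the probability mass below a bradycardia threshold. The peak-detection settings and the single Gaussian kernel are illustrative assumptions standing in for the preprocessing and the several kernels evaluated in the thesis.

```python
import numpy as np
from scipy.signal import find_peaks
from scipy.stats import gaussian_kde
from scipy.integrate import trapezoid

def bradycardia_probability(ecg, fs, rate_threshold=60.0):
    """Non-parametric sketch: detect R-peaks, convert RR intervals to
    instantaneous heart rate (beats per minute), fit a Gaussian kernel
    density to the observed rates, and return the estimated probability
    mass below the bradycardia threshold."""
    # Peak detection settings are placeholders, not tuned values.
    peaks, _ = find_peaks(ecg, height=np.percentile(ecg, 95),
                          distance=int(0.25 * fs))
    rr = np.diff(peaks) / fs                  # RR intervals in seconds
    rates = 60.0 / rr                         # instantaneous bpm
    kde = gaussian_kde(rates)                 # shared underlying density
    grid = np.linspace(rates.min() - 20.0, rate_threshold, 200)
    return trapezoid(kde(grid), grid)         # estimated P(rate < 60)

# Toy usage: synthetic impulse-train ECG with beats every ~0.9 s.
fs = 250
ecg = np.zeros(60 * fs)
beats = np.cumsum(np.random.default_rng(3).normal(0.9, 0.08, 60))
ecg[(beats[beats < 60.0] * fs).astype(int)] = 1.0
print(bradycardia_probability(ecg, fs))
```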
Contributors: Mitra, Sinjini (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Moraffah, Bahman (Thesis advisor) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2020