Matching Items (48)
Description
Full-duplex communication has attracted significant attention as it promises to increase the spectral efficiency compared to half-duplex. Multi-hop full-duplex networks add new dimensions and capabilities to cooperative networks by facilitating simultaneous transmission and reception and improving data rates.

When a relay in a multi-hop full-duplex system amplifies and forwards its received signals, the presence of self-interference makes the input-output relationship recursive. This thesis introduces a signal flow graph approach that uses Mason's gain formula to find the input-output relationship of a multi-hop amplify-and-forward full-duplex relaying system. Even when all links have flat fading channels, the residual self-interference (RSI) left by imperfect self-interference cancellation at the relays results in an end-to-end effective channel that is an all-pole frequency-selective channel. The outage probability is also analyzed assuming the relay channels undergo frequency-selective fading, and the performance is compared with the case when the relay channels undergo frequency-flat fading. The outage analysis assumes that the destination employs an equalizer or a matched filter.

For the case of a two-hop (single-relay) full-duplex amplify-and-forward relaying system, bounds on the outage probability are derived assuming that the destination employs a matched filter (MF) or a minimum mean squared error decision feedback equalizer. For the case of a three-hop (two-relay) system with frequency-flat relay channels, the outage probability analysis considers the output SNR of different types of equalizers and of an MF at the destination. Closed-form upper bounds on the output SNR are also derived when the destination employs a minimum mean squared error decision feedback equalizer, and these bounds are used in the outage probability analysis. For sufficiently high target rates, full-duplex relaying with equalizers always achieves a lower outage probability than half-duplex relaying, despite the higher RSI. In contrast, since full-duplex relaying with an MF is sensitive to RSI, it is outperformed by half-duplex relaying under strong RSI.
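As an illustration of the recursion described above, the following sketch simulates a single amplify-and-forward full-duplex relay with a one-symbol residual self-interference (RSI) feedback loop, and checks its impulse response against the closed form that Mason's gain formula gives for a single loop. All gains and channel values here are hypothetical, not taken from the thesis:

```python
# Sketch (hypothetical gains): impulse response of a two-hop AF full-duplex
# relay whose residual self-interference feeds the relay output back into
# its input. Mason's gain formula for the single loop gives
# H(z) = a*h1*h2*z^-1 / (1 - a*g*z^-1), i.e. an all-pole channel.

h1, h2 = 0.9, 0.8      # source->relay and relay->destination flat channels
a = 1.2                # relay amplification gain
g = 0.3                # RSI loop gain after imperfect cancellation

def impulse_response(n_taps):
    """Simulate the relay recursion for a unit impulse input."""
    s = [1.0] + [0.0] * (n_taps - 1)
    y_prev = 0.0                      # relay output one symbol earlier
    d = []
    for n in range(n_taps):
        r = h1 * s[n] + g * y_prev    # received = signal + RSI feedback
        y = a * r                     # amplify-and-forward
        d.append(h2 * y_prev)         # destination sees delayed relay output
        y_prev = y
    return d

taps = impulse_response(6)
# Closed form from Mason's gain formula: tap k (k >= 1) is a*h1*h2*(a*g)^(k-1)
closed = [0.0] + [a * h1 * h2 * (a * g) ** (k - 1) for k in range(1, 6)]
for t, c in zip(taps, closed):
    assert abs(t - c) < 1e-12
print(taps[:4])
```

The geometric decay of the taps with the loop gain `a*g` is the all-pole frequency selectivity the abstract refers to: the flatter the RSI cancellation, the shorter the effective channel memory.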
Contributors: Sureshbabu, Abhilash (Author) / Tepedelenlioğlu, Cihan (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Bliss, Daniel (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
The radar performance of detecting a target and estimating its parameters can deteriorate rapidly in the presence of high clutter, because radar measurements due to clutter returns can be falsely detected as if originating from the actual target. Various data association methods and multiple hypothesis filtering approaches have been considered to solve this problem. Such methods, however, can be computationally intensive for real-time radar processing. This work proposes a new approach based on the unsupervised clustering of target and clutter detections before target tracking using particle filtering. In particular, Gaussian mixture modeling is first used to separate the detections into two distinct Gaussian mixtures. Using eigenvector analysis, the eccentricities of the covariance matrices of the Gaussian mixtures are computed and compared to threshold values obtained a priori. The thresholding allows only target detections to be used for target tracking. Simulations demonstrate the performance of the new algorithm and compare it with using k-means clustering instead of Gaussian mixture modeling.
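A minimal sketch of the eccentricity test described above, for 2x2 cluster covariances: the eigenvalues are computed in closed form and a compact (near-circular) cluster is kept as "target" while an elongated one is rejected as "clutter". The covariance values and the threshold are illustrative assumptions, not the a-priori values from the thesis:

```python
import math

# Sketch (illustrative threshold): eccentricity of a 2x2 cluster covariance
# computed from its eigenvalues, used to separate compact target clusters
# from elongated clutter clusters.

def eig_2x2_sym(c):
    """Eigenvalues of a symmetric 2x2 matrix [[a, b], [b, d]]."""
    a, b, d = c[0][0], c[0][1], c[1][1]
    mean = (a + d) / 2.0
    delta = math.sqrt(((a - d) / 2.0) ** 2 + b * b)
    return mean + delta, mean - delta  # (largest, smallest)

def eccentricity(cov):
    lmax, lmin = eig_2x2_sym(cov)
    return math.sqrt(1.0 - lmin / lmax)  # 0 = circular, -> 1 = elongated

target_cov  = [[1.0, 0.0], [0.0, 0.9]]   # nearly circular cluster
clutter_cov = [[5.0, 0.0], [0.0, 0.2]]   # strongly elongated cluster

THRESH = 0.7  # hypothetical a-priori threshold
for name, cov in [("target", target_cov), ("clutter", clutter_cov)]:
    e = eccentricity(cov)
    label = "target" if e < THRESH else "clutter"
    print(f"{name}: eccentricity={e:.3f} -> {label}")
```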
Contributors: Freeman, Matthew Gregory (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Bliss, Daniel (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
The exponential rise in unmanned aerial vehicles has necessitated accurate pose estimation under extreme conditions. Visual odometry (VO) is the estimation of the position and orientation of a vehicle based on the analysis of a sequence of images captured by a camera mounted on it. VO offers a cheap and relatively accurate alternative to conventional odometry techniques such as wheel odometry, inertial measurement systems, and the global positioning system (GPS). This thesis implements and analyzes the performance of a two-camera VO system, stereo-based visual odometry (SVO), in the presence of deterrent factors such as shadows, extremely bright outdoor scenes, and wet conditions. To allow the implementation of VO on any generic vehicle, the porting of the VO algorithm to Android handsets is also discussed. The SVO is implemented in three steps. In the first step, a dense disparity map for the scene is computed by applying the sum of absolute differences (SAD) technique for stereo matching on rectified and pre-filtered stereo frames; epipolar geometry is used to simplify the matching problem. The second step involves feature detection and temporal matching: features are detected with the Harris corner detector and matched between consecutive frames using the Lucas-Kanade feature tracker. In the third step, the 3D coordinates of the matched features are computed from the disparity map obtained in the first step and are related frame-to-frame by a translation and a rotation, which are computed by least squares minimization with the aid of singular value decomposition (SVD); random sample consensus (RANSAC) is used for outlier rejection. The accuracy of the algorithm is quantified by the final position error, the difference between the final position computed by the SVO algorithm and the ground-truth position obtained from GPS.
The SVO showed an error of around 1% under normal conditions for a path length of 60 m and around 3% in bright conditions for a path length of 130 m. The algorithm suffered in the presence of shadows and vibrations, with errors of around 15% over path lengths of 20 m and 100 m, respectively.
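The SAD-based disparity search of the first step can be sketched on a single rectified scanline, where the epipolar constraint reduces matching to a horizontal search. Real SVO matches 2D blocks across full frames; the pixel values, window size, and disparity range below are made up:

```python
# Sketch: sum-of-absolute-differences (SAD) matching along one rectified
# scanline. Rectification means the match lies on the same row, so the
# search is over horizontal disparity only.

def sad(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def disparity(left, right, x, win=1, max_d=4):
    """Disparity of pixel x in `left` by minimising SAD against `right`."""
    patch = left[x - win:x + win + 1]
    best_d, best_cost = 0, float("inf")
    for d in range(0, max_d + 1):
        if x - d - win < 0:          # candidate window would leave the image
            break
        cost = sad(patch, right[x - d - win:x - d + win + 1])
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

left  = [10, 10, 80, 90, 80, 10, 10, 10]
right = [80, 90, 80, 10, 10, 10, 10, 10]  # same feature, shifted by 2 pixels
print(disparity(left, right, x=3))        # -> 2
```

With the camera baseline and focal length known, each disparity maps to a depth, which is how the 3D coordinates for the third step are obtained.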
Contributors: Dhar, Anchit (Author) / Saripalli, Srikanth (Thesis advisor) / Li, Baoxin (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created: 2010
Description
In this thesis, the applications of deep learning in the analysis, detection, and classification of medical imaging datasets were studied, with a focus on datasets having a limited sample size. A combined machine learning-deep learning model was designed to classify one small dataset, prostate cancer images provided by Mayo Clinic, Arizona. A deep learning model was implemented to extract imaging features, followed by a machine learning classifier for prostate cancer diagnosis. The results were compared against models trained on texture-based features, namely gray level co-occurrence matrix (GLCM) and Gabor features. Some of the challenges of performing diagnosis on medical imaging datasets with limited sample sizes have been identified, and a set of future works has been proposed.
Keywords: deep learning, radiology, transfer learning, convolutional neural network.
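A minimal sketch of one of the texture baselines mentioned above, the GLCM, computed for a toy 4x4 image at horizontal offset (0, 1), with two standard texture features derived from it. The image values are illustrative:

```python
# Sketch: gray level co-occurrence matrix (GLCM) for a tiny 4-level image,
# counting horizontally adjacent pixel pairs, plus two classic features.

def glcm(img, levels):
    """Normalised counts of gray-level pairs (i, j) at offset (0, 1)."""
    m = [[0] * levels for _ in range(levels)]
    for row in img:
        for a, b in zip(row, row[1:]):
            m[a][b] += 1
    total = sum(sum(r) for r in m)
    return [[v / total for v in r] for r in m]  # joint probabilities

def contrast(p):
    n = len(p)
    return sum(p[i][j] * (i - j) ** 2 for i in range(n) for j in range(n))

def energy(p):
    return sum(v * v for row in p for v in row)

image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 2, 2, 2],
         [2, 2, 3, 3]]
p = glcm(image, levels=4)
print(f"contrast={contrast(p):.3f}, energy={energy(p):.3f}")
```

Feature vectors like `[contrast, energy, ...]` over several offsets are what a classical classifier would consume in place of learned deep features.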
Contributors: Sarkar, Suryadipto (Author) / Wu, Teresa (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Silva, Alvin (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
Background. Despite extensive research aimed at understanding the role of hypertension as a major risk factor for numerous leading causes of death in the United States, rates of this disease continue to rise. Recent findings suggest that antiseptic mouthwash use may increase blood pressure through elimination of oral bacteria that facilitate the enterosalivary nitrate-nitrite-nitric oxide pathway.

Objective. The purpose of this randomized, controlled, crossover trial was to examine the effects of antiseptic mouthwash use and sodium intake on blood pressure and salivary nitrate levels in prehypertensive adults.

Methods. Healthy adults (n=10; age 47.3±12.5 years) with mildly elevated blood pressure (average baseline blood pressure of 114.9/75.2 mmHg) were recruited and randomly assigned to a control condition, antiseptic mouthwash use, or antiseptic mouthwash use plus consumption of three pickles per day (~6000 mg/day of sodium), each for a total of 7 days. Given the crossover design of this study, participants adhered to a 1-week washout period between conditions, and all participants received all three treatments. A repeated measures ANOVA was used to compare the change data of each condition, and findings were considered significant at p<0.05.
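The repeated measures ANOVA used here can be sketched directly: each subject contributes one change score per condition, and the subject-to-subject variability is removed from the error term before forming the F statistic. The change scores below are made up for illustration and are not the study's data:

```python
# Sketch (made-up change scores, mmHg): F statistic of a one-way repeated
# measures ANOVA. Rows = subjects; columns = control / mouthwash /
# mouthwash + pickles.

data = [[1.0, 3.0, 5.0],
        [0.0, 2.0, 6.0],
        [2.0, 4.0, 7.0],
        [1.0, 1.0, 4.0]]

n, k = len(data), len(data[0])
grand = sum(sum(r) for r in data) / (n * k)
subj_means = [sum(r) / k for r in data]
cond_means = [sum(data[i][j] for i in range(n)) / n for j in range(k)]

ss_total = sum((x - grand) ** 2 for r in data for x in r)
ss_subj = k * sum((m - grand) ** 2 for m in subj_means)   # between subjects
ss_cond = n * sum((m - grand) ** 2 for m in cond_means)   # between conditions
ss_err = ss_total - ss_subj - ss_cond                     # residual

df_cond, df_err = k - 1, (n - 1) * (k - 1)
F = (ss_cond / df_cond) / (ss_err / df_err)
print(f"F({df_cond}, {df_err}) = {F:.2f}")
```

The within-subject design is what makes the 1-week washout necessary: each subject serves as their own control, so carry-over effects must be minimised.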

Results. Changes in systolic and diastolic blood pressure were not statistically significant (p=0.469 and p=0.859, respectively). Changes in salivary nitrite levels were not statistically significant (p=0.493). Although there appeared to be fluctuations in sodium intake between interventions, differences in sodium intake were not statistically significant when pickles were not accounted for (p=0.057).

Conclusion. Antiseptic mouthwash use did not appear to induce significant changes in systolic or diastolic blood pressure in this population.
Contributors: Shaw, Karrol (Author) / Johnston, Carol (Thesis advisor) / Alexon, Christy (Committee member) / Sweazea, Karen (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
This thesis covers the design, development and testing of two high-power radio frequency transmitters that operate in C-band and X-band (System-C/X). The operational bands of System-C/X are 3-6 GHz and 8-11 GHz, respectively. Each system is designed to produce a peak effective isotropic radiated power of at least 50 dBW. The transmitters use parabolic dish antennas with dual-linear polarization feeds that can be steered over a wide range of azimuths and elevations with a precision of a fraction of a degree. System-C/X's transmit waveforms are generated using software-defined radios. The software-defined radio software is lightweight and reconfigurable. New waveforms can be loaded into the system during operation and saved to an onboard database. The waveform agility of the two systems lends them to potential uses in a wide range of broadcasting applications, including radar and communications. The effective isotropic radiated power and beam patterns for System-C/X were measured during two field test events in July 2021 and January 2022. The performance of both systems was found to be within acceptable limits of their design specifications.
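The peak effective isotropic radiated power figure can be illustrated with a simple link budget: EIRP in dBW is transmit power (dBW) plus antenna gain (dBi) minus feed losses (dB). The transmit power, dish size, aperture efficiency, and losses below are hypothetical, not the systems' actual parameters:

```python
import math

# Sketch (hypothetical link-budget numbers): peak EIRP of a dish-based
# transmitter, EIRP_dBW = P_tx(dBW) + G_antenna(dBi) - L_feed(dB).

def dish_gain_dbi(diameter_m, freq_hz, efficiency=0.6):
    """Approximate parabolic dish gain: G = eta * (pi * D / lambda)^2."""
    lam = 3e8 / freq_hz
    return 10 * math.log10(efficiency * (math.pi * diameter_m / lam) ** 2)

p_tx_dbw = 13.0                        # 20 W power amplifier
g_dbi = dish_gain_dbi(2.4, 5e9)        # 2.4 m dish at 5 GHz (C-band)
l_feed_db = 1.5                        # cable + feed losses
eirp = p_tx_dbw + g_dbi - l_feed_db
print(f"gain = {g_dbi:.1f} dBi, EIRP = {eirp:.1f} dBW")
```

With these assumed numbers the budget clears the 50 dBW mark; the actual hardware mix in the thesis may trade dish size against amplifier power differently.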
Contributors: Gordon, Samuel (Author) / Bliss, Daniel (Thesis advisor) / Mauskopf, Philip (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
Increased demand on bandwidth has resulted in wireless communications and radar systems sharing spectrum. As signal transmissions from both modalities coexist, methodologies must be designed to reduce the interference each system induces in the other. This work considers the problem of tracking an object using radar measurements embedded in noise and corrupted by transmissions from multiple communications users. In low noise, radar received signals can be successively processed to estimate object parameters using maximum likelihood estimation. For linear frequency-modulated (LFM) signals, such estimates can be efficiently computed by integrating the Wigner distribution along lines in the time-frequency (TF) plane. However, the presence of communications interference greatly reduces estimation performance. This thesis proposes a new approach that increases radar estimation performance by integrating a highly localized TF method with data clustering. The received signal is first decomposed into highly localized Gaussians using the iterative matching pursuit decomposition (MPD). As the MPD is iterative, high noise levels can be reduced by appropriately selecting the algorithm's stopping criteria. The decomposition also provides feature vectors of reduced dimensionality that can be used for clustering with a Gaussian mixture model (GMM). The proposed estimation method integrates along lines of a modified Wigner distribution of the Gaussian clusters in the TF plane. Simulations show that the object parameter estimation performance improves greatly when the MPD is integrated with GMM clustering.
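A minimal sketch of the decomposition step described above: matching pursuit over a dictionary of unit-norm Gaussian atoms at integer time shifts, stopping when the residual energy drops below a fraction of the input energy. The atom width, test signal, and stopping tolerance are illustrative assumptions:

```python
import math

# Sketch: matching pursuit with a dictionary of unit-norm Gaussian atoms at
# integer time shifts. Each iteration subtracts the best-matching atom; the
# loop stops once the residual energy is small (illustrative criterion).

N, SIGMA = 64, 3.0

def atom(center):
    g = [math.exp(-((n - center) ** 2) / (2 * SIGMA ** 2)) for n in range(N)]
    norm = math.sqrt(sum(v * v for v in g))
    return [v / norm for v in g]

def mpd(signal, max_iters=10, tol=0.05):
    residual = list(signal)
    e0 = sum(v * v for v in signal)
    picks = []
    for _ in range(max_iters):
        # Pick the atom with the largest inner product with the residual.
        best = max(range(N),
                   key=lambda c: abs(sum(a * r for a, r in zip(atom(c), residual))))
        coef = sum(a * r for a, r in zip(atom(best), residual))
        residual = [r - coef * a for r, a in zip(residual, atom(best))]
        picks.append((best, coef))
        if sum(v * v for v in residual) < tol * e0:
            break
    return picks

# Two Gaussian components hidden in the signal, recovered one per iteration.
sig = [2.0 * a + 1.5 * b for a, b in zip(atom(20), atom(45))]
print([(c, round(w, 2)) for c, w in mpd(sig)])
```

The per-atom parameters (center, coefficient) form exactly the kind of low-dimensional feature vector that the GMM clustering stage would then operate on.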
Contributors: Zhang, Yiming (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Moraffah, Bahman (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
The human brain controls a person's actions and reactions. In this study, the main objective is to quantify reaction time to a change in a visual event and to determine the inherent relationship between response time and the corresponding brain activities. Which parts of the brain are responsible for the reaction time is also of interest. Since electroencephalogram (EEG) signals reflect changes in brain activity over time, EEG signals from different locations are used as indicators of brain activity; because the channels correspond to different parts of the brain, identifying the most relevant channels suggests which brain regions are responsible. In this study, response time is estimated using EEG signal features from the time, frequency, and time-frequency domains. Regression-based estimation using the full dataset results in a root mean square error (RMSE) of 99.5 milliseconds and a correlation value of 0.57; adding non-EEG features to the existing features gives an RMSE of 101.7 ms and a correlation value of 0.58. The same analysis with a custom dataset provides an RMSE of 135.7 milliseconds and a correlation value of 0.69. Classification-based estimation provides 79% and 72% accuracy for binary and 3-class classification, respectively, and classification of extremes (high versus low) reaches 95% accuracy. Combining recursive feature elimination, tree-based feature importance, and mutual information methods, important channels and features are isolated based on the best result. As human response time is not solely dependent on brain activities, additional information about the subject is required to improve the reaction time estimation.
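The two regression figures of merit reported above, RMSE and the correlation value (Pearson's r), can be computed as follows. The (true, predicted) reaction times are made up for illustration:

```python
import math

# Sketch: the two regression metrics quoted in the abstract, RMSE and
# Pearson correlation, on made-up (true, predicted) reaction times in ms.

def rmse(y, yhat):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y))

def pearson(y, yhat):
    my, mp = sum(y) / len(y), sum(yhat) / len(yhat)
    num = sum((a - my) * (b - mp) for a, b in zip(y, yhat))
    den = math.sqrt(sum((a - my) ** 2 for a in y)
                    * sum((b - mp) ** 2 for b in yhat))
    return num / den

true_ms = [310.0, 450.0, 290.0, 520.0, 380.0]
pred_ms = [335.0, 430.0, 310.0, 495.0, 400.0]
print(f"RMSE = {rmse(true_ms, pred_ms):.1f} ms, "
      f"r = {pearson(true_ms, pred_ms):.2f}")
```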
Contributors: Chowdhury, Mohammad Samin Nur (Author) / Bliss, Daniel W (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Brewer, Gene (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
A signal compressed using classical compression methods can in principle be recovered by brute force, searching component-wise for the non-zero entries, but such sparse solutions require combinatorial searches with high computational cost. In this thesis, two Bayesian approaches are instead considered to recover a sparse vector from underdetermined noisy measurements. The first is constructed using a Bernoulli-Gaussian (BG) prior distribution and is assumed to be the true generative model. The second is constructed using a Gamma-Normal (GN) prior distribution and is, therefore, a different (i.e., misspecified) model. To estimate the posterior distribution in the correctly specified scenario, an algorithm based on generalized approximate message passing (GAMP) is constructed, while an algorithm based on sparse Bayesian learning (SBL) is used for the misspecified scenario. Recovering a sparse signal in a Bayesian framework is one class of algorithms for solving the sparse problem; all such classes aim to avoid the high computational cost of combinatorial searches. Compressive sensing (CS) is the widely used term for this sparse optimization problem and its applications, which include magnetic resonance imaging (MRI), image acquisition in radar imaging, and facial recognition. In the CS literature, the target vector can be recovered either by optimizing an objective function using point estimation, or by recovering a distribution of the sparse vector using Bayesian estimation. Although the Bayesian framework provides an extra degree of freedom to assume a distribution that is directly applicable to the problem of interest, it is hard to find a theoretical guarantee of convergence. This limitation has shifted some research toward non-Bayesian frameworks. This thesis tries to close this gap by proposing a Bayesian framework with a suggested theoretical bound for the assumed, not necessarily correct, distribution.
In the simulation study, a general Bayesian Cramér-Rao bound (BCRB) is derived along with the misspecified Bayesian Cramér-Rao bound (MBCRB) for the GN model. Both bounds are validated against the mean square error (MSE) performance of the aforementioned algorithms. A quantification of the performance in terms of gains versus losses is also introduced as one main finding of this report.
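A minimal sketch of the scalar MMSE denoiser implied by the Bernoulli-Gaussian prior, the element-wise building block a GAMP-style algorithm applies at each iteration. With x = 0 with probability 1-lam and x ~ N(0, sx2) otherwise, and a pseudo-measurement r = x + N(0, tau), the posterior mean has a closed form. The prior and noise parameters below are illustrative:

```python
import math

# Sketch: scalar MMSE denoiser under a Bernoulli-Gaussian (BG) prior.
# Small inputs are shrunk almost entirely to zero; large inputs are kept
# and only lightly shrunk by the Wiener gain sx2 / (sx2 + tau).

def bg_mmse(r, lam=0.1, sx2=1.0, tau=0.05):
    # Likelihoods of r under the "zero" and "active" mixture components.
    l0 = math.exp(-r * r / (2 * tau)) / math.sqrt(2 * math.pi * tau)
    l1 = (math.exp(-r * r / (2 * (tau + sx2)))
          / math.sqrt(2 * math.pi * (tau + sx2)))
    # Posterior probability that x is non-zero.
    pi = lam * l1 / (lam * l1 + (1 - lam) * l0)
    # Posterior mean: Wiener-shrunk r, weighted by the activity probability.
    return pi * (sx2 / (sx2 + tau)) * r

print(round(bg_mmse(0.05), 4))  # small input: driven toward zero
print(round(bg_mmse(2.0), 4))   # large input: essentially preserved
```

This thresholding-like nonlinearity is what lets the Bayesian recovery avoid the combinatorial search over support sets.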
Contributors: Alhowaish, Abdulhakim (Author) / Richmond, Christ D (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Sankar, Lalitha (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Disentangling latent spaces is an important research direction in the interpretability of unsupervised machine learning. Several recent works using deep learning are very effective at producing disentangled representations. However, in the unsupervised setting, there is no way to pre-specify which part of the latent space captures specific factors of variation. While this is generally a hard problem because no analytical expressions exist to capture these variations, certain factors, such as geometric transforms, can be expressed analytically. Furthermore, in existing frameworks the disentangled values are not interpretable. The focus of this work is to disentangle these geometric factors of variation (which turn out to be nuisance factors for many applications) from the semantic content of the signal in an interpretable manner, which in turn makes the features more discriminative. Experiments demonstrate the modularity of the approach with other disentangling strategies as well as on multiple one-dimensional (1D) and two-dimensional (2D) datasets, clearly indicating the efficacy of the proposed approach.
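One geometric factor that can be expressed analytically, a circular time shift, can be factored out of a 1D signal by estimating and removing it. The estimated shift is itself an interpretable disentangled value, while the aligned signal carries the semantic content. The signals below are toy examples:

```python
# Sketch: disentangling one analytically-expressible geometric transform,
# a circular time shift, from a 1D signal. The shift estimate is the
# interpretable nuisance value; the aligned signal is the content.

def xcorr_shift(sig, template):
    """Circular shift of `sig` that best matches `template`."""
    n = len(sig)
    def score(s):
        return sum(template[i] * sig[(i + s) % n] for i in range(n))
    return max(range(n), key=score)

def align(sig, template):
    s = xcorr_shift(sig, template)
    n = len(sig)
    return s, [sig[(i + s) % n] for i in range(n)]

template = [0, 1, 3, 1, 0, 0, 0, 0]      # canonical pose of the content
shifted  = [0, 0, 0, 0, 0, 1, 3, 1]      # same content, circularly shifted by 4
shift, aligned = align(shifted, template)
print(shift, aligned)
```

Any feature extracted from `aligned` is now invariant to the shift, which is the sense in which removing nuisance factors makes features more discriminative.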
Contributors: Koneripalli Seetharam, Kaushik (Author) / Turaga, Pavan (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Jayasuriya, Suren (Committee member) / Arizona State University (Publisher)
Created: 2019