Matching Items (5)
Description
Dynamic channel selection in cognitive radio consists of two main phases. The first phase is spectrum sensing, during which the channels occupied by the primary users are detected. The second phase is channel selection, during which the state of the channel to be used by the secondary user is estimated. The existing cognitive radio channel selection literature assumes perfect spectrum sensing. However, this assumption becomes problematic as the noise in the channels increases, resulting in high probabilities of false alarm and missed detection. This thesis proposes a solution to this problem by incorporating the estimated state of channel occupancy into a selection cost function. The problem of optimal single-channel selection in cognitive radio is considered. A unique approach to the channel selection problem is proposed: a particle filter is first used to estimate the state of channel occupancy, and the estimated state is then used with a cost function to select a single channel for transmission. The selection cost function provides a means of assessing the various combinations of unoccupied channels in terms of desirability. By minimizing the expected selection cost function over all possible channel occupancy combinations, the optimal hypothesis, which identifies the optimal single channel, is obtained. Several variations of the proposed cost-based channel selection approach are discussed and simulated in a variety of environments, ranging from low to high numbers of primary user channels, low to high signal-to-noise ratios, and low to high levels of primary user traffic.
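The cost-based selection step described above can be illustrated with a minimal sketch. The occupancy probabilities, the simple collision-penalty cost, and the channel count below are all hypothetical stand-ins for the particle-filter output and the thesis's actual cost functions; the sketch only shows how minimizing an expected cost over all occupancy hypotheses identifies a single channel.

```python
import numpy as np

K = 4                                   # number of primary-user channels (assumed)
# Hypothetical posterior probabilities that each channel is OCCUPIED,
# standing in for the particle-filter state estimate.
p_occ = np.array([0.9, 0.2, 0.6, 0.1])

def selection_cost(hypothesis, channel):
    """Toy cost: selecting an occupied channel incurs a collision penalty,
    selecting a free one costs nothing."""
    return 1.0 if hypothesis[channel] == 1 else 0.0

# Enumerate all 2^K occupancy hypotheses and their probabilities
# (channels assumed independent for this sketch).
best_channel, best_cost = None, np.inf
for ch in range(K):
    expected = 0.0
    for h in range(2 ** K):
        hyp = [(h >> i) & 1 for i in range(K)]
        prob = np.prod([p_occ[i] if hyp[i] else 1 - p_occ[i] for i in range(K)])
        expected += prob * selection_cost(hyp, ch)
    if expected < best_cost:
        best_channel, best_cost = ch, expected

print(best_channel)   # -> 3, the channel least likely to be occupied
```

With this particular cost, the expected cost of channel `ch` reduces to its occupancy probability, so the minimizer is simply the channel least likely to be occupied; richer cost functions over channel combinations follow the same enumerate-and-minimize pattern.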
Contributors: Zapp, Joseph (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Kovvali, Narayan (Committee member) / Reisslein, Martin (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
This thesis investigates the capacity and bit error rate (BER) performance of multi-user diversity systems with a random number of users and considers its application to cognitive radio systems. Ergodic capacity, normalized capacity, outage capacity, and average bit error rate metrics are studied. It is found that randomizing the number of users reduces the ergodic capacity. A stochastic ordering framework is adopted to order user distributions, for example through Laplace transform ordering; the ergodic capacities under different user distributions follow their corresponding Laplace transform order. The scaling laws of ergodic capacity with the mean number of users under Poisson and negative binomial user distributions are studied for a large mean number of users, and these two random distributions are ordered in the Laplace transform ordering sense. The ergodic capacity per user is defined and is shown to increase when the total number of users is randomized, the opposite of the behavior of the unnormalized ergodic capacity metric. Outage probability under slow fading is also considered and shown to decrease when the total number of users is randomized. The bit error rate in a general multi-user diversity system has a completely monotonic derivative, which implies, by Jensen's inequality, that randomizing the total number of users degrades the average BER performance. The special case of a Poisson number of users and Rayleigh fading is studied; combining this with results from regular variation theory, the average BER is shown to achieve tightness in Jensen's inequality. This is followed by an extension to a negative binomial number of users, for which the BER is derived and shown to be decreasing in the number of users. A single primary user cognitive radio system with multi-user diversity at the secondary users is proposed.
Compared to the general multi-user diversity system, there exists an interference constraint between the secondary and primary users which is independent of the secondary users' transmission. The secondary user with the highest transmitted SNR that also satisfies the interference constraint is selected to communicate. The active number of secondary users is then a binomial random variable. This is followed by a derivation of the scaling law of the ergodic capacity with the mean number of users and a closed-form expression for the average BER in this situation. The ergodic capacity under the binomial user distribution is shown to outperform the Poisson case. Monte Carlo simulations are used to supplement the analytical results and compare the performance of different user distributions.
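The central claim that randomizing the user count reduces ergodic capacity can be checked with a quick Monte Carlo sketch. The setup is an assumed simplification: i.i.d. Rayleigh fading (exponential SNR gains), selection of the strongest user, a mean of 5 users, and unit average SNR are arbitrary choices for illustration, not values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)
trials, mean_users, avg_snr = 100_000, 5, 1.0

def capacity(n_users):
    """Selection-diversity capacity for one fading realization: the user
    with the largest Rayleigh-faded SNR transmits."""
    if n_users == 0:
        return 0.0                      # no user scheduled -> zero rate
    gains = rng.exponential(avg_snr, n_users)
    return np.log2(1.0 + gains.max())

# Fixed number of users vs. Poisson-distributed number with the same mean.
c_fixed = np.mean([capacity(mean_users) for _ in range(trials)])
c_rand = np.mean([capacity(n) for n in rng.poisson(mean_users, trials)])

print(c_fixed > c_rand)   # True: randomization reduces ergodic capacity
```

The gap is exactly the Jensen penalty the thesis analyzes: the per-realization capacity grows concavely with the number of users, so spreading probability mass over user counts can only lose capacity relative to the fixed-count system with the same mean.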
Contributors: Zeng, Ruochen (Author) / Tepedelenlioğlu, Cihan (Thesis advisor) / Duman, Tolga (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Camera calibration has applications in the fields of robotic motion, geographic mapping, semiconductor defect characterization, and many more. This thesis considers camera calibration for the purpose of high-accuracy three-dimensional reconstruction when characterizing ball grid arrays within the semiconductor industry. Bouguet's calibration method is used following a set of criteria with the purpose of studying the method's performance according to newly proposed standards. The performance of the camera calibration method is currently measured using standards such as pixel error and computational time. This thesis proposes the standard deviation of the intrinsic parameter estimates within a Monte Carlo simulation as a new performance measure. It specifically shows that the standard deviation decreases as the number of images input into the calibration routine increases. It is also shown that the default thresholds of the calibration method's non-linear maximum likelihood estimation problem must be changed in order to improve computational time; however, the accuracy lost is negligible even for high-accuracy requirements such as ball grid array characterization.
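The proposed performance measure can be sketched with a toy Monte Carlo model. This is not Bouguet's method: each "calibration run" below simply averages per-image noisy focal-length estimates, and the focal length, noise level, and image counts are all made-up values. The sketch only shows the shape of the proposed metric, i.e. the standard deviation of an intrinsic parameter estimate shrinking as more images are used.

```python
import numpy as np

rng = np.random.default_rng(2)
f_true, noise_std, runs = 1200.0, 8.0, 2000   # hypothetical focal length (px)

def calibrate(n_images):
    """Stand-in for one calibration run: the focal-length estimate is the
    average of per-image noisy estimates (a toy model, not Bouguet's method)."""
    return rng.normal(f_true, noise_std, n_images).mean()

# Monte Carlo standard deviation of the estimate for increasing image counts.
stds = {n: np.std([calibrate(n) for _ in range(runs)]) for n in (5, 10, 20, 40)}
print(stds)   # the standard deviation shrinks as more images are used
```

In this idealized model the standard deviation falls like 1/sqrt(n); a real calibration routine would be run repeatedly on resampled image sets, but the metric is read off the same way.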
Contributors: Stenger, Nickolas Arthur (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Kovvali, Narayan (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Full-duplex communication has attracted significant attention as it promises to increase the spectral efficiency compared to half-duplex. Multi-hop full-duplex networks add new dimensions and capabilities to cooperative networks by facilitating simultaneous transmission and reception and improving data rates.

When a relay in a multi-hop full-duplex system amplifies and forwards its received signals, the presence of self-interference means the input-output relationship is determined by recursive equations. This thesis introduces a signal flow graph approach to finding the input-output relationship of a multi-hop amplify-and-forward full-duplex relaying system using Mason's gain formula. Even when all links have flat fading channels, the residual self-interference (RSI) component due to imperfect self-interference cancellation at the relays results in an end-to-end effective channel that is an all-pole frequency-selective channel. By assuming the relay channels undergo frequency-selective fading, the outage probability analysis is also performed and the performance is compared with the case when the relay channels undergo frequency-flat fading. The outage performance of this system is evaluated assuming that the destination employs an equalizer or a matched filter.
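The all-pole structure produced by the RSI loop can be verified on a toy two-hop example. The gains below are arbitrary stand-ins; with one forward path (a*h1*h2) and one self-interference loop (a*b*z^{-1}), Mason's gain formula gives H(z) = a*h1*h2 / (1 - a*b*z^{-1}), and the direct relay recursion reproduces exactly that impulse response.

```python
import numpy as np

# Toy two-hop AF full-duplex relay with residual self-interference (RSI).
h1, h2 = 0.9, 0.8          # source->relay and relay->destination gains (assumed flat)
a, b = 1.2, 0.3            # relay amplification and RSI loop gain (|a*b| < 1)

# Impulse response by direct recursion: r[n] = a*(h1*s[n] + b*r[n-1]).
N = 20
s = np.zeros(N); s[0] = 1.0
r = np.zeros(N)
for n in range(N):
    r[n] = a * (h1 * s[n] + (b * r[n - 1] if n > 0 else 0.0))
y = h2 * r

# Mason's gain formula: forward path a*h1*h2, single loop a*b*z^{-1},
# i.e. the all-pole response H(z) = a*h1*h2 / (1 - a*b*z^{-1}),
# whose impulse response is a*h1*h2 * (a*b)^n.
y_mason = a * h1 * h2 * (a * b) ** np.arange(N)

print(np.allclose(y, y_mason))   # True: recursion matches the closed form
```

The single pole at z = a*b is what makes the end-to-end channel frequency selective even over flat links; with more hops, Mason's formula accumulates one such loop per relay.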

For the case of a two-hop (single-relay) full-duplex amplify-and-forward relaying system, bounds on the outage probability are derived by assuming that the destination employs a matched filter or a minimum mean squared error decision feedback equalizer. For the case of a three-hop (two-relay) system with frequency-flat relay channels, the outage probability analysis is performed by considering the output SNR of different types of equalizers and of the matched filter at the destination. Closed-form upper bounds on the output SNR are also derived when the destination employs a minimum mean squared error decision feedback equalizer; these bounds are used in the outage probability analysis. It is seen that for sufficiently high target rates, full-duplex relaying with equalizers always achieves a lower outage probability than half-duplex relaying, despite the higher RSI. In contrast, since full-duplex relaying with a matched filter is sensitive to RSI, it is outperformed by half-duplex relaying under strong RSI.
Contributors: Sureshbabu, Abhilash (Author) / Tepedelenlioğlu, Cihan (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Bliss, Daniel (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
The radar performance of detecting a target and estimating its parameters can deteriorate rapidly in the presence of high clutter, because radar measurements due to clutter returns can be falsely detected as if originating from the actual target. Various data association methods and multiple hypothesis filtering approaches have been considered to solve this problem. Such methods, however, can be computationally intensive for real-time radar processing. This work proposes a new approach based on unsupervised clustering of target and clutter detections before target tracking using particle filtering. In particular, Gaussian mixture modeling is first used to separate the detections into two distinct Gaussian mixture components. Using eigenvector analysis, the eccentricities of the covariance matrices of the mixture components are computed and compared to threshold values obtained a priori. The thresholding allows only target detections to be used for target tracking. Simulations demonstrate the performance of the new algorithm and compare it with using k-means clustering instead of Gaussian mixture modeling.
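The eccentricity-thresholding step can be sketched as follows. The detection geometries, cluster assignments, and threshold value are all assumed for illustration: in the thesis the two components would be fit by Gaussian mixture modeling (for example with scikit-learn's `GaussianMixture`), whereas here the assignments are taken as known so the sketch can focus on the eigenvalue-based eccentricity test.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical detections: a compact, nearly circular target cluster and an
# elongated clutter cluster (covariances and positions are made up).
target = rng.multivariate_normal([0, 0], [[1.0, 0.0], [0.0, 0.9]], 500)
clutter = rng.multivariate_normal([8, 8], [[9.0, 0.0], [0.0, 0.5]], 500)

def eccentricity(points):
    """Eccentricity of a cluster from the eigenvalues of its sample
    covariance: sqrt(1 - lambda_min / lambda_max)."""
    lam = np.linalg.eigvalsh(np.cov(points.T))     # ascending eigenvalues
    return np.sqrt(1.0 - lam[0] / lam[-1])

threshold = 0.7                                    # assumed a priori value
for name, pts in (("target", target), ("clutter", clutter)):
    e = eccentricity(pts)
    print(name, round(e, 2), "keep" if e < threshold else "reject")
```

Run on this toy data, the compact target cluster falls below the threshold and is kept for tracking, while the elongated clutter cluster is rejected; the same test applies to the covariance matrices returned by the fitted mixture model.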
Contributors: Freeman, Matthew Gregory (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Bliss, Daniel (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Arizona State University (Publisher)
Created: 2016