Matching Items (84)

Description
The multidimensional (MD) discrete Fourier transform (DFT) is a key kernel algorithm in many signal processing applications, such as radar imaging and medical imaging. Traditionally, a two-dimensional (2-D) DFT is computed using Row-Column (RC) decomposition, where one-dimensional (1-D) DFTs are computed along the rows followed by 1-D DFTs along the columns. However, architectures based on RC decomposition are not efficient for large input sizes, where the data have to be stored in external memory based on Synchronous Dynamic RAM (SDRAM). In this dissertation, an efficient architecture to implement the 2-D DFT for large-sized input data is first proposed. This architecture achieves very high throughput by exploiting the inherent parallelism due to a novel 2-D decomposition and by utilizing the row-wise burst access pattern of the SDRAM external memory. In addition, an automatic IP generator is provided for mapping this architecture onto a reconfigurable platform of Xilinx Virtex-5 devices. For a 2048x2048 input size, the proposed architecture is 1.96 times faster than an RC-decomposition-based implementation under the same memory constraints, and also outperforms other existing implementations. While the proposed 2-D DFT IP can achieve high performance, its output is bit-reversed. For systems where the output is required to be in natural order, use of this DFT IP would result in timing overhead. To solve this problem, a new bandwidth-efficient MD DFT IP that is transpose-free and produces outputs in natural order is proposed. It is based on a novel decomposition algorithm that takes into account the output order, FPGA resources, and the characteristics of off-chip memory access. An IP generator is designed and integrated into an in-house FPGA development platform, AlgoFLEX, for easy verification and fast integration. The corresponding 2-D and 3-D DFT architectures are ported onto the BEE3 board and their performance is measured and analyzed. The results show that the architecture can maintain the maximum memory bandwidth throughout the whole procedure while avoiding the matrix transpose operations used in most other MD DFT implementations. The proposed architecture has also been ported onto the Xilinx ML605 board. When clocked at 100 MHz, 2048x2048 images with complex single-precision data can be processed in less than 27 ms. Finally, transpose-free imaging flows for the range-Doppler algorithm (RDA) and the chirp-scaling algorithm (CSA) in SAR imaging are proposed. The corresponding implementations take advantage of the memory access patterns designed for the MD DFT IP and have superior timing performance. The RDA and CSA flows are mapped onto a unified architecture which is implemented on an FPGA platform. When clocked at 100 MHz, the RDA and CSA computations with data size 4096x4096 can be completed in 323 ms and 162 ms, respectively. This implementation outperforms existing SAR image accelerators based on FPGAs and GPUs.
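For reference, the RC decomposition that the dissertation improves upon can be sketched in a few lines of NumPy; this is a conceptual software illustration, not the proposed hardware architecture:

```python
import numpy as np

def dft2_row_column(x):
    """2-D DFT via Row-Column (RC) decomposition:
    1-D DFTs along every row, then 1-D DFTs along every column."""
    rows_done = np.fft.fft(x, axis=1)     # pass 1: row-wise 1-D DFTs
    return np.fft.fft(rows_done, axis=0)  # pass 2: column-wise 1-D DFTs

# Sanity check against a direct 2-D FFT.
x = np.random.randn(256, 256) + 1j * np.random.randn(256, 256)
assert np.allclose(dft2_row_column(x), np.fft.fft2(x))
```

In hardware, the column pass strides through the image with a large step, which maps poorly onto SDRAM's row-wise burst access; avoiding this strided access pattern is the motivation for the novel decomposition.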
Contributors: Yu, Chi-Li (Author) / Chakrabarti, Chaitali (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Karam, Lina (Committee member) / Cao, Yu (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Camera calibration has applications in the fields of robotic motion, geographic mapping, semiconductor defect characterization, and many more. This thesis considers camera calibration for the purpose of high-accuracy three-dimensional reconstruction when characterizing ball grid arrays within the semiconductor industry. Bouguet's calibration method is used following a set of criteria, with the purpose of studying the method's performance according to newly proposed standards. The performance of a camera calibration method is currently measured using standards such as pixel error and computational time. This thesis proposes the use of the standard deviation of the intrinsic parameter estimates within a Monte Carlo simulation as a new standard of performance measure. It specifically shows that the standard deviation decreases as the number of images input into the calibration routine increases. It is also shown that the default thresholds of the non-linear maximum likelihood estimation problem of the calibration method must be changed in order to improve computational time performance; however, the accuracy lost is negligible even for high-accuracy requirements such as ball grid array characterization.
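The proposed performance measure can be illustrated with a toy Monte Carlo sketch; the noise model, estimator, and parameter values below are hypothetical stand-ins for a full Bouguet calibration run:

```python
import numpy as np

rng = np.random.default_rng(0)
TRUE_FOCAL = 800.0  # hypothetical intrinsic parameter (pixels)
NOISE_STD = 5.0     # hypothetical per-image estimation noise

def calibrate(n_images):
    """Stand-in for one calibration run: each image contributes a
    noisy observation of the intrinsic, and the run averages them."""
    obs = TRUE_FOCAL + NOISE_STD * rng.standard_normal(n_images)
    return obs.mean()

# Monte Carlo: spread of the estimate vs. number of input images.
for n in (5, 10, 20, 40):
    estimates = [calibrate(n) for _ in range(2000)]
    print(f"{n:3d} images -> std = {np.std(estimates):.3f} px")
```

The printed standard deviation shrinks as more images are used, mirroring the thesis's finding for the actual intrinsic parameters.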
Contributors: Stenger, Nickolas Arthur (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Kovvali, Narayan (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Today's mobile devices have to support computation-intensive multimedia applications with a limited energy budget. In this dissertation, we present architecture-level and algorithm-level techniques that reduce the energy consumption of these devices with minimal impact on system quality. First, we present novel techniques to mitigate the effects of SRAM memory failures in JPEG2000 implementations operating at scaled voltages. We investigate error control coding schemes and propose an unequal error protection scheme tailored for JPEG2000 that reduces overhead without affecting the performance. Furthermore, we propose algorithm-specific techniques for error compensation that exploit the fact that in JPEG2000 the discrete wavelet transform outputs have larger values for low-frequency subband coefficients and smaller values for high-frequency subband coefficients. Next, we present the use of voltage overscaling to reduce the datapath power consumption of JPEG codecs. We propose an algorithm-specific technique which exploits the characteristics of the quantized coefficients after zig-zag scan to mitigate errors introduced by aggressive voltage scaling. Third, we investigate the effect of reducing the dynamic range for datapath energy reduction. We analyze the effect of truncation error and propose a scheme that estimates the mean value of the truncation error during the pre-computation stage and compensates for this error. Such a scheme is very effective for reducing the noise power in applications that are dominated by additions and multiplications, such as FIR filtering and transform computation. We also present a novel sum of absolute differences (SAD) scheme that is based on most-significant-bit truncation. The proposed scheme exploits the fact that most of the absolute difference (AD) calculations result in small values, and most of the large AD values do not contribute to the SAD values of the blocks that are selected. Such a scheme is highly effective in reducing the energy consumption of motion estimation and intra-prediction kernels in video codecs. Finally, we present several hybrid energy-saving techniques based on a combination of voltage scaling, computation reduction, and dynamic range reduction that further reduce the energy consumption while keeping the performance degradation very low. For instance, a combination of computation reduction and dynamic range reduction for the Discrete Cosine Transform shows, on average, a 33% to 46% reduction in energy consumption while incurring only a 0.5 dB to 1.5 dB loss in PSNR.
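The SAD idea can be sketched in software as follows; this version saturates the high-order bits of each absolute difference, one plausible reading of the MSB-truncation scheme, with assumed bit widths rather than the dissertation's exact datapath rules:

```python
import numpy as np

def sad_msb_truncated(block_a, block_b, ad_bits=4):
    """SAD with the high-order bits of each absolute difference (AD)
    dropped via saturation. Small ADs (the common case) are exact;
    large ADs clip, but blocks dominated by large ADs would not win
    the motion search anyway."""
    ad = np.abs(block_a.astype(np.int32) - block_b.astype(np.int32))
    return int(np.minimum(ad, (1 << ad_bits) - 1).sum())

rng = np.random.default_rng(1)
ref = rng.integers(0, 256, (16, 16))
cur = np.clip(ref + rng.integers(-6, 7, (16, 16)), 0, 255)  # similar block
print(int(np.abs(ref - cur).sum()), sad_msb_truncated(ref, cur))
```

For well-matched blocks the truncated SAD equals the full SAD, which is why the narrower datapath costs so little coding quality.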
Contributors: Emre, Yunus (Author) / Chakrabarti, Chaitali (Thesis advisor) / Bakkaloglu, Bertan (Committee member) / Cao, Yu (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
There is growing interest in improved high-accuracy camera calibration methods due to the increasing demand for 3D visual media in commercial markets. Camera calibration is used widely in the fields of computer vision, robotics, and 3D reconstruction. Camera calibration is the first step in extracting 3D data from a 2D image. It plays a crucial role in computer vision and 3D reconstruction because the accuracy of the reconstruction and of the 3D coordinate determination relies to a great extent on the accuracy of the camera calibration. This thesis presents a novel camera calibration method using a circular calibration pattern. The disadvantages of and issues with existing state-of-the-art methods are discussed and overcome in this work. The implemented system combines techniques of local adaptive segmentation, ellipse fitting, projection, and optimization. Simulation results are presented to illustrate the performance of the proposed scheme. These results show that the proposed method reduces the error compared to the state of the art for high-resolution images, and that the proposed scheme is more robust to blur in the imaged calibration pattern.
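Of the listed building blocks, ellipse fitting is the most self-contained; a minimal algebraic least-squares sketch is shown below (a generic conic fit, not necessarily the exact fitting variant used in the thesis):

```python
import numpy as np

def fit_conic(x, y):
    """Algebraic least-squares fit of a*x^2+b*x*y+c*y^2+d*x+e*y+f=0.
    The right singular vector with the smallest singular value
    minimizes ||D @ coeffs|| subject to ||coeffs|| = 1."""
    d = np.column_stack([x*x, x*y, y*y, x, y, np.ones_like(x)])
    return np.linalg.svd(d)[2][-1]  # coefficients (a, b, c, d, e, f)

# Noisy samples of an imaged circle (an ellipse under projection).
t = np.linspace(0, 2 * np.pi, 200)
x = 3.0 + 4.0 * np.cos(t) + 0.01 * np.random.randn(t.size)
y = -1.0 + 2.0 * np.sin(t) + 0.01 * np.random.randn(t.size)
coeffs = fit_conic(x, y)  # center and axes follow from the conic form
```

In a full pipeline, the centers recovered from such fits feed the projection and optimization stages.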
Contributors: Prakash, Charan Dudda (Author) / Karam, Lina J (Thesis advisor) / Frakes, David (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Continuous underwater observation is a challenging engineering task that could be accomplished by the development and deployment of a sensor array that can survive harsh underwater conditions. One approach to this challenge is a swarm of micro underwater robots, known as Sensorbots, equipped with biogeochemical sensors that can relay information among themselves in real time. This innovative method for underwater exploration can contribute to a more comprehensive understanding of the ocean by not limiting sampling to a single point and time. In this thesis, Sensorbot Beta, a low-cost, fully enclosed Sensorbot prototype for bench-top characterization and short-term field testing, is presented in a modular format that provides flexibility and the potential for rapid design. Sensorbot Beta is designed around a microcontroller-driven platform built from commercial off-the-shelf components for all hardware, to reduce cost and development time. The primary sensor incorporated into Sensorbot Beta is an in situ fluorescent pH sensor. Design considerations have been made for easy adoption of other fluorescent or phosphorescent sensors, such as dissolved oxygen or temperature. The optical components are designed in a format that enables additional sensors. A real-time data acquisition system utilizing Bluetooth allows for characterization of the sensor in bench-top experiments. Sensorbot Beta demonstrates rapid calibration, and future work will include deployment for large-scale experiments in a lake or ocean.
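Fluorescent pH indicators are commonly calibrated by fitting a sigmoidal response curve to bench-top buffer measurements; the sketch below is a generic illustration in which the indicator model, parameter values, and data points are all hypothetical, not Sensorbot Beta's actual calibration routine:

```python
import numpy as np
from scipy.optimize import curve_fit

def response(ph, r_min, r_max, pka, slope):
    """Generic sigmoidal response of a fluorescent pH indicator."""
    return r_min + (r_max - r_min) / (1 + 10 ** ((pka - ph) / slope))

# Hypothetical bench-top points: (buffer pH, normalized fluorescence).
ph = np.array([5.0, 6.0, 6.5, 7.0, 7.5, 8.0, 9.0])
signal = np.array([0.10, 0.18, 0.35, 0.60, 0.80, 0.92, 0.97])
params, _ = curve_fit(response, ph, signal, p0=[0.1, 1.0, 7.0, 1.0])
r_min, r_max, pka, slope = params  # invert response() to read out pH
```

Rapid calibration then amounts to streaming a handful of buffer measurements over Bluetooth and refitting these few parameters.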
Contributors: Johansen, John (Civil engineer) (Author) / Meldrum, Deirdre R (Thesis advisor) / Chao, Shih-hui (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
This thesis investigates the capacity and bit error rate (BER) performance of multi-user diversity systems with a random number of users, and considers their application to cognitive radio systems. Ergodic capacity, normalized capacity, outage capacity, and average bit error rate metrics are studied. It is found that randomizing the number of users reduces the ergodic capacity. A stochastic ordering framework is adopted to order user distributions, for example, by Laplace transform ordering. The ergodic capacity under different user distributions follows the corresponding Laplace transform order. The scaling law of the ergodic capacity with the mean number of users under Poisson and negative binomial user distributions is studied for a large mean number of users, and these two distributions are ordered in the Laplace transform sense. The ergodic capacity per user is defined and is shown to increase when the total number of users is randomized, the opposite of the behavior of the unnormalized ergodic capacity metric. The outage probability under slow fading is also considered and shown to decrease when the total number of users is randomized. The bit error rate (BER) in a general multi-user diversity system has a completely monotonic derivative, which implies, by Jensen's inequality, that randomizing the total number of users degrades the average BER performance. The special case of a Poisson number of users and Rayleigh fading is studied. Combining this with the theory of regular variation, the average BER is shown to achieve tightness in Jensen's inequality. This is followed by an extension to a negative binomial number of users, for which the BER is derived and shown to be decreasing in the number of users. A cognitive radio system with a single primary user and multi-user diversity at the secondary users is proposed. Compared to the general multi-user diversity system, there exists an interference constraint between the secondary and primary users, which is independent of the secondary users' transmission. The secondary user with the highest transmitted SNR which also satisfies the interference constraint is selected to communicate. The active number of secondary users is a binomial random variable. This is followed by a derivation of the scaling law of the ergodic capacity with the mean number of users and a closed-form expression for the average BER in this setting. The ergodic capacity under the binomial user distribution is shown to outperform the Poisson case. Monte Carlo simulations supplement the analytical results and compare the performance of different user distributions.
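The headline effect, that randomizing the user count lowers the ergodic capacity, is easy to reproduce in a Monte Carlo sketch under assumed Rayleigh fading and max-SNR user selection:

```python
import numpy as np

rng = np.random.default_rng(0)
MEAN_USERS, SNR_BAR, TRIALS = 20, 10.0, 100_000  # assumed parameters

def capacity(n_users):
    """Multi-user diversity: schedule the instantaneously best user.
    Rayleigh fading -> per-user SNR is exponential with mean SNR_BAR."""
    if n_users == 0:
        return 0.0  # no user to schedule in this slot
    return np.log2(1.0 + SNR_BAR * rng.exponential(size=n_users).max())

fixed = np.mean([capacity(MEAN_USERS) for _ in range(TRIALS)])
rand_n = np.mean([capacity(rng.poisson(MEAN_USERS)) for _ in range(TRIALS)])
print(f"fixed N={MEAN_USERS}: {fixed:.3f} b/s/Hz, Poisson N: {rand_n:.3f}")
```

With the same mean number of users, the Poisson-distributed population yields a slightly lower ergodic capacity, consistent with the ordering result, since the capacity is concave and increasing in the number of users.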
Contributors: Zeng, Ruochen (Author) / Tepedelenlioğlu, Cihan (Thesis advisor) / Duman, Tolga (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
This project examines the science of electric field sensing and presents experiments that gather data to support its utility for various applications. The basic system consists of a transmitter, a receiver, and a lock-in amplifier. The primary goal of the study was to determine whether such a system could detect a human disturbance, owing to the capacitance of the human body, and this hypothesis was supported. Markedly different results were obtained when a person disturbed the electric field transmitted by the system than when other types of objects, such as chairs and electronic devices, were placed in the field. In fact, there was a distinct difference between persons of varied sizes as well. This thesis goes through the basic design of the system and the process of experimental design for determining the capabilities of such an electric field sensing system.
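The lock-in amplifier at the heart of the system can be emulated digitally; the sketch below assumes a sample rate, reference frequency, and signal levels chosen only for illustration (the thesis used hardware lock-in detection):

```python
import numpy as np

FS, F_REF, DURATION = 50_000.0, 1_000.0, 2.0  # assumed Hz, Hz, s
t = np.arange(0.0, DURATION, 1.0 / FS)

# Receiver signal: a weak, phase-shifted copy of the transmitted
# reference buried in broadband noise. A body in the field changes
# the capacitive coupling, and with it this amplitude and phase.
rx = 0.02 * np.sin(2 * np.pi * F_REF * t + 0.3)
rx += 0.5 * np.random.randn(t.size)

# Lock-in detection: mix with in-phase/quadrature references and
# low-pass filter (here, a plain average) to reject the noise.
i = 2.0 * np.mean(rx * np.sin(2 * np.pi * F_REF * t))
q = 2.0 * np.mean(rx * np.cos(2 * np.pi * F_REF * t))
print(f"amplitude ~ {np.hypot(i, q):.4f}, phase ~ {np.arctan2(q, i):.3f} rad")
```

Monitoring the recovered amplitude over time is what allows the system to distinguish a person, who presents a large grounded capacitance, from furniture or electronics.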
Contributors: Branham, Breana Michelle (Author) / Allee, David (Thesis director) / Papandreou-Suppappola, Antonia (Committee member) / Phillips, Stephen (Committee member) / Barrett, The Honors College (Contributor) / Electrical Engineering Program (Contributor) / School of International Letters and Cultures (Contributor)
Created: 2013-05
Description
Distributed inference has applications in fields as varied as source localization, evaluation of network quality, and remote monitoring of wildlife habitats. In this dissertation, distributed inference algorithms over multiple-access channels are considered. The performance of these algorithms and the effects of wireless communication channels on that performance are studied. In a first class of problems, distributed inference over fading Gaussian multiple-access channels with amplify-and-forward is considered. Sensors observe a phenomenon and transmit their observations using the amplify-and-forward scheme to a fusion center (FC). Distributed estimation is considered with a single antenna at the FC, where the performance is evaluated using the asymptotic variance of the estimator. The loss in performance due to varying assumptions on the limited amounts of channel information at the sensors is quantified. With multiple antennas at the FC, a distributed detection problem is also considered, where the error exponent is used to evaluate performance. It is shown that, for zero-mean channels between the sensors and the FC with no channel information at the sensors, arbitrarily large gains in the error exponent can be obtained with a sufficient increase in the number of antennas at the FC. In stark contrast, when there is channel information at the sensors, the gain in the error exponent due to having multiple antennas at the FC is shown to be no more than a factor of 8/π for Rayleigh fading channels between the sensors and the FC, independent of the number of antennas at the FC and of any correlation among noise samples across sensors. In a second class of problems, sensor observations are transmitted to the FC using constant-modulus phase modulation over Gaussian multiple-access channels. The phase modulation scheme allows for constant transmit power and the estimation of moments other than the mean with a single transmission from the sensors. Estimators are developed for the mean, variance, and signal-to-noise ratio (SNR) of the sensor observations. The performance of these estimators is studied for different distributions of the observations. It is proved that the estimator of the mean is asymptotically efficient if and only if the distribution of the sensor observations is Gaussian.
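The constant-modulus scheme in the second problem class can be sketched for the mean estimator; the number of sensors, modulation index, and noise level below are assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
K, OMEGA, NOISE_STD = 500, 0.5, 1.0  # sensors, mod. index, channel noise

x = 3.0 + rng.standard_normal(K)  # sensor observations, true mean 3.0

# Each sensor transmits a constant-modulus phase-modulated signal
# exp(j*OMEGA*x_k); the Gaussian MAC adds them at the fusion center.
noise = NOISE_STD * (rng.standard_normal() + 1j * rng.standard_normal())
y = np.exp(1j * OMEGA * x).sum() + noise

# y/K estimates the characteristic function E[exp(j*OMEGA*X)]; for
# Gaussian observations its angle is exactly OMEGA times the mean.
print(f"estimated mean = {np.angle(y / K) / OMEGA:.3f}")
```

Because every sensor transmits at the same power regardless of its reading, the transmit power is constant, and higher moments such as the variance follow from the magnitude of the same received sum.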
Contributors: Banavar, Mahesh Krishna (Author) / Tepedelenlioğlu, Cihan (Thesis advisor) / Spanias, Andreas (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Duman, Tolga (Committee member) / Zhang, Junshan (Committee member) / Arizona State University (Publisher)
Created: 2010
Description
The exponential rise of unmanned aerial vehicles has created a need for accurate pose estimation under extreme conditions. Visual odometry (VO) is the estimation of the position and orientation of a vehicle based on the analysis of a sequence of images captured by a camera mounted on it. VO offers a cheap and relatively accurate alternative to conventional odometry techniques such as wheel odometry, inertial measurement systems, and the global positioning system (GPS). This thesis implements and analyzes the performance of a two-camera VO method, stereo-based visual odometry (SVO), in the presence of various deterrent factors such as shadows, extremely bright outdoor scenes, and wet conditions. To allow the implementation of VO on any generic vehicle, a discussion of porting the VO algorithm to Android handsets is also presented. The SVO is implemented in three steps. In the first step, a dense disparity map for the scene is computed. To achieve this, the sum-of-absolute-differences technique is used for stereo matching on rectified and pre-filtered stereo frames, with epipolar geometry used to simplify the matching problem. The second step involves feature detection and temporal matching. Feature detection is carried out by the Harris corner detector, and these features are matched between two consecutive frames using the Lucas-Kanade feature tracker. The 3D coordinates of the matched features are computed from the disparity map obtained in the first step and are mapped into each other by a translation and a rotation. The rotation and translation are computed using least-squares minimization with the aid of the singular value decomposition, with Random Sample Consensus (RANSAC) used for outlier detection. This comprises the third step. The accuracy of the algorithm is quantified by the final position error, the difference between the final position computed by the SVO algorithm and the final ground truth position obtained from GPS. The SVO showed an error of around 1% under normal conditions for a path length of 60 m and around 3% in bright conditions for a path length of 130 m. The algorithm suffered in the presence of shadows and vibrations, with errors of around 15% for path lengths of 20 m and 100 m, respectively.
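The least-squares rotation and translation in the third step is the classical SVD-based alignment of matched 3-D point sets; a minimal sketch, without the RANSAC outlier loop, might look like this:

```python
import numpy as np

def rigid_transform(p, q):
    """Least-squares R, t such that q ~ R @ p + t, for matched 3-D
    points (one per row), via SVD of the cross-covariance matrix."""
    cp, cq = p.mean(axis=0), q.mean(axis=0)
    u, _, vt = np.linalg.svd((p - cp).T @ (q - cq))
    d = np.sign(np.linalg.det(vt.T @ u.T))  # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    return r, cq - r @ cp

# Synthetic check: recover a known inter-frame camera motion.
rng = np.random.default_rng(0)
r_true, _ = np.linalg.qr(rng.standard_normal((3, 3)))
if np.linalg.det(r_true) < 0:
    r_true[:, 0] *= -1  # make it a proper rotation
p = rng.standard_normal((50, 3))
q = p @ r_true.T + np.array([0.5, 0.0, -0.2])
r_est, t_est = rigid_transform(p, q)
assert np.allclose(r_est, r_true)
```

In the full pipeline, RANSAC repeatedly runs this solver on random minimal subsets of the matches and keeps the motion hypothesis with the largest inlier set.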
Contributors: Dhar, Anchit (Author) / Saripalli, Srikanth (Thesis advisor) / Li, Baoxin (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created: 2010
Description
The continuous time-tagging of photon arrival times for high-count-rate sources is necessary for applications such as optical communications, quantum key encryption, and astronomical measurements. Detection of Hanbury Brown and Twiss (HBT) single-photon correlations from thermal sources, such as stars, requires a combination of high dynamic range, long integration times, and low systematics in the photon detection and time-tagging system. The continuous nature of the measurements and the need for highly accurate timing resolution require a customized time-to-digital converter (TDC). A custom-built, two-channel, field programmable gate array (FPGA)-based TDC capable of continuously time-tagging single photons with sub-clock-cycle timing resolution was characterized. Auto-correlation and cross-correlation measurements were used to constrain spurious systematic effects in the pulse count data as a function of system variables. These variables included, but were not limited to, the incident photon count rate, incoming signal attenuation, and measurements of fixed signals. Additionally, a generalized likelihood ratio test (GLRT) using maximum likelihood estimators (MLEs) was derived as a means to detect and estimate correlated photon signal parameters. The derived GLRT was capable of detecting correlated photon signals in a laboratory setting with a high degree of statistical confidence. A proof is presented in which the MLE for the amplitude of the correlated photon signal is shown to be the minimum variance unbiased estimator (MVUE). The fully characterized TDC was used in preliminary measurements of astronomical sources using ground-based telescopes. Finally, preliminary theoretical groundwork is established for the deep space optical communications system of the proposed Breakthrough Starshot project, in which low-mass craft will travel to the Alpha Centauri system to collect scientific data from Proxima b. This theoretical groundwork utilizes recent and upcoming space-based optical communication systems as starting points for the Starshot communication system.
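The cross-correlation measurement central to the characterization can be sketched on simulated time tags; the rates, bin width, and injected correlated fraction below are arbitrary illustration values:

```python
import numpy as np

rng = np.random.default_rng(0)
RATE, T_TOT, BIN = 1e5, 1.0, 1e-6  # counts/s, integration s, bin s

# Two channels of Poisson arrivals plus a shared component that
# lands on both, mimicking an HBT-like correlated excess.
shared = rng.uniform(0.0, T_TOT, int(0.1 * RATE * T_TOT))
n_a = rng.poisson(RATE * T_TOT)
n_b = rng.poisson(RATE * T_TOT)
ch_a = np.concatenate([rng.uniform(0.0, T_TOT, n_a), shared])
ch_b = np.concatenate([rng.uniform(0.0, T_TOT, n_b), shared])

# Bin the time tags and form the normalized zero-lag correlation.
edges = np.arange(0.0, T_TOT + BIN, BIN)
a, _ = np.histogram(ch_a, edges)
b, _ = np.histogram(ch_b, edges)
g2 = np.mean(a * b) / (np.mean(a) * np.mean(b))
print(f"zero-lag correlation g2(0) ~ {g2:.3f}")  # > 1 with shared events
```

Repeating the same computation with uncorrelated inputs, or correlating a channel against a delayed copy of itself, gives the auto- and cross-correlation baselines used to bound systematics.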
Contributors: Hodges, Todd Michael William (Author) / Mauskopf, Philip (Thesis advisor) / Trichopoulos, George (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Bliss, Daniel (Committee member) / Arizona State University (Publisher)
Created: 2022