Description

Distributed inference has applications in fields as varied as source localization, evaluation of network quality, and remote monitoring of wildlife habitats. In this dissertation, distributed inference algorithms over multiple-access channels are considered. The performance of these algorithms and the effects of wireless communication channels on the performance are studied. In a first class of problems, distributed inference over fading Gaussian multiple-access channels with amplify-and-forward is considered. Sensors observe a phenomenon and transmit their observations using the amplify-and-forward scheme to a fusion center (FC). Distributed estimation is considered with a single antenna at the FC, where the performance is evaluated using the asymptotic variance of the estimator. The loss in performance due to varying assumptions on the limited amounts of channel information at the sensors is quantified. With multiple antennas at the FC, a distributed detection problem is also considered, where the error exponent is used to evaluate performance. It is shown that for zero-mean channels between the sensors and the FC, when there is no channel information at the sensors, arbitrarily large gains in the error exponent can be obtained with a sufficient increase in the number of antennas at the FC. In stark contrast, when there is channel information at the sensors, the gain in error exponent due to having multiple antennas at the FC is shown to be no more than a factor of 8/π for Rayleigh fading channels between the sensors and the FC, independent of the number of antennas at the FC or the correlation among noise samples across sensors. In a second class of problems, sensor observations are transmitted to the FC using constant-modulus phase modulation over Gaussian multiple-access channels. The phase modulation scheme allows for constant transmit power and estimation of moments other than the mean with a single transmission from the sensors. Estimators are developed for the mean, variance, and signal-to-noise ratio (SNR) of the sensor observations. The performance of these estimators is studied for different distributions of the observations. It is proved that the estimator of the mean is asymptotically efficient if and only if the distribution of the sensor observations is Gaussian.
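
As a concrete illustration of the phase-modulation scheme described above, the sketch below simulates sensors transmitting constant-modulus phase-modulated observations over an idealized Gaussian multiple-access channel and recovers the mean and variance from the received superposition. It assumes unit channel gains and Gaussian observations, and all parameter names and values are illustrative rather than taken from the dissertation.

```python
import numpy as np

rng = np.random.default_rng(0)

K = 1000                # number of sensors (illustrative)
mu, sigma = 2.0, 0.5    # true mean and std of sensor observations
omega = 0.8             # modulation index, chosen so omega*x stays within (-pi, pi]
noise_std = 0.1         # receiver noise std at the fusion center

x = rng.normal(mu, sigma, K)        # sensor observations
tx = np.exp(1j * omega * x)         # constant-modulus phase-modulated transmissions
# Gaussian MAC: transmissions superimpose in the air, plus receiver noise
y = tx.sum() + noise_std * (rng.standard_normal() + 1j * rng.standard_normal())

phi_hat = y / K                     # empirical characteristic function at omega
mu_hat = np.angle(phi_hat) / omega  # mean estimate from the phase
var_hat = -2 * np.log(np.abs(phi_hat)) / omega**2  # variance from the magnitude (Gaussian model)

print(f"mean:     true {mu:.3f}, est {mu_hat:.3f}")
print(f"variance: true {sigma**2:.3f}, est {var_hat:.3f}")
```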

Contributors: Banavar, Mahesh Krishna (Author) / Tepedelenlioğlu, Cihan (Thesis advisor) / Spanias, Andreas (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Duman, Tolga (Committee member) / Zhang, Junshan (Committee member) / Arizona State University (Publisher)
Created: 2010

Description

As the demand for spectrum sharing between radar and communications systems steadily increases, the coexistence of the two systems is a growing and very challenging problem. Radar tracking in the presence of strong communications interference can result in a low probability of detection even when sequential Monte Carlo tracking methods, such as the particle filter (PF), that better match the target kinematic model are used. In particular, the tracking performance can fluctuate as the power level of the communications interference varies dynamically and unpredictably.

This work proposes to integrate the interacting multiple model (IMM) selection approach with the PF tracker to allow for dynamic variations in the power spectral density of the communications interference. The model switching allows for the necessary transitions between different communications interference power spectral density (CI-PSD) values in order to reduce prediction errors. Simulations demonstrate the high performance of the integrated approach with as many as six dynamic CI-PSD value changes during the target track. For low signal-to-interference-plus-noise ratios (SINR), a derivation for estimating the high power levels of the communications interference is provided; the estimated power levels would be dynamically used in the IMM when integrated with a track-before-detect filter that is better matched to low-SINR tracking applications.
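
The following sketch shows the IMM mode-probability recursion in isolation, with two hypothetical CI-PSD levels mapped to two measurement-noise variances; the particle-filter tracker that the work integrates this with is omitted, and all numerical values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two candidate CI-PSD levels -> two measurement-noise variances (illustrative)
R = np.array([0.1, 2.0])
P_trans = np.array([[0.95, 0.05],   # Markov chain over interference modes
                    [0.05, 0.95]])
mu_mode = np.array([0.5, 0.5])      # initial mode probabilities

def imm_mode_update(mu_mode, z, z_pred, R, P_trans):
    """One IMM step: propagate mode probabilities through the Markov chain,
    then reweight by the Gaussian likelihood of the residual under each mode."""
    mu_pred = P_trans.T @ mu_mode
    resid = z - z_pred
    like = np.exp(-0.5 * resid**2 / R) / np.sqrt(2 * np.pi * R)
    mu_post = mu_pred * like
    return mu_post / mu_post.sum()

# Simulate: the true interference switches from low to high PSD halfway through
z_pred = 0.0
for t in range(40):
    true_R = R[0] if t < 20 else R[1]
    z = z_pred + np.sqrt(true_R) * rng.standard_normal()
    mu_mode = imm_mode_update(mu_mode, z, z_pred, R, P_trans)
print("final mode probabilities:", np.round(mu_mode, 3))
```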

Contributors: Zhou, Jian (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Kovvali, Narayan (Committee member) / Berisha, Visar (Committee member) / Arizona State University (Publisher)
Created: 2015

Description

This dissertation introduces stochastic ordering of instantaneous channel powers of fading channels as a general method to compare the performance of a communication system over two different channels, even when a closed-form expression for the metric may not be available. Such a comparison is with respect to a variety of performance metrics such as error rates, outage probability, and ergodic capacity, which share common mathematical properties such as monotonicity, convexity, or complete monotonicity. Complete monotonicity of a metric, such as the symbol error rate, in conjunction with the stochastic Laplace transform order between two fading channels implies the ordering of the two channels with respect to the metric. While it has been established previously that certain modulation schemes have convex symbol error rates, there is no study of the complete monotonicity of the same, which helps in establishing stronger channel ordering results. Toward this goal, the current research proves, for the first time, that all 1-dimensional and 2-dimensional modulations have completely monotone symbol error rates. Furthermore, it is shown that the frequently used parametric fading distributions for modeling line of sight exhibit a monotonicity in the line-of-sight parameter with respect to the Laplace transform order. While the Laplace transform order can also be used to order fading distributions based on the ergodic capacity, there exist several distributions which are not Laplace transform ordered, although they have ordered ergodic capacities. To address this gap, a new stochastic order called the ergodic capacity order has been proposed herein, which can be used to compare channels based on the ergodic capacity. Using stochastic orders, the average performance of systems involving multiple random variables is compared over two different channels. These systems include diversity combining schemes, relay networks, and signal detection over fading channels with non-Gaussian additive noise. This research also addresses the problem of unifying fading distributions. This unification is based on infinite divisibility, which subsumes almost all known fading distributions, and provides simplified expressions for performance metrics, in addition to enabling stochastic ordering.
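
A small numerical illustration of the ordering argument, assuming unit-mean Nakagami-m channels whose power is Gamma(m, 1/m) distributed: these channels are Laplace-transform ordered in m, so, because the BPSK symbol error rate is completely monotone in the instantaneous SNR, the larger-m channel must have the lower average SER at every average SNR. Parameter choices are illustrative.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

def avg_ser_bpsk(m, snr_db, n=200_000):
    """Average BPSK SER over Nakagami-m fading: E[Q(sqrt(2*g*snr))],
    with unit-mean channel power g ~ Gamma(shape=m, scale=1/m)."""
    g = rng.gamma(m, 1.0 / m, n)
    snr = 10 ** (snr_db / 10)
    return norm.sf(np.sqrt(2 * g * snr)).mean()

# Gamma(m, 1/m) channel powers are Laplace-transform ordered in m:
# E[exp(-s g)] = (1 + s/m)^(-m) decreases in m for every s > 0.
# Since the SER is completely monotone in the instantaneous SNR,
# the larger-m channel must yield the smaller average SER.
for snr_db in [0, 5, 10]:
    ser1, ser2 = avg_ser_bpsk(1.0, snr_db), avg_ser_bpsk(4.0, snr_db)
    print(f"{snr_db:2d} dB: m=1 -> {ser1:.4e}, m=4 -> {ser2:.4e}")
```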

Contributors: Rajan, Adithya (Author) / Tepedelenlioğlu, Cihan (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Bliss, Daniel (Committee member) / Kosut, Oliver (Committee member) / Arizona State University (Publisher)
Created: 2014

Description

Structural health management (SHM) is emerging as a vital methodology to help engineers improve the safety and maintainability of critical structures. SHM systems are designed to reliably monitor and test the health and performance of structures in aerospace, civil, and mechanical engineering applications. SHM combines multidisciplinary technologies including sensing, signal processing, pattern recognition, data mining, high-fidelity probabilistic progressive damage models, physics-based damage models, and regression analysis. Due to the wide application of carbon fiber reinforced composites and their multiscale failure mechanisms, it is necessary to emphasize SHM research on composite structures. This research develops a comprehensive framework for the damage detection, localization, quantification, and prediction of the remaining useful life of complex composite structures. To interrogate a composite structure, guided wave propagation is applied to thin structures such as beams and plates. Piezoelectric transducers are selected because of their versatility, which allows them to be used as both sensors and actuators. Feature extraction from guided wave signals is critical to demonstrate the presence of damage and estimate the damage locations. Advanced signal processing techniques are employed to extract robust features and information. To provide a better estimate of the damage for accurate life estimation, probabilistic regression analysis is used to obtain a prediction model for the prognosis of complex structures subject to fatigue loading. Special effort has been devoted to extending these SHM techniques to aerospace and spacecraft structures, such as UAV composite wings and deployable composite boom structures, with the necessary modifications made to meet the unique requirements of aerospace structures. The developed SHM algorithms are able to accurately detect and quantify impact damage as well as introduced matrix cracking.
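
The sketch below illustrates one simple flavor of the time-frequency feature extraction mentioned above: locating regions of high signal energy in the spectrogram of a toneburst response. The sampling rate, excitation frequency, and the synthetic "damage reflection" are hypothetical stand-ins for real guided-wave data.

```python
import numpy as np
from scipy.signal import spectrogram

rng = np.random.default_rng(3)

fs = 1e6                                  # 1 MHz sampling rate (illustrative)
t = np.arange(0, 2e-3, 1 / fs)

def toneburst(t0, f0, cycles=5, amp=1.0):
    """Hann-windowed toneburst, a common guided-wave excitation."""
    w = (t >= t0) & (t < t0 + cycles / f0)
    s = np.zeros_like(t)
    s[w] = amp * np.sin(2 * np.pi * f0 * (t[w] - t0)) * np.hanning(w.sum())
    return s

# Direct arrival plus a weaker, delayed reflection from simulated damage
x = toneburst(0.2e-3, 100e3) + toneburst(0.9e-3, 100e3, amp=0.3)
x += 0.02 * rng.standard_normal(t.size)

f, tt, S = spectrogram(x, fs, nperseg=256, noverlap=192)
band = (f > 50e3) & (f < 150e3)           # band around the excitation frequency
energy = S[band].sum(axis=0)              # time-localized band energy

direct = tt[np.argmax(energy)]
late_mask = tt > 0.5e-3                   # look past the direct arrival
late = tt[late_mask][np.argmax(energy[late_mask])]
print(f"direct arrival ~{direct*1e3:.2f} ms, candidate damage reflection ~{late*1e3:.2f} ms")
```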

Contributors: Liu, Yingtao (Author) / Chattopadhyay, Aditi (Thesis advisor) / Rajadas, John (Committee member) / Dai, Lenore (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Jiang, Hanqing (Committee member) / Arizona State University (Publisher)
Created: 2012

Description

Structural integrity is an important characteristic of performance for critical components used in applications such as aeronautics, materials, construction, and transportation. When appraising the structural integrity of these components, evaluation methods must be accurate. In addition to possessing the capability to perform damage detection, the ability to monitor the level of damage over time can provide extremely useful information in assessing the operational worthiness of a structure and in determining whether the structure should be repaired or removed from service. In this work, a sequential Bayesian approach with active sensing is employed for monitoring crack growth within fatigue-loaded materials. The monitoring approach is based on predicting crack damage state dynamics and modeling crack length observations. Since fatigue loading of a structural component can change while in service, an interacting multiple model technique is employed to estimate probabilities of different loading modes and incorporate this information in the crack length estimation problem. For the observation model, features are obtained from regions of high signal energy in the time-frequency plane and modeled for each crack length damage condition. Although this observation model approach exhibits high classification accuracy, the resolution characteristics can change depending upon the extent of the damage. Therefore, several different transmission waveforms and receiver sensors are considered to create multiple modes for making observations of crack damage. Resolution characteristics of the different observation modes are assessed using a predicted mean squared error criterion and observations are obtained using the predicted, optimal observation modes based on these characteristics. Calculation of the predicted mean squared error metric can be computationally intensive, especially if performed in real time, and an approximation method is proposed. With this approach, the real-time computational burden is decreased significantly and the number of possible observation modes can be increased. Using sensor measurements from real experiments, the overall sequential Bayesian estimation approach, with the adaptive capability of varying the state dynamics and observation modes, is demonstrated for tracking crack damage.
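
A minimal bootstrap particle filter for crack-length tracking in the spirit of the approach above, assuming a simplified Paris-law-style growth model with hypothetical parameters; the interacting multiple model loading modes and the time-frequency observation model are omitted.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simplified Paris-law-style growth per load block: da = C * a^p (hypothetical values)
C, p = 0.02, 1.2
q_std, r_std = 0.01, 0.05        # process / observation noise std

def grow(a):
    return a + C * a**p

# Simulate true crack length and noisy feature-derived length observations
T, a_true = 30, [1.0]
for _ in range(T - 1):
    a_true.append(grow(a_true[-1]) + q_std * rng.standard_normal())
obs = np.array(a_true) + r_std * rng.standard_normal(T)

# Bootstrap particle filter: predict, weight by likelihood, resample
N = 2000
particles = 1.0 + 0.1 * rng.standard_normal(N)
est = []
for z in obs:
    particles = grow(particles) + q_std * rng.standard_normal(N)  # predict
    w = np.exp(-0.5 * ((z - particles) / r_std) ** 2)             # weight
    w /= w.sum()
    est.append(np.dot(w, particles))                              # MMSE estimate
    particles = particles[rng.choice(N, N, p=w)]                  # resample

print("final true crack length:", round(a_true[-1], 3))
print("final PF estimate:      ", round(est[-1], 3))
```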

Contributors: Huff, Daniel W (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Kovvali, Narayan (Committee member) / Chakrabarti, Chaitali (Committee member) / Chattopadhyay, Aditi (Committee member) / Arizona State University (Publisher)
Created: 2013

Description

In this thesis, an adaptive waveform selection technique for dynamic target tracking under low signal-to-noise ratio (SNR) conditions is investigated. The approach is integrated with a track-before-detect (TBD) algorithm and uses delay-Doppler matched filter (MF) outputs as raw measurements without setting any threshold for extracting delay-Doppler estimates. The particle filter (PF) Bayesian sequential estimation approach is used with the TBD algorithm (PF-TBD) to estimate the dynamic target state. A waveform-agile TBD technique is proposed that integrates the PF-TBD with a waveform selection technique. The new approach predicts the waveform to transmit at the next time step by minimizing the predicted mean-squared error (MSE). As a result, the radar parameters are adaptively and optimally selected for superior performance. Building on previous work, this thesis highlights the applicability of the predicted covariance matrix to the lower-SNR waveform-agile tracking problem. The adaptive waveform selection algorithm's MSE performance was compared against fixed waveforms using Monte Carlo simulations. It was found that the adaptive approach performed at least as well as the best fixed waveform when focusing on estimating only position or only velocity. When these estimates were weighted by different amounts, the adaptive performance exceeded that of all fixed waveforms. This improvement demonstrates the utility of the predicted covariance in waveform design under low SNR conditions that are poorly handled by more traditional tracking algorithms.
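
The following is a linear-Gaussian stand-in for the predicted-MSE waveform selection step: each candidate waveform is summarized by an assumed delay-Doppler measurement covariance, and the waveform minimizing the weighted trace of the one-step-lookahead posterior covariance is transmitted next. A Kalman update replaces the PF-TBD machinery, and the waveform names and covariance values are illustrative.

```python
import numpy as np

# Constant-velocity state [position, velocity]; MF gives delay/Doppler measurements
dt = 1.0
F = np.array([[1, dt], [0, 1]])
Q = 0.01 * np.eye(2)
H = np.eye(2)

# Candidate waveforms -> assumed delay/Doppler measurement covariances:
# a short pulse resolves delay (position) well, a long pulse resolves Doppler (velocity)
waveforms = {
    "short pulse": np.diag([0.1, 2.0]),
    "long pulse":  np.diag([2.0, 0.1]),
    "chirp":       np.diag([0.5, 0.5]),
}

def predicted_mse(P, R, weights):
    """Weighted trace of the posterior covariance if this waveform is
    transmitted next (one-step Riccati lookahead)."""
    P_pred = F @ P @ F.T + Q
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    P_post = (np.eye(2) - K @ H) @ P_pred
    return np.trace(np.diag(weights) @ P_post)

P = np.diag([1.0, 1.0])            # current state covariance
weights = np.array([1.0, 0.2])     # emphasize position accuracy
scores = {name: predicted_mse(P, R, weights) for name, R in waveforms.items()}
best = min(scores, key=scores.get)
print({k: round(v, 4) for k, v in scores.items()}, "->", best)
```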

Contributors: Piwowarski, Ryan (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Kovvali, Narayan (Committee member) / Arizona State University (Publisher)
Created: 2011

Description

Super-Resolution (SR) techniques have been widely developed to increase image resolution by fusing several Low-Resolution (LR) images of the same scene to overcome sensor hardware limitations and reduce media impairments in a cost-effective manner. When choosing a solution for the SR problem, there is always a trade-off between computational efficiency and High-Resolution (HR) image quality. Existing SR approaches suffer from extremely high computational requirements due to the high number of unknowns to be estimated in the solution of the SR inverse problem. This thesis proposes efficient iterative SR techniques based on Visual Attention (VA) and perceptual modeling of the human visual system. In the first part of this thesis, an efficient ATtentive-SELective Perceptual-based (AT-SELP) SR framework is presented, where only a subset of perceptually significant active pixels is selected for processing by the SR algorithm based on a local contrast sensitivity threshold model and a proposed low-complexity saliency detector. The proposed saliency detector utilizes a probability-of-detection rule inspired by concepts of luminance masking and visual attention. The second part of this thesis further improves the efficiency of selective SR approaches by presenting an ATtentive (AT) SR framework that is completely driven by VA region detectors. Additionally, different VA techniques that combine several low-level features, such as center-surround differences in intensity and orientation, patch luminance and contrast, bandpass outputs of patch luminance and contrast, and difference of Gaussians of luminance intensity, are integrated and analyzed to illustrate the effectiveness of the proposed selective SR frameworks. The proposed AT-SELP SR and AT-SR frameworks proved to be flexible by integrating a Maximum A Posteriori (MAP)-based SR algorithm as well as a fast two-stage Fusion-Restoration (FR) SR estimator. By adopting the proposed selective SR frameworks, simulation results show a significant average reduction in computational complexity with comparable visual quality in terms of quantitative metrics such as PSNR, SNR, or MAE gains, as well as subjective assessment. The third part of this thesis proposes a Perceptually Weighted (PW) SR technique that incorporates unequal weighting parameters in the cost function of iterative SR problems. The proposed approach is inspired by the unequal sensitivity of the Human Visual System (HVS) to different local image features. Simulation results show enhanced reconstruction quality and faster convergence rates when the approach is applied to the MAP-based and FR-based SR schemes.
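
A toy version of the active-pixel selection idea, assuming a difference-of-Gaussians response as the saliency map and a luminance-masking-style relative-contrast threshold; the actual AT-SELP detector and its thresholds differ.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(7)

# Synthetic low-resolution frame: smooth background plus a textured patch
img = gaussian_filter(rng.random((128, 128)), 8)
img[40:80, 40:80] += 0.3 * rng.random((40, 40))

# Center-surround (difference-of-Gaussians) response as a cheap saliency map
dog = np.abs(gaussian_filter(img, 1) - gaussian_filter(img, 4))

# Luminance-masking-style rule: local contrast must exceed a fraction of
# local mean luminance to count as perceptually significant (values illustrative)
local_mean = gaussian_filter(img, 4) + 1e-6
active = dog / local_mean > 0.05

print(f"active pixels selected for SR processing: {active.mean():.1%}")
```

Only the pixels in the `active` mask would then be updated by the iterative SR solver, which is where the computational savings come from.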

Contributors: Sadaka, Nabil (Author) / Karam, Lina J (Thesis advisor) / Spanias, Andreas S (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Abousleman, Glen P (Committee member) / Goryll, Michael (Committee member) / Arizona State University (Publisher)
Created: 2011

Description

With the increased usage of green energy, the number of photovoltaic arrays used in power generation is increasing rapidly. Many of the arrays are located at remote locations where faults that occur within the array often go unnoticed and unattended for long periods of time. Technicians sent to rectify the faults have to spend a large amount of time determining the location of the fault manually. Automated monitoring systems are needed to obtain information about the performance of the array and to detect faults. Such systems must monitor the DC side of the array in addition to the AC side to identify non-catastrophic faults. This thesis focuses on two of the requirements for DC-side monitoring of an automated PV array monitoring system. The first part of the thesis quantifies the advantages of obtaining higher-resolution data from a PV array for the detection of faults. Data for the monitoring system can be gathered for the array as a whole or from additional points within the array, such as individual modules and the ends of strings. The fault detection and false positive rates are compared for array-level, string-level, and module-level PV data. Monte Carlo simulations are performed using PV array models developed in Simulink and MATLAB for fault and no-fault cases. The second part describes a graphical user interface (GUI) that can be used to visualize the PV array using module-level monitoring system information. A demonstration GUI is built in MATLAB using data obtained from a PV array test facility in Tempe, AZ. Visualizations are implemented to display information about the array as a whole or about individual modules, and to locate faults in the array.
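
The Monte Carlo sketch below illustrates why finer-grained DC-side data helps: the same single-module fault is much easier to detect from module-level readings than from string- or array-level sums, since the fault signature stays fixed while the aggregated sensor noise grows. The array layout, power levels, and noise figures are hypothetical, and every trial contains exactly one fault.

```python
import numpy as np

rng = np.random.default_rng(8)

n_strings, n_modules = 8, 12           # illustrative array layout
p_nom, meas_noise = 200.0, 4.0         # nominal module power (W), sensor noise std (W)
fault_loss = 30.0                      # power lost by the single faulted module (W)
trials, k = 20_000, 3.0                # Monte Carlo runs, detection threshold (sigmas)

hits = {"module": 0, "string": 0, "array": 0}
for _ in range(trials):
    p = np.full((n_strings, n_modules), p_nom)
    p[rng.integers(n_strings), rng.integers(n_modules)] -= fault_loss

    m = p + meas_noise * rng.standard_normal(p.shape)   # module-level readings
    s = m.sum(axis=1)                                   # string-level readings
    a = m.sum()                                         # array-level reading

    # Flag a fault when a reading falls > k noise-sigmas below its expected
    # value; the noise sigma grows with the sqrt of the number of modules summed
    hits["module"] += (m < p_nom - k * meas_noise).any()
    hits["string"] += (s < n_modules * p_nom
                       - k * meas_noise * np.sqrt(n_modules)).any()
    hits["array"] += a < (n_strings * n_modules * p_nom
                          - k * meas_noise * np.sqrt(n_strings * n_modules))

for lvl, h in hits.items():
    print(f"{lvl:>6}-level detection rate: {h / trials:.3f}")
```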

Contributors: Krishnan, Venkatachalam (Author) / Tepedelenlioğlu, Cihan (Thesis advisor) / Spanias, Andreas (Thesis advisor) / Ayyanar, Raja (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created: 2012

Description

The multidimensional (MD) discrete Fourier transform (DFT) is a key kernel algorithm in many signal processing applications, such as radar imaging and medical imaging. Traditionally, a two-dimensional (2-D) DFT is computed using Row-Column (RC) decomposition, where one-dimensional (1-D) DFTs are computed along the rows followed by 1-D DFTs along the columns. However, architectures based on RC decomposition are not efficient for large input sizes, where the data have to be stored in external memory based on Synchronous Dynamic RAM (SDRAM). In this dissertation, an efficient architecture to implement the 2-D DFT for large-sized input data is first proposed. This architecture achieves very high throughput by exploiting the inherent parallelism due to a novel 2-D decomposition and by utilizing the row-wise burst access pattern of the SDRAM external memory. In addition, an automatic IP generator is provided for mapping this architecture onto a reconfigurable platform of Xilinx Virtex-5 devices. For a 2048x2048 input size, the proposed architecture is 1.96 times faster than an RC decomposition based implementation under the same memory constraints, and also outperforms other existing implementations. While the proposed 2-D DFT IP can achieve high performance, its output is bit-reversed. For systems where the output is required to be in natural order, use of this DFT IP would result in timing overhead. To solve this problem, a new bandwidth-efficient MD DFT IP that is transpose-free and produces outputs in natural order is proposed. It is based on a novel decomposition algorithm that takes into account the output order, FPGA resources, and the characteristics of off-chip memory access. An IP generator is designed and integrated into an in-house FPGA development platform, AlgoFLEX, for easy verification and fast integration. The corresponding 2-D and 3-D DFT architectures are ported onto the BEE3 board and their performance measured and analyzed. The results show that the architecture can maintain the maximum memory bandwidth throughout the whole procedure while avoiding the matrix transpose operations used in most other MD DFT implementations. The proposed architecture has also been ported onto the Xilinx ML605 board. When clocked at 100 MHz, 2048x2048 images with complex single-precision samples can be processed in less than 27 ms. Finally, transpose-free imaging flows for the range-Doppler algorithm (RDA) and chirp-scaling algorithm (CSA) in SAR imaging are proposed. The corresponding implementations take advantage of the memory access patterns designed for the MD DFT IP and have superior timing performance. The RDA and CSA flows are mapped onto a unified architecture which is implemented on an FPGA platform. When clocked at 100 MHz, the RDA and CSA computations with data size 4096x4096 can be completed in 323 ms and 162 ms, respectively. This implementation outperforms existing SAR image accelerators based on FPGAs and GPUs.
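
For reference, the Row-Column decomposition that the proposed architecture improves upon can be stated in a few lines; note that the second pass strides across rows of the row-major array, which is precisely the access pattern that defeats SDRAM burst reads for large inputs.

```python
import numpy as np

def dft2_row_column(x):
    """2-D DFT via Row-Column decomposition: 1-D DFTs along each row,
    then 1-D DFTs along each column of the intermediate result.
    The column pass is strided in row-major storage, which is why this
    scheme maps poorly onto SDRAM burst access for large inputs."""
    rows_done = np.fft.fft(x, axis=1)     # 1-D DFTs along rows (contiguous)
    return np.fft.fft(rows_done, axis=0)  # 1-D DFTs along columns (strided)

rng = np.random.default_rng(9)
x = rng.standard_normal((256, 256)) + 1j * rng.standard_normal((256, 256))
assert np.allclose(dft2_row_column(x), np.fft.fft2(x))
print("RC decomposition matches fft2")
```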

Contributors: Yu, Chi-Li (Author) / Chakrabarti, Chaitali (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Karam, Lina (Committee member) / Cao, Yu (Committee member) / Arizona State University (Publisher)
Created: 2012

Description

Research on developing new algorithms to improve information on brain functionality and structure is ongoing. Studying neural activity through dipole source localization with electroencephalography (EEG) and magnetoencephalography (MEG) sensor measurements can lead to the diagnosis and treatment of a brain disorder and can also identify the area of the brain from which the disorder originated. Designing advanced localization algorithms that can adapt to environmental changes represents a significant shift from manual diagnosis, which is based on the knowledge and observation of the doctor, to an adaptive and improved brain disorder diagnosis, as these algorithms can track activity that might not be noticed by the human eye. An important consideration for these localization algorithms, however, is to minimize the overall power consumption in order to improve the study and treatment of brain disorders. This thesis considers the problem of estimating the dynamic parameters of neural dipole sources while minimizing the system's overall power consumption; this is achieved by minimizing the number of EEG/MEG measurement sensors without a loss in estimation accuracy. As the EEG/MEG measurement models are non-linearly related to the dipole source locations and moments, these dynamic parameters can be estimated using sequential Monte Carlo methods such as particle filtering. Due to the large number of sensors required to record EEG/MEG measurements for use in the particle filter over long recording periods, a large amount of power is required for storage and transmission. In order to reduce the overall power consumption, two methods are proposed. The first method uses the predicted mean squared estimation error as the performance metric under a maximum power consumption constraint. The second method uses as its performance metric the distance between the sensor locations and the dipole source location estimate at the previous time step; this sensor scheduling scheme maximizes the overall signal-to-noise ratio. The performance of both methods is demonstrated using simulated data, and both methods provide good estimation results with a significant reduction in the number of activated sensors at each time step.
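
A sketch of the second scheduling rule under a hypothetical sensor layout: only the sensors nearest the previous dipole location estimate are activated, which roughly maximizes the received SNR while the remaining sensors stay powered down. The geometry and counts are illustrative.

```python
import numpy as np

rng = np.random.default_rng(10)

# Hypothetical sensor layout and a previous dipole location estimate
n_sensors, n_active = 64, 12
sensors = rng.standard_normal((n_sensors, 3))
sensors /= np.linalg.norm(sensors, axis=1, keepdims=True)   # unit-radius head model
dipole_prev = np.array([0.2, -0.1, 0.6])                    # previous PF estimate

def schedule_sensors(sensors, dipole_est, n_active):
    """Activate only the n_active sensors nearest the previous dipole
    location estimate; nearby sensors see the strongest field, so this
    roughly maximizes SNR while the rest stay powered down."""
    d = np.linalg.norm(sensors - dipole_est, axis=1)
    return np.argsort(d)[:n_active]

active = schedule_sensors(sensors, dipole_prev, n_active)
print(f"activated {active.size}/{n_sensors} sensors:", np.sort(active))
```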

Contributors: Michael, Stefanos (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Kovvali, Narayan (Committee member) / Arizona State University (Publisher)
Created: 2012