Matching Items (6)
Description

Tracking a time-varying number of targets is a challenging dynamic state estimation problem whose complexity is intensified under low signal-to-noise ratio (SNR) or high clutter conditions. This is important, for example, when tracking multiple, closely spaced targets moving in the same direction such as a convoy of low observable vehicles moving through a forest or multiple targets moving in a crisscross pattern. The SNR in these applications is usually low as the reflected signals from the targets are weak or the noise level is very high.

An effective approach for detecting and tracking a single target under low SNR conditions is the track-before-detect filter (TBDF), which uses unthresholded measurements. However, the TBDF has only been used to track a small, fixed number of targets at low SNR. This work proposes a new multiple target TBDF approach to track a dynamically varying number of targets under the recursive Bayesian framework. For a given maximum number of targets, the state estimates are obtained by estimating the joint multiple target posterior probability density function under all possible target existence combinations. Estimators for the corresponding target existence combination probabilities and the individual target existence probabilities are also derived. A feasible sequential Monte Carlo (SMC) based implementation algorithm is proposed. The approximation accuracy of the SMC method with a reduced number of particles is improved by an efficient proposal density function that partitions the multiple target space into single target spaces.
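As a rough, hedged illustration of the sequential Monte Carlo machinery such an implementation builds on, the sketch below runs a minimal bootstrap (SIR) particle filter for a single target with a constant-velocity model and noisy position measurements; it is not the multiple target TBDF itself, and the model, noise levels, and particle count are illustrative assumptions.

```python
# Minimal bootstrap (SIR) particle filter: propagate particles through the
# dynamics, weight them by the measurement likelihood, and resample.
# Single 1-D constant-velocity target with noisy position measurements;
# all parameters below are illustrative assumptions, not the TBDF setup.
import numpy as np

rng = np.random.default_rng(0)
T, Np = 50, 2000                        # time steps, number of particles
dt, q, r = 1.0, 0.05, 0.5               # sample time, process noise, measurement noise
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity state transition

# Simulate a ground-truth trajectory and noisy position measurements.
x_true = np.zeros((T, 2))
x_true[0] = [0.0, 0.3]
for k in range(1, T):
    x_true[k] = F @ x_true[k - 1] + rng.normal(0.0, q, 2)
y = x_true[:, 0] + rng.normal(0.0, r, T)

# SIR filtering loop.
particles = rng.normal([0.0, 0.3], [1.0, 0.2], (Np, 2))
estimate = np.zeros(T)
for k in range(T):
    particles = particles @ F.T + rng.normal(0.0, q, (Np, 2))    # predict
    w = np.exp(-0.5 * ((y[k] - particles[:, 0]) / r) ** 2)       # likelihood weights
    w /= w.sum()
    estimate[k] = w @ particles[:, 0]                            # weighted mean estimate
    particles = particles[rng.choice(Np, Np, p=w)]               # resample

print("mean position error:", np.mean(np.abs(estimate - x_true[:, 0])))
```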

The proposed multiple target TBDF method is extended to track targets in sea clutter using highly time-varying radar measurements. A generalized likelihood function for closely spaced multiple targets in compound Gaussian sea clutter is derived together with the maximum likelihood estimate of the model parameters using an iterative fixed point algorithm. The TBDF performance is improved by proposing a computationally feasible method to estimate the space-time covariance matrix of rapidly-varying sea clutter. The method applies the Kronecker product approximation to the covariance matrix and uses particle filtering to solve the resulting dynamic state space model formulation.
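As a hedged sketch of the Kronecker product idea, the snippet below computes a nearest Kronecker product approximation of a space-time covariance matrix using the standard rearrangement-plus-SVD construction; the dimensions and the synthetic covariance are assumptions for illustration, and this is not the dissertation's dynamic estimation scheme.

```python
# Nearest Kronecker product approximation of an NM x NM covariance matrix R,
# i.e. R ~ kron(A, B) with A of size N x N (slow-time factor) and B of size
# M x M (fast-time factor). Uses the classical rearrangement trick: the best
# Kronecker factors come from the best rank-1 approximation of a reshuffled
# version of R. Sizes below are illustrative.
import numpy as np

def nearest_kronecker(R, N, M):
    blocks = R.reshape(N, M, N, M)              # blocks[i, :, j, :] = R[iM:(i+1)M, jM:(j+1)M]
    R_tilde = blocks.transpose(0, 2, 1, 3).reshape(N * N, M * M)
    U, s, Vt = np.linalg.svd(R_tilde, full_matrices=False)
    A = np.sqrt(s[0]) * U[:, 0].reshape(N, N)   # note: factors need not be PSD individually
    B = np.sqrt(s[0]) * Vt[0, :].reshape(M, M)
    return A, B

# Example on a synthetic 4 x 6 = 24-dimensional sample covariance.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 24))
R = X.T @ X / 500
A, B = nearest_kronecker(R, 4, 6)
print("relative error:", np.linalg.norm(R - np.kron(A, B)) / np.linalg.norm(R))
```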
Contributors: Ebenezer, Samuel P (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Bliss, Daniel (Committee member) / Kovvali, Narayan (Committee member) / Arizona State University (Publisher)
Created: 2015
Description

Information divergence functions, such as the Kullback-Leibler divergence or the Hellinger distance, play a critical role in statistical signal processing and information theory; however, estimating them can be challenging. Most often, parametric assumptions are made about the two distributions to estimate the divergence of interest. In cases where no parametric model fits the data, non-parametric density estimation is used. In statistical signal processing applications, Gaussianity is usually assumed since closed-form expressions for common divergence measures have been derived for this family of distributions. Parametric assumptions are preferred when it is known that the data follows the model; however, this is rarely the case in real-world scenarios. Non-parametric density estimators are characterized by a very large number of parameters that have to be tuned with costly cross-validation. In this dissertation we focus on a specific family of non-parametric estimators, called direct estimators, that bypass density estimation completely and directly estimate the quantity of interest from the data. We introduce a new divergence measure, the $D_p$-divergence, that can be estimated directly from samples without parametric assumptions on the distribution. We show that the $D_p$-divergence bounds the binary, cross-domain, and multi-class Bayes error rates and, in certain cases, provides provably tighter bounds than the Hellinger divergence. In addition, we also propose a new methodology that allows the experimenter to construct direct estimators for existing divergence measures or to construct new divergence measures with custom properties that are tailored to the application. To examine the practical efficacy of these new methods, we evaluate them in a statistical learning framework on a series of real-world data science problems involving speech-based monitoring of neuro-motor disorders.
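As a hedged illustration of what a direct, graph-based estimator can look like, the sketch below forms a divergence estimate from two sample sets by counting cross-distribution edges in a Euclidean minimum spanning tree over the pooled data (the Friedman-Rafsky statistic); the normalization shown and the synthetic data are assumptions for illustration rather than the dissertation's exact estimator.

```python
# Direct, graph-based divergence estimate: build a Euclidean minimum
# spanning tree over the pooled samples and count edges connecting points
# from different distributions. No density estimation is performed.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import cdist

def mst_divergence(X, Y):
    """Divergence estimate from samples X ~ f and Y ~ g (illustrative)."""
    Z = np.vstack([X, Y])                          # pooled sample
    labels = np.r_[np.zeros(len(X)), np.ones(len(Y))]
    mst = minimum_spanning_tree(cdist(Z, Z))       # MST of the complete graph
    rows, cols = mst.nonzero()
    cross = np.sum(labels[rows] != labels[cols])   # dichotomous (cross) edges
    m, n = len(X), len(Y)
    return 1.0 - cross * (m + n) / (2.0 * m * n)

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(500, 2))
Y = rng.normal(2.0, 1.0, size=(500, 2))
print(mst_divergence(X, Y))   # larger for well-separated distributions
```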
Contributors: Wisler, Alan (Author) / Berisha, Visar (Thesis advisor) / Spanias, Andreas (Thesis advisor) / Liss, Julie (Committee member) / Bliss, Daniel (Committee member) / Arizona State University (Publisher)
Created: 2017
Description

Distributed wireless sensor networks (WSNs) have recently attracted research interest due to their advantages, such as low power consumption, scalability, and robustness to link failures. In sensor networks with no fusion center, consensus is a process where all the sensors in the network achieve global agreement using only local transmissions. In this dissertation, several consensus and consensus-based algorithms in WSNs are studied.

First, a distributed consensus algorithm for estimating the maximum and minimum of the initial measurements in a sensor network in the presence of communication noise is proposed. In the proposed algorithm, a soft-max approximation is used together with a non-linear average consensus algorithm. A design parameter controls the trade-off between the soft-max error and the convergence speed, and an analysis of this trade-off gives guidelines on how to choose the design parameter for the max estimate. It is also shown that if some prior knowledge of the initial measurements is available, the consensus process can be accelerated.
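As a hedged sketch of the soft-max idea, the snippet below maps each measurement through exp(beta*x), runs a plain linear average consensus with Metropolis weights, and recovers a max estimate whose bias is bounded by log(N)/beta; the linear, noise-free consensus step is an assumption for illustration, whereas the dissertation uses a non-linear consensus algorithm that is robust to communication noise.

```python
# Soft-max based distributed max estimation (illustrative): each node maps
# its measurement x_i to exp(beta * x_i), the network runs average consensus
# with Metropolis-Hastings weights, and every node computes
# (1/beta) * log(N * average), which overestimates the true max by at most
# log(N) / beta. Graph, beta, and iteration count are illustrative.
import numpy as np
import networkx as nx

def softmax_consensus_max(G, x, beta=20.0, iters=300):
    nodes = list(G.nodes())
    N = len(nodes)
    idx = {v: k for k, v in enumerate(nodes)}
    z = np.exp(beta * np.asarray(x, dtype=float))   # local nonlinear mapping
    W = np.zeros((N, N))                            # Metropolis-Hastings weights
    for u, v in G.edges():
        w = 1.0 / (1 + max(G.degree(u), G.degree(v)))
        W[idx[u], idx[v]] = W[idx[v], idx[u]] = w
    np.fill_diagonal(W, 1.0 - W.sum(axis=1))
    for _ in range(iters):
        z = W @ z                                   # only neighbor-to-neighbor averaging
    return np.log(N * z) / beta                     # per-node estimate of max_i x_i

G = nx.connected_watts_strogatz_graph(20, 4, 0.3, seed=1)
x = np.random.default_rng(1).uniform(0.0, 1.0, 20)
print("true max:", x.max().round(3), "estimates:", softmax_consensus_max(G, x)[:3].round(3))
```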

Second, a distributed system size estimation algorithm is proposed. The proposed algorithm is based on distributed average consensus and L2 norm estimation. Different sources of error are explicitly discussed, and the distribution of the final estimate is derived. The Cramér-Rao bounds (CRBs) for the system size estimator under average and max consensus strategies are also considered, and different consensus-based system size estimation approaches are compared.
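As a heavily hedged illustration of one standard consensus-based route to the network size: if every node draws an independent N(0, 1) value and the network averages them by consensus, the agreement value has variance 1/N, so repeated runs and the squared L2 norm of the outcomes yield an estimate of N. The sketch below idealizes the consensus step and is not claimed to be the dissertation's estimator.

```python
# Toy size estimation from repeated average consensus (idealized, noise-free
# consensus): in each trial the nodes agree exactly on the average of their
# independent N(0, 1) initial values, which is N(0, 1/N); the ML estimate of
# N then follows from the squared L2 norm of the per-trial outcomes.
import numpy as np

rng = np.random.default_rng(2)
N, trials = 40, 200
averages = rng.normal(0.0, 1.0, (trials, N)).mean(axis=1)   # consensus outcome per trial
N_hat = trials / np.sum(averages ** 2)                      # since Var(average) = 1/N
print("true N:", N, "estimate:", round(N_hat, 1))
```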

Then, a consensus-based network center and radius estimation algorithm is described. The center localization problem is formulated as a convex optimization problem in summation form by using a soft-max approximation with exponential functions. Distributed optimization methods such as stochastic gradient descent and diffusion adaptation are used to estimate the center, and max consensus is then used to compute the radius of the network area.
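As a hedged sketch of the soft-max formulation for the center, the snippet below minimizes a log-sum-exp of squared node distances (a smooth surrogate for the maximum distance) by plain gradient descent, standing in for the distributed stochastic gradient and diffusion implementations; the radius is then the largest remaining distance, which a network would obtain via max consensus. Node layout, beta, and step size are illustrative assumptions.

```python
# Soft-max (log-sum-exp) surrogate for the minimax network-center problem:
# f(c) = (1/beta) * log(sum_i exp(beta * ||c - p_i||^2)) approximates the
# maximum squared distance, and its gradient is a softmax-weighted sum of
# (c - p_i) terms. Centralized gradient descent is used here purely to
# illustrate the objective, not the distributed solver.
import numpy as np

rng = np.random.default_rng(3)
P = rng.uniform(0.0, 10.0, (50, 2))             # node positions
beta, step = 2.0, 0.05
c = P.mean(axis=0)                              # start at the centroid
for _ in range(500):
    d2 = np.sum((P - c) ** 2, axis=1)           # squared distances to all nodes
    w = np.exp(beta * (d2 - d2.max()))          # numerically stable soft-max weights
    w /= w.sum()
    c -= step * 2.0 * (w @ (c - P))             # gradient of the surrogate objective
radius = np.sqrt(np.max(np.sum((P - c) ** 2, axis=1)))   # max distance = network radius
print("center:", c.round(2), "radius:", round(radius, 2))
```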

Finally, two average consensus-based distributed estimation algorithms are introduced: a distributed degree distribution estimation algorithm and an algorithm for tracking the dynamics of a desired parameter. Simulation results for all proposed algorithms are provided.
Contributors: Zhang, Sai (Electrical engineer) (Author) / Tepedelenlioğlu, Cihan (Thesis advisor) / Spanias, Andreas (Thesis advisor) / Tsakalis, Kostas (Committee member) / Bliss, Daniel (Committee member) / Arizona State University (Publisher)
Created: 2017
Description

The radar performance of detecting a target and estimating its parameters can deteriorate rapidly in the presence of high clutter. This is because radar measurements due to clutter returns can be falsely detected as if originating from the actual target. Various data association methods and multiple hypothesis filtering approaches have been considered to solve this problem. Such methods, however, can be computationally intensive for real-time radar processing. This work proposes a new approach that is based on the unsupervised clustering of target and clutter detections before target tracking using particle filtering. In particular, Gaussian mixture modeling is first used to separate detections into two distinct Gaussian mixtures. Using eigenvector analysis, the eccentricities of the covariance matrices of the Gaussian mixtures are computed and compared to threshold values that are obtained a priori. The thresholding allows only target detections to be used for target tracking. Simulations demonstrate the performance of the new algorithm and compare it with using k-means for clustering instead of Gaussian mixture modeling.
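As a hedged sketch of the clustering step, the snippet below fits a two-component Gaussian mixture to synthetic detections, computes each component's covariance eccentricity from its eigenvalues, and keeps the more circular component as the target cluster; the synthetic data, the specific eccentricity formula, and the keep-the-most-circular rule are assumptions standing in for the a priori thresholds described above.

```python
# Gaussian mixture clustering of detections followed by eccentricity
# screening of the component covariances (eigenvalue-based), so that only
# the compact, target-like cluster is passed on to the tracker. Synthetic
# detections and the selection rule are illustrative assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
target = rng.multivariate_normal([5.0, 5.0], [[0.3, 0.0], [0.0, 0.3]], 60)    # compact returns
clutter = rng.multivariate_normal([5.0, 5.0], [[9.0, 0.0], [0.0, 1.0]], 300)  # elongated clutter
Z = np.vstack([target, clutter])

gmm = GaussianMixture(n_components=2, random_state=0).fit(Z)
labels = gmm.predict(Z)

ecc = np.empty(2)
for i, C in enumerate(gmm.covariances_):
    lam = np.sort(np.linalg.eigvalsh(C))        # covariance eigenvalues, ascending
    ecc[i] = np.sqrt(1.0 - lam[0] / lam[1])     # 0 = circular, near 1 = elongated
keep = int(np.argmin(ecc))                      # assumed rule: keep the most circular cluster
print("eccentricities:", ecc.round(2), "| detections kept:", int(np.sum(labels == keep)))
```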
Contributors: Freeman, Matthew Gregory (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Bliss, Daniel (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Arizona State University (Publisher)
Created: 2016
Description

The Discrete Fourier Transform (DFT) is a mathematical operation utilized in various signal processing applications, including astronomy and digital communications (satellite, cellphone, radar, etc.), to separate signals at different frequencies. Performing the DFT on a signal by itself suffers from inter-channel leakage. For an ultrasensitive application like radio astronomy, it is important to minimize frequency sidelobes. To achieve this, the Polyphase Filterbank (PFB) technique is used, which modifies the bin response of the DFT toward a rectangular function and suppresses out-of-band crosstalk. This helps achieve the Signal-to-Noise Ratio (SNR) required for astronomy measurements. In practice, a 2^N-point DFT can be efficiently implemented on Digital Signal Processing (DSP) hardware by the popular Fast Fourier Transform (FFT) algorithm. Hence, 2^N-tap filters are commonly used in the filterbank stage before the FFT. At present, Field Programmable Gate Arrays (FPGAs) and Application Specific Integrated Circuits (ASICs) from different vendors (e.g., Xilinx, Altera, Microsemi) are available which offer high performance. The Xilinx Radio-Frequency System-on-Chip (RFSoC) is the latest such platform, offering radio-frequency (RF) signal capture and generation capability on the same chip. This thesis describes the characterization of the Analog-to-Digital Converter (ADC) available on the Xilinx ZCU111 RFSoC platform, the detailed design steps of a critically-sampled PFB, and the testing and debugging of a Weighted OverLap and Add (WOLA) PFB to examine the feasibility of implementation on custom ASICs for future space missions. The design and testing of an analog Printed Circuit Board (PCB) circuit for biasing cryogenic detectors and readout components are also presented here.
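As a hedged sketch of the critically-sampled PFB structure, the snippet below weights blocks of input samples with a polyphase prototype filter, sums across the taps, and applies an M-point FFT; the window design, M = 16 channels, and P = 4 taps per branch are illustrative assumptions rather than the thesis design.

```python
# Critically sampled polyphase filterbank channelizer (illustrative sizes):
# the prototype low-pass filter of length M*P is split into P polyphase
# branches of M taps, each input block of M*P samples is weighted and summed
# across the branches (weighted overlap-add), then an M-point FFT forms the
# channel outputs.
import numpy as np

def critically_sampled_pfb(x, M=16, P=4):
    L = M * P
    proto = np.sinc(np.arange(L) / M - P / 2.0) * np.hamming(L)   # prototype low-pass
    h = proto.reshape(P, M)                                       # polyphase branches
    n_frames = len(x) // M - P + 1
    out = np.empty((n_frames, M), dtype=complex)
    for k in range(n_frames):
        block = x[k * M:(k + P) * M].reshape(P, M)
        out[k] = np.fft.fft((block * h).sum(axis=0))              # weight, sum taps, FFT
    return out

# A tone near channel 3 concentrates there with much lower sidelobe leakage
# than a plain length-M FFT would give.
n = np.arange(4096)
x = np.cos(2.0 * np.pi * 3.1 / 16.0 * n)
S = critically_sampled_pfb(x)
print(np.abs(S).mean(axis=0).round(1))
```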
Contributors: Biswas, Raj (Author) / Mauskopf, Philip (Thesis advisor) / Bliss, Daniel (Thesis advisor) / Hooks, Tracee J (Committee member) / Groppi, Christopher (Committee member) / Zeinolabedinzadeh, Saeed (Committee member) / Arizona State University (Publisher)
Created: 2022
Description

The power of science lies in its ability to infer and predict the existence of objects from which no direct information can be obtained experimentally or observationally. A well known example is to ascertain the existence of black holes of various masses in different parts of the universe from indirect evidence, such as X-ray emissions. In the field of complex networks, the problem of detecting hidden nodes can be stated as follows. Consider a network whose topology is completely unknown but whose nodes consist of two types: one accessible and another inaccessible from the outside world. The accessible nodes can be observed or monitored, and it is assumed that time series are available from each node in this group. The inaccessible nodes are shielded from the outside and are essentially "hidden." The question is: based solely on the available time series from the accessible nodes, can the existence and locations of the hidden nodes be inferred? A completely data-driven, compressive-sensing based method is developed to address this issue by utilizing complex weighted networks of nonlinear oscillators, evolutionary-game networks, and geospatial networks.
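As a heavily hedged toy illustration of compressive-sensing-style reconstruction from node time series, the snippet below recovers a sparse linear coupling matrix row by row with an L1-penalized regression; the linear dynamics, the sklearn Lasso solver, and all parameters are assumptions for illustration, since the dissertation treats nonlinear oscillator, evolutionary-game, and geospatial networks, with hidden nodes flagged by anomalies in the reconstruction.

```python
# Toy sparse network reconstruction: simulate x(t) = A x(t-1) + noise with a
# sparse coupling matrix A, then recover each row of A from the observed
# time series using Lasso (L1) regression, the workhorse of
# compressive-sensing-style reconstruction. Entirely illustrative.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
N, T = 20, 400
A = np.where(rng.random((N, N)) < 0.15, rng.normal(0.0, 1.0, (N, N)), 0.0)  # sparse coupling
A *= 0.9 / np.max(np.abs(np.linalg.eigvals(A)))      # rescale for stable dynamics

X = np.zeros((T, N))
X[0] = rng.normal(size=N)
for t in range(1, T):
    X[t] = X[t - 1] @ A.T + rng.normal(0.0, 0.5, N)  # noise-driven linear network

A_hat = np.zeros((N, N))
for i in range(N):                                   # one sparse regression per node
    model = Lasso(alpha=1e-2, fit_intercept=False, max_iter=10000)
    model.fit(X[:-1], X[1:, i])
    A_hat[i] = model.coef_

links_true = np.abs(A) > 0.05
links_hat = np.abs(A_hat) > 0.05
print("link recovery accuracy:", np.mean(links_true == links_hat).round(3))
```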

Both microbes and multicellular organisms actively regulate their cell fate determination to cope with changing environments or to ensure proper development. Here, synthetic biology approaches are used to engineer bistable gene networks to demonstrate that stochastic and permanent cell fate determination can be achieved through initializing gene regulatory networks (GRNs) at the boundary between dynamic attractors. This is experimentally realized by linking a synthetic GRN to a natural output of galactose metabolism regulation in yeast. Combining mathematical modeling and flow cytometry, the engineered systems are shown to be bistable, and inherent gene expression stochasticity is shown not to induce spontaneous state transitioning at steady state. By interfacing rationally designed synthetic GRNs with background gene regulation mechanisms, this work investigates intricate properties of networks that illuminate possible regulatory mechanisms for cell differentiation and development that can be initiated from points of instability.
Contributors: Su, Ri-Qi (Author) / Lai, Ying-Cheng (Thesis advisor) / Wang, Xiao (Thesis advisor) / Bliss, Daniel (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Arizona State University (Publisher)
Created: 2015