This collection includes most of the ASU Theses and Dissertations from 2011 to present. ASU Theses and Dissertations are available in downloadable PDF format; however, a small percentage of items are under embargo. Information about each dissertation or thesis includes the degree, committee members, an abstract, and any supporting data or media.

In addition to the electronic theses in the ASU Digital Repository, ASU Theses and Dissertations can also be found in the ASU Library Catalog.

Dissertations and Theses granted by Arizona State University are archived and made available through a joint effort of the ASU Graduate College and the ASU Libraries. For more information or questions about this collection, visit the Digital Repository ETD Library Guide or contact the ASU Graduate College at gradformat@asu.edu.

Displaying 1 - 10 of 66
Description
Doppler radar can be used to measure respiration and heart rate without contact and through obstacles. In this work, a Doppler radar architecture at 2.4 GHz and a new signal processing algorithm to estimate the respiration and heart rate are presented. The received signal is dominated by the transceiver noise, LO phase noise, and clutter, which reduce the signal-to-noise ratio of the desired signal. The proposed architecture and algorithm mitigate these issues and obtain an accurate estimate of the heart and respiration rate. A quadrature low-IF transceiver architecture is adopted to resolve the null-point problem as well as to avoid 1/f noise and DC offset due to mixer-LO coupling. An adaptive clutter cancellation algorithm is used to enhance receiver sensitivity, and a novel Pattern Search in Noise Subspace (PSNS) algorithm is used to estimate respiration and heart rate. PSNS is a modified MUSIC algorithm which uses the phase noise to enhance Doppler shift detection. A prototype system was implemented using off-the-shelf TI and RFMD transceivers, and tests were conducted with eight individuals. The measured results show accurate estimates of the cardiopulmonary signals in low-SNR conditions, and the system has been tested up to a distance of 6 meters.
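The PSNS algorithm itself is not given in the abstract; the sketch below only illustrates the generic noise-subspace (MUSIC-style) search it modifies, applied to a synthetic demodulated Doppler phase signal. The sampling rate, search bands, snapshot length, and all function names are illustrative assumptions, not the thesis's design.

```python
import numpy as np

rng = np.random.default_rng(0)

def noise_subspace_spectrum(x, fs, n_sources=4, m=128, freqs=None):
    """MUSIC-style pseudo-spectrum over candidate vital-sign rates (a sketch:
    the thesis's PSNS is a modified MUSIC search, not reproduced here).
    n_sources=4 because each real sinusoid (respiration, heartbeat) occupies
    two signal-subspace dimensions (+f and -f)."""
    if freqs is None:
        freqs = np.linspace(0.1, 2.0, 400)          # 6-120 cycles per minute
    # Sample autocorrelation matrix built from sliding snapshots of length m.
    snaps = np.array([x[i:i + m] for i in range(len(x) - m)])
    R = snaps.T @ snaps / snaps.shape[0]
    _, vecs = np.linalg.eigh(R)                     # eigenvalues ascending
    En = vecs[:, :m - n_sources]                    # noise subspace
    n = np.arange(m)
    spec = []
    for f in freqs:
        a = np.exp(2j * np.pi * f * n / fs)         # steering vector at rate f
        spec.append(1.0 / np.real(a.conj() @ En @ (En.T @ a)))
    return freqs, np.asarray(spec)

# Usage on a synthetic demodulated Doppler phase signal (all values assumed):
fs = 50.0
t = np.arange(0, 60, 1 / fs)
x = (0.8 * np.sin(2 * np.pi * 0.30 * t)             # respiration, 18/min
     + 0.1 * np.sin(2 * np.pi * 1.20 * t)           # heartbeat, 72/min
     + 0.05 * rng.standard_normal(len(t)))
f, p = noise_subspace_spectrum(x, fs)
resp = f[f < 0.5][np.argmax(p[f < 0.5])]
heart = f[f >= 0.8][np.argmax(p[f >= 0.8])]
print(f"respiration ~ {resp * 60:.0f}/min, heart ~ {heart * 60:.0f}/min")
```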
Contributors: Khunti, Hitesh Devshi (Author) / Kiaei, Sayfe (Thesis advisor) / Bakkaloglu, Bertan (Committee member) / Bliss, Daniel (Committee member) / Kitchen, Jennifer (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Electrical neural activity detection and tracking have many applications in medical research and brain computer interface technologies. In this thesis, we focus on the development of advanced signal processing algorithms to track neural activity and on the mapping of these algorithms onto hardware to enable real-time tracking. At the heart of these algorithms is particle filtering (PF), a sequential Monte Carlo technique used to estimate the unknown parameters of dynamic systems. First, we analyze the bottlenecks in existing PF algorithms, and we propose a new parallel PF (PPF) algorithm based on the independent Metropolis-Hastings (IMH) algorithm. We show that the proposed PPF-IMH algorithm improves the root mean-squared error (RMSE) estimation performance, and we demonstrate that a parallel implementation of the algorithm results in a significant reduction in inter-processor communication. We apply our implementation on a Xilinx Virtex-5 field programmable gate array (FPGA) platform to demonstrate that, for a one-dimensional problem, the PPF-IMH architecture with four processing elements and 1,000 particles can process input samples at 170 kHz while using less than 5% of the FPGA resources. We also apply the proposed PPF-IMH to waveform-agile sensing to achieve real-time tracking of dynamic targets with good RMSE tracking performance. We next integrate the PPF-IMH algorithm to track the dynamic parameters in neural sensing when the number of neural dipole sources is known. We analyze the computational complexity of a PF based method and propose the use of multiple particle filtering (MPF) to reduce the complexity. We demonstrate the improved performance of MPF using numerical simulations with both synthetic and real data. We also propose an FPGA implementation of the MPF algorithm and show that the implementation supports real-time tracking. For the more realistic scenario of automatically estimating an unknown number of time-varying neural dipole sources, we propose a new approach based on the probability hypothesis density filtering (PHDF) algorithm. The PHDF is implemented using particle filtering (PF-PHDF), and it is applied in a closed loop to first estimate the number of dipole sources and then their corresponding amplitude, location and orientation parameters. We demonstrate the improved tracking performance of the proposed PF-PHDF algorithm and map it onto a Xilinx Virtex-5 FPGA platform to show its real-time implementation potential. Finally, we propose the use of sensor scheduling and compressive sensing techniques to reduce the number of active sensors, and thus the overall power consumption, of electroencephalography (EEG) systems. We propose an efficient sensor scheduling algorithm which adaptively configures EEG sensors at each measurement time interval to reduce the number of sensors needed for accurate tracking. We combine the sensor scheduling method with PF-PHDF and implement the system on an FPGA platform to achieve real-time tracking. We also investigate the sparsity of EEG signals and integrate compressive sensing with PF to estimate neural activity. Simulation results show that both sensor scheduling and compressive sensing based methods achieve comparable tracking performance with a significantly reduced number of sensors.
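The PPF-IMH architecture is hardware-oriented and is not reproduced here; as a rough software illustration of the independent Metropolis-Hastings selection step it builds on, the sketch below runs a one-dimensional bootstrap particle filter whose resampling is replaced by an IMH chain over particle indices. The state and measurement models (random walk, Gaussian likelihood) and all names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def imh_select(particles, weights, n_iter=None):
    """Independent Metropolis-Hastings selection sketch: instead of a global
    cumulative-sum resampling step, particles are accepted or rejected along a
    running chain, which is the mechanism PPF-IMH parallelizes."""
    n = len(particles)
    n_iter = n_iter or n
    idx = rng.integers(n)                      # start the chain at a random particle
    kept = np.empty(n_iter, dtype=int)
    for k in range(n_iter):
        cand = rng.integers(n)                 # independent proposal: uniform over indices
        accept = min(1.0, weights[cand] / max(weights[idx], 1e-300))
        if rng.random() < accept:
            idx = cand
        kept[k] = idx
    return particles[kept]

def pf_step(particles, z, proc_std=0.5, meas_std=1.0):
    """One predict/update/select cycle of a 1-D bootstrap particle filter."""
    particles = particles + proc_std * rng.standard_normal(len(particles))   # predict
    weights = np.exp(-0.5 * ((z - particles) / meas_std) ** 2)               # likelihood
    return imh_select(particles, weights)

# Usage sketch: track a slowly drifting scalar state from noisy measurements.
particles = rng.standard_normal(1000)
truth = 0.0
for t in range(50):
    truth += 0.3
    z = truth + rng.standard_normal()
    particles = pf_step(particles, z)
print("estimate:", particles.mean(), "truth:", truth)
```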
Contributors: Miao, Lifeng (Author) / Chakrabarti, Chaitali (Thesis advisor) / Papandreou-Suppappola, Antonia (Thesis advisor) / Zhang, Junshan (Committee member) / Bliss, Daniel (Committee member) / Kovvali, Narayan (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Access control is necessary for information assurance in many of today's applications, such as banking and electronic health records. Access control breaches are critical security problems that can result from unintended and improper implementation of security policies. Security testing can help security architects and security engineers identify vulnerabilities early and avoid the unexpected, expensive cost of handling breaches. The process of security testing, which involves creating tests that effectively examine vulnerabilities, is a challenging task. Role-Based Access Control (RBAC) has been widely adopted to support fine-grained access control. In practice, however, its complexity, including role management, role hierarchies with hundreds of roles, and their associated privileges and users, makes systematic testing of RBAC systems crucial to ensuring security in domains ranging from cyber-infrastructure to mission-critical applications. In this thesis, we introduce i) a security testing technique for RBAC systems considering the principle of maximum privileges, the structure of the role hierarchy, and a new security test coverage criterion; ii) an MTBDD (Multi-Terminal Binary Decision Diagram) based representation of RBAC security policy, including the RHMTBDD (Role Hierarchy MTBDD), to efficiently generate effective positive and negative security test cases; and iii) a security testing framework which takes an XACML-based RBAC security policy as input, parses it into an RHMTBDD representation, and then generates positive and negative test cases. We also demonstrate the efficacy of our approach through case studies.
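The RHMTBDD construction and XACML parsing are not reproduced here; the sketch below only illustrates, on a toy role hierarchy, how positive test cases (accesses that must be granted, including inherited permissions) and negative test cases (accesses that must be denied) can be enumerated. All role and permission names are hypothetical.

```python
# Hypothetical role hierarchy: a role inherits its parents' permissions.
hierarchy = {"employee": [], "engineer": ["employee"], "admin": ["engineer"]}
direct_perms = {
    "employee": {("timesheet", "read")},
    "engineer": {("repo", "write")},
    "admin": {("user_db", "manage")},
}
all_perms = set().union(*direct_perms.values())

def effective_perms(role):
    """Permissions of a role plus everything inherited through the hierarchy."""
    perms = set(direct_perms.get(role, set()))
    for parent in hierarchy.get(role, []):
        perms |= effective_perms(parent)
    return perms

def generate_tests():
    """Positive tests: every (role, perm) that must be granted.
       Negative tests: every (role, perm) that must be denied."""
    positive, negative = [], []
    for role in hierarchy:
        granted = effective_perms(role)
        positive += [(role, p, "ALLOW") for p in granted]
        negative += [(role, p, "DENY") for p in all_perms - granted]
    return positive, negative

pos, neg = generate_tests()
print(len(pos), "positive and", len(neg), "negative test cases")
# e.g. ('admin', ('timesheet', 'read'), 'ALLOW') via inheritance,
#      ('employee', ('user_db', 'manage'), 'DENY')
```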
Contributors: Gupta, Poonam (Author) / Ahn, Gail-Joon (Thesis advisor) / Collofello, James (Committee member) / Huang, Dijiang (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
The energy consumption of data centers is increasing steadily along with the associated power density. Approximately half of this energy consumption is attributed to cooling, so reducing cooling energy along with server energy consumption is becoming imperative for greening data centers. This thesis deals with cooling energy management in data centers running data-processing frameworks. In particular, we propose thermal-aware scheduling for the MapReduce framework and its Hadoop implementation to reduce cooling energy in data centers. Data-processing frameworks run many low-priority batch processing jobs, such as background log analysis, that do not have strict completion time requirements; they can be delayed by a bounded amount of time. Cooling energy savings are possible by temporally spreading this workload and assigning it to the computing equipment that minimizes heat recirculation in the data center room and therefore the load on the cooling systems. We implement our scheme in Hadoop and perform experiments using both CPU-intensive and I/O-intensive workload benchmarks in order to evaluate its efficiency. The evaluation results highlight that our thermal-aware scheduling reduces hot spots and makes a uniform temperature distribution within the data center possible. Summarizing the contribution: we incorporate thermal awareness into the Hadoop MapReduce framework by enhancing the native scheduler to make it thermally aware, compare the Thermal Aware Scheduler (TAS) with the Hadoop scheduler (FCFS) by running PageRank and TeraSort benchmarks in the BlueTool data center of the Impact lab, and show that TAS reduces peak temperature and cooling power relative to the FCFS scheduler.
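TAS itself is a modification of Hadoop's native scheduler; the sketch below only illustrates the underlying placement idea, assigning each deferrable task to the server whose added load raises the predicted inlet temperatures the least under a heat-recirculation model. The matrix D, the temperature coefficient, and the power and redline values are all illustrative assumptions.

```python
import numpy as np

# Hypothetical heat-recirculation matrix D: D[i, j] is the fraction of heat from
# server j that recirculates to the inlet of server i (e.g., from a CFD model).
D = np.array([[0.00, 0.05, 0.10],
              [0.05, 0.00, 0.05],
              [0.10, 0.05, 0.00]])
supply_temp = 20.0          # CRAC supply temperature (deg C), assumed
power_per_task = 50.0       # extra watts a task adds to a server, assumed
load = np.zeros(3)          # current power draw per server (watts)
redline = 30.0              # maximum allowed inlet temperature (deg C), assumed

def inlet_temps(load_w):
    """Predicted inlet temperatures: supply temperature plus recirculated heat
    (0.01 degC per recirculated watt is an assumed conversion coefficient)."""
    return supply_temp + 0.01 * (D @ load_w)

def place_task():
    """Pick the server whose extra load raises the hottest inlet the least."""
    best, best_peak = None, np.inf
    for s in range(len(load)):
        trial = load.copy()
        trial[s] += power_per_task
        peak = inlet_temps(trial).max()
        if peak < best_peak and peak <= redline:
            best, best_peak = s, peak
    if best is not None:
        load[best] += power_per_task
    return best   # None means defer the task (it has loose completion deadlines)

for _ in range(10):
    print("task placed on server", place_task(),
          "inlets:", np.round(inlet_temps(load), 2))
```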
Contributors: Kole, Sayan (Author) / Gupta, Sandeep (Thesis advisor) / Huang, Dijiang (Committee member) / Varsamopoulos, Georgios (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Immunosignaturing is a medical test for assessing the health status of a patient by applying microarrays of random sequence peptides to determine the patient's immune fingerprint by associating antibodies from a biological sample to immune responses. The immunosignature measurements can potentially provide pre-symptomatic diagnosis for infectious diseases or detection of biological threats. Currently, traditional bioinformatics tools, such as data mining classification algorithms, are used to process the large amount of peptide microarray data. However, these methods generally require training data and do not adapt to changing immune conditions or additional patient information. This work proposes advanced processing techniques to improve the classification and identification of single and multiple underlying immune response states embedded in immunosignatures, making it possible to detect both known and previously unknown diseases or biothreat agents. Novel adaptive learning methodologies for unsupervised and semi-supervised clustering integrated with immunosignature feature extraction approaches are proposed. The techniques are based on extracting novel stochastic features from microarray binding intensities and use Dirichlet process Gaussian mixture models to adaptively cluster the immunosignatures in the feature space. This learning-while-clustering approach allows continuous discovery of antibody activity by adaptively detecting new disease states, with limited a priori disease or patient information. A beta process factor analysis model to determine underlying patient immune responses is also proposed to further improve the adaptive clustering performance by forming new relationships between patients and antibody activity. In order to extend the clustering methods for diagnosing multiple states in a patient, the adaptive hierarchical Dirichlet process is integrated with modified beta process factor analysis latent feature modeling to identify relationships between patients and infectious agents. The use of Bayesian nonparametric adaptive learning techniques allows for further clustering if additional patient data is received. Significant improvements in feature identification and immune response clustering are demonstrated using samples from patients with different diseases.
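The full adaptive learning pipeline (stochastic feature extraction, beta process factor analysis, hierarchical extensions) is not shown here; the sketch below only illustrates the core Dirichlet process Gaussian mixture clustering step using scikit-learn's truncated variational implementation, with synthetic placeholder features standing in for the microarray-derived features.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(1)

# Placeholder for the stochastic features extracted from peptide binding
# intensities; here: synthetic 2-D features from three unknown immune states.
features = np.vstack([
    rng.normal([0, 0], 0.3, (40, 2)),
    rng.normal([3, 1], 0.3, (40, 2)),
    rng.normal([1, 4], 0.3, (40, 2)),
])

# Truncated Dirichlet process Gaussian mixture: the number of active clusters
# (immune response states) is inferred rather than fixed in advance.
dpgmm = BayesianGaussianMixture(
    n_components=10,                                   # truncation level, not the true count
    weight_concentration_prior_type="dirichlet_process",
    covariance_type="full",
    max_iter=500,
    random_state=0,
).fit(features)

labels = dpgmm.predict(features)
print("states discovered:", len(np.unique(labels)))
# New samples (e.g., a previously unseen disease state) can be assigned or flagged
# by re-fitting or by checking their likelihood under the current mixture.
```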
Contributors: Malin, Anna (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Bliss, Daniel (Committee member) / Chakrabarti, Chaitali (Committee member) / Kovvali, Narayan (Committee member) / Lacroix, Zoé (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
This dissertation introduces stochastic ordering of instantaneous channel powers of fading channels as a general method to compare the performance of a communication system over two different channels, even when a closed-form expression for the metric may not be available. Such a comparison is with respect to a variety of performance metrics such as error rates, outage probability and ergodic capacity, which share common mathematical properties such as monotonicity, convexity or complete monotonicity. Complete monotonicity of a metric, such as the symbol error rate, in conjunction with the stochastic Laplace transform order between two fading channels implies the ordering of the two channels with respect to the metric. While it has been established previously that certain modulation schemes have convex symbol error rates, there is no study of the complete monotonicity of the same, which helps in establishing stronger channel ordering results. Toward this goal, the current research proves, for the first time, that all 1-dimensional and 2-dimensional modulations have completely monotone symbol error rates. Furthermore, it is shown that the frequently used parametric fading distributions for modeling line of sight exhibit a monotonicity in the line-of-sight parameter with respect to the Laplace transform order. While the Laplace transform order can also be used to order fading distributions based on the ergodic capacity, there exist several distributions which are not Laplace transform ordered, although they have ordered ergodic capacities. To address this gap, a new stochastic order called the ergodic capacity order has been proposed herein, which can be used to compare channels based on the ergodic capacity. Using stochastic orders, the average performance of systems involving multiple random variables is compared over two different channels. These systems include diversity combining schemes, relay networks, and signal detection over fading channels with non-Gaussian additive noise. This research also addresses the problem of unifying fading distributions. This unification is based on infinite divisibility, which subsumes almost all known fading distributions, and provides simplified expressions for performance metrics, in addition to enabling stochastic ordering.
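The Laplace transform order and its link to completely monotone metrics are standard stochastic-ordering notions; as a reference sketch (using gamma for the instantaneous channel power and g for a metric such as a symbol error rate), the defining relations can be written as:

```latex
% Laplace transform ordering of two channel power distributions
\gamma_1 \leq_{\mathrm{Lt}} \gamma_2
  \iff
\mathbb{E}\!\left[e^{-\rho\gamma_1}\right] \;\ge\; \mathbb{E}\!\left[e^{-\rho\gamma_2}\right]
  \quad \forall\, \rho > 0 .

% A metric g (e.g., an error rate as a function of instantaneous SNR) is
% completely monotone if its derivatives alternate in sign:
(-1)^{n} \, \frac{\mathrm{d}^{n} g(x)}{\mathrm{d}x^{n}} \;\ge\; 0 ,
  \qquad n = 0, 1, 2, \ldots

% For any completely monotone g, Laplace transform ordering of the channels
% implies ordering of the averaged metric:
\gamma_1 \leq_{\mathrm{Lt}} \gamma_2
  \;\Longrightarrow\;
\mathbb{E}\!\left[g(\gamma_1)\right] \;\ge\; \mathbb{E}\!\left[g(\gamma_2)\right].
```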
Contributors: Rajan, Adithya (Author) / Tepedelenlioğlu, Cihan (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Bliss, Daniel (Committee member) / Kosut, Oliver (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Neural activity tracking using electroencephalography (EEG) and magnetoencephalography (MEG) brain scanning methods has been widely used in the field of neuroscience to provide insight into the nervous system. However, the tracking accuracy depends on the presence of artifacts in the EEG/MEG recordings. Artifacts are any signals that do not originate from neural activity, including physiological artifacts such as eye movement and non-physiological activity caused by the environment.

This work proposes an integrated method for simultaneously tracking multiple neural sources using the probability hypothesis density particle filter (PPHDF) and reducing the effect of artifacts using feature extraction and stochastic modeling. Unique time-frequency features are first extracted using matching pursuit decomposition for both neural activity and artifact signals.

The features are used to model probability density functions for each signal type using Gaussian mixture modeling for use in the PPHDF neural tracking algorithm. The probability density function of the artifacts provides information to the tracking algorithm that can help reduce the probability of incorrectly estimating the dynamically varying number of current dipole sources and their corresponding neural activity localization parameters. Simulation results demonstrate the effectiveness of the proposed algorithm in increasing the tracking accuracy performance for multiple dipole sources using recordings that have been contaminated by artifacts.
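The PPHDF tracker itself is not reproduced here; the sketch below only illustrates the supporting idea of modeling neural and artifact features with separate Gaussian mixtures and using the resulting likelihoods to down-weight artifact-dominated measurements. The features are synthetic placeholders for the matching pursuit features, and all names are hypothetical.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)

# Placeholder time-frequency features (the thesis extracts these with matching
# pursuit decomposition); neural and artifact features occupy different regions.
neural_feats = rng.normal([1.0, 0.5], 0.2, (200, 2))
artifact_feats = rng.normal([0.2, 2.0], 0.4, (200, 2))

gmm_neural = GaussianMixture(n_components=2, random_state=0).fit(neural_feats)
gmm_artifact = GaussianMixture(n_components=2, random_state=0).fit(artifact_feats)

def measurement_weight(feature):
    """Down-weight a measurement whose features look more like an artifact.
    The weight could then scale the measurement likelihood inside the tracker."""
    log_n = gmm_neural.score_samples(feature[None, :])[0]
    log_a = gmm_artifact.score_samples(feature[None, :])[0]
    return 1.0 / (1.0 + np.exp(log_a - log_n))   # in (0, 1): near 1 if neural-like

print(measurement_weight(np.array([1.0, 0.6])))   # neural-like -> close to 1
print(measurement_weight(np.array([0.1, 2.2])))   # artifact-like -> close to 0
```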
Contributors: Jiang, Jiewei (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Bliss, Daniel (Committee member) / Chakrabarti, Chaitali (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Tracking a time-varying number of targets is a challenging dynamic state estimation problem whose complexity is intensified under low signal-to-noise ratio (SNR) or high clutter conditions. This is important, for example, when tracking multiple, closely spaced targets moving in the same direction such as a convoy of low observable vehicles moving through a forest or multiple targets moving in a crisscross pattern. The SNR in these applications is usually low as the reflected signals from the targets are weak or the noise level is very high. An effective approach for detecting and tracking a single target under low SNR conditions is the track-before-detect filter (TBDF) that uses unthresholded measurements. However, the TBDF has only been used to track a small fixed number of targets at low SNR.

This work proposes a new multiple target TBDF approach to track a dynamically varying number of targets under the recursive Bayesian framework. For a given maximum number of targets, the state estimates are obtained by estimating the joint multiple target posterior probability density function under all possible target existence combinations. The estimation of the corresponding target existence combination probabilities and the target existence probabilities are also derived. A feasible sequential Monte Carlo (SMC) based implementation algorithm is proposed. The approximation accuracy of the SMC method with a reduced number of particles is improved by an efficient proposal density function that partitions the multiple target space into a single target space.

The proposed multiple target TBDF method is extended to track targets in sea clutter using highly time-varying radar measurements. A generalized likelihood function for closely spaced multiple targets in compound Gaussian sea clutter is derived together with the maximum likelihood estimate of the model parameters using an iterative fixed point algorithm. The TBDF performance is improved by proposing a computationally feasible method to estimate the space-time covariance matrix of rapidly-varying sea clutter. The method applies the Kronecker product approximation to the covariance matrix and uses particle filtering to solve the resulting dynamic state space model formulation.
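The multi-target formulation with all existence combinations is not reproduced here; as a rough illustration of the track-before-detect idea it builds on, the sketch below runs a sequential Monte Carlo filter with a per-particle target-existence flag directly on unthresholded intensity measurements (single target, one-dimensional range cells). The amplitude, birth/death probabilities, and Gaussian noise model are illustrative assumptions, not the thesis's model.

```python
import numpy as np

rng = np.random.default_rng(3)
n_cells, n_part, p_birth, p_death, snr_amp = 32, 2000, 0.05, 0.05, 1.5

def tbd_step(pos, exist, weights, z):
    """One SMC track-before-detect step with a per-particle existence flag.
    The unthresholded intensity measurements z (one value per range cell) are
    used directly: an existing target adds amplitude snr_amp in its cell."""
    # Existence transition and random-walk motion; newly born particles simply
    # keep their previous position for brevity.
    flip = rng.random(n_part)
    exist = np.where(exist, flip > p_death, flip < p_birth)
    pos = np.clip(pos + exist * rng.normal(0, 0.5, n_part), 0, n_cells - 1)
    # Likelihood ratio of the measurement in the particle's own cell
    # (unit-variance Gaussian noise): exp(A*z - A^2/2) if the target exists.
    cell = pos.astype(int)
    like = np.where(exist, np.exp(snr_amp * z[cell] - 0.5 * snr_amp**2), 1.0)
    weights = weights * like
    weights /= weights.sum()
    idx = rng.choice(n_part, n_part, p=weights)   # multinomial resampling
    return pos[idx], exist[idx], np.full(n_part, 1.0 / n_part)

# Usage sketch: a weak target sits in cell 10; all other cells are noise only.
pos = rng.uniform(0, n_cells, n_part)
exist = rng.random(n_part) < 0.5
weights = np.full(n_part, 1.0 / n_part)
for _ in range(30):
    z = rng.normal(0, 1, n_cells)
    z[10] += snr_amp
    pos, exist, weights = tbd_step(pos, exist, weights, z)
print("existence probability:", exist.mean(), "position estimate:", pos[exist].mean())
```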
Contributors: Ebenezer, Samuel P (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Bliss, Daniel (Committee member) / Kovvali, Narayan (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Most existing security decisions for both defending and attacking are made based on deterministic approaches that only give binary answers. Even though these approaches can achieve a low false positive rate for decision making, they have high false negative rates because they cannot accommodate new attack methods and defense techniques. In this dissertation, I study how to discover and use patterns with uncertainty and randomness to counter security challenges. By extracting and modeling patterns in security events, I am able to handle previously unknown security events with quantified confidence, rather than simply making binary decisions. In particular, I cope with the following four real-world security challenges by modeling and analyzing with pattern-based approaches: 1) How to detect and attribute previously unknown shellcode? I propose instruction sequence abstraction that extracts coarse-grained patterns from an instruction sequence and use a Markov chain-based model and support vector machines to detect and attribute shellcode; 2) How to safely mitigate routing attacks in mobile ad hoc networks? I identify routing table change patterns caused by attacks, propose an extended Dempster-Shafer theory to measure the risk of such changes, and use a risk-aware response mechanism to mitigate routing attacks; 3) How to model, understand, and guess human-chosen picture passwords? I analyze collected human-chosen picture passwords, propose a selection function that models patterns in password selection, and design two algorithms to optimize password guessing paths; and 4) How to identify influential figures and events in underground social networks? I analyze collected underground social network data, identify user interaction patterns, and propose a suite of measures for systematically discovering and mining adversarial evidence. By solving these four problems, I demonstrate that discovering and using patterns can help deal with challenges in computer security, network security, human-computer interaction security, and social network security.
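Of the four problems, the shellcode one lends itself to a compact illustration; the sketch below fits a first-order Markov chain over coarse-grained instruction classes and scores a new sequence by its average transition log-likelihood. The abstraction step and the SVM-based attribution are omitted, and the instruction classes and toy sequences are hypothetical.

```python
import numpy as np

# Hypothetical coarse-grained instruction classes after abstraction.
CLASSES = ["mov", "arith", "branch", "call", "stack", "other"]
IDX = {c: i for i, c in enumerate(CLASSES)}

def fit_markov(sequences, alpha=1.0):
    """Estimate a first-order transition matrix from abstracted instruction
    sequences of known shellcode, with add-alpha smoothing for unseen pairs."""
    counts = np.full((len(CLASSES), len(CLASSES)), alpha)
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[IDX[a], IDX[b]] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def log_likelihood(seq, P):
    """Average per-transition log-likelihood; higher means more shellcode-like."""
    lp = [np.log(P[IDX[a], IDX[b]]) for a, b in zip(seq, seq[1:])]
    return float(np.mean(lp))

# Usage sketch with toy abstracted sequences.
shellcode_train = [["stack", "mov", "arith", "call"], ["mov", "stack", "call", "branch"]]
P = fit_markov(shellcode_train)
print(log_likelihood(["stack", "mov", "arith", "call"], P))    # shellcode-like
print(log_likelihood(["other", "other", "other", "other"], P))  # benign-looking
```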
Contributors: Zhao, Ziming (Author) / Ahn, Gail-Joon (Thesis advisor) / Yau, Stephen S. (Committee member) / Huang, Dijiang (Committee member) / Santanam, Raghu (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Peptide microarrays have been used in molecular biology to profile immune responses and develop diagnostic tools. When the microarrays are printed with random peptide sequences, they can be used to identify antigen antibody binding patterns or immunosignatures. In this thesis, an advanced signal processing method is proposed to estimate epitope antigen subsequences as well as identify mimotope antigen subsequences that mimic the structure of epitopes from random-sequence peptide microarrays. The method first maps peptide sequences to linear expansions of highly-localized one-dimensional (1-D) time-varying signals and uses a time-frequency processing technique to detect recurring patterns in subsequences. This technique is matched to the aforementioned mapping scheme, and it allows for an inherent analysis on how substitutions in the subsequences can affect antibody binding strength. The performance of the proposed method is demonstrated by estimating epitopes and identifying potential mimotopes for eight monoclonal antibody samples.

The proposed mapping is generalized to express information on a protein's sequence location, structure and function onto a highly localized three-dimensional (3-D) Gaussian waveform. In particular, as analysis of protein homology has shown that incorporating different kinds of information into an alignment process can yield more robust alignment results, a pairwise protein structure alignment method is proposed based on a joint similarity measure of multiple mapped protein attributes. The 3-D mapping allocates protein properties into distinct regions in the time-frequency plane in order to simplify the alignment process by including all relevant information into a single, highly customizable waveform. Simulations demonstrate the improved performance of the joint alignment approach to infer relationships between proteins, and they provide information on mutations that cause changes to both the sequence and structure of a protein.

In addition to the biology-based signal processing methods, a statistical method is considered that uses a physics-based model to improve processing performance. In particular, an externally developed physics-based model for sea clutter is examined when detecting a low radar cross-section target in heavy sea clutter. This novel model includes a process that generates random dynamic sea clutter based on the governing physics of water gravity and capillary waves and a finite-difference time-domain electromagnetics simulation process based on Maxwell's equations propagating the radar signal. A subspace clutter suppression detector is applied to remove dominant clutter eigenmodes, and its improved performance over matched filtering is demonstrated using simulations.
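The physics-based clutter generator and FDTD simulation are outside the scope of a short example; the sketch below only illustrates the subspace clutter suppression step mentioned above: estimate a clutter covariance from target-free training snapshots, project measurements away from its dominant eigenmodes, and matched-filter the residual. The clutter rank, target signature, and noise levels are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 64                                   # samples per radar snapshot (assumed)

def clutter_projector(training, n_modes=3):
    """Orthogonal projector onto the complement of the dominant clutter eigenmodes,
    estimated from target-free training snapshots (rows of `training`)."""
    R = training.conj().T @ training / training.shape[0]
    _, vecs = np.linalg.eigh(R)
    E = vecs[:, -n_modes:]               # eigenvectors of the largest eigenvalues
    return np.eye(n) - E @ E.conj().T

def detect(z, s, P, threshold):
    """Suppress clutter, then matched-filter against the known target signature s."""
    stat = np.abs(s.conj() @ (P @ z)) ** 2
    return stat > threshold, stat

# Usage sketch: low-rank clutter plus a weak target return.
modes = rng.standard_normal((3, n))
training = rng.standard_normal((200, 3)) @ modes + 0.1 * rng.standard_normal((200, n))
P = clutter_projector(training)
s = np.exp(2j * np.pi * 0.1 * np.arange(n)) / np.sqrt(n)   # unit-norm signature
z_clutter = rng.standard_normal(3) @ modes + 0.1 * rng.standard_normal(n)
z_target = z_clutter + 0.5 * s
print("clutter only:", detect(z_clutter, s, P, 0.1)[1])
print("with target :", detect(z_target, s, P, 0.1)[1])
```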
Contributors: O'Donnell, Brian (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Bliss, Daniel (Committee member) / Johnston, Stephen A. (Committee member) / Kovvali, Narayan (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Arizona State University (Publisher)
Created: 2014