This collection includes most of the ASU Theses and Dissertations from 2011 to the present. ASU Theses and Dissertations are available in downloadable PDF format; however, a small percentage of items are under embargo. Information about the dissertations/theses includes degree information, committee members, an abstract, and supporting data or media.

In addition to the electronic theses found in the ASU Digital Repository, ASU Theses and Dissertations can be found in the ASU Library Catalog.

Dissertations and theses granted by Arizona State University are archived and made available through a joint effort of the ASU Graduate College and the ASU Libraries. For more information or questions about this collection, visit the Digital Repository ETD Library Guide or contact the ASU Graduate College at gradformat@asu.edu.

Displaying 31 - 40 of 58
Description
Insertion and deletion errors represent an important category of channel impairments. Despite their importance and much work over the years, channels with such impairments are far from being fully understood, as they have proved difficult to analyze. In this dissertation, a promising coding scheme is investigated over independent and identically distributed (i.i.d.) insertion/deletion channels, i.e., the interleaved concatenation of an outer low-density parity-check (LDPC) code with error-correction capabilities and an inner marker code for synchronization purposes. Marker code structures that offer the highest achievable rates are identified when standard bit-level synchronization is performed. Then, to exploit the correlations in the likelihoods corresponding to different transmitted bits, a novel symbol-level synchronization algorithm that works on groups of consecutive bits is introduced. Extrinsic information transfer (EXIT) charts are also utilized to analyze the convergence behavior of the receiver and to design LDPC codes with degree distributions matched to these channels. The next focus is on segmented deletion channels. It is first shown that such channels are information stable, and hence their channel capacity exists. Several upper and lower bounds are then introduced in an attempt to understand the behavior of the channel capacity. The asymptotic behavior of the channel capacity is also quantified when the average bit deletion rate is small. Further, maximum a posteriori (MAP) based synchronization algorithms are developed, and specific LDPC codes are designed to match the channel characteristics. Finally, in addition to binary substitution errors, coding schemes and the corresponding detection algorithms are also studied for several other models with synchronization errors, including inter-symbol interference (ISI) channels, channels with multiple transmit/receive elements, and multi-user communication systems.
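As a rough illustration of the i.i.d. insertion/deletion channel model and the periodic marker idea described above (a minimal sketch, not the dissertation's concatenated LDPC/marker scheme; the marker pattern, period, and error probabilities are arbitrary choices for the example):

```python
import random

def add_markers(bits, marker=(0, 1, 0), period=8):
    """Insert a fixed marker pattern after every `period` data bits to aid re-synchronization."""
    out = []
    for i, b in enumerate(bits, start=1):
        out.append(b)
        if i % period == 0:
            out.extend(marker)
    return out

def iid_insertion_deletion_channel(bits, p_del=0.03, p_ins=0.03, rng=None):
    """Each transmitted bit is independently deleted with prob p_del, or preceded by a random
    inserted bit with prob p_ins: a simple i.i.d. synchronization-error channel model."""
    rng = rng or random.Random(0)
    received = []
    for b in bits:
        if rng.random() < p_ins:
            received.append(rng.randint(0, 1))   # spurious insertion
        if rng.random() < p_del:
            continue                             # bit deleted
        received.append(b)
    return received

rng = random.Random(1)
data = [rng.randint(0, 1) for _ in range(64)]
tx = add_markers(data)
rx = iid_insertion_deletion_channel(tx, rng=rng)
print("sent %d bits, received %d bits" % (len(tx), len(rx)))  # the length mismatch is the core difficulty
```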
Contributors: Wang, Feng (Author) / Duman, Tolga M. (Thesis advisor) / Tepedelenlioğlu, Cihan (Committee member) / Reisslein, Martin (Committee member) / Zhang, Junshan (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Modern machine learning systems leverage data and features from multiple modalities to gain more predictive power. In most scenarios, the modalities are vastly different and the acquired data are heterogeneous in nature. Consequently, building highly effective fusion algorithms is at the core of achieving improved model robustness and inference performance. This dissertation focuses on representation learning approaches as the fusion strategy. Specifically, the objective is to learn a shared latent representation that jointly exploits the structural information encoded in all modalities, such that a straightforward learning model can be adopted to obtain the prediction.

We first consider sensor fusion, a typical multimodal fusion problem critical to building a pervasive computing platform. A systematic fusion technique is described to support both multiple sensors and multiple descriptors for activity recognition. Designed to learn the optimal combination of kernels, Multiple Kernel Learning (MKL) algorithms have been successfully applied to numerous fusion problems in computer vision and related fields. Utilizing the MKL formulation, we next describe an auto-context algorithm for learning image context via fusion with low-level descriptors. Furthermore, a principled fusion algorithm that uses deep learning to optimize kernel machines is developed. By bridging deep architectures with kernel optimization, this approach leverages the benefits of both paradigms and is applied to a wide variety of fusion problems.
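To make the kernel-combination idea concrete, here is a hedged toy sketch: instead of a full MKL solver, it simply grid-searches the weight of a convex combination of two RBF kernels (one per synthetic "modality") and feeds the combined kernel to a precomputed-kernel SVM. The data, kernel parameters, and weighting rule are illustrative stand-ins, not the dissertation's algorithms.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def rbf_kernel(X, Y, gamma):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Two synthetic "modalities": the same samples described by two different feature sets.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200)
X1 = y[:, None] + 0.8 * rng.standard_normal((200, 5))   # modality 1
X2 = y[:, None] - 0.8 * rng.standard_normal((200, 3))   # modality 2

idx_tr, idx_te = train_test_split(np.arange(200), test_size=0.3, random_state=0)

best = (None, -1.0)
for w in np.linspace(0.0, 1.0, 11):          # weight on modality 1's kernel
    K = lambda A, B: w * rbf_kernel(X1[A], X1[B], 0.2) + (1 - w) * rbf_kernel(X2[A], X2[B], 0.2)
    clf = SVC(kernel="precomputed").fit(K(idx_tr, idx_tr), y[idx_tr])
    acc = accuracy_score(y[idx_te], clf.predict(K(idx_te, idx_tr)))
    if acc > best[1]:
        best = (w, acc)

print("best kernel weight on modality 1:", best[0], "held-out accuracy:", round(best[1], 3))
```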

In many real-world applications, the modalities exhibit highly specific data structures, such as time sequences and graphs, and consequently, special design of the learning architecture is needed. In order to improve temporal modeling for multivariate sequences, we developed two architectures centered around attention models. A novel clinical time series analysis model is proposed for several critical problems in healthcare. Another model, coupled with a triplet ranking loss as a metric-learning framework, is described to better solve speaker diarization. Compared to state-of-the-art recurrent networks, these attention-based multivariate analysis tools achieve improved performance while having lower computational complexity. Finally, in order to perform community detection on multilayer graphs, a fusion algorithm is described that derives node embeddings from word-embedding techniques and also exploits the complementary relational information contained in each layer of the graph.
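The attention-based temporal modeling can be illustrated with a minimal attention-pooling step for a multivariate sequence: each time step receives a scalar score, a softmax over the scores yields weights, and the sequence is collapsed into a weighted sum. This sketch uses a random (not learned) score vector and is only meant to show the mechanism, not the proposed architectures.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def attention_pool(X, w):
    """Collapse a (T, d) multivariate sequence into a single d-vector.
    Each time step gets a scalar score w·x_t; softmax over the scores gives the weights."""
    scores = X @ w                      # (T,)
    alpha = softmax(scores)             # attention weights over time
    return alpha @ X, alpha             # weighted sum of time steps, plus the weights

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 8))        # 50 time steps, 8 channels
w = rng.standard_normal(8)              # score vector (would be learned in practice)
pooled, alpha = attention_pool(X, w)
print(pooled.shape, round(alpha.sum(), 6))   # (8,) 1.0
```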
Contributors: Song, Huan (Author) / Spanias, Andreas (Thesis advisor) / Thiagarajan, Jayaraman (Committee member) / Berisha, Visar (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
In the past half century, low-power wireless signals from portable radar sensors, initially continuous-wave (CW) radars and more recently ultra-wideband (UWB) radar systems, have been successfully used to detect physiological movements of stationary human beings.

The thesis starts with a careful review of existing signal processing techniques and state-of-the-art methods for vital signs monitoring using UWB impulse systems. An in-depth analysis of the various approaches is then presented.

Robust heart-rate monitoring methods are proposed based on a novel result: spectrally, the fundamental heartbeat frequency is respiration-interference-limited while its higher-order harmonics are noise-limited. The higher-order statistics related to the heartbeat can provide a robust indication when the fundamental heartbeat is masked by the strong lower-order harmonics of respiration, or when phase calibration is not accurate in a phase-based method. Analytical spectral analysis is performed to validate that the higher-order harmonics of the heartbeat are almost free of respiration interference. Extensive experiments have been conducted to justify an adaptive heart-rate monitoring algorithm. The scenarios of interest are: 1) a single subject, 2) multiple subjects at different ranges, 3) multiple subjects at the same range, and 4) through-wall monitoring.
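A minimal sketch of the harmonic idea, under made-up numbers: with a strong, non-sinusoidal respiration component, a naive peak search in the heart-rate band can lock onto a respiration harmonic, while searching the band around the heartbeat's second harmonic (and halving the result) avoids that interference. The waveform model, rates, and band edges below are illustrative, not the dissertation's algorithm.

```python
import numpy as np

fs = 20.0                                     # slow-time sampling rate (Hz); all values are illustrative
t = np.arange(0, 60, 1 / fs)                  # 60 s observation window
rng = np.random.default_rng(0)

# Respiration: strong and non-sinusoidal, so its harmonics reach up into the heart-rate band.
f_resp = 0.25
resp = sum((0.8 / k) * np.sin(2 * np.pi * k * f_resp * t) for k in range(1, 5))

# Heartbeat: weak but pulse-like, so it also carries energy at twice the heart rate.
f_heart = 1.2                                 # 72 bpm
heart = 0.05 * np.sin(2 * np.pi * f_heart * t) + 0.03 * np.sin(2 * np.pi * 2 * f_heart * t)

x = resp + heart + 0.01 * rng.standard_normal(t.size)
f = np.fft.rfftfreq(t.size, 1 / fs)
spec = np.abs(np.fft.rfft(x * np.hanning(t.size)))

band1 = (f >= 0.8) & (f <= 2.0)               # naive: strongest peak in the plausible heart-rate band
naive = f[band1][np.argmax(spec[band1])]

band2 = (f >= 1.6) & (f <= 4.0)               # harmonic: search twice the band, where respiration has faded
harmonic = f[band2][np.argmax(spec[band2])] / 2

print("naive estimate:    %.1f bpm" % (60 * naive))       # locks onto a respiration harmonic
print("harmonic estimate: %.1f bpm" % (60 * harmonic))    # recovers the 72 bpm heart rate
```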

A remote sensing radar system implemented using the proposed adaptive heart-rate estimation algorithm is compared to the competing remote sensing technology, a remote imaging photoplethysmography system, showing promising results.

State-of-the-art methods for vital signs monitoring are fundamentally based on processing the phase variation due to vital-sign motion. Their performance is determined by a phase calibration procedure. Existing methods fail to consider the time-varying nature of the phase noise, and there is no prior knowledge about which of the corrupted complex components, in-phase (I) or quadrature (Q), needs to be corrected. A precise phase calibration routine is proposed based on the respiration pattern. The I/Q samples from each breath are likely to experience similar motion noise and therefore should be corrected independently. A high slow-time sampling rate is used to ensure phase calibration accuracy. Occasionally, a 180-degree phase shift error occurs after the initial calibration step and should be corrected as well; all phase trajectories in the I/Q plot are only allowed in certain angular spaces. This precise phase calibration routine is validated through computer simulations incorporating a time-varying phase noise model, a controlled mechanical system, and a human subject experiment.
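For context, a generic arctangent-demodulation calibration is sketched below: the DC offsets of the I/Q samples are estimated with a least-squares circle fit, and the phase is then recovered from the re-centered samples. This is a standard textbook-style routine, not the per-breath calibration proposed in the thesis; the wavelength, offsets, and motion model are made up.

```python
import numpy as np

def fit_circle(i, q):
    """Least-squares (Kasa) circle fit: returns center (ci, cq) and radius."""
    A = np.column_stack([2 * i, 2 * q, np.ones_like(i)])
    b = i ** 2 + q ** 2
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    ci, cq = sol[0], sol[1]
    return ci, cq, np.sqrt(sol[2] + ci ** 2 + cq ** 2)

# Synthetic I/Q data: chest motion x(t) modulates the phase 4*pi*x/lambda around unknown DC offsets.
rng = np.random.default_rng(0)
t = np.arange(0, 30, 0.05)
x = 4e-3 * np.sin(2 * np.pi * 0.3 * t) + 0.3e-3 * np.sin(2 * np.pi * 1.2 * t)  # respiration + heartbeat (m)
lam = 0.04                                   # 7.5 GHz carrier wavelength, illustrative
phase = 4 * np.pi * x / lam
dc_i, dc_q = 0.7, -0.4                       # unknown DC offsets to be calibrated out
i = dc_i + np.cos(phase) + 0.01 * rng.standard_normal(t.size)
q = dc_q + np.sin(phase) + 0.01 * rng.standard_normal(t.size)

ci, cq, _ = fit_circle(i, q)
demod = np.unwrap(np.arctan2(q - cq, i - ci))          # arctangent demodulation after calibration
recovered = demod * lam / (4 * np.pi)
err = (recovered - recovered.mean()) - (x - x.mean())
print("max displacement error (mm): %.3f" % (1e3 * np.max(np.abs(err))))
```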
Contributors: Rong, Yu (Author) / Bliss, Daniel W (Thesis advisor) / Richmond, Christ D (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Alkhateeb, Ahmed (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Research on the topology and dynamics of complex networks is one of the most active areas in complex systems science. The goals are to structure our understanding of real-world social, economic, technological, and biological systems as networks consisting of a large number of interacting units, and to develop corresponding detection, prediction, and control strategies. In this highly interdisciplinary field, my research mainly concentrates on universal estimation schemes, physical controllability, and the mechanisms behind extreme events and cascading failures in complex networked systems.

Revealing the underlying structure and dynamics of complex networked systems from observed data, without any specific prior information, is of fundamental importance to science, engineering, and society. We articulate a Markov-network-based model, the sparse dynamical Boltzmann machine (SDBM), as a universal network structure estimator and dynamics approximator, built on techniques including compressive sensing and the K-means algorithm. It recovers the network structure of the original system and predicts its short-term or even long-term dynamical behavior for a large variety of representative dynamical processes on model and real-world complex networks.
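The compressive-sensing flavor of the reconstruction task can be illustrated with a much simpler stand-in than the SDBM: for a linear networked system, each node's sparse row of the coupling matrix is recovered by l1-regularized regression from a modest number of state/response pairs. The dynamics, coupling weights, and the assumption that the self-decay rate is known are choices made only for this sketch.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, trials = 20, 60
A = (rng.random((n, n)) < 0.1) * rng.uniform(0.1, 0.3, (n, n))   # sparse ground-truth coupling matrix
np.fill_diagonal(A, 0.0)

X0 = rng.standard_normal((trials, n))                                         # randomly perturbed states
X1 = X0 @ (0.5 * np.eye(n) + A).T + 0.01 * rng.standard_normal((trials, n))   # one-step responses
targets = X1 - 0.5 * X0                       # coupling contribution (self-decay of 0.5 assumed known here)

A_hat = np.zeros_like(A)
for i in range(n):
    lasso = Lasso(alpha=0.005, fit_intercept=False, max_iter=20000)
    lasso.fit(X0, targets[:, i])              # sparse regression recovers row i of A
    A_hat[i] = lasso.coef_

true_edges = A != 0
found_edges = np.abs(A_hat) > 0.05
print("edges recovered: %d of %d, false positives: %d"
      % ((true_edges & found_edges).sum(), true_edges.sum(), (found_edges & ~true_edges).sum()))
```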

One of the most challenging problems in complex dynamical systems is to control complex networks. Upon finding that the energy required to approach a target state with reasonable precision is often unbearably large, and that the energy of controlling a set of networks with similar structural properties follows a fat-tail distribution, we identify fundamental structural "short boards" that play a dominant role in the enormous energy, and we offer a theoretical interpretation for the fat-tail distribution as well as simple strategies to significantly reduce the energy.
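As a concrete reference point for "control energy", the standard linear-control calculation is sketched below: for stable dynamics x' = Ax + Bu with one or a few driver nodes, the minimum energy needed to reach a target state is x_f^T W^{-1} x_f, where W is the controllability Gramian. This textbook formulation (with an arbitrary network and driver choice) only shows why the energy can become enormous when the Gramian is ill-conditioned; it is not the dissertation's specific analysis.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(1)
n = 10
adj = (rng.random((n, n)) < 0.15) * rng.uniform(0.5, 1.5, (n, n))          # sparse random weighted links
adj[np.arange(1, n), np.arange(0, n - 1)] = rng.uniform(0.5, 1.5, n - 1)   # chain keeps all nodes reachable from node 0
np.fill_diagonal(adj, 0.0)
A = adj - (adj.sum(axis=1).max() + 1.0) * np.eye(n)                        # diagonal shift makes the dynamics stable

drivers = [0]
B = np.zeros((n, len(drivers)))
B[drivers, range(len(drivers))] = 1.0                                      # control input enters at the driver node

W = solve_continuous_lyapunov(A, -B @ B.T)                                 # Gramian: A W + W A^T + B B^T = 0
x_target = rng.standard_normal(n)
energy = x_target @ np.linalg.solve(W, x_target)                           # minimal energy x_f^T W^{-1} x_f

print("Gramian condition number: %.2e" % np.linalg.cond(W))
print("minimal control energy:   %.2e" % energy)
```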

Extreme events and cascading failures, a type of collective behavior in complex networked systems, often have catastrophic consequences. Utilizing transportation and evolutionary game dynamics as prototypical settings, we investigate the emergence of extreme events in simplex complex networks, mobile ad hoc networks, and multi-layer interdependent networks. A striking resonance-like phenomenon and the emergence of global-scale cascading breakdown are discovered. We derive analytic theories to understand the mechanism of control at a quantitative level and articulate cost-effective control schemes to significantly suppress extreme events and the cascading process.
Contributors: Chen, Yuzhong (Author) / Lai, Ying-Cheng (Thesis advisor) / Spanias, Andreas (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Ying, Lei (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
For a sensor array, some of its elements may fail to work due to hardware failures. The missing data may then distort the beam pattern or decrease the accuracy of direction-of-arrival (DOA) estimation. Therefore, considerable research has been conducted to develop algorithms that can estimate the missing signal information. On the other hand, with such algorithms, array elements can also be selectively turned off while the missing information is successfully recovered, which saves power consumption and hardware cost.

Conventional approaches to array element failures are mainly based on interpolation or sequential learning algorithms. Both rely heavily on prior knowledge, such as information about the failures or a training dataset without missing data. In addition, since most of the existing approaches are developed for DOA estimation, their recovery target is usually the covariance matrix rather than the signal matrix.

In this thesis, a new signal recovery method based on matrix completion (MC) theory is introduced. It aims to directly refill the absent entries in the signal matrix without any prior knowledge. A novel overlapping reshaping method is proposed to satisfy the applicability conditions of MC algorithms. Compared to other existing MC-based approaches, the proposed method provides a higher probability of successful recovery. The thesis describes the principle of the algorithms and analyzes the performance of the method. A few application examples with simulation results are also provided.
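A minimal matrix-completion sketch (iterative SVD soft-thresholding on a low-rank array snapshot matrix) is given below. Note that it fills entries missing at random; recovering entirely failed elements, i.e., fully missing rows, is exactly what motivates the overlapping reshaping proposed in the thesis, which this sketch does not implement. The array geometry, source angles, and thresholds are illustrative.

```python
import numpy as np

def soft_impute(M, mask, tau=1.0, iters=200):
    """Fill missing entries of M (where mask is False) by iterative SVD soft-thresholding."""
    X = np.where(mask, M, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        s = np.maximum(s - tau, 0.0)                    # nuclear-norm style shrinkage
        X_low = (U * s) @ Vt
        X = np.where(mask, M, X_low)                    # keep observed entries fixed, refill the rest
    return X

# Low-rank "signal matrix": snapshots from 2 narrowband sources on a 16-element uniform linear array.
rng = np.random.default_rng(0)
m, snapshots, d = 16, 60, 0.5
angles = np.deg2rad([10.0, -25.0])
steering = np.exp(2j * np.pi * d * np.outer(np.arange(m), np.sin(angles)))     # (m, 2)
S = rng.standard_normal((2, snapshots)) + 1j * rng.standard_normal((2, snapshots))
noise = 0.01 * (rng.standard_normal((m, snapshots)) + 1j * rng.standard_normal((m, snapshots)))
M = steering @ S + noise

mask = rng.random((m, snapshots)) > 0.3                 # 30% of entries treated as missing
M_hat = soft_impute(M, mask, tau=0.5)
err = np.linalg.norm((M_hat - M)[~mask]) / np.linalg.norm(M[~mask])
print("relative error on missing entries: %.3f" % err)
```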
Contributors: Fan, Jie (Author) / Spanias, Andreas (Thesis advisor) / Tepedelenlioğlu, Cihan (Committee member) / Tsakalis, Konstantinos (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
Large-scale integration of wind generation introduces planning and operational difficulties due to the intermittent and highly variable nature of wind. In particular, the generation from non-hydro renewable resources is inherently variable and often difficult to predict. Integrating significant amounts of renewable generation thus presents a challenge to power system operators, requiring additional flexibility, which may incur a decrease in conventional generation capacity.

This research investigates algorithms, employing emerging computational advances, for system operation policies that can improve the flexibility of the electricity industry. The focus of this study is on flexible operation policies for renewable generation, particularly wind generation. Specifically, distributional forecasts of wind farm generation are used to dispatch a "discounted" amount of the wind generation, leaving a margin that can be used for reserve if needed. This study presents systematic mathematical formulations that allow the operator to incorporate this flexibility into the operation optimization model and increase the benefits in the energy and reserve scheduling procedure. Incorporating this formulation into the dispatch optimization problem gives the operator the ability to use forecasted probability distributions, as well as off-line generated policies, to choose proper approaches for operating the system in real time. Methods to generate such policies are discussed and a forecast-based approach for developing wind margin policies is presented. The impacts of incorporating such policies into electricity market models are also investigated.
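A toy version of the "discounted" wind dispatch idea: draw scenarios from an assumed forecast distribution, dispatch only the amount the wind farm can deliver with a chosen confidence, and treat the remaining expected output as a reserve margin. The lognormal forecast, confidence level, and numbers are placeholders, not the formulations developed in this study.

```python
import numpy as np

rng = np.random.default_rng(0)
forecast_samples = rng.lognormal(mean=np.log(120.0), sigma=0.25, size=10_000)  # MW scenarios for one hour

confidence = 0.90                                     # dispatch only what can be delivered 90% of the time
dispatch = np.quantile(forecast_samples, 1 - confidence)
expected = forecast_samples.mean()
margin = expected - dispatch                          # expected headroom available as reserve

print("expected generation:       %6.1f MW" % expected)
print("dispatched ('discounted'): %6.1f MW" % dispatch)
print("expected reserve margin:   %6.1f MW" % margin)
print("probability of shortfall:  %.3f" % (forecast_samples < dispatch).mean())
```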
Contributors: Hedayati Mehdiabadi, Mojgan (Author) / Zhang, Junshan (Thesis advisor) / Hedman, Kory (Thesis advisor) / Heydt, Gerald (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
RF convergence of radar and communications users is rapidly becoming an issue for a multitude of stakeholders. To hedge against growing spectral congestion, research into cooperative radar and communications systems has been identified as a critical necessity for the United States and other countries. Further, the joint sensing-communicating paradigm appears imminent in several technological domains. In pursuit of co-designing radar and communications systems that work cooperatively and benefit from each other's existence, joint radar-communications metrics are defined and bounded as measures of performance. Estimation rate, a novel measure of radar estimation information as a function of time, is introduced. Complementary to the communications data rate, it allows the two systems to be compared on the same scale. This information-centric approach has a number of advantages: it defines precisely what is gained through radar illumination and serves as a measure of spectral efficiency. By jointly bounding the radar estimation rate and the communications data rate, system design can be cast as a joint optimization problem.
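A heavily hedged back-of-the-envelope sketch of how an estimation rate can be placed on the same bits-per-second scale as a data rate: under a simple Gaussian model, the information gained about a fluctuating target parameter per radar dwell is 0.5*log2(1 + process variance / estimation-error variance), and dividing by the dwell time gives a rate. The variances, dwell time, bandwidth, and SNR are arbitrary, and this is not the dissertation's exact definition or bounds.

```python
import numpy as np

sigma2_proc = 5.0 ** 2          # variance of the target parameter's fluctuation between dwells (assumed)
sigma2_est = 0.5 ** 2           # estimator error variance, e.g., a CRLB at the operating SNR (assumed)
T_dwell = 1e-3                  # radar dwell / pulse repetition interval in seconds (assumed)

# Gaussian mutual information gained per dwell, converted to bits per second.
est_rate = 0.5 * np.log2(1 + sigma2_proc / sigma2_est) / T_dwell

bandwidth = 1e6                 # Hz shared with a communications user (assumed)
snr_comms = 10 ** (15 / 10)     # 15 dB
data_rate = bandwidth * np.log2(1 + snr_comms)

print("estimation rate: %10.1f bits/s" % est_rate)
print("data rate:       %10.1f bits/s" % data_rate)
```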
Contributors: Paul, Bryan (Author) / Bliss, Daniel W. (Thesis advisor) / Berisha, Visar (Committee member) / Kosut, Oliver (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Dynamic spectrum access (DSA) has great potential to address the worldwide spectrum shortage by enhancing spectrum efficiency. It allows unlicensed secondary users to access under-utilized spectrum when the primary users are not transmitting. On the other hand, the open wireless medium subjects DSA systems to various security and privacy issues, which might hinder their practical deployment. This dissertation consists of two parts that discuss the potential challenges and solutions.

The first part consists of three chapters, with a focus on secondary-user authentication. Chapter One gives an overview of the challenges and existing solutions in spectrum-misuse detection. Chapter Two presents SpecGuard, the first crowdsourced spectrum-misuse detection framework for DSA systems. In SpecGuard, three novel schemes are proposed for embedding and detecting a spectrum permit at the physical layer. Chapter Three proposes SafeDSA, a novel PHY-based scheme utilizing temporal features for authenticating secondary users. In SafeDSA, the secondary user embeds his spectrum authorization into the cyclic prefix of each physical-layer symbol, which can be detected and authenticated by a verifier.

The second part also consists of three chapters, with a focus on crowdsourced spectrum sensing (CSS) with privacy considerations. CSS allows a spectrum sensing provider (SSP) to outsource spectrum sensing to distributed mobile users. Without strong incentives and location-privacy protection in place, however, mobile users are reluctant to act as crowdsourcing workers for spectrum-sensing tasks. Chapter Four gives an overview of the challenges and existing solutions. Chapter Five presents PriCSS, in which the SSP selects participants based on the exponential mechanism such that the participants' sensing costs, which are associated with their locations, are kept private. Chapter Six further proposes DPSense, a framework that allows an honest-but-curious SSP to select mobile users for executing spatiotemporal spectrum-sensing tasks without violating their location privacy. By collecting location traces perturbed with a differential privacy guarantee from participants, the SSP assigns spectrum-sensing tasks to participants with consideration of both spatial and temporal factors.
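The exponential mechanism underlying the participant selection can be sketched generically: each candidate gets a utility score (here, the negative of an assumed sensing cost), and a candidate is sampled with probability proportional to exp(epsilon * score / (2 * sensitivity)). The costs, epsilon, and sensitivity bound below are made up, and the scoring is not PriCSS's actual design.

```python
import numpy as np

def exponential_mechanism(scores, epsilon, sensitivity, rng):
    """Differentially private selection: pick index i with probability proportional to
    exp(epsilon * score_i / (2 * sensitivity))."""
    logits = epsilon * np.asarray(scores, dtype=float) / (2.0 * sensitivity)
    logits -= logits.max()                       # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum()
    return rng.choice(len(scores), p=probs)

rng = np.random.default_rng(0)
costs = np.array([1.2, 0.4, 2.5, 0.9, 0.6])      # per-user sensing costs (arbitrary units, assumed private)
scores = -costs                                  # utility = negative cost
sensitivity = 1.0                                # assumed bound on how much one user's data can move a score

picks = [exponential_mechanism(scores, epsilon=1.0, sensitivity=sensitivity, rng=rng) for _ in range(2000)]
print("selection frequencies:", np.bincount(picks, minlength=len(costs)) / len(picks))
```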

Through theoretical analysis and simulations, the efficacy and effectiveness of the proposed schemes are validated.
Contributors: Jin, Xiaocong (Author) / Zhang, Yanchao (Thesis advisor) / Zhang, Junshan (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Ying, Lei (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Small wireless cells have the potential to overcome bottlenecks in wireless access through the sharing of spectrum resources. To address the backhaul bottleneck, a novel access backhaul network architecture has been introduced based on a Smart Gateway (Sm-GW) placed between the small cell base stations, e.g., LTE eNBs, and the conventional backhaul gateways, e.g., LTE Serving/Packet Gateways (S/P-GWs). The Sm-GW flexibly schedules uplink transmissions for the eNBs. Based on software defined networking (SDN), a management mechanism has been proposed that allows multiple operators to flexibly inter-operate via multiple Sm-GWs with a multitude of small cells. This dissertation also comprehensively surveys the studies that examine the SDN paradigm in optical networks. Along with PHY functional split improvements, the performance of the Distributed Converged Cable Access Platform (DCCAP) in cable architectures, especially for the Remote-PHY and Remote-MACPHY nodes, has been evaluated. For the PHY functional split, in addition to the re-use of infrastructure with a common FFT module for multiple technologies, a novel cross-functional-split interaction has been proposed that caches repetitive QAM symbols across time at the remote node to reduce the transmission rate requirement of the fronthaul link.
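To illustrate the kind of flexible uplink scheduling a gateway between small cells and the backhaul can perform, here is a toy max-min fair allocator: each round, the available backhaul capacity is shared equally among eNBs with pending demand, capped by each eNB's demand, and unused shares are redistributed. The capacities, demands, and policy are illustrative, not the Sm-GW scheduling mechanisms studied in the dissertation.

```python
def schedule_uplink(demands_mbps, capacity_mbps):
    """Return per-eNB grants (Mb/s) that never exceed demand and sum to at most the capacity."""
    grants = {enb: 0.0 for enb in demands_mbps}
    remaining = dict(demands_mbps)
    capacity = capacity_mbps
    # Water-filling loop: equal shares among active eNBs, redistributing what capped eNBs leave unused.
    while capacity > 1e-9 and any(r > 1e-9 for r in remaining.values()):
        active = [e for e, r in remaining.items() if r > 1e-9]
        share = capacity / len(active)
        for enb in active:
            grant = min(share, remaining[enb])
            grants[enb] += grant
            remaining[enb] -= grant
            capacity -= grant
    return grants

demands = {"eNB-1": 40.0, "eNB-2": 120.0, "eNB-3": 15.0}   # queued uplink traffic per small cell (assumed)
print(schedule_uplink(demands, capacity_mbps=100.0))        # {'eNB-1': 40.0, 'eNB-2': 45.0, 'eNB-3': 15.0}
```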
Contributors: Thyagaturu, Akhilesh Thyagaturu (Author) / Reisslein, Martin (Thesis advisor) / Seeling, Patrick (Committee member) / Zhang, Yanchao (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Compressed sensing (CS) is a novel approach to collecting and analyzing data of all types. By exploiting prior knowledge of the compressibility of many naturally-occurring signals, specially designed sensors can dramatically undersample the data of interest and still achieve high performance. However, the generated data are pseudorandomly mixed and must be processed before use. In this work, a model of a single-pixel compressive video camera is used to explore the problems of performing inference based on these undersampled measurements. Three broad types of inference from CS measurements are considered: recovery of video frames, target tracking, and object classification/detection. Potential applications include automated surveillance, autonomous navigation, and medical imaging and diagnosis.

Recovery of CS video frames is far more complex than recovery of still images, which are known to be (approximately) sparse in a linear basis such as the discrete cosine transform. By combining the sparsity of individual frames with an optical-flow-based model of inter-frame dependence, the perceptual quality and peak signal-to-noise ratio (PSNR) of reconstructed frames are improved. The efficacy of this approach is demonstrated for the cases of a priori known image motion and unknown but constant image-wide motion.
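A minimal single-frame recovery sketch under the transform-sparsity assumption: the frame (flattened to a vector) is assumed sparse under an orthonormal DCT, measurements are taken with a random matrix in the spirit of a single-pixel camera, and the frame is recovered by iterative soft-thresholding (ISTA). The optical-flow coupling between frames is not modeled, and all sizes and parameters are illustrative.

```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(0)
n, m, k = 256, 160, 8                        # signal length, number of measurements, sparsity

alpha_true = np.zeros(n)
alpha_true[rng.choice(n, k, replace=False)] = 5 * rng.standard_normal(k)
x_true = idct(alpha_true, norm="ortho")      # the "frame" (flattened) in the pixel domain

Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix (single-pixel-camera style)
y = Phi @ x_true

L = np.linalg.norm(Phi, 2) ** 2              # Lipschitz constant (the ortho DCT preserves the spectral norm)
lam = 0.05
alpha = np.zeros(n)
for _ in range(2000):                        # ISTA: gradient step in the DCT domain, then soft threshold
    residual = Phi @ idct(alpha, norm="ortho") - y
    grad = dct(Phi.T @ residual, norm="ortho")
    z = alpha - grad / L
    alpha = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)

x_hat = idct(alpha, norm="ortho")
print("relative reconstruction error: %.3f" % (np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)))
```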

Although video sequences can be reconstructed from CS measurements, the process is computationally costly. In autonomous systems, this reconstruction step is unnecessary if higher-level conclusions can be drawn directly from the CS data. A tracking algorithm that can track target vehicles at very high levels of compression, where reconstruction of video frames fails, is described and evaluated. The algorithm performs tracking-by-detection using a particle filter, with the likelihood given by a maximum average correlation height (MACH) target template model.
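The tracking-by-detection idea can be sketched with a bootstrap particle filter that never reconstructs frames: each particle hypothesizes a target position, the appearance template at that position is projected through the same measurement matrix, and the particle is weighted by how well the projected template matches the compressive measurement. For simplicity this sketch uses a Gaussian residual likelihood rather than a MACH correlation filter, and the scene, template, and motion model are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, T, P = 128, 24, 40, 300                  # scene length, measurements/frame, frames, particles
grid = np.arange(n)

def templates(positions):
    """Gaussian-bump appearance templates centered at each position (one per row)."""
    return np.exp(-0.5 * ((grid[None, :] - np.atleast_1d(positions)[:, None]) / 2.0) ** 2)

Phi = rng.standard_normal((m, n)) / np.sqrt(m)  # fixed compressive measurement matrix

true_pos = 20.0 + 1.5 * np.arange(T)            # target drifts across the scene
ys = [Phi @ (templates(p)[0] + 0.05 * rng.standard_normal(n)) for p in true_pos]

particles = rng.uniform(0, n, P)                # bootstrap particle filter over target position
sigma_meas = 0.1
errors = []
for t, y in enumerate(ys):
    particles = particles + 1.5 + 1.0 * rng.standard_normal(P)      # motion model (drift assumed known)
    pred = templates(particles) @ Phi.T                             # predicted measurements, no frame recovery
    loglik = -np.sum((pred - y) ** 2, axis=1) / (2 * sigma_meas ** 2)
    w = np.exp(loglik - loglik.max())
    w /= w.sum()
    errors.append(abs(np.sum(w * particles) - true_pos[t]))
    particles = particles[rng.choice(P, P, p=w)]                    # multinomial resampling

print("mean absolute tracking error (pixels): %.2f" % np.mean(errors))
```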

Motivated by possible improvements over the MACH filter-based likelihood estimation of the tracking algorithm, the application of deep learning models to detection and classification of compressively sensed images is explored. In tests, a Deep Boltzmann Machine trained on CS measurements outperforms a naive reconstruct-first approach.

Taken together, progress in these three areas of CS inference has the potential to lower system cost and improve performance, opening up new applications of CS video cameras.
Contributors: Braun, Henry Carlton (Author) / Turaga, Pavan K (Thesis advisor) / Spanias, Andreas S (Thesis advisor) / Tepedelenlioğlu, Cihan (Committee member) / Berisha, Visar (Committee member) / Arizona State University (Publisher)
Created: 2016