Matching Items (7)
Description
Cyber-physical systems (CPS) are emerging as the underpinning technology for major industries in the 21st century. This dissertation focuses on two fundamental issues in cyber-physical systems: network interdependence and information dynamics. It consists of two main thrusts.

The first thrust is targeted at understanding the impact of network interdependence. It is shown that a cyber-physical system built upon multiple interdependent networks is more vulnerable to attacks, since node failures in one network may result in failures in the other network, causing a cascade of failures that could potentially lead to the collapse of the entire infrastructure. There is thus a need to develop a new network science for modeling and quantifying cascading failures in multiple interdependent networks, and to develop network management algorithms that improve network robustness and ensure overall network reliability against cascading failures. To enhance system robustness, a "regular" allocation strategy is proposed that yields better resistance against cascading failures than all existing strategies considered. Furthermore, in view of the load redistribution feature in many physical infrastructure networks, e.g., power grids, a CPS model is developed in which the threshold model and the giant-connected-component model capture node failures in the physical infrastructure network and the cyber network, respectively.

The second thrust is centered around information dynamics in CPS. One speculation is that the interconnections across multiple networks can facilitate information diffusion, since information propagation in one network can trigger further spread in the other network. With this insight, a theoretical framework is developed to analyze information epidemics across multiple interconnecting networks. It is shown that the conjoining among networks can dramatically speed up message diffusion. Along a different avenue, many cyber-physical systems rely on wireless networks, which offer platforms for information exchange. To optimize the QoS of wireless networks, there is a need for high-throughput, low-complexity scheduling algorithms to control link dynamics. To that end, distributed link scheduling algorithms are explored for multi-hop MIMO networks, and two CSMA algorithms are devised under the continuous-time model and the discrete-time model, respectively.
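The cascading-failure dynamics described above are straightforward to simulate. Below is a minimal sketch assuming two networks coupled one-to-one, where a node survives a round only if it lies in the giant connected component of its own network and its counterpart is still alive; the Erdos-Renyi topologies and 40% attack fraction are illustrative choices, not the dissertation's exact models:

```python
import networkx as nx
import random

def giant_component_nodes(G):
    """Return the node set of G's largest connected component (empty if G is empty)."""
    if G.number_of_nodes() == 0:
        return set()
    return max(nx.connected_components(G), key=len)

def cascade(G_a, G_b, initial_failures):
    """Iterate failures between two one-to-one interdependent networks.

    Node i in G_a depends on node i in G_b and vice versa. A node survives
    a round only if it belongs to the giant component of its own network
    and its counterpart is still alive; iterate until a fixed point.
    """
    alive = set(G_a.nodes()) - set(initial_failures)
    while True:
        ga = giant_component_nodes(G_a.subgraph(alive))
        gb = giant_component_nodes(G_b.subgraph(alive))
        survivors = ga & gb
        if survivors == alive:
            return alive
        alive = survivors

n = 1000
G_a = nx.erdos_renyi_graph(n, 4.0 / n, seed=1)   # mean degree ~4
G_b = nx.erdos_renyi_graph(n, 4.0 / n, seed=2)
random.seed(0)
failed = random.sample(range(n), int(0.4 * n))   # attack 40% of nodes
print("surviving mutually connected nodes:", len(cascade(G_a, G_b, failed)))
```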
Contributors: Qian, Dajun (Author) / Zhang, Junshan (Thesis advisor) / Ying, Lei (Committee member) / Zhang, Yanchao (Committee member) / Cochran, Douglas (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Modern measurement schemes for linear dynamical systems are typically designed so that different sensors can be scheduled for use at each time step. To determine which sensors to use, various metrics have been suggested. One possible metric is the observability of the system. Observability is a binary condition determining whether a finite number of measurements suffice to recover the initial state. However, to employ observability for sensor scheduling, the binary definition needs to be expanded so that one can measure how observable a system is with a particular measurement scheme, i.e., one needs a metric of observability. Most methods utilizing an observability metric address sensor selection rather than sensor scheduling. In this dissertation, we present a new approach that utilizes observability for sensor scheduling by employing the condition number of the observability matrix as the metric and using column subset selection to create an algorithm that chooses which sensors to use at each time step. To this end, we use a rank-revealing QR factorization algorithm to select sensors. Several numerical experiments demonstrate the performance of the proposed scheme.
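The column-subset-selection step can be illustrated with standard numerical tools. The sketch below assumes a discrete-time linear system x_{t+1} = A x_t with candidate sensor rows C, and uses SciPy's pivoted QR as a rank-revealing factorization to rank (time, sensor) measurement pairs; it is a simplified stand-in for the scheduling algorithm developed in the dissertation:

```python
import numpy as np
from scipy.linalg import qr

def schedule_measurements(A, C, T, k):
    """Rank candidate (time, sensor) measurements with pivoted QR.

    Builds every candidate row c_i @ A^t for sensors i and steps t < T,
    then applies QR with column pivoting to the transpose, so the pivot
    order selects a well-conditioned row subset of the observability
    matrix. Returns the first k (time, sensor) pairs.
    """
    n, m = A.shape[0], C.shape[0]
    rows, labels = [], []
    Ak = np.eye(n)
    for t in range(T):
        for i in range(m):
            rows.append(C[i] @ Ak)
            labels.append((t, i))
        Ak = A @ Ak
    M = np.array(rows)                    # (T*m) x n candidate rows
    _, _, piv = qr(M.T, pivoting=True)    # pivot order ranks the rows
    return [labels[p] for p in piv[:k]]

# toy example: 4-state system, 3 candidate sensors, horizon of 5 steps
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) / 2
C = rng.standard_normal((3, 4))
print(schedule_measurements(A, C, T=5, k=4))
```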
Contributors: Ilkturk, Utku (Author) / Gelb, Anne (Thesis advisor) / Platte, Rodrigo (Thesis advisor) / Cochran, Douglas (Committee member) / Renaut, Rosemary (Committee member) / Armbruster, Dieter (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
The explosive growth of data generated from different services has opened a new vein of research commonly called "big data." The sheer volume of the information in this data has yielded new applications in a wide range of fields, but the difficulties inherent in processing the enormous amount of data, as well as the rate at which it is generated, also give rise to significant challenges. In particular, processing, modeling, and understanding the structure of online social networks is computationally difficult due to these challenges. The goal of this study is twofold: first, to present a new networked data processing framework to model this social structure, and second, to highlight the wireless networking gains made possible by using this social structure.

The first part of the dissertation considers a new method for modeling social networks via probabilistic graphical models. Specifically, this new method employs the t-cherry junction tree, a recent advancement in probabilistic graphical models, to develop a compact representation and good approximation of an otherwise intractable probabilistic model. There are several advantages to this approach: 1) the best approximation possible via junction trees belongs to the class of t-cherry junction trees; 2) constructing a t-cherry junction tree can be largely parallelized; and 3) inference can be performed using distributed computation. To improve the quality of approximation, an algorithm is developed to build a higher-order tree gracefully from an existing one, without constructing it from scratch. This approach is applied to Twitter data containing 100,000 nodes to study the problem of recommending connections to new users.
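For intuition, the second-order member of the t-cherry junction tree family reduces to the classical Chow-Liu tree: a maximum-weight spanning tree over pairwise mutual information. A minimal sketch of that base case on synthetic binary data (the higher-order construction and the parallel/distributed machinery are beyond this illustration):

```python
import numpy as np
import networkx as nx

def mutual_information(x, y):
    """Empirical mutual information between two binary 0/1 arrays."""
    mi = 0.0
    for a in (0, 1):
        for b in (0, 1):
            pxy = np.mean((x == a) & (y == b))
            px, py = np.mean(x == a), np.mean(y == b)
            if pxy > 0:
                mi += pxy * np.log(pxy / (px * py))
    return mi

def chow_liu_tree(data):
    """Max-spanning tree on pairwise MI: the order-2 t-cherry junction tree."""
    n_vars = data.shape[1]
    G = nx.Graph()
    for i in range(n_vars):
        for j in range(i + 1, n_vars):
            G.add_edge(i, j, weight=mutual_information(data[:, i], data[:, j]))
    return nx.maximum_spanning_tree(G)

rng = np.random.default_rng(0)
z = rng.integers(0, 2, 500)
noisy = lambda p: np.where(rng.random(500) < p, z, 1 - z)  # z with bit flips
data = np.column_stack([z, noisy(0.9), noisy(0.8), rng.integers(0, 2, 500)])
print(sorted(chow_liu_tree(data).edges()))  # correlated columns cluster around z
```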

Next, the t-cherry junction tree framework is extended by considering the impact of estimating the distributions involved from a training data set. Understanding this impact is vital to real-world applications, as distributions are not known perfectly but rather estimated from training data. First, the loss of fidelity in the t-cherry junction tree approximation due to this estimation is quantified. Then the scaling behavior, in terms of the size of the t-cherry junction tree, is approximated to show that higher-order t-cherry junction trees, which with perfect information are higher-fidelity approximations, may actually result in decreased fidelity due to the difficulty of accurately estimating higher-dimensional distributions. Finally, this part concludes by demonstrating these findings in a distributed detection scenario in which the sensors' measurements are correlated.

Having developed a framework to model social structure in online social networks, the study then highlights two approaches for utilizing this social network data in existing wireless communication networks. The first approach is a novel application: using social networks to enhance device-to-device wireless communication. It is well known that wireless communication can be significantly improved by utilizing relays to aid transmission. Rather than deploying dedicated relays, a system is designed in which users can relay traffic for other users if there is a shared social trust between them, e.g., they are "friends" on Facebook; for users that do not share social trust, a coalitional game framework motivates users to relay traffic for each other. This framework guarantees that all users improve their throughput via relaying, while ensuring that each user functions as a relay only if there is a social trust relationship or, absent social trust, a cycle of reciprocity is established in which a set of users agree to relay for one another (see the sketch below). This new system shows significant throughput gains in simulated networks that utilize real-world social network traces.
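The cycle-of-reciprocity idea can be made concrete with a directed graph: draw an edge from each user to a user whose relaying would benefit it, and any directed cycle among users without mutual social trust is a set that can agree to relay for one another. A toy sketch (the user names and benefit edges are hypothetical):

```python
import networkx as nx

# Directed "i benefits if j relays for i" graph among users without social
# trust (hypothetical throughput-gain relationships). Any directed cycle is
# a cycle of reciprocity: each member relays for the next one around.
benefit = nx.DiGraph()
benefit.add_edges_from([("a", "b"), ("b", "c"), ("c", "a"),  # a 3-cycle
                        ("d", "a")])                          # d is in no cycle
cycles = list(nx.simple_cycles(benefit))
print("reciprocity cycles:", cycles)  # one cycle containing a, b, c; none for d
```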

The second application of social structure to wireless communication is an approach to reducing congestion in cellular networks during peak times. This is achieved by two means: preloading and offloading. Preloading refers to the process of using social network data to predict user demand and serve some users early, before cellular network traffic peaks. Offloading allows users that have already obtained a copy of the content to opportunistically serve other users via device-to-device communication, thus eliminating the need for some cellular traffic. These two methods work especially well in tandem, as preloading creates a base of users that can serve later users via offloading. Together they can greatly reduce peak cellular traffic under ideal conditions; for more realistic situations, the impact of uncertainty in human mobility and in the social network structure is analyzed. Even with the randomness inherent in these processes, both preloading and offloading offer substantial improvements. Finally, potential difficulties in preloading multiple pieces of content simultaneously are highlighted, and a heuristic method is developed to address these challenges.
Contributors: Proulx, Brian (Author) / Zhang, Junshan (Thesis advisor) / Cochran, Douglas (Committee member) / Ying, Lei (Committee member) / Zhang, Yanchao (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
A principal goal of this dissertation is to study wireless network design and optimization with a focus on two perspectives: 1) socially-aware mobile networking and computing; and 2) security and privacy in wireless networking. Under this common theme, the dissertation is broadly organized into three parts.

The first part studies socially-aware mobile networking and computing. First, it studies random access control and power control under a social group utility maximization (SGUM) framework. The socially-aware Nash equilibria (SNEs) are derived and analyzed (a toy instance is sketched below). Then, it studies mobile crowdsensing under an incentive mechanism that exploits social trust assisted reciprocity (STAR). The efficacy of the STAR mechanism is thoroughly investigated. Next, it studies mobile users' data usage behaviors under the impact of social services and the wireless operator's pricing. Based on a two-stage Stackelberg game formulation, the user demand equilibrium (UDE) is analyzed in Stage II and the optimal pricing strategy is developed in Stage I. Last, it studies opportunistic cooperative networking under an optimal stopping framework with two-level decision-making. For both cases, with and without dedicated relays, the optimal relaying strategies are derived and analyzed.
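The SGUM idea admits a compact illustration. Assume a slotted-Aloha access game where user i's utility is u_i = log p_i + sum over j != i of log(1 - p_j), and each user maximizes its own utility plus a social-tie-weighted sum of the others' utilities; the first-order condition then gives a closed-form SNE. The payoff form and tie weights here are illustrative assumptions, not the dissertation's exact model:

```python
import numpy as np

def sgum_random_access_sne(S):
    """Socially-aware NE for slotted-Aloha access under an SGUM-style game.

    S is a symmetric matrix of social tie strengths in [0, 1] with zero
    diagonal. With u_i = log p_i + sum_{j != i} log(1 - p_j), user i
    maximizes u_i + sum_j S[i, j] * u_j; setting the derivative in p_i
    to zero yields the closed form p_i = 1 / (1 + sum_j S[i, j]).
    """
    return 1.0 / (1.0 + S.sum(axis=1))

# three users: 1 and 2 are close friends, 3 is a stranger
S = np.array([[0.0, 0.8, 0.0],
              [0.8, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
# friends back off to reduce mutual collisions; the tie-free user transmits
# at p = 1, recovering the inefficient purely selfish Aloha equilibrium
print(sgum_random_access_sne(S))
```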

The second part studies radar sensor network coverage for physical security. First, it studies the placement of bistatic radar (BR) sensor networks for barrier coverage. The optimality of line-based placement is analyzed, and the optimal placement of BRs on a line segment is characterized. Then, it studies radar sensor network coverage exploiting the Doppler effect. Based on a Doppler coverage model, an efficient method is devised to characterize Doppler-covered regions, and an algorithm is developed to find the minimum radar density required for Doppler coverage.
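Bistatic radar coverage is commonly modeled by a Cassini oval: a point is detectable by a transmitter-receiver (TX-RX) pair when the product of its distances to the two falls below a threshold set by the radar range. A minimal sketch of a barrier-coverage check under that standard model (the geometry, constant c, and crossing path are illustrative; the dissertation's placement-optimality results are analytical):

```python
import numpy as np

def br_covered(point, tx, rx, c):
    """Cassini-oval detectability test for one bistatic radar (BR) pair.

    A target at `point` is covered when the product of its distances to
    transmitter `tx` and receiver `rx` is at most c**2, where c sets the
    equivalent detection range of the TX-RX pair.
    """
    point, tx, rx = map(np.asarray, (point, tx, rx))
    return np.linalg.norm(point - tx) * np.linalg.norm(point - rx) <= c ** 2

def barrier_breached(path_points, pairs, c):
    """An intruder path breaches the barrier iff no point on it is covered."""
    return not any(br_covered(p, tx, rx, c)
                   for p in path_points for tx, rx in pairs)

# TXs and RXs placed on a line segment (the placement family analyzed above)
pairs = [((0.0, 0.0), (4.0, 0.0)), ((4.0, 0.0), (8.0, 0.0))]
crossing = [(2.0, y) for y in np.linspace(-3, 3, 61)]  # straight crossing path
print("breached:", barrier_breached(crossing, pairs, c=2.0))
```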

The third part studies cyber security and privacy in socially-aware networking and computing. First, it studies random access control, cooperative jamming, and spectrum access under an extended SGUM framework that incorporates negative social ties. The SNEs are derived and analyzed. Then, it studies pseudonym change for personalized location privacy under the SGUM framework. The SNEs are analyzed and an efficient algorithm is developed to find an SNE with desirable properties.
Contributors: Gong, Xiaowen (Author) / Zhang, Junshan (Thesis advisor) / Cochran, Douglas (Committee member) / Ying, Lei (Committee member) / Zhang, Yanchao (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
The data explosion of the past decade is due in part to the widespread use of rich sensors that measure various physical phenomena -- gyroscopes that measure orientation in phones and fitness devices, the Microsoft Kinect, which measures depth information, etc. A typical application requires inferring the underlying physical phenomenon from data, which is done using machine learning. A fundamental assumption in training models is that the data is Euclidean, i.e., the metric is the standard Euclidean distance governed by the L2 norm. In many cases, however, this assumption is violated, as when the data lies on non-Euclidean spaces such as Riemannian manifolds. While the underlying geometry accounts for the non-linearity, accurate analysis of human activity also requires temporal information to be taken into account. Human movement has a natural interpretation as a trajectory on the underlying feature manifold, as it evolves smoothly in time. A commonly occurring theme in many emerging problems is the need to represent, compare, and manipulate such trajectories in a manner that respects the geometric constraints.

This dissertation is a comprehensive treatise on modeling Riemannian trajectories to understand and exploit their statistical and dynamical properties. Such properties allow us to formulate novel representations for Riemannian trajectories. For example, the physical constraints on human movement are rarely considered, which results in an unnecessarily large space of features, making search, classification, and other applications more complicated. Exploiting statistical properties can help us understand the true space of such trajectories. In applications such as stroke rehabilitation, where there is a need to differentiate between very similar kinds of movement, dynamical properties can be much more effective. In this regard, we propose a generalization of the Lyapunov exponent to Riemannian manifolds and show its effectiveness for human activity analysis. The theory developed in this dissertation naturally leads to benefits in areas such as data mining, compression, dimensionality reduction, classification, and regression.
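As a toy illustration of a Lyapunov-style exponent on a Riemannian manifold, one can track the geodesic separation of two nearby trajectories on the unit sphere and fit the slope of its logarithm. The sketch below assumes a made-up nonlinear map on the sphere; the dissertation's actual generalization and feature manifolds are richer:

```python
import numpy as np

def geodesic_dist(x, y):
    """Great-circle (geodesic) distance between points on the unit sphere."""
    return np.arccos(np.clip(np.dot(x, y), -1.0, 1.0))

def step(v):
    """A simple nonlinear map on the sphere (illustrative dynamics only)."""
    v = v + 0.2 * np.cross(np.sin(3.0 * v), v)
    return v / np.linalg.norm(v)

def lyapunov_rate(x0, y0, n_steps=300):
    """Slope of log geodesic separation over time: a Lyapunov-style rate."""
    x, y, logs = x0, y0, []
    for _ in range(n_steps):
        x, y = step(x), step(y)
        logs.append(np.log(geodesic_dist(x, y) + 1e-15))
    return np.polyfit(np.arange(n_steps), logs, 1)[0]

x0 = np.array([1.0, 0.0, 0.0])
y0 = x0 + np.array([0.0, 1e-4, 0.0])
y0 /= np.linalg.norm(y0)                 # start a small geodesic distance apart
print(f"estimated divergence rate: {lyapunov_rate(x0, y0):.4f}")
```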
Contributors: Anirudh, Rushil (Author) / Turaga, Pavan (Thesis advisor) / Cochran, Douglas (Committee member) / Runger, George C. (Committee member) / Taylor, Thomas (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
Divergence functions are both highly useful and fundamental to many areas of information theory and machine learning, but they typically require either parametric approaches or prior knowledge of labels on the full data set. This paper presents a method to estimate the divergence between two data sets in the absence of fully labeled data. This semi-labeled case is common in domains where labeling data by hand is expensive or time-consuming, or where data sets are very large. The theory derived in this paper is demonstrated on a simulated example and then applied to a feature selection and classification problem from pathological speech analysis.
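Nonparametric estimators of this kind are often built on the Friedman-Rafsky statistic: pool the two samples, form the Euclidean minimum spanning tree, and count edges joining points from different samples; few cross edges indicate well-separated distributions. A minimal sketch using the Henze-Penrose/Dp-style normalization from Berisha and Hero's line of work (the semi-labeled extension developed in this thesis is not reproduced here):

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import cdist

def dp_divergence(X, Y):
    """MST-based divergence estimate between samples X and Y.

    Builds the Euclidean MST of the pooled data and counts edges joining
    an X point to a Y point; the estimate 1 - R(m+n)/(2mn) approaches 0
    for identical distributions and 1 for well-separated ones.
    """
    m, n = len(X), len(Y)
    Z = np.vstack([X, Y])
    mst = minimum_spanning_tree(cdist(Z, Z)).tocoo()
    cross = sum((i < m) != (j < m) for i, j in zip(mst.row, mst.col))
    return 1.0 - cross * (m + n) / (2.0 * m * n)

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2))
print(dp_divergence(X, rng.standard_normal((200, 2))))        # near 0
print(dp_divergence(X, rng.standard_normal((200, 2)) + 3.0))  # near 1
```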
Contributors: Gilton, Davis Leland (Author) / Berisha, Visar (Thesis director) / Cochran, Douglas (Committee member) / Electrical Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
The availability of data for monitoring and controlling the electrical grid has increased exponentially over the years, in both resolution and quantity, leaving a large data footprint. This dissertation is motivated by the need for equivalent representations of grid data in lower-dimensional feature spaces so that machine learning algorithms can be employed for a variety of purposes. To achieve this without sacrificing the interpretability of the results, the dissertation leverages the physics behind power systems, the well-known laws that underlie this man-made infrastructure, and the nature of the underlying stochastic phenomena that define the system operating conditions as the backbone for modeling data from the grid.

The first part of the dissertation introduces a new framework of graph signal processing (GSP) for the power grid, Grid-GSP, and applies it to voltage phasor measurements that characterize the overall system state of the power grid. Concepts from GSP are used in conjunction with known power system models to highlight the low-dimensional structure in the data and to present generative models for voltage phasor measurements. Applications in which these Grid-GSP-based generative models are used are explored, including identification of graphical communities, network inference, interpolation of missing data, detection of false data injection attacks, and data compression.
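Grid-GSP builds on standard GSP machinery, so a minimal sketch can convey the flavor: treat bus measurements as a signal on the grid graph and expand it in the Laplacian eigenbasis (the graph Fourier transform). The topology and signal below are synthetic stand-ins; Grid-GSP's actual graph shift operator is derived from power system models and is not reproduced here:

```python
import numpy as np
import networkx as nx

# toy topology standing in for a 14-bus connectivity graph (illustrative)
G = nx.connected_watts_strogatz_graph(14, 4, 0.3, seed=1)
L = nx.laplacian_matrix(G).toarray().astype(float)

def graph_fourier(L, signal):
    """GFT: project a bus signal onto the Laplacian eigenvectors.

    Low graph frequencies (small eigenvalues) capture components that vary
    smoothly across the grid -- the low-dimensional structure that GSP-style
    grid models exploit for compression and interpolation of phasor data.
    """
    eigvals, eigvecs = np.linalg.eigh(L)
    return eigvals, eigvecs.T @ signal

# synthesize a graph-smooth "voltage magnitude" signal by diffusing noise
rng = np.random.default_rng(0)
diffuse = np.linalg.matrix_power(np.eye(14) - 0.1 * L, 8)
voltage_mag = diffuse @ rng.standard_normal(14)
freqs, coeffs = graph_fourier(L, voltage_mag)
low = np.sum(coeffs[:4] ** 2) / np.sum(coeffs ** 2)
print(f"energy in the 4 lowest graph frequencies: {low:.2%}")
```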

The second part of the dissertation develops a model for a joint statistical description of solar photovoltaic (PV) power and the outdoor temperature, which can lead to better management of power generation resources so that electricity demand, such as air conditioning, and supply from solar power remain matched in the face of stochasticity. The low-rank structure inherent in solar PV power data is used for forecasting and to detect partial-shading faults in solar panels.
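The low-rank idea can be sketched with a truncated SVD: stack days of PV output as columns, approximate the matrix with a few leading singular components, and flag a day whose residual is large as a candidate partial-shading fault. The data, rank, and fault pattern below are illustrative assumptions:

```python
import numpy as np

def lowrank_residuals(M, rank):
    """Residual energy per column after the best rank-`rank` approximation."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    approx = U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank]
    return np.linalg.norm(M - approx, axis=0)

# synthetic PV matrix: rows = hours of day, columns = days (illustrative data)
rng = np.random.default_rng(0)
hours = np.arange(24)
clear_sky = np.clip(np.sin(np.pi * (hours - 6) / 12), 0, None)  # diurnal shape
days = np.outer(clear_sky, 0.7 + 0.3 * rng.random(30))          # scaled copies
days += 0.02 * rng.standard_normal(days.shape)
days[10:14, 7] *= 0.3          # day 7: midday dip mimicking partial shading
res = lowrank_residuals(days, rank=2)
print("most anomalous day:", int(np.argmax(res)))  # expect day 7
```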
Contributors: Ramakrishna, Raksha (Author) / Scaglione, Anna (Thesis advisor) / Cochran, Douglas (Committee member) / Spanias, Andreas (Committee member) / Vittal, Vijay (Committee member) / Zhang, Junshan (Committee member) / Arizona State University (Publisher)
Created: 2020