Description
Diffusion processes in networks can be used to model many real-world processes, such as the propagation of a rumor on social networks and cascading failures on power networks. Analyzing diffusion processes in networks helps answer important questions, such as the role and importance of each node in spreading the diffusion and how to stop or contain a cascading failure in the network. This dissertation consists of three parts.

In the first part, we study the problem of locating multiple diffusion sources in networks under the Susceptible-Infected-Recovered (SIR) model. Given a complete snapshot of the network, we developed a sample-path-based algorithm, named clustering and localization, and proved that for regular trees, the estimators produced by the proposed algorithm are within a constant distance from the real sources with high probability. We then considered the case in which only a partial snapshot is observed and proposed a new algorithm, named Optimal-Jordan-Cover (OJC). The algorithm first extracts a subgraph using a candidate-selection procedure that chooses source candidates based on the number of observed infected nodes in their neighborhoods. Then, in the extracted subgraph, OJC finds a set of nodes that "cover" all observed infected nodes with the minimum radius. This set, called the Jordan cover, is regarded as the set of diffusion sources. We proved that, with partial observations on the Erdos-Renyi (ER) random graph, OJC locates all sources with probability one asymptotically. Experiments on multiple networks show that our algorithms outperform existing methods.
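As a hedged illustration of the covering idea only (not the OJC algorithm itself, whose candidate selection and multi-source cover are more involved), the Python sketch below computes the single-source special case, a Jordan center: among candidate nodes, it returns the one whose maximum BFS distance to the observed infected nodes, i.e., its covering radius, is smallest. The adjacency-list representation and function names are illustrative assumptions.

```python
from collections import deque

def bfs_distances(adj, source):
    """Breadth-first search distances from `source` in an unweighted graph."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def jordan_center(adj, observed_infected, candidates=None):
    """Node minimizing the maximum distance to the observed infected nodes.

    This is the single-source special case of a Jordan cover: the chosen node
    "covers" all observed infections with the smallest possible radius.
    """
    candidates = list(adj) if candidates is None else candidates
    best_node, best_radius = None, float("inf")
    for c in candidates:
        dist = bfs_distances(adj, c)
        # Infection eccentricity: farthest observed infected node (inf if unreachable).
        radius = max(dist.get(v, float("inf")) for v in observed_infected)
        if radius < best_radius:
            best_node, best_radius = c, radius
    return best_node, best_radius

# Toy example: path graph 0-1-2-3-4 with infections observed at nodes 0 and 4.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(jordan_center(adj, observed_infected=[0, 4]))  # -> (2, 2)
```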

In the second part, we tackle the problem of reconstructing the diffusion history from partial observations. We formulated the diffusion-history reconstruction problem as a maximum a posteriori (MAP) problem and proved that it is NP-hard. We then proposed a step-by-step reconstruction algorithm that always produces a diffusion history consistent with the partial observations. Experimental results on synthetic and real networks show that the algorithm significantly outperforms several existing methods.

In the third part, we consider the problem of improving the robustness of an interdependent network by rewiring a small number of links during a cascading attack. We formulated the problem as a Markov decision process (MDP) problem. While the problem is NP-hard, we developed an effective and efficient algorithm, RealWire, to robustify the network and to mitigate the damage during the attack. Extensive experimental results show that our algorithm outperforms other algorithms on most of the robustness metrics.
Contributors: Chen, Zhen (Author) / Ying, Lei (Thesis advisor) / Tong, Hanghang (Thesis advisor) / Zhang, Junshan (Committee member) / He, Jingrui (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
This thesis investigates three different resource allocation problems, aiming to achieve two common goals: i) adaptivity to a fast-changing environment, and ii) distribution of the computation tasks to achieve a favorable solution. The motivation for this work lies in the modern-era proliferation of sensors and devices in the Data Acquisition Systems (DAS) layer of the Internet of Things (IoT) architecture. To avoid congestion and enable low-latency services, limits have to be imposed on the number of decisions that can be centralized (i.e., solved in the "cloud") and/or the amount of control information that devices can exchange. This has been the motivation to develop i) a lightweight PHY-layer protocol for time synchronization and scheduling in Wireless Sensor Networks (WSNs), ii) an adaptive receiver that enables sub-Nyquist sampling for efficient spectrum sensing at high frequencies, and iii) an SDN scheme for resource sharing across different technologies and operators, to harmoniously and holistically respond to fluctuations in demand at the eNodeB layer.

The proposed solution for time synchronization and scheduling is a new protocol, called PulseSS, which is completely event-driven and inspired by biological networks. The results on convergence and accuracy for locally connected networks, presented in this thesis, constitute the theoretical foundation for the protocol in terms of performance guarantees. The derived limits provided guidelines for ad-hoc solutions in the actual implementation of the protocol.
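The abstract does not specify PulseSS's update rule, so the sketch below should be read only as a hedged illustration of the kind of biologically inspired, event-driven dynamic it alludes to: a toy pulse-coupled model in which a firing node nudges its neighbors' phases forward, neighbors pushed over the threshold are absorbed into the same firing group, and groups that fire together keep firing together. The linear phase growth, the coupling constant, and the discrete-time stepping are all illustrative assumptions, not the PulseSS protocol.

```python
import random

def simulate_pco(adj, steps=20000, dt=1e-3, eps=0.15, seed=0):
    """Discrete-time sketch of pulse-coupled oscillators on a graph.

    Each node's phase grows linearly in [0, 1); when it reaches 1 the node
    "fires", resets to 0, and bumps each neighbor that has not yet fired in
    this step by `eps` (clipped at 1, which absorbs that neighbor into the
    same firing group).  Returns the final phases.
    """
    rng = random.Random(seed)
    phase = {v: rng.random() for v in adj}
    for _ in range(steps):
        for v in phase:
            phase[v] += dt
        firing = [v for v in phase if phase[v] >= 1.0]
        fired = set(firing)
        while firing:
            v = firing.pop()
            phase[v] = 0.0
            for u in adj[v]:
                if u not in fired:
                    phase[u] = min(1.0, phase[u] + eps)
                    if phase[u] >= 1.0:
                        firing.append(u)
                        fired.add(u)
    return phase

# Fully connected network of 6 nodes: nodes close in phase at firing times
# get absorbed into common firing groups, which then stay in lockstep.
full = {i: [j for j in range(6) if j != i] for i in range(6)}
print(simulate_pco(full))
```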

The proposed receiver for Compressive Spectrum Sensing (CSS) aims at tackling the noise-folding phenomenon, i.e., the accumulation of noise from different sub-bands that are folded, prior to sampling and baseband processing, when an analog front-end aliasing mixer is utilized. The sensing-phase design has been conducted via a utility-maximization approach, and the resulting scheme is called Cognitive Utility Maximization Multiple Access (CUMMA).

The framework described in the last part of the thesis is inspired by stochastic network optimization tools and dynamics; the scheme has been named Decomposition of Infrastructure-based Dynamic Resource Allocation (DIDRA). While convergence of the proposed approach remains an open problem, the numerical results presented here suggest the algorithm's ability to handle traffic fluctuations across operators while respecting different time and economic constraints.
Contributors: Ferrari, Lorenzo (Author) / Scaglione, Anna (Thesis advisor) / Bliss, Daniel (Committee member) / Ying, Lei (Committee member) / Reisslein, Martin (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Our ability to understand networks is important to many applications, from the analysis and modeling of biological networks to the analysis of social networks. Unveiling network dynamics allows us to make predictions and decisions. Moreover, network dynamics models have inspired new ideas for computational methods involving multi-agent cooperation, offering effective solutions for optimization tasks. This dissertation presents new theoretical results on network inference and multi-agent optimization in two parts.

The first part deals with modeling and identification of network dynamics. I study two types of network dynamics arising from social and gene networks. Based on these dynamics, the proposed network identification method works like a "network RADAR": interaction strengths between agents are inferred by injecting a "signal" into the network and observing the resulting reverberation. In social networks, this is accomplished by stubborn agents whose opinions do not change throughout a discussion; in gene networks, genes are suppressed to create desired perturbations. The steady states under these perturbations are characterized. In contrast to the common assumption of full-rank input, I adopt the weaker assumption of low-rank input, which better models empirical network data. Importantly, a network is proven to be identifiable from low-rank data whose rank grows in proportion to the network's sparsity. The proposed method is applied to synthetic and empirical data and is shown to offer superior performance compared to prior work. The second part is concerned with algorithms on networks. I develop three consensus-based algorithms for multi-agent optimization. The first is a decentralized Frank-Wolfe (DeFW) algorithm. The main advantage of DeFW lies in its projection-free nature: the costly projection step in traditional algorithms is replaced by a low-cost linear optimization step. I prove convergence rates of DeFW for convex and non-convex problems. I also develop two consensus-based alternating optimization algorithms, one for least-squares problems and one for non-convex problems. These algorithms exploit the problem structure for faster convergence, and their efficacy is demonstrated by numerical simulations.
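To make the projection-free idea concrete, here is a hedged, centralized Frank-Wolfe sketch for minimizing a smooth function over an L1 ball: the projection step of projected gradient methods is replaced by a linear-minimization oracle, which for the L1 ball is just a signed, scaled coordinate vector. The consensus-averaging step that makes DeFW decentralized, and the specific objectives studied in the dissertation, are not reproduced; the problem data below are illustrative.

```python
import numpy as np

def frank_wolfe_l1(grad, x0, radius=1.0, iters=200):
    """Frank-Wolfe over the L1 ball {x : ||x||_1 <= radius}.

    Instead of projecting, each step solves min_{s in ball} <grad(x), s>,
    whose solution is a vertex of the ball: -radius * sign(g_i) on the
    largest-magnitude gradient coordinate.
    """
    x = np.array(x0, dtype=float)
    for t in range(iters):
        g = grad(x)
        i = int(np.argmax(np.abs(g)))      # coordinate with largest gradient magnitude
        s = np.zeros_like(x)
        s[i] = -radius * np.sign(g[i])     # linear-minimization oracle output
        gamma = 2.0 / (t + 2.0)            # standard diminishing step size
        x = (1.0 - gamma) * x + gamma * s  # convex combination stays feasible
    return x

# Toy least-squares objective 0.5 * ||A x - b||^2 with a sparse ground truth.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
b = A @ np.array([1.0, -0.5] + [0.0] * 8)
x_hat = frank_wolfe_l1(lambda x: A.T @ (A @ x - b), x0=np.zeros(10), radius=1.5)
print(np.round(x_hat, 2))  # approximately recovers the sparse vector
```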

I conclude this dissertation by describing future research directions.
Contributors: Wai, Hoi To (Author) / Scaglione, Anna (Thesis advisor) / Berisha, Visar (Committee member) / Nedich, Angelia (Committee member) / Ying, Lei (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
This thesis introduces the background of QR decomposition and its applications. QR decomposition using Givens rotations is an efficient method that avoids direct matrix inversion when solving least-squares minimization problems, a typical approach for weight calculation in adaptive beamforming. Furthermore, this thesis introduces the Givens rotations algorithm and two general VLSI (very-large-scale integration) architectures for numerical QR decomposition, namely the triangular systolic array and the linear systolic array. To fulfill this goal, a four-input-channel triangular systolic array with a 16-bit fixed-point format and a five-input-channel linear systolic array are implemented on an FPGA (field-programmable gate array). Post-place-and-route static timing reports show that estimated clock frequencies of 65 MHz and 135 MHz can be achieved on a Xilinx Virtex-6 xc6vlx240t chip. In addition, this thesis proposes a new method to test the dynamic range of the QR decomposition (QR-D); both architectures achieve a dynamic range of around 110 dB.
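As a hedged software reference for the algorithm (not the fixed-point systolic-array hardware evaluated in the thesis), the sketch below computes A = QR with Givens rotations in floating point: each rotation acts on two rows and zeroes one subdiagonal entry of R while the orthogonal factor Q is accumulated.

```python
import numpy as np

def givens_qr(A):
    """QR decomposition A = Q @ R using Givens rotations (floating point)."""
    A = np.array(A, dtype=float)
    m, n = A.shape
    Q = np.eye(m)
    R = A.copy()
    for j in range(n):                     # zero column j below the diagonal
        for i in range(m - 1, j, -1):      # work upward from the bottom row
            a, b = R[i - 1, j], R[i, j]
            r = np.hypot(a, b)
            if r == 0.0:
                continue                   # entry already zero
            c, s = a / r, b / r
            G = np.eye(m)                  # rotation acting on rows i-1 and i
            G[i - 1, i - 1] = c
            G[i, i] = c
            G[i - 1, i] = s
            G[i, i - 1] = -s
            R = G @ R                      # zeroes R[i, j]
            Q = Q @ G.T                    # keeps the invariant A = Q @ R
    return Q, R

A = np.array([[4.0, 1.0], [2.0, 3.0], [1.0, 2.0]])
Q, R = givens_qr(A)
print(np.allclose(Q @ R, A), np.allclose(np.tril(R, -1), 0.0))  # True True
```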
Contributors: Yu, Hanguang (Author) / Bliss, Daniel W (Thesis advisor) / Ying, Lei (Committee member) / Chakrabarti, Chaitali (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
This dissertation studies scheduling in two stochastic networks: a co-located wireless network and an outpatient healthcare network, both of which have a cyclic planning horizon and a deadline-related performance metric.

For the co-located wireless network, a time-slotted system is considered. A cycle of the planning horizon is called a frame, which consists of a fixed number of time slots; the frame size is determined by the upper-layer applications. Packets with deadlines arrive at the beginning of each frame and are discarded if they miss their deadlines, which fall within the same frame. Each link of the network is associated with a quality-of-service constraint and an average transmit-power constraint. For this system, a MaxWeight-type problem whose solutions achieve throughput optimality is formulated. Since the computational complexity of solving the MaxWeight-type problem by exhaustive search is exponential even for a single-link system, a greedy algorithm with complexity O(n log n) is proposed, which is also throughput-optimal.
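The dissertation's MaxWeight-type formulation with quality-of-service and power constraints is not reproduced here. As a hedged illustration of an O(n log n) greedy over deadline-constrained packets within a frame, the sketch below assumes unit-length packets, one per slot, and selects a maximum-weight subset that can all meet their deadlines; this is the classic exchange-argument greedy, not the algorithm proved throughput-optimal in the dissertation.

```python
import heapq

def greedy_frame_schedule(packets):
    """Max-weight set of unit-length packets that can all meet their deadlines.

    `packets` is a list of (weight, deadline_slot) pairs, with deadlines counted
    in slots from the start of the frame.  Sweep packets by deadline, keep a
    min-heap of accepted weights, and evict the lightest accepted packet
    whenever a deadline would be violated.  Runs in O(n log n).
    """
    accepted = []  # min-heap of weights of accepted packets
    for weight, deadline in sorted(packets, key=lambda p: p[1]):
        heapq.heappush(accepted, weight)
        if len(accepted) > deadline:   # more packets than slots up to this deadline
            heapq.heappop(accepted)    # drop the lowest-weight one
    return sum(accepted), len(accepted)

# Frame with 3 slots; (weight, deadline) pairs.
packets = [(5.0, 1), (4.0, 1), (3.0, 2), (2.0, 3), (1.0, 3)]
print(greedy_frame_schedule(packets))  # -> (10.0, 3): keeps weights 5, 3 and 2
```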

The outpatient healthcare network is modeled as a discrete-time queueing network in which patients receive diagnosis and treatment planning involving collaboration among multiple service stations. For each patient, only the root (first) appointment can be scheduled, since the subsequent appointments evolve stochastically. The cyclic planning horizon is a week. The root appointment is optimized to maximize the proportion of patients who complete their care by a class-dependent deadline. In the optimization algorithm, the sojourn time of patients in the healthcare network is approximated with a doubly stochastic phase-type distribution. To address the computational intractability, a mean-field model with convergence guarantees is proposed. A linear-programming-based policy-improvement framework is developed, which can approximately solve the original large-scale stochastic optimization in queueing networks of realistic sizes.
Contributors: Liu, Yiqiu (Author) / Ying, Lei (Thesis advisor) / Shi, Pengyi (Committee member) / Wang, Weina (Committee member) / Zhang, Junshan (Committee member) / Zhang, Yanchao (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
This dissertation studies load balancing algorithms for many-server systems (with N servers) and focuses on their steady-state performance in the heavy-traffic regime. Stein's method and (iterative) state-space collapse (SSC) are used to analyze three load balancing systems: 1) load balancing in the Sub-Halfin-Whitt regime with exponential service times; 2) load balancing in the Beyond-Halfin-Whitt regime with exponential service times; and 3) load balancing in the Sub-Halfin-Whitt regime with Coxian-2 service times.

In the Sub-Halfin-Whitt regime, sufficient conditions are established such that any load balancing algorithm satisfying them achieves both asymptotically zero waiting time and zero waiting probability. Furthermore, the number of servers with more than one job is o(1); in other words, the system collapses to a one-dimensional space. The result is proven using Stein's method and state-space collapse (SSC), which are powerful mathematical tools for the steady-state analysis of load balancing algorithms. The second system operates in an even "heavier" traffic regime, and an iterative refinement procedure is proposed to obtain the steady-state metrics. Again, asymptotically zero delay and zero waiting probability are established for a set of load balancing algorithms. Unlike the first system, this system collapses to a two-dimensional state space rather than a one-dimensional one. The third system is more challenging because of the "non-monotonicity" introduced by Coxian-2 service times, and an iterative state-space collapse is proposed to tackle this challenge. For each of the three systems, a set of load balancing algorithms is identified under which the probability that an incoming job is routed to an idle server approaches one in steady state. This set includes join-the-shortest-queue (JSQ), idle-one-first (I1F), join-the-idle-queue (JIQ), and power-of-d-choices (Pod) with a carefully chosen d.
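The policies named above are standard, and their routing decisions are easy to state. The sketch below is a hedged illustration of those decisions on a vector of queue lengths; no heavy-traffic analysis is attempted, and the I1F rule shown (idle server first, then a server with one job, else uniformly at random) is a plain reading of the name and should be treated as an assumption.

```python
import random

def route(queues, policy, d=2, rng=random):
    """Index of the server chosen by a classic load balancing policy.

    queues : list of current queue lengths (jobs in service plus waiting)
    policy : "jsq"  join-the-shortest-queue over all servers
             "pod"  power-of-d-choices (shortest among d random probes)
             "jiq"  join-the-idle-queue, else a uniform random server
             "i1f"  idle-one-first: idle server, then a server with one job,
                    else a uniform random server (assumed reading of the name)
    """
    n = len(queues)
    if policy == "jsq":
        return min(range(n), key=lambda i: queues[i])
    if policy == "pod":
        probes = rng.sample(range(n), d)
        return min(probes, key=lambda i: queues[i])
    if policy in ("jiq", "i1f"):
        idle = [i for i in range(n) if queues[i] == 0]
        if idle:
            return rng.choice(idle)
        if policy == "i1f":
            one_job = [i for i in range(n) if queues[i] == 1]
            if one_job:
                return rng.choice(one_job)
        return rng.randrange(n)
    raise ValueError(f"unknown policy: {policy}")

queues = [2, 0, 1, 3, 0, 1]
rng = random.Random(0)
for p in ("jsq", "pod", "jiq", "i1f"):
    print(p, route(queues, p, rng=rng))
```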
Contributors: Liu, Xin (Author) / Ying, Lei (Thesis advisor) / Maguluri, Siva Theja (Committee member) / Wang, Weina (Committee member) / Zhang, Junshan (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
The first half of this dissertation introduces a minimum-cost incentive mechanism for collecting discrete distributed private data for big-data analysis. The goal of the incentive mechanism is to incentivize informative reports while ensuring that the randomization in the reported data does not exceed a target level. It answers two fundamental questions: what is the minimum payment required to incentivize an individual to submit data with quality level ε, and what incentive mechanisms can achieve that minimum payment? A lower bound on the minimum payment required to guarantee quality level ε is derived. Inspired by the lower bound, the proposed incentive mechanism (WINTALL) first decides a winning answer based on the reported data and then pays the individuals whose reported data match the winning answer. The expected payment of WINTALL matches the lower bound asymptotically. Real-world experiments on Amazon Mechanical Turk are presented to further illustrate the novelty of the principle behind WINTALL.
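Based only on the description above, here is a hedged sketch of the payment rule: pick a winning answer from the reported data (here, the majority report) and pay a fixed reward to individuals whose reports match it. The majority rule, the reward value, and the function names are illustrative assumptions; the actual WINTALL mechanism and its minimum payment are derived in the dissertation.

```python
from collections import Counter

def winner_takes_all_payments(reports, reward=1.0):
    """Pay `reward` to every individual whose report matches the winning answer.

    reports : dict mapping individual id -> reported (discrete) answer
    Returns (winning_answer, payments), where payments maps id -> payment.
    """
    counts = Counter(reports.values())
    winning_answer, _ = counts.most_common(1)[0]   # majority report wins
    payments = {i: (reward if r == winning_answer else 0.0)
                for i, r in reports.items()}
    return winning_answer, payments

reports = {"alice": "yes", "bob": "yes", "carol": "no", "dave": "yes"}
print(winner_takes_all_payments(reports, reward=0.5))
# -> ('yes', {'alice': 0.5, 'bob': 0.5, 'carol': 0.0, 'dave': 0.5})
```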

The second half studies the problem of iterative training in federated learning. A system with a single parameter server and M client devices is considered for training a predictive learning model with distributed data. The clients communicate with the parameter server over a common wireless channel, so only one device can transmit at a time. The training is an iterative process consisting of multiple rounds. Adaptive training is considered, where the parameter server decides when to stop or restart a round, so the problem is formulated as an optimal stopping problem. Since this optimal stopping problem is difficult to solve, a modified optimal stopping problem is proposed, and a low-complexity algorithm is introduced to solve the modified problem, which also works for the original one. Experiments on a real data set show significant improvements compared with policies that collect a fixed number of updates in each iteration.
Contributors: Jiang, Pengfei (Author) / Ying, Lei (Thesis advisor) / Zhang, Junshan (Committee member) / Zhang, Yanchao (Committee member) / Wang, Weina (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
This dissertation presents a novel algorithm for recovering missing values of co-evolving time series with partial embedded network information. The idea is to connect the two sources of data through a shared low-dimensional latent space. The proposed algorithm, named NetDyna, is an Expectation-Maximization algorithm that uses Kalman filtering and matrix factorization to infer the missing values in both the time series and the embedded network. Experimental results on real datasets, including a Motes dataset and a Motion Capture dataset, show that (1) NetDyna outperforms other state-of-the-art algorithms, especially with partially observed network information; (2) its computational complexity scales linearly with the time duration of the time series; and (3) the algorithm recovers the embedded network in addition to the missing time series values.

This dissertation also studies a load balancing algorithm, the so-called power-of-two-choices (Po2), for many-server systems (with N servers) and focuses on the convergence of the stationary distribution of Po2, in both the light- and heavy-traffic regimes, to the solution of the mean-field system. Stein's method and state-space collapse (SSC) are used to analyze both regimes.

In both regimes, the thesis first uses a state-space-collapse argument to show that the probability of the state being far from the mean-field solution is small. By a simple Markov inequality, this probability can indeed be made very small with a proper choice of parameters.

Then, for states close to the solution of the mean-field model, the thesis uses Stein's method to show that the stochastic system is close to a linear mean-field model. By characterizing the generator difference, the dominant terms in both regimes can be identified. In the heavy-traffic case, lower and upper bounds on a tridiagonal matrix arising from the linear mean-field model are also needed. The dominant term then yields the coefficient of the convergence rate.
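For readers unfamiliar with the generator-comparison step, the identity below is the standard Stein argument that underlies it, stated generically (it is an assumption that the thesis follows this usual convention; the specific bounds are its contribution). Let G be the generator of the stochastic system with stationary distribution π, L the generator of the (linear) mean-field model with equilibrium x*, h the performance metric, and f the solution of the Stein (Poisson) equation for L.

```latex
% Generic Stein identity behind the generator-comparison step.
% f solves the Stein (Poisson) equation for the mean-field generator L,
% and the stationary distribution \pi of the stochastic generator G
% satisfies E_\pi[G f(X)] = 0.  Combining the two:
\[
  L f(x) = h(x) - h(x^*), \qquad \mathbb{E}_\pi\!\left[ G f(X) \right] = 0
  \quad\Longrightarrow\quad
  \mathbb{E}_\pi\!\left[ h(X) \right] - h(x^*)
  = \mathbb{E}_\pi\!\left[ (L - G) f(X) \right].
\]
```

Bounding the right-hand side, the generator difference, is what produces the dominant terms and the convergence-rate coefficient described above.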

Finally, comparisons between the theoretical predictions and numerical simulations are presented.
Contributors: Hairi, FNU (Author) / Ying, Lei (Thesis advisor) / Wang, Weina (Committee member) / Zhang, Junshan (Committee member) / Zhang, Yanchao (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
Collision-free path planning is a major challenge in managing fleets of unmanned aerial vehicles (UAVs), especially in uncertain environments. This dissertation considers the design of UAV routing policies using multi-agent reinforcement learning and proposes a Multi-resolution, Multi-agent, Mean-field reinforcement learning algorithm, named 3M-RL, for flight planning, where multiple vehicles need to avoid collisions with each other while moving towards their destinations. In this system, each UAV makes decisions based on local observations and does not communicate with other UAVs. The algorithm trains a routing policy using an Actor-Critic neural network with multi-resolution observations, consisting of detailed local information and mean-field-based aggregated global information. The algorithm tackles the curse of dimensionality in multi-agent reinforcement learning and provides a scalable solution. The proposed algorithm is tested in complex scenarios in both 2D and 3D space, and the simulation results show that 3M-RL results in good routing policies. As a complement, dynamic data communication between UAVs and a control center is also studied, where the control center needs to monitor the safety state of each UAV in real time and the transition of risk levels is modeled as a Markov process. Given limited communication bandwidth, it is impossible for the control center to communicate with all UAVs at the same time. A dynamic learning problem with limited communication bandwidth is therefore formulated, where the objective is to minimize the total information entropy in real-time risk-level tracking. The simulations demonstrate that the algorithm outperforms baselines such as a round-robin policy.
Contributors: Wang, Weichang (Author) / Ying, Lei (Thesis advisor) / Liu, Yongming (Thesis advisor) / Zhang, Junshan (Committee member) / Zhang, Yanchao (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
The seminal work of Lasry and Lions showed the existence of Nash equilibria in the continuum limit of agents who try to optimize their own utility functions. However, much of the work in this area is predicated on strong assumptions about the asymptotic independence of the agents and their homogeneity. This work explores the existence of equilibria in the limit for Markov decision processes (MDPs) driven by density-dependent continuous-time Markov chains. Under suitable conditions, it is possible to show that the empirical measure of the agents converges in finite time to a time-invariant distribution, which makes the solution of the MDP tractable. This key step allows one to show not only the existence of equilibria for these MDPs without asymptotic independence but also a tractable means of finding said equilibria. Finally, this work shows that a fixed point does exist in the infinite-state limit. However, showing that such a limit is indeed a Nash equilibrium remains an open problem.
Contributors: Narasimha, Dheeraj (Author) / Ying, Lei (Thesis advisor) / Dasarathy, Gautam (Thesis advisor) / Liu, Yongmin (Committee member) / Shakkottai, Srinivas (Committee member) / Arizona State University (Publisher)
Created: 2021