Matching Items (5)

Description
The rapid advancement of wireless technology has instigated the broad deployment of wireless networks. Different types of networks have been developed, including wireless sensor networks, mobile ad hoc networks, wireless local area networks, and cellular networks. These networks have different structures and applications, and require different control algorithms. The focus of this thesis is to design scheduling and power control algorithms for wireless networks and to analyze their performance. We first study the multicast capacity of wireless ad hoc networks. Gupta and Kumar studied the scaling law of the unicast capacity of wireless ad hoc networks, deriving the order of the unicast throughput as the number of nodes in the network goes to infinity. In our work, we characterize the scaling of the multicast capacity of large-scale MANETs under a delay constraint D. We first derive an upper bound on the multicast throughput, and then establish a lower bound on the multicast capacity by proposing a joint coding-scheduling algorithm that achieves a throughput within a logarithmic factor of the upper bound. We then study the power control problem in wireless ad hoc networks: we propose a distributed power control algorithm based on the Gibbs sampler, and prove that the algorithm is throughput optimal. Finally, we consider scheduling in collocated wireless networks with flow-level dynamics. Specifically, we study the delay performance of a workload-based scheduling algorithm with shortest remaining processing time (SRPT) as the tie-breaking rule, and demonstrate its superior flow-level delay performance using simulations.
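As a rough illustration of the Gibbs-sampler idea in the power control part, the following Python sketch shows how a single node could randomize its next transmit power according to a Gibbs distribution over a local utility, keeping the other nodes' powers fixed. The candidate power levels, the utility function, and the temperature are illustrative assumptions, not the algorithm analyzed in the thesis.

```python
import math
import random

# Hypothetical sketch of Gibbs-sampler-based power selection at one node.
# POWER_LEVELS, local_utility(), and TEMPERATURE are placeholders, not the
# thesis construction.

POWER_LEVELS = [0.0, 0.1, 0.2, 0.5, 1.0]   # candidate transmit powers (W)
TEMPERATURE = 0.5                           # controls how random the exploration is

def local_utility(power, interference):
    """Toy utility: log-rate reward minus a linear energy cost."""
    sinr = power / (interference + 1e-3)
    return math.log(1.0 + sinr) - 0.2 * power

def gibbs_power_update(interference):
    """Sample the next power level from the Gibbs distribution exp(U(p)/T)."""
    weights = [math.exp(local_utility(p, interference) / TEMPERATURE)
               for p in POWER_LEVELS]
    r = random.uniform(0.0, sum(weights))
    acc = 0.0
    for p, w in zip(POWER_LEVELS, weights):
        acc += w
        if r <= acc:
            return p
    return POWER_LEVELS[-1]

print(gibbs_power_update(interference=0.05))
```

Lower temperatures concentrate the sampler on the highest-utility power level, while higher temperatures encourage exploration; that trade-off is typical of Gibbs-sampling-based control.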
Contributors: Zhou, Shan (Author) / Ying, Lei (Thesis advisor) / Zhang, Yanchao (Committee member) / Zhang, Junshan (Committee member) / Xue, Guoliang (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
With the proliferation of mobile computing and the Internet of Things (IoT), billions of mobile and IoT devices are connected to the Internet, generating enormous volumes of data at the network edge. Driven by this trend, there is an urgent need to push the artificial intelligence (AI) frontiers to the network edge to fully unleash the potential of edge big data. This dissertation comprehensively studies collaborative learning and optimization algorithms to build a foundation for edge intelligence. Under this common theme, the dissertation is organized into three parts. The first part focuses on model learning with limited data and limited computing capability at the network edge. A global model initialization is first obtained by running federated learning (FL) across many edge devices, based on which a semi-supervised algorithm is devised for an edge device to carry out quick adaptation, addressing the insufficiency of labeled data and learning a personalized model efficiently. The second part studies collaborative learning between the edge and the cloud to achieve real-time edge intelligence. More specifically, a distributionally robust optimization (DRO) approach is proposed to enable the synergy between local data processing and cloud knowledge transfer. Two uncertainty models for the cloud knowledge transfer are investigated: a distribution uncertainty set based on the cloud data distribution, and a prior distribution of the edge model conditioned on the cloud model. Collaborative learning algorithms are developed along these lines. The final part develops an offline, model-based safe inverse reinforcement learning (IRL) algorithm for connected autonomous vehicles (AVs). A reward penalty is introduced to penalize unsafe states, and a risk-measure-based approach is proposed to mitigate the model uncertainty introduced by offline training. Experimental results demonstrate the improvement of the proposed algorithm over existing baselines in terms of cumulative reward.
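The reward-penalty idea in the final part can be illustrated with a minimal sketch: whenever a state violates a safety rule, a fixed penalty is subtracted from the reward before learning. The safety test (a headway threshold) and the penalty weight below are hypothetical placeholders, not the construction used in the dissertation.

```python
# Hypothetical sketch of penalizing unsafe states for safe offline RL/IRL.
# PENALTY_WEIGHT and is_unsafe() are illustrative assumptions.

PENALTY_WEIGHT = 10.0

def is_unsafe(state):
    """Toy safety check: following distance (headway) below a threshold."""
    return state["headway_m"] < 5.0

def penalized_reward(reward, state):
    """Subtract a fixed penalty whenever the state violates the safety rule."""
    return reward - PENALTY_WEIGHT if is_unsafe(state) else reward

print(penalized_reward(1.0, {"headway_m": 3.2}))   # unsafe state -> -9.0
print(penalized_reward(1.0, {"headway_m": 12.0}))  # safe state   ->  1.0
```

In an offline setting the penalty weight trades off safety against reward; the risk-measure-based treatment of model uncertainty mentioned above is a separate mechanism and is not sketched here.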
Contributors: Zhang, Zhaofeng (Author) / Zhang, Junshan (Thesis advisor) / Zhang, Yanchao (Thesis advisor) / Dasarathy, Gautam (Committee member) / Fan, Deliang (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
Edge networks pose unique challenges for machine learning and network management. The primary objective of this dissertation is to study the deep learning and adaptive control aspects of edge networks and to address some of the unique challenges therein. The dissertation explores four problems at the intersection of edge intelligence, deep learning, and network management. The first problem concerns the learning of generative models in an edge setting. Since learning tasks in similar environments share model similarity, it is plausible to leverage pre-trained generative models from other edge nodes. Appealing to optimal transport theory tailored to Wasserstein-1 generative adversarial networks, this part develops a framework that systematically optimizes generative model learning using local data at the edge node while exploiting the adaptive coalescence of pre-trained generative models from other nodes. The second part considers a many-to-one wireless architecture for federated learning at the network edge, where multiple edge devices collaboratively train a model using local data. The unreliable nature of wireless connectivity, together with the constraints on computing resources at edge devices, dictates that the local updates at edge devices be carefully crafted and compressed to match the available wireless communication resources and work in concert with the receiver. Therefore, a stochastic gradient descent (SGD)-based bandlimited coordinate descent algorithm is designed for such settings. The third part explores adaptive traffic engineering algorithms in a dynamic network environment. The ages of traffic measurements exhibit significant variation due to asynchrony and random communication delays between routers and controllers. Inspired by the software-defined networking architecture, a controller-assisted distributed routing scheme with recursive link-weight reconfigurations, accounting for the impact of measurement ages and routing instability, is devised. The final part develops a federated-learning-based framework for reshaping the traffic of electric vehicle (EV) charging. The absence of private EV-owner information and the scattering of EV charging data among charging stations motivate a federated learning approach. Federated learning algorithms are devised to minimize peak EV charging demand both spatially and temporally, while maximizing charging station profit.
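To give a flavor of the bandwidth-limited update idea in the second part, the sketch below keeps only the k largest-magnitude gradient coordinates at each device before transmission and averages the sparse updates at the receiver. The choice of k, the model dimension, and the plain averaging rule are illustrative assumptions rather than the exact bandlimited coordinate descent algorithm developed in the dissertation.

```python
import numpy as np

# Illustrative top-k sparsification of local gradients so that each device's
# update fits a wireless budget; not the dissertation's exact algorithm.

def compress_gradient(grad, k):
    """Keep the k coordinates with largest magnitude; zero out the rest."""
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    sparse = np.zeros_like(grad)
    sparse[idx] = grad[idx]
    return sparse

def server_aggregate(sparse_grads):
    """Average the sparse updates received from the edge devices."""
    return np.mean(sparse_grads, axis=0)

rng = np.random.default_rng(0)
device_grads = [rng.normal(size=100) for _ in range(8)]   # 8 edge devices
update = server_aggregate([compress_gradient(g, k=10) for g in device_grads])
print(np.count_nonzero(update), "coordinates touched by the aggregated update")
```

Shrinking k reduces the per-round communication cost at the price of slower convergence, which is the trade-off any bandlimited scheme has to navigate.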
Contributors: Dedeoglu, Mehmet (Author) / Zhang, Junshan (Thesis advisor) / Kosut, Oliver (Committee member) / Zhang, Yanchao (Committee member) / Fan, Deliang (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
The commercial semiconductor industry is gearing up for 5G communications in the 28 GHz and higher bands. To maintain the same relative receiver sensitivity, a larger number of antenna elements is required, and this larger element count is, in turn, driving semiconductor development. The purpose of this paper is to introduce a new method of dividing wireless communication protocols (such as the 802.11a/b/g and cellular UMTS MAC protocols) across multiple unreliable communication links, using a new link-layer communication model in concert with a smart antenna aperture design referred to as Vector Antenna. A vector antenna is a 'smart' antenna system, and, as with any smart antenna aperture, the design inherently requires unique microwave component performance as well as digital signal processing (DSP) capabilities. This performance and these capabilities are further enhanced with a patented wireless protocol stack capability.
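A minimal sketch of the protocol-splitting idea follows, under assumed per-link loss rates and a simple round-robin striping rule (neither taken from the thesis): frames carry sequence numbers, are striped across the unreliable links, and are released in order at the receiver.

```python
import random

# Hypothetical illustration of striping link-layer frames across several
# unreliable links; the loss rates and the (omitted) retransmission policy
# are assumptions, not the thesis design.

LINK_LOSS = [0.05, 0.20, 0.10]   # per-link loss probabilities (illustrative)

def stripe_and_deliver(frames, rng=random.Random(0)):
    delivered = {}
    for seq, payload in enumerate(frames):
        link = seq % len(LINK_LOSS)              # round-robin striping
        if rng.random() >= LINK_LOSS[link]:      # frame survives this link
            delivered[seq] = payload
    # The receiver releases frames strictly in sequence order; a gap stalls
    # delivery until the missing frame would be retransmitted (not modeled).
    in_order = []
    for seq in range(len(frames)):
        if seq not in delivered:
            break
        in_order.append(delivered[seq])
    return in_order

print(stripe_and_deliver([f"frame-{i}" for i in range(10)]))
```

A real design would also need per-link scheduling and loss recovery, which is where the link-layer model and the vector antenna aperture described above come in.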
Contributors: James, Frank Lee (Author) / Reisslein, Martin (Thesis advisor) / Seeling, Patrick (Thesis advisor) / McGarry, Michael (Committee member) / Zhang, Yanchao (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
The purpose of this paper is to introduce a new method of dividing wireless communication protocols (such as the 802.11a/b/g and cellular UMTS MAC protocols) across multiple unreliable communication links (such as Ethernet). It introduces the hardware, software, and system architecture required to provide the basis for a wireless system (using the 802.11a/b/g and cellular protocols as a model) that can scale to support thousands of users simultaneously (say, in a large office building or large chain store) or in a small but very dense RF communication region. Elements of communication between a base station and a mobile station will be analyzed statistically to demonstrate higher throughput, fewer collisions, and lower bit error rates (BER) within the bandwidth defined by the 802.11n wireless specification (the use of MIMO channels will be evaluated). A new network nodal paradigm will be presented. Alternative link-layer communication techniques will be recommended and analyzed for their effect on mobile devices. The analysis will describe how the algorithms used by state machines implemented on mobile stations and Wi-Fi client devices will be influenced by the new base station transmission behavior. New hardware design techniques that can be used to optimize this architecture will be described, along with hardware design principles regarding the minimal hardware functional blocks required to support such a system design. Finally, hardware design and verification simulation techniques will be used to show that the hardware design accommodates an acceptable level of performance and meets the strict timing requirements of this new system architecture.
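As a baseline for the kind of statistical BER analysis described above, the following sketch runs a simple Monte Carlo simulation of BPSK over an AWGN channel. The modulation, channel model, and SNR points are generic textbook assumptions, not the 802.11n MIMO setup studied in the thesis.

```python
import numpy as np

# Illustrative Monte Carlo BER estimate for BPSK over AWGN; a toy baseline,
# not the thesis's 802.11n/MIMO analysis.

def simulate_ber(snr_db, n_bits=200_000, seed=1):
    rng = np.random.default_rng(seed)
    bits = rng.integers(0, 2, n_bits)
    symbols = 2 * bits - 1                        # BPSK mapping: 0 -> -1, 1 -> +1
    noise_std = np.sqrt(1.0 / (2 * 10 ** (snr_db / 10)))
    received = symbols + noise_std * rng.normal(size=n_bits)
    detected = (received > 0).astype(int)         # hard-decision detection
    return np.mean(detected != bits)

for snr in (0, 4, 8):
    print(f"Eb/N0 = {snr} dB -> estimated BER {simulate_ber(snr):.4f}")
```

A study like the one described above would replace this single AWGN link with the 802.11n MIMO channel and compare link-layer schemes on top of it.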
Contributors: James, Frank (Author) / Reisslein, Martin (Thesis advisor) / Ying, Lei (Committee member) / Zhang, Yanchao (Committee member) / Arizona State University (Publisher)
Created: 2014