Matching Items (10)
Description
Data centers connect a large number of servers, requiring IO and switches with low power and low delay. Virtualization of IO and networking is crucial for these servers, which run virtual processes for computing, storage, and applications. We propose using the PCI Express (PCIe) protocol and a new PCIe switch fabric for IO and switch virtualization. The switch fabric has little data buffering, allowing up to 512 physical 10 Gb/s PCIe 2.0 lanes to be connected via the fabric. The switch is scalable, with adapters running multiple adaptation protocols, such as Ethernet over PCIe, PCIe over Internet, or FibreChannel over Ethernet. Such adaptation protocols allow the integration of IO often required for disjoint datacenter applications such as storage and networking. The novel switch fabric, based on space-time carrier sensing, facilitates high-bandwidth, low-power, and low-delay multi-protocol switching. To achieve Terabit switching, both time (high transmission speed) and space (multi-stage interconnection network) technologies are required. In this paper, we present the design of a Clos-network multistage crossbar switch fabric with up to 256 lanes for PCIe systems. The switch core consists of 48 16x16 crossbar sub-switches. We also propose a new output contention resolution algorithm that uses an out-of-band Request-To-Send (RTS) / Clear-To-Send (CTS) protocol before sending PCIe packets through the switch fabric. Preliminary power and delay estimates are provided.
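The 48 sub-switch count follows from a symmetric three-stage Clos construction; the sketch below reproduces it under the assumption of a symmetric Clos built from the abstract's 16x16 crossbars (the stage sizing rule is standard Clos theory, not taken from the paper itself):

```python
# Sketch: sizing a symmetric three-stage Clos network of n x n crossbars.
# Assumes the design in the abstract: 16x16 sub-switches, 256 external lanes.

def clos_stage_counts(lanes: int, n: int) -> tuple[int, int, int]:
    """Return (ingress, middle, egress) crossbar counts for a symmetric
    three-stage Clos with n-port sub-switches serving `lanes` lanes."""
    assert lanes % n == 0
    r = lanes // n          # ingress (and egress) switches: n lanes each
    m = n                   # rearrangeably non-blocking needs m >= n middles
    return r, m, r

ingress, middle, egress = clos_stage_counts(lanes=256, n=16)
print(ingress + middle + egress)   # 16 + 16 + 16 = 48 sub-switches total
```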
Contributors: Luo, Haojun (Author) / Hui, Joseph (Thesis advisor) / Song, Hongjiang (Committee member) / Reisslein, Martin (Committee member) / Zhang, Yanchao (Committee member) / Arizona State University (Publisher)
Created: 2013

Description
LTE (Long Term Evolution) is an emerging technology that will change how service providers backhaul user traffic to their infrastructure over IP networks. To support growing mobile bandwidth demand, an EPON backhaul infrastructure will make real-time, high-bandwidth applications possible. LTE backhaul planning and deployment scenarios are important factors in network success. In this thesis, we study the effect of LTE backhaul traffic on the optical network, in an effort to make fiber and wireless networks interoperate. This project is based on traffic forecasts for LTE networks; traffic models are gathered from the literature and studied so as to reflect applications accurately. Careful capacity planning of the mobile backhaul brings a better experience for LTE users, in terms of the bit rates and latency they can expect, while allowing network operators to spend their funds effectively.
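Capacity planning of this kind reduces to dimensioning the EPON upstream against the forecast per-site load; the back-of-the-envelope sketch below is purely illustrative, with the line rate, DBA efficiency, and per-site busy-hour load all assumed placeholder values rather than figures from the thesis:

```python
# Sketch: first-order EPON backhaul dimensioning for LTE cell sites.
# All numbers are illustrative assumptions, not values from the thesis.

EPON_UPSTREAM_GBPS = 1.0        # standard 1G-EPON upstream line rate
DBA_EFFICIENCY = 0.85           # assumed usable fraction after DBA overhead

def sites_supported(busy_hour_mbps_per_site: float) -> int:
    """How many LTE sites one EPON can backhaul at the forecast load."""
    usable_mbps = EPON_UPSTREAM_GBPS * 1000 * DBA_EFFICIENCY
    return int(usable_mbps // busy_hour_mbps_per_site)

print(sites_supported(busy_hour_mbps_per_site=50.0))  # -> 17 sites
```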
Contributors: Alharbi, Ziyad (Author) / Reisslein, Martin (Thesis advisor) / Zhang, Yanchao (Committee member) / McGarry, Michael (Committee member) / Arizona State University (Publisher)
Created: 2014

Description
With internet traffic being bursty in nature, Dynamic Bandwidth Allocation (DBA) algorithms have always been very important for any broadband access network to utilize the available bandwidth efficiently. It is no different for Passive Optical Networks (PONs), which are fiber-optic networks at the physical layer of the TCP/IP stack or OSI model and which, in turn, increase the bandwidth available to the upper layers. The work in this thesis covers a general description of basic DBA schemes and the mathematical derivations that have been established in the research literature. We introduce a novel survey taxonomy that classifies DBA schemes based on their functionality; this perspective is useful in determining which scheme best suits a consumer's needs. We classify DBA schemes as Direct, Intelligent, or Predictive based on their computation method, and we qualitatively describe their delay and throughput bounds. We also describe a recently developed DBA scheme, Multi-Thread Polling (MTP), used in long-reach PONs (LRPONs), discuss its different viewpoints and issues, and consequently introduce a novel technique, Parallel Polling, that overcomes most of the issues faced in MTP and promises better delay performance for LRPONs.
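For readers unfamiliar with the basic grant-sizing rules such DBA schemes build on, here is a minimal sketch of the classic gated and limited service disciplines (the byte values are illustrative; the thesis surveys far more sophisticated variants):

```python
# Sketch: the two classic grant-sizing rules that basic "direct" DBA
# schemes build on; REPORTed queue sizes are in bytes. Illustrative only.

def gated_grant(reported_bytes: int) -> int:
    """Gated service: grant exactly what the ONU reported."""
    return reported_bytes

def limited_grant(reported_bytes: int, max_bytes: int) -> int:
    """Limited service: cap the grant to bound the polling cycle length."""
    return min(reported_bytes, max_bytes)

for report in (4_000, 40_000):
    print(gated_grant(report), limited_grant(report, max_bytes=15_500))
```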
Contributors: Mercian, Anu (Author) / Reisslein, Martin (Thesis advisor) / McGarry, Michael (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Zhang, Yanchao (Committee member) / Arizona State University (Publisher)
Created: 2012

Description
A new type of Ethernet switch based on a PCI Express switching fabric is presented. The switch leverages the PCI Express peer-to-peer communication protocol to implement high-performance Ethernet packet switching. The advantages and challenges of using PCI Express as the switching fabric are addressed. PCI Express is a high-speed, short-distance communication protocol largely used in motherboard-level interconnects. The total bandwidth of a PCI Express 3.0 link can reach as high as 256 gigabits per second (Gb/s) over 16 lanes. Concerns for PCI Express, such as buffer speed, address mapping, Quality of Service, and power consumption, need to be considered. An overview of the proposed Ethernet switch architecture is presented. The switch consists of a PCI Express switching fabric and multiple adaptor cards. The thesis reviews the peer-to-peer (P2P) communication protocol used in the switching fabric and discusses the packet routing procedure of the P2P protocol in detail. The Ethernet switch utilizes a portion of the Quality of Service features provided by PCI Express to ensure guaranteed transmission. The thesis presents a method of adapting Ethernet packets into PCI Express transaction layer packets. The adaptor card is divided into two parts: the receive path and the transmit path. Commercial off-the-shelf Media Access Control (MAC) and PCI Express endpoint cores are used in the adaptor. The output address lookup logic block is responsible for converting Ethernet MAC addresses to PCI Express port addresses. Different methods of providing Quality of Service in the adaptor card, including classification, flow control, and error detection in cooperation with the PCI Express switch, are discussed. The adaptor logic is implemented in the Verilog hardware description language, and functional simulation is conducted in ModelSim. The simulation results show that Ethernet packets are converted to the corresponding PCI Express transaction layer packets based on their destination MAC addresses, and the transaction layer packets are then converted back to Ethernet packets. A functionally correct FPGA logic design of the adaptor card is ready for implementation on a real FPGA development board.
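A small software model of the output address lookup step may help clarify the adaptation: the sketch below maps destination MAC addresses to PCIe ports, flooding broadcast and unknown addresses. The table entries and flooding policy are illustrative assumptions; the actual block is implemented in Verilog:

```python
# Sketch: software model of the adaptor's output address lookup block,
# which maps a destination Ethernet MAC address to a PCIe port. The
# real design implements this in Verilog; entries here are illustrative.

mac_to_pcie_port: dict[str, int] = {
    "aa:bb:cc:00:00:01": 1,
    "aa:bb:cc:00:00:02": 2,
}

BROADCAST = "ff:ff:ff:ff:ff:ff"
ALL_PORTS = [1, 2, 3]

def lookup(dst_mac: str) -> list[int]:
    """Return the PCIe port(s) a frame should be forwarded to."""
    if dst_mac == BROADCAST:
        return ALL_PORTS                  # replicate to every port
    port = mac_to_pcie_port.get(dst_mac)
    return [port] if port is not None else ALL_PORTS  # flood unknowns

print(lookup("aa:bb:cc:00:00:01"))   # [1]
print(lookup("aa:bb:cc:00:00:99"))   # flooded: [1, 2, 3]
```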
Contributors: Chen, Caiyi (Author) / Hui, Joseph (Thesis advisor) / Reisslein, Martin (Committee member) / Zhang, Yanchao (Committee member) / Arizona State University (Publisher)
Created: 2012

Description
Cyber systems, including the IoT (Internet of Things), are increasingly being used ubiquitously to vastly improve the efficiency and reduce the cost of critical application areas, such as finance, transportation, defense, and healthcare. Over the past two decades, computing efficiency and hardware cost have improved dramatically. These improvements have made cyber systems pervasive, controlling many aspects of human lives. Emerging trends in successful cyber-system breaches show increasing sophistication in attacks and attackers who are no longer limited by resources, including human and computing power. Most existing cyber defense systems for IoT have two major issues: (1) they do not incorporate human user behavior(s) and preferences in their approaches, and (2) they do not continuously learn from dynamic environments and effectively adapt to thwart sophisticated cyber-attacks. Consequently, the security solutions generated may not be usable or implementable by the user(s), drastically reducing their effectiveness.

In order to address these major issues, a comprehensive approach to securing ubiquitous smart devices in IoT environments by incorporating probabilistic human user behavioral inputs is presented. The approach includes techniques to (1) protect the controller device(s) [smart phone or tablet] by continuously learning and authenticating the legitimate user in the background based on touch-screen finger gestures, without requiring users to provide finger-gesture inputs intentionally for training purposes, and (2) efficiently configure IoT devices through the controller device(s), in conformance with the probabilistic human user behavior(s) and preferences, to effectively adapt IoT devices to the changing environment. The effectiveness of the approach is demonstrated with experiments based on collected user behavioral data and simulations.
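As a rough illustration of background continuous authentication, the sketch below scores each swipe against a profile learned from the legitimate user and decays a trust level on anomalies. The single speed feature, the z-score model, and the thresholds are all illustrative assumptions, not the dissertation's actual learning technique:

```python
# Sketch: continuous touch-gesture authentication as anomaly scoring.
# Features, model, and thresholds are illustrative assumptions; the
# dissertation's actual learning approach may differ.

from statistics import mean, stdev

class GestureProfile:
    def __init__(self, training_speeds: list[float]):
        # Learned in the background from the legitimate user's swipes.
        self.mu = mean(training_speeds)
        self.sigma = stdev(training_speeds)

    def score(self, swipe_speed: float) -> float:
        """Z-score distance from the user's typical swipe speed."""
        return abs(swipe_speed - self.mu) / self.sigma

profile = GestureProfile([1.1, 0.9, 1.0, 1.2, 0.95])  # px/ms, illustrative
trust = 1.0
for speed in (1.05, 1.1, 3.0):            # last swipe looks anomalous
    if profile.score(speed) > 2.0:
        trust *= 0.3                      # decay trust on anomalies
    if trust < 0.5:
        print("re-authenticate")          # challenge the user
```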
Contributors: Buduru, Arun Balaji (Author) / Yau, Sik-Sang (Thesis advisor) / Ahn, Gail-Joon (Committee member) / Davulcu, Hasan (Committee member) / Zhang, Yanchao (Committee member) / Arizona State University (Publisher)
Created: 2016

Description
The Internet of Things (IoT) is emerging as part of the infrastructure for advancing a large variety of applications involving connections of many intelligent devices, leading to smart communities. Due to the severe limitations of the computing resources of IoT devices, it is common to offload tasks of applications requiring substantial computing power to systems with sufficient resources, such as servers, cloud systems, and/or data centers. However, this offloading method suffers from both high latency and network congestion in IoT infrastructures.

Recently, edge computing has emerged to reduce the negative impacts of offloading tasks to remote computing systems. As edge computing is in close proximity to IoT devices, it can reduce both task-offloading latency and network congestion. Yet edge computing has its own drawbacks, such as the limited computing resources of some edge devices and the unbalanced loads among them. To effectively explore the potential of edge computing for supporting IoT applications, efficient task management and load balancing in edge computing networks are necessary.

In this dissertation research, an approach is presented for periodically distributing tasks within the edge computing network while satisfying the quality-of-service (QoS) requirements of the tasks, namely task-completion deadlines and security requirements. The approach aims to maximize the number of tasks that can be accommodated in the edge computing network, with consideration of the tasks' priorities. This goal is achieved through the joint optimization of computing resource allocation and network bandwidth provisioning. Evaluation results show that the approach increases the number of tasks accommodated in the edge computing network and improves the efficiency of resource utilization.
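One simple way to picture the task-accommodation objective is a priority-ordered greedy placement that respects compute, deadline, and security constraints. The sketch below is only a heuristic stand-in with illustrative fields and values; the dissertation solves the problem via joint optimization, not this heuristic:

```python
# Sketch: priority-ordered greedy admission of tasks onto edge nodes under
# compute, deadline, and security constraints. Fields and values are
# illustrative; the dissertation uses a joint optimization instead.

from dataclasses import dataclass

@dataclass
class Node:
    cpu_free: float          # available compute (normalized units)
    security_level: int      # 1 (low) .. 3 (high)
    latency_ms: float        # access latency to this node

@dataclass
class Task:
    priority: int
    cpu_need: float
    min_security: int
    deadline_ms: float

def admit(tasks: list[Task], nodes: list[Node]) -> int:
    placed = 0
    for t in sorted(tasks, key=lambda t: -t.priority):   # high priority first
        for n in nodes:
            if (n.cpu_free >= t.cpu_need
                    and n.security_level >= t.min_security
                    and n.latency_ms <= t.deadline_ms):
                n.cpu_free -= t.cpu_need
                placed += 1
                break
    return placed

nodes = [Node(2.0, 3, 10.0), Node(1.0, 1, 40.0)]
tasks = [Task(9, 1.5, 2, 20.0), Task(5, 1.0, 1, 50.0), Task(1, 1.0, 3, 100.0)]
print(admit(tasks, nodes))   # 2 of 3 tasks accommodated
```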
Contributors: Song, Yaozhong (Author) / Yau, Sik-Sang (Thesis advisor) / Huang, Dijiang (Committee member) / Sarjoughian, Hessam S. (Committee member) / Zhang, Yanchao (Committee member) / Arizona State University (Publisher)
Created: 2018

Description
A Fiber-Wireless (FiWi) network integrates a passive optical network (PON) with wireless mesh networks (WMNs) to provide high-speed backhaul via the PON while offering the flexibility and mobility of a WMN. Generally, increasing the size of a WMN leads to higher wireless interference and longer packet delays. The partitioning of a large WMN into several smaller WMN clusters, whereby each cluster is served by an Optical Network Unit (ONU) of the PON, is examined. Existing WMN throughput-delay analysis techniques, which consider the mean load of the nodes at a given hop distance from a gateway (ONU), are unsuitable for the heterogeneous nodal traffic loads arising from clustering. A simple analytical queuing model that considers the individual node loads to accurately characterize the throughput-delay performance of a clustered FiWi network is introduced, and its accuracy is verified through extensive simulations. The model is employed to examine the impact of the number of clusters on the network throughput-delay performance; it is found that, with sufficient PON bandwidth, clustering substantially improves the FiWi network's throughput-delay performance. Different traffic models and network designs are also studied to improve FiWi network performance.
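The key modeling idea, accounting for individual node loads rather than one mean load, can be illustrated with per-node M/M/1 queues. The sketch below uses assumed rates and a simpler queueing discipline than the thesis's actual model:

```python
# Sketch: mean delay across WMN nodes computed from individual node loads
# (one M/M/1 queue per node), versus the mean-load approximation.
# Rates are illustrative; the thesis's queuing model may differ in detail.

def mm1_delay(lam: float, mu: float) -> float:
    """Mean sojourn time of an M/M/1 queue; requires lam < mu."""
    return 1.0 / (mu - lam)

mu = 10.0                      # service rate, packets/ms (assumed)
loads = [1.0, 2.0, 8.0]        # heterogeneous per-node arrival rates

per_node = sum(mm1_delay(l, mu) for l in loads) / len(loads)
mean_load = mm1_delay(sum(loads) / len(loads), mu)

# The mean-load shortcut underestimates delay under heterogeneous loads.
print(f"individual loads: {per_node:.3f} ms, mean-load approx: {mean_load:.3f} ms")
```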
Contributors: Chen, Po-Yen (Author) / Reisslein, Martin (Thesis advisor) / Seeling, Patrick (Committee member) / Ying, Lei (Committee member) / Zhang, Yanchao (Committee member) / Arizona State University (Publisher)
Created: 2015

Description
The integration of passive optical networks (PONs) and wireless mesh networks (WMNs) into Fiber-Wireless (FiWi) networks has recently emerged as a promising strategy for providing flexible network services at relatively high transmission rates. This work investigates the effectiveness of localized routing that prioritizes transmissions over the local gateway to the optical network and avoids wireless packet transmissions in radio zones that contain neither the packet source nor its destination. Existing routing schemes for FiWi networks consider mainly hop-count and delay metrics over a flat WMN node topology and do not specifically prioritize the local network structure. The combination of clustered and localized routing (CluLoR) performs better in terms of throughput-delay than routing schemes based on minimum hop count that do not consider traffic localization. Subsequently, this work investigates packet delays when relatively low-rate traffic that has traversed a wireless network is mixed with conventional high-rate PON-only traffic. A range of FiWi network architectures with different dynamic bandwidth allocation (DBA) mechanisms is considered. The grouping of the optical network units (ONUs) in the double-phase polling (DPP) DBA mechanism in long-range (on the order of 100 km) FiWi networks is closely examined, and a novel grouping by cycle length (GCL) strategy that achieves favorable packet delay performance is introduced. Finally, this work proposes a novel backhaul network architecture based on a Smart Gateway (Sm-GW) placed between the small-cell base stations (e.g., LTE eNBs) and the conventional backhaul gateways, e.g., the LTE Serving/Packet Gateway (S/P-GW). The Sm-GW accommodates a flexible number of small cells while reducing the infrastructure requirements at the S-GW of the LTE backhaul. In contrast to existing methods, the proposed Sm-GW incorporates scheduling mechanisms to achieve network fairness while sharing resources among all connected small-cell base stations.
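The localized-routing preference at the heart of CluLoR can be sketched as a simple rule: stay inside the radio zone when source and destination share it, otherwise go up through the local gateway and over the PON rather than across foreign zones. Zone and gateway names below are illustrative:

```python
# Sketch: the localized-routing rule behind CluLoR. Zone membership and
# names are illustrative; the thesis evaluates this against hop-count routing.

zone_of = {"A1": "zone-A", "A2": "zone-A", "B1": "zone-B"}
gateway_of = {"zone-A": "ONU-A", "zone-B": "ONU-B"}

def route(src: str, dst: str) -> list[str]:
    if zone_of[src] == zone_of[dst]:
        return [src, dst]                       # stay inside the radio zone
    # Otherwise go up through the local gateway and over the PON,
    # avoiding wireless hops in zones that hold neither src nor dst.
    return [src, gateway_of[zone_of[src]], "OLT",
            gateway_of[zone_of[dst]], dst]

print(route("A1", "A2"))   # ['A1', 'A2']
print(route("A1", "B1"))   # ['A1', 'ONU-A', 'OLT', 'ONU-B', 'B1']
```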
Contributors: Dashti, Yousef (Author) / Reisslein, Martin (Thesis advisor) / Zhang, Yanchao (Committee member) / Fowler, John (Committee member) / Seeling, Patrick (Committee member) / Arizona State University (Publisher)
Created: 2016

Description
Access networks provide the backbone to the Internet, connecting end-users to the core network and thus forming the most important segment for connectivity. Access networks span multiple physical-layer media, ranging from fiber cables to DSL links and wireless nodes, creating practically used hybrid access networks. We explore the hybrid access network at the Medium Access Control (MAC) layer, which receives packets segregated as data and control packets, thus providing the needed decoupling of the data and control planes. We utilize the Software Defined Networking (SDN) principle of centralized processing with segregated data and control planes to further extend the usability of our algorithms. This dissertation introduces novel techniques in dynamic bandwidth allocation, control-message scheduling policy, flow control, and grouping to provide improved performance in hybrid Passive Optical Networks (PONs), such as PON-xDSL and FiWi. Finally, we study the different types of software-defined algorithms in access networks and describe the various open challenges and research directions.
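The control/data decoupling at the MAC layer can be pictured as two queues with strict priority for control messages; the discipline in the sketch below is an illustrative assumption, not the dissertation's specific scheduling policy:

```python
# Sketch: MAC-layer segregation of control and data packets with a
# strict-priority scheduler, mirroring the SDN-style decoupling described
# above. The strict-priority discipline is an illustrative assumption.

from collections import deque

control_q: deque[str] = deque()
data_q: deque[str] = deque()

def enqueue(packet: str, is_control: bool) -> None:
    (control_q if is_control else data_q).append(packet)

def dequeue() -> str | None:
    """Serve control-plane messages before any data-plane packet."""
    if control_q:
        return control_q.popleft()
    return data_q.popleft() if data_q else None

enqueue("REPORT", is_control=True)
enqueue("user-frame-1", is_control=False)
print(dequeue(), dequeue())   # REPORT user-frame-1
```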
Contributors: Mercian, Anu (Author) / Reisslein, Martin (Thesis advisor) / McGarry, Michael P. (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Zhang, Yanchao (Committee member) / Arizona State University (Publisher)
Created: 2015

Description
Emerging from years of research and development, the Internet of Things (IoT) has finally paved its way into our daily lives. From the smart home to Industry 4.0, IoT has been fundamentally transforming numerous domains with its unique superpower of interconnecting devices worldwide. However, the capability of IoT is largely constrained by the limited resources it can employ in various application scenarios, including computing power, network resources, dedicated hardware, etc. The situation is further exacerbated by the stringent quality-of-service (QoS) requirements of many IoT applications, such as delay, bandwidth, security, and reliability. This mismatch between resources and demands has greatly hindered the deployment and utilization of IoT services in many resource-intense and QoS-sensitive scenarios like autonomous driving and virtual reality.

I believe that the resource issue in IoT will persist in the near future due to technological, economic, and environmental factors. In this dissertation, I seek to address this issue by means of smart resource allocation. I propose mathematical models to formally describe the various resource constraints and application scenarios in IoT. Based on these, I design smart resource allocation algorithms and protocols to maximize system performance in the face of resource restrictions. Different aspects are tackled, including the networking, security, and economics of the entire IoT ecosystem. For different problems, different algorithmic solutions are devised, including optimal algorithms, provable approximation algorithms, and distributed protocols. The solutions are validated with rigorous theoretical analysis and/or extensive simulation experiments.
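As one small example of the provable-approximation flavor mentioned above, the sketch below greedily admits requests by value density under a single bandwidth budget, a knapsack-style relaxation. The numbers are illustrative, and the dissertation's actual algorithms and models differ per problem:

```python
# Sketch: greedy value-density admission under a bandwidth budget -- the
# knapsack-style relaxation that many provable approximation algorithms
# for QoS-constrained allocation start from. Numbers are illustrative.

def greedy_admit(requests: list[tuple[float, float]], budget: float) -> float:
    """requests: (value, bandwidth) pairs. Returns total admitted value."""
    total = 0.0
    for value, bw in sorted(requests, key=lambda r: r[0] / r[1], reverse=True):
        if bw <= budget:          # admit whenever the request still fits
            budget -= bw
            total += value
    return total

requests = [(10.0, 4.0), (7.0, 3.0), (6.0, 3.0)]   # (value, Mb/s)
print(greedy_admit(requests, budget=7.0))           # 17.0: admits 10 and 7
```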
Contributors: Yu, Ruozhou, Ph.D. (Author) / Xue, Guoliang (Thesis advisor) / Huang, Dijiang (Committee member) / Sen, Arunabha (Committee member) / Zhang, Yanchao (Committee member) / Arizona State University (Publisher)
Created: 2019