Matching Items (21)

Improving on 802.11: Streaming Audio and Quality of Service

Description

Ad hoc wireless networks present several interesting problems, one of which is Medium Access Control (MAC). Medium Access Control is the fundamental problem of deciding which node gets to transmit next. MAC protocols for ad hoc wireless networks must also be distributed, because the network is multi-hop. The 802.11 Wi-Fi protocol is often used in ad hoc networking. An alternative protocol, REACT, uses the metaphor of an auction to compute airtime allocations for each node, then realizes those allocations by adjusting the contention window parameter with a tuning protocol called SALT. 802.11 is inherently unfair because it returns the contention window to its minimum size after a successful transmission, while REACT's distributed auction lets nodes negotiate an allocation in which every node gets a fair portion of the airtime. A common application on such networks is audio streaming. Audio streams depend on good Quality of Service (QoS) metrics, such as delay and jitter, due to their real-time nature.
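
For intuition, the contrast between the two mechanisms can be sketched in a few lines of Python. This is only an illustration, not the published REACT/SALT algorithms: the mapping from an auction-won airtime share to a contention window (cw_from_airtime_share) is an assumed placeholder, while backoff_80211 mirrors the reset-on-success behaviour described above.

    CW_MIN, CW_MAX = 15, 1023

    def backoff_80211(cw, success):
        # 802.11-style update: reset to CW_MIN after a success, otherwise
        # roughly double (binary exponential backoff), capped at CW_MAX.
        if success:
            return CW_MIN
        return min(2 * (cw + 1) - 1, CW_MAX)

    def cw_from_airtime_share(share, cw_min=CW_MIN, cw_max=CW_MAX):
        # Illustrative SALT-like tuning: a larger auction-won airtime share
        # maps to a smaller (more aggressive) contention window.
        cw = int(cw_min / max(share, 1e-6))
        return max(cw_min, min(cw, cw_max))

    print(cw_from_airtime_share(0.25))      # 60: window tuned to a 25% airtime share
    print(backoff_80211(60, success=True))  # 15: the reset behind 802.11's unfairness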

Experiments were conducted to determine the performance of REACT/SALT compared to 802.11 in a streaming audio application on a physical wireless testbed, w-iLab.t. Four experiments were designed, using four different wireless node topologies, and QoS metrics were collected using Qosium. REACT performs better in these topologies when the mean value is calculated across each run. For the butterfly and star topologies, the variance was higher for REACT even though the mean was lower. In the hidden terminal and exposed node topologies, the performance of REACT was much better than 802.11 and converged more tightly, but occasionally had drops in quality.


Date Created
  • 2019-12

Scheduled medium access control in mobile ad hoc networks

Description

The primary function of the medium access control (MAC) protocol is managing access to a shared communication channel. From the viewpoint of transmitters, the MAC protocol determines each transmitter's persistence, the fraction of time it is permitted to spend transmitting. Schedule-based schemes implement stable persistences, achieving low variation in delay and throughput, and sometimes bounding maximum delay. However, they adapt slowly, if at all, to changes in the network. Contention-based schemes are agile, adapting quickly to changes in perceived contention, but suffer from short-term unfairness, large variations in packet delay, and poor performance at high load. The perfect MAC protocol, it seems, embodies the strengths of both contention- and schedule-based approaches while avoiding their weaknesses. This thesis culminates in the design of a Variable-Weight and Adaptive Topology Transparent (VWATT) MAC protocol. The design of VWATT first required answers for two questions: (1) If a node is equipped with schedules of different weights, which weight should it employ? (2) How is the node to compute the desired weight in a network lacking centralized control? The first question is answered by the Topology- and Load-Aware (TLA) allocation which defines target persistences that conform to both network topology and traffic load. Simulations show the TLA allocation to outperform IEEE 802.11, improving on the expectation and variation of delay, throughput, and drop rate. The second question is answered in the design of an Adaptive Topology- and Load-Aware Scheduled (ATLAS) MAC that computes the TLA allocation in a decentralized and adaptive manner. Simulation results show that ATLAS converges quickly on the TLA allocation, supporting highly dynamic networks. With these questions answered, a construction based on transversal designs is given for a variable-weight topology transparent schedule that allows nodes to dynamically and independently select weights to accommodate local topology and traffic load. The schedule maintains a guarantee on maximum delay when the maximum neighbourhood size is not too large. The schedule is integrated with the distributed computation of ATLAS to create VWATT. Simulations indicate that VWATT offers the stable performance characteristics of a scheduled MAC while adapting quickly to changes in topology and traffic load.
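
As a rough illustration of the persistence concept, the sketch below assigns each node a share of slot time proportional to its offered load within its contention neighbourhood and then transmits in each slot with that probability. The formula and the slotted loop are assumed simplifications for illustration only; the actual TLA allocation and the transversal-design schedules constructed in the thesis are considerably more involved.

    import random

    def load_proportional_persistence(own_load, neighbour_loads, cap=1.0):
        # Illustrative load-aware allocation: a node's persistence is its share
        # of the offered load in its contention neighbourhood, so persistences
        # of mutually interfering nodes sum to at most `cap`.
        total = own_load + sum(neighbour_loads)
        if total == 0:
            return 0.0
        return cap * own_load / total

    def run_slotted(persistence, n_slots=20, seed=1):
        # Transmit in each slot with probability equal to the persistence.
        rng = random.Random(seed)
        return [rng.random() < persistence for _ in range(n_slots)]

    p = load_proportional_persistence(own_load=4.0, neighbour_loads=[2.0, 2.0])
    print(p)                # 0.5: half of the contended airtime
    print(run_slotted(p))   # which slots this node would use in one frame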


Date Created
  • 2013

Infinite cacheflow: a rule-caching solution for software defined networks

Description

New OpenFlow switches support a wide range of network applications, such as firewalls, load balancers, routers, and traffic monitoring. While ternary content addressable memory (TCAM) allows switches to process packets at high speed based on multiple header fields, today's commodity switches support just thousands to tens of thousands of forwarding rules. To allow for finer-grained policies on this hardware, efficient ways are needed to support the abstraction of a switch with arbitrarily large rule tables. To do so, a hardware-software hybrid switch is designed that relies on rule caching to provide large rule tables at low cost. Unlike traditional caching solutions, individual rules are neither cached in isolation (to respect rule dependencies) nor compressed (to preserve the per-rule traffic counts). Instead, long dependency chains are "spliced" to cache smaller groups of rules while preserving the semantics of the network policy. The proposed hybrid switch design satisfies three criteria: (1) responsiveness, to allow rapid changes to the cache with minimal effect on traffic throughput; (2) transparency, to faithfully support native OpenFlow semantics; and (3) correctness, to cache rules while preserving the semantics of the original policy. The evaluation of the hybrid switch on large rule tables suggests that it can effectively expose the benefits of both hardware and software switches to the controller and to applications running on top of it.
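
A minimal sketch of the rule-dependency idea is given below, assuming single-field IPv4 prefix rules with priorities. The greedy selection and the decision to cache entire dependency chains are simplifications chosen for brevity; the thesis's splicing technique caches smaller groups instead.

    from ipaddress import ip_network

    # Each rule: (priority, IPv4 prefix it matches, packet count). Higher priority wins.
    def overlaps(p1, p2):
        n1, n2 = ip_network(p1), ip_network(p2)
        return n1.subnet_of(n2) or n2.subnet_of(n1)

    def dependencies(rules):
        # rule index -> higher-priority rules whose match overlaps it; caching a
        # rule without these would change which rule wins for some packets.
        return {i: {j for j, (pri2, pfx2, _) in enumerate(rules)
                    if pri2 > pri and overlaps(pfx, pfx2)}
                for i, (pri, pfx, _) in enumerate(rules)}

    def greedy_cache(rules, capacity):
        # Pull the heaviest-hit rules plus their whole dependency chains into the
        # TCAM while they fit. (The thesis "splices" long chains so only smaller
        # groups need caching; that step is omitted in this sketch.)
        deps, cached = dependencies(rules), set()
        for i in sorted(range(len(rules)), key=lambda i: -rules[i][2]):
            group = {i} | deps[i]
            if len(cached | group) <= capacity:
                cached |= group
        return cached

    rules = [(3, "10.0.1.0/24", 50), (2, "10.0.0.0/16", 900), (1, "0.0.0.0/0", 10)]
    print(greedy_cache(rules, capacity=2))   # {0, 1}: the heavy /16 rule and the /24 above it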


Date Created
  • 2014

Maximizing Routing Throughput with Applications to Delay Tolerant Networks

Description

Many applications require efficient data routing and dissemination in Delay Tolerant Networks (DTNs) in order to maximize the throughput of data in the network, such as providing healthcare to remote communities and spreading related information in Mobile Social Networks (MSNs). In this thesis, the feasibility of using boats in the Amazon Delta riverine region as data mule nodes is investigated, and a robust data routing algorithm based on a fountain code approach is designed to ensure fast and timely data delivery despite unpredictable boat delays, break-downs, and high transmission failure rates. Then, the scenario of providing healthcare in the Amazon Delta region is extended to a general All-or-Nothing (Splittable) Multicommodity Flow (ANF) problem, and a polynomial-time constant approximation algorithm based on a randomized rounding scheme is designed for the maximum throughput routing problem, with applications to DTNs. In an MSN, message content is closely related to users' preferences and can be used to significantly impact the performance of data dissemination. An interest- and content-based algorithm is developed in which the contents of the messages, along with the network structural information, are taken into consideration when making message relay decisions in order to maximize data throughput in an MSN. Extensive experiments show the effectiveness of the proposed data dissemination algorithm by comparing it with state-of-the-art techniques.
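
The fountain-code idea can be illustrated with a generic LT-style sketch: each transmitted droplet is the XOR of a random subset of source blocks, and a large enough set of droplets lets the receiver peel the original blocks back out, which is what makes the approach robust to unpredictable losses. The degree distribution and all parameters below are placeholders, not the thesis's design.

    import random

    def encode(blocks, n_droplets, seed=0):
        # LT-style sketch: each droplet is the XOR of a random subset of source
        # blocks, tagged with the subset's indices. (A real fountain code draws
        # subset sizes from a soliton distribution; uniform sizes here for brevity.)
        rng = random.Random(seed)
        droplets = []
        for _ in range(n_droplets):
            idx = set(rng.sample(range(len(blocks)), rng.randint(1, len(blocks))))
            data = 0
            for i in idx:
                data ^= blocks[i]
            droplets.append((idx, data))
        return droplets

    def decode(droplets, n_blocks):
        # Peeling decoder: repeatedly resolve droplets that cover exactly one
        # still-unknown block, XOR-ing out the blocks already recovered.
        known = {}
        progress = True
        while progress and len(known) < n_blocks:
            progress = False
            for idx, data in droplets:
                unknown = idx - set(known)
                if len(unknown) == 1:
                    i = unknown.pop()
                    for j in idx - {i}:
                        data ^= known[j]
                    known[i] = data
                    progress = True
        return [known.get(i) for i in range(n_blocks)]

    blocks = [0x11, 0x22, 0x33, 0x44]
    print(decode(encode(blocks, n_droplets=8), len(blocks)))   # None marks any block not recovered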


Date Created
  • 2018

Performance optimization of linux networking for latency-sensitive virtual systems

Description

Virtual machines and containers have steadily improved their performance over time as a result of innovations in their architecture and software ecosystems. Network functions and workloads are increasingly migrating to virtual environments, supported by developments in software defined networking (SDN) and network function virtualization (NFV). Previous performance analyses of virtual systems in this context often ignore significant performance gains that can be achieved with practical modifications to hypervisor and host systems. In this thesis, the network performance of containers and virtual machines is measured with standard network performance tools. The performance of these systems utilizing a standard 3.18.20 Linux kernel is compared to that of a real-time-tuned variant of the same kernel. This thesis motivates improving determinism in virtual systems with modifications to host and guest kernels and thoughtful process isolation. With the system modifications described, the median TCP bandwidth of KVM virtual machines over bridged network interfaces is increased by 10.8%, with a corresponding reduction in standard deviation of 87.6%. Docker containers see an 8.8% improvement in median bandwidth and a 4.4% reduction in standard deviation of TCP measurements using similar bridged networking. System tuning also reduces the standard deviation of TCP request/response latency (TCP RR) over bridged interfaces by 86.8% for virtual machines and 97.9% for containers. Hardware devices assigned to virtual systems also see reductions in variance, although not as noteworthy.
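
The reported figures are relative changes in the median and the standard deviation across repeated runs. The sketch below shows that arithmetic on hypothetical bandwidth samples; the numbers are invented for illustration and are not the thesis's measurements.

    import statistics

    def pct_change(before, after):
        # Relative change, in percent, from a baseline value.
        return 100.0 * (after - before) / before

    # Hypothetical bandwidth samples (Gbit/s) from repeated iperf-style runs.
    baseline = [8.1, 9.4, 7.2, 9.9, 6.8, 9.7, 7.5, 9.8]
    tuned    = [9.3, 9.5, 9.4, 9.6, 9.2, 9.5, 9.4, 9.6]

    print(pct_change(statistics.median(baseline), statistics.median(tuned)))  # positive: higher median
    print(pct_change(statistics.stdev(baseline), statistics.stdev(tuned)))    # negative: less run-to-run variation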


Date Created
  • 2015

A network function virtualization based load balancer for TCP

Description

A load balancer is an essential part of many network systems. A load balancer is capable of dividing and redistributing incoming network traffic to different back end servers, thus improving reliability and performance. Existing load balancing solutions can be classified into two categories: hardware-based or software-based. Hardware-based load balancing systems are hard to manage and force network administrators to scale up (replacing hardware with more powerful but expensive equipment) when the system cannot handle the growing traffic. Software-based solutions have a limitation when dealing with a single large TCP flow. In recent years, with the fast development of virtualization technology, a new trend of network function virtualization (NFV) is being adopted. Instead of using proprietary hardware, an NFV network infrastructure uses virtual machines to implement network functions such as load balancers, firewalls, etc. In this thesis, a new load balancing system is designed and evaluated. This system is high-performance and flexible. It can fully utilize the bandwidth between a load balancer and back end servers compared to traditional load balancers such as HAProxy. The experimental results show that using this NFV load balancer can yield n times better performance than HAProxy, where n is the number of back end servers. Also, an extract, transform and load (ETL) application was implemented to demonstrate that this load balancer can shorten data load time. The experiment shows that when loading a large data set (18.3 GB), this load balancer needs 28% less time than the traditional load balancer.
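
For context, the sketch below shows the kind of connection-level, user-space dispatching that a traditional software load balancer performs: each new TCP connection is pinned to one back end, round robin, and proxied. The addresses and port are placeholders, and the NFV data path proposed in the thesis is not reproduced here.

    import itertools
    import socket
    import threading

    # Placeholder back end pool; a real deployment would configure or discover these.
    BACKENDS = [("10.0.0.11", 8080), ("10.0.0.12", 8080), ("10.0.0.13", 8080)]
    next_backend = itertools.cycle(BACKENDS).__next__

    def pipe(src, dst):
        # Relay bytes in one direction until the sending side closes.
        try:
            while True:
                data = src.recv(4096)
                if not data:
                    break
                dst.sendall(data)
        finally:
            try:
                dst.shutdown(socket.SHUT_WR)
            except OSError:
                pass

    def handle(client):
        # Pin the new connection to one back end (round robin) and proxy both ways.
        backend = socket.create_connection(next_backend())
        threading.Thread(target=pipe, args=(client, backend), daemon=True).start()
        threading.Thread(target=pipe, args=(backend, client), daemon=True).start()

    def serve(port=8080):
        listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        listener.bind(("", port))
        listener.listen(128)
        while True:
            client, _ = listener.accept()
            handle(client)

    if __name__ == "__main__":
        serve()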


Date Created
  • 2015

Firewall rule set analysis and visualization

Description

A firewall is a necessary component for network security and, just like any regular equipment, it requires maintenance. To keep up with changing cyber security trends and threats, firewall rules are modified frequently. Over time such modifications increase the complexity, size and verbosity of firewall rules. As the rule set grows in size, adding and modifying rules becomes a tedious task. This discourages network administrators from reviewing the work done by previous administrators before and after applying any changes. As a result the quality and efficiency of the firewall go down.

Modification and addition of rules without knowledge of previous rules creates anomalies like shadowing and rule redundancy. Anomalous rule sets not only limit the efficiency of the firewall but in some cases create a hole in perimeter security. Detection of anomalies has been studied for a long time, and some well-established procedures have been implemented and tested, but they share a common problem: visualizing the results. When it comes to visualization of firewall anomalies, the results do not fit in traditional matrix, tree or sunburst representations.

This research targets the anomaly detection and visualization problem. It analyzes and represents firewall rule anomalies in innovative ways, such as hive plots and dynamic slices. Such graphical representations of rule anomalies are useful in understanding the state of a firewall, and they help network administrators find and fix the anomalous rules.
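
A minimal sketch of the two anomaly types mentioned above, assuming rules that match only on source and destination IPv4 prefixes: a later rule is shadowed when an earlier rule with a different action covers all of its traffic, and redundant when the covering rule has the same action. Real firewall anomaly taxonomies (including correlation and generalization) are richer than this.

    from ipaddress import ip_network

    # Simplified rules: (action, source prefix, destination prefix), in match order.
    RULES = [
        ("deny",  "10.0.0.0/8",  "0.0.0.0/0"),
        ("allow", "10.1.2.0/24", "0.0.0.0/0"),   # shadowed: rule 0 denies this traffic first
        ("deny",  "10.2.0.0/16", "0.0.0.0/0"),   # redundant: rule 0 already denies it
    ]

    def covers(earlier, later):
        # True if the earlier rule matches every packet the later rule matches.
        return all(ip_network(l).subnet_of(ip_network(e))
                   for e, l in zip(earlier[1:], later[1:]))

    def classify(rules):
        for i, later in enumerate(rules):
            for j, earlier in enumerate(rules[:i]):
                if covers(earlier, later):
                    kind = "redundant" if earlier[0] == later[0] else "shadowed"
                    yield i, j, kind
                    break

    print(list(classify(RULES)))   # [(1, 0, 'shadowed'), (2, 0, 'redundant')]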


Date Created
  • 2014

Comparing a commercial and an SDN-based load balancer in a campus network

Description

Commercial load balancers are often in use, and the production network at Arizona State University (ASU) is no exception. However, because the load balancer uses IP addresses, the solution does not apply to all applications. One such application is Rsyslog. This software processes syslog packets and stores them in files. The loss rate of incoming log packets is high due to the high incoming data rate, and the Rsyslog servers are overwhelmed by the continuous data stream. To solve this problem, a software defined networking (SDN) based load balancer is designed to perform transport-level load balancing of the incoming load across the Rsyslog servers. In this solution the load is forwarded to one Rsyslog server at a time, according to a Round-Robin, Random, or Load-Based policy. This gives the other servers time to process the data they have received and prevents them from being overwhelmed. The evaluation of the proposed solution is conducted on a physical testbed with the same data feed as the commercial solution. The results suggest that the SDN-based load balancer is competitive with the commercial load balancer. Replacing the software OpenFlow switch with a hardware switch is likely to further improve the results.
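
The three policies can be sketched as simple server-selection functions, as below. The server names and load metric are placeholders; in the actual system the SDN controller installs or rewrites OpenFlow forwarding rules to steer the syslog stream to the chosen server, rather than calling Python functions.

    import itertools
    import random

    SERVERS = ["rsyslog-1", "rsyslog-2", "rsyslog-3"]   # placeholder server names
    _rr = itertools.cycle(SERVERS)

    def round_robin(loads):
        # Rotate through the servers regardless of their current load.
        return next(_rr)

    def random_choice(loads):
        # Pick any server uniformly at random.
        return random.choice(SERVERS)

    def load_based(loads):
        # Send the next batch to the currently least-loaded server.
        return min(SERVERS, key=lambda s: loads.get(s, 0))

    # Example: the controller would periodically pick a target with one of these
    # policies and redirect all incoming syslog traffic to it.
    loads = {"rsyslog-1": 0.9, "rsyslog-2": 0.2, "rsyslog-3": 0.6}
    for policy in (round_robin, random_choice, load_based):
        print(policy.__name__, "->", policy(loads))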


Date Created
  • 2015

Fixed verse generation using neural word embeddings

Description

For the past three decades, the design of an effective strategy for generating poetry that matches a human's creative capabilities and complexities has been an elusive goal in artificial intelligence (AI) and natural language generation (NLG) research, and among linguistic creativity researchers in particular. This thesis presents a novel approach to fixed verse poetry generation using neural word embeddings. During the course of generation, a two-layered poetry classifier is developed. The first layer uses a lexicon-based method to classify poems into types based on form and structure, and the second layer uses a supervised classification method to classify poems into subtypes based on content, with an accuracy of 92%. The system then uses a two-layer neural network to generate poetry based on word similarities and word movements in a 50-dimensional vector space.
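
The word-similarity and word-movement operations can be illustrated with a small sketch over toy 50-dimensional vectors. The vocabulary and random embeddings below are placeholders; the thesis's generation pipeline, trained embeddings, and network are not reproduced.

    import numpy as np

    # Toy 50-dimensional embeddings; a real system would load trained vectors.
    rng = np.random.default_rng(0)
    vocab = ["moon", "night", "river", "stone", "light"]
    emb = {w: rng.normal(size=50) for w in vocab}

    def nearest(vec, exclude=()):
        # Return the vocabulary word whose embedding has the highest cosine
        # similarity to `vec`, skipping any excluded words.
        def cos(a, b):
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
        return max((w for w in vocab if w not in exclude), key=lambda w: cos(emb[w], vec))

    # "Word movement": apply the offset between two words to a third word, then
    # pick the closest remaining vocabulary entry as the next candidate word.
    offset = emb["night"] - emb["moon"]
    print(nearest(emb["river"] + offset, exclude={"river", "night", "moon"}))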

The verses generated by the system are evaluated using rhyme, rhythm, syllable counts and stress patterns. These computational features of language are considered for generating haikus, limericks and iambic pentameter verses. The generated poems are evaluated using a Turing test on both experts and non-experts. The user study finds that only 38% of the computer-generated poems were correctly identified by non-experts, while 65% of the computer-generated poems were correctly identified by experts. Although the system does not pass the Turing test, the results suggest an improvement of over 17% when compared to previous methods that use Turing tests to evaluate poetry generators.


Date Created
  • 2016

Analysis and visualization of OpenFlow rule conflicts

Description

In traditional networks the control and data plane are highly coupled, hindering development. With Software Defined Networking (SDN), the two planes are separated, allowing innovations on either one independently of the other. Here, the control plane is formed by the applications that specify an organization's policy and the data plane contains the forwarding logic. The application sends all commands to an SDN controller which then performs the requested action on behalf of the application. Generally, the requested action is a modification to the flow tables, present in the switches, to reflect a change in the organization's policy. There are a number of ways to control the network using the SDN principles, but the most widely used approach is OpenFlow.

With the applications now having direct access to the flow table entries, it is easy for inconsistencies to arise in the flow table rules. Since flow rules are structured similarly to firewall rules, the research done in analyzing and identifying firewall rule conflicts can be adapted to work with OpenFlow rules.
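
As an illustration of what such a conflict looks like, the sketch below checks whether two OpenFlow-style rules are conflict candidates, assuming exact-match-or-wildcard fields only (real OpenFlow matches also allow masks and prefixes). The field names and rule format are simplified placeholders, not OpenDaylight's data model.

    # Simplified OpenFlow-style matches: a missing field means it is wildcarded.
    FIELDS = ("in_port", "eth_type", "ipv4_src", "ipv4_dst", "tcp_dst")

    def fields_overlap(a, b):
        # Two exact-or-wildcard fields overlap unless both are set and differ.
        return a is None or b is None or a == b

    def rules_conflict(r1, r2):
        # Two rules are conflict candidates if some packet can match both and
        # their actions differ; priority then decides which one wins.
        m1, m2 = r1["match"], r2["match"]
        overlap = all(fields_overlap(m1.get(f), m2.get(f)) for f in FIELDS)
        return overlap and r1["action"] != r2["action"]

    r1 = {"match": {"ipv4_dst": "10.0.0.5", "tcp_dst": 80}, "action": "forward:1", "priority": 10}
    r2 = {"match": {"ipv4_dst": "10.0.0.5"}, "action": "drop", "priority": 20}
    print(rules_conflict(r1, r2))   # True: the higher-priority drop rule shadows r1's traffic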

The main work of this thesis is to implement flow conflict detection logic in OpenDaylight and inspect the applicability of techniques in visualizing the conflicts. A hierarchical edge-bundling technique coupled with a Reingold-Tilford tree is employed to present the relationship between the conflicting rules. Additionally, a table-driven approach is also implemented to display the details of each flow.

Both types of visualization are then tested for correctness by providing them with flows which are known to have conflicts. The conflicts were identified properly and displayed by the views.


Date Created
  • 2016