Matching Items (23)
Description
Complex systems are pervasive in science and engineering. Some examples include complex engineered networks such as the internet, the power grid, and transportation networks. The complexity of such systems arises not just from their size, but also from their structure, operation (including control and management), evolution over time, and the fact that people are involved in their design and operation. Our understanding of such systems is limited because their behaviour cannot be characterized using traditional techniques of modelling and analysis.

As a step in model development, statistically designed screening experiments may be used to identify the main effects and interactions that most significantly affect a response of a system. However, traditional approaches to screening are ineffective for complex systems because of the size of the experimental design. Consequently, the factors considered are often restricted, but this automatically restricts the interactions that may be identified as well. Alternatively, the designs are restricted to identify only main effects, but this fails to consider any possible interactions among the factors.

To address this problem, a specific combinatorial design termed a locating array is proposed as a screening design for complex systems. Locating arrays exhibit logarithmic growth in the number of factors because their focus is on identification rather than on measurement. This makes practical the consideration of an order of magnitude more factors in experimentation than traditional screening designs.
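The scaling argument can be made concrete with a small sketch. The factor counts, levels, and the constant below are hypothetical illustrations, not a locating array construction: they only contrast exponential full-factorial growth with the logarithmic growth described above.

```python
import math

def full_factorial_points(levels_per_factor):
    """Number of design points in a full-factorial design:
    the product of the number of levels of every factor."""
    points = 1
    for levels in levels_per_factor:
        points *= levels
    return points

def locating_array_points_estimate(num_factors, c=60):
    """Hypothetical illustration only: locating array sizes grow
    roughly logarithmically in the number of factors k, so this
    sketch models the row count as c * log2(k) for an arbitrary
    constant c; it is not a construction of a locating array."""
    return math.ceil(c * math.log2(num_factors))

# 75 factors at 3 levels each: the full factorial is astronomically
# large, while a logarithmic-size design stays in the hundreds of rows.
levels = [3] * 75
print(full_factorial_points(levels))          # 3**75, far beyond feasibility
print(locating_array_points_estimate(75))
```

The point of the sketch is only the shape of the two curves: multiplying levels explodes exponentially in the number of factors, while a design whose size grows logarithmically stays practical even at an order of magnitude more factors.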

As a proof of concept, a locating array is applied to screen for main effects and low-order interactions on the response of average Transmission Control Protocol (TCP) throughput in a simulation model of a mobile ad hoc network (MANET). A MANET is a collection of mobile wireless nodes that self-organize without the aid of any centralized control or fixed infrastructure. The full-factorial design for the MANET considered is infeasible (with over 10^{43} design points), yet a locating array has only 421 design points.

In conjunction with the locating array, a "heavy hitters" algorithm is developed to identify the influential main effects and two-way interactions, correcting for the non-normal distribution of the average throughput and the uneven coverage of terms in the locating array. The significance of the identified main effects and interactions is validated independently using the statistical software JMP.
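A minimal sketch of the heavy-hitters idea follows, assuming a rank transform for the skewed response and coverage-averaged scoring; these are simplifications for illustration, not the dissertation's actual algorithm.

```python
import itertools
from collections import defaultdict

def rank_transform(responses):
    """Replace raw responses by their ranks, a simple guard against a
    heavily skewed (non-normal) response such as average throughput."""
    order = sorted(range(len(responses)), key=lambda i: responses[i])
    ranks = [0.0] * len(responses)
    for r, i in enumerate(order):
        ranks[i] = r + 1.0
    return ranks

def heavy_hitters(design, responses, top=5):
    """Illustrative sketch only: score every main effect (factor, level)
    and every two-way interaction by how far the mean rank of the runs
    covering it sits from the overall mean rank; averaging within each
    term compensates for terms covered by different numbers of runs."""
    ranks = rank_transform(responses)
    overall = sum(ranks) / len(ranks)
    cover = defaultdict(list)          # term -> ranks of runs covering it
    k = len(design[0])                 # number of factors
    for run, r in zip(design, ranks):
        for f in range(k):
            cover[((f, run[f]),)].append(r)
        for f, g in itertools.combinations(range(k), 2):
            cover[((f, run[f]), (g, run[g]))].append(r)
    score = {t: abs(sum(rs) / len(rs) - overall) for t, rs in cover.items()}
    return sorted(score, key=score.get, reverse=True)[:top]

# Toy 2-factor design in which factor 0 drives the response.
design = [(0, 0), (0, 1), (1, 0), (1, 1)]
responses = [1.0, 1.1, 9.0, 9.5]
print(heavy_hitters(design, responses, top=3))
```

Each term is a tuple of (factor, level) pairs, so main effects and two-way interactions are scored uniformly and the largest deviations surface first.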

The statistical characteristics used to evaluate traditional screening designs are also applied to locating arrays. These include the covariance matrix, fraction of design space, and aliasing, among others. The results lend additional support to the use of locating arrays as screening designs.

The use of locating arrays as screening designs for complex engineered systems is promising as they yield useful models. This facilitates quantitative evaluation of architectures and protocols and contributes to our understanding of complex engineered networks.
ContributorsAldaco-Gastelum, Abraham Netzahualcoyotl (Author) / Syrotiuk, Violet R. (Thesis advisor) / Colbourn, Charles J. (Committee member) / Sen, Arunabha (Committee member) / Montgomery, Douglas C. (Committee member) / Arizona State University (Publisher)
Created2015
Description
Ad hoc wireless networks present several interesting problems, one of which is Medium Access Control (MAC). Medium access control is the fundamental problem of deciding which node gets to transmit next. MAC protocols for ad hoc wireless networks must also be distributed, because the network is multi-hop. The 802.11 Wi-Fi protocol is often used in ad hoc networking. An alternative protocol, REACT, uses the metaphor of an auction to compute airtime allocations for each node, then realizes those allocations by tuning the contention window parameter using a tuning protocol called SALT. 802.11 is inherently unfair because it returns the contention window to its minimum size after a successful transmission, while REACT's distributed auction allows nodes to negotiate an allocation in which all nodes get a fair portion of the airtime. A common application in such networks is audio streaming. Audio streams depend on good Quality of Service (QoS) metrics, such as delay and jitter, due to their real-time nature.
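The fairness contrast can be sketched as follows. The contention logic below is a simplified model assuming idealized slots and no carrier sensing, and `backoff_tuned` only gestures at the SALT idea of holding the window near a negotiated target; it is not the published protocol.

```python
import random

CW_MIN, CW_MAX = 15, 1023

def backoff_80211(cw, success):
    """802.11-style contention window update: collapse to CW_MIN after a
    success, double (up to CW_MAX) after a failure. The reset to CW_MIN
    is the source of the short-term unfairness described above."""
    if success:
        return CW_MIN
    return min(2 * (cw + 1) - 1, CW_MAX)

def backoff_tuned(cw, success, target_cw):
    """SALT-like idea (sketch only): hold the contention window near a
    negotiated target rather than resetting, so all nodes draw their
    backoffs from comparable ranges regardless of recent success."""
    return target_cw

def contend(cws):
    """One idealized contention round: each node draws a uniform backoff
    slot from [0, cw]; the unique smallest draw wins, ties collide."""
    draws = [random.randint(0, cw) for cw in cws]
    lo = min(draws)
    winners = [i for i, d in enumerate(draws) if d == lo]
    return winners[0] if len(winners) == 1 else None

# After a success, node A sits at CW_MIN while a node that has collided
# repeatedly holds a much larger window; A is heavily favoured to win again.
random.seed(0)
rounds = 10_000
a_wins = sum(1 for _ in range(rounds) if contend([CW_MIN, 255]) == 0)
print(a_wins / rounds)   # well above 0.5
```

With both nodes held at the same tuned window the draws come from identical ranges, which is the intuition behind negotiating a fair allocation instead of resetting after every success.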

Experiments were conducted to determine the performance of REACT/SALT compared to 802.11 in a streaming audio application on a physical wireless testbed, w-iLab.t. Four experiments were designed, using four different wireless node topologies, and QoS metrics were collected using Qosium. REACT performs better in these topologies when the mean value is calculated across each run. For the butterfly and star topologies, the variance was higher for REACT even though the mean was lower. In the hidden terminal and exposed node topologies, the performance of REACT was much better than that of 802.11 and converged more tightly, but it had occasional drops in quality.
ContributorsKulenkamp, Daniel (Author) / Syrotiuk, Violet R. (Thesis director) / Colbourn, Charles J. (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2019-12
Description
Ethernet-based technologies are emerging as the ubiquitous de facto form of communication due to their interoperability, capacity, cost, and reliability. Traditional Ethernet is designed with the goal of delivering best-effort service. However, several real-time and control applications require precise deterministic guarantees and Ultra Low Latency (ULL) that traditional Ethernet cannot provide. Current Industrial Automation and Control Systems (IACS) applications use semi-proprietary technologies that provide deterministic communication behavior for sporadic and periodic traffic, but can lead to closed systems that do not interoperate effectively. The convergence between the informational and operational technologies in modern industrial control networks cannot be achieved using traditional Ethernet. Time Sensitive Networking (TSN) is a suite of IEEE standards that augment traditional Ethernet with real-time deterministic properties, ideal for Digital Signal Processing (DSP) applications. Similarly, Deterministic Networking (DetNet) is an Internet Engineering Task Force (IETF) standard that enhances the network layer with the deterministic properties needed for IACS applications. This dissertation provides an in-depth survey and literature review of both standards and research on ULL, including related 5G material. Recognizing the limitations of several features of the standards, this dissertation provides an empirical evaluation of these approaches and presents novel enhancements to the shapers and schedulers involved in TSN. More specifically, this dissertation investigates the Time Aware Shaper (TAS), the Asynchronous Traffic Shaper (ATS), and Cyclic Queuing and Forwarding (CQF) schedulers. Moreover, IEEE 802.1Qcc centralized management and control, together with IEEE 802.1Qbv, can be used to manage and control scheduled traffic streams with periodic properties alongside best-effort traffic on the same network infrastructure.
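As one illustration of the mechanism a TAS uses to realize IEEE 802.1Qbv scheduling, the sketch below evaluates a cyclic gate control list. The schedule values are hypothetical, and guard bands, base time, and 802.1AS clock synchronization are omitted.

```python
def gates_open(gcl, t_ns):
    """Evaluate an IEEE 802.1Qbv-style gate control list: gcl is a list
    of (duration_ns, open_queues) entries that repeat cyclically; return
    the set of traffic-class queues whose transmission gate is open at
    absolute time t_ns. Mechanism sketch only: a real TAS also handles
    guard bands, base time, and clock synchronization."""
    cycle = sum(d for d, _ in gcl)
    t = t_ns % cycle
    for duration, open_queues in gcl:
        if t < duration:
            return set(open_queues)
        t -= duration
    return set()

# Hypothetical schedule: a 500 us cycle with a 100 us exclusive window
# for the scheduled-traffic queue 7, then 400 us for best effort (0-5).
GCL = [(100_000, {7}), (400_000, {0, 1, 2, 3, 4, 5})]
print(gates_open(GCL, 50_000))    # {7}: inside the scheduled window
print(gates_open(GCL, 650_000))   # best-effort queues, second cycle
```

Because the list repeats every cycle, scheduled streams whose periods align with the cycle always find their gate open, which is how periodic traffic coexists with best effort on the same links.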
Both the centralized network/distributed user model (hybrid model) and the fully-distributed (decentralized) IEEE 802.1Qcc model are examined on a typical industrial control network with the goal of maximizing the number of scheduled traffic streams. Finally, since industrial applications and cyber-physical systems require timely delivery, any channel or node fault can cause severe disruption to the operational continuity of the application. Therefore, IEEE 802.1CB, Frame Replication and Elimination for Reliability (FRER), is examined and tested using machine learning models to predict faulty scenarios and issue remedies seamlessly.
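The elimination half of FRER can be sketched in a few lines. This is a simplified model of the IEEE 802.1CB idea: member streams carry the same sequence numbers over disjoint paths, and the merge point forwards the first copy of each sequence number. A real recovery function also ages out history with timers and a bounded window, omitted here.

```python
class FrerEliminator:
    """Sketch of IEEE 802.1CB duplicate elimination: pass the first
    frame seen for each sequence number, drop later replicas arriving
    over the redundant path. (Real recovery functions bound the history
    window and handle sequence-number wrap; omitted for brevity.)"""
    def __init__(self):
        self.seen = set()

    def accept(self, seq):
        if seq in self.seen:
            return False           # duplicate replica: eliminate
        self.seen.add(seq)
        return True                # first arrival: forward

# Two replicated paths deliver overlapping copies; only one frame per
# sequence number is forwarded downstream, so a lost frame on one path
# (2 on path B, 3 on path A) is masked by the other path's copy.
elim = FrerEliminator()
path_a = [1, 2, 4]
path_b = [1, 3, 4]
forwarded = [s for s in path_a + path_b if elim.accept(s)]
print(forwarded)   # [1, 2, 4, 3]
```

This zero-recovery-time masking of single-path faults is what makes FRER attractive for the operational-continuity requirements the abstract describes.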
ContributorsNasrallah, Ahmed (Author) / Reisslein, Martin (Thesis advisor) / Syrotiuk, Violet R. (Committee member) / LiKamWa, Robert (Committee member) / Thyagaturu, Akhilesh (Committee member) / Arizona State University (Publisher)
Created2022