Time Sensitive Networking in Multimedia and Industrial Control Applications

Description

Ethernet-based technologies are emerging as the ubiquitous de facto form of communication due to their interoperability, capacity, cost, and reliability. Traditional Ethernet is designed to deliver best-effort service; however, many real-time and control applications have strict deterministic and Ultra Low Latency (ULL) requirements that traditional Ethernet cannot meet. Current Industrial Automation and Control Systems (IACS) applications use semi-proprietary technologies that provide deterministic communication behavior for sporadic and periodic traffic, but these can lead to closed systems that do not interoperate effectively. The convergence of informational and operational technologies in modern industrial control networks cannot be achieved with traditional Ethernet. Time Sensitive Networking (TSN) is a suite of IEEE standards that augments traditional Ethernet with real-time deterministic properties, well suited to Digital Signal Processing (DSP) applications. Similarly, Deterministic Networking (DetNet) is an Internet Engineering Task Force (IETF) standardization effort that enhances the network layer with the deterministic properties that IACS applications require. This dissertation provides an in-depth survey and literature review covering both standards, the related research, and 5G material on ULL. Recognizing the limitations of several features of these standards, the dissertation provides an empirical evaluation of these approaches and presents novel enhancements to the shapers and schedulers involved in TSN. More specifically, it investigates the Time Aware Shaper (TAS), the Asynchronous Traffic Shaper (ATS), and Cyclic Queuing and Forwarding (CQF). Moreover, IEEE 802.1Qcc centralized management and control, together with IEEE 802.1Qbv, can be used to manage and control scheduled traffic streams with periodic properties alongside best-effort traffic on the same network infrastructure. Both the centralized network/distributed user (hybrid) model and the fully distributed (decentralized) IEEE 802.1Qcc model are examined on a typical industrial control network with the goal of maximizing the number of scheduled traffic streams. Finally, since industrial applications and cyber-physical systems require timely delivery, any channel or node fault can severely disrupt the operational continuity of the application. Therefore, IEEE 802.1CB Frame Replication and Elimination for Reliability (FRER) is examined and tested using machine learning models to predict faulty scenarios and issue remedies seamlessly.
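
As a concrete illustration of the scheduled-traffic mechanism named above, the following minimal Python sketch models an IEEE 802.1Qbv gate control list: each entry opens a subset of the eight traffic-class queues for a fixed duration, and the list repeats every cycle. The entries, durations, and queue assignments are illustrative assumptions, not values from the dissertation's evaluation.

```python
# Minimal sketch of IEEE 802.1Qbv Time Aware Shaper (TAS) gate logic.
# All timing values and queue assignments are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class GateControlEntry:
    gate_states: int   # 8-bit mask, bit i opens queue i (traffic class i)
    duration_ns: int   # how long this entry stays active

# Example gate control list: scheduled traffic (queue 7) gets an exclusive
# window, then best-effort queues 0-6 share the remainder of the cycle.
GCL = [
    GateControlEntry(gate_states=0b1000_0000, duration_ns=250_000),
    GateControlEntry(gate_states=0b0111_1111, duration_ns=750_000),
]
CYCLE_NS = sum(e.duration_ns for e in GCL)

def open_queues(t_ns: int) -> list[int]:
    """Return the queues whose gates are open at time t_ns."""
    offset = t_ns % CYCLE_NS
    for entry in GCL:
        if offset < entry.duration_ns:
            return [q for q in range(8) if entry.gate_states >> q & 1]
        offset -= entry.duration_ns
    return []

# At 100 us into the cycle only the scheduled-traffic queue may transmit.
assert open_queues(100_000) == [7]
```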

Details

Contributors
Date Created
2022
Resource Type
Language
  • eng
Note
  • Partial requirement for: Ph.D., Arizona State University, 2022
  • Field of study: Computer Engineering

Additional Information

English
Extent
  • 319 pages
Open Access
Peer-reviewed

Fault-tolerance in Time Sensitive Network with Machine Learning Model

Description

Nowadays, demand from the Internet of Things (IoT), automotive networking, and video applications is driving the transformation of Ethernet toward time-sensitive operation. As the volume of transmitted data grows, so does the number of errors in the network, and handling this increased traffic is where Time-Sensitive Networking (TSN) matters: TSN is a technology that provides deterministic service for time-sensitive traffic in a time-synchronized Ethernet environment. Managing these errors efficiently requires countermeasures; a system that maintains its function even in the event of an internal fault or failure is called a fault-tolerant system. To this end, after configuring the network environment in OMNeT++, machine learning was used to estimate the optimal alternative routing path in case an error occurred during transmission. By setting an alternate path before an error occurs, I propose a method that minimizes delay and data loss when an error does occur. Four approaches were compared: no replication, ideal replication, random replication, and replication guided by the ML model. Replication in the ideal environment showed the best results, which is expected because everything there is optimal; excluding that ideal case, the proposed ML-based replication prediction performed best. These results suggest that the proposed method is effective, though open issues in efficiency and error control remain, and directions for further improvement are outlined.
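
To make the proposed approach concrete, here is a hedged sketch of pre-failure path selection: a classifier trained on per-path statistics scores candidate backup paths, and the replica stream is provisioned on the one predicted to stay healthy. The features, model choice, and synthetic training data are assumptions; the thesis's OMNeT++ experiments are not reproduced here.

```python
# Illustrative sketch of pre-failure backup-path selection with ML.
# Feature names, model choice, and training data are assumptions.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Features per candidate path: [utilization, error_rate, jitter]
X_train = rng.random((500, 3))
# Label 1 = path stayed healthy in past runs (synthetic rule here).
y_train = (X_train[:, 1] < 0.5).astype(int)

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)

candidate_paths = {"A-B-D": [0.3, 0.1, 0.2], "A-C-D": [0.6, 0.7, 0.4]}
# Score each path by its predicted probability of staying healthy.
scores = {p: model.predict_proba([f])[0][1] for p, f in candidate_paths.items()}
backup = max(scores, key=scores.get)
print(f"pre-provision replica stream on path {backup}")  # e.g. A-B-D
```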

Details

Contributors
Date Created
2022
Resource Type
Language
  • eng
Note
  • Partial requirement for: M.S., Arizona State University, 2022
  • Field of study: Electrical Engineering

Additional Information

English
Extent
  • 88 pages
Open Access
Peer-reviewed

Towards Fine-Grained Control of Visual Data in Mobile Systems

Description

With the rapid development of both hardware and software, mobile devices, with their advantages in mobility, interactivity, and privacy, have enabled various applications, including social networking, mixed reality, entertainment, and authentication. In diverse forms such as smartphones, glasses, and watches, the number of mobile devices is expected to grow by one billion per year. These devices not only generate and exchange small data such as GPS readings but also large data, including videos and point clouds. Such massive visual data presents many challenges for processing on mobile devices. First, continuously capturing and processing high-resolution visual data is energy-intensive and can drain a mobile device's battery very quickly. Second, offloading data to edge or cloud computing helps, but users fear that their privacy can be exposed to malicious developers. Third, interactivity and user experience degrade if mobile devices cannot process large-scale visual data, such as high-precision off-device point clouds, in real time. To address these challenges, this work presents three solutions toward fine-grained control of visual data in mobile systems, revolving around two core ideas: enabling resolution-based tradeoffs and adopting split-process designs to protect visual data. In particular, this work introduces: (1) the Banner media framework, which removes resolution reconfiguration latency in the operating system to enable seamless dynamic resolution-based tradeoffs; (2) the LensCap split-process application development framework, which protects users' visual privacy against malicious data collection in cloud-based Augmented Reality (AR) applications by isolating visual processing in a distinct process; (3) a novel voxel grid schema that enables adaptive sampling at the edge device, sampling point clouds flexibly for interactive 3D vision use cases across mobile devices and mobile networks. Evaluation in several mobile environments demonstrates that, by controlling visual data at a fine granularity, energy efficiency improves by 49% when switching between resolutions, visual privacy is protected through split-process designs with negligible overhead, and point clouds are delivered at a high throughput that meets various requirements. This work can thus enable more continuous mobile vision applications for the future of a new reality.
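
The adaptive sampling idea in contribution (3) can be illustrated with a minimal voxel-grid downsampling routine: coarser voxels keep fewer representative points, trading spatial detail for transmission throughput. The routine below is a sketch under an assumed data layout, not the dissertation's actual voxel grid schema.

```python
# Minimal voxel-grid downsampling sketch: one centroid per occupied voxel.
# The voxel-size policy is an illustrative assumption.

import numpy as np

def voxel_downsample(points: np.ndarray, voxel: float) -> np.ndarray:
    """Keep one representative point (the centroid) per occupied voxel."""
    buckets: dict[tuple, list] = {}
    for p in points:
        buckets.setdefault(tuple((p // voxel).astype(int)), []).append(p)
    return np.array([np.mean(b, axis=0) for b in buckets.values()])

cloud = np.random.rand(10_000, 3)      # synthetic cloud in a 1 m cube
for voxel in (0.01, 0.05, 0.20):       # adapt density to the network budget
    print(f"voxel {voxel:.2f} m -> {len(voxel_downsample(cloud, voxel))} points")
```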

Details

Contributors
Date Created
2022
Topical Subject
Resource Type
Language
  • eng
Note
  • Partial requirement for: Ph.D., Arizona State University, 2022
  • Field of study: Computer Engineering

Additional Information

English
Extent
  • 133 pages
Open Access
Peer-reviewed

Analyzing Multi-viewpoint Capabilities of Light Estimation Frameworks for Augmented Reality Using TCP/IP and UDP

Description

Realistic lighting is important for improving immersion and making mixed reality applications seem more plausible. To blend AR objects properly into the real scene, it is important to study the lighting of the environment. The existing illumination frameworks in Google’s ARCore (Google’s Augmented Reality Software Development Kit) and Apple’s ARKit (Apple’s Augmented Reality Software Development Kit) are computationally expensive and have very slow refresh rates, which makes them unsuitable for dynamic environments and low-end mobile devices. Recently, other illumination estimation frameworks, such as GLEAM and Xihe, have aimed to provide better illumination at faster refresh rates. GLEAM is an illumination estimation framework that understands the real scene by collecting pixel data from a reflective spherical light probe. GLEAM uses this data to form environment cubemaps, which are then mapped onto a reflection probe to generate illumination for AR objects. From a single viewpoint, only one half of the light probe can be observed at a time, which does not give complete information about the environment; this motivates multi-viewpoint estimation for better performance. This thesis analyzes the multi-viewpoint capabilities of AR illumination frameworks that use physical light probes to understand the environment. The work adds networking to GLEAM using the TCP and UDP protocols, documents how processor load is shared among the networked devices, and shows how that sharing benefits GLEAM's performance on mobile devices. Enhancements using multi-threading have also been made to the existing GLEAM model to improve its performance.
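
A minimal sketch of the viewpoint-sharing transport follows: each device ships its captured light-probe pixels to a peer over UDP (low latency, loss-tolerant) or TCP (reliable, ordered). The peer address, port, and packet header are hypothetical; GLEAM's actual wire format is not documented here.

```python
# Hedged sketch of sending light-probe pixel blocks between viewpoints.
# PEER, the port, and the packet layout are assumptions for illustration.

import socket
import struct

PEER = ("192.168.1.42", 9000)   # hypothetical second-viewpoint device

def send_probe_udp(viewpoint_id: int, pixels: bytes) -> None:
    """UDP: low latency and loss-tolerant, suited to probe samples that
    are refreshed every frame anyway (payload must fit one datagram)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        header = struct.pack("!II", viewpoint_id, len(pixels))
        sock.sendto(header + pixels, PEER)

def send_probe_tcp(viewpoint_id: int, pixels: bytes) -> None:
    """TCP: reliable and ordered, suited to full cubemap faces that must
    arrive intact even at the cost of head-of-line blocking."""
    with socket.create_connection(PEER) as sock:
        sock.sendall(struct.pack("!II", viewpoint_id, len(pixels)))
        sock.sendall(pixels)
```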

Details

Contributors
Date Created
2022
Language
  • eng
Note
  • Partial requirement for: M.S., Arizona State University, 2022
  • Field of study: Computer Engineering

Additional Information

English
Extent
  • 46 pages
Open Access
Peer-reviewed

Isle Aliquo: Improving Aural Rehabilitation for Individuals With Hearing Impairment Through Immersive Spatial Audio

Description

Computer-based auditory training programs (CBATPs) are used as an at-home aural rehabilitation solution in individuals with hearing impairment, most commonly in recipients of cochlear implants or hearing aids. However, recent advancements in spatial audio and immersive gameplay have not seen inclusion in these programs. Isle Aliquo, a virtual-reality CBATP, is designed to reformat traditional rehabilitation exercises into virtual 3D space. The program explores how the aural exercise outcomes of detection, discrimination, direction, and identification can be improved with the incorporation of directional spatial audio, as well as how the experience can be made more engaging to improve adherence to training routines. Fundamentals of professional aural rehabilitation and current CBATP design inform the structure of the exercise modules found in Isle Aliquo.

Details

Contributors
Date Created
2022-05

Additional Information

English
Series
  • Academic Year 2021-2022
Open Access
Peer-reviewed

Augmented Coach: An Augmented Reality Tool for Immersive Sports Coaching

Description

Video playback is currently the primary method coaches and athletes use in sports training to give feedback on the athlete’s form and timing. Athletes commonly record themselves with a phone or camera when practicing a sports movement, such as shooting a basketball, and then send the recording to their coach for feedback on how to improve. In this work, we present Augmented Coach, an augmented reality tool that lets coaches give spatiotemporal feedback through a 3-dimensional point cloud of the athlete. The system allows coaches to view a pre-recorded video of their athlete in point cloud form and provides the tools to go frame by frame, both analyzing the athlete’s form and correcting it. The result is a fundamentally new kind of interactive video player in which the coach can remotely view the athlete in 3-dimensional form and create annotations to help improve their form. We then conduct a user study with subject matter experts to evaluate the usability and capabilities of our system. As the results indicate, Augmented Coach successfully acts as a supplement to in-person coaching, since it allows coaches to break down the recording in a 3-dimensional space and provide feedback spatiotemporally; the results also indicate that Augmented Coach can be a complete coaching solution in a remote setting. This technology will be increasingly relevant as coaches look for new ways to improve their feedback methods, especially remotely.
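
A hedged sketch of the frame-by-frame review loop might look like the following: a clip is a sequence of per-frame point arrays, a playback cursor steps through them, and annotations pin a coach's note to a 3D position in the current frame. The data layout is an assumption; the system's capture and rendering pipeline is not shown.

```python
# Hedged sketch of an interactive point-cloud clip with annotations.
# The data layout below is assumed for illustration.

from dataclasses import dataclass, field

@dataclass
class Annotation:
    frame: int
    position: tuple      # 3D anchor (x, y, z) inside the point cloud
    note: str

@dataclass
class PointCloudClip:
    frames: list                                    # frames[i] = Nx3 points
    annotations: list = field(default_factory=list)
    cursor: int = 0

    def step(self, delta: int) -> int:
        """Advance or rewind frame by frame, clamped to the clip bounds."""
        self.cursor = max(0, min(len(self.frames) - 1, self.cursor + delta))
        return self.cursor

    def annotate(self, position: tuple, note: str) -> None:
        """Pin a coach's note to the current frame at a 3D position."""
        self.annotations.append(Annotation(self.cursor, position, note))

clip = PointCloudClip(frames=[[] for _ in range(120)])  # 120 empty frames
clip.step(+5)
clip.annotate((0.1, 1.4, 0.3), "elbow should stay tucked on release")
```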

Details

Contributors
Date Created
2022-05
Resource Type

Additional Information

English
Series
  • Academic Year 2021-2022
Open Access
Peer-reviewed

ARsome Chemistry: The Use of Augmented Reality Notecards to Improve the Comprehension of Molecule Structures in Chemistry

Description

Augmented Reality (AR), especially when used with mobile devices, enables the creation of applications that can help chemistry students learn everything from basic to more advanced concepts. In chemistry specifically, the 3D representation of molecules and chemical structures is of vital importance to students, yet when printed in 2D, as in textbooks and lecture notes, those vital 3D concepts can be quite hard to understand. ARsome Chemistry is an app that uses AR to display simple and complex molecules in 3D and actively teach students these concepts through quizzes and other features. The app uses image-target recognition, allowing students to hand-draw or print line-angle structures or chemical formulas of molecules and then scan those targets to get 3D representations of the molecules. Students can use their fingers and the touch screen to zoom, rotate, and highlight different portions of a molecule to gain a better understanding of its 3D structure. ARsome Chemistry also lets students quiz themselves on drawing line-angle structures by showing their work to the camera for the app to check. The app is an accessible, cost-effective study-aid platform that gives students on-demand, interactive, 3D representations of complex molecules.
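
The scan-and-check quiz flow can be sketched as a simple lookup: a recognized image target yields a molecule label, which is graded against the structure the quiz expected and mapped to a 3D asset to display. The answer key, asset names, and function below are hypothetical illustrations, not the app's actual code.

```python
# Illustrative sketch of the scan-and-check flow. The label set,
# answer key, and asset names are assumptions for illustration.

EXPECTED = {"quiz_1": "C6H6", "quiz_2": "H2O"}           # answer key
MODEL_FOR = {"C6H6": "benzene.obj", "H2O": "water.obj"}  # 3D assets

def on_target_recognized(quiz_id: str, label: str) -> str:
    """Grade a hand-drawn structure and pick the 3D model to display."""
    verdict = "correct" if EXPECTED.get(quiz_id) == label else "try again"
    return f"{verdict}: showing {MODEL_FOR.get(label, 'unknown')}"

print(on_target_recognized("quiz_2", "H2O"))  # correct: showing water.obj
```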

Details

Contributors
Date Created
2022-05
Resource Type

Additional Information

English
Series
  • Academic Year 2021-2022
Open Access
Peer-reviewed

Spatial Audio Localization with Internet of Things (IoT)

Description

Spatial audio can be especially useful for directing human attention. However, delivering spatial audio through speakers, rather than through headphones that deliver audio directly to the ears, produces crosstalk, where sound from each of the two speakers reaches the opposite ear and weakens the spatialized effect. A research team at Meteor Studio has developed an algorithm called Xblock that solves this issue using a crosstalk cancellation technique. This thesis project expands the existing Xblock IoT system with a way to test the accuracy of the directionality of sounds generated with spatial audio. More specifically, the objective is to determine whether using Xblock with smart speakers can provide generalized audio localization, that is, the ability to detect the general direction a sound is coming from. The project also extends the existing Xblock technique with voice commands: users can say the name of a lost item using the phrase, “Find [item]”, and the IoT system will use spatial audio to guide them to it.
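
The crosstalk cancellation idea behind Xblock can be illustrated with the textbook formulation: invert the two-by-two speaker-to-ear transfer matrix so each ear receives only its intended binaural channel. Real systems invert per-frequency filters; the scalar gains below are a simplifying assumption, and none of this reproduces Xblock's actual algorithm.

```python
# Simplified crosstalk cancellation sketch: scalar gains stand in for
# the per-frequency filters a real system would invert.

import numpy as np

# H[i, j] = gain from speaker j to ear i (direct paths strong,
# crosstalk paths weaker; values assumed known from geometry).
H = np.array([[1.0, 0.4],
              [0.4, 1.0]])

C = np.linalg.inv(H)              # cancellation network

binaural = np.array([0.8, 0.1])   # desired left/right ear signals
speaker_feed = C @ binaural       # what actually drives the speakers

# Each ear now receives exactly its intended channel.
np.testing.assert_allclose(H @ speaker_feed, binaural)
```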

Details

Contributors
Date Created
2022-05

Additional Information

English
Series
  • Academic Year 2021-2022
Open Access
Peer-reviewed