Description

Ethernet-based technologies are emerging as the ubiquitous de facto form of communication due to their interoperability, capacity, cost, and reliability. Traditional Ethernet is designed to deliver best-effort service. However, several real-time and control applications have precise deterministic requirements and demand Ultra-Low Latency (ULL) that traditional Ethernet cannot provide. Current Industrial Automation and Control Systems (IACS) applications use semi-proprietary technologies that provide deterministic communication behavior for sporadic and periodic traffic, but can lead to closed systems that do not interoperate effectively. The convergence of informational and operational technologies in modern industrial control networks cannot be achieved with traditional Ethernet. Time-Sensitive Networking (TSN) is a suite of IEEE standards that augments traditional Ethernet with real-time deterministic properties ideal for Digital Signal Processing (DSP) applications. Similarly, Deterministic Networking (DetNet) is an Internet Engineering Task Force (IETF) standardization effort that enhances the network layer with the deterministic properties needed for IACS applications. This dissertation provides an in-depth survey and literature review of both the standards and the research, as well as 5G-related material on ULL. Recognizing the limitations of several features of the standards, this dissertation provides an empirical evaluation of these approaches and presents novel enhancements to the shapers and schedulers involved in TSN. More specifically, this dissertation investigates the Time-Aware Shaper (TAS), the Asynchronous Traffic Shaper (ATS), and Cyclic Queuing and Forwarding (CQF) schedulers. Moreover, IEEE 802.1Qcc (centralized management and control) and IEEE 802.1Qbv can be used to manage and control scheduled traffic streams with periodic properties alongside best-effort traffic on the same network infrastructure. Both the centralized network/distributed user (hybrid) model and the fully distributed (decentralized) IEEE 802.1Qcc model are examined on a typical industrial control network with the goal of maximizing the number of scheduled traffic streams. Finally, since industrial applications and cyber-physical systems require timely delivery, any channel or node fault can severely disrupt the operational continuity of the application. Therefore, IEEE 802.1CB, Frame Replication and Elimination for Reliability (FRER), is examined and tested using machine learning models to predict faulty scenarios and issue remedies seamlessly.
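The IEEE 802.1Qbv Time-Aware Shaper discussed above gates each egress queue open or closed according to a repeating gate control list, so scheduled traffic gets protected transmission windows while best-effort traffic uses the rest of the cycle. The following is a minimal, illustrative simulation of that mechanism; the cycle time, gate entries, and traffic classes are hypothetical examples, not parameters from the dissertation's experiments.

```python
from collections import deque

# Hypothetical gate control list (GCL): each entry is (duration_us, open_gates),
# where open_gates is the set of traffic classes allowed to transmit.
GCL = [
    (50, {"scheduled"}),       # protected window for time-triggered traffic
    (150, {"best_effort"}),    # remainder of the cycle for best-effort frames
]
CYCLE_US = sum(d for d, _ in GCL)

def gates_at(t_us):
    """Return the set of open gates at time t_us (time-aware shaper lookup)."""
    offset = t_us % CYCLE_US
    for duration, open_gates in GCL:
        if offset < duration:
            return open_gates
        offset -= duration
    return set()

def simulate(frames, horizon_us):
    """frames: list of (arrival_us, traffic_class, tx_time_us). Returns departures."""
    queues = {"scheduled": deque(), "best_effort": deque()}
    departures, t = [], 0
    pending = sorted(frames)
    while t < horizon_us:
        while pending and pending[0][0] <= t:
            arrival, cls, tx = pending.pop(0)
            queues[cls].append((arrival, tx))
        sent = False
        for cls in gates_at(t):
            if queues[cls]:
                arrival, tx = queues[cls].popleft()
                departures.append((cls, arrival, t, t + tx))
                t += tx          # port is busy until the frame finishes
                sent = True
                break
        if not sent:
            t += 1               # idle: advance one microsecond
    return departures

if __name__ == "__main__":
    demo = [(10, "scheduled", 12), (12, "best_effort", 40), (210, "scheduled", 12)]
    for cls, arr, start, end in simulate(demo, 400):
        print(f"{cls:12s} arrived {arr:4d}us, sent {start:4d}-{end:4d}us")
```

Note how the best-effort frame arriving during the protected window is held until its gate opens, which is the behavior the shaper enhancements in the dissertation build upon.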
Contributors: Nasrallah, Ahmed (Author) / Reisslein, Martin (Thesis advisor) / Syrotiuk, Violet R. (Committee member) / LiKamWa, Robert (Committee member) / Thyagaturu, Akhilesh (Committee member) / Arizona State University (Publisher)
Created: 2022
Description

Significant advances have been made over the years in modern image-sensing hardware and visual computing algorithms (e.g., computer vision, image processing, computational photography). However, to this day there remains a gap between hardware and software design in an imaging system, which silos one research domain from the other. Bridging this gap is the key to unlocking new visual computing capabilities for end applications in commercial photography, industrial inspection, and robotics. This thesis explores avenues where hardware-software co-design of image sensors can be leveraged to replace conventional hardware components of an imaging system with software for enhanced reconfigurability. As a result, the user can program the image sensor in the way best suited to the end application. This is referred to as software-defined imaging (SDI), in which image sensor behavior can be altered by the system software depending on the user's needs. The scope of this thesis covers the development and deployment of SDI algorithms for low-power computer vision. Strategies for sparse spatial sampling are developed for power optimization of the vision sensor. This dissertation shows how a hardware-compatible, state-of-the-art object tracker can be coupled with a Kalman filter for energy gains at the sensor level. Extensive experiments reveal how adaptive spatial sampling of image frames with this hardware-friendly framework offers attractive energy-accuracy tradeoffs. Another thrust of this thesis is to demonstrate the benefits of reinforcement learning in this research avenue. A major finding reported in this dissertation shows how neural-network-based reinforcement learning can be exploited by the adaptive subsampling framework to achieve improved sampling performance, thereby optimizing the energy efficiency of the image sensor. The last thrust of this thesis leverages emerging event-based SDI technology to build a low-power navigation system. A homography estimation pipeline is proposed that couples the right data representation with a differential scale-invariant feature transform (SIFT) module to extract rich visual cues from event streams. Positional encoding is leveraged with a multilayer perceptron (MLP) network to obtain robust homography estimation from event data.
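The adaptive subsampling idea above pairs an object tracker with a Kalman filter so that, on most frames, only a predicted region of interest needs to be read out from the sensor. Below is a minimal, hypothetical sketch of that coupling; the constant-velocity motion model, ROI padding, and sensor read-out call are illustrative assumptions, not the dissertation's implementation.

```python
import numpy as np

class ConstantVelocityKF:
    """Tracks (cx, cy, vx, vy) of an object's bounding-box center."""
    def __init__(self, cx, cy, dt=1.0):
        self.x = np.array([cx, cy, 0.0, 0.0])
        self.P = np.eye(4) * 10.0
        self.F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
                           [0, 0, 1, 0], [0, 0, 0, 1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], dtype=float)
        self.Q = np.eye(4) * 0.1      # process noise (assumed)
        self.R = np.eye(2) * 2.0      # measurement noise (assumed)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):
        y = np.asarray(z, float) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

def roi_from_prediction(center, box=64, pad=16, frame=(480, 640)):
    """Convert a predicted center into a clipped (x0, y0, x1, y1) read-out window."""
    half = box // 2 + pad
    cx, cy = center
    x0, y0 = max(0, int(cx - half)), max(0, int(cy - half))
    x1, y1 = min(frame[1], int(cx + half)), min(frame[0], int(cy + half))
    return x0, y0, x1, y1

# Usage sketch: read only the predicted ROI instead of the full frame.
kf = ConstantVelocityKF(cx=320, cy=240)
for measured_center in [(322, 241), (325, 243), (329, 246)]:  # mock tracker outputs
    roi = roi_from_prediction(kf.predict())
    # pixels = sensor.read_window(*roi)   # hypothetical sparse read-out call
    kf.update(measured_center)            # correct with the tracker's detection
    print("read ROI", roi)
```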
Contributors: Iqbal, Odrika (Author) / Jayasuriya, Suren (Thesis advisor) / Spanias, Andreas (Thesis advisor) / LiKamWa, Robert (Committee member) / Owens, Chris (Committee member) / Arizona State University (Publisher)
Created: 2023
Description

Graphics Processing Units (GPUs) have become a key enabler of the big-data revolution, functioning as de facto co-processors that accelerate large-scale computation. As the GPU programming stack and tool support have matured, the technology has also become accessible to programmers. However, optimizing programs to run efficiently on GPUs requires developers to have both a detailed understanding of the application logic and significant knowledge of parallel programming and GPU architectures. This dissertation proposes GEVO, a tool for automatically tuning the performance of GPU kernels in the LLVM representation to meet desired criteria. GEVO uses population-based search to find edits to programs compiled to LLVM-IR that improve performance on the desired criteria while retaining required functionality. The evaluation of GEVO on the Rodinia benchmark suite demonstrates many runtime optimization techniques. One of the key insights is that semantic relaxation enables GEVO to discover optimizations that are usually prohibited by the compiler. GEVO also explores many other optimizations, including architecture-specific and application-specific ones. A follow-up evaluation of three bioinformatics applications at different stages of development suggests that GEVO can optimize programs as well as human experts, sometimes even transforming a code base beyond a programmer's reach. Furthermore, to remove the constraints that keep GEVO from optimizing neural network (NN) models, GEVO-ML is proposed by extending GEVO's representation capability, so that NN models and the training/prediction process are uniformly represented in a single intermediate language. An evaluation of GEVO-ML shows that it can optimize network models much as human developers improve model designs, for example, by changing learning rates or pruning non-essential parameters. These results showcase the potential of automated program optimization tools to both reduce the optimization burden for researchers and provide new insights for GPU experts.
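GEVO's core loop is a population-based (evolutionary) search over edits to an LLVM-IR program, keeping variants that pass functionality tests and improve a fitness measure such as runtime. The skeleton below is a simplified, hypothetical sketch of such a loop; the edit operators, fitness callback, and selection scheme are stand-ins rather than GEVO's actual implementation. In practice, the evaluate callback would apply the edits to the IR, recompile, run the kernel's test suite, and time it on the GPU.

```python
import random

# A candidate is a list of edit operations applied to the original LLVM-IR module.
# Each edit here is a placeholder tuple; a real system would mutate IR instructions.
EDIT_KINDS = ["delete_instr", "swap_instr", "copy_instr"]

def random_edit(num_instrs):
    return (random.choice(EDIT_KINDS), random.randrange(num_instrs),
            random.randrange(num_instrs))

def mutate(candidate, num_instrs):
    return list(candidate) + [random_edit(num_instrs)]

def crossover(a, b):
    cut_a, cut_b = random.randint(0, len(a)), random.randint(0, len(b))
    return a[:cut_a] + b[cut_b:]

def evolve(evaluate, num_instrs, pop_size=32, generations=50):
    """evaluate(edits) -> (passes_tests: bool, runtime: float); lower runtime is better."""
    population = [[random_edit(num_instrs)] for _ in range(pop_size)]
    best, best_time = [], float("inf")
    for _ in range(generations):
        scored = []
        for cand in population:
            ok, runtime = evaluate(cand)
            if ok:                         # semantic checks gate selection
                scored.append((runtime, cand))
                if runtime < best_time:
                    best, best_time = cand, runtime
        scored.sort(key=lambda s: s[0])
        parents = [c for _, c in scored[:max(2, pop_size // 4)]] or [[]]
        population = [mutate(crossover(random.choice(parents), random.choice(parents)),
                             num_instrs) for _ in range(pop_size)]
    return best, best_time

if __name__ == "__main__":
    # Mock fitness: pretend every variant passes and longer edit lists run faster.
    mock = lambda edits: (True, 100.0 - min(len(edits), 30) + random.random())
    print(evolve(mock, num_instrs=200, pop_size=16, generations=10))
```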
Contributors: Liou, Jhe-Yu (Author) / Forrest, Stephanie (Thesis advisor) / Wu, Carole-Jean (Thesis advisor) / Lee, Yann-Hang (Committee member) / Weimer, Westley (Committee member) / Arizona State University (Publisher)
Created: 2023
Description

Efficient visual sensing plays a pivotal role in enabling high-precision applications in augmented reality and low-power Internet of Things (IoT) devices. This dissertation addresses the primary challenges that hinder energy efficiency in visual sensing: the bottleneck of pixel traffic across camera and memory interfaces and the energy-intensive analog readout process in image sensors. To overcome the bottleneck of pixel traffic, this dissertation proposes a visual sensing pipeline architecture that enables application developers to dynamically adapt the spatial resolution and update rates for specific regions within the scene. By selectively capturing and processing high-resolution frames only where necessary, the system significantly reduces the energy consumption associated with memory traffic. This is achieved by encoding only the relevant pixels from commercial image sensors with standard raster-scan pixel read-out patterns, thus minimizing the data stored in memory. The stored rhythmic pixel region stream is decoded into traditional frame-based representations, enabling seamless integration into existing video pipelines. Moreover, the system includes runtime support that allows flexible specification of the region labels, giving developers fine-grained control over the resolution adaptation process. Experimental evaluations conducted on a Xilinx Field Programmable Gate Array (FPGA) platform demonstrate substantial reductions of 43-64% in interface traffic while maintaining controllable task accuracy. In addition to the pixel traffic bottleneck, the dissertation tackles the energy-intensive analog readout process in image sensors by aggressively scaling the analog voltage supplied to the camera. Extensive characterization of off-the-shelf sensors demonstrates that analog voltage scaling can significantly reduce sensor power, albeit at the expense of image quality. To mitigate this trade-off, this research develops a pipeline that allows application developers to adapt the sensor voltage on a frame-by-frame basis. A voltage controller is integrated into the existing Raspberry Pi (RPi) based video streaming pipeline to generate the sensor voltage, and the system provides a software interface for vision applications to specify the desired voltage levels. Evaluation of the system across a range of voltage scaling policies on popular vision tasks demonstrates that the technique can deliver up to 73% sensor power savings while maintaining reasonable task fidelity.
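The rhythmic pixel region idea above stores only pixels that fall inside developer-specified regions, each with its own resolution and update rate, and later decodes them back into dense frames. The snippet below is an illustrative sketch of that encode/decode path in NumPy; the region label format, stride-based downsampling, and fallback to the previous frame are simplifying assumptions rather than the dissertation's encoding.

```python
import numpy as np

# A region label: (x0, y0, x1, y1, stride, period).
# stride -> spatial subsampling inside the region (1 = full resolution)
# period -> temporal subsampling (capture the region every `period` frames)

def encode(frame, frame_idx, regions):
    """Keep only pixels belonging to active regions; emit (header, pixels) pairs."""
    packets = []
    for (x0, y0, x1, y1, stride, period) in regions:
        if frame_idx % period:
            continue                              # region not due this frame
        pixels = frame[y0:y1:stride, x0:x1:stride].copy()
        packets.append(((x0, y0, x1, y1, stride), pixels))
    return packets

def decode(packets, shape, previous=None):
    """Rebuild a dense frame; unsampled areas fall back to the previous frame."""
    out = previous.copy() if previous is not None else np.zeros(shape, np.uint8)
    for (x0, y0, x1, y1, stride), pixels in packets:
        out[y0:y1:stride, x0:x1:stride] = pixels
    return out

frame = np.random.randint(0, 255, (480, 640), np.uint8)
regions = [(100, 80, 300, 240, 1, 1),    # high-resolution region, every frame
           (0, 0, 640, 480, 4, 4)]       # coarse context, every 4th frame
packets = encode(frame, frame_idx=0, regions=regions)
kept = sum(p.size for _, p in packets)
print(f"stored {kept} of {frame.size} pixels ({100 * kept / frame.size:.1f}%)")
reconstructed = decode(packets, frame.shape)
```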
Contributors: Kodukula, Venkatesh (Author) / LiKamWa, Robert (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Brunhaver, John (Committee member) / Nambi, Akshay (Committee member) / Arizona State University (Publisher)
Created: 2023
Description

The rapid growth of Internet-of-Things (IoT) and artificial intelligence applications has called forth a new computing paradigm: edge computing. Edge computing applications, such as video surveillance, autonomous driving, and augmented reality, are highly computationally intensive and require real-time processing. Current edge systems are typically based on commodity general-purpose hardware such as Central Processing Units (CPUs) and Graphics Processing Units (GPUs), which are mainly designed for large, non-time-sensitive jobs in the cloud and do not match the needs of edge workloads. These systems are also usually power hungry and unsuitable for resource-constrained edge deployments. This application-hardware mismatch calls for a new computing backbone that meets high-bandwidth, low-latency, and energy-efficiency requirements, and the new system should support a variety of edge applications with different characteristics. This thesis addresses the above challenges by studying the use of Field Programmable Gate Array (FPGA)-based computing systems for accelerating edge workloads, from three critical angles. First, it investigates the feasibility of FPGAs for edge computing in comparison to conventional CPUs and GPUs. Second, it studies the acceleration of common algorithmic characteristics, identified as loop patterns, using FPGAs, and develops a benchmark tool for analyzing the performance of these patterns on different accelerators. Third, it designs a new edge computing platform that uses multiple clustered FPGAs to provide high-bandwidth, low-latency acceleration of the convolutional neural networks (CNNs) widely used in edge applications. Finally, it studies the acceleration of an emerging class of neural networks, randomly wired neural networks, on the multi-FPGA platform. The experimental results from this work show that the new generation of workloads requires rethinking the current edge-computing architecture. First, through the acceleration of common loops, it demonstrates that FPGAs can outperform GPUs on specific loop types by up to 14 times. Second, it shows the linear scalability of multi-FPGA platforms in accelerating neural networks. Third, it demonstrates the superiority of the new scheduler, which optimally places randomly wired neural networks on multi-FPGA platforms and achieves 81.1 times better throughput than the available scheduling mechanisms.
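One way to reason about the multi-FPGA CNN acceleration described above is as a pipeline-partitioning problem: assign consecutive layers to devices so that the slowest stage, the pipeline bottleneck, is as fast as possible. The sketch below illustrates that idea with a simple greedy balancer over hypothetical per-layer latencies; it is not the scheduler evaluated in the dissertation.

```python
def partition_layers(layer_ms, num_fpgas):
    """Greedily split an ordered list of per-layer latencies into contiguous
    stages, one per FPGA, trying to balance per-stage work."""
    target = sum(layer_ms) / num_fpgas
    stages, current = [], []
    for latency in layer_ms:
        current.append(latency)
        if sum(current) >= target and len(stages) < num_fpgas - 1:
            stages.append(current)
            current = []
    stages.append(current)
    return stages

def pipeline_throughput(stages):
    """Steady-state throughput is limited by the slowest stage."""
    bottleneck = max(sum(stage) for stage in stages)
    return 1000.0 / bottleneck          # frames per second

# Hypothetical per-layer latencies (ms) for a small CNN on one FPGA.
layers = [4.0, 6.5, 6.5, 3.0, 5.0, 5.0, 2.5, 1.5]
for n in (1, 2, 4):
    stages = partition_layers(layers, n)
    print(f"{n} FPGA(s): {pipeline_throughput(stages):6.1f} fps, "
          f"stage latencies {[round(sum(s), 1) for s in stages]}")
```

With a well-balanced partition, adding FPGAs shrinks the bottleneck stage roughly proportionally, which is the intuition behind the linear scalability result reported above.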
Contributors: Biookaghazadeh, Saman (Author) / Zhao, Ming (Thesis advisor) / Ren, Fengbo (Thesis advisor) / Li, Baoxin (Committee member) / Seo, Jae-Sun (Committee member) / Arizona State University (Publisher)
Created: 2021
Description

Uncertainty is intrinsic in Cyber-Physical Systems (CPS) since they interact with humans and operate across both the analog and digital worlds. Since even a minute deviation from the true values can cause a catastrophe in a safety-critical application, considering uncertainties in CPS behavior is essential. At the same time, time is a foundational aspect of CPS. Correct timing of system events is critical to optimize responsiveness to the environment, in terms of timeliness, accuracy, and precision in the knowledge, measurement, prediction, and control of CPS behavior. In order to design a more resilient and reliable CPS, there must first be a way to specify the timing constraints that a constructed Cyber-Physical System must meet while accounting for existing uncertainties. Only then can we seek systematic approaches to check whether all timing constraints are being met and develop correct-by-construction methodologies. To this end, Timestamp Temporal Logic (TTL) is developed to specify the timing constraints of a distributed CPS. With TTL, designers can specify the timing requirements that a CPS must satisfy in a succinct and intuitive manner and express the tolerable error as part of the language. The proposed deduction system on TTL (the TTL reasoning system) provides the ability to check the consistency among expressed system specifications and to simplify them for implementation on FPGAs for run-time verification. For CPS run-time verification, a Timestamp-based Monitoring Approach (TMA) has been designed that can hook into a CPS, take its timing specifications in TTL, and verify whether the timing constraints are being met while accounting for existing uncertainties in the system. TMA does not need to check whether a constraint is being met at every instant of time; instead, it re-evaluates a constraint only when an event occurs that can affect the outcome. This enables online timing monitoring of CPS with less computation and fewer resources. Furthermore, the minimum design parameters of a timed CPS that are required to enable testing its timing behavior are defined in this dissertation.
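The event-driven monitoring idea above can be illustrated with a small sketch: a constraint such as "every response must follow its request within a bound, allowing a stated tolerance" is re-checked only when a timestamped event arrives, never on a periodic tick. The constraint form, event names, and tolerance handling below are illustrative assumptions, not the TTL semantics defined in the dissertation.

```python
class DeadlineMonitor:
    """Checks: every 'request' must be followed by a 'response' within
    deadline plus/minus tolerance (all times in microseconds)."""

    def __init__(self, deadline_us, tolerance_us):
        self.deadline = deadline_us
        self.tolerance = tolerance_us
        self.pending = []          # timestamps of unanswered requests
        self.violations = []

    def on_event(self, name, timestamp_us):
        # Re-evaluate only when an event arrives (no periodic polling).
        if name == "request":
            self.pending.append(timestamp_us)
        elif name == "response" and self.pending:
            latency = timestamp_us - self.pending.pop(0)
            if latency > self.deadline + self.tolerance:
                self.violations.append(("late", latency))
        # Any request older than deadline + tolerance can never be satisfied.
        while self.pending and timestamp_us - self.pending[0] > self.deadline + self.tolerance:
            self.violations.append(("missed", timestamp_us - self.pending.pop(0)))

monitor = DeadlineMonitor(deadline_us=500, tolerance_us=20)
trace = [("request", 0), ("response", 480),      # on time
         ("request", 1000), ("response", 1535),  # 535us exceeds the 520us budget
         ("request", 2000), ("request", 2600)]   # first request never answered
for name, ts in trace:
    monitor.on_event(name, ts)
print(monitor.violations)   # [('late', 535), ('missed', 600)]
```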
Contributors: Mehrabian, Mohammadreza (Author) / Shrivastava, Aviral (Thesis advisor) / Ren, Fengbo (Committee member) / Sarjoughian, Hessam (Committee member) / Derler, Patricia (Committee member) / Arizona State University (Publisher)
Created: 2021
Description

Many real-world engineering problems require simulations to evaluate the design objectives and constraints. Often, due to the complexity of the system model, simulations can be prohibitive in terms of computation time. One approach to overcome this issue is to construct a surrogate model, which approximates the original model. The focus of this work is on data-driven surrogate models, in which empirical approximations of the output are computed from the input parameters. Recently, neural networks (NNs) have re-emerged as a popular method for constructing data-driven surrogate models. Although NNs have achieved excellent accuracy and are widely used, they pose their own challenges. This work addresses two common challenges: the need for (1) hardware acceleration and (2) uncertainty quantification (UQ) in the presence of input variability. The high demand for deep NN inference on cloud servers and edge devices calls for the design of low-power custom hardware accelerators. The first part of this work describes the design of an energy-efficient long short-term memory (LSTM) accelerator. The overarching goal is to aggressively reduce the power consumption and area of the LSTM components using approximate computing, and then use architectural-level techniques to boost the performance. The proposed design is synthesized and placed and routed as an application-specific integrated circuit (ASIC). The results demonstrate that this accelerator is 1.2X and 3.6X more energy- and area-efficient, respectively, than the baseline LSTM. In the second part of this work, a robust framework is developed based on an alternative data-driven surrogate model, referred to as polynomial chaos expansion (PCE), for addressing UQ. In contrast to many existing approaches, no assumptions are made on the elements of the function space, and UQ is a function of the expansion coefficients. Moreover, the sensitivity of the output with respect to any subset of the input variables can be computed analytically by post-processing the PCE coefficients. This provides a systematic and incremental method for pruning or changing the order of the model. The framework is evaluated on several real-world applications from different domains and is extended to classification tasks as well.
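The polynomial chaos expansion (PCE) surrogate described above represents the output as a series in orthogonal polynomials of the random inputs, so the mean, variance, and per-input sensitivities fall out of the coefficients directly. The sketch below shows this for a toy two-input model with standard normal inputs and probabilists' Hermite polynomials; the basis truncation, regression-based fitting, and test function are illustrative choices, not the dissertation's framework.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from math import factorial

def He(n, x):
    """Probabilists' Hermite polynomial He_n evaluated at x."""
    return hermeval(x, [0] * n + [1])

# Multi-indices for a 2-input, total-degree-2 expansion.
MULTI_INDEX = [(0, 0), (1, 0), (0, 1), (2, 0), (1, 1), (0, 2)]

def design_matrix(X):
    return np.column_stack([He(i, X[:, 0]) * He(j, X[:, 1]) for i, j in MULTI_INDEX])

def fit_pce(X, y):
    """Least-squares PCE coefficients for standard-normal inputs X (n x 2)."""
    coeffs, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)
    return coeffs

def moments_and_sensitivities(coeffs):
    # E[He_i(x1)^2 He_j(x2)^2] = i! * j! for independent standard normals.
    norms = np.array([factorial(i) * factorial(j) for i, j in MULTI_INDEX], float)
    mean = coeffs[0]
    partial_var = coeffs**2 * norms
    variance = partial_var[1:].sum()
    s1 = sum(v for v, (i, j) in zip(partial_var, MULTI_INDEX) if i > 0 and j == 0) / variance
    s2 = sum(v for v, (i, j) in zip(partial_var, MULTI_INDEX) if j > 0 and i == 0) / variance
    return mean, variance, s1, s2

# Toy "simulator": the output depends strongly on x1 and weakly on x2.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 2))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] ** 2 + 0.1 * X[:, 0] * X[:, 1]
mean, var, s1, s2 = moments_and_sensitivities(fit_pce(X, y))
print(f"mean={mean:.2f} var={var:.2f} S1={s1:.2f} S2={s2:.2f}")
```

The sensitivity indices are obtained purely by post-processing the fitted coefficients, which is the property the abstract highlights for pruning or reordering the surrogate.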
Contributors: Azari, Elham (Author) / Vrudhula, Sarma (Thesis advisor) / Fainekos, Georgios (Committee member) / Ren, Fengbo (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2021
Description

Coarse-Grained Reconfigurable Arrays (CGRAs) are emerging accelerators that promise low-power acceleration of compute-intensive loops in applications. The acceleration achieved by a CGRA relies on the CGRA compiler efficiently mapping the compute-intensive loops onto the CGRA. The CGRA mapping problem, being NP-complete, is solved in a two-step process: scheduling and mapping. The scheduling algorithm allocates time slots to the nodes of the data flow graph (DFG), and the mapping algorithm maps the scheduled nodes onto the processing elements (PEs) of the CGRA. On a mapping failure, the initiation interval (II) is increased, and a new schedule is obtained for the increased II. Most previous mapping techniques use the Iterative Modulo Scheduling (IMS) algorithm to find a schedule for a given II. Since IMS generates a resource-constrained as-soon-as-possible (ASAP) schedule, even with an increased II it tends to generate a similar schedule that is not mappable, and it does not explore the schedule space effectively. The problems encountered by IMS-based scheduling algorithms are explored, and an improved randomized scheduling algorithm for the application loops to be accelerated is proposed. When encountering a mapping failure for a given schedule, existing mapping algorithms either exit and retry the mapping anew, or recursively remove the previously mapped node to find a valid mapping (backtracking). Abandoning the mapping is extreme, but even backtracking may not be the best choice, since the root of the problem may not be the previous node. The challenges in existing algorithms are systematically analyzed, and a failure-aware mapping algorithm is presented. The loops in general-purpose applications are often complicated, i.e., loops with perfect and imperfect nests and loops with nested if-then-else's (conditionals). The existing hardware-software solutions for executing branches and conditions are inefficient. A co-design approach that efficiently executes complicated loops on a CGRA is proposed: the compiler transforms complex loops, maps them to the CGRA, and lays them out in memory in a specific manner, such that the hardware can fetch and execute the instructions from the correct path at runtime. Finally, an open-source CGRA compilation and simulation framework is presented. This framework is based on LLVM and gem5; it extracts the loops, maps them onto the CGRA architecture, and executes them on the CGRA as a co-processor to an ARM CPU.
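The scheduling step above starts from a minimum initiation interval (MII), tries to place each DFG node into a time slot modulo the II, and increases the II when placement fails. The sketch below is a heavily simplified, hypothetical version of that loop (resource-only MII, randomized node order, no recurrence or routing constraints), meant only to illustrate why randomizing the schedule explores more of the space than a deterministic ASAP order.

```python
import random

def res_mii(num_ops, num_pes):
    """Resource-constrained minimum II: ceil(ops per iteration / available PEs)."""
    return -(-num_ops // num_pes)

def randomized_topo_order(dfg, rng):
    """Random topological order of the DFG so repeated attempts differ."""
    indeg = {n: len(preds) for n, preds in dfg.items()}
    ready = [n for n, d in indeg.items() if d == 0]
    order = []
    while ready:
        rng.shuffle(ready)
        node = ready.pop()
        order.append(node)
        for m, preds in dfg.items():
            if node in preds:
                indeg[m] -= 1
                if indeg[m] == 0:
                    ready.append(m)
    return order

def try_schedule(dfg, num_pes, ii, rng):
    """dfg: {node: [predecessor nodes]}. Returns {node: time slot} or None.
    Each modulo slot can hold at most num_pes operations."""
    slot_load = [0] * ii
    schedule = {}
    for node in randomized_topo_order(dfg, rng):
        earliest = max((schedule[p] + 1 for p in dfg[node]), default=0)
        for t in range(earliest, earliest + 4 * ii):     # bounded search window
            if slot_load[t % ii] < num_pes:
                schedule[node] = t
                slot_load[t % ii] += 1
                break
        else:
            return None                                  # no free slot: failure
    return schedule

def schedule_loop(dfg, num_pes, attempts_per_ii=20, seed=0):
    rng = random.Random(seed)
    ii = res_mii(len(dfg), num_pes)
    while True:
        for _ in range(attempts_per_ii):
            sched = try_schedule(dfg, num_pes, ii, rng)
            if sched is not None:
                return ii, sched
        ii += 1                                          # relax the II and retry

# Toy DFG: a feeds b and c, which both feed d, on a 2-PE CGRA.
dfg = {"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"]}
print(schedule_loop(dfg, num_pes=2))
```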
Contributors: Balasubramanian, Mahesh (Author) / Shrivastava, Aviral (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Ren, Fengbo (Committee member) / Pozzi, Laura (Committee member) / Arizona State University (Publisher)
Created: 2021
Description

Deep neural network-based methods have been shown to achieve outstanding performance on object detection and classification tasks. Deep neural networks follow the "deeper model with deeper confidence" belief to gain higher recognition accuracy. However, reducing these networks' computational cost remains a challenge, which impedes their deployment on embedded devices. For instance, the intersection management of Connected Autonomous Vehicles (CAVs) requires running computationally intensive object recognition algorithms on low-power traffic cameras. This dissertation studies the effect of a dynamic hardware and software approach to address this issue. Characteristics of real-world applications can facilitate this dynamic adjustment and reduce the computation. Specifically, this dissertation starts with a dynamic hardware approach that adjusts itself based on the difficulty of the input and extracts deeper features only when needed. Next, an adaptive learning mechanism is studied that uses features extracted from previous inputs to improve system performance. Finally, a system (ARGOS) is proposed and evaluated that can run on embedded systems while maintaining the desired accuracy. This system adopts shallow features at inference time but can switch to deep features if higher accuracy is required. To improve performance, ARGOS distills temporal knowledge from deep features into the shallow system. Moreover, ARGOS further reduces computation by focusing on regions of interest. Response time and mean average precision are adopted as the metrics to evaluate the proposed ARGOS system.
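A common way to realize the shallow-to-deep switching described above is a confidence gate: run the cheap model first and escalate to the expensive model only when the cheap model is unsure. The sketch below illustrates that control flow with placeholder model callables and a hypothetical confidence threshold; it is not the ARGOS implementation.

```python
from dataclasses import dataclass
from typing import Any, Callable, Tuple

@dataclass
class CascadedDetector:
    shallow: Callable[[Any], Tuple[Any, float]]   # returns (detections, confidence)
    deep: Callable[[Any], Tuple[Any, float]]
    threshold: float = 0.7                        # hypothetical confidence gate
    stats: dict = None

    def __post_init__(self):
        self.stats = {"shallow_only": 0, "escalated": 0}

    def __call__(self, frame):
        detections, confidence = self.shallow(frame)
        if confidence >= self.threshold:
            self.stats["shallow_only"] += 1       # cheap path was good enough
            return detections
        self.stats["escalated"] += 1              # uncertain: pay for the deep model
        detections, _ = self.deep(frame)
        return detections

# Mock models standing in for real shallow/deep networks.
def tiny_net(frame):  return (["car"], 0.9 if frame % 3 else 0.4)
def big_net(frame):   return (["car", "pedestrian"], 0.95)

detector = CascadedDetector(shallow=tiny_net, deep=big_net)
results = [detector(frame) for frame in range(9)]
print(detector.stats)    # roughly one in three frames escalates in this mock stream
```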
Contributors: Farhadi, Mohammad (Author) / Yang, Yezhou (Thesis advisor) / Vrudhula, Sarma (Committee member) / Wu, Carole-Jean (Committee member) / Ren, Yi (Committee member) / Arizona State University (Publisher)
Created: 2022
Description

With the rapid development of both hardware and software, mobile devices, with their advantages in mobility, interactivity, and privacy, have enabled various applications, including social networking, mixed reality, entertainment, and authentication. In diverse forms such as smartphones, glasses, and watches, the number of mobile devices is expected to increase by 1 billion per year in the future. These devices not only generate and exchange small data such as GPS readings, but also large data including videos and point clouds. Such massive visual data presents many challenges for processing on mobile devices. First, continuously capturing and processing high-resolution visual data is energy-intensive and can drain the battery of a mobile device very quickly. Second, offloading data for edge or cloud computing helps, but users worry that their private data may be exposed to malicious developers. Third, interactivity and user experience are degraded if mobile devices cannot process large-scale visual data, such as off-device high-precision point clouds, in real time. To address these challenges, this work presents three solutions toward fine-grained control of visual data in mobile systems, revolving around two core ideas: enabling resolution-based tradeoffs and adopting split-process designs to protect visual data. In particular, this work introduces: (1) the Banner media framework, which removes resolution reconfiguration latency in the operating system to enable seamless dynamic resolution-based tradeoffs; (2) the LesnCap split-process application development framework, which protects users' visual privacy against malicious data collection in cloud-based Augmented Reality (AR) applications by isolating visual processing in a distinct process; and (3) a novel voxel grid schema that enables adaptive sampling at the edge device, so point clouds can be sampled flexibly for interactive 3D vision use cases across mobile devices and mobile networks. Evaluation in several mobile environments demonstrates that, by controlling visual data at a fine granularity, energy efficiency can be improved by 49% when switching between resolutions, visual privacy can be protected through split-process designs with negligible overhead, and point clouds can be delivered at high throughput while meeting various requirements. Thus, this work can enable more continuous mobile vision applications for the future of a new reality.
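The voxel grid schema above adapts how densely a point cloud is sampled before it is sent across the network. A minimal illustration of the underlying operation is voxel-grid downsampling: points are bucketed into cubic cells of a chosen size and each occupied cell is reduced to its centroid, so the cell size becomes the knob that trades detail for throughput. The NumPy sketch below uses hypothetical cell sizes, a random cloud, and a simple near/far split for the adaptive part; it is not the schema or data from the dissertation.

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """points: (N, 3) float array. Returns one centroid per occupied voxel."""
    keys = np.floor(points / voxel_size).astype(np.int64)       # voxel index per point
    _, inverse = np.unique(keys, axis=0, return_inverse=True)   # group points by voxel
    inverse = inverse.ravel()                                    # ensure a flat index array
    num_voxels = inverse.max() + 1
    sums = np.zeros((num_voxels, 3))
    counts = np.zeros(num_voxels)
    np.add.at(sums, inverse, points)                             # accumulate per-voxel sums
    np.add.at(counts, inverse, 1)
    return sums / counts[:, None]

rng = np.random.default_rng(1)
cloud = rng.uniform(0, 10, size=(100_000, 3))     # mock 10 m x 10 m x 10 m scene

# Adaptive sampling: a nearby interactive region keeps fine voxels, the rest is coarse.
dist = np.linalg.norm(cloud - [5, 5, 5], axis=1)
near, far = cloud[dist < 2.0], cloud[dist >= 2.0]
sampled = np.vstack([voxel_downsample(near, voxel_size=0.05),
                     voxel_downsample(far, voxel_size=0.50)])
print(f"{len(cloud)} points reduced to {len(sampled)} "
      f"({100 * len(sampled) / len(cloud):.1f}% of the original)")
```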
Contributors: Hu, Jinhan (Author) / LiKamWa, Robert (Thesis advisor) / Wu, Carole-Jean (Committee member) / Doupe, Adam (Committee member) / Jayasuriya, Suren (Committee member) / Arizona State University (Publisher)
Created: 2022