Matching Items (39)
Description
With the steady advancement of neural network research, new applications are continuously emerging. As a tool for test time reduction, neural networks provide a reliable method of identifying and applying correlations in datasets to speed data processing. By leveraging the power of a deep neural net, it is possible to record the motion of an accelerometer in response to an electrical stimulus and correlate that response with a trim code, reducing the total test time for such sensors. This reduction can be achieved by replacing traditional trimming methods, such as physical shaking or mathematical models, with a neural net that processes raw sensor data collected with the help of a microcontroller. With enough data, the neural net can process the raw responses in real time to predict the correct trim codes without requiring any additional information. Though not yet a complete replacement, the method shows promise given more extensive datasets and industry-level testing, and it has the potential to disrupt the current state of testing.
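The abstract does not give the network architecture, so the following is only a minimal sketch of the kind of regression model described: a small fully connected network that maps a raw accelerometer response to a predicted trim code. The 256-sample window, layer sizes, learning rate, and 0-255 trim-code range are illustrative assumptions, not values from the thesis.

```python
# Illustrative sketch only; sizes and ranges are assumptions (see above).
import numpy as np
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(256, 64),   # compress the raw response waveform
    nn.ReLU(),
    nn.Linear(64, 16),
    nn.ReLU(),
    nn.Linear(16, 1),     # regress a single trim-code value
)

def train(responses: np.ndarray, trim_codes: np.ndarray, epochs: int = 200) -> None:
    """responses: (N, 256) raw waveforms; trim_codes: (N,) known-good codes."""
    x = torch.tensor(responses, dtype=torch.float32)
    y = torch.tensor(trim_codes, dtype=torch.float32).unsqueeze(1)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()

def predict_trim_code(response: np.ndarray) -> int:
    """Round the regression output to the nearest valid trim code."""
    with torch.no_grad():
        raw = model(torch.tensor(response, dtype=torch.float32)).item()
    return int(np.clip(round(raw), 0, 255))   # assumed 8-bit trim code
```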
Contributors: Debeurre, Nicholas (Author) / Ozev, Sule (Thesis advisor) / Vrudhula, Sarma (Thesis advisor) / Kniffin, Margaret (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
The past decade has seen a tremendous surge in running machine learning (ML) functions on mobile devices, from mere novelty applications to now indispensable features for the next generation of devices. While mobile platform capabilities range widely, long battery life and reliability are common design concerns that are crucial to remain competitive. Consequently, state-of-the-art mobile platforms have become highly heterogeneous, combining powerful CPUs with GPUs to accelerate the computation of deep neural networks (DNNs), which are the most common structures used to perform ML operations. But traditional von Neumann architectures are not optimized for the high memory bandwidth and massively parallel computation that DNNs demand, which has propelled research into non-von Neumann architectures that can support these demands.

The re-imagining of computer architectures to perform efficient DNN computations requires focusing on the prohibitive demands presented by DNNs and alleviating them. The two central challenges for efficient computation are (1) large memory storage and movement due to the weights of the DNN and (2) massively parallel multiplications to compute the DNN output.

Introducing sparsity into the DNNs, where a certain percentage of either the weights or the outputs of the DNN are zero, greatly helps with both challenges. This, along with algorithm-hardware co-design to compress the DNNs, is demonstrated to provide efficient solutions that greatly reduce the power consumption of hardware that computes DNNs. Additionally, exploring emerging technologies such as non-volatile memories and 3-D stacking of silicon in conjunction with algorithm-hardware co-design architectures will pave the way for the next generation of mobile devices.

Towards the objectives stated above, our specific contributions include (a) an architecture based on a resistive crosspoint array that can update all stored values and compute matrix-vector multiplication in parallel within a single cycle, (b) a framework for training DNNs with block-wise sparsity to drastically reduce the memory storage and total number of computations required to compute the output of DNNs, (c) the exploration of hardware implementations of sparse DNNs and architectural guidelines to reduce power consumption for implementations in monolithic 3D integrated circuits, and (d) a prototype 65nm CMOS accelerator chip for long short-term memory (LSTM) networks trained with the proposed block-wise sparsity scheme.
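As a rough illustration of the block-wise sparsity idea in contribution (b), the sketch below partitions a weight matrix into fixed-size tiles and zeroes out the tiles with the smallest norms. The 8x8 block size and 75% sparsity are arbitrary choices for the example, and the thesis's scheme enforces this structure during training rather than pruning a trained matrix after the fact.

```python
import numpy as np

def blockwise_prune(weights: np.ndarray, block: int = 8, sparsity: float = 0.75) -> np.ndarray:
    """Zero out the weakest (block x block) tiles of a weight matrix.

    Illustrative only: block size and sparsity ratio are arbitrary here, and a
    real block-sparse training flow would learn weights under this structure
    instead of pruning afterwards.
    """
    rows, cols = weights.shape
    assert rows % block == 0 and cols % block == 0
    # View the matrix as a grid of (block x block) tiles and score each tile.
    tiles = weights.reshape(rows // block, block, cols // block, block)
    norms = np.linalg.norm(tiles, axis=(1, 3))          # one score per tile
    cutoff = np.quantile(norms, sparsity)
    keep = (norms > cutoff)[:, None, :, None]           # broadcast mask over tiles
    return (tiles * keep).reshape(rows, cols)

W = np.random.randn(64, 64)
W_sparse = blockwise_prune(W)
print("fraction of zero weights:", np.mean(W_sparse == 0.0))   # roughly 0.75
```

Storing only the surviving tiles (plus their indices) is what shrinks both the memory footprint and the number of multiplications in hardware.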
Contributors: Kadetotad, Deepak Vinayak (Author) / Seo, Jae-Sun (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Vrudhula, Sarma (Committee member) / Cao, Yu (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
The Internet of Things (IoT) is driving a vast amount of streaming data. However, even accounting for the growth of the cloud computing infrastructure, IoT devices will generate two orders of magnitude more data than centralized data center servers can process or store. This trend inevitably calls for offloading IoT data processing to a decentralized edge computing infrastructure. On the other hand, deep-learning-based applications have made great progress by taking advantage of heavy centralized computing resources to train large models for increasingly complicated tasks. Even though large-scale deep learning models perform well in terms of accuracy, their high computational complexity makes it impossible to offload them onto edge devices for real-time inference and timely response. To enable timely IoT services on edge devices, this dissertation addresses the challenge from two perspectives. On the hardware side, a new field-programmable gate array (FPGA)-based framework for binary neural networks and an application-specific integrated circuit (ASIC) accelerator for natural scene text interpretation are proposed, with awareness of the computing-resource and power constraints at the edge. On the algorithm side, this work presents both a methodology for building more compact models and techniques for finding better computation-accuracy trade-offs for existing models.
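To make the binary-neural-network framework mentioned above concrete, here is a minimal sketch of the core operation such accelerators exploit: with weights and activations constrained to +/-1, every multiply becomes an XNOR and every dot product a popcount, which maps cheaply onto FPGA logic. The layer sizes are made up for the example, and the arithmetic is kept in +/-1 form rather than bit-packed for clarity.

```python
import numpy as np

def binarize(x: np.ndarray) -> np.ndarray:
    """Map real values to {-1, +1}, the representation binary neural networks use."""
    return np.where(x >= 0, 1, -1).astype(np.int8)

def binary_dense(activations: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """One binary fully connected layer (integer accumulation only).

    On an FPGA the +/-1 values are packed into bit vectors so each product is
    an XNOR and each sum a popcount; here plain integer matmul stands in.
    """
    a = binarize(activations).astype(np.int32)   # (batch, in_features)
    w = binarize(weights).astype(np.int32)       # (in_features, out_features)
    return a @ w

x = np.random.randn(4, 128)                      # example sizes, not from the dissertation
W = np.random.randn(128, 32)
print(binary_dense(x, W).shape)                  # (4, 32)
```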
Contributors: Li, Yixing (Author) / Ren, Fengbo (Thesis advisor) / Vrudhula, Sarma (Committee member) / Seo, Jae-Sun (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
Many real-world engineering problems require simulations to evaluate the design objectives and constraints. Often, due to the complexity of the system model, simulations can be prohibitive in terms of computation time. One approach to overcome this issue is to construct a surrogate model, which approximates the original model. The focus of this work is on data-driven surrogate models, which empirically approximate the output given the input parameters. Recently, neural networks (NNs) have re-emerged as a popular method for constructing data-driven surrogate models. Although NNs achieve excellent accuracy and are widely used, they pose their own challenges. This work addresses two common challenges: the need for (1) hardware acceleration and (2) uncertainty quantification (UQ) in the presence of input variability.

The high demand for inference of deep NNs on cloud servers and edge devices calls for the design of low-power custom hardware accelerators. The first part of this work describes the design of an energy-efficient long short-term memory (LSTM) accelerator. The overarching goal is to aggressively reduce the power consumption and area of the LSTM components using approximate computing, and then use architectural-level techniques to boost the performance. The proposed design is synthesized and placed and routed as an application-specific integrated circuit (ASIC). The results demonstrate that this accelerator is 1.2X more energy-efficient and 3.6X more area-efficient than the baseline LSTM.

In the second part of this work, a robust framework is developed based on an alternative data-driven surrogate model, referred to as polynomial chaos expansion (PCE), to address UQ. In contrast to many existing approaches, no assumptions are made on the elements of the function space, and UQ is a function of the expansion coefficients. Moreover, the sensitivity of the output with respect to any subset of the input variables can be computed analytically by post-processing the PCE coefficients. This provides a systematic and incremental method for pruning or changing the order of the model. The framework is evaluated on several real-world applications from different domains and is extended to classification tasks as well.
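As a small, self-contained illustration of the PCE idea in the second part (not the thesis's actual framework, which handles multivariate inputs and classification), the sketch below fits a one-dimensional Hermite expansion by least squares and then reads the output mean and variance directly off the coefficients, which is the sense in which UQ becomes a function of the expansion coefficients. The toy target function and sample count are made up for the example.

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermevander

def fit_pce(xi: np.ndarray, y: np.ndarray, order: int = 4) -> np.ndarray:
    """Least-squares fit of a 1-D polynomial chaos expansion y ~ sum_k c_k He_k(xi),
    where xi is a standard normal input and He_k are probabilists' Hermite polynomials."""
    Psi = hermevander(xi, order)                     # basis evaluated at the samples
    coeffs, *_ = np.linalg.lstsq(Psi, y, rcond=None)
    return coeffs

def pce_mean_var(coeffs: np.ndarray):
    """Moments come straight from the coefficients:
    E[y] = c_0 and Var[y] = sum_{k>=1} c_k^2 * k!  (since E[He_k^2] = k!)."""
    mean = coeffs[0]
    var = sum(c**2 * factorial(k) for k, c in enumerate(coeffs) if k > 0)
    return mean, var

# Toy surrogate target: y = 2 + 3*xi + 0.5*xi**2 with xi ~ N(0, 1).
rng = np.random.default_rng(0)
xi = rng.standard_normal(500)
y = 2.0 + 3.0 * xi + 0.5 * xi**2
print(pce_mean_var(fit_pce(xi, y)))   # approximately (2.5, 9.5)
```

In the multivariate case, grouping the squared coefficients by which inputs their basis terms involve yields Sobol-style sensitivity indices, which is the kind of analytical post-processing the abstract refers to.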
Contributors: Azari, Elham (Author) / Vrudhula, Sarma (Thesis advisor) / Fainekos, Georgios (Committee member) / Ren, Fengbo (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
The holy grail of computer hardware across all market segments has been to sustain performance improvement at the same pace as silicon technology scales. As the technology scales and the size of transistors shrinks, the power consumption and energy usage per transistor decrease. On the other hand, transistor density increases significantly with technology scaling. Due to technology factors, the reduction in power consumption per transistor is not sufficient to offset the increase in power consumption per unit area. Therefore, to improve performance, energy efficiency must be addressed at all design levels, from the circuit level to the application and algorithm levels.

At the architectural level, one promising approach is to populate the system with hardware accelerators, each optimized for a specific task. One drawback of hardware accelerators is that they are not programmable; therefore, their utilization can be low, as each performs only one specific function. Using software-programmable accelerators is an alternative approach to achieve both high energy efficiency and programmability. Due to their intrinsic characteristics, software accelerators can exploit both instruction-level parallelism and data-level parallelism.

A Coarse-Grained Reconfigurable Architecture (CGRA) is a software-programmable accelerator consisting of a number of word-level functional units. Motivated by the promising characteristics of software-programmable accelerators, the potential of CGRAs in future computing platforms is studied and an end-to-end CGRA research framework is developed. This framework consists of three different aspects: CGRA architectural design, integration in a computing system, and the CGRA compiler. First, the design and implementation of a CGRA and its instruction set is presented. This design is then modeled in a cycle-accurate system simulator. The simulation platform enables us to investigate several problems associated with a CGRA when it is deployed as an accelerator in a computing system. Next, the problem of mapping a compute-intensive region of a program to CGRAs is formulated. From this formulation, several efficient algorithms are developed which utilize the CGRA's scarce resources effectively to minimize the running time of input applications. Finally, these mapping algorithms are integrated into a compiler framework to construct a compiler for CGRAs.
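For a sense of what the mapping problem involves, here is a deliberately simplified greedy sketch: place the operations of a dataflow graph onto a small set of word-level functional units, one operation per unit per cycle, respecting data dependences. The thesis's algorithms are far more sophisticated (handling routing, recurrences, and loop pipelining); this toy scheduler, with made-up operation names and a 4-unit array, only illustrates the resource and dependence constraints being optimized.

```python
from collections import defaultdict

def map_to_cgra(ops, deps, n_fus=4):
    """Greedy earliest-cycle placement of a dependence DAG onto n_fus functional units.

    ops: list of operation names; deps: dict mapping an op to its predecessor ops.
    Returns {op: (cycle, functional_unit)}. Assumes deps form a DAG over ops.
    """
    placement = {}
    busy = defaultdict(set)                       # cycle -> set of occupied FU indices
    remaining = list(ops)
    while remaining:
        for op in list(remaining):
            preds = deps.get(op, [])
            if any(p not in placement for p in preds):
                continue                          # inputs not placed yet
            cycle = 1 + max((placement[p][0] for p in preds), default=-1)
            while len(busy[cycle]) >= n_fus:      # every FU taken this cycle
                cycle += 1
            fu = min(set(range(n_fus)) - busy[cycle])
            placement[op] = (cycle, fu)
            busy[cycle].add(fu)
            remaining.remove(op)
    return placement

# Example: the expression d = (a*b) + (a+c) as a three-node dataflow graph.
print(map_to_cgra(["a*b", "a+c", "sum"], {"sum": ["a*b", "a+c"]}))
# {'a*b': (0, 0), 'a+c': (0, 1), 'sum': (1, 0)}
```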
Contributors: Hamzeh, Mahdi (Author) / Vrudhula, Sarma (Thesis advisor) / Gopalakrishnan, Kailash (Committee member) / Shrivastava, Aviral (Committee member) / Wu, Carole-Jean (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
As the Internet of Things continues to expand, not only must our computing power grow alongside it, our very approach must evolve. While the recent trend has been to centralize our computing resources in the cloud, it now looks beneficial to push more computing power towards the "edge" with so-called edge computing, reducing the immense strain on cloud servers and the latency experienced by IoT devices. A new computing paradigm also brings new opportunities for innovation, and one such innovation could be the use of FPGAs as edge servers. In this research project, I learn the design flow for developing OpenCL kernels and custom FPGA BSPs. Using these tools, I investigate the viability of using FPGAs as standalone edge computing devices. I conclude that, although the technology is a great fit, the current necessity for dynamically reprogrammable FPGAs to be closely coupled with a host CPU is holding them back from this purpose. I propose a modification to the architecture of the Intel Arria 10 GX that would allow it to be decoupled from its host CPU, allowing it to truly serve as a viable edge computing solution.
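For readers unfamiliar with the OpenCL flow mentioned above, the sketch below is a minimal host-plus-kernel example using pyopencl; it is not taken from the project. On an Intel FPGA the kernel would normally be compiled offline with the FPGA OpenCL toolchain and the host would load the resulting bitstream rather than building the source at run time, but the division of labor between host CPU and accelerator is the same, which is exactly the coupling the conclusion above calls out.

```python
# Minimal OpenCL vector-add, for illustration only (uses pyopencl).
import numpy as np
import pyopencl as cl

kernel_src = """
__kernel void vadd(__global const float *a, __global const float *b, __global float *out) {
    int i = get_global_id(0);
    out[i] = a[i] + b[i];
}
"""

ctx = cl.create_some_context()                # host CPU picks and drives the device
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags

a = np.random.rand(1024).astype(np.float32)
b = np.random.rand(1024).astype(np.float32)
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

prg = cl.Program(ctx, kernel_src).build()     # on an FPGA this step happens offline
prg.vadd(queue, a.shape, None, a_buf, b_buf, out_buf)

result = np.empty_like(a)
cl.enqueue_copy(queue, result, out_buf)
assert np.allclose(result, a + b)
```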
Contributors: Barth, Brandon Albert (Author) / Ren, Fengbo (Thesis director) / Vrudhula, Sarma (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
Edge computing is an emerging field that improves upon cloud computing by moving the service from a centralized server to several decentralized servers that are closer to the end user, decreasing the latency, bandwidth, and cost requirements. Field-programmable gate array (FPGA) devices are highly reconfigurable and excel at highly parallelized tasks, making them popular in many applications, including digital signal processing and cryptography, while also making them a great candidate for edge computation. The purpose of this project was to explore existing board support packages (BSPs) for the Arria 10 GX FPGA and propose a BSP design with multiple partial reconfiguration regions to better support the use of FPGAs in edge computing. In this project, the general OpenCL development flow was studied, the OpenCL workflow for Altera/Intel FPGAs was researched, the reference OpenCL BSP was explored to understand the connections between its modules, and a customized BSP with two partial reconfiguration regions was proposed. The existing BSP was explored using the Intel Quartus Prime software suite, and the block diagrams for the existing and proposed designs were created using Microsoft Visio.
Contributors: Lam, Evan (Author) / Ren, Fengbo (Thesis director) / Vrudhula, Sarma (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
Real-Time Operating Systems are used in a variety of applications ranging from autonomous vehicles, flight controllers, and energy management systems to pacemakers, satellite tracking systems, amateur robotics, and much more. It turns out that while general-purpose computers can perform tasks quite quickly, the execution time of a given process varies noticeably between different executions. Execution time variation poses a significant challenge for many computer-controlled systems that operate in the real world, such as robots, autonomous vehicles, drones, and traffic signals. Execution time variation matters in these systems because they must interact with the real world and perform actions at the proper times; executing these tasks at other times can have effects ranging from a minor inconvenience to catastrophic failure. Many of these real-time systems, such as a pacemaker, are built on single-board computers. One single-board computer that is popular among hobbyists due to its form factor, cost, and performance is the Raspberry Pi, which uses an ARM-based processor. To provide a Real-Time Operating System for this single-board computer, this paper presents Jobbed, a single-core Real-Time Operating System with a fixed-priority preemptive scheduler, targeted at the Raspberry Pi 2B. In this paper, we present the algorithmic structure behind this system and compare it to the Raspbian Operating System in an array of performance and behavioral tests targeted at proper Real-Time Operating Systems.
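The following is a toy Python model of the scheduling policy described, not Jobbed's implementation (which runs bare-metal on the Pi 2B): among ready tasks, always run the one with the highest fixed priority, preempting the current task when a higher-priority task becomes ready. The task names, the lower-number-is-higher-priority convention, and the API are illustrative assumptions.

```python
import heapq

class Task:
    def __init__(self, name: str, priority: int):
        self.name, self.priority = name, priority   # lower number = higher priority (assumed)

class FixedPriorityScheduler:
    """Models only the scheduling decision, not context switches or interrupts."""

    def __init__(self):
        self.ready = []        # min-heap of (priority, tie-breaker, task)
        self.running = None

    def make_ready(self, task: Task) -> None:
        heapq.heappush(self.ready, (task.priority, id(task), task))
        self._reschedule()

    def _reschedule(self) -> None:
        if not self.ready:
            return
        top = self.ready[0][2]
        # Preempt only when a strictly higher-priority task is ready.
        if self.running is None or top.priority < self.running.priority:
            if self.running is not None:
                heapq.heappush(self.ready, (self.running.priority, id(self.running), self.running))
            heapq.heappop(self.ready)
            self.running = top
            print("running:", top.name)

s = FixedPriorityScheduler()
s.make_ready(Task("logger", priority=5))          # starts running
s.make_ready(Task("motor_control", priority=1))   # preempts the logger immediately
```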
Contributors: Cunningham, Christian (Author) / Shrivastava, Aviral (Thesis director) / Vrudhula, Sarma (Committee member) / Barrett, The Honors College (Contributor) / Department of Physics (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2022-05
Description

In this thesis, I discuss the development of a novel physical design flow introducing standard-cell neurons for ASIC design. Standard-cell neurons are implemented on silicon as a circuit that realizes a threshold function. Each cell contains flash transistors, the threshold voltages of which correspond to the weights of the threshold function. Since the threshold voltages are programmed after fabrication, any sequential logic containing a standard-cell neuron is a logical black box upon delivery to the foundry. Additionally, previous research has shown significant reductions in delay, power, and area from the use of these flash-based threshold logic (FTL) cells. This paper aims to reinforce that prior research by demonstrating the first automatically synthesized, placed, and routed secure RISC-V core.
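To make the notion of a threshold function concrete, the snippet below evaluates the Boolean function a standard-cell neuron realizes: output 1 exactly when the weighted sum of the binary inputs reaches the threshold. In the actual cell the weights are stored as flash-transistor threshold voltages programmed after fabrication; the integer weights, the majority-gate example, and the function name here are illustrative only.

```python
import numpy as np

def threshold_cell(inputs, weights, threshold) -> int:
    """Return 1 iff the weighted sum of binary inputs meets the threshold."""
    return int(np.dot(inputs, weights) >= threshold)

# Example: a 3-input majority gate is the threshold function with weights
# [1, 1, 1] and threshold 2.
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            print(a, b, c, "->", threshold_cell([a, b, c], [1, 1, 1], 2))
```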

Contributors: Grier, Willem (Author) / Vrudhula, Sarma (Thesis director) / Singh, Gian (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor) / Dean, W.P. Carey School of Business (Contributor)
Created: 2022-12