Matching Items (3)

Description
Coarse-Grained Reconfigurable Architectures (CGRAs) are a promising fabric for improving the performance and power-efficiency of computing devices. CGRAs are composed of components that are well-optimized for executing loops, and the rotating register file is one such component. Because register indices in a rotating register file shift with every loop iteration, it is very challenging, if possible at all, to hold and properly index memory addresses (pointers) and static values. In this thesis, different structures for CGRA register files are investigated and experimentally compared in terms of the performance of mapped applications, design frequency, and area. It is shown that a register file that can be logically partitioned into rotating and non-rotating regions is an excellent choice: it imposes minimal restrictions on the underlying CGRA mapping algorithm while utilizing resources efficiently.
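
A minimal sketch of how a logically partitioned register file can resolve indices is shown below; the class shape, the split point, and the per-iteration rotation hook are illustrative assumptions, not the specific design evaluated in the thesis:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Sketch of a register file logically split into a rotating region
// [0, split) and a non-rotating region [split, size). The rotating
// region remaps logical indices by an offset that the CGRA advances
// once per loop iteration; the non-rotating region is indexed
// directly, so pointers and static values keep a stable location.
class PartitionedRegisterFile {
public:
    PartitionedRegisterFile(std::size_t size, std::size_t split)
        : regs_(size, 0), split_(split), rotation_(0) {}

    // Call at each loop-iteration boundary to advance the rotating window.
    void rotate() { rotation_ = (rotation_ + 1) % split_; }

    std::uint64_t read(std::size_t logical) const { return regs_[physical(logical)]; }
    void write(std::size_t logical, std::uint64_t v) { regs_[physical(logical)] = v; }

private:
    std::size_t physical(std::size_t logical) const {
        assert(logical < regs_.size());
        if (logical < split_)                       // rotating region
            return (logical + rotation_) % split_;
        return logical;                             // non-rotating region
    }

    std::vector<std::uint64_t> regs_;
    std::size_t split_;     // boundary between the two regions
    std::size_t rotation_;  // current rotation offset
};
```

Loop-carried values placed in the rotating region still get a fresh register each iteration, while anything placed above the split is unaffected by rotation.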
Contributors: Saluja, Dipal (Author) / Shrivastava, Aviral (Thesis advisor) / Lee, Yann-Hang (Committee member) / Wu, Carole-Jean (Committee member) / Arizona State University (Publisher)
Created: 2014

Description
In this thesis I introduce a new direction for computing based on nonlinear chaotic dynamics. The main idea is that the rich dynamics of a chaotic system enable us to (1) build better computers that have a flexible instruction set, and (2) carry out computations that conventional computers are not good at. I start from the theory, explaining how one can build a computing logic block using a chaotic system, and then introduce a new theoretical analysis for chaos computing. Specifically, I demonstrate how unstable periodic orbits, and a model based on them, explain and predict how, and how well, a chaotic system can compute. Furthermore, since unstable periodic orbits and their stability measures (in terms of eigenvalues) can be extracted from experimental time series, I develop a time-series technique for modeling and predicting the chaos-computing capability of a given chaotic system.

After building a theoretical framework for chaos computing, I proceed to the architecture of these chaos-computing blocks, describing how one can arrange and organize them to build a computer. I propose a brand-new computer architecture based on chaos computing, which shifts the limits of conventional computers by introducing a flexible instruction set: the user can load a desired instruction set into the computer to reconfigure it into an implementation of that instruction set.

Apart from the direct application of chaos theory to generic computation, I explain its application to speech processing and introduce a novel use of chaos theory in speech coding and synthesis. More specifically, I demonstrate how a chaotic system can model the natural turbulent flow of air in the human speech production system, and how chaotic orbits can be used to excite a vocal tract model. Finally, as another approach to building computing systems from nonlinear dynamics, the idea of Logical Stochastic Resonance is studied and adapted to an autoregulatory gene network in the bacteriophage λ.
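
As a toy illustration of the chaos-computing idea rather than the system studied in the thesis, the sketch below encodes two logic inputs as perturbations to a logistic map's state, evolves the map, and thresholds the result; a brute-force search over the control parameters then "programs" the element to behave as a NOR gate, echoing the flexible-instruction-set theme. The map, the encoding, and all constants are illustrative assumptions:

```cpp
#include <cstdio>

// Toy model of a chaotic logic block: two logic inputs perturb the
// initial state of a logistic map, the map is iterated a few times,
// and the final state is thresholded to produce the output bit.
// Re-tuning (x0, delta, thresh) "reprograms" the same element into a
// different gate: the flexible-instruction-set idea in miniature.
static int chaoticGate(int a, int b, double x0, double delta, double thresh) {
    double x = x0 + (a + b) * delta;   // encode inputs as state perturbations
    for (int i = 0; i < 3; ++i)        // let the chaotic dynamics evolve
        x = 4.0 * x * (1.0 - x);       // logistic map in its chaotic regime
    return x > thresh ? 1 : 0;         // decode the output by thresholding
}

int main() {
    // Brute-force search for control parameters that realize a NOR gate
    // (outputs 1, 0, 0, 0 for inputs 00, 01, 10, 11).
    const int nor[4] = {1, 0, 0, 0};
    for (double x0 = 0.0; x0 < 0.5; x0 += 0.01)
        for (double d = 0.0; d < 0.25; d += 0.01)
            for (double t = 0.1; t < 0.9; t += 0.01) {
                bool ok = true;
                for (int in = 0; in < 4 && ok; ++in)
                    ok = chaoticGate(in >> 1, in & 1, x0, d, t) == nor[in];
                if (ok) {
                    std::printf("NOR realized at x0=%.2f delta=%.2f thresh=%.2f\n",
                                x0, d, t);
                    return 0;
                }
            }
    std::printf("no parameters found in the searched grid\n");
    return 0;
}
```

Re-running the search with a different target truth table reconfigures the very same chaotic element into a different gate, which is the essence of the flexibility claim.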
Contributors: Kia, Behnam (Author) / Ditto, William (Thesis advisor) / Huang, Liang (Committee member) / Lai, Ying-Cheng (Committee member) / Helms Tillery, Stephen (Committee member) / Arizona State University (Publisher)
Created: 2011

Description
General-purpose processors propel the advances and innovations that are the subject of humanity's many endeavors. Catering to this demand, chip-multiprocessors (CMPs) and general-purpose graphics processing units (GPGPUs) have seen many high-performance innovations in their architectures. With these advances, the memory subsystem has become the performance- and energy-limiting aspect of CMPs and GPGPUs alike. This dissertation identifies and mitigates the key performance and energy-efficiency bottlenecks in the memory subsystem of general-purpose processors via novel, practical microarchitecture and system-architecture solutions.

Addressing the important Last-Level Cache (LLC) management problem in CMPs, I observe that LLC management decisions made in isolation, as in prior proposals, often lead to sub-optimal system performance. I demonstrate that to maximize system performance, it is essential to manage the LLC while remaining cognizant of its interaction with the system's main memory. I propose ReMAP, which reduces the net memory access cost by evicting cache lines that either have no reuse or have a low memory access cost. ReMAP improves the performance of the CMP system by as much as 13%, and by 6.5% on average.
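
The following is a simplified sketch of what cost-aware victim selection of this flavor can look like; the predictor interface, the per-line cost field, and the policy details are illustrative assumptions, not ReMAP's actual mechanism:

```cpp
#include <cstddef>
#include <cstdint>
#include <limits>
#include <vector>

// Illustrative cost-aware victim selection for one LLC set: evict a
// line predicted dead if one exists; otherwise evict the line that
// would be cheapest to re-fetch from main memory. The predictor bit
// and per-line cost estimate are assumed to be supplied elsewhere.
struct CacheLine {
    std::uint64_t tag;
    bool predictedDead;      // reuse predictor: line won't be touched again
    std::uint32_t memCost;   // estimated cycles to re-fetch this line
};

std::size_t pickVictim(const std::vector<CacheLine>& set) {
    std::size_t victim = 0;
    std::uint32_t bestCost = std::numeric_limits<std::uint32_t>::max();
    for (std::size_t i = 0; i < set.size(); ++i) {
        if (set[i].predictedDead)
            return i;                      // dead line: a free eviction
        if (set[i].memCost < bestCost) {   // otherwise minimize re-fetch cost
            bestCost = set[i].memCost;
            victim = i;
        }
    }
    return victim;
}
```

The key departure from pure recency-based replacement is that the state of the memory system, not just the reuse pattern, influences which line leaves the cache.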

Rather than the LLC, it is the L1 data cache that has the most pronounced impact on GPGPU performance, acting as the bandwidth filter for the rest of the memory subsystem. Prior work has shown that the severely constrained data cache capacity in GPGPUs leads to sub-optimal performance. In this thesis, I propose two novel techniques that address the GPGPU data cache capacity problem. First, I propose ID-Cache, which performs effective cache bypassing and cache-line size selection to improve cache capacity utilization. Next, I propose LATTE-CC, which exploits the GPU's latency tolerance and adaptively compresses the data stored in the data cache, thereby increasing its effective capacity. ID-Cache and LATTE-CC achieve speedups of 71% and 19.2%, respectively, across a wide variety of GPGPU applications.
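
As a rough sketch of the kind of decision an L1 bypassing scheme must make (the counters, thresholds, and per-load-PC indexing below are invented for illustration and are not ID-Cache's actual design):

```cpp
#include <cstdint>

// Invented per-load-PC statistics: how many of this load's cache lines
// were evicted, and how many distinct words were touched before eviction.
struct LoadPCStats {
    std::uint32_t linesObserved;
    std::uint32_t wordsUsedTotal;
};

// Bypass the small L1 when a load's lines historically go almost unused,
// so the cache keeps lines that actually filter bandwidth. The history
// length and utilization threshold are arbitrary illustrative values.
bool shouldBypassL1(const LoadPCStats& s) {
    if (s.linesObserved < 16) return false;   // not enough history yet
    std::uint32_t avgWordsUsed = s.wordsUsedTotal / s.linesObserved;
    return avgWordsUsed <= 2;                 // poor utilization: bypass
}
```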

Complementing the aforementioned microarchitecture techniques, I identify the need for system-architecture innovations to sustain the performance scalability of GPGPUs in the face of a slowing Moore's Law. I propose a novel GPU architecture called the Multi-Chip-Module GPU (MCM-GPU) that integrates multiple GPU modules to form a single logical GPU. With intelligent memory subsystem optimizations tailored for MCM-GPUs, it achieves within 7% of the performance of a similar, but hypothetical, monolithic-die GPU. Taking a step further, I present an in-depth study of the energy-efficiency characteristics of future MCM-GPUs and demonstrate that the inherent non-uniform memory access (NUMA) side-effects form their key energy-efficiency bottleneck.
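
A toy address map illustrates where the NUMA side-effects come from: once memory pages are distributed across GPU modules, any access from a non-home module pays an inter-module link penalty. The page size, module count, and energy numbers below are arbitrary illustrative values, not measurements from the dissertation:

```cpp
#include <cstdint>

// Toy MCM-GPU address map: memory pages are interleaved across the GPU
// modules in the package. An access is local (cheap) when the requesting
// module owns the page, and otherwise pays the inter-module link, which
// is the non-uniform memory access effect at issue.
constexpr std::uint64_t kPageBits = 12;  // 4 KiB pages (illustrative)
constexpr int kNumModules = 4;           // four GPU modules (illustrative)

int homeModule(std::uint64_t addr) {
    return static_cast<int>((addr >> kPageBits) % kNumModules);
}

// Per-access energy estimate in arbitrary units; the link penalty is a
// made-up number chosen only to make the local/remote asymmetry visible.
double accessEnergy(std::uint64_t addr, int requestingModule) {
    const double kLocal = 1.0, kRemoteExtra = 3.0;
    return homeModule(addr) == requestingModule ? kLocal : kLocal + kRemoteExtra;
}
```

Under such a map, the fraction of remote accesses directly scales the energy cost of a workload, which is why placement and memory-subsystem optimizations matter so much for MCM-GPUs.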

In summary, this thesis offers key insights into the performance and energy-efficiency bottlenecks in CMPs and GPGPUs, which can guide future architects towards developing high-performance and energy-efficient general-purpose processors.
Contributors: Arunkumar, Akhil (Author) / Wu, Carole-Jean (Thesis advisor) / Shrivastava, Aviral (Committee member) / Lee, Yann-Hang (Committee member) / Bolotin, Evgeny (Committee member) / Arizona State University (Publisher)
Created: 2018