Description
With the advent of GPGPU computing, many applications are being accelerated using the CUDA programming paradigm. Speedups of roughly 10x to 100x can be achieved simply by porting an application to the GPU and running its parallel portions on the GPU's many-core SIMT (Single Instruction, Multiple Thread) architecture. For optimal performance, however, it is necessary to ensure that all GPU resources are used efficiently and that latencies in the application are minimized. This requires monitoring the algorithm's hardware usage to diagnose the compute and memory bottlenecks in the implementation. This thesis analyzes how a CUDA implementation of the BLIINDS-II algorithm maps onto the underlying GPU hardware and presents a Kepler-architecture-specific solution that uses the shuffle instruction, via the CUB library, to address the two major bottlenecks in the algorithm. Experiments were conducted to demonstrate the advantage of using the shuffle instruction in the algorithm over using shared memory only as a buffer to global memory. With the new CUB-based implementation of the BLIINDS-II algorithm, a speedup of around 13.7% was achieved.
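To illustrate the kind of optimization the abstract refers to, below is a minimal sketch (not the thesis code, and not the CUB library itself) of a warp-level sum reduction that exchanges partial sums directly between registers using the shuffle intrinsic instead of staging them in shared memory. It uses the modern __shfl_down_sync form; Kepler-era code would have used __shfl_down. The kernel and variable names are illustrative.

#include <cstdio>
#include <cuda_runtime.h>

// Warp-level sum reduction: partial sums move between lanes via registers,
// with no shared-memory staging. Lane 0 ends up holding the warp's total.
__inline__ __device__ float warpReduceSum(float val) {
    for (int offset = warpSize / 2; offset > 0; offset /= 2)
        val += __shfl_down_sync(0xffffffff, val, offset);
    return val;
}

__global__ void sumKernel(const float* in, float* out, int n) {
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    float val = (idx < n) ? in[idx] : 0.0f;

    val = warpReduceSum(val);

    // One atomic per warp instead of one per thread.
    if ((threadIdx.x & (warpSize - 1)) == 0)
        atomicAdd(out, val);
}

int main() {
    const int n = 1 << 20;
    float *d_in, *d_out;
    cudaMalloc(&d_in, n * sizeof(float));
    cudaMalloc(&d_out, sizeof(float));
    cudaMemset(d_out, 0, sizeof(float));

    // Fill the input with ones so the expected sum equals n.
    float* h_in = new float[n];
    for (int i = 0; i < n; ++i) h_in[i] = 1.0f;
    cudaMemcpy(d_in, h_in, n * sizeof(float), cudaMemcpyHostToDevice);

    sumKernel<<<(n + 255) / 256, 256>>>(d_in, d_out, n);

    float h_out = 0.0f;
    cudaMemcpy(&h_out, d_out, sizeof(float), cudaMemcpyDeviceToHost);
    printf("sum = %f (expected %d)\n", h_out, n);

    cudaFree(d_in);
    cudaFree(d_out);
    delete[] h_in;
    return 0;
}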

    Details

    Title
    • Analysis of hardware usage of shuffle instruction based performance optimization in the BLIINDS-II image quality assessment algorithm
    Date Created
    2017
    Resource Type
  • Text
    Note
    • thesis
      Partial requirement for: M.S., Arizona State University, 2017
    • bibliography
      Includes bibliographical references (pages 91-96)
    • Field of study: Engineering

    Citation and reuse

    Statement of Responsibility

    by Ameya Wadekar
