
Improving code overlay performance by pre-fetching in scratch pad memory systems

Full metadata

Description

Advances in electronics technology and innovative manufacturing processes have driven the semiconductor industry toward extensive miniaturization and ever greater integration of chip design. One consequence of this sustained evolution has been the growing relative cost of accessing off-chip components, with external memory being one of the dominant contributors. In embedded systems and applications, where power consumption and cost are crucial factors, the use of on-chip Scratch Pad Memories (SPMs) has proven to be a good alternative to caches. SPMs are more efficient than on-chip caches in a wide variety of aspects, including energy consumption, power dissipation, speed, area, and timing predictability. At the same time, however, they entail explicit software-level management. Specifically, system performance depends on the overlay scheme used to map code and data onto the size-limited SPM. For applications with large code sizes, the overlay overhead becomes significant. This work evaluates and implements pre-fetching as a performance improvement technique for SPMs. It is implemented in the code overlay manager provided with IBM's Cell Broadband Engine (CBE) Synergistic Processing Unit (SPU) compiler, spu-gcc. Four different approaches proposed in this work use profiling information to predict pre-fetch calls. The pre-fetching technique achieves considerable performance improvement by hiding part of the code overlay cost behind active computation, fetching the required code segment into the SPM in advance. Experimental results supporting this claim were obtained on the IBM Cell architecture platform, with substantial gains of more than 30%.
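The following is a minimal sketch of the pre-fetching idea summarized above: a transfer of the next code segment is started before it is needed, so the overlay cost overlaps with ongoing computation. The names overlay_prefetch, overlay_call, and SEGMENT_FILTER are illustrative assumptions, not the actual spu-gcc overlay manager API or the profiling-driven placement used in the thesis.

/*
 * Hedged sketch of prefetch-based overlay hiding; all overlay-manager
 * names here are hypothetical stand-ins for illustration only.
 */
#include <stdio.h>

#define SEGMENT_FILTER 3            /* hypothetical overlay segment id */

/* Hypothetical: start an asynchronous transfer of a code segment into SPM. */
static void overlay_prefetch(int segment_id)
{
    printf("prefetch: segment %d transfer started\n", segment_id);
}

/* Hypothetical: ensure the segment is resident in SPM, then run its code. */
static void overlay_call(int segment_id)
{
    printf("call: segment %d assumed resident, executing\n", segment_id);
}

void process_block(int *data, int n)
{
    /* Profiling predicts SEGMENT_FILTER is needed right after this loop,
     * so its transfer is issued early and overlaps the computation below
     * instead of stalling at the call site. */
    overlay_prefetch(SEGMENT_FILTER);

    for (int i = 0; i < n; i++)
        data[i] = data[i] * 2 + 1;  /* active computation hiding the transfer */

    /* Ideally the code segment is already in SPM when control reaches the
     * overlay call, so most of the overlay overhead is hidden. */
    overlay_call(SEGMENT_FILTER);
}

int main(void)
{
    int buf[8] = {0};
    process_block(buf, 8);
    return 0;
}

The design point is the same one the abstract describes: the earlier the prefetch is issued relative to the overlay call, the more computation is available to cover the transfer latency, which is why the thesis uses profiling information to decide where prefetch calls are placed.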

Date Created
2011
Contributors
  • Ghadge, Nikhil Dadasaheb (Author)
  • Chatha, Dr. Karamvir (Thesis advisor)
  • Shrivastava, Dr. Aviral (Committee member)
  • Lee, Dr. Yann-Hang (Committee member)
  • Arizona State University (Publisher)
Topical Subject
  • Computer Science
  • Code Overlay
  • Multicore
  • Scratch Pad Memory
  • Computer storage devices
  • Embedded computer systems
  • Mappings (Mathematics)
Resource Type
Text
Genre
Masters Thesis
Academic theses
Extent
ix, 59 p. : ill. (some col.)
Language
eng
Copyright Statement
In Copyright
Reuse Permissions
All Rights Reserved
Primary Member of
ASU Electronic Theses and Dissertations
Peer-reviewed
No
Open Access
No
Handle
https://hdl.handle.net/2286/R.I.8876
Statement of Responsibility
Nikhil Dadasaheb Ghadge
Description Source
Viewed on Feb. 23, 2012
Level of coding
full
Note
Partial requirement for: M.S., Arizona State University, 2011
Note type
thesis
Includes bibliographical references (p. 57-59)
Note type
bibliography
Field of study: Computer science
System Created
  • 2011-08-12 03:31:03
System Modified
  • 2021-08-30 01:55:20
Additional Formats
  • OAI Dublin Core
  • MODS XML
