Description

Alternative computation based on neural systems built on nanoscale devices is of increasing interest because of the massive parallelism and scalability it offers. Neural computation systems also provide defect-detection and self-healing capabilities. Traditional Von Neumann architectures, which separate the memory and computation units, inherently suffer from the Von Neumann bottleneck, whereby the processor is limited by the rate at which it can fetch instructions and data from memory. The clock-driven Von Neumann computer has endured largely because of technology scaling. However, as transistor scaling slowly comes to an end, with channel lengths shrinking to a few nanometers, processor speeds are beginning to saturate. This led to the development of multi-core systems that process data in parallel, with each core still based on the Von Neumann architecture.

The human brain has long been a mystery to scientists. Modern-day supercomputers are outperformed by the human brain in certain computations: for tasks such as pattern recognition, the brain occupies far less space and consumes a fraction of the power that a supercomputer does. Neuromorphic computing aims to mimic biological neural systems in silicon to exploit the massive parallelism that neural systems offer. Neuromorphic systems are event driven rather than clock driven. One of the issues faced by neuromorphic computing has been the area occupied by these circuits. With recent developments in the field of nanotechnology, nanoscale memristive devices have been developed and offer a promising solution: memristor-based synapses can be up to three times smaller than Complementary Metal Oxide Semiconductor (CMOS) based synapses.

In this thesis, the Programmable Metallization Cell (a memristive device) is used to demonstrate a learning algorithm known as Spike Timing Dependent Plasticity (STDP). This learning algorithm is an extension of Hebb's learning rule, in which a synapse's weight is altered by the relative timing of the spikes across it. The synaptic weight is represented by the memristor's conductance, and CMOS oscillator-based circuits are used to produce spikes that modulate the memristor conductance by firing with different phase differences.
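As a rough illustration of the weight-update behaviour that STDP describes, the sketch below applies an exponential STDP window to a bounded synaptic conductance in Python. The window shape, parameter values, and function names are illustrative assumptions only and are not taken from the thesis or its CBRAM and oscillator circuits.

```python
import math

# Illustrative STDP parameters (assumed values, not taken from the thesis).
A_PLUS, A_MINUS = 0.01, 0.012        # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20e-3, 20e-3   # STDP time constants in seconds
G_MIN, G_MAX = 1e-6, 1e-4            # conductance bounds of the memristive synapse (S)


def stdp_update(g, t_pre, t_post):
    """Return the updated synaptic conductance after one pre/post spike pairing.

    A pre-before-post pairing (t_post > t_pre) potentiates the synapse,
    pushing its conductance toward G_MAX; the reverse ordering depresses it
    toward G_MIN. Both changes shrink exponentially with the spike-time gap.
    """
    dt = t_post - t_pre
    if dt > 0:
        dg = A_PLUS * (G_MAX - g) * math.exp(-dt / TAU_PLUS)
    else:
        dg = -A_MINUS * (g - G_MIN) * math.exp(dt / TAU_MINUS)
    return min(G_MAX, max(G_MIN, g + dg))


# Example: a pre-synaptic spike arriving 5 ms before the post-synaptic spike
# strengthens (potentiates) the synapse.
g = 5e-5
g = stdp_update(g, t_pre=0.000, t_post=0.005)
print(f"conductance after pairing: {g:.3e} S")
```

In this sketch a positive timing difference drives the conductance toward G_MAX and a negative one drives it toward G_MIN, mirroring the potentiation and depression behaviour described above.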

    Details

    Title
    • STDP implementation using CBRAM devices in CMOS
    Contributors
    • Mahraj Sivaraj
    Date Created
    • 2015
    Resource Type
    • Text
    Note
    • Partial requirement for: M.S., Arizona State University, 2015 (note type: thesis)
    • Includes bibliographical references (pages 64-67) (note type: bibliography)
    • Field of study: Electrical engineering

    Citation and reuse

    Statement of Responsibility

    by Mahraj Sivaraj