Matching Items (3)

An extension of the localist representation theory: grandmother cells are also widely used in the brain

Description

Based on considerable neurophysiological evidence, Roy (2012) proposed the theory that localist representation is widely used in the brain, starting from the lowest levels of processing. Grandmother cells are a special case of localist representation. In this article, I present the theory that grandmother cells are also widely used in the brain. To support the proposed theory, I present neurophysiological evidence and an analysis of the concept of grandmother cells. Konorski (1967) first predicted the existence of grandmother cells (he called them “gnostic” neurons)—single neurons that respond to complex stimuli such as faces, hands, expressions, objects, and so on. The term “grandmother cell” was introduced by Jerry Lettvin in 1969 (Barlow, 1995).

Date Created
  • 2013-05-24

The Theory of Localist Representation and of a Purely Abstract Cognitive System: The Evidence from Cortical Columns, Category Cells, and Multisensory Neurons

Description

The debate about representation in the brain and the nature of the cognitive system has been going on for decades. This paper examines the neurophysiological evidence, primarily from single-cell recordings, to gain a better perspective on both issues. After an initial review of some basic concepts, the paper reviews the data from single-cell recordings in cortical columns and in category-selective and multisensory neurons.

In neuroscience, columns in the neocortex (cortical columns) are understood to be a basic functional/computational unit. The paper reviews the fundamental discoveries about columnar organization and finds that it reveals a massively parallel search mechanism. This columnar organization could be the most extensive neurophysiological evidence for the widespread use of localist representation in the brain. The paper also reviews studies of category-selective cells; that evidence shows that localist representation is likewise used to encode complex abstract concepts at the highest levels of processing in the brain.

A third major issue is the nature of the cognitive system in the brain and whether there is a form that is purely abstract and encoded by single cells. To provide evidence for a single-cell-based, purely abstract cognitive system, the paper reviews findings on multisensory cells. Multisensory cells appear to be in widespread use in the same brain areas where sensory processing takes place, and there is evidence for abstract, modality-invariant cells at higher levels of cortical processing. Together, these findings point to the existence of a purely abstract cognitive system in the brain. The paper also argues that, since there is no evidence for dense distributed representation and since sparse representation is what is actually used to encode memories, there is no evidence for distributed representation in the brain.

Overall, it appears that, at an abstract level, the brain is a massively parallel, distributed computing system that is symbolic. The paper also explains how grounded cognition and other theories of the brain are fully compatible with localist representation and a purely abstract cognitive system.

Date Created
  • 2017-02-16

A Classification Algorithm for High-dimensional Data

Description

With the advent of high-dimensional stored big data and streaming data, machine learning at very large scale has suddenly become a critical need. Such machine learning should be extremely fast, should scale easily with volume and dimension, should be able to learn from streaming data, should automatically perform dimension reduction for high-dimensional data, and should be deployable on hardware. Neural networks are well positioned to address these challenges of large-scale machine learning. In this paper, we present a method that can effectively handle large-scale, high-dimensional data. It is an online method that can be used both for streaming data and for large volumes of stored big data. It primarily uses Kohonen nets, although only a few selected neurons (nodes) from multiple Kohonen nets are retained in the end; the Kohonen nets themselves are discarded after training. We use Kohonen nets both for dimensionality reduction through feature selection and for building an ensemble of classifiers from single Kohonen neurons. The method is meant to exploit massive parallelism and should be easily deployable on hardware that implements Kohonen nets. Some initial computational results are presented.
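The core idea in this abstract, training Kohonen (self-organizing) maps online and then keeping only selected neurons as nearest-prototype classifiers, can be sketched in miniature. This is an illustrative toy, not the paper's algorithm: the 1-D map size, the learning-rate and radius schedules, the majority-vote labeling, and the two-cluster data are all assumptions made for the example.

```python
import math
import random

# Toy sketch (not the paper's exact method): a tiny 1-D Kohonen net trained
# online; after training, individual neurons act as nearest-prototype
# classifiers. All parameters and data below are illustrative assumptions.

def bmu(x, weights):
    """Index of the best-matching unit (closest node) for input x."""
    return min(range(len(weights)),
               key=lambda i: sum((w - v) ** 2 for w, v in zip(weights[i], x)))

def train_kohonen(data, n_nodes=4, epochs=20, lr0=0.5):
    random.seed(0)                                       # reproducible toy run
    dim = len(data[0])
    weights = [[random.random() for _ in range(dim)] for _ in range(n_nodes)]
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)                  # decaying learning rate
        radius = max(1.0, (n_nodes / 2) * (1 - epoch / epochs))
        for x in data:                                   # online: one sample at a time
            b = bmu(x, weights)
            for i in range(n_nodes):                     # neighborhood update
                d = abs(i - b)
                if d <= radius:
                    h = math.exp(-d * d / (2 * radius * radius))
                    weights[i] = [w + lr * h * (v - w)
                                  for w, v in zip(weights[i], x)]
    return weights

def label_nodes(weights, data, labels):
    """Give each neuron the majority label of the inputs it wins."""
    votes = [{} for _ in weights]
    for x, y in zip(data, labels):
        b = bmu(x, weights)
        votes[b][y] = votes[b].get(y, 0) + 1
    return [max(v, key=v.get) if v else None for v in votes]

def classify(x, weights, node_labels):
    """Classify x by the label of its nearest retained neuron."""
    return node_labels[bmu(x, weights)]

# Two toy clusters near (0, 0) and (1, 1)
data = [(0.0, 0.1), (0.1, 0.0), (0.9, 1.0), (1.0, 0.9)]
labels = ["a", "a", "b", "b"]
w = train_kohonen(data)
node_labels = label_nodes(w, data, labels)
print(classify((0.05, 0.05), w, node_labels))
print(classify((0.95, 0.95), w, node_labels))
```

Labeling neurons by majority vote and then classifying only through winning nodes loosely mirrors the abstract's point that just a few selected neurons are retained while the trained nets themselves are discarded.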

Date Created
  • 2015-08-10