Take away lessons


Computational complexity theory: A theoretical framework for classifying and relating computational problems according to their resource usage. Problems are grouped into classes and subclasses. Problems solvable in polynomial time form the class P. Problems whose solutions may or may not be findable in polynomial time, but are verifiable in polynomial time, belong to the class NP. Some problems in NP fall into the subclass of NP-complete problems: a problem is NP-complete if every problem in NP can be reduced to it in polynomial time. Another interesting complexity class is BQP, the class of problems solvable by a quantum computer in polynomial time with a bounded error probability.
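
To make the P-versus-NP asymmetry concrete, here is a minimal Python sketch (an illustration for this summary, not code from the book) using subset-sum, a classic NP-complete problem: finding a subset that sums to a target is done by exponential brute force, while checking a proposed solution (a "certificate") takes only a linear pass.

```python
from itertools import combinations

def solve_subset_sum(numbers, target):
    """Brute-force search: exponential in len(numbers)."""
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return subset
    return None

def verify_subset_sum(numbers, target, certificate):
    """Verification: a linear scan (multiplicity ignored for brevity)."""
    return sum(certificate) == target and all(x in numbers for x in certificate)

numbers = [3, 34, 4, 12, 5, 2]
print(solve_subset_sum(numbers, 9))           # (4, 5), after exploring many subsets
print(verify_subset_sum(numbers, 9, (4, 5)))  # True, in a single pass
```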

Turing machine: A mathematical formulation of an abstract machine that manipulates symbols on a strip of tape according to a set of rules. A computing system that can simulate any Turing machine is said to be Turing-complete, or computationally universal. An SNN is Turing-complete.
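
The definition is easy to animate in a few lines of Python. The sketch below (the state names and rule table are illustrative choices, not from the text) implements a tiny Turing machine that increments a binary number: a transition table maps (state, symbol read) to (next state, symbol to write, head movement).

```python
# Transition table: (state, symbol) -> (next state, write, head move)
RULES = {
    ('right', '0'): ('right', '0', +1),   # scan to the end of the number
    ('right', '1'): ('right', '1', +1),
    ('right', ' '): ('carry', ' ', -1),   # hit a blank: start carrying
    ('carry', '1'): ('carry', '0', -1),   # 1 + carry = 0, carry moves left
    ('carry', '0'): ('done',  '1',  0),   # 0 + carry = 1, halt
    ('carry', ' '): ('done',  '1',  0),   # overflow into a fresh cell, halt
}

def run(tape_str, state='right'):
    tape, head = dict(enumerate(tape_str)), 0   # sparse tape, blank = ' '
    while state != 'done':
        state, write, move = RULES[(state, tape.get(head, ' '))]
        tape[head] = write
        head += move
    cells = range(min(tape), max(tape) + 1)
    return ''.join(tape.get(i, ' ') for i in cells).strip()

print(run('1011'))  # '1100' -- binary 11 incremented to 12
```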

Neuromorphic theory of complexity: The classic complexity classes account for time and space. While time and space are essential resources, neuromorphic architectures bring a third essential resource into play: energy. Unfortunately, the Turing machine model does not capture energy as a relevant, let alone essential, resource for computation. We therefore need an extended theory of complexity that treats energy as a vital resource; such a theory would let us decide which problems can and cannot be solved efficiently on neuromorphic architectures. The development of such a theory is still in progress.
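
No agreed-upon formalism exists yet, but the flavor of energy-aware accounting can be sketched. In the toy simulation below (an illustrative assumption, not an established complexity measure), the cost of running a leaky integrate-and-fire neuron is tracked both as time steps and as a spike count, the event-driven quantity often used as an energy proxy in neuromorphic systems and invisible to a standard Turing-machine cost model.

```python
import numpy as np

def run_lif(input_current, threshold=1.0, leak=0.9):
    """Toy LIF neuron; returns (time steps, spike count).

    In event-driven neuromorphic hardware, energy is commonly taken
    to scale with the number of spikes, not with clocked steps.
    """
    v, spikes = 0.0, 0
    for i in input_current:
        v = leak * v + i          # leaky integration
        if v >= threshold:
            spikes += 1           # one 'energy unit' spent
            v = 0.0               # reset after the spike
    return len(input_current), spikes

rng = np.random.default_rng(0)
steps, spikes = run_lif(rng.uniform(0.0, 0.4, size=1000))
print(f"time cost: {steps} steps, energy proxy: {spikes} spikes")
```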

Corelet / Compass software ecosystem: An example of a high-level neuromorphic programming ecosystem that lets algorithm developers concentrate on the creative parts of their work, such as network design, data curation, and the articulation of learning rules, rather than on low-level technical details such as spiking rates and time constants. IBM introduced an ecosystem comprising a programming language named Corelet, a Corelet library, and a simulation framework named Compass. A corelet abstracts away the underlying hardware architecture: it encapsulates a TrueNorth core, hiding the underlying network details and connectivity scheme and exposing only input and output ports. Each corelet is optimized to carry out one specific, relatively simple function. The same corelet can be used in different applications, and different corelets can be concatenated like LEGO bricks in a great variety of ways.
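
Corelet itself is proprietary to IBM's TrueNorth toolchain, so the sketch below is purely hypothetical: the names (`Corelet`, `compose`) are invented here to illustrate the compositional idea of hiding a module's internals and exposing only named input and output ports.

```python
class Corelet:
    """Hypothetical sketch of the corelet idea (not IBM's actual API)."""
    def __init__(self, name, fn, in_ports, out_ports):
        self.name, self._fn = name, fn                 # internals stay hidden
        self.in_ports, self.out_ports = in_ports, out_ports

    def __call__(self, **inputs):                      # only ports are exposed
        assert set(inputs) == set(self.in_ports)
        return dict(zip(self.out_ports, self._fn(**inputs)))

def compose(a, b):
    """Wire a's output ports into b's input ports, LEGO-style."""
    def fn(**inputs):
        return tuple(b(**a(**inputs)).values())
    return Corelet(f"{a.name}>>{b.name}", fn, a.in_ports, b.out_ports)

# Two stand-in corelets (identity functions in place of spiking cores)
edge   = Corelet("edge",   lambda img: (img,),     ["img"],   ["edges"])
motion = Corelet("motion", lambda edges: (edges,), ["edges"], ["flow"])

pipeline = compose(edge, motion)
print(pipeline(img="frame0"))   # {'flow': 'frame0'}
```

Real corelets encapsulate spiking TrueNorth cores rather than Python functions, but the LEGO-style composition pattern is the same.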