Neuromorphic Engineering Book
  • Welcome
  • Preliminaries
    • About the author
    • Preface
    • A tale about passion and fear
    • Before we begin
  • I. Introduction
    • 1. Introducing the perspective of the scientist
      • From the neuron doctrine to emergent behavior
      • Brain modeling
      • Take away lessons
    • 2. Introducing the perspective of the computer architect
      • Limits of integrated circuits
      • Emerging computing paradigms
      • Brain-inspired hardware
      • Take away lessons
      • Errata
    • 3. Introducing the perspective of the algorithm designer
      • From artificial to spiking neural networks
      • Neuromorphic software development
      • Take away lessons
  • II. Scientist perspective
    • 4. Biological description of neuronal dynamics
      • Potentials, spikes and power estimation
      • Take away lessons
      • Errata
    • 5. Models of point neuronal dynamics
      • Tutorial - models of point neuronal processes
        • The leaky integrate and fire model
        • The Izhikevich neuron model
        • The Hodgkin-Huxley neuron model
      • Synapse modeling and point neurons
      • Case study: a SNN for perceptual filling-in
      • Take away lessons
    • 6. Models of morphologically detailed neurons
      • Morphologically detailed modeling
      • The cable equation
      • The compartmental model
      • Case study: direction-selective SAC
      • Take away lessons
    • 7. Models of network dynamics and learning
      • Circuit taxonomy, reconstruction, and simulation
      • Case study: SACs' lateral inhibition in direction selectivity
      • Neuromorphic and biological learning
      • Take away lessons
      • Errata
  • III. Architect perspective
    • 8. Neuromorphic Hardware
      • Transistors and micro-power circuitry
      • The silicon neuron
      • Case study: hardware - software co-synthesis
      • Take away lessons
    • 9. Communication and hybrid circuit design
      • Neural architectures
      • Take away lessons
    • 10. In-memory computing with memristors
      • Memristive computing
      • Take away lessons
      • Errata
  • IV. Algorithm designer perspective
    • 11. Introduction to neuromorphic programming
      • Theory and neuromorphic programming
      • Take away lessons
    • 12. The neural engineering framework
      • NEF: Representation
      • NEF: Transformation
      • NEF: Dynamics
      • Case study: motion detection using oscillation interference
      • Take away lessons
      • Errata
    • 13. Learning spiking neural networks
      • Learning with SNN
      • Take away lessons

Take away lessons


Back-propagation: A prominent algorithm for training ANNs in which a network’s weights are adjusted along the gradient of an error function.
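A minimal sketch of the idea (not from the book): for a single linear neuron with a squared-error loss, the gradient step below is the simplest instance of back-propagation; the learning rate and data are illustrative.

```python
import numpy as np

# Single linear neuron trained on squared error E = 0.5 * (y - t)^2.
# The weight update descends the gradient dE/dw (illustrative values).
def backprop_step(w, x, t, lr=0.1):
    y = w @ x                  # forward pass
    dE_dw = (y - t) * x        # chain rule: dE/dy * dy/dw
    return w - lr * dE_dw      # step against the error gradient

w = np.array([0.5, -0.3])
x = np.array([1.0, 2.0])
for _ in range(50):
    w = backprop_step(w, x, t=1.0)
# after training, w @ x is close to the target 1.0
```

In a multi-layer network the same chain rule is applied layer by layer, propagating the error signal backwards — hence the name.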

Hebbian learning: Activity-dependent synaptic plasticity in which correlated activation of pre- and postsynaptic neurons strengthens the connection between them.
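The basic rule can be sketched in a few lines (a hypothetical example, with an arbitrary learning rate): the weight change is proportional to the product of pre- and postsynaptic activity.

```python
import numpy as np

# Plain Hebbian update: delta_w is proportional to pre * post activity,
# so correlated activity strengthens the connection (illustrative values).
def hebbian_step(w, x, lr=0.01):
    y = w @ x              # postsynaptic activity
    return w + lr * y * x  # "cells that fire together wire together"

w = np.array([0.1, 0.1])
x = np.array([1.0, 0.5])
w_new = hebbian_step(w, x)   # both weights grow for positive activity
```

Note that this plain form is unstable — weights grow without bound — which is what the BCM and Oja variants below address.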

Spike Timing Dependent Plasticity: Spike-tailored Hebbian-based learning in which the relative timing of pre- and postsynaptic spikes is used to modulate synaptic connection strength. With STDP, a synaptic weight increases when the presynaptic spike precedes the postsynaptic spike (LTP), and decreases when the order is reversed (LTD).
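The standard pair-based STDP window can be written as two exponentials; the amplitudes and time constant below are hypothetical but typical of textbook values.

```python
import math

# Pair-based STDP window. dt = t_post - t_pre (ms).
# Pre-before-post (dt > 0) potentiates (LTP);
# post-before-pre (dt < 0) depresses (LTD).
def stdp_dw(dt, a_plus=0.05, a_minus=0.05, tau=20.0):
    if dt > 0:
        return a_plus * math.exp(-dt / tau)    # LTP branch
    return -a_minus * math.exp(dt / tau)       # LTD branch

dw_ltp = stdp_dw(15.0 - 10.0)   # pre at 10 ms, post at 15 ms -> positive
dw_ltd = stdp_dw(10.0 - 15.0)   # post at 10 ms, pre at 15 ms -> negative
```

The exponential decay means spike pairs far apart in time barely change the weight, while near-coincident pairs dominate learning.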

Long Term Depression: Activity-dependent reduction in the efficacy of neuronal synapses.

Long Term Potentiation: Activity-dependent increase in the efficacy of neuronal synapses.

BCM learning: Hebbian learning-based rule according to which a neuron will undergo LTP if it is in a high-activity state or LTD if it is in a lower-activity state. With the BCM rule, synaptic modification is characterized by two thresholds separating non-modifying, negative, and positive activity levels.
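A sketch of the rule (learning rates and the threshold-update scheme are illustrative): the sign of the weight change flips at a modification threshold that itself slides with recent postsynaptic activity.

```python
# BCM rule for a single synapse. y above theta -> LTP, y below -> LTD;
# theta slides towards the recent mean of y^2 (illustrative parameters).
def bcm_step(w, x, theta, lr=0.01, tau_theta=0.1):
    y = w * x
    dw = lr * x * y * (y - theta)          # sign set by y vs. theta
    theta = theta + tau_theta * (y**2 - theta)  # sliding threshold
    return w + dw, theta

w, theta = 1.0, 0.5
w_high, _ = bcm_step(w, x=2.0, theta=theta)  # high activity: weight grows
w_low, _ = bcm_step(w, x=0.2, theta=theta)   # low activity: weight shrinks
```

The sliding threshold is what stabilizes the rule: persistent high activity raises theta, making further potentiation harder.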

Oja’s learning: Hebbian learning with multiplicative normalization; the normalizing decay term keeps the synaptic weight vector bounded.
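The rule adds a decay term, -y²w, to the plain Hebbian update; a short sketch with illustrative inputs:

```python
import numpy as np

# Oja's rule: Hebbian growth (y * x) plus multiplicative decay (-y^2 * w),
# which normalizes the weight vector instead of letting it blow up.
def oja_step(w, x, lr=0.01):
    y = w @ x
    return w + lr * y * (x - y * w)

rng = np.random.default_rng(0)
w = np.array([1.0, 0.0])
for _ in range(200):
    x = rng.normal(size=2)      # illustrative random inputs
    w = oja_step(w, x)
# the norm of w stays close to 1 throughout learning
```

With structured inputs, the weight vector converges towards the principal component of the input covariance, which is why Oja's rule is often presented as an online PCA algorithm.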