Neuromorphic and biological learning

Read Chapter 7.4

Python demonstration

STDP (spike-timing-dependent plasticity) demonstration:
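The demonstration below implements the standard pair-based exponential STDP window. As a reference restatement (not a quote from the chapter), with the timing difference defined here as Δt = t_pre − t_post (the opposite convention, t_post − t_pre, is also common in the literature):

\Delta w(\Delta t) =
\begin{cases}
  A \, e^{\Delta t / \tau_p},   & \Delta t < 0 \quad \text{(pre before post: potentiation)} \\
  -A \, e^{-\Delta t / \tau_p}, & \Delta t > 0 \quad \text{(post before pre: depression)}
\end{cases}

The code uses a constant amplitude A = A_W_P(w) = 1 and tau_p = 0.01 sec (10 ms).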

import numpy as np
import matplotlib.pyplot as plt

# STDP window parameters
A_W_P = lambda w: 1          # weight-dependent amplitude (constant here)
tau_p = 0.01                 # plasticity time constant [sec]
w     = 0                    # current synaptic weight (ignored by the constant amplitude)
dt_space = np.linspace(-0.05, 0.05, 100)   # spike timing differences [sec]

def dw(dt):
    # dt = t_pre - t_post [sec]
    if dt < 0:                                   # pre fires before post: potentiation
        return A_W_P(w) * np.exp(dt / tau_p)
    if dt == 0:
        return 1
    return -A_W_P(w) * np.exp(-dt / tau_p)       # post fires before pre: depression

dw_value = [dw(dt) for dt in dt_space]
plt.plot(dt_space, dw_value)
plt.xlabel('dt = t_pre - t_post [sec]')
plt.ylabel('dw')
plt.show()

Check the results! The plot shows the asymmetric STDP window: positive weight changes (potentiation) for negative Δt, where the presynaptic spike precedes the postsynaptic one, and exponentially decaying depression for positive Δt.
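As a quick sanity check (not part of the book's demonstration; the spike times below are arbitrary examples), the window function can also be evaluated on a single pre/post spike pair:

# Hypothetical spike times [sec]: pre fires 5 ms before post
t_pre, t_post = 0.010, 0.015
print(dw(t_pre - t_post))    # negative dt -> potentiation, exp(-0.5) ~ 0.61
print(dw(t_post - t_pre))    # positive dt -> depression, ~ -0.61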

Python / BRIAN demonstration

Imports:

import matplotlib.pyplot as plt
from brian2 import NeuronGroup, Network, SpikeMonitor
from brian2.units import second, ms, Hz
from brian2 import seed   # optional: call seed(<some integer>) for reproducible random spiking

Model creation:

dict_data = {}

# Vary the correlation within the group
plt.figure(figsize=(12, 12))
for i, c in enumerate([0.0, 0.5, 1.0]):

    N = 10            # Number of neurons in the group
    rate = 10 * Hz    # Mean firing rate of each neuron

    # Correlated Poisson-like spiking: a shared random variable v (redrawn every time
    # step by run_regularly) triggers "common" spike events, on which each neuron fires
    # with probability sqrt(c); the second term adds independent spikes so that the
    # mean firing rate stays at `rate` for any correlation c.
    G = NeuronGroup(N, 'v : 1 (shared)',
                    threshold='((v < rate*dt) and rand() < sqrt(c)) or rand() < rate*(1 - sqrt(c))*dt')
    G.run_regularly('v = rand()')

Simulating:

    duration = 4 * second
    S = SpikeMonitor(G)
    net = Network(G, S)
    net.run(duration)

Plotting:

    plt.subplot(3, 1, i + 1)
    plt.title('Correlation = %1.1f' % (c), fontsize=15)
    plt.ylabel('Neurons', fontsize=15)
    plt.plot(S.t / ms, S.i, '.')
    print('Mean firing rate (c = %1.1f): ' % (c), len(S.t) / (N * duration))
    
    dict_data[c] = {'i': S.i, 't': S.t/ms}

plt.xlabel('Time [ms]', fontsize=15)
plt.tight_layout()
plt.savefig('STDP1.jpg', dpi=350)
plt.show()

Results: raster plots of the spike trains (neuron index vs. time in ms), one subplot per correlation value; the spiking across the group becomes visibly more synchronized as c increases (figure saved as STDP1.jpg).
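As an optional check (a minimal sketch, not from the book; the 10 ms bin width is an arbitrary choice), the correlation actually produced by the generator can be estimated by binning the recorded spike trains of neurons 0 and 1 and computing the Pearson correlation of their binned counts, which should come out roughly equal to the requested c:

import numpy as np

bin_size = 10.0                                    # bin width [ms]
edges = np.arange(0, 4000 + bin_size, bin_size)    # 4 second simulation
for c in [0.0, 0.5, 1.0]:
    t = np.asarray(dict_data[c]['t'])              # spike times [ms]
    idx = np.asarray(dict_data[c]['i'])            # neuron indices
    n0, _ = np.histogram(t[idx == 0], bins=edges)  # binned spike counts, neuron 0
    n1, _ = np.histogram(t[idx == 1], bins=edges)  # binned spike counts, neuron 1
    print('c = %.1f, estimated correlation: %.2f' % (c, np.corrcoef(n0, n1)[0, 1]))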

Visualizing the change of the synaptic weight between neurons 0 and 1 under STDP for the different correlation values:

plt.figure()

for corr in [0.0, 0.5, 1.0]:

    W_0_1 = 0        # synaptic weight from neuron 0 (pre) to neuron 1 (post)
    W_0_1_t = []     # weight trace, one entry per recorded spike event

    spikes_i = dict_data[corr]['i']   # neuron indices, in spike-time order
    spikes_t = dict_data[corr]['t']   # spike times [ms]

    for i in range(len(spikes_i) - 1):

        weight_update = 0

        # Post (neuron 1) fires before pre (neuron 0): depression
        if (spikes_i[i] == 1) and (spikes_i[i + 1] == 0):
            dt = spikes_t[i + 1] - spikes_t[i]     # t_pre - t_post > 0
            weight_update = dw(dt / 1000)          # convert ms -> sec

        # Pre (neuron 0) fires before post (neuron 1): potentiation
        if (spikes_i[i] == 0) and (spikes_i[i + 1] == 1):
            dt = spikes_t[i] - spikes_t[i + 1]     # t_pre - t_post < 0
            weight_update = dw(dt / 1000)          # convert ms -> sec

        W_0_1 = W_0_1 + weight_update
        W_0_1_t.append(W_0_1)

    plt.plot(W_0_1_t, label='Correlation = {}'.format(corr), linewidth=3)

plt.xlim(0, 350)
plt.ylim(-5, 35)
plt.legend(loc='best')
plt.ylabel("Synaptic weight")
plt.xlabel("Spike event index")
plt.show()

Result: the weight trace W_0_1 over successive spike events for the three correlation values; the net potentiation grows with the correlation between the pre- and postsynaptic spike trains.
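For comparison, here is a minimal sketch (not taken from the chapter) of the same pair-based STDP rule implemented natively in BRIAN with the Synapses class and event-driven pre/post traces; the rate, amplitudes (Apre, Apost), and time constants mirror the values used above, and the network is just two independent Poisson-like neurons:

import matplotlib.pyplot as plt
from brian2 import NeuronGroup, Synapses, Network, StateMonitor
from brian2.units import second, ms, Hz

rate = 10 * Hz                     # mean firing rate, as above
taupre = taupost = 10 * ms         # STDP time constants (tau_p above)
Apre, Apost = 1.0, -1.0            # potentiation / depression amplitudes

# Two independent Poisson-like neurons (same stochastic-threshold trick as above)
G = NeuronGroup(2, 'v : 1', threshold='rand() < rate*dt')

# Event-driven pre/post traces implement the exponential STDP window
syn = Synapses(G, G,
               '''w : 1
                  dapre/dt  = -apre/taupre   : 1 (event-driven)
                  dapost/dt = -apost/taupost : 1 (event-driven)''',
               on_pre='''apre += Apre
                         w += apost''',
               on_post='''apost += Apost
                          w += apre''')
syn.connect(i=0, j=1)              # single synapse: neuron 0 -> neuron 1

M = StateMonitor(syn, 'w', record=0)
net = Network(G, syn, M)
net.run(4 * second)

plt.plot(M.t / second, M.w[0])
plt.xlabel('Time [sec]')
plt.ylabel('Synaptic weight')
plt.show()

Because the traces decay exponentially and accumulate over all spikes, every pre/post pair contributes to the weight (all-to-all pairing), whereas the loop above only considers adjacent spikes in the merged spike train.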
