Back-propagation: A prominent algorithm for training artificial neural networks (ANNs) in which the network’s weights are adjusted along the gradient of an error function.
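As an illustrative sketch (not taken from the text), the core of the rule can be shown for a single linear neuron with a squared-error function; the weights move a small step down the error gradient:

```python
import numpy as np

# Minimal sketch, assuming a single linear neuron and squared error:
# E = 0.5 * (y - t)^2 with y = w @ x, so dE/dw = (y - t) * x.
def backprop_step(w, x, t, lr=0.1):
    """Return weights after one gradient-descent step toward target t."""
    y = w @ x                  # forward pass
    grad = (y - t) * x         # gradient of the error w.r.t. the weights
    return w - lr * grad       # step against the gradient

w = np.array([0.5, -0.3])
x = np.array([1.0, 2.0])
t = 1.0
w_new = backprop_step(w, x, t)
```

In a multi-layer network the same gradient is propagated backward through the layers via the chain rule, which is where the algorithm's name comes from.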
Hebbian learning: Activity-dependent synaptic plasticity where correlated activation of pre- and postsynaptic neurons leads to the strengthening of the connection between the two neurons.
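A minimal sketch of this correlation rule (variable names are illustrative, not from the text): the weight change is proportional to the product of pre- and postsynaptic activity, so joint activation strengthens the connection.

```python
# Plain Hebbian update sketch: delta_w is proportional to pre * post,
# so correlated pre- and postsynaptic activity strengthens the synapse.
def hebbian_update(w, pre, post, lr=0.01):
    return w + lr * pre * post

w = 0.2
w = hebbian_update(w, pre=1.0, post=1.0)  # both neurons active: w grows
```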
Spike Timing Dependent Plasticity (STDP): Spike-tailored Hebbian learning in which the relative timing of pre- and postsynaptic spikes modulates synaptic connection strength. With STDP, a synaptic weight increases when the presynaptic spike precedes the postsynaptic spike (LTP, colored red), and decreases in the reverse ordering (LTD, colored blue).
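A common pairwise form of the STDP window can be sketched as follows (the amplitude and time-constant values here are assumptions for illustration, not values from the text): with dt defined as the postsynaptic minus the presynaptic spike time, dt > 0 yields LTP and dt < 0 yields LTD, each decaying exponentially with |dt|.

```python
import math

# Pairwise STDP window sketch; a_plus, a_minus, tau are assumed values.
# dt = t_post - t_pre (in ms).
def stdp(dt, a_plus=0.1, a_minus=0.12, tau=20.0):
    if dt > 0:
        return a_plus * math.exp(-dt / tau)    # pre before post: LTP
    return -a_minus * math.exp(dt / tau)       # post before pre: LTD
```

Pairs with short timing differences produce larger changes than distant pairs, reflecting the exponential decay of the window.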
Long Term Depression: Activity-dependent reduction in the efficacy of neuronal synapses.
Long Term Potentiation: Activity-dependent increase in the efficacy of neuronal synapses.
BCM learning: A Hebbian-based rule according to which a neuron undergoes LTP when it is in a high-activity state and LTD when it is in a lower-activity state. Under the BCM rule, synaptic modification is characterized by two thresholds separating non-modifying, positive, and negative activity levels. LTP is colored red and LTD is colored blue.
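A common sketch of the BCM update (variable names and the learning rate are illustrative assumptions): the sign of the weight change flips at the modification threshold theta, so postsynaptic activity above theta gives LTP and activity between zero and theta gives LTD.

```python
# BCM update sketch: delta_w is proportional to pre * post * (post - theta).
# The two thresholds of the rule appear as the zero crossings at
# post = 0 and post = theta.
def bcm_update(w, pre, post, theta, lr=0.01):
    return w + lr * pre * post * (post - theta)

w_ltp = bcm_update(0.5, pre=1.0, post=2.0, theta=1.0)  # post > theta
w_ltd = bcm_update(0.5, pre=1.0, post=0.5, theta=1.0)  # 0 < post < theta
```

In the full BCM theory theta itself slides with the neuron's average activity, which stabilizes the rule; that adaptive threshold is omitted from this sketch.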
Oja’s learning: Hebbian learning with multiplicative normalization, which keeps the weight vector bounded.
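A sketch of Oja's rule (the input and initial weights below are illustrative): the Hebbian term y * x is combined with a multiplicative decay -y**2 * w, which normalizes the weight vector instead of letting it grow without bound.

```python
import numpy as np

# Oja's rule sketch: delta_w = lr * y * (x - y * w), where y = w @ x.
# The -y^2 * w term acts as multiplicative normalization.
def oja_update(w, x, lr=0.01):
    y = w @ x
    return w + lr * y * (x - y * w)

# Repeatedly presenting one input drives w toward that input direction
# with unit norm (illustrative example values).
w = np.array([0.3, 0.4])
x = np.array([1.0, 0.0])
for _ in range(2000):
    w = oja_update(w, x)
```

Unlike the plain Hebbian update, the weight norm converges toward 1 rather than diverging.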