Neuromorphic Computing Journey (Part 6)
Hello everyone, welcome to another post in this series. In this post we will talk about learning in Spiking Neural Networks (SNNs).

Classical learning techniques developed for conventional neural networks cannot be applied directly to spiking neural networks, so SNNs are trained in several other ways. Spike-timing-dependent plasticity (STDP) is a popular method for unsupervised learning in SNNs. In STDP, the change in synaptic weight is determined by the difference in firing times between the pre- and post-synaptic neurons: a presynaptic spike that occurs before the post-synaptic spike causes Long-Term Potentiation (LTP), whereas a presynaptic spike that occurs after the post-synaptic spike causes Long-Term Depression (LTD). STDP functions, also called learning windows, plot the change of the synapse as a function of the relative timing of pre- and post-synaptic spikes.
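To make the timing dependence concrete, here is a minimal sketch of a common exponential STDP learning window. The amplitudes and time constant are illustrative assumptions, not values from any particular model:

```python
import math

# Illustrative STDP parameters (assumed values for this sketch).
A_PLUS = 0.01    # LTP amplitude
A_MINUS = 0.012  # LTD amplitude
TAU = 20.0       # time constant of the learning window (ms)

def stdp_dw(t_pre, t_post):
    """Synaptic weight change as a function of relative spike timing.

    dt > 0: pre fires before post -> potentiation (LTP, positive change)
    dt <= 0: pre fires after post -> depression (LTD, negative change)
    """
    dt = t_post - t_pre
    if dt > 0:
        return A_PLUS * math.exp(-dt / TAU)
    return -A_MINUS * math.exp(dt / TAU)

# Pre spike 5 ms before post -> LTP (positive weight change)
print(stdp_dw(t_pre=10.0, t_post=15.0) > 0)  # True
# Pre spike 5 ms after post -> LTD (negative weight change)
print(stdp_dw(t_pre=15.0, t_post=10.0) < 0)  # True
```

Plotting `stdp_dw` against `dt` reproduces the familiar asymmetric learning-window curve described above.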
SNNs can also be trained in a supervised fashion through back-propagation, using algorithms such as SpikeProp and FreqProp, which demonstrate that spiking neurons with biologically plausible time constants can perform complex non-linear classification tasks in temporal coding. Another such method is ReSuMe (Remote Supervised Method), which is suitable not only for movement control but also for other applications such as identification and modelling of non-stationary objects.
A few models also support reinforcement learning in SNNs. The actor-critic model implements temporal-difference learning by combining local plasticity rules with a global reward signal; such a network can solve nontrivial grid-world tasks with sparse rewards. Another approach achieves reinforcement learning by modulating STDP with a global reward signal.
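The reward-modulated STDP idea can be sketched in a few lines: raw STDP changes are not applied directly but accumulate in a per-synapse eligibility trace, and the global reward scales how much of that trace is committed to the weight. The constants below are illustrative assumptions:

```python
import math

def rstdp_step(w, stdp_dw, eligibility, reward, tau_e=50.0, lr=0.1):
    """One time step of reward-modulated STDP (illustrative sketch).

    stdp_dw:     raw STDP update for this step, from pre/post spike timing
    eligibility: per-synapse trace of recent STDP updates
    reward:      global scalar reward broadcast to all synapses
    """
    # Decay the eligibility trace and fold in the new STDP change.
    eligibility = eligibility * math.exp(-1.0 / tau_e) + stdp_dw
    # Only rewarded activity actually modifies the weight.
    w = w + lr * reward * eligibility
    return w, eligibility

w, e = rstdp_step(w=0.5, stdp_dw=0.02, eligibility=0.0, reward=1.0)
print(w)  # 0.502 -- positive reward commits the pending LTP
w0, _ = rstdp_step(w=0.5, stdp_dw=0.02, eligibility=0.0, reward=0.0)
print(w0)  # 0.5 -- with zero reward the weight is unchanged
```

Because the reward arrives after the spikes that caused it, the decaying eligibility trace is what bridges the delay between local plasticity and the global reward signal.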
Training Spiking Neural Networks:
Unsupervised Learning
- Spike-timing-dependent plasticity (STDP)
- Growing Spiking Neural Networks
- Bienenstock, Cooper, Munro (BCM) rule
Supervised Learning
- SpikeProp
- FreqProp
- Remote Supervised Method (ReSuMe)
Reinforcement Learning
- Reward-modulated STDP
- Actor-critic temporal-difference learning