In this blog post, I will summarize the firing rate model, which forms the basis of many neural network models and was one of the first artificial neural network models developed. I'll start with how it applies to feedforward networks at the lowest level of processing before moving on to recurrent networks.
Explaining the Basis of the Firing Rate Equation
To present information from a large set of neurons tractably, we simplify what we model. The actual neural response function ρ(t) is replaced with the average firing rate r(t). This is a good approximation when the presynaptic spike trains do not fire in synchrony, i.e., when they are effectively random. I've seen this model used often for the visual system and for reaching tasks.
The equations used in modeling feedforward and recurrent networks need to account for Dale's Law: neurons are typically classified as either excitatory or inhibitory, meaning that they have either excitatory or inhibitory effects on all of their postsynaptic targets. In other words, through synaptic communication a given neuron can make its target neurons either easier to fire or harder to fire, but not a mix of both.
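One way to see what Dale's Law means for a weight matrix is to fix a sign per presynaptic neuron, so that all of its outgoing weights share that sign. Here is a minimal sketch (the sizes, signs, and random weights are my own illustrative choices, not from any particular dataset):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: build a weight matrix that respects Dale's Law.
# Each presynaptic neuron (a column of W) is assigned a fixed sign:
# +1 for excitatory, -1 for inhibitory, applied to all outgoing weights.
n = 5
signs = np.array([1, 1, 1, -1, -1])          # 3 excitatory, 2 inhibitory
W = np.abs(rng.normal(size=(n, n))) * signs  # sign broadcasts per column

# Every column is now entirely nonnegative or entirely nonpositive:
# no neuron excites some targets while inhibiting others.
for j in range(n):
    assert (W[:, j] >= 0).all() or (W[:, j] <= 0).all()
```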
Firing Rate Equation:
τ_s dI_s/dt = −I_s + w · u, with v = F(I_s). (1)
In this first equation, it is assumed that the firing rate follows the time-varying input current instantaneously, without attenuation or delay.
τ_r dv/dt = −v + F(w · u). (2)
In this second equation, it is instead the current that follows the input instantaneously, and the firing rate v is a low-pass filtered version of the steady-state rate F(w · u), toward which it relaxes with time constant τ_r.
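The low-pass behavior is easy to see numerically. Below is a minimal sketch (the weights, inputs, and time constant are my own illustrative choices) that integrates equation (2) with Euler steps, using a threshold-linear F:

```python
import numpy as np

def F(s):
    # Threshold-linear activation: rates cannot go negative.
    return np.maximum(s, 0.0)

tau_r = 0.01                      # rate time constant: 10 ms (assumed)
dt = 0.001                        # Euler step: 1 ms
w = np.array([0.5, 1.0, -0.3])    # synaptic weights (illustrative)
u = np.array([1.0, 0.8, 0.6])     # constant presynaptic rates (illustrative)

v = 0.0
for _ in range(1000):             # simulate 1 s
    # Euler step of tau_r dv/dt = -v + F(w . u)
    v += dt / tau_r * (-v + F(w @ u))

# With constant input, v relaxes exponentially to the steady state F(w . u):
# here w . u = 0.5 + 0.8 - 0.18 = 1.12, so v -> 1.12.
print(v)
```

Because the input here is constant, the filtering only shows up as the exponential approach to 1.12; with a time-varying u, fluctuations faster than τ_r would be smoothed out.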
What is the difference between a feedforward network and a recurrent network?
The answer is kinda in the name: in a feedforward network, information travels in one direction. There are no cycles or loops, so information flows straight from input to output. In a recurrent network, connections form loops: a neuron's output can feed back, directly or indirectly, into its own input, so the network's activity depends on its own past state as well as on the current input.
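The contrast can be sketched in a few lines. In the feedforward case the output is a single pass v = F(W u); in the recurrent case a feedback matrix M is added and the dynamics τ dv/dt = −v + F(W u + M v) are iterated to a fixed point. All weights and sizes below are illustrative assumptions, not values from the post:

```python
import numpy as np

def F(s):
    return np.maximum(s, 0.0)  # threshold-linear activation

rng = np.random.default_rng(1)
W = rng.normal(scale=0.5, size=(4, 3))   # input -> output weights (assumed)
M = rng.normal(scale=0.1, size=(4, 4))   # recurrent weights, kept weak
u = np.ones(3)                           # constant input

# Feedforward: one pass, no internal state, no loops.
v_ff = F(W @ u)

# Recurrent: activity feeds back through M, so we must integrate the
# dynamics tau dv/dt = -v + F(W u + M v) until they settle.
tau, dt = 0.01, 0.001
v = np.zeros(4)
for _ in range(2000):
    v += dt / tau * (-v + F(W @ u + M @ v))

# At the fixed point, v reproduces itself through the network:
# v = F(W u + M v), whereas v_ff depends only on the current input.
print(v_ff, v)
```

Keeping the recurrent weights small (scale 0.1) ensures the feedback loop is stable and the iteration converges; strong recurrence can instead produce sustained or runaway activity.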