I think I now understand that the basic principle behind propagation in a recurrent network is an explicit time step.
In a feedforward network, propagation happens in stages: the layer 1 neurons fire first, then layer 2, then layer 3, and so on, so propagation is one neuron's activation triggering the activation of the neurons that take it as input.
Alternatively, we can turn that around and think of propagation as: the neurons whose inputs are active at a given time are the ones that fire. So at time t = 0 the layer 1 neurons are active, and at the next time step, t = 1, the layer 2 neurons activate, because the layer 2 neurons take the layer 1 neurons as input.
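To make that picture concrete, here is a minimal sketch of layer-by-layer propagation viewed as explicit time steps. The network shape, weights, and the tanh activation are all made up for illustration; they are not taken from any particular implementation.

```python
import numpy as np

# Illustrative two-layer feedforward network (shapes and weights are arbitrary).
rng = np.random.default_rng(0)
layers = [rng.standard_normal((4, 3)), rng.standard_normal((3, 2))]

activation = np.array([1.0, 0.5, -0.2, 0.8])  # layer 1 activations at t = 0
for t, W in enumerate(layers, start=1):
    # At time t, the next layer fires because its inputs (the previous
    # layer's activations) are now available.
    activation = np.tanh(activation @ W)
    print(f"t = {t}: {activation}")
```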
Although the difference might seem purely semantic, it was important for me in figuring out how to implement recurrent networks. In a feedforward network the time step is implicit, and the code moves through the layers of neurons in turn, activating them like falling dominoes. In a recurrent network, trying to do the falling-domino thing, where each neuron points to the neurons it activates next, becomes a nightmare for large, tangled networks. Instead, it makes sense to interrogate every neuron in the network at time t to see whether it activates based on its inputs.
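Here is a hedged sketch of that "ask every neuron at time t" idea. The sparse recurrent weight matrix, the tanh update rule, and the initial injected signal are all assumptions chosen for the example, not a specific recurrent architecture.

```python
import numpy as np

# Illustrative recurrent network: a random, sparse weight matrix over n neurons.
rng = np.random.default_rng(1)
n = 6
W = rng.standard_normal((n, n)) * (rng.random((n, n)) < 0.4)

state = np.zeros(n)
state[0] = 1.0  # inject an initial signal into neuron 0

for t in range(5):
    # Every neuron is interrogated at once: its next activation depends only
    # on the current activations of the neurons feeding into it.
    state = np.tanh(W @ state)
    print(f"t = {t + 1}: {np.round(state, 3)}")
```

The point of the synchronous update is that no neuron needs to know who it activates next; the time step itself does the sequencing.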
Of course, there are many different types of recurrent neural network, but I think this explicit time step is the key to propagating signals through a recurrent network.
The differential-equation question I was interested in arises if, instead of discrete time steps t = 0, 1, 2, and so on, we try to get a smoother, more continuous flow through the network by simulating propagation over very small time increments, such as 0.2, 0.1, 0.05, and so on.
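One way to read that is as an Euler step on a continuous-time update. The particular equation below, dx/dt = -x + tanh(Wx), is my own assumption, just one common way to make the propagation continuous; shrinking dt then gives the smoother flow described above.

```python
import numpy as np

# Illustrative continuous-time version of the recurrent update,
# integrated with small Euler steps of size dt.
rng = np.random.default_rng(2)
n = 6
W = rng.standard_normal((n, n)) * 0.5
x = np.zeros(n)
x[0] = 1.0

dt = 0.05                          # smaller dt -> smoother propagation
for step in range(int(2.0 / dt)):  # simulate from t = 0 to t = 2
    x = x + dt * (-x + np.tanh(W @ x))
print(np.round(x, 3))
```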