I've been reading about artificial neural networks (ANNs). As a rule, they require training to adjust their weights toward a desired result, and once trained they always produce the same output for the same input (biological networks don't necessarily behave this way).
Then I started reading about evolutionary neural networks. Evolution, however, usually means recombining two parent genomes into a new genome and evaluating it with a fitness test; there is no "learning" within an individual's lifetime, just recombination and selection.
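To make sure I understand what I mean by "no learning", here is a minimal sketch of such an evolutionary loop: candidates are mutated and scored with a fitness test, and the best survive. The fitness function, mutation rate, and population size here are made-up toy values, not anything standard (and I've left out recombination, using mutation only).

```python
import random

random.seed(0)

def fitness(weights):
    # Toy fitness: negative squared distance to a fixed target vector.
    target = [0.5, -0.2, 0.8]
    return -sum((w - t) ** 2 for w, t in zip(weights, target))

def mutate(weights, rate=0.1):
    # Add small Gaussian noise to every weight.
    return [w + random.gauss(0, rate) for w in weights]

def evolve(generations=200, pop_size=20):
    population = [[random.uniform(-1, 1) for _ in range(3)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the fitter half, refill with mutated copies of survivors.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in survivors]
    return max(population, key=fitness)

best = evolve()
```

Nothing inside `fitness` or the individual ever adjusts itself in response to its own output; improvement only comes from selection between generations.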
The human brain, by contrast, seems to control its own connections: it creates new ones, strengthens some and weakens others.
Is there a neural network topology that allows this? One where a network that produces a bad response adjusts its weights accordingly, and perhaps creates random new connections (I'm not sure how the brain creates new connections, but even if it doesn't do it randomly, a random mutation that adds a new connection would be easier to implement), while a good response strengthens the connections that produced it.
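To illustrate the kind of rule I have in mind, here is a minimal sketch for a single linear neuron: a reward signal modulates a Hebbian-style update, so connections that were active during a good response get strengthened. Everything here is a hypothetical illustration I made up, not an established implementation; in particular, using the signed prediction error as the "reward" makes this reduce to the classic delta rule, and `maybe_grow_connection` is just a stand-in for random growth of new connections.

```python
import random

random.seed(1)

def hebbian_update(weights, inputs, reward, lr=0.1):
    # Reward-modulated Hebbian rule: positive reward strengthens the
    # connections that were just active, negative reward weakens them.
    return [w + lr * reward * x for w, x in zip(weights, inputs)]

def maybe_grow_connection(weights, p_new=0.1):
    # Hypothetical growth step: occasionally give a pruned (zero) weight
    # a small random value, mimicking a random new connection.
    if random.random() < p_new:
        i = random.randrange(len(weights))
        if weights[i] == 0.0:
            weights[i] = random.gauss(0, 0.1)
    return weights

true_w = [1.0, -1.0, 0.5]   # hidden target function to be learned
w = [0.0, 0.0, 0.0]          # start with no effective connections
for _ in range(1000):
    x = [random.uniform(-1, 1) for _ in range(3)]
    y = sum(wi * xi for wi, xi in zip(w, x))
    t = sum(ti * xi for ti, xi in zip(true_w, x))
    reward = t - y           # good response -> small correction
    w = maybe_grow_connection(hebbian_update(w, x, reward))
```

Unlike the evolutionary loop, the network here changes itself during its "lifetime", based on how good its own responses were.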
I believe this type of topology is known as a Turing type B neural network (one of Turing's "unorganized machines"), but I haven't found any code examples or documentation for it.