Neural network activation function

This is a beginner-level question. I have several binary training inputs for a neural network, and I use the sigmoid threshold function SigmoidFn(Input1*Weights) , where

 SigmoidFn(x) = 1./(1+exp(-1.*x)); 

This function gives continuous real numbers, but I want the output to be binary, because the network is a Hopfield neural network (a single layer with 5 input nodes and 5 output nodes). The problem I am facing is that I cannot properly understand how to use and implement the various threshold functions. The weights below are taken from a paper. I use these weights to generate several training samples and several output samples, keeping the weights fixed while running the neural network several times.

 Weights = [0.0 0.5 0.0  0.2 0.0;
            0.0 0.0 1.0  0.0 0.0;
            0.0 0.0 0.0  1.0 0.0;
            0.0 1.0 0.0  0.0 0.0;
            0.0 0.0 0.0 -0.6 0.0];
 Input1 = [0,1,0,0,0];
 x = Input1*Weights   % x = 0 0 1 0 0
  • As you can see, the result of multiplication is the second row of weights. Is this just a coincidence?

  • Further,

     SigmoidFn = 1./(1+exp(-1.*x))
     SigmoidFn = 0.5000 0.5000 0.7311 0.5000 0.5000
  •  round(SigmoidFn)
     ans = 1 1 1 1 1
  •  Input2 = [1,0,0,0,0];
     x = Input2*Weights
     x = 0 0.5000 0 0.2000 0
     SigmoidFn = 1./(1+exp(-1.*x))
     SigmoidFn = 0.5000 0.6225 0.5000 0.5498 0.5000
     round(SigmoidFn)
     ans = 1 1 1 1 1

    Is it appropriate to use the round function, round(SigmoidFn(x)) ? The result is incorrect. How do I get a binary output using any of these threshold functions: (a) hard limit, (b) logistic sigmoid, (c) tanh?
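The all-ones output is not an accident: sigmoid(0) is exactly 0.5, and MATLAB's round sends 0.5 up to 1, so every node with zero net input gets mapped to 1. A quick check (Python here purely for illustration; note MATLAB rounds halves away from zero, while Python's built-in round would send 0.5 to 0, so that rule is spelled out explicitly):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def matlab_round(v):
    # MATLAB's round sends halves away from zero (0.5 -> 1),
    # unlike Python's built-in round (0.5 -> 0)
    return math.floor(v + 0.5)

x = [0, 0, 1, 0, 0]                  # Input1*Weights from the question
s = [sigmoid(v) for v in x]          # sigmoid(0) is exactly 0.5
print([round(v, 4) for v in s])      # [0.5, 0.5, 0.7311, 0.5, 0.5]
print([matlab_round(v) for v in s])  # [1, 1, 1, 1, 1]
```

So rounding the sigmoid can never keep a zero-input node at 0, which is why every output comes back as all ones.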

Can someone please show the correct thresholding code and briefly explain when to use which activation function? There must be some logic to it; otherwise, why would different functions exist? EDIT: the goal is a Hopfield implementation that recalls an input pattern over successive iterations while keeping the weights fixed.
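For reference, the three threshold functions mentioned can be binarized like this (a sketch in Python for illustration; the thresholds 0 and 0.5 are the conventional choices, not something taken from the question):

```python
import math

def hard_limit(x, theta=0.0):
    # (a) hard limit: fire iff the weighted input reaches the threshold;
    # this is the classical Hopfield/perceptron update rule
    return 1 if x >= theta else 0

def logistic_binary(x):
    # (b) logistic sigmoid squashes to (0,1); thresholding it at 0.5
    # is equivalent to thresholding x itself at 0
    return 1 if 1.0 / (1.0 + math.exp(-x)) >= 0.5 else 0

def tanh_binary(x):
    # (c) tanh squashes to (-1,1); thresholding at 0 gives the
    # +/-1 states used in the bipolar Hopfield formulation
    return 1 if math.tanh(x) >= 0 else -1

print([hard_limit(v) for v in [0.5, -0.6, 0.2]])       # [1, 0, 1]
print([logistic_binary(v) for v in [0.5, -0.6, 0.2]])  # [1, 0, 1]
print([tanh_binary(v) for v in [0.5, -0.6, 0.2]])      # [1, -1, 1]
```

As for when to use which: the smooth versions (logistic, tanh) exist mainly so that gradient-based training has a derivative to work with; when you only need a binary state update at run time, the hard limit is the natural choice.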

 Training1 = [1,0,0,0,0];
 offset = 0;
 t = 1;
 X(t,:) = Training1;
 err = 1;
 while (err ~= 0)
     Out = X(t,:)*Weights > offset;                       % hard-threshold update
     err = ((Out - X(t,:))*(Out - X(t,:)).')/numel(Out);  % mean squared change
     t = t + 1;
     X(t,:) = Out;
 end
1 answer

Hopfield networks do not use a sigmoid nonlinearity; a node's state is simply updated according to whether its weighted input is greater than or equal to its offset (threshold).

You want something like

 output2 = Weights * Input1' >= offsets; 

where offsets is the same size as Input1 . I used Weights * Input1' instead of Input1 * Weights because most of the examples I've seen use left-multiplication (i.e., rows of the weight matrix index output nodes and columns index input nodes), but you will have to check the source where you got your weight matrix.

Be aware that you will have to perform this update operation many times before the state converges to a fixed point representing a stored pattern.
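That repeated update can be sketched as follows (Python with plain lists for illustration; the function names and the toy weight matrix are mine, using the same Weights * Input' convention as above):

```python
def update(state, W, offsets):
    # one synchronous step: node j fires iff row j of W dotted with
    # the current state reaches node j's offset
    n = len(state)
    return [1 if sum(W[j][i] * state[i] for i in range(n)) >= offsets[j] else 0
            for j in range(n)]

def recall(state, W, offsets, max_iter=100):
    # repeat until the state stops changing (a fixed point, i.e. a
    # stored pattern) or give up; a bad W may never converge
    for _ in range(max_iter):
        nxt = update(state, W, offsets)
        if nxt == state:
            return nxt
        state = nxt
    return None

# toy example: two mutually exciting nodes with zero offsets;
# note that a zero net input also fires a node when its offset is 0
W = [[0.0, 1.0],
     [1.0, 0.0]]
print(recall([1, 0], W, [0.0, 0.0]))   # converges to [1, 1]
```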

In response to your further questions: the weight matrix you chose does not store any memories that a Hopfield network could recall. It contains the cycle 2 -> 3 -> 4 -> 2 -> ... , which prevents the network from converging.
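You can watch that cycle happen by reading the question's flat weight vector row by row as a 5x5 matrix and iterating the hard-threshold update (Python for illustration; the strict > 0 threshold follows the question's EDIT code):

```python
# the question's weight vector, read row by row into 5x5 form
W = [[0.0, 0.5, 0.0,  0.2, 0.0],
     [0.0, 0.0, 1.0,  0.0, 0.0],
     [0.0, 0.0, 0.0,  1.0, 0.0],
     [0.0, 1.0, 0.0,  0.0, 0.0],
     [0.0, 0.0, 0.0, -0.6, 0.0]]

def step(x):
    # synchronous update, x*W convention and strict > 0 threshold
    return [1 if sum(x[i] * W[i][j] for i in range(5)) > 0 else 0
            for j in range(5)]

x = [1, 0, 0, 0, 0]
seen = []
while x not in seen:   # stop as soon as a state repeats
    seen.append(x)
    x = step(x)
print(x)               # [0, 1, 0, 1, 0]: a repeated state, not a fixed point
print(step(x) == x)    # False: it cycles with period 3, never settling
```

The cycle among nodes 2, 3 and 4 shows up as a period-3 orbit of states, so the recall loop would run forever.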

In general, you would recall a memory the way you wrote in your edit:

 X = [1,0,0,0,0];
 offset = 0;
 t = 1;
 err = 1;
 nIter = 100;
 while err ~= 0 && t <= nIter
     prev = X;
     X = X * Weights >= offset;
     err = ~isequal(X, prev);
     t = t + 1;
 end
 if ~err
     disp(X);
 end

If you refer to the Wikipedia page on Hopfield networks, this is what is called the synchronous update method.
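For contrast, the classical alternative is asynchronous updating, one node at a time; with symmetric weights the usual energy argument guarantees it settles into a fixed point rather than a cycle. A sketch (Python for illustration; the names and the toy weights are mine):

```python
def async_sweep(state, W, offsets):
    # update nodes one at a time, each seeing the others' latest values
    state = list(state)
    n = len(state)
    for j in range(n):
        net = sum(W[j][i] * state[i] for i in range(n))
        state[j] = 1 if net >= offsets[j] else 0
    return state

def async_recall(state, W, offsets, max_sweeps=100):
    for _ in range(max_sweeps):
        nxt = async_sweep(state, W, offsets)
        if nxt == state:   # a full pass changed nothing: fixed point
            return nxt
        state = nxt
    return None

W = [[0.0, 1.0],
     [1.0, 0.0]]           # toy symmetric weights
print(async_recall([1, 0], W, [0.0, 0.0]))   # [1, 1]
```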

