This is a beginner-level question. I have several binary training inputs for a neural network, and I use the sigmoid activation function SigmoidFn(Input1*Weights), where
SigmoidFn(x) = 1./(1+exp(-1.*x));
This function produces continuous real numbers, but I want the output to be binary, because the network is a Hopfield neural network (a single layer with 5 input nodes and 5 output nodes). The problem I am facing is that I cannot correctly understand the use and implementation of the various threshold functions. The weights below are given in a paper. I use these weights on several training samples to produce output samples, keeping the weights fixed and running the network several times.
Weights = [0.0  0.5  0.0  0.2  0.0;
           0.0  0.0  1.0  0.0  0.0;
           0.0  0.0  0.0  1.0  0.0;
           0.0  1.0  0.0  0.0  0.0;
           0.0  0.0  0.0 -0.6  0.0];
Input1 = [0,1,0,0,0];
x = Input1*Weights   % x = 0  0  1  0  0
As you can see, the result of the multiplication is just the second row of the weight matrix. Is this a coincidence?
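It may help to see why this happens: a one-hot row vector times a matrix selects the matching matrix row, because every other row is multiplied by zero. Here is a minimal NumPy sketch of the same computation (Python used only for illustration; the matrix values are copied from the question):

```python
import numpy as np

# The 5x5 weight matrix from the question.
W = np.array([
    [0.0, 0.5, 0.0, 0.2, 0.0],
    [0.0, 0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 1.0, 0.0],
    [0.0, 1.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, -0.6, 0.0],
])

# A one-hot input times W picks out a single row:
# x[k] = sum_i Input1[i] * W[i, k] collapses to W[1, k] when only Input1[1] = 1.
Input1 = np.array([0, 1, 0, 0, 0])
x = Input1 @ W
print(x)  # the second row of W: [0. 0. 1. 0. 0.]
```

So it is not a coincidence: any one-hot input reproduces the corresponding weight row.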
Further,
SigmoidFn = 1./(1+exp(-1.*x))
SigmoidFn = 0.5000  0.5000  0.7311  0.5000  0.5000
round(SigmoidFn)
ans = 1  1  1  1  1
Input2 = [1,0,0,0,0];
x = Input2*Weights
x = 0  0.5000  0  0.2000  0
SigmoidFn = 1./(1+exp(-1.*x))
SigmoidFn = 0.5000  0.6225  0.5000  0.5498  0.5000
round(SigmoidFn)
ans = 1  1  1  1  1
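For reference, the sigmoid values above can be reproduced exactly; this NumPy sketch (Python only for illustration) evaluates the same formula 1/(1+exp(-x)) on the same vector:

```python
import numpy as np

def sigmoid(x):
    # Same formula as SigmoidFn(x) = 1./(1+exp(-1.*x)) in the question
    return 1.0 / (1.0 + np.exp(-x))

x = np.array([0.0, 0.5, 0.0, 0.2, 0.0])  # Input2 * Weights from the question
print(np.round(sigmoid(x), 4))  # [0.5    0.6225 0.5    0.5498 0.5   ]
```

Note that sigmoid(0) is exactly 0.5, which is why all the zero inputs land on 0.5 and then get rounded up to 1.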
Is it good to use the round function, round(SigmoidFn(x))? The result is incorrect. And how do I get a binary output when I use any of these threshold functions: (a) hard limit, (b) logistic sigmoid, (c) tanh?
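One likely source of the wrong result: with x = 0 the sigmoid is exactly 0.5, and MATLAB's round(0.5) rounds up to 1, so every inactive unit turns on. A strict comparison against the threshold avoids that. The sketch below (Python/NumPy, for illustration; thresholds of 0.5 for the sigmoid and 0 for the other two are my assumption) shows that all three activation functions then yield the same binary output, since sigmoid(x) > 0.5, tanh(x) > 0, and x > 0 are all equivalent:

```python
import numpy as np

x = np.array([0.0, 0.5, 0.0, 0.2, 0.0])

hard     = (x > 0).astype(int)                        # (a) hard limit: fire iff x > 0
logistic = (1/(1 + np.exp(-x)) > 0.5).astype(int)     # (b) sigmoid, thresholded at 0.5
tanh_out = (np.tanh(x) > 0).astype(int)               # (c) tanh, thresholded at 0

print(hard, logistic, tanh_out)  # all three: [0 1 0 1 0]
```

Since the three comparisons are mathematically equivalent for binary output, the choice between them matters mostly when you need the continuous value (e.g. sigmoid for probabilities or gradient-based training), not for simple thresholding.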
Can someone please show the correct code for the thresholding, with a brief explanation of when to use which activation function? I mean, there must be some logic to it; otherwise, why would there be different functions? EDIT: Hopfield implementation for recalling an input pattern through successive iterations, keeping the weights fixed.
Training1 = [1,0,0,0,0];
offset = 0;
t = 1;
X(t,:) = Training1;
err = 1;
while err ~= 0
    Out = X(t,:)*Weights > offset;           % hard threshold at 'offset'
    err = sum((Out - X(t,:)).^2)/numel(Out); % stop when the state stops changing
    t = t + 1;
    X(t,:) = Out;
end
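The same recall loop can be sketched in Python/NumPy (for illustration only). One caveat worth flagging: the weight matrix in the question is not symmetric, and with synchronous updates a Hopfield network is not guaranteed to reach a fixed point, so this sketch caps the number of iterations rather than looping until err is zero:

```python
import numpy as np

W = np.array([
    [0.0, 0.5, 0.0, 0.2, 0.0],
    [0.0, 0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 1.0, 0.0],
    [0.0, 1.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, -0.6, 0.0],
])
offset = 0.0

state = np.array([1, 0, 0, 0, 0])  # Training1
for _ in range(10):                # cap iterations: this W may cycle forever
    new_state = (state @ W > offset).astype(int)
    if np.array_equal(new_state, state):
        break                      # fixed point reached: pattern recalled
    state = new_state
print(state)
```

With a symmetric weight matrix and asynchronous (one-unit-at-a-time) updates, convergence to a fixed point is guaranteed, which is the usual Hopfield setting.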