Trying to simulate a neural network in MATLAB

I tried to create a neural network to approximate y = x^2. I built a suitable network and gave it some input and output samples. Then I tried to reproduce this network in C++, but the result is different than expected.

With the following inputs:

0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 -1 -2 -3 -4 -5 -6 -7 -8 -9 -10 -11 -12 -13 -14 -15 -16 -17 -18 -19 -20 -21 -22 -23 -24 -25 -26 -27 -28 -29 -30 -31 -32 -33 -34 -35 -36 -37 -38 -39 -40 -41 -42 -43 -44 -45 -46 -47 -48 -49 -50 -51 -52 -53 -54 -55 -56 -57 -58 -59 -60 -61 -62 -63 -64 -65 -66 -67 -68 -69 -70 -71

and the following outputs:

0 1 4 9 16 25 36 49 64 81 100 121 144 169 196 225 256 289 324 361 400 441 484 529 576 625 676 729 784 841 900 961 1024 1089 1156 1225 1296 1369 1444 1521 1600 1681 1764 1849 1936 2025 2116 2209 2304 2401 2500 2601 2704 2809 2916 3025 3136 3249 3364 3481 3600 3721 3844 3969 4096 4225 4356 4489 4624 4761 4900 5041 1 4 9 16 25 36 49 64 81 100 121 144 169 196 225 256 289 324 361 400 441 484 529 576 625 676 729 784 841 900 961 1024 1089 1156 1225 1296 1369 1444 1521 1600 1681 1764 1849 1936 2025 2116 2209 2304 2401 2500 2601 2704 2809 2916 3025 3136 3249 3364 3481 3600 3721 3844 3969 4096 4225 4356 4489 4624 4761 4900 5041

I used the fitting tool's standard network setup, with the samples in matrix rows: 70% for training, 15% for validation, and 15% for testing. The number of hidden neurons is 2. Then on the command line I ran:

purelin(net.LW{2}*tansig(net.IW{1}*inputTest+net.b{1})+net.b{2}) 

Additional Information:

my net.b{1}: -1.16610230053776 1.16667147712026

my net.b{2}: 51.3266249426358

my net.IW{1}: 0.344272596370387 0.344111217766824

my net.LW{2}: 31.7635369693519 -31.8082184881063

When my test input is 3, the result of this command is 16, while it should be around 9.
If I made a mistake somewhere, please let me know. Thanks!

Edit: I found a linked MATLAB neural network question with a problem similar to mine, with one slight difference: there the input and output ranges are the same, which is not the case in my problem. The answer there says that I need to scale the results, but I do not know how to un-scale my output. Any ideas?

1 answer

You are right about scaling. As mentioned in the linked answer, the neural network by default scales the input and output to the range [-1,1]. This can be seen in the configuration of the network's processing functions:

 >> net = fitnet(2);
 >> net.inputs{1}.processFcns
 ans = 
     'removeconstantrows'    'mapminmax'
 >> net.outputs{2}.processFcns
 ans = 
     'removeconstantrows'    'mapminmax'

The second preprocessing function applied to both input and output is mapminmax, with the following parameters:

 >> net.inputs{1}.processParams{2}
 ans = 
     ymin: -1
     ymax: 1
 >> net.outputs{2}.processParams{2}
 ans = 
     ymin: -1
     ymax: 1

which maps them into the range [-1,1] (prior to training).

This means that the trained network expects input values in this range and produces output values in the same range. If you want to feed inputs to the network manually and compute the output yourself, you have to scale the data on the way in and reverse the mapping on the way out.
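For your numbers, a minimal sketch of that (assuming x and y are the training input and target vectors you listed above, so that mapminmax recomputes the same scaling settings the toolbox derived from the training data):

 [~,inMap]  = mapminmax(x, -1, 1);    %# recover input scaling settings
 [~,outMap] = mapminmax(y, -1, 1);    %# recover output scaling settings
 in = mapminmax('apply', 3, inMap);   %# scale the test input
 outScaled = purelin(net.LW{2}*tansig(net.IW{1}*in + net.b{1}) + net.b{2});
 out = mapminmax('reverse', outScaled, outMap)  %# should now be around 9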

The last thing to remember is that every time you train an ANN, you will get different weights. If you need reproducible results, you have to fix the state of the random number generator (initialize it with the same seed each time). Read the documentation for functions like rng and RandStream.
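For example (a sketch; the seed value here is arbitrary):

 rng(42)                %# arbitrary fixed seed: same initial weights every run
 net = fitnet(2);
 net = train(net, x, y);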

You should also note that if you divide the data into training/validation/test sets, you should use the same split every time (this is probably also affected by the randomness aspect I mentioned).
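One way to guarantee an identical split is to divide by index instead of at random (a sketch; these index ranges are only an illustration for 143 samples, not taken from your setup):

 net.divideFcn = 'divideind';          %# deterministic, index-based split
 net.divideParam.trainInd = 1:100;     %# ~70% training
 net.divideParam.valInd   = 101:121;   %# ~15% validation
 net.divideParam.testInd  = 122:143;   %# ~15% testing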


Here is an example illustrating the idea (adapted from another post):

 %%# data
 x = linspace(-71,71,200);            %# 1D input
 y_model = x.^2;                      %# model
 y = y_model + 10*randn(size(x)).*x;  %# add some noise
 
 %%# create ANN, train, simulate
 net = fitnet(2);                     %# one hidden layer with 2 nodes
 net.divideFcn = 'dividerand';
 net.trainParam.epochs = 50;
 net = train(net,x,y);
 y_hat = net(x);
 
 %%# plot
 plot(x, y, 'b.'), hold on
 plot(x, x.^2, 'Color','g', 'LineWidth',2)
 plot(x, y_hat, 'Color','r', 'LineWidth',2)
 legend({'data (noisy)','model (x^2)','fitted'})
 hold off, grid on
 
 %%# manually simulate network
 %# map input to [-1,1] range
 [~,inMap] = mapminmax(x, -1, 1);
 in = mapminmax('apply', x, inMap);
 
 %# propagate values to get output (scaled to [-1,1])
 hid = tansig( bsxfun(@plus, net.IW{1}*in, net.b{1}) );  %# hidden layer
 outLayerOut = purelin( net.LW{2}*hid + net.b{2} );      %# output layer
 
 %# reverse mapping from [-1,1] to original data scale
 [~,outMap] = mapminmax(y, -1, 1);
 out = mapminmax('reverse', outLayerOut, outMap);
 
 %# compare against MATLAB output
 max( abs(out - y_hat) )  %# this should be zero (or in the order of `eps`)

I chose to use the mapminmax function, but you could also do it manually. The formula is a fairly simple linear mapping:

 y = (ymax-ymin)*(x-xmin)/(xmax-xmin) + ymin; 
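and the inverse, which is what you need in order to un-scale the network output back to the original target range, is the same mapping solved for x:

 x = (y-ymin)*(xmax-xmin)/(ymax-ymin) + xmin;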

[screenshot: plot of the noisy data, the x^2 model, and the fitted network output]
