You are right to scale. As mentioned in the linked answer, the neural network by default scales both input and output to the range [-1,1]. This can be seen in the configuration of the network's processing functions:
>> net = fitnet(2);
>> net.inputs{1}.processFcns
ans = 
    'removeconstantrows'    'mapminmax'
>> net.outputs{2}.processFcns
ans = 
    'removeconstantrows'    'mapminmax'
The second preprocessing function applied to both input and output is mapminmax, with the following parameters:
>> net.inputs{1}.processParams{2}
ans = 
    ymin: -1
    ymax: 1
>> net.outputs{2}.processParams{2}
ans = 
    ymin: -1
    ymax: 1
These map the data into the range [-1,1] (prior to training).
This means that the trained network expects input values in this range, and it outputs values in the same range as well. If you want to manually feed input to the network and compute the output yourself, you have to scale the data on the way in and reverse the mapping on the way out.
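As a minimal sketch of that round trip, assuming a trained two-layer fitnet stored in net (tansig hidden layer, linear output) and a new sample xnew; the processSettings fields are assumed to mirror the processFcns shown above:

inSet  = net.inputs{1}.processSettings{2};   %# mapminmax settings saved for the input
outSet = net.outputs{2}.processSettings{2};  %# mapminmax settings saved for the output

xs = mapminmax('apply', xnew, inSet);        %# map the input into [-1,1]
ys = net.LW{2,1}*tansig(net.IW{1,1}*xs + net.b{1}) + net.b{2};  %# manual forward pass
ynew = mapminmax('reverse', ys, outSet);     %# undo the output mapping

The result should match sim(net, xnew), which performs the same pre- and post-processing internally.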
The last thing to remember is that every time you train the ANN, you will get different weights. If you need reproducible results, you have to fix the state of the random number generator (initialize it with the same seed each time). Read the documentation for functions like rng and RandStream.
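For instance, a sketch of seeding before training, assuming some training data X and Y:

rng(0)                    %# fix the RNG state before creating/training the net
net = fitnet(2);
net = train(net, X, Y);   %# repeating from rng(0) reproduces the same weights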
You should also note that if you divide the data into training/validation/test sets, you must use the same split every time (this is probably affected by the randomness aspect I mentioned as well); see the sketch below.
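One way to pin the split down, assuming 200 samples, is to switch from the default random division to explicit indices (the index ranges here are arbitrary):

net = fitnet(2);
net.divideFcn = 'divideind';           %# divide by explicit indices, not randomly
net.divideParam.trainInd = 1:140;      %# training set
net.divideParam.valInd   = 141:170;    %# validation set
net.divideParam.testInd  = 171:200;    %# test set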
Here is an example illustrating the idea (adapted from another post):
%%# data
x = linspace(-71,71,200);
I decided to use the mapminmax function, but you could do the mapping manually as well. The formula is a fairly simple linear mapping:
y = (ymax-ymin)*(x-xmin)/(xmax-xmin) + ymin;
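Applied directly, assuming x is a row vector and the default target range [-1,1], the formula reproduces what mapminmax does:

xmin = min(x);  xmax = max(x);
ymin = -1;  ymax = 1;
y = (ymax-ymin)*(x-xmin)/(xmax-xmin) + ymin;  %# same as mapminmax(x) for a row vector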