Neural network settings for fast training

I am creating a tool that predicts the time and cost of software projects from past data. The tool uses a neural network for this, and so far the results are promising, but I think I can get much better results just by tuning the network's properties. There seem to be no rules, or even many best practices, for these settings, so if someone with experience can help me, I would really appreciate it.

The input consists of a series of integers that can go as high as the user wants, though most of them will be below 100,000, I would think. Some will be as low as 1. They represent details such as the number of people on the project and the project's cost, as well as information about the nature of the database and of previous cases.

There are only 10 inputs and 2 outputs (time and cost). I use resilient propagation (RPROP) to train the network. It currently has 10 input nodes, one hidden layer with 5 nodes, and 2 output nodes, and I train until the error reaches 5%.

The algorithm will run on a web server, so I added a safeguard that stops training when it does not appear to be converging; the limit is set at 10,000 training iterations.
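The setup described above (a 10-5-2 network, a 5% error target, and a 10,000-iteration cap) could be sketched roughly as below. This is a minimal plain-backprop sketch in NumPy, not the RPROP algorithm the question uses, and the data and target columns are made up purely for demonstration; inputs are assumed to already be scaled to [0, 1].

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: 40 samples, 10 input features already
# scaled to [0, 1]. For demonstration, the 2 targets ("time" and
# "cost") are simply copies of the first two input columns.
X = rng.random((40, 10))
Y = X[:, :2].copy()

# 10-5-2 topology from the question, with small random weights.
W1 = rng.normal(0, 0.5, (10, 5))
b1 = np.zeros(5)
W2 = rng.normal(0, 0.5, (5, 2))
b2 = np.zeros(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
max_iters = 10_000    # the question's iteration cap
target_error = 0.05   # "train until the error reaches 5%"

for it in range(max_iters):
    # Forward pass through the single hidden layer.
    H = sigmoid(X @ W1 + b1)
    out = sigmoid(H @ W2 + b2)
    err = np.mean((out - Y) ** 2)
    if err < target_error:
        break  # converged before hitting the iteration cap
    # Backward pass: gradients of the squared error through both
    # sigmoid layers, averaged over the batch.
    d_out = (out - Y) * out * (1 - out) / len(X)
    d_H = (d_out @ W2.T) * H * (1 - H)
    W2 -= lr * (H.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_H)
    b1 -= lr * d_H.sum(axis=0)
```

The same loop structure (forward pass, error check against the target, capped iteration count) applies whatever the actual training rule is; RPROP would replace the `lr * gradient` updates with per-weight adaptive step sizes.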

Currently, when I try to train it on data sets that differ only slightly from one another, but are within the range of what we expect from users, training takes a very long time and hits the 10,000-iteration limit again and again.

This is the first time I have used a neural network and I really don't know what to expect. If you could give me some advice on what settings I should use for the network and for the iteration limit, I would really appreciate it.

Thanks!

neural-network backpropagation
1 answer

First of all, thanks for providing so much information about your network! Here are a few pointers that should give you a clearer picture.

  • You need to normalize your inputs. If one input averages 100,000 and another 0.5, the network will not weight the two inputs equally; normalization puts them on a common scale.
  • Only 5 hidden neurons for 10 input nodes? I remember reading somewhere that a common rule of thumb is at least double the number of inputs; try 20+ hidden neurons. This gives the network the capacity to fit a more complex model. With too many neurons, however, the network will simply memorize the training set.
  • Resilient propagation is a fine choice. Just remember that there are other training algorithms, such as Levenberg-Marquardt.
  • How many training samples do you have? Neural networks usually need a large data set to produce useful forecasts.
  • Consider adding a momentum term to your weight-update rule to speed things up, if you haven't already.
  • Online training tends to be better for general forecasting than batch training. The former updates the weights after each individual training sample is run through the network, while the latter updates them only after the entire data set has been passed through. It's your call.
  • Is your data discrete or continuous? Neural networks handle 0/1 values more easily than continuous ones. If your data is discrete, I would recommend the sigmoid activation function. For continuously varying data, a combination of tanh in the hidden layer and a linear activation in the output layer tends to work well.
  • Do you need another hidden layer? It can help if your network has to learn a complex input-to-output mapping.
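The normalization point above is the most important one for inputs that span 1 to 100,000. A minimal min-max scaling sketch (the column values here are made-up project records, purely illustrative):

```python
import numpy as np

# Hypothetical raw project records: first column might be team size,
# second column project cost -- wildly different scales.
raw = np.array([
    [3.0,   45_000.0],
    [10.0, 120_000.0],
    [1.0,    8_000.0],
    [6.0,   60_000.0],
])

# Min-max scaling maps every column into [0, 1] so that no single
# input dominates the weighted sums inside the network.
col_min = raw.min(axis=0)
col_max = raw.max(axis=0)
scaled = (raw - col_min) / (col_max - col_min)

print(scaled.min(), scaled.max())  # -> 0.0 1.0
```

Keep `col_min` and `col_max` from the training data and reuse them when scaling new inputs at prediction time; note that a new project outside the training range will then fall outside [0, 1], which is worth guarding against.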
