I'm building a tool to predict the time and cost of software projects based on past data. The tool uses a neural network for this, and so far the results are promising, but I think I can get much better accuracy just by changing the network's properties. There don't seem to be any rules, or even many best practices, for these settings, so if someone with experience could help me, I would really appreciate it.
The inputs are a series of integers that can go as high as the user wants, though most of them will be less than 100,000, I would think. Some of them will be as low as 1. They represent details such as the number of people on the project and the cost of the project, as well as information about the nature of the database involved, and so on.
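For context, here is a rough sketch of how I'm thinking about scaling these inputs before feeding them to the network (the field values are made up, and the log transform is just one option I'm considering, since the values span several orders of magnitude):

```python
import math

def normalize(value, max_expected=100_000):
    """Map a raw integer (1 .. ~100,000+) into [0, 1] on a log scale,
    clamping anything above the expected maximum."""
    clamped = min(max(value, 1), max_expected)
    return math.log(clamped) / math.log(max_expected)

# e.g. a raw input row: [people_on_project, project_cost, ...] (hypothetical)
raw = [12, 85_000, 3, 40_000]
inputs = [normalize(v) for v in raw]
```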
There are only 10 inputs and 2 outputs (time and cost). I use Resilient Propagation (RPROP) to train the network. Currently it has 10 input nodes, 1 hidden layer with 5 nodes, and 2 output nodes. I train until the error reaches 5%.
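In other words, the topology is essentially this (a plain NumPy sketch, not my actual code; the sigmoid activation is my assumption):

```python
import numpy as np

rng = np.random.default_rng(0)

# 10 -> 5 -> 2 feed-forward network
W1 = rng.uniform(-0.5, 0.5, size=(10, 5))   # input -> hidden weights
b1 = np.zeros(5)
W2 = rng.uniform(-0.5, 0.5, size=(5, 2))    # hidden -> output weights
b2 = np.zeros(2)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x):
    hidden = sigmoid(x @ W1 + b1)
    return sigmoid(hidden @ W2 + b2)        # [time, cost], scaled to (0, 1)
```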
The algorithm will run on a web server, so I put in a safeguard to stop training when it looks like it isn't going anywhere. This is set at 10,000 training iterations.
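The stopping logic amounts to this (a sketch; `train_iteration` and `current_error` are placeholders for whatever the training library actually provides):

```python
MAX_ITERATIONS = 10_000
TARGET_ERROR = 0.05  # stop once the error drops to 5%

def run_training(train_iteration, current_error):
    """Train until the error target is hit or the iteration cap is reached.
    `train_iteration` does one RPROP pass over the training set;
    `current_error` reports the network's current error."""
    for iteration in range(MAX_ITERATIONS):
        train_iteration()
        if current_error() <= TARGET_ERROR:
            return iteration + 1        # converged
    return MAX_ITERATIONS               # gave up: training was going nowhere
```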
Currently, when I try to train it on sets of data that differ only slightly from one another, but are within the limits of what we expect from users, it takes a very long time to train, hitting the 10,000-iteration limit again and again.
This is the first time I've used a neural network, and I really don't know what to expect. If you could give me some advice on what kind of settings I should use for the network and for the iteration limit, I would really appreciate it.
Thanks!
neural-network backpropagation