I am currently building a deep learning model for pattern recognition. From what I have read, augmenting the data, for example by randomly cropping images, leads to less overfitting. However, I am not sure whether overdoing this could lead to a worse model. Of course, I could try one run with larger crops and another with smaller crops, but the problem is: how can I tell whether a problem arises from the crops I produced?
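To make concrete the kind of random-crop augmentation I mean, here is a minimal sketch using torchvision's RandomCrop (the 64x64 size is just a placeholder, not my actual setup):

```python
from torchvision import transforms

# Each time an image is fetched during training, a different random
# 64x64 crop is taken, so every epoch sees slightly different views.
train_transform = transforms.Compose([
    transforms.RandomCrop(64),
    transforms.ToTensor(),
])
# This would be passed as the transform of the training Dataset.
```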
Will taking all possible m×m crops from an n×n image improve model performance?
I believe it will. My reasoning is this: when we train a deep learning model, we watch the training loss and the validation loss and train until both are very low. Suppose we initially have a training set of 1000 images and the model requires 100 epochs. Now we produce 10 additional crops from each image in the original training set. Each epoch can now be considered equivalent to 10 epochs of the previous model, which had less training data. However, the training examples within these 10 "epochs" are all slightly different from one another, as opposed to simply duplicating the previous training set 10 times. Surely this will lead to less overfitting. Is my reasoning correct?
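Spelling out the arithmetic behind this equivalence (these are the hypothetical numbers from my example above, not real measurements):

```python
# Original setup: 1000 images trained for 100 epochs.
base_images = 1000
base_epochs = 100
samples_seen = base_images * base_epochs   # 100,000 gradient samples in total

# Augmented setup: 10 distinct crops per original image.
aug_images = base_images * 10              # 10,000 training images
aug_epochs = base_epochs // 10             # 10 epochs see the same total
assert aug_images * aug_epochs == samples_seen

# The difference: each of the 10 "copies" of an image is a slightly
# different crop rather than an exact duplicate, so the model cannot
# memorize one fixed view of each image.
```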
In this case, are there any drawbacks to cropping all possible smaller images, assuming we have enough computing resources?
I am currently looking at cropping all possible 64x64 images from a 72x72 image, which gives me (72 − 64 + 1)² = 81 crops per original image.
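For reference, here is how I would enumerate the crops; the count of all possible m×m crops of an n×n image is (n − m + 1)²:

```python
import numpy as np

def all_crops(image, m=64):
    """Return every m x m crop of a square n x n image."""
    n = image.shape[0]
    return [image[i:i + m, j:j + m]
            for i in range(n - m + 1)
            for j in range(n - m + 1)]

crops = all_crops(np.zeros((72, 72)), m=64)
print(len(crops))  # 81, i.e. (72 - 64 + 1) ** 2
```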
I have not seen any papers on this. I would be grateful if someone could point me to one. Thanks.