I am testing a convolutional neural network on printed digits (0-9). It reaches 99% accuracy on the MNIST dataset, but when I tried it on fonts installed on the computer (Arial, Calibri, Cambria, Cambria Math, Times New Roman) and trained on images created from those fonts (25 fonts with 4 slightly different images per font, 104 images in total), the training error rate does not fall below 80%, i.e. 20% accuracy. Why?
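For reference, generating such training images might look like the sketch below. It uses PIL's built-in bitmap font as a stand-in; the actual setup presumably loaded each installed font via `ImageFont.truetype` (the function name `render_digit` and the 64x64 canvas size are my assumptions, not from the question):

```python
from PIL import Image, ImageDraw, ImageFont
import numpy as np

def render_digit(digit: str, size: int = 64) -> np.ndarray:
    """Render one digit as a grayscale image, black text on white.

    Uses PIL's built-in default font as a stand-in; an installed
    font (Arial, Calibri, ...) would be loaded with
    ImageFont.truetype("arial.ttf", ...) instead.
    """
    img = Image.new("L", (size, size), color=255)   # white canvas
    draw = ImageDraw.Draw(img)
    draw.text((size // 3, size // 3), digit,
              fill=0, font=ImageFont.load_default())
    return np.asarray(img)

arr = render_digit("2")
print(arr.shape)  # (64, 64)
```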
Here is a sample image of the digit "2":

I resized each image to 28 x 28.
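The resize step interacts with one detail worth making explicit: MNIST digits are white strokes on a black background, scaled to [0, 1], whereas font renderings are usually black-on-white. A minimal preprocessing sketch of this step (the nearest-neighbour resize is a placeholder for a proper anti-aliased resize from PIL or OpenCV):

```python
import numpy as np

def preprocess(img: np.ndarray, out_size: int = 28) -> np.ndarray:
    """Prepare a grayscale font image for an MNIST-trained CNN.

    `img` is a 2-D uint8 array, black digit on white background
    (the usual way fonts render). MNIST is the opposite: white
    strokes on black, scaled to [0, 1], so the image is inverted.
    """
    img = img.astype(np.float32) / 255.0   # scale to [0, 1]
    img = 1.0 - img                        # invert: white-on-black like MNIST
    # Naive nearest-neighbour resize to out_size x out_size.
    rows = np.arange(out_size) * img.shape[0] // out_size
    cols = np.arange(out_size) * img.shape[1] // out_size
    return img[np.ix_(rows, cols)]

# Example: a 56x56 white canvas with a black rectangular "stroke"
canvas = np.full((56, 56), 255, dtype=np.uint8)
canvas[16:40, 24:32] = 0
small = preprocess(canvas)
print(small.shape)                 # (28, 28)
print(small.max(), small.min())   # 1.0 0.0
```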
Details:
Training image size: 28 x 28. Network architecture (LeNet-5-like):
Input layer (28x28) | Convolutional layer (ReLU activation) | Pooling layer (tanh activation) | Convolutional layer (ReLU activation) | Local layer, 120 neurons (ReLU) | Fully connected layer (softmax activation, 10 outputs)
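For concreteness, the feature-map sizes through such a stack can be traced as below. The 5x5 "valid" convolutions and 2x2 stride-2 pooling are assumptions borrowed from the classic LeNet-5, since the question does not state kernel sizes:

```python
def lenet5_shapes(size: int = 28):
    """Trace spatial feature-map sizes through the layer list above.

    Assumes 5x5 'valid' convolutions and 2x2 pooling with stride 2
    (classic LeNet-5 values; the actual kernel sizes are not given
    in the question).
    """
    trace = [("input", size)]
    size = size - 5 + 1          # conv1: 28 -> 24
    trace.append(("conv1", size))
    size //= 2                   # pool:  24 -> 12
    trace.append(("pool", size))
    size = size - 5 + 1          # conv2: 12 -> 8
    trace.append(("conv2", size))
    return trace

print(lenet5_shapes())
# [('input', 28), ('conv1', 24), ('pool', 12), ('conv2', 8)]
```

The resulting 8x8 maps would then be flattened into the 120-neuron local layer and the 10-way softmax output.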
This gives 99+% accuracy on MNIST. Why does it perform so badly on computer fonts? A CNN should be able to handle many variations in the data.