Well, if you keep the dropout keep-probability at 0.9, it means there is a 10% chance that each neuron's connection is dropped at every training iteration. So there is an optimal value for dropout as well.
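To make that concrete, here is a minimal NumPy sketch of what a keep probability of 0.9 means in practice (the array names and shapes are made up for the example):

```python
import numpy as np

rng = np.random.default_rng(0)
keep_prob = 0.9  # each unit survives with probability 0.9, i.e. a 10% drop chance

activations = rng.standard_normal((4, 5))          # hypothetical layer output
mask = rng.random(activations.shape) < keep_prob   # True for ~90% of units
dropped = activations * mask                       # dropped units are zeroed this iteration
```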

As in the above, you can see that we also scale our neurons under dropout. In the case above the keep probability is 0.5; if it is 0.9, the scaling is different again.
So, basically, if the keep probability is 0.9, we have to scale by 0.9 at test time. This compensates for the 0.1 fraction of connections that were dropped during training.
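Here is a hedged sketch of the two equivalent scaling conventions: classic dropout scales the kept activations by keep_prob at test time, while "inverted" dropout divides by keep_prob during training so the test-time pass needs no change. (NumPy, illustrative names only; this is not any particular library's implementation.)

```python
import numpy as np

rng = np.random.default_rng(0)
keep_prob = 0.9
x = rng.standard_normal((4, 5))  # hypothetical pre-dropout activations

# Training: zero out ~10% of units.
mask = rng.random(x.shape) < keep_prob

# Classic dropout: no scaling at train time...
train_classic = x * mask
# ...so at test time every activation is scaled by keep_prob (0.9)
# to match the expected magnitude seen during training.
test_classic = x * keep_prob

# Inverted dropout: scale by 1/keep_prob at train time instead,
# so the test-time pass is just the identity.
train_inverted = x * mask / keep_prob
test_inverted = x
```

In both conventions the expected activation at training time matches the test-time activation, which is the whole point of the scaling.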
This is how you can understand the effect dropout can have: for some values it is likely to saturate your nodes, etc., which causes convergence problems.