This is not a problem with C#, but with how your computer represents numbers. It is not conceptually difficult, but it is a long read. You should read up on it if you want a deep understanding of how your computer actually works with numbers.
An excellent TL;DR site, which imho is a better introduction to the topic than a full article on the subject:
http://floating-point-gui.de/
I will give a very short explanation of what is happening, but you should definitely read at least that site to avoid trouble later, since your field of application requires a solid understanding of this.
What happens is this: you have 1e-20, which is far smaller than 1.11e-16. That second number is (most likely) the machine epsilon for double precision on your computer, i.e. the largest relative rounding error a double can carry. If you add something smaller than the machine epsilon (relative to the larger operand) to a number equal to or greater than 1, it gets rounded away, back to the larger number. This is a consequence of the IEEE 754 standard: after the addition, the result that would be "correct" with infinite precision has to be stored in a finite-precision format, and that format rounds 4.00....001 back to 4, because the relative rounding error is less than 1.11e-16 and is therefore considered acceptable.