My opinion is that they help to avoid magic numbers.
A magic number is an arbitrary number floating around in your code. For example:
int i = 32;
This is problematic because nobody can tell why the variable i is assigned the value 32, what 32 means, or whether it should be 32 at all. It is magical and mysterious.
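The usual fix is to give the number a name. A minimal sketch follows; the constant name and the rationale in the comment are my own illustration, not something from the original code:

using System;

class Example
{
    // Hypothetical constant: name and rationale are illustrative only.
    const int MaxConnections = 32; // matches the pool size configured on the server

    static void Main()
    {
        int i = MaxConnections; // now the reader knows where 32 comes from
        Console.WriteLine(i);
    }
}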
In the same vein, I often see code like this:
int i = 0;
int z = -1;
Why are they set to 0 and -1? Is this just a coincidence? Do they mean something? Who knows?
While Decimal.One, Decimal.Zero, etc. don't tell you what the values mean in the context of your application (perhaps zero means "absent", and so on), they do tell you that the value was set intentionally and probably has some meaning.
Although not ideal, it is much better than not saying anything at all :-)
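To make the contrast concrete, here is a hedged sketch; the variable names and the "balance" scenario are mine, purely for illustration:

using System;

class Example
{
    static void Main()
    {
        Decimal balance = Decimal.Zero; // reads as intentional: "start with nothing"
        Decimal discount = 0M;          // bare literal: deliberate, or a leftover default?

        if (balance == Decimal.Zero)    // clearly a deliberate check against zero
            Console.WriteLine("No balance yet.");

        Console.WriteLine(discount);
    }
}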
Note: this is not about optimization. Observe this C# code:
public static Decimal d = 0M;
public static Decimal dZero = Decimal.Zero;
When you view the generated bytecode with ildasm, both fields result in identical MSIL. System.Decimal is a value type, so Decimal.Zero is no more "optimal" than just using a literal value.
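If you want to reproduce the check, here is a sketch of the experiment (the class name and file name are mine; ildasm ships with the .NET Framework SDK):

using System;

public static class DecimalFields
{
    public static Decimal d = 0M;
    public static Decimal dZero = Decimal.Zero;
}

// Compile to a library and dump the IL to the console:
//   csc /target:library DecimalFields.cs
//   ildasm /text DecimalFields.dll
// Both initializers end up in the generated static constructor (.cctor),
// and per the observation above, the emitted MSIL is identical.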
Orion Edwards, Apr 13 '09 at 22:23