Is there a reason why I should not test a set of variables for 0 by testing their product?
Often, when coding in different languages, I will test a set of variables and do something if any of them is zero.
For example (C#):
if( myString.Length * myInt * (myUrl.LastIndexOf(@"\")+1) == 0 )
Instead of:
if( myString.Length == 0 || myInt == 0 || myUrl.LastIndexOf(@"\") < 0)
Is there a reason why I should not test this way?
Here are a few reasons. All of them are important, and they are in no particular order.
Readability. The second version says directly what you mean: "if any of these values is zero, do something." The first version is a little puzzle that every reader of the code has to stop and solve before moving on, and "clever" tricks like that are exactly what makes code expensive to maintain.

Short-circuit evaluation. || and && stop evaluating as soon as the overall result is known; a product always evaluates every operand. A very common idiom such as

if (myObj != null && myObj.Enabled)

only works because && short-circuits, and there is no way to express it with multiplication. The boolean version also negates cleanly when you need the opposite test ("all of these are non-zero"):

if( myString.Length > 0 && myInt != 0 && myUrl.LastIndexOf(@"\") >= 0)

The trick is also easy to get wrong. Drop the +1 and you get

myString.Length * myInt * myUrl.LastIndexOf(@"\") == 0

which quietly tests something different, because LastIndexOf returns -1, not 0, when nothing is found.
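To see the difference in evaluation, here is a minimal C# sketch; the GetLength helper and its side effect are invented purely for illustration:

    using System;

    class ShortCircuitDemo
    {
        // Hypothetical helper with a visible side effect, so we can
        // observe whether it is actually called.
        static int GetLength(string s)
        {
            Console.WriteLine("GetLength was evaluated");
            return s.Length;
        }

        static void Main()
        {
            int myInt = 0;

            // || stops as soon as the result is known: myInt == 0 is
            // true, so GetLength is never called here.
            if (myInt == 0 || GetLength("hello") == 0)
                Console.WriteLine("boolean version matched, GetLength skipped");

            // The product evaluates every operand unconditionally:
            // GetLength runs even though myInt alone decides the result.
            if (GetLength("hello") * myInt == 0)
                Console.WriteLine("product version matched, GetLength ran anyway");
        }
    }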
Correctness. The multiplication can overflow. Suppose myString.Length is 65536 and the last "\" in myUrl sits at index 65535, so LastIndexOf(@"\") + 1 is also 65536. The product 65536 * myInt * 65536 then contains the factor 65536 * 65536 = 2^32, which does not fit in a 32-bit int; in C#'s default unchecked arithmetic it silently wraps around, and the whole product comes out as 0 no matter what myInt is. The test fires even though not a single factor is zero. (A URL longer than 65k characters is unusual, but nothing forbids it.)
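You can watch the wrap-around happen; this is a minimal sketch, with the values chosen only to trigger the overflow:

    using System;

    class OverflowDemo
    {
        static void Main()
        {
            int length = 65536;           // stands in for myString.Length
            int myInt = 42;               // clearly non-zero
            int lastIndexPlusOne = 65536; // stands in for myUrl.LastIndexOf(@"\") + 1

            // 65536 * 42 * 65536 = 42 * 2^32, which wraps to 0 in a 32-bit int.
            int product = length * myInt * lastIndexPlusOne;
            Console.WriteLine(product == 0); // True, although no factor is zero

            // In a checked context the same expression throws instead of wrapping:
            // checked { int p = length * myInt * lastIndexPlusOne; } // OverflowException
        }
    }

The boolean version has no such failure mode: each comparison is exact, whatever the values are.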
Performance. You gain nothing even here. In the best case the product version still performs 2 multiplications and evaluates all three operands; the boolean version stops at the first zero it finds (short-circuiting again), performs only cheap comparisons (no multiplications at all), and any decent compiler will do at least as well with it.
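If what you really want to say is "any of these values is zero," you can also say it literally. A small sketch using LINQ; AnyZero is a hypothetical helper of my own, not an existing API:

    using System;
    using System.Linq;

    class AnyZeroDemo
    {
        // Hypothetical helper: true if any of the supplied values is zero.
        // Note it gives up short-circuiting: every argument is evaluated.
        static bool AnyZero(params int[] values) => values.Any(v => v == 0);

        static void Main()
        {
            string myString = "hello";
            int myInt = 7;
            string myUrl = @"http://example.com/path";

            if (AnyZero(myString.Length, myInt, myUrl.LastIndexOf(@"\") + 1))
                Console.WriteLine("at least one value is zero");
            else
                Console.WriteLine("all values are non-zero");
        }
    }

This trades a little laziness for explicitness, but it cannot overflow and the intent is unmistakable.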