What is the difference between > 0 and >= 1?

The title says it all. Is there a reason some professors and programmers generally write

if ( x >= 1 ) 

instead of

 if ( x > 0 ) 

?

+6
4 answers

There is a performance consideration here: on some CPUs, comparing against 0 is faster than comparing against 1. A smart compiler will optimize this anyway, but comparing with 0 is generally preferable when possible.

EDIT: A little clarification on this: processors have a "zero" flag, which is set when the result of an arithmetic operation or comparison is zero. There is also a "negative" flag. The "compare" instruction is more or less identical to the "subtract" instruction, except that the result is discarded; only the flags are set.

It depends on the context, but if the variable was just produced by an arithmetic operation and is now 0, the zero flag is already set, and no compare instruction is needed to determine whether x > 0. For x >= 1, a comparison against the constant 1 must be executed to set the flags before the condition can be evaluated.
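As an illustrative sketch, here is what a naive, non-optimizing x86 code generator might emit for the two tests (the exact instructions are an assumption; an optimizing compiler will typically emit identical code for both integer forms):

 int test_gt_zero(int x) {
     /* A naive x86 code generator can reuse the flags directly:
          test eax, eax   ; AND x with itself, sets zero/sign flags
          jg   taken      ; branch if x > 0
        No constant operand is needed. */
     return x > 0;
 }

 int test_ge_one(int x) {
     /* Taken literally, this form needs a compare against 1:
          cmp  eax, 1     ; subtract 1, discard result, keep flags
          jge  taken      ; branch if x >= 1 */
     return x >= 1;
 }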

For example (in pseudocode), some compilers (I have seen Delphi do this) will optimize

 for x = 0 to 10 { print "hello world " } 

to

 for x = 10 down to 0 { .. } 

simply because it does not need to "compare x with 10" on each iteration: the zero flag is already set on the last iteration as a side effect of decrementing x. Of course, this can only be done if x is not referenced inside the loop, otherwise it would change the functionality.
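A minimal C sketch of the same transformation, written by hand (assuming, as above, that the loop body does not use x):

 #include <stdio.h>

 int main(void) {
     /* Counting up: each iteration needs a compare against the
        constant 10 before branching. */
     for (int x = 0; x <= 10; x++) {
         printf("hello world\n");
     }

     /* Counting down: the decrement itself sets the CPU flags, so
        the loop test can reuse them without comparing against a
        constant. Safe here only because the body does not use x. */
     for (int x = 10; x >= 0; x--) {
         printf("hello world\n");
     }
     return 0;
 }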

Wikipedia has further details on the zero flag: http://en.wikipedia.org/wiki/Zero_flag

0

If both are integers, there is no difference.

+4

Using one technique or the other makes no functional difference (if x is an integer, which it apparently is).

However, the choice of one comparison over the other can carry semantic meaning. One says:

This action has more than 0 Xs; it can work with any positive number of elements.

The other says:

This action needs at least 1 X; it cannot complete without at least 1 element.

They mean the same thing but are expressed differently, which can help a reader understand why the code compares against that particular value.
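For example, a tiny hypothetical snippet (process and count are made up for illustration) showing the two readings:

 #include <stdio.h>

 /* hypothetical helper, just for illustration */
 static void process(int count) {
     printf("processing %d item(s)\n", count);
 }

 int main(void) {
     int count = 3;

     /* Reading 1: "works with any positive number of elements" */
     if (count > 0) {
         process(count);
     }

     /* Reading 2: "requires at least 1 element" */
     if (count >= 1) {
         process(count);
     }
     return 0;
 }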

+1
source

It depends on the type of x. If it is an integral type, then the two are the same, and it is a personal preference; I have always gone with the shorter x > 0. However, if it is a floating point or fixed point type, then there is a big difference, since there are many values between 0 and 1.
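A minimal sketch of that floating point difference (in C, just for illustration):

 #include <stdio.h>

 int main(void) {
     double x = 0.5; /* strictly between 0 and 1 */

     /* For floating point x the two tests disagree: */
     printf("x > 0  : %s\n", x > 0.0 ? "true" : "false");  /* true  */
     printf("x >= 1 : %s\n", x >= 1.0 ? "true" : "false"); /* false */
     return 0;
 }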

That said, there can be two reasons why people choose one over the other:

1) For clarity of intent: if the requirement is "give me an integer of one or higher", they use x >= 1, and if the requirement is "give me a positive integer", they use x > 0. When you read the code later, you can easily recover the requirement and the intent. I find this especially useful in teaching, while students are still learning.

2) Some developers do not know what they are doing and are confused even about the basics, so they implement the requirement literally, as described above, without analyzing the impact on performance, safety, and so on. In this example there is actually no harm, but in other code there can be a huge difference, and I have seen many developers who simply do not know the difference or its impact.

0
