What is the difference between #define and a declaration?

#include <stdio.h>
#include <math.h>
// #define LIMIT 600851475143

int isP(long i);
void run(); // 6857

int main()
{
    //int i = 6857;
    //printf("%d\n", isP(i));
    run();
}

void run()
{
    long LIMIT = 600851475143; // 3, 5 // under 1000
    long i, largest = 1, temp = 0;

    for (i = 3; i <= 775147; i += 2)
    {
        temp = ((LIMIT / i) * i);
        if (LIMIT == temp)
            if (isP(i) == 1)
                largest = i;
    }
    printf("%d\n", largest);
}

int isP(long i)
{
    long j;
    for (j = 3; j <= i / 2; j += 2)
        if (i == (i / j) * j)
            return 0;
    return 1;
}

I just ran into an interesting problem. The code above is meant to calculate the largest prime factor of LIMIT. As written, it gives me 29, which is incorrect.

Strangely enough, when I #define the value of LIMIT (instead of declaring it as a long variable), the program gives me the correct value: 6857.

Can someone help me figure out the reason? Thank you very much!

+4
6 answers

On many platforms a long is a 4-byte integer, which overflows above 2,147,483,647. See, for example, the Visual C++ Data Type Ranges page.

When you use #define, the compiler is free to give the literal a more suitable type, one that can hold such a large number. That can make the program behave correctly and give the answer you expect.

In general, however, I would recommend being explicit about the data type and choosing one that can represent the number correctly, without relying on compiler- and platform-specific behavior.
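For example, something along these lines makes the type explicit (a minimal sketch, assuming a C99 compiler, where long long is guaranteed to be at least 64 bits):

#include <stdio.h>

int main(void)
{
    /* The LL suffix spells out the constant's type, and long long
       is guaranteed to be at least 64 bits wide. */
    long long limit = 600851475143LL;

    /* %lld is the printf conversion that matches long long. */
    printf("%lld\n", limit);
    return 0;
}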

+3

I would suspect a problem with a numeric type.

#define is a preprocessor directive, so LIMIT is replaced with that number in the source before the compiler ever sees it. This leaves the door open for the compiler to interpret the number however it wants, which may not be as a long.

In your case, long is probably not big enough, so the compiler chooses something larger when you use #define. For consistent behavior, specify a type that you know has an appropriate range rather than relying on the compiler to guess correctly.

You should also enable full warnings in your compiler; it may catch this kind of problem for you.
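To make the substitution concrete, here is a sketch of what the compiler actually sees after preprocessing (you can inspect the expansion yourself with gcc -E; the variable names here are just for illustration):

#include <stdio.h>

#define LIMIT 600851475143

int main(void)
{
    long i = 3;

    /* After preprocessing, the line below contains the literal
       600851475143 instead of the name LIMIT. It is the compiler,
       not the preprocessor, that then picks a type wide enough
       for that literal (long long on typical platforms). */
    printf("%lld\n", (long long)((LIMIT / i) * i));
    return 0;
}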

+1

When you write an expression like:

 (600851475143 + 1) 

everything is fine, since the compiler automatically promotes both constants to an appropriate type (for example, long long in your case) that is large enough to perform the calculation. You can build as many expressions like that as you want. But when you write:

 long n = 600851475143; 

the compiler has to assign a long long (or whatever the constant is implicitly converted to) to a long, which causes the problem in your case. Your compiler should warn you about this; for example, gcc says:

 warning: overflow in implicit constant conversion [-Woverflow] 

Of course, if long is large enough to hold the value, there is no problem, since the constant's type will be no wider than long.
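A small sketch of both situations side by side (assuming a platform where long is 32 bits and long long is 64 bits; the exact warning text varies between compilers and versions):

#include <stdio.h>

int main(void)
{
    /* Fine: the constant itself already has type long long here,
       so the addition is carried out in long long. */
    long long ok = 600851475143 + 1;

    /* Problematic where long is 32 bits: the long long constant is
       implicitly converted to long and overflows; gcc warns about
       the implicit constant conversion. */
    long bad = 600851475143;

    printf("%lld %ld\n", ok, bad);
    return 0;
}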

+1

Probably because 600851475143 is larger than LONG_MAX (2147483647, according to this reference).
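You can check the actual limits on your own platform with <limits.h> (a quick sketch):

#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* LONG_MAX is 2147483647 where long is 32 bits and
       9223372036854775807 where long is 64 bits. */
    printf("LONG_MAX  = %ld\n", LONG_MAX);
    printf("LLONG_MAX = %lld\n", LLONG_MAX);
    return 0;
}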

0

Try changing the type of LIMIT to a long long. As it stands, the value wraps around (try printing LIMIT to see what it actually holds). With the #define, the literal ends up with a type that is large enough.
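For instance, printing the declared LIMIT shows the wrap-around (a sketch; the wrapped value is implementation-defined, so it may differ on your machine):

#include <stdio.h>

int main(void)
{
    long limit = 600851475143;   /* does not fit in a 32-bit long */

    /* Where long is 32 bits this prints a wrapped value
       (often -443946297) rather than 600851475143. */
    printf("%ld\n", limit);
    return 0;
}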

0

Your code basically boils down to two possibilities:

 long LIMIT = 600851475143;
 x = LIMIT / i;

vs.

 #define LIMIT 600851475143
 x = LIMIT / i;

The first is equivalent to casting the constant to long:

 x = (long)600851475143 / i; 

while the second is preprocessed into:

 x = 600851475143 / i; 

And here is the difference: 600851475143 is too big for your compiler's long, so when it is converted to long it overflows and goes haywire. But when it is used directly in the division, the compiler knows that it does not fit into a long, automatically treats it as a long long literal, promotes i, and the division is performed in long long.

Note, however, that even if the algorithm runs correctly most of the time, you still have overflows elsewhere, so the code is incorrect. You should declare every variable that may hold these large values as long long.
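A minimal sketch of that advice applied to the question's run() and isP() (assuming a C99 compiler, where long long is at least 64 bits; the structure follows the original code, with %lld as the matching printf format):

#include <stdio.h>

int isP(long long i);

void run(void)
{
    long long LIMIT = 600851475143LL;
    long long i, largest = 1, temp = 0;

    for (i = 3; i <= 775147; i += 2)
    {
        temp = (LIMIT / i) * i;          /* equals LIMIT only if i divides it */
        if (LIMIT == temp && isP(i) == 1)
            largest = i;
    }
    printf("%lld\n", largest);           /* %lld matches long long */
}

int isP(long long i)
{
    long long j;
    for (j = 3; j <= i / 2; j += 2)
        if (i == (i / j) * j)
            return 0;
    return 1;
}

int main(void)
{
    run();
    return 0;
}

With 64-bit arithmetic throughout, this prints 6857, the value the question expects.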

0
