ANSI C #define vs functions

I have a question about the performance of my code. Let's say I have a struct in C for a point:

typedef struct _CPoint { float x, y; } CPoint; 

and a function that uses this struct:

 float distance(CPoint p1, CPoint p2) { return sqrt(pow((p2.x-p1.x),2)+pow((p2.y-p1.y),2)); } 

I was wondering if it would be a smarter idea to replace this function with a #define macro:

 #define distance(p1, p2)(sqrt(pow((p2.x-p1.x),2)+pow((p2.y-p1.y),2))); 

I think this will be faster because there is no function-call overhead, and I am wondering whether I should use this approach for the rest of the functions in my program to improve performance. So my question is:

Should I replace all my functions with #define to improve the performance of my code?

+4
5 answers

No. You should never make the decision between a macro and a function based on a perceived performance difference. You should evaluate it based on the merits of functions over macros. In general, choose functions.

Macros have many hidden pitfalls that can bite you. Case in point: your translation into a macro is incorrect here (or at least it does not preserve the semantics of the original function). The arguments of the distance macro are each evaluated twice. Imagine I made the following call:

 distance(GetPointA(), GetPointB()); 

With the macro version this actually results in 4 function calls, because each argument is evaluated twice. Had distance been left as a function, it would result in only 3 function calls (distance plus each argument). Note: I am ignoring the impact of sqrt and pow in these counts, since they are the same in both versions.
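
To make the double evaluation concrete, here is a minimal sketch of what the preprocessor actually produces. The bodies of GetPointA and GetPointB are hypothetical stand-ins with a visible side effect (a call counter), and the macro is the one from the question minus its stray trailing semicolon:

    #include <math.h>
    #include <stdio.h>

    typedef struct _CPoint { float x, y; } CPoint;

    /* the question's macro, minus the stray trailing semicolon */
    #define distance(p1, p2) (sqrt(pow((p2.x-p1.x),2)+pow((p2.y-p1.y),2)))

    static int calls = 0;  /* counts how often the helpers actually run */

    static CPoint GetPointA(void) { CPoint p = {0.0f, 0.0f}; calls++; return p; }
    static CPoint GetPointB(void) { CPoint p = {3.0f, 4.0f}; calls++; return p; }

    int main(void)
    {
        /* distance(GetPointA(), GetPointB()) is expanded by the preprocessor to
         *   (sqrt(pow((GetPointB().x-GetPointA().x),2)+pow((GetPointB().y-GetPointA().y),2)))
         * so each helper is called twice.                                          */
        double d = distance(GetPointA(), GetPointB());
        printf("d = %.1f, helper calls = %d\n", d, calls);  /* prints d = 5.0, helper calls = 4 */
        return 0;
    }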

+8

There are three options here:

  • normal functions, like distance above
  • inline functions
  • preprocessor macros

While functions give you some type safety, they also suffer a performance penalty because a stack frame has to be set up for every call. The code of an inline function is copied to the call site, so that penalty is not paid; however, the size of your code will grow. Macros provide no type safety and work by plain text substitution.
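
As a small illustration of that last point (the SQUARE/square pair below is a hypothetical example of mine, not from this thread): because a macro is expanded by literal text substitution, the operators inside its argument interact with the replacement text, and nothing is type-checked:

    #include <stdio.h>

    #define SQUARE(x) (x * x)                          /* plain text substitution               */
    static inline int square(int x) { return x * x; }  /* argument evaluated once, type-checked */

    int main(void)
    {
        /* SQUARE(1 + 2) expands to (1 + 2 * 1 + 2), which is 5, not 9. */
        printf("%d %d\n", SQUARE(1 + 2), square(1 + 2));  /* prints "5 9" */
        return 0;
    }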

Choosing among the three, I would usually use inline functions, and macros only when they are very short and very useful in that form (for example, hlist_for_each from the Linux kernel).

+3

I would recommend an inline function rather than a macro. It will give you any performance benefit a macro might have, without the ugliness. (Macros have some gotchas that make them very undesirable as a general replacement for functions. In particular, macro arguments are evaluated every time they are used, while function arguments are evaluated once, before the "call".)

    inline float distance(CPoint p1, CPoint p2)
    {
        float dx = p2.x - p1.x;
        float dy = p2.y - p1.y;
        return sqrt(dx*dx + dy*dy);
    }

(Note: I also replaced pow(dx, 2) with dx * dx. The two are equivalent, and the multiplication is likely to be more efficient. Some compilers may try to optimize away the pow call... but guess what they replace it with.)

+3

Jared is right, and in this particular case the cycles spent in the calls to pow and the call to sqrt will be something like two orders of magnitude more than the cycles spent on the call to distance itself.

Sometimes people assume that small code equals small time. Not so.
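
If you want to check this on your own machine, a rough way is to time the function against a dummy that has the same call overhead but no math-library work. A minimal sketch (the loop count, the dummy function and the harness are mine, not from the answer; build without aggressive optimization so the calls are not folded away, and expect the exact ratio to depend on your compiler and math library):

    #include <math.h>
    #include <stdio.h>
    #include <time.h>

    typedef struct _CPoint { float x, y; } CPoint;

    /* the version from the question: dominated by pow and sqrt */
    static float distance(CPoint p1, CPoint p2)
    {
        return sqrt(pow((p2.x - p1.x), 2) + pow((p2.y - p1.y), 2));
    }

    /* same calling convention, no math-library work, for comparison */
    static float distance_stub(CPoint p1, CPoint p2)
    {
        return (p2.x - p1.x) + (p2.y - p1.y);
    }

    int main(void)
    {
        CPoint a = {0.0f, 0.0f}, b = {3.0f, 4.0f};
        volatile float sink = 0.0f;      /* keeps the loops from being removed entirely */
        long i, n = 10000000L;
        clock_t t0, t1, t2;

        t0 = clock();
        for (i = 0; i < n; i++) { a.x = (float)i; sink += distance(a, b); }
        t1 = clock();
        for (i = 0; i < n; i++) { a.x = (float)i; sink += distance_stub(a, b); }
        t2 = clock();

        printf("with pow/sqrt:      %.3f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);
        printf("call overhead only: %.3f s\n", (double)(t2 - t1) / CLOCKS_PER_SEC);
        return 0;
    }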

+3

If you are using a fairly mature compiler, it will probably inline this for you at the assembly level when optimization is turned on.

For gcc, the -O3 option (or, for "small" functions, even -O2) will do this.

You can read more about it here: http://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html (see the -finline-* options).
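
For example (the file name, commands and comments below are an illustrative sketch of mine, not from the linked page), you can see whether a call was inlined by compiling to assembly and looking for the call instruction:

    /* inline_demo.c -- illustrative sketch
     *
     *   gcc -O0 -S inline_demo.c    ->  inline_demo.s still contains "call distance"
     *   gcc -O2 -S inline_demo.c    ->  distance is normally inlined into call_site
     *                                   (and often folded down to a constant)
     *
     * The relevant knobs (-finline-functions, -finline-small-functions,
     * -finline-limit=n, ...) are documented on the page linked above.
     */
    #include <math.h>

    typedef struct _CPoint { float x, y; } CPoint;

    static float distance(CPoint p1, CPoint p2)
    {
        float dx = p2.x - p1.x;
        float dy = p2.y - p1.y;
        return sqrt(dx * dx + dy * dy);
    }

    float call_site(void)
    {
        CPoint a = {0.0f, 0.0f}, b = {3.0f, 4.0f};
        return distance(a, b);
    }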

+1
