What is the fastest way to call and execute a function in C?

I have a large set of functions that are already defined and compiled. I call them through function pointers, passing arguments dynamically at runtime. The process is iterative, with hundreds of thousands of function calls per iteration. I want to know an efficient way to call a compiled function; I suspect my approach is slow.

+4
7 answers

You need to profile your program to see whether the calls are actually the problem. If 99% of your time is spent inside the function bodies, the best improvement you can hope for from optimizing the call mechanism is 1%, and even that is hardly achievable.

+6

The only way to speed up a function call is for the compiler to know which function it is calling, so that it can inline it.

That is, something like:

void foo(void) { /* do some stuff */ }

int main(void)
{
    foo();
}

can effectively be inlined into:

 int main(void) { /* do some stuff */ } 

But if the compiler does not know which one to call:

void foo(void) { /* do some stuff */ }
void bar(void) { /* do some other stuff */ }

typedef void (*Function)(void);

int main(void)
{
    Function func = /* choose a function at runtime */;
    func();
}

The compiler cannot predict which function will be called, and therefore cannot inline it.

If your compiler supports it, you can try using __fastcall , but you need to profile your code and confirm it actually makes a positive difference.

A single level of indirection will not matter much. Profile your code and find where the real slowdowns are.

+3

It depends on how you determine which of these hundreds of thousands of functions to call. If you do a linear search through a list of function pointers, then yes, you probably spend a lot of time there. In that case, consider putting the function pointers into a hash table, or at least storing them in a sorted array so that you can perform a binary search. Without more information about what you are doing and how you are doing it, it is hard to give you useful advice.

You also need to profile, as others have pointed out. You say you only feel that your approach is slow; until you measure it, you don't know whether it is worth optimizing at all.

+1

The overhead of calling functions comes mainly from:

  • the function call itself
  • the parameters you pass
  • the return value
  • the number of times you need to call a function

So, for starters, ask yourself:

  • Can you change the algorithm to require fewer function calls?
  • Can you reduce the amount of data you need to pass back and forth?
  • Can you restructure the algorithm to batch the calls to each function, so that you process a group of values in one call, or at least call the same function repeatedly for a group of values so that its code stays in the processor cache?

Once you have a good algorithm and an efficient implementation, you can move on to lower-level optimization techniques: you could use assembler to implement your own calling convention that pushes less data onto the stack. If they are leaf functions (ones that call no other functions), you may not even need to use the stack, so you can avoid several overhead instructions per call. (Some of this can be done in C by replacing function calls with gotos; it is very ugly, though.)

Finally, you can venture into the realm of self-modifying code: build new machine code from fragments representing the functions, and then call the generated code. This is very processor-specific and complex, but it can be quite fast.

+1

Well, you could build your own function linker that binds specific call sites directly to specific functions and caches the results to avoid the lookup overhead. It probably won't help you much, though.

Much depends on the size of the functions, how close they are to each other in memory, and various other factors. Eliminating the function pointers would gain little if, for example, the second function called sits immediately after the first in memory, since the start of that function is most likely already in the cache.

This is not an easy question to answer with the few details you have provided.

As Mark says... the profiler is your friend.

0

You should use a tool like QProf , Valgrind, or gprof to profile your code and see where the most runtime is spent. Based on the results, you should optimize the functions that take the most time.

If iterating over the list does take most of the run time, you should try to optimize the lookup. If you search the list to find a given function, you could keep a separate list of the most frequently used functions, or order the list by call frequency, so that your search does not have to go as far into the list to find the function you want.

0

The few extra instructions needed to dereference a function pointer should be a tiny fraction of the instructions that make up the function body, so aggressively inlining every function call will not buy you much. As the earlier answers suggest, you really need to use a profiler to identify the bottlenecks.

In the grand scheme of things, shaving a few instructions here or there will not produce any significant improvement. The big wins will come from improving your algorithms.

0
source
