Advantages of C++ / Disadvantages of implementing functions as macros

Is it good to implement business logic functions as macros?

I have inherited some legacy C++ code, and I find that much of the business logic is implemented as long, cryptic macros.

Is there any advantage to macros over functions? What is the general rationale for using macros?

What kind of logic is best suited to macros?

Here is a simple example from the code:

#define INSERT_VALUES(IN,ID,EO) {\
    double evaluationOutput = EO;\
    int controls = 0;\
    int input_controls = m_input_controls[IN];\
    if(m_value_list[IN].ShouldProcess())\
    {\
        evaluationOutput = m_evaluationOutput[IN];\
        controls = m_controls[IN];\
    }\
    VALUE_EXIST(evaluationOutput,controls,input_controls,IN,ID,Adj);\
    m_evaluationOutput[IN] = controls > 0 ? evaluationOutput : 0.0;\
    m_controls[IN] = controls;\
    m_input_controls[IN] = input_controls;\
}

+4
3 answers

In Effective C++, Scott Meyers writes in Item 2:

Prefer consts, enums, and inlines to #defines.

In particular, referring to the practice of writing macros instead of functions, Meyers says:

Another common (mis)use of the #define directive is using it to implement macros that look like functions but that don't incur the overhead of a function call.

Macros like this have so many drawbacks, just thinking about them is painful.

Fortunately, you don't need to put up with this nonsense. You can get all the efficiency of a macro plus all the predictable behavior and type safety of a regular function by using a template for an inline function.

[Real functions] obey scope and access rules. For example, it makes perfect sense to talk about an inline function that is private to a class. In general, there is just no way to do that with a macro.

To answer your questions specifically:

  • Some programmers write macros instead of functions to avoid the perceived overhead of a function call, a dubious practice that rarely yields a measurable benefit.
  • Programmers also used the preprocessor to generate boilerplate code (and some possibly still do). In modern C++, preprocessor usage should be limited to #include and (possibly) #ifdef / #ifndef for conditional compilation.

As Meyers notes in closing:

It's not yet time to retire the preprocessor, but you should definitely give it long and frequent vacations.

+6

No. They are absolutely not a general replacement for functions.

Macros can do things that ordinary code cannot, in particular creating and modifying tokens (code) before compilation.

In return, they lose all type safety and almost all of the syntactic sugar available to actual code.

When you have to do something that cannot be done in code, or you need conditionally compiled code (super-debugging things like memory tracing), macros come in handy. They also have some value in providing more concise or more readable ways of expressing a certain thing (a common example is #define SUCCESS(x) ((x) >= 0)). However, for anything that requires type safety or exception handling, or that consists of frequently used code that does not have to mutate at compile time, they almost never work out. Most code that merely looks like a candidate for a macro can be expressed more safely as real code, which matters for business logic.

The closest thing to a rule of thumb: ask whether something has to happen or change at compile time. If so, consider a macro; otherwise just write a function. Remember that templates count as code and can do many of the things you might be tempted to use a macro for, while being considerably safer.

+6

There is nothing inherently wrong with long and cryptic. If the macros exist for performance reasons, you can achieve the same result with inline functions. If they are used to shorten expressions, they can be refactored into something less cryptic and more maintainable, if you have the time and money for such an effort.

+2
