#define vs. Enumerations for Addressing Peripherals

I need to program peripheral registers in an ARM9 microcontroller.

For example, for the USART, I keep the corresponding memory addresses in an enum:

    enum USART {
        US_BASE = (int) 0xFFFC4000,
        US_BRGR = US_BASE + 0x16,
        //...
    };

Then I use pointers in the function to initialize the registers:

    typedef volatile unsigned int vuint;   /* assumed: volatile 32-bit access type */

    void init_usart (void)
    {
        vuint *pBRGR = (vuint *) US_BRGR;
        *pBRGR = 0x030C;
        //...
    }

But my teacher says I had better use #defines, for example:

    #define US_BASE (0xFFFC4000)
    #define US_BRGR (US_BASE + 0x16)
    #define pBRGR   ((vuint*) US_BRGR)

    void init_usart (void)
    {
        *pBRGR = 0x030C;
    }

That way, he says, you don't have the overhead of allocating the pointers on the stack.

Personally, I don't like #define and most other preprocessor directives. So, the question in this particular case is: is it really worth using #defines instead of enums and stack-allocated pointers?


Related question: Want to configure a specific peripheral register in an ARM9-based chip

+6
Tags: c, enums, c-preprocessor, arm, embedded
7 answers

The approach that I have always preferred is to first define a struct that mirrors the peripheral's register layout:

    typedef volatile unsigned int reg32;   // or other appropriate 32-bit integer type

    typedef struct USART {
        reg32 pad1;
        reg32 pad2;
        reg32 pad3;
        reg32 pad4;
        reg32 brgr;
        // any other registers
    } USART;

    USART * const p_usart0 = (USART *) 0xFFFC4000;

Then in code I can just use

    p_usart0->brgr = 0x030C;

This approach is much cleaner when you have multiple instances of the same type of peripheral:

    USART * const p_usart1 = (USART *) 0xFFFC5000;
    USART * const p_usart2 = (USART *) 0xFFFC6000;
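
A hedged sketch of what that buys you (the function name is invented; the value is the one from the question): a single init routine can then serve every instance.

    /* sketch: one routine programs any USART instance */
    void usart_init (USART *usart)
    {
        usart->brgr = 0x030C;   /* baud rate generator, value from the question */
        /* ... other register setup ... */
    }

    /* usart_init(p_usart0);  usart_init(p_usart1); */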

User sbass provided a link to an excellent column by Dan Saks that gives much more detail on this technique and points out its advantages over the alternatives.

If you are fortunate enough to be using C++, you can add methods for all the common operations on the peripheral and encapsulate the device's quirks nicely.

+12

I am afraid that enum is a dead end for such a task. The standard defines enumeration constants to be of type int, so in general they are incompatible with pointers.

On an architecture with 32-bit int and 64-bit pointers, you may one day have a constant that doesn't fit into an int, and it is not clear what would happen then.
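
A minimal sketch of the trap (the _E and _D names are invented for the comparison): 0xFFFC4000 is 4294525952, which exceeds INT_MAX wherever int is 32 bits, so C90 does not allow it as an enumeration constant, and compilers differ in how loudly they object.

    enum usart_e { US_BASE_E = 0xFFFC4000 };   /* not portable          */
    #define US_BASE_D 0xFFFC4000u              /* fine: stays unsigned  */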

On the other hand, the argument that an enum will allocate something on the stack is invalid. Enumeration constants are compile-time constants and have nothing to do with the function's stack, any more than constants you specify with macros do.

+5

Dan Saks wrote several columns on programming embedded systems. Here is one of his later ones. He discusses C, C++, enums, #defines, structs, classes, etc., and why you might choose one over another. Definitely worth reading, and always good advice.

+3

In my experience, one of the main reasons for using #define for this kind of thing is that it is more of a standard idiom used in the embedded community.

Using enums instead of #defines will draw questions and comments from instructors (and, later, from colleagues), even though enums may have real advantages here (for example, not stomping on the global identifier namespace).

I personally like using enums for numeric constants, but sometimes you have to follow the conventions of where you work and what you work on.

However, performance should not be a problem.

+1

The answer: always do what the teacher wants and pass the class; then, on your own time, question it, find out whether their reasons were valid, and form your own opinions. You can't win against school, and it's not worth trying.

In this case, it is easy to compile to assembler, or to disassemble the object file, and see the difference between the enum and the #define.
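
For instance, a throwaway file like the sketch below (all names invented; the address kept within int range to sidestep the enum caveat above) can be built with gcc -S -O2 compare.c, or arm-none-eabi-gcc -S -O2 for the target, and the two functions compared in the generated assembly, or via objdump -d on the object file:

    /* compare.c -- sketch; names and address are illustrative */
    typedef volatile unsigned int vuint;

    enum { E_BRGR = 0x40004000 };    /* enum version    */
    #define D_BRGR 0x40004000u       /* #define version */

    void via_enum (void)   { *(vuint *) E_BRGR = 0x030C; }
    void via_define (void) { *(vuint *) D_BRGR = 0x030C; }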

I would recommend a #define over an enum; I have had compiler trouble with enums. And I really do not recommend using pointers the way you are using them: I have watched every compiler fail, at some point, to generate exactly the instructions intended for such accesses. It is rare, but when it happens you will wonder how your last decades of coding ever worked. Pointing structures at registers is even worse. I have paid for these problems often and expect to again; too many miles around the block, too much broken code traced back to them, to ignore the root cause.

+1

I would not say that either way is better; it is just personal preference. As for your professor's argument, it is really a moot point. Allocating local variables on the stack takes one instruction, no matter how many there are, usually something of the form sub esp, 10h. So whether you have one local or twenty, the same single instruction reserves space for all of them.

I would say that one of the advantages of the #define is that if, for some reason down the road, you wanted to change the way you access the pointer, you would only need to change it in one place.
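
A sketch of what is meant (addresses as in the question): every access funnels through the one macro, so a later change of address or access width is a one-line edit.

    #define US_BASE 0xFFFC4000u
    #define US_BRGR (*(volatile unsigned int *)(US_BASE + 0x16))

    void init_usart (void)
    {
        US_BRGR = 0x030C;   /* callers never see the raw address */
    }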

0

I would lean toward an enum, for potential future compatibility with C++ code. I say this because at my work we have many C header files shared between projects, some used by C code and some by C++. For the C++ ones, we often want to wrap the declarations in a namespace to prevent symbol clashes, but you cannot put a #define inside a namespace.
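
A sketch of the kind of shared header meant here (file and namespace names invented): the enum can be scoped for C++ consumers, while a #define would leak past the namespace, since the preprocessor knows nothing about C++ scopes.

    /* usart_regs.h -- sketch of a header shared by C and C++ code */
    #ifdef __cplusplus
    namespace hw {            /* C++ consumers get hw::US_BASE etc. */
    #endif                    /* C consumers get the plain names    */

    enum usart_addr {         /* subject to the int-range caveat above */
        US_BASE = (int) 0xFFFC4000,
        US_BRGR = US_BASE + 0x16
    };

    #ifdef __cplusplus
    }   /* namespace hw */
    #endif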

0
