Why does C++ support hexadecimal literals but not binary literals? What is the best way to store flags?

I have a set of bit flags that are used in a program that I am porting from C to C++.

To start ...

The flags in my program were previously defined as:

    /* Define feature flags for this DCD file */
    #define DCD_IS_CHARMM       0x01
    #define DCD_HAS_4DIMS       0x02
    #define DCD_HAS_EXTRA_BLOCK 0x04

... Now I gather that #defines for constants (as compared to class constants, etc.) are generally considered bad form.

So the question arises of how best to store bit flags in C++, and also why C++ does not support assigning binary literals to an int the way it supports hexadecimal literals (via the "0x" prefix). These questions are summarized at the end of this post.

One simple solution I can see is to just create separate constants:

    namespace DCD {
        const unsigned int IS_CHARMM       = 1;
        const unsigned int HAS_4DIMS       = 2;
        const unsigned int HAS_EXTRA_BLOCK = 4;
    }

Let me call this idea 1.

Another idea I have used is an integer enumeration:

    namespace DCD {
        enum e_Feature_Flags {
            IS_CHARMM       = 1,
            HAS_4DIMS       = 2,
            HAS_EXTRA_BLOCK = 8
        };
    }

But one thing that bothers me about this is that it seems less intuitive when it comes to higher flag values, i.e.:

    namespace DCD {
        enum e_Feature_Flags {
            IS_CHARMM       = 1,
            HAS_4DIMS       = 2,
            HAS_EXTRA_BLOCK = 8,
            NEW_FLAG        = 16,
            NEW_FLAG_2      = 32,
            NEW_FLAG_3      = 64,
            NEW_FLAG_4      = 128
        };
    }

Let me call this option 2.

I have also considered using Tom Torf's macro:

    #define B8(x) ((int) B8_(0x##x))
    #define B8_(x) \
        ( ((x) & 0xF0000000) >> ( 28 - 7 ) \
        | ((x) & 0x0F000000) >> ( 24 - 6 ) \
        | ((x) & 0x00F00000) >> ( 20 - 5 ) \
        | ((x) & 0x000F0000) >> ( 16 - 4 ) \
        | ((x) & 0x0000F000) >> ( 12 - 3 ) \
        | ((x) & 0x00000F00) >> (  8 - 2 ) \
        | ((x) & 0x000000F0) >> (  4 - 1 ) \
        | ((x) & 0x0000000F) >> (  0 - 0 ) )

converted into inline functions, e.g.:

    #include <iostream>
    #include <sstream>
    #include <stdexcept>
    #include <string>
    ....
    /* TAKEN FROM THE C++ FAQ LITE [39.2]... */
    class BadConversion : public std::runtime_error {
    public:
        BadConversion(std::string const& s) : std::runtime_error(s) { }
    };

    inline unsigned int convertToUI(std::string const& s) {
        std::istringstream i(s);
        unsigned int x;
        if (!(i >> std::hex >> x))
            throw BadConversion("convertToUI(\"" + s + "\")");
        return x;
    }
    /** END CODE **/

    inline unsigned int B8(std::string x) {
        unsigned int my_val = convertToUI(x.insert(0, "0x"));
        return ((my_val) & 0xF0000000) >> ( 28 - 7 )
             | ((my_val) & 0x0F000000) >> ( 24 - 6 )
             | ((my_val) & 0x00F00000) >> ( 20 - 5 )
             | ((my_val) & 0x000F0000) >> ( 16 - 4 )
             | ((my_val) & 0x0000F000) >> ( 12 - 3 )
             | ((my_val) & 0x00000F00) >> (  8 - 2 )
             | ((my_val) & 0x000000F0) >> (  4 - 1 )
             | ((my_val) & 0x0000000F) >> (  0 - 0 );
    }

    namespace DCD {
        enum e_Feature_Flags {
            IS_CHARMM       = B8("00000001"),
            HAS_4DIMS       = B8("00000010"),
            HAS_EXTRA_BLOCK = B8("00000100"),
            NEW_FLAG        = B8("00001000"),
            NEW_FLAG_2      = B8("00010000"),
            NEW_FLAG_3      = B8("00100000"),
            NEW_FLAG_4      = B8("01000000")
        };
    }

Is this madness? Or does it seem more intuitive? Let me call this option 3.

So, to recap, my overarching questions are:

1. Why does C++ not support a "0b" prefix, similar to "0x"?
2. What would be the best style to define flags...
i. Namespace-wrapped constants.
ii. A namespace-wrapped enum assigned directly.
iii. A namespace-wrapped enum assigned using a readable binary string.

Thanks in advance! And please do not close this topic as subjective, because I really would like help on what the best style is and on why C++ lacks a built-in binary literal capability.


EDIT 1

Some additional information: I will be reading a 32-bit bitfield from a file and then testing it against these flags. So keep that in mind when you post suggestions.

+6
c++ variable-assignment binary flags inline
7 answers

Prior to C++14, binary literals had been discussed off and on over the years, but as far as I know no one ever wrote a serious proposal to get them into the standard, so the idea never really went past the discussion stage.

For C++14, somebody finally wrote a proposal and the committee accepted it, so if you can use a current compiler, the basic premise of the question is false: you can use binary literals, which have the form 0b01010101 .

In C++11, rather than adding binary literals directly, they added a much more general mechanism, user-defined literals, which you can use to support binary, base 64, or other kinds of things entirely. The basic idea is that you write a number (or string) literal followed by a suffix, and you can define a function that receives that literal and converts it to whatever form you prefer (and you can preserve its status as a constant too...).

Regarding usage: if possible, the binary literals built into C++14 or later are the obvious choice. If you cannot use them, I would prefer option 2: an enum with initializers written in hexadecimal:

    namespace DCD {
        enum e_Feature_Flags {
            IS_CHARMM       = 0x1,
            HAS_4DIMS       = 0x2,
            HAS_EXTRA_BLOCK = 0x8,
            NEW_FLAG        = 0x10,
            NEW_FLAG_2      = 0x20,
            NEW_FLAG_3      = 0x40,
            NEW_FLAG_4      = 0x80
        };
    }

Another possibility is something like:

    #define bit(n) (1 << (n))

    enum e_feature_flags {
        IS_CHARMM       = bit(0),
        HAS_4DIMS       = bit(1),
        HAS_EXTRA_BLOCK = bit(3),
        NEW_FLAG        = bit(4),
        NEW_FLAG_2      = bit(5),
        NEW_FLAG_3      = bit(6),
        NEW_FLAG_4      = bit(7)
    };
+13

With option two, you could use a left shift, which is perhaps a little less "unintuitive":

    namespace DCD {
        enum e_Feature_Flags {
            IS_CHARMM       = 1,
            HAS_4DIMS       = (1 << 1),
            HAS_EXTRA_BLOCK = (1 << 2),
            NEW_FLAG        = (1 << 3),
            NEW_FLAG_2      = (1 << 4),
            NEW_FLAG_3      = (1 << 5),
            NEW_FLAG_4      = (1 << 6)
        };
    }
+7

As a side note, Boost (as usual) provides an implementation of this idea.

+4

Why not use a bitfield struct?

    struct preferences {
        unsigned int likes_ice_cream     : 1;
        unsigned int plays_golf          : 1;
        unsigned int watches_tv          : 1;
        unsigned int reads_stackoverflow : 1;
    };

    struct preferences fred;

    fred.likes_ice_cream = 1;
    fred.plays_golf = 0;
    fred.watches_tv = 0;
    fred.reads_stackoverflow = 1;

    if (fred.likes_ice_cream == 1)
        /* ... */
+2

GCC has an extension that supports binary constants:

 int n = 0b01010101; 

Edit: as of C++14, this is now an official part of the language.

+2

What's wrong with hex for this use case?

    enum Flags {
        FLAG_A = 0x00000001,
        FLAG_B = 0x00000002,
        FLAG_C = 0x00000004,
        FLAG_D = 0x00000008,
        FLAG_E = 0x00000010,
        // ...
    };
+1

I think the bottom line is that it really is not necessary.

If you just want to use binary for flags, the approach below is how I usually do it. After defining the original constants you never have to worry about looking at the "messy" larger multiples of 2, as you mentioned:

    int FLAG_1 = 1;
    int FLAG_2 = 2;
    int FLAG_3 = 4;
    ...
    int FLAG_N = 256;

You can then easily check for them with:

    if ((someVariable & FLAG_3) == FLAG_3) {
        // the flag is set
    }

And by the way, depending on your compiler (I use GNU GCC), it may support "0b" anyway.

Note: edited to answer the question.

0
