I have a set of bit flags that are used in a program I am porting from C to C++.
To start off, the flags in my program were previously defined as:
#define DCD_IS_CHARMM       0x01
#define DCD_HAS_4DIMS       0x02
#define DCD_HAS_EXTRA_BLOCK 0x04
Now I gather that #defines for constants (as opposed to class constants, etc.) are generally considered bad form. This raises the question of how best to store bit flags in C++, and why C++ does not support assigning binary literals to an int the way it allows hexadecimal numbers to be assigned (via the "0x" prefix). These questions are summarized at the end of this post.
One simple solution I can see is to just create separate constants:
namespace DCD {
    const unsigned int IS_CHARMM       = 1;
    const unsigned int HAS_4DIMS       = 2;
    const unsigned int HAS_EXTRA_BLOCK = 4;
}
Let me call this idea 1.
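For context, here is a quick sketch (my own illustration, not code from the original program) of how I would expect such constants to be combined and tested:

// Hypothetical usage sketch for idea 1, using the constants defined above.
unsigned int flags = DCD::IS_CHARMM | DCD::HAS_EXTRA_BLOCK;  // set two flags

if (flags & DCD::IS_CHARMM) {
    // the CHARMM flag is set
}

flags &= ~DCD::HAS_EXTRA_BLOCK;  // clear a flag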
Another idea I had was to use an enumeration:
namespace DCD {
    enum e_Feature_Flags {
        IS_CHARMM       = 1,
        HAS_4DIMS       = 2,
        HAS_EXTRA_BLOCK = 8
    };
}
But one thing that bothers me about this is that it seems less intuitive when it comes to higher values, i.e.:
namespace DCD {
    enum e_Feature_Flags {
        IS_CHARMM       = 1,
        HAS_4DIMS       = 2,
        HAS_EXTRA_BLOCK = 8,
        NEW_FLAG        = 16,
        NEW_FLAG_2      = 32,
        NEW_FLAG_3      = 64,
        NEW_FLAG_4      = 128
    };
}
Let me call this option 2.
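As an aside (my own note, not part of the original program): since OR-ing two enumerators yields a plain int, I would expect combined flags to be stored in an integer, or cast back explicitly, e.g.:

// Hypothetical usage sketch for option 2.
unsigned int flags = DCD::IS_CHARMM | DCD::HAS_4DIMS;  // fine: store the combination in an integer

// Storing the combination back into the enum type requires an explicit cast:
DCD::e_Feature_Flags combined =
    static_cast<DCD::e_Feature_Flags>(DCD::IS_CHARMM | DCD::HAS_4DIMS);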
I am considering using the Tom Torf macro:
#define B8(x) ((int) B8_(0x##x))
#define B8_(x) \
    ( ((x) & 0xF0000000) >> ( 28 - 7 ) \
    | ((x) & 0x0F000000) >> ( 24 - 6 ) \
    | ((x) & 0x00F00000) >> ( 20 - 5 ) \
    | ((x) & 0x000F0000) >> ( 16 - 4 ) \
    | ((x) & 0x0000F000) >> ( 12 - 3 ) \
    | ((x) & 0x00000F00) >> (  8 - 2 ) \
    | ((x) & 0x000000F0) >> (  4 - 1 ) \
    | ((x) & 0x0000000F) >> (  0 - 0 ) )
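To illustrate (my own example, not from the original code): since the macro expands to a constant expression, the enum from option 2 could be written with "binary" digits directly:

namespace DCD {
    enum e_Feature_Flags {
        IS_CHARMM       = B8(00000001),  /* == 1 */
        HAS_4DIMS       = B8(00000010),  /* == 2 */
        HAS_EXTRA_BLOCK = B8(00000100)   /* == 4 */
    };
}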
Converted to inline functions, e.g.:
#include <iostream>
#include <sstream>
#include <stdexcept>
#include <string>

/* TAKEN FROM THE C++ LITE FAQ [39.2]... */
class BadConversion : public std::runtime_error {
public:
    BadConversion(std::string const& s) : std::runtime_error(s) { }
};

inline unsigned int convertToUI(std::string const& s)
{
    std::istringstream i(s);
    unsigned int x;
    if (!(i >> std::hex >> x))   // parse as hexadecimal (the string is prefixed with "0x")
        throw BadConversion("convertToUI(\"" + s + "\")");
    return x;
}
/** END CODE **/

inline unsigned int B8(std::string x)
{
    unsigned int my_val = convertToUI(x.insert(0, "0x"));
    return ((my_val) & 0xF0000000) >> ( 28 - 7 )
         | ((my_val) & 0x0F000000) >> ( 24 - 6 )
         | ((my_val) & 0x00F00000) >> ( 20 - 5 )
         | ((my_val) & 0x000F0000) >> ( 16 - 4 )
         | ((my_val) & 0x0000F000) >> ( 12 - 3 )
         | ((my_val) & 0x00000F00) >> (  8 - 2 )
         | ((my_val) & 0x000000F0) >> (  4 - 1 )
         | ((my_val) & 0x0000000F) >> (  0 - 0 );
}

namespace DCD {
    enum e_Feature_Flags {
        IS_CHARMM       = B8("00000001"),
        HAS_4DIMS       = B8("00000010"),
        HAS_EXTRA_BLOCK = B8("00000100"),
        NEW_FLAG        = B8("00001000"),
        NEW_FLAG_2      = B8("00010000"),
        NEW_FLAG_3      = B8("00100000"),
        NEW_FLAG_4      = B8("01000000")
    };
}
Is this insanity? Or does it seem more intuitive? Let me call this option 3.
So, to recap, my overarching questions are:
1. Why does C++ not support a "0b" prefix for binary values, similar to the "0x" prefix for hexadecimal?
2. What is the best style for defining flags:
i. Namespace-wrapped constants?
ii. A namespace-wrapped enum assigned directly with integer values?
iii. A namespace-wrapped enum assigned using a readable binary string?
Thanks in advance! And please do not close this as subjective, because I really do want help on what the best style is and why C++ lacks a built-in binary assignment capability.
EDIT 1
Some additional information: I will be reading a 32-bit bitfield from a file and then testing it against these flags. So keep that in mind when you post suggestions.
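To make the use case concrete, here is a rough sketch (my own illustration; the function names are placeholders, and it assumes the field is stored as a 4-byte unsigned integer in native byte order) using the flags from option 2 above:

#include <fstream>

// Read the 32-bit flag field from an already-open binary file stream.
unsigned int read_flag_field(std::ifstream& in)
{
    unsigned int field = 0;
    in.read(reinterpret_cast<char*>(&field), sizeof(field));
    return field;
}

// Test individual bits against the DCD flags.
void check_flags(unsigned int field)
{
    if (field & DCD::IS_CHARMM) {
        // handle a CHARMM-style file
    }
    if (field & DCD::HAS_EXTRA_BLOCK) {
        // read the extra block
    }
}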