The standard allows an implementation to choose between an integer type, an enum, and a std::bitset.
Why would a library implementer choose one of these options over the others?
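To make the three options concrete, here is a minimal sketch of each (the names opt_int, opt_enum, opt_bitset, lower, and upper are mine, not any library's):

```cpp
#include <bitset>
#include <cstdint>

// Option 1: a plain integer type -- each mask is a constant with one bit set.
namespace opt_int {
    using mask = std::uint32_t;
    constexpr mask lower = 1u << 0;
    constexpr mask upper = 1u << 1;
}

// Option 2: an enum -- each mask is an enumerator (bitwise operator
// overloads, sketched further down, keep combinations inside the type).
namespace opt_enum {
    enum mask : std::uint32_t { lower = 1u << 0, upper = 1u << 1 };
}

// Option 3: std::bitset -- each mask is a bitset with one bit set; bitset
// already provides its own bitwise operators.
namespace opt_bitset {
    using mask = std::bitset<32>;
    const mask lower{1u << 0};
    const mask upper{1u << 1};
}

int main() {
    auto a = opt_int::lower | opt_int::upper;   // ordinary unsigned arithmetic
    auto b = static_cast<opt_enum::mask>(opt_enum::lower | opt_enum::upper);
    auto c = opt_bitset::lower | opt_bitset::upper;
    return (a == 3u && b == 3u && c.count() == 2) ? 0 : 1;
}
```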
As a case study, llvm libcxx seems to use a combination of (at least) two of these implementation options:
ctype_base::mask is implemented using an integer type: <__locale>
regex_constants::syntax_option_type is implemented using an enum + overloaded operators: <regex>
The gcc libstdc++ project uses all three:
ios_base::fmtflags is implemented using an enum + overloaded operators (see the sketch after this list): <bits/ios_base.h>
regex_constants::syntax_option_type is implemented using an integer type, and regex_constants::match_flag_type is implemented using std::bitset; both live in <bits/regex_constants.h>
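To show what "enum + overloaded operators" means in practice, here is a minimal sketch (my own names and operator set; the real <regex> and <bits/ios_base.h> code differs in detail):

```cpp
// The flags are enumerators; overloading the bitwise operators keeps the
// result typed as fmtflags instead of decaying to int.
enum fmtflags : unsigned {
    dec = 1 << 0,
    hex = 1 << 1,
    oct = 1 << 2,
    basefield = dec | hex | oct
};

constexpr fmtflags operator|(fmtflags a, fmtflags b) {
    return static_cast<fmtflags>(static_cast<unsigned>(a) | static_cast<unsigned>(b));
}
constexpr fmtflags operator&(fmtflags a, fmtflags b) {
    return static_cast<fmtflags>(static_cast<unsigned>(a) & static_cast<unsigned>(b));
}
constexpr fmtflags operator~(fmtflags a) {
    return static_cast<fmtflags>(~static_cast<unsigned>(a));
}

int main() {
    fmtflags f = dec | hex;   // stays fmtflags thanks to the overload
    f = f & ~oct;             // clearing a flag also stays in-type
    return (f & basefield) == f ? 0 : 1;
}
```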
AFAIK, gdb cannot "see into" the bit fields of any of these three options, so there would be no difference in debugging support.
The enum solution and the integer-type solution should always occupy the same space. std::bitset does not seem to guarantee that sizeof(std::bitset<32>) == sizeof(std::uint32_t), so I don't see what makes std::bitset especially attractive.
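A quick size check illustrates the concern; the first two results are pinned down once the enum's underlying type is fixed, while the std::bitset result is whatever the implementation chooses:

```cpp
#include <bitset>
#include <cstdint>
#include <iostream>

enum flags : std::uint32_t { f0 = 1 };  // enum with a fixed 32-bit underlying type

int main() {
    // An enum with a fixed underlying type has the size of that type, so the
    // first two lines must match; sizeof(std::bitset<32>) is up to the
    // implementation (often 4 or 8 bytes, but not guaranteed to be either).
    std::cout << sizeof(std::uint32_t) << '\n'     // 4
              << sizeof(flags) << '\n'             // 4
              << sizeof(std::bitset<32>) << '\n';  // implementation-defined
}
```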
The enum solution seems slightly less type-safe, since a combination of masks does not yield an enumerator.
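Concretely, without overloaded operators the | of two enumerators undergoes integral promotion and has type int, so the result is neither an enumerator nor even a value of the enum type without a cast:

```cpp
enum mask { lower = 1 << 0, upper = 1 << 1 };  // plain enum, no operator overloads

int main() {
    // lower | upper promotes both operands and yields an int with value 3;
    // there is no enumerator with that value, and int does not implicitly
    // convert back to mask:
    // mask m = lower | upper;                   // error: invalid conversion
    mask m = static_cast<mask>(lower | upper);   // legal, but the cast is manual
    return m == (lower | upper) ? 0 : 1;
}
```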
Strictly speaking, the above applies to n3376 and not to FDIS (since I do not have access to FDIS).
Any enlightenment in this area would be appreciated.