Are there any good reasons to prefer a Flags enumeration (i.e. a bitmask) over a HashSet of regular enum values? As far as I can tell, both solve the same problem:
enum Color { Red, Green, Blue }

[Flags]
enum Colors { None = 0, Red = 1, Green = 2, Blue = 4 }

void Test()
{
    var supportedColors1 = new HashSet<Color> { Color.Red, Color.Green };
    var supportedColors2 = Colors.Red | Colors.Green;

    // Membership test
    if (supportedColors1.Contains(Color.Green)) { }
    if ((supportedColors2 & Colors.Green) != 0) { }

    // Removal
    supportedColors1.Remove(Color.Red);
    supportedColors2 ^= Colors.Red;    // toggles the flag (only removes it if it was set)
    supportedColors2 &= ~Colors.Red;   // unconditionally clears the flag
}
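As the comments above hint, the two bitwise "removal" idioms are not equivalent: `&= ~flag` always clears the flag, while `^= flag` toggles it and will *set* the flag if it was absent. A small sketch of the difference, reusing the `Colors` enum from the question:

```csharp
using System;

[Flags]
enum Colors { None = 0, Red = 1, Green = 2, Blue = 4 }

class Program
{
    static void Main()
    {
        var set = Colors.Red | Colors.Green;

        // AND-NOT clears the flag whether or not it was set.
        var cleared = set & ~Colors.Red;          // Green
        var clearedAgain = cleared & ~Colors.Red; // still Green

        // XOR toggles the flag: "removing" twice puts it back.
        var toggled = set ^ Colors.Red;           // Green
        var toggledAgain = toggled ^ Colors.Red;  // Red | Green again

        Console.WriteLine(cleared);       // Green
        Console.WriteLine(clearedAgain);  // Green
        Console.WriteLine(toggled);       // Green
        Console.WriteLine(toggledAgain);  // Red, Green
    }
}
```

With the HashSet version this pitfall does not exist, since `Remove` is a no-op when the element is absent.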
It may be a matter of taste, but for someone without a hardware or bit-twiddling background (i.e. my team), I find the Set version more readable. I see the advantage of the Flags version when micro-optimization is required (better performance, less memory) or when P/Invoking Windows APIs, but for a standard line-of-business database application I am tempted to choose the Set version for readability.
Are there advantages of the Flags approach that I have overlooked that would justify its use in "regular" code?