It depends.
If you want to maximize SELECT speed, use int (or tinyint to save space), because a bit in the WHERE clause is slower than an int (not radically, but every millisecond counts). Also make the column NOT NULL, which speeds things up further. Below is a link to a real performance test, which I would recommend running in your own database, and also extending it with NOT NULL columns, indexes, and several columns at once. I even tried comparing several bit columns against several tinyint columns ( select count(*) where A=0 and B=0 and C=0 ), and the tinyint columns were faster. I had assumed SQL Server (2014) would optimize the bit case into a single bitmask comparison, making it roughly three times faster, but that is not the case. If you use indexes, you will need more than the 5,000,000 rows used in the test to notice any difference (which I didn't have the patience for, since populating a table with that many rows takes a long time on my machine).
https://www.mssqltips.com/sqlservertip/4137/sql-server-performance-test-for-bit-data-type-in-a-where-clause/
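For reference, here is a minimal sketch of the kind of multi-column comparison described above; the table and column names are hypothetical, and the row count and fill method are just one way to generate test data:

```sql
-- Hypothetical test tables; names are made up for illustration.
CREATE TABLE dbo.FlagTest_bit (
    A bit     NOT NULL, B bit     NOT NULL, C bit     NOT NULL
);
CREATE TABLE dbo.FlagTest_tinyint (
    A tinyint NOT NULL, B tinyint NOT NULL, C tinyint NOT NULL
);

-- Fill with pseudo-random 0/1 values; repeat the same INSERT for the
-- tinyint table so both hold identical data.
INSERT INTO dbo.FlagTest_bit (A, B, C)
SELECT TOP (1000000)
       ABS(CHECKSUM(NEWID())) % 2,
       ABS(CHECKSUM(NEWID())) % 2,
       ABS(CHECKSUM(NEWID())) % 2
FROM sys.all_objects a CROSS JOIN sys.all_objects b;

-- Time the equivalent queries against each table.
SELECT COUNT(*) FROM dbo.FlagTest_bit     WHERE A = 0 AND B = 0 AND C = 0;
SELECT COUNT(*) FROM dbo.FlagTest_tinyint WHERE A = 0 AND B = 0 AND C = 0;
```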
If you want to save space, use bit, since SQL Server packs up to 8 bit columns into a single byte, while 8 tinyint columns occupy 8 bytes. That works out to roughly 7 megabytes saved per million rows.
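A quick sketch of that storage difference (table and column names are again hypothetical):

```sql
-- Up to 8 bit columns share a single byte per row.
CREATE TABLE dbo.Flags_bit (
    F1 bit NOT NULL, F2 bit NOT NULL, F3 bit NOT NULL, F4 bit NOT NULL,
    F5 bit NOT NULL, F6 bit NOT NULL, F7 bit NOT NULL, F8 bit NOT NULL
);  -- 1 byte of flag storage per row

-- Each tinyint column takes its own byte.
CREATE TABLE dbo.Flags_tinyint (
    F1 tinyint NOT NULL, F2 tinyint NOT NULL, F3 tinyint NOT NULL, F4 tinyint NOT NULL,
    F5 tinyint NOT NULL, F6 tinyint NOT NULL, F7 tinyint NOT NULL, F8 tinyint NOT NULL
);  -- 8 bytes of flag storage per row: 7 bytes more, ~7 MB per million rows
```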
The differences between the two options are mostly minor, and since using bit also signals that the column is just a flag, I would recommend using bit.