Many database schemas seem to follow this convention:
(2^n) - 1 for large fields:
varchar(511), varchar(255), varchar(127)
... then (2^n) for smaller fields:
varchar(64), varchar(32), varchar(16), varchar(8)
I understand why the (2^n) - 1 numbers are used, but I don't understand why there is no need to continue that trend down to the smaller fields, e.g.:
varchar(63), varchar(31), varchar(15), varchar(7)
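For context on why 255 in particular gets singled out: in some engines (MySQL, for example) a varchar value is stored with a length prefix, and that prefix takes 1 byte when the declared maximum fits in one byte (up to 255) and 2 bytes otherwise. A minimal sketch of that rule, assuming MySQL-style storage (the function name is illustrative, not an API):

```python
def length_prefix_bytes(declared_max: int) -> int:
    """Bytes used for a varchar's length prefix (MySQL-style rule):
    1 byte if the declared maximum length fits in one byte, else 2."""
    return 1 if declared_max <= 255 else 2

# 255 sits exactly on the 1-byte/2-byte boundary; 127, 63, ... do not
# sit on any comparable boundary, so shrinking to them saves nothing.
for n in (7, 63, 127, 255, 256, 511):
    print(n, length_prefix_bytes(n))
```

This is why 255 is a meaningful cutoff while the smaller (2^n) - 1 values are not: below 255 the prefix is already at its 1-byte minimum.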
Is there a reason for this, or is it simply that the benefit becomes negligible at those sizes?
database varchar schema
Jon Winstanley