Is there a reason to use Base2 lengths for columns in SQL Server?

Possible duplicates:
  • varchar fields: is a power of two more efficient?
  • nvarchar or varchar: which is better, multiples of 2 or rounded whole numbers?

Out of sheer habit, I size the columns I use in SQL Server as powers of two. For example, here is a table I'm working on:

  • ID int
  • FirstName nvarchar(64)
  • LastName nvarchar(64)
  • Col4 varchar(16)
  • Col5 nvarchar(32)
  • Col6 nvarchar(128)
  • etc...

I have no idea where this habit comes from, and I'm not sure it even makes sense. Could the following table definition be less efficient in some way than the previous one?

  • ID int
  • FirstName nvarchar(50)
  • LastName nvarchar(50)
  • Col4 varchar(10)
  • Col5 nvarchar(30)
  • Col6 nvarchar(100)
  • etc...

I suppose my main question is: are there legitimate reasons for using base-2 column lengths?
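
For concreteness, here is the first definition written out as DDL (the table name and nullability here are illustrative, not from the original post):

    CREATE TABLE dbo.Person (
        ID        int           NOT NULL,
        FirstName nvarchar(64)  NULL,
        LastName  nvarchar(64)  NULL,
        Col4      varchar(16)   NULL,
        Col5      nvarchar(32)  NULL,
        Col6      nvarchar(128) NULL
    );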

+7
database sql-server database-design
5 answers

Making columns larger than they need to be can be detrimental to your database design. From BOL:

A table can contain a maximum of 8,060 bytes per row. In SQL Server 2008, this restriction is relaxed for tables that contain varchar, nvarchar, varbinary, sql_variant, or CLR user-defined type columns. Exceeding the 8,060-byte row-size limit might affect performance because SQL Server still maintains a limit of 8 KB per page. When a combination of varchar, nvarchar, varbinary, sql_variant, or CLR user-defined type columns exceeds this limit, the SQL Server Database Engine moves the record column with the largest width to another page in the ROW_OVERFLOW_DATA allocation unit, while maintaining a 24-byte pointer on the original page. Moving large records to another page occurs dynamically as records are lengthened based on update operations. Update operations that shorten records may cause records to be moved back to the original page in the IN_ROW_DATA allocation unit. Also, querying and performing other select operations, such as sorts or joins, on large records that contain row-overflow data slows processing time, because these records are processed synchronously instead of asynchronously.
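
You can watch this happen with a quick sketch (the table and column names below are made up for illustration): create a table whose variable-length columns can jointly exceed 8,060 bytes, fill it, and inspect its allocation units.

    -- Two varchar(5000) columns can jointly exceed the 8,060-byte in-row limit.
    CREATE TABLE dbo.OverflowDemo (
        ID   int IDENTITY(1,1) PRIMARY KEY,
        Col1 varchar(5000) NULL,
        Col2 varchar(5000) NULL
    );

    INSERT INTO dbo.OverflowDemo (Col1, Col2)
    VALUES (REPLICATE('a', 5000), REPLICATE('b', 5000));

    -- A ROW_OVERFLOW_DATA allocation unit now appears alongside IN_ROW_DATA.
    SELECT au.type_desc, au.total_pages
    FROM sys.allocation_units AS au
    JOIN sys.partitions AS p ON au.container_id = p.partition_id
    WHERE p.object_id = OBJECT_ID('dbo.OverflowDemo');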

I have found that if you give columns extra size, sooner or later someone will use it. Also, if you declare something as varchar(64) when you really only need 10 characters, the field is more likely to be used for other purposes, and you will end up with bad data in it (such as a phone number field containing notes about which office secretary to contact, to pick a not-so-random example).

Still, at least this design is much better than making everything nvarchar(max).

+4

There is no reason to do this, especially with (n)varchar data, where the storage size is the actual data length plus 2 bytes.
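
A minimal sketch of that point (variable names are arbitrary; DECLARE with an initializer requires SQL Server 2008 or later):

    DECLARE @wide  varchar(64) = 'ten chars!';
    DECLARE @exact varchar(10) = 'ten chars!';

    -- Both report 10 bytes: storage tracks the data, not the declared maximum.
    SELECT DATALENGTH(@wide) AS WideBytes, DATALENGTH(@exact) AS ExactBytes;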

+3

No, it's just a programmer's habit of thinking in powers of 2. There is no technical reason for it in SQL Server: no gain in speed, performance, or anything similar.

+1

Doubtful. First of all, exact column lengths are mostly a matter of your own data schemas. Second, if lengths do come into play, the total length of all the columns is probably the more important criterion, and even then there is bookkeeping overhead, which means a nice round number is unlikely to be the best answer.

That said, you may find recommendations to limit total row size to a certain amount so that an entire row fits within a page, or something along those lines. That reduces the number of write I/O operations. But individual column sizes don't matter on their own; it's the total that counts.
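
If you want to check that total for an existing table, here is a rough sketch against the catalog views (dbo.YourTable is a placeholder; sys.columns.max_length is in bytes):

    SELECT SUM(c.max_length) AS MaxDeclaredInRowBytes
    FROM sys.columns AS c
    WHERE c.object_id = OBJECT_ID('dbo.YourTable')
      AND c.max_length <> -1;  -- -1 marks (max) columns, whose length is unbounded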

0

This is not a SQL Server question as such; I do the same thing in Oracle and MySQL. There is no particular reason, other than that base-2 sizes feel more convenient to me.

0
