What matters is not the number of columns in the table, but the "width" of the table.
For example, if all 50 columns are BIT columns, you are looking at about 7 bytes of data per row, which is tiny.
On the other hand, if all 50 columns are VARCHAR(4000) columns, then you are looking at a potential maximum row size of about 200 KB per row (yes, SQL Server will let you define such a table), which can obviously cause problems (in practice it usually doesn't, but the point stands: it is the width of the data that matters, not the number of columns).
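To make the two extremes concrete, here is a rough sketch of what those definitions might look like (table and column names are made up for illustration):

```sql
-- 50 BIT columns: BIT columns are packed 8 to a byte, so the fixed
-- data portion of each row is only about 7 bytes.
CREATE TABLE dbo.NarrowFlags (
    Flag01 BIT, Flag02 BIT, Flag03 BIT,  -- ... continuing up to ...
    Flag50 BIT
);

-- 50 VARCHAR(4000) columns: a potential row size of roughly 200 KB.
-- SQL Server will create this (it may warn about the 8,060-byte in-row
-- limit), and rows that grow too wide are pushed to row-overflow pages.
CREATE TABLE dbo.WideText (
    Col01 VARCHAR(4000), Col02 VARCHAR(4000),  -- ... continuing up to ...
    Col50 VARCHAR(4000)
);
```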
The only surefire way to find out whether you will have problems is to try it and see. That said, as a very general rule it's nice to try to keep the row size below 8 KB (1 page), with two caveats:
- Usually you probably want your row size to be much smaller than this so that you can fit many rows on a page.
- However, if you have several large variable-length fields (for example, large VARCHAR or VARCHAR(MAX) columns), your row size is likely to exceed 8 KB quite regularly, and that is normal.
It's a complex subject; as I said, the only way to find out for sure is to try it and see how it performs.
Note that, with the exception of large object and overflow data (such as large VARCHAR columns), SQL Server will not let you create a row larger than 1 page (about 8,060 bytes of in-row data).
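If you want to see where a real table stands, one option (a sketch, assuming a hypothetical dbo.Orders table) is to look at the record-size columns exposed by sys.dm_db_index_physical_stats:

```sql
-- avg/max_record_size_in_bytes are only populated in 'SAMPLED' or 'DETAILED' mode.
SELECT  OBJECT_NAME(ips.object_id)   AS table_name,
        ips.index_id,
        ips.avg_record_size_in_bytes,
        ips.max_record_size_in_bytes,
        ips.page_count
FROM    sys.dm_db_index_physical_stats(
            DB_ID(), OBJECT_ID(N'dbo.Orders'), NULL, NULL, 'SAMPLED') AS ips
WHERE   ips.index_level = 0;   -- leaf level = the actual data rows
```

If max_record_size_in_bytes is creeping toward 8,060, or page_count is much larger than you'd expect for the row count, the table is probably "wide" in the sense discussed here.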
Why can a "wide" table be a problem?
Because it increases the amount of data that needs to be read.
As a very simple / contrived example, suppose you have a table sorted by ID (i.e., it has a clustered index on ID), and you want to get the records with IDs from 100 to 110 inclusive. If the row size is small (say 200 bytes), then the total size of those records is only around 2 KB, which is much smaller than the page size (8 KB). Since the table is sorted by ID, those records will most likely fit on a single page, at most 2, so it takes only a couple of reads to fetch all of them.
Now suppose the row size is larger (say 2 KB); the total size of those records is now around 20 KB, so the minimum number of page reads is 3, and more likely 4, since the rows won't line up neatly with page boundaries. On a busy database server, this extra I/O and buffer cache overhead adds up.
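A quick way to see this effect for yourself (a sketch, assuming a hypothetical dbo.Customers table clustered on ID) is to compare the logical reads reported by SET STATISTICS IO:

```sql
SET STATISTICS IO ON;

-- Range scan on the clustered index; the Messages tab reports the number
-- of logical reads (pages touched) for the query.
SELECT *
FROM   dbo.Customers
WHERE  ID BETWEEN 100 AND 110;

-- With narrow rows this is typically only 2-3 logical reads (root page,
-- intermediate level, one leaf page); with wide rows the same query has
-- to touch several leaf pages, so the count goes up.
```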
Large objects
Depending on the amount of data stored, large object and variable-length fields (for example, VARCHAR and VARCHAR(MAX)) can have their data stored on separate pages: either LOB pages or row-overflow pages.
What does this mean? Well, if you have a table with many such columns and you run a SELECT * ... query, SQL Server has to fetch all of those extra pages to read all of that extra data. We end up in the same situation as above: lots of reads, which is bad.
However, if instead we name only the columns we actually need, e.g. SELECT ID, Address ..., then SQL Server does not have to read the pages containing the data we are not interested in. Even though the table may define many columns and have a huge potential row width, because we listed only the columns of interest and because the rest of the data lives on separate pages, the number of reads required stays relatively small.
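As a sketch (again assuming the hypothetical dbo.Customers table, this time with several VARCHAR(MAX) columns stored off-row, and with ID and Address stored in-row), the difference shows up in the LOB / row-overflow counters of STATISTICS IO:

```sql
SET STATISTICS IO ON;

-- Pulls every column, so SQL Server must also fetch the LOB / row-overflow
-- pages where the big variable-length values live.
SELECT *
FROM   dbo.Customers
WHERE  ID BETWEEN 100 AND 110;
-- STATISTICS IO reports extra "lob logical reads" for this query.

-- Touches only in-row data; the off-row pages holding the other columns
-- are never read.
SELECT ID, Address
FROM   dbo.Customers
WHERE  ID BETWEEN 100 AND 110;
```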