SQL Server: maximum number of rows in a table

I am developing software that stores a lot of data in one of its database tables (SQL Server version 8, 9 or 10). Let's say about 100,000 records are inserted into this table per day, which is about 36 million records per year. Because I was afraid of losing performance, I decided to create a new table every day (a table with the current date in its name) to reduce the number of records per table.

Could you tell me whether this was a good idea? Is there a row limit for SQL Server tables? Or do you know how many records (more or less) can be stored in a table before performance degrades significantly?

+59
sql-server database-design
Apr 17 '09 at 6:20
12 answers

It is difficult to give a general answer to this. It really depends on a number of factors:

  • what is the size of your row
  • what kind of data do you store (strings, blobs, numbers)
  • what do you do with your data (just keep it as an archive, or query it regularly)
  • do you have indexes on your table, and how many
  • what are the specs of your server

and so on.

As stated elsewhere here, 100,000 records a day, and therefore a table per day, is overkill; I would suggest monthly or weekly tables, perhaps even quarterly. The more tables you have, the bigger a maintenance/query nightmare it becomes.
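
If you want to gauge a few of those factors (row count, data size, index size) on an existing table, something along these lines works; the table name here is just a placeholder:

    -- Quick sizing check for a (hypothetical) table: sp_spaceused reports
    -- row count, reserved space, and data/index sizes.
    EXEC sp_spaceused N'dbo.YourBigTable';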

+31
Apr 17 '09 at 6:27

Here are some of the maximum capacity specifications for SQL Server 2008 R2:

  • Database size: 524,272 terabytes
  • Databases per SQL Server instance: 32,767
  • Filegroups per database: 32,767
  • Files per database: 32,767
  • File size (data): 16 terabytes
  • File size (log): 2 terabytes
  • Rows per table: limited by available storage
  • Tables per database: limited by the number of objects in the database
+77
Oct 06 '10

I have a three-column table with just over 6 billion rows in SQL Server 2008 R2.

We query it every day to create system analysis charts for our customers. I have not noticed any database performance issues (although the fact that it grows by ~1 GB every day makes backup management a bit more involved than we would like).

Update July 2016

Row count

We got this up to ~24.5 billion rows before the backups became large enough for us to decide to truncate records older than two years (~700 GB stored across multiple backups, including on expensive tapes). It is worth noting that performance was not a significant motivator in this decision (i.e., it was still working great).

For anyone who finds themselves trying to delete 20 billion rows from SQL Server, I highly recommend this article. The relevant code, in case the link ever dies (read the article for the full explanation):

    ALTER DATABASE DeleteRecord SET RECOVERY SIMPLE;
    GO

    BEGIN TRY
        BEGIN TRANSACTION
            -- Bulk logged
            SELECT * INTO dbo.bigtable_intermediate
            FROM dbo.bigtable
            WHERE Id % 2 = 0;

            -- Minimally logged because it is a DDL operation
            TRUNCATE TABLE dbo.bigtable;

            -- Bulk logged because the target table is exclusively locked!
            SET IDENTITY_INSERT dbo.bigTable ON;
            INSERT INTO dbo.bigtable WITH (TABLOCK) (Id, c1, c2, c3)
            SELECT Id, c1, c2, c3
            FROM dbo.bigtable_intermediate
            ORDER BY Id;
            SET IDENTITY_INSERT dbo.bigtable OFF;
        COMMIT
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0
            ROLLBACK
    END CATCH

    ALTER DATABASE DeleteRecord SET RECOVERY FULL;
    GO

November 2016 update

If you plan to store this much data in a single table: don't. I highly recommend that you consider table partitioning (either manually, or with the built-in features if you are running Enterprise edition). This makes dropping old data as easy as truncating a table once a week/month/etc. If you don't have Enterprise (which we don't), you can simply write a script that runs once a month, drops tables older than 2 years, creates the table for the next month, and regenerates a dynamic view that joins all of the partition tables together for easy querying. Obviously "once a month" and "older than 2 years" should be defined by you based on what makes sense for your use case. Deleting directly from a table with tens of billions of rows of data will a) take a HUGE amount of time and b) fill up the transaction log hundreds or thousands of times over.
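
For illustration only, here is a rough sketch of what that monthly script could look like. The table name (dbo.Readings_yyyyMM), its columns, and the 24-month retention window are invented for the example, and the catalog query assumes SQL Server 2005 or later:

    -- Hypothetical monthly rotation: create next month's table, drop the
    -- month that aged out of the retention window, and rebuild a UNION ALL
    -- view over whatever monthly tables currently exist.
    DECLARE @next CHAR(6), @old CHAR(6), @sql NVARCHAR(MAX), @body NVARCHAR(MAX);

    SET @next = REPLACE(CONVERT(CHAR(7), DATEADD(MONTH,   1, GETDATE()), 120), '-', '');
    SET @old  = REPLACE(CONVERT(CHAR(7), DATEADD(MONTH, -24, GETDATE()), 120), '-', '');

    -- 1) Create next month's table if it does not exist yet.
    IF OBJECT_ID('dbo.Readings_' + @next) IS NULL
    BEGIN
        SET @sql = N'CREATE TABLE dbo.Readings_' + @next + N' (
                        Id     BIGINT   IDENTITY(1,1) NOT NULL PRIMARY KEY,
                        ReadAt DATETIME NOT NULL,
                        Value  FLOAT    NOT NULL);';
        EXEC sp_executesql @sql;
    END;

    -- 2) Drop the table that is now older than the retention window.
    IF OBJECT_ID('dbo.Readings_' + @old) IS NOT NULL
    BEGIN
        SET @sql = N'DROP TABLE dbo.Readings_' + @old + N';';
        EXEC sp_executesql @sql;
    END;

    -- 3) Regenerate the view that unions all remaining monthly tables.
    SELECT @body = ISNULL(@body + N' UNION ALL ', N'')
                 + N'SELECT Id, ReadAt, Value FROM dbo.' + QUOTENAME(name)
    FROM sys.tables
    WHERE name LIKE 'Readings[_]______';

    IF OBJECT_ID('dbo.Readings', 'V') IS NOT NULL
        DROP VIEW dbo.Readings;

    IF @body IS NOT NULL
    BEGIN
        SET @sql = N'CREATE VIEW dbo.Readings AS ' + @body;
        EXEC sp_executesql @sql;
    END;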

+23
Mar 04 '14 at 13:48

I don't know of a row limit, but I know of tables with more than 170 million rows. You can speed things up with partitioned tables (2005+) or views that join multiple tables.
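
As a rough illustration of the "views that join multiple tables" idea (all names here are invented), a classic partitioned view with a CHECK constraint on each member table lets the optimizer skip tables that cannot contain the requested range:

    -- Invented example: yearly member tables with CHECK constraints on the
    -- date column, unioned by a partitioned view.
    CREATE TABLE dbo.Orders_2008 (
        OrderId   INT      NOT NULL PRIMARY KEY,
        OrderDate DATETIME NOT NULL
            CHECK (OrderDate >= '20080101' AND OrderDate < '20090101')
    );

    CREATE TABLE dbo.Orders_2009 (
        OrderId   INT      NOT NULL PRIMARY KEY,
        OrderDate DATETIME NOT NULL
            CHECK (OrderDate >= '20090101' AND OrderDate < '20100101')
    );
    GO

    CREATE VIEW dbo.Orders AS
        SELECT OrderId, OrderDate FROM dbo.Orders_2008
        UNION ALL
        SELECT OrderId, OrderDate FROM dbo.Orders_2009;
    GO

    -- With the CHECK constraints in place, a query filtered on OrderDate
    -- only has to touch the member table(s) that can satisfy the predicate.
    SELECT COUNT(*) FROM dbo.Orders WHERE OrderDate >= '20090601';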

+18
Apr 17 '09 at 6:27

I don't know MSSQL specifically, but 36 million rows is not large for an enterprise database. Working with mainframe databases, 100,000 rows sounds to me like a configuration table :-).

While I am not a big fan of some of Microsoft's software, this is not Access we are talking about here: I assume they can handle pretty substantial database sizes with their enterprise DBMS.

I suspect that days may be too fine a granularity to divide by, if the data needs to be divided at all.

+12
Apr 17 '09 at 6:23

We have tables in SQL Server 2005 and 2008 with over 1 billion rows in them (30 million added daily). I can't imagine going down the rat's nest of splitting that out into a new table every day.

It is much cheaper to add the appropriate disk space (which you need anyway) and RAM.

+5
Oct 06 '10 at 21:01

It depends, but I would say that it’s best to keep everything in one table for the sake of simplicity.

100,000 rows per day is actually not that much. (It depends on your server hardware, of course.) I have personally seen MSSQL handle up to 100 million rows in a single table without any problems. As long as you keep your indexes in order, everything should be fine. The key is to have lots of memory so that the indexes don't have to be swapped out to disk.
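
As a sketch of what "keeping your indexes in order" can look like in practice (SQL Server 2005+; table and index names are placeholders), you can check fragmentation and then reorganize or rebuild:

    -- Check index fragmentation for a (hypothetical) table.
    SELECT i.name AS IndexName, s.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(
             DB_ID(), OBJECT_ID('dbo.YourBigTable'), NULL, NULL, 'LIMITED') AS s
    JOIN sys.indexes AS i
      ON i.object_id = s.object_id
     AND i.index_id  = s.index_id
    WHERE s.index_id > 0;

    -- Common rule of thumb: REORGANIZE between ~5% and ~30% fragmentation,
    -- REBUILD above ~30% (the index name below is a placeholder).
    ALTER INDEX IX_YourBigTable_SomeColumn ON dbo.YourBigTable REORGANIZE;
    -- ALTER INDEX IX_YourBigTable_SomeColumn ON dbo.YourBigTable REBUILD;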

On the other hand, it also depends on how you use the data. If you run a lot of queries and the data you need is unlikely to span several days (so you won't have to join tables), it can be faster to split it into several tables. This is often done in applications such as process control, where you might read a value from, say, 50,000 instruments every 10 seconds. In that case speed is extremely important, but simplicity is not.

+3
Apr 22 '09 at 9:02

We overflowed an integer primary key once (at ~2.4 billion rows) on a table. If there is a row limit, you are unlikely to ever hit it at a mere 36 million rows per year.
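
For reference, a signed INT identity tops out at 2,147,483,647, so if there is any chance of getting near that, a simple precaution is to use BIGINT for the key up front; widening the column later on a multi-billion-row table is painful. The table below is just an invented example:

    -- Invented example: define the surrogate key as BIGINT from the start
    -- (max 9,223,372,036,854,775,807) instead of INT (max 2,147,483,647).
    CREATE TABLE dbo.Measurements (
        Id         BIGINT   IDENTITY(1,1) NOT NULL PRIMARY KEY,
        RecordedAt DATETIME NOT NULL,
        Value      FLOAT    NOT NULL
    );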

+3
Oct 06 '10 at 21:11

You can keep filling the table until you run out of disk space. For better performance, you can try upgrading to SQL Server 2005 and then partitioning the table, placing the partitions on different disks (if you have a RAID configuration, that can really help you). Partitioning is only available in the Enterprise edition of SQL Server 2005. You can see a partitioning example at this link: http://technet.microsoft.com/en-us/magazine/cc162478.aspx
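
A minimal sketch of what that built-in partitioning looks like (SQL Server 2005 Enterprise or later); the filegroup, table, and column names are illustrative only:

    -- Illustrative monthly range partitioning spread over two filegroups
    -- (FG1 and FG2 must already exist in the database).
    CREATE PARTITION FUNCTION pfByMonth (DATETIME)
        AS RANGE RIGHT FOR VALUES ('20090101', '20090201', '20090301');

    -- Three boundary values create four partitions, so four filegroups are listed.
    CREATE PARTITION SCHEME psByMonth
        AS PARTITION pfByMonth TO (FG1, FG2, FG1, FG2);

    CREATE TABLE dbo.SensorData (
        Id     BIGINT   IDENTITY(1,1) NOT NULL,
        ReadAt DATETIME NOT NULL,
        Value  FLOAT    NOT NULL
    ) ON psByMonth (ReadAt);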

You can also try creating views over the most heavily used portions of the data, which is another option.

Hope this helps ...

+2
Apr 22 '09 at 8:25

The largest table I have encountered on SQL Server 8 (2000) on Windows 2003 had 799 million rows with 5 columns. But whether or not that is acceptable has to be measured against the SLA and the use case; for example, load 50-100 million records and see if everything still works.
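
One rough way to run such a load test (SQL Server 2005+; the target table and row shape are invented here) is to generate a few million throwaway rows by cross-joining a catalog view, then time the queries you actually care about:

    -- Invented load test: create a throwaway table and fill it with
    -- ~1 million generated rows, then run your real queries against it
    -- and compare the timings to your SLA.
    CREATE TABLE dbo.LoadTest (
        Id         BIGINT   IDENTITY(1,1) NOT NULL PRIMARY KEY,
        RecordedAt DATETIME NOT NULL,
        Value      INT      NOT NULL
    );

    ;WITH n AS (
        SELECT TOP (1000000)
               ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS rn
        FROM sys.all_objects AS a
        CROSS JOIN sys.all_objects AS b
    )
    INSERT INTO dbo.LoadTest (RecordedAt, Value)
    SELECT DATEADD(SECOND, CAST(rn AS INT), '20090101'), rn % 1000
    FROM n;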

0
Aug 10 '18
    -- Find the user table with the most rows (and log10 of that count),
    -- using the legacy sysobjects/sysindexes catalog views.
    SELECT TOP 1
           sysobjects.[name],
           MAX(sysindexes.[rows]) AS TableRows,
           CAST(CASE MAX(sysindexes.[rows])
                    WHEN 0 THEN -0
                    ELSE LOG10(MAX(sysindexes.[rows]))
                END AS NUMERIC(5,2)) AS L10_TableRows
    FROM sysindexes
    INNER JOIN sysobjects ON sysindexes.[id] = sysobjects.[id]
    WHERE sysobjects.xtype = 'U'
    GROUP BY sysobjects.[name]
    ORDER BY MAX(rows) DESC
-1
Jan 18

Partition the table monthly. That is the best way to handle tables with a large daily influx, whether in Oracle or MSSQL.

-3
Sep 07 '12 at 18:10


