SQL Server Database Practical Limits

I am setting up a database that I expect will be quite large, used for calculations and data storage. It will be one table with 10 fields, containing one primary key and two foreign keys that reference the same table. I expect about a billion rows to be added per day.

Each row should be quite small, and the workload will mostly be inserts. With each insert, I will also need to do a simple update on one or two fields of the related record. All queries should be relatively simple.
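
For concreteness, a minimal sketch of the per-entry work described above; the table and column names here are hypothetical, not an actual schema:

    -- One insert plus a small update of the related row, in one transaction.
    -- (@ParentId and @Value are assumed to be declared parameters, e.g. of a
    -- stored procedure.)
    BEGIN TRANSACTION;

    INSERT INTO dbo.Entries (ParentId, Value, CreatedAt)
    VALUES (@ParentId, @Value, SYSUTCDATETIME());

    UPDATE dbo.Entries
    SET    ChildCount  = ChildCount + 1,
           LastChildAt = SYSUTCDATETIME()
    WHERE  EntryId = @ParentId;

    COMMIT TRANSACTION;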

At what size will I start to encounter performance issues with SQL Server? I saw mention of VLDB systems, but also heard that they can be a real pain. Is there a threshold where I should start looking into this? Is there a better DB than SQL Server that is designed for this kind of thing?

+7
sql-server database-design
4 answers

When talking about transaction rates of more than 10k tps, you should not be asking for advice on forums... That is close to the performance of TPC-C benchmarks run on 32- and 64-way systems that cost millions to tune.

How big before you run into problems?

With a good data model and schema design, a properly configured and capacity-planned server with the right I/O throughput will not run into problems at 1 billion entries per day. The most recent published SQL Server benchmarks are around 1.2 million transactions per minute, roughly 16 thousand transactions per second, on a system priced at about $6 million in 2005 (a 64-way Superdome). To reach the planned load of over 10k tps you will not need a Superdome, but you will need a fairly beefy system (probably at least 16 cores) and, above all, a very good I/O subsystem. In back-of-the-envelope capacity planning, one usually allows about 1k transactions/sec per HBA and 4 CPU cores to feed each HBA. You will also need a lot of database clients (middle-tier application servers) just to feed 1 billion records per day into the database. I am not claiming to have done your capacity planning here; I only want to give you a sense of the magnitudes involved. This is a multi-million-dollar project, and something like that is not designed by asking for advice on forums.
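
To make the arithmetic explicit, applying the rough per-HBA figures from this answer (the exact numbers are only ballpark assumptions, not a capacity plan):

    -- 1,000,000,000 rows/day ÷ 86,400 s/day ≈ 11,600 inserts/s sustained
    -- at ~1,000 tps per HBA             → ~12 HBAs
    -- at ~4 CPU cores to feed each HBA  → ~48 cores at full load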

+22

Unless you are talking about something on the scale of the Google index, enterprise databases such as SQL Server or Oracle will do fine.

James Devlin over at Coding the Wheel summed it up nicely (although this is more a comparison of a free database such as MySQL with Oracle/SQL Server):

Nowadays I like to think of SQL Server and Oracle as the Death Stars of the relational database universe. Extremely powerful. Monolithic. Brilliant. Complex almost beyond the ability of a single human mind to understand. And a monumental waste of money, except in those rare situations when you actually need to destroy a planet.

In terms of performance, it all depends on your indexing strategy. Inserts are really the bottleneck here, since rows need to be indexed as they come in: the more indexes you have, the slower your inserts will be.
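
As a rough illustration of an insert-friendly design (all names here are hypothetical): keep the clustered index on an ever-increasing key so inserts append to the end of the B-tree, and keep secondary indexes to a minimum.

    -- Minimal sketch: a narrow table whose clustered key increases
    -- monotonically, so inserts append instead of causing page splits.
    CREATE TABLE dbo.Entries
    (
        EntryId   bigint IDENTITY(1,1) NOT NULL,
        ParentId  bigint NULL,
        Value     decimal(18,4) NOT NULL,
        CreatedAt datetime2 NOT NULL DEFAULT SYSUTCDATETIME(),
        CONSTRAINT PK_Entries PRIMARY KEY CLUSTERED (EntryId)
    );

    -- Every secondary index adds write cost to every insert; add them
    -- only if the read queries actually need them.
    CREATE INDEX IX_Entries_ParentId ON dbo.Entries (ParentId);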

In the case of something like the Google index, read up on "BigTable", which describes how Google designed its system to use clusters of servers to serve queries across huge amounts of data in milliseconds.

+11

You can do this, but given your hardware costs and plans, talk to Microsoft to find out exactly what you need. It will be a small fraction of your HW cost.

That said, Paul Nielsen blogged about 35 thousand TPS (3 billion rows per day) two years ago. The comments are worth reading too, and echo some of what Remus said.

+5

The size of the database by itself does not cause performance problems. The practical problems with large databases come from operational/maintenance concerns.

For example:

  • Defragmenting and rebuilding indexes takes too long.
  • Backups take too long or take up too much space.
  • Database restores cannot be performed fast enough in the event of a failure.
  • Future changes to the database tables take too long.

I would recommend designing/building in some kind of partitioning from the start. This could be SQL Server partitioning, application-level partitioning (for example, one table per month), or archiving (for example, into a separate database).
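
A minimal sketch of what SQL Server-level monthly partitioning could look like (all names and boundary dates here are hypothetical):

    -- Partition rows by month so old months can be switched out
    -- without touching the hot data.
    CREATE PARTITION FUNCTION pfMonthly (datetime2)
    AS RANGE RIGHT FOR VALUES ('2024-01-01', '2024-02-01', '2024-03-01');

    CREATE PARTITION SCHEME psMonthly
    AS PARTITION pfMonthly ALL TO ([PRIMARY]);

    CREATE TABLE dbo.Entries
    (
        EntryId   bigint IDENTITY(1,1) NOT NULL,
        CreatedAt datetime2 NOT NULL,
        -- ...remaining fields would go here...
        CONSTRAINT PK_Entries PRIMARY KEY CLUSTERED (CreatedAt, EntryId)
    ) ON psMonthly (CreatedAt);

    -- An old month can then be moved to an archive table almost instantly
    -- (the archive table must have an aligned structure and partition):
    -- ALTER TABLE dbo.Entries SWITCH PARTITION 1
    --     TO dbo.EntriesArchive PARTITION 1;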

I believe these problems occur with any database product.

Also, be sure to consider the size of the transaction log files.
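
For example, log usage can be watched with the standard DBCC command below (the command itself is real; what thresholds you act on is up to you):

    -- Reports current size and percent-used of every database's
    -- transaction log.
    DBCC SQLPERF (LOGSPACE);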

+4
