Multiple MDF files vs. a single database (SQL Server)

I am working on a Web 2.0 project that will add thousands of rows per day per user. To handle that data volume, I designed the following scheme: one .mdf/.ldf file pair per user as a "minor" database, plus one "major" database that stores and resolves the addresses of user accounts and their database files.

I have worked on this design for several months and can now handle it comfortably. I want to know whether it is worth splitting the data into independent pieces at all. In your opinion, which performs better: opening connections to many small .mdf files, or querying one huge database?

Later, I plan to distribute the .mdf repository across several computers.

Everything is handled with C# and LINQ (.NET 4).

Update:

I built this design and it works well. For example, opening each small .mdf file takes about 1 second, and a query against it then returns almost instantly; the connection adds a fixed cost per file. In a single database, by contrast, finding, say, 50 rows out of 200,000 takes about 4-5 seconds on my system, even with a simple SELECT on the primary key.

As another example: to render a page I want to fetch one row out of 500,000, select 50 comments out of 2 million rows, count the votes for each comment, the views per day, week, month, and in total, the likes, and the replies to each comment, and pull more data from 2-3 other tables. These queries are heavy and take much longer than they would against a small subordinate database.
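For reference, a page of comments with per-comment counts in a single database might look like the following sketch. This targets SQL Server 2008 (the .NET 4 era), so it pages with ROW_NUMBER rather than OFFSET/FETCH; all table and column names (Comments, Votes, PageViews, etc.) are hypothetical, not from the original post.

```sql
-- Hypothetical schema: Comments(CommentId PK, PageId, PostedAt, Body),
-- Votes(CommentId, Value), PageViews(CommentId, ViewedAt).
-- @PageId and @Skip would be parameters supplied by the application.
;WITH Paged AS (
    SELECT c.CommentId, c.Body,
           ROW_NUMBER() OVER (ORDER BY c.PostedAt DESC) AS rn
    FROM dbo.Comments AS c
    WHERE c.PageId = @PageId
)
SELECT p.CommentId,
       p.Body,
       (SELECT COUNT(*) FROM dbo.Votes AS v
        WHERE v.CommentId = p.CommentId)          AS VoteCount,
       (SELECT COUNT(*) FROM dbo.PageViews AS pv
        WHERE pv.CommentId = p.CommentId
          AND pv.ViewedAt >= DATEADD(DAY, -7, GETDATE())) AS ViewsLastWeek
FROM Paged AS p
WHERE p.rn BETWEEN @Skip + 1 AND @Skip + 50   -- one page of 50 comments
ORDER BY p.rn;
```

With supporting indexes, e.g. on Comments(PageId, PostedAt) and on the CommentId columns of the counting tables, a query shaped like this stays fast even at millions of rows; without them it degrades to scans, which would explain the timings described above.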

I believe that with a good design, these processes should run smoothly.

The only drawback is the overhead of the small subordinate databases: each SQL Server database file takes up at least about 3 MB of physical space.

3 answers

There is no reason to split what can and should exist as a single database into several independent parts.

There is already a mechanism for dividing one logical database across several files: the files and filegroups architecture, as well as partitioning for large tables (a few thousand rows per day does not really qualify as a large table).
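To make the filegroup suggestion concrete, here is a minimal sketch of adding a second filegroup and file to an existing database and placing a table on it. The database name, file path, sizes, and table are all illustrative, not from the original post.

```sql
-- Add a filegroup to an existing (hypothetical) database.
ALTER DATABASE MyAppDb ADD FILEGROUP UserData;

-- Add a physical data file to that filegroup.
ALTER DATABASE MyAppDb
ADD FILE (
    NAME = 'MyAppDb_UserData1',
    FILENAME = 'D:\Data\MyAppDb_UserData1.ndf',  -- illustrative path
    SIZE = 100MB,
    FILEGROWTH = 50MB
) TO FILEGROUP UserData;

-- New tables can then be created on that filegroup,
-- keeping one logical database while spreading I/O across files.
CREATE TABLE dbo.Comments (
    CommentId INT IDENTITY PRIMARY KEY,
    PageId    INT NOT NULL,
    Body      NVARCHAR(MAX) NULL
) ON UserData;
```

This gives the I/O-distribution benefit the poster is after without the per-connection and per-file overhead of thousands of separate .mdf databases.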


Thousands of rows per day is peanuts for SQL Server.

First, I upvoted Alex K. Filegroups will most likely take you where you want to be. Partitioned tables may be overkill, and they are available only in Enterprise edition, not the lighter editions.

What I would add:

http://www.google.com/#q=glenn+berry+dmv&bav=on.2,or.r_gc.r_pw.&fp=73d2ceaabb6b01bf&hl=en

You need to tune your indexes. On a good/better/best scale, Glenn Berry's DMV queries are "better". Those queries will help you solve most problems. The "best" category is the painstaking work of going through each stored procedure, examining its execution plan, and trying different things. That is what a good DBA provides.
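As a taste of what the linked DMV scripts do, the following sketch queries SQL Server's built-in "missing index" DMVs (available since SQL Server 2005) to list the index suggestions the optimizer has accumulated since the last restart. It is a simplified example in the spirit of Glenn Berry's scripts, not a copy of them.

```sql
-- List missing-index suggestions, roughly ordered by likely benefit.
-- These sys.dm_db_missing_index_* views are standard SQL Server DMVs.
SELECT  migs.avg_user_impact,        -- estimated % improvement
        migs.user_seeks,             -- how often the index would have helped
        mid.statement          AS table_name,
        mid.equality_columns,
        mid.inequality_columns,
        mid.included_columns
FROM    sys.dm_db_missing_index_group_stats AS migs
JOIN    sys.dm_db_missing_index_groups      AS mig
        ON migs.group_handle = mig.index_group_handle
JOIN    sys.dm_db_missing_index_details     AS mid
        ON mig.index_handle = mid.index_handle
ORDER BY migs.avg_user_impact * migs.user_seeks DESC;
```

Treat the output as hints rather than orders: the DMVs suggest overlapping indexes and never suggest dropping anything, so a human still has to consolidate them.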

Here are some "basics" of file configuration. Pay particular attention to the tempdb setup. http://technet.microsoft.com/en-us/library/cc966534.aspx
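For illustration, a minimal tempdb adjustment along the lines of that article might look like the sketch below: presize the existing data file and add another one (a common guideline is one tempdb data file per CPU core, up to about 8). The path and sizes are illustrative; only the logical name tempdev is a SQL Server default.

```sql
-- Presize the default tempdb data file to avoid constant autogrowth.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 512MB, FILEGROWTH = 128MB);

-- Add a second, equally sized data file to reduce allocation contention.
ALTER DATABASE tempdb
ADD FILE (
    NAME = tempdev2,
    FILENAME = 'T:\TempDb\tempdev2.ndf',  -- illustrative path
    SIZE = 512MB,
    FILEGROWTH = 128MB
);
```

Changes to tempdb file sizes take effect after the next service restart, since tempdb is recreated on startup.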


It is difficult to manage many small MDF files; you should go with a single SQL Server database. Even the free SQL Server Express edition allows up to 10 GB of data per database, so one database keeps things simple.

