How to create the fastest possible database in a SQL Server 2012 cluster, sacrificing all durability

I have a test suite that works with a database on a SQL Server 2012 cluster. I want this test suite to run as fast as possible, and I am willing to sacrifice every guarantee of durability and availability for performance. The database is recreated during each test run, so the data does not even need to survive a server reboot.

Changing the recovery model with ALTER DATABASE [dbname] SET RECOVERY SIMPLE does not make a noticeable difference.

DELAYED_DURABILITY looks like a good option, but it is new in SQL Server 2014 and therefore not available to me.

What can I do to get a crazy-fast database on this cluster? I looked for in-memory database options but could not find any relevant settings. The cluster will not allow me to create a database on a local disk, insisting that it be located on a cluster disk.

Update: The application uses advanced SQL Server features, so I am most likely stuck with MS SQL Server. The database itself is quite small because it is for testing (8 MB mdf, 1 MB ldf). The cluster nodes are the fastest servers on the network, so if I could abuse one of those nodes to host the database in memory, that would undoubtedly be fastest. But how?

+7
performance sql sql-server testing durability
4 answers

If for some reason you are stuck on a clustered instance of SQL Server but do not need durability, perhaps you can run the application against tempdb. Tempdb can be hosted on local storage, which avoids the cluster disk overhead.

Also note that data written to tempdb initially stays in the buffer pool, which is RAM, and is only flushed to disk asynchronously as the SQL Server engine finds better uses for that memory.

You can implement this by scripting out all the database objects, using a text editor to replace your database name with "tempdb", and then running the script to create all the objects in tempdb. Also set the default database of the login the application uses to tempdb and/or edit the relevant connection strings. Keep in mind that tempdb is recreated every time the instance restarts, so you lose all data and DDL changes.
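
A minimal sketch of the setup, with a hypothetical table and login name (substitute your own scripted objects and the login your application actually uses):

-- Create the test objects directly in tempdb (hypothetical example table).
USE tempdb;
GO
CREATE TABLE dbo.TestOrders (
    OrderId      INT IDENTITY PRIMARY KEY,
    CustomerName NVARCHAR(100) NOT NULL,
    CreatedAt    DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME()
);
GO
-- Point the application's login at tempdb so unqualified names resolve there
-- (the login name is hypothetical).
ALTER LOGIN [TestAppLogin] WITH DEFAULT_DATABASE = tempdb;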

This would be an excellent fit for "sacrificing every guarantee of durability and availability".

+3

Could something like this work (doc)?

CREATE DATABASE Sales
ON ( NAME = Sales_dat,
     FILENAME = 'R:\saledat.mdf',
     SIZE = 10,
     MAXSIZE = 50,
     FILEGROWTH = 5 );

where R: is a RAM disk.

+2

If you want to create a database on a local disk, you can bypass the cluster's restriction by creating it on a network share.

Create a shared folder backed by a local disk on one of the cluster nodes, then create your database using a UNC path (for example: \\share\DATA\data.mdf). There should be no restrictions on this in SQL Server 2012; in 2008 you had to enable trace flag 1807. A sketch follows.
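
A minimal sketch, assuming hypothetical server, share, and database names:

-- SQL Server 2008 only: allow database files on a network share.
-- DBCC TRACEON(1807, -1);

-- Create the database over a UNC path to a share backed by a local disk.
CREATE DATABASE TestDb
ON ( NAME = TestDb_dat,
     FILENAME = '\\node1\LocalData\TestDb.mdf' )
LOG ON ( NAME = TestDb_log,
         FILENAME = '\\node1\LocalData\TestDb.ldf' );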

+1

Use a technique called Continuous Attach-Detach to quickly create databases on the fly:

I'm sure you know how to detach the database, but as a reminder, do something like the following:

EXEC sp_detach_db @dbname = N'MyDb';

At this point, as far as SQL Server is concerned, the database is gone. Now attach an existing file using something like the following:

EXEC sp_attach_db @dbname = N'MyDb', @filename1 = N'myCurrentPath', @filename2 = N'pathToNewFile';

OK, that was easy, but how do you create the new files to attach as the database? With a very small and extremely simple C# .NET console application that reads the contents of your .mdf and .ldf into memory once and writes them out to the file location of your choice:

Reading and writing to a new data file

How does the whole thing work end to end? Hook the attach step using DDL triggers, as described in

Attach database startup

The key point is that you attach to an existing file that was read into memory once and can be written out many times (using your file writer, e.g. the C# app), so you can attach a fresh copy of your database with almost no overhead.
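
A minimal sketch of one reset cycle, assuming a hypothetical database name and file paths:

-- Detach the scratch database so SQL Server releases its files.
EXEC sp_detach_db @dbname = N'MyDb';

-- (Outside SQL Server: the helper console application overwrites
--  MyDb.mdf / MyDb.ldf with the pristine template bytes it read
--  into memory at startup.)

-- Re-attach the fresh copy; the database is back to its initial state.
EXEC sp_attach_db
    @dbname    = N'MyDb',
    @filename1 = N'S:\Data\MyDb.mdf',
    @filename2 = N'S:\Data\MyDb.ldf';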

+1
