Performance impact of emptying a file by transferring its data to other files in the same filegroup

We have a logging database currently running on 15,000 RPM disks, and we want to move it to 10,000 RPM disks. We could easily detach the database, move the files, and reattach, but that would cause a brief outage, which we are trying to avoid.

So, we are considering using DBCC SHRINKFILE with EMPTYFILE. We would create data and transaction log files on the 10,000 RPM disk, slightly larger than the existing files on the 15,000 RPM disk, and then run DBCC SHRINKFILE with EMPTYFILE to migrate the data.
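For reference, a minimal sketch of that approach; the database name, file names, paths, and sizes below are hypothetical placeholders, not taken from our actual setup:

```sql
-- 1. Add a new data file on the 10,000 RPM volume, in the SAME filegroup
--    as the file to be emptied, sized slightly larger than the old file.
ALTER DATABASE LoggingDB
ADD FILE (
    NAME = LoggingDB_Data2,
    FILENAME = 'E:\Data10k\LoggingDB_Data2.ndf',
    SIZE = 60GB
);

-- 2. Move all data out of the old file into the other files
--    in the same filegroup.
USE LoggingDB;
DBCC SHRINKFILE (LoggingDB_Data, EMPTYFILE);

-- 3. Once the old file is empty, it can be dropped.
ALTER DATABASE LoggingDB REMOVE FILE LoggingDB_Data;
```

Note that EMPTYFILE only moves data to other files in the same filegroup, which is why the new file must be added there first.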

What impact will this have?

3 answers

I tried this and had mixed results. In some cases the file could not be emptied because it was the primary file in the primary filegroup, but in others it worked completely fine.

Be warned, though: it takes heavy locks in the database while it runs. If you are trying to do this on a live production system that is serving user queries, forget about it. Those queries will be blocked, and the operation will take a long time.
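Since the operation is long-running, you can watch its progress from a separate session. A sketch (DBCC SHRINKFILE appears in `sys.dm_exec_requests` under the internal command name `DbccFilesCompact`):

```sql
-- Monitor a running DBCC SHRINKFILE from another connection.
SELECT session_id,
       command,
       percent_complete,
       estimated_completion_time  -- in milliseconds
FROM sys.dm_exec_requests
WHERE command LIKE 'DbccFilesCompact%';
```

The `percent_complete` estimate is rough for shrink operations, but it at least confirms the operation is making progress rather than sitting blocked.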


Why not use log shipping? Create a new database on the 10,000 RPM drives, then set up log shipping from the 15,000 RPM database to the 10,000 RPM one. When both databases are in sync, stop writes and switch over to the database on the 10,000 RPM drives.
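A stripped-down manual version of that idea, using plain backup/restore instead of the built-in log shipping jobs (all names and paths here are illustrative assumptions):

```sql
-- Full backup of the source database on the 15K disks.
BACKUP DATABASE LoggingDB TO DISK = 'E:\Backups\LoggingDB_full.bak';

-- Restore onto the 10K volume, left NORECOVERY so it can accept log backups.
RESTORE DATABASE LoggingDB_New
FROM DISK = 'E:\Backups\LoggingDB_full.bak'
WITH NORECOVERY,
     MOVE 'LoggingDB_Data' TO 'F:\Data10k\LoggingDB.mdf',
     MOVE 'LoggingDB_Log'  TO 'F:\Data10k\LoggingDB_log.ldf';

-- Repeat until cutover: ship and apply log backups.
BACKUP LOG LoggingDB TO DISK = 'E:\Backups\LoggingDB_log1.trn';
RESTORE LOG LoggingDB_New
FROM DISK = 'E:\Backups\LoggingDB_log1.trn' WITH NORECOVERY;

-- At cutover: take a final tail-log backup, apply it, then bring
-- the copy online.
RESTORE DATABASE LoggingDB_New WITH RECOVERY;
```

The outage window shrinks to just the final tail-log backup, restore, and rename/redirect, instead of a full detach-copy-attach of the data files.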


Is this system attached to a SAN, or is it direct-attached storage? If it's a SAN, have the SAN migrate the LUN to a new RAID group; the server will never know a change occurred.

