I am using the following code:
Using bcp As New SqlBulkCopy(destConnection)
bcp.DestinationTableName = "myOutputTable"
bcp.BatchSize = 10000
bcp.WriteToServer(reader)
End Using
where reader is an IDataReader over a source table containing roughly 200,000 rows.
The input table looks like this:
CREATE TABLE [dbo].[MyTable](
[TagIndex] [SMALLINT] NOT NULL,
[TimeStamp] [DATETIME] NOT NULL,
[RawQuality] [SMALLINT] NOT NULL,
[ValQuality] [SMALLINT] NOT NULL,
[Sigma] [REAL] NULL,
[Corrected] [REAL] NULL,
[Raw] [REAL] NULL,
[Delta] [REAL] NULL,
[Mean] [REAL] NULL,
[ScadaTimestamp] [DATETIME] NOT NULL
) ON [PRIMARY]
The data arrives ordered by TimeStamp.
The output table has the same structure, is empty at the start of the process, and has the following index:
CREATE CLUSTERED INDEX [MyOutputTable_Index] ON [dbo].[MyOutputTable]
(
[TimeStamp] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
GO
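For anyone wanting to reproduce the measurement: the fragmentation percentages quoted below come from the standard DMV, queried along these lines (table and index names are the ones defined above):

```sql
-- avg_fragmentation_in_percent is the share of out-of-order pages
-- at each level of the index.
SELECT index_id,
       index_level,
       avg_fragmentation_in_percent,
       page_count
FROM sys.dm_db_index_physical_stats(
         DB_ID(),                          -- current database
         OBJECT_ID('dbo.MyOutputTable'),   -- the bulk-loaded table
         NULL, NULL,
         'DETAILED');                      -- scan every level, not a sample
```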
If I artificially throttle the process so that only smallish batches (roughly 35,000 rows or fewer) land in the output table at a time, fragmentation stays below 5%, which is acceptable.
But with a larger batch, say 45,000 rows (or letting the whole 200,000-row process run), fragmentation jumps to 99%+.
The cutoff is surprisingly sharp: 39,773 rows give < 5% fragmentation, while 39,774 rows give 99%.
To investigate, I examined the intermediate level of the clustered index with DBCC PAGE. Here is an excerpt:
FileId PageId Row Level ChildFileId ChildPageId TimeStamp (key)
1 18937 0 1 1 18906 2015-10-22 01:37:32.497
1 18937 1 1 1 18686 2015-10-22 01:38:12.497
1 18937 2 1 1 18907 2015-10-22 01:38:47.497
1 18937 3 1 1 18687 2015-10-22 01:39:27.497
1 18937 4 1 1 18908 2015-10-22 01:40:02.497
1 18937 5 1 1 18688 2015-10-22 01:40:42.497
1 18937 6 1 1 18909 2015-10-22 01:41:17.497
1 18937 7 1 1 18689 2015-10-22 01:41:57.497
1 18937 8 1 1 18910 2015-10-22 01:42:32.497
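(For completeness, a dump like the one above can be produced as follows; the file and page numbers match this listing, but substitute your own database name and page ids:)

```sql
DBCC TRACEON (3604);                    -- route DBCC output to the client
DBCC PAGE ('MyDatabase', 1, 18937, 3);  -- file id 1, page id 18937,
                                        -- print option 3 = per-row detail
```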
Notice that the ChildPageId values are not sequential; they alternate between two distinct ranges.
Logically, page 18686 follows 18906, page 18907 follows 18686, and so on, so nearly every logical successor is physically out of order, which is exactly what the 99% fragmentation figure reflects.
Why does SqlBulkCopy start writing pages in this interleaved order once the batch size crosses that threshold, and how can I avoid the fragmentation?