I have a single insert that direct-loads 54,061,487 records, and I am looking for speed. I never need to roll it back.
All tables involved are NOLOGGING.
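In case it matters, this is roughly how I check and, if needed, force that (a sketch, not a full script):

SELECT table_name, logging
  FROM user_tables
 WHERE table_name = 'MEREGIONSNOW';

ALTER TABLE meRegionsNow NOLOGGING;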
Here is one way to do it.
EXECUTE IMMEDIATE 'TRUNCATE TABLE meRegionsNow';
...
INSERT INTO meRegionsNow
     ( carrierId
     , region
     , zip
     , side
     , subPlanTypeId
     , monthIn )
SELECT r.carrierId     AS carrierId
     , r.region        AS region
     , r.zip           AS zip
     , r.side          AS side
     , r.subPlanTypeId AS subPlanTypeId
     , t.monthIn       AS monthIn
  FROM meTimeline t
 INNER JOIN region r
    ON t.monthIn >= r.effective
   AND t.monthIn <= r.expiry;
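As far as I understand, NOLOGGING on the table alone does not make this a direct-path load; the INSERT above is still a conventional insert unless it also carries an APPEND hint. The direct-path variant I have in mind would look like this (same tables and columns, just the hint plus the commit that a direct-path insert requires):

INSERT /*+ APPEND */ INTO meRegionsNow
     ( carrierId, region, zip, side, subPlanTypeId, monthIn )
SELECT r.carrierId, r.region, r.zip, r.side, r.subPlanTypeId, t.monthIn
  FROM meTimeline t
 INNER JOIN region r
    ON t.monthIn >= r.effective
   AND t.monthIn <= r.expiry;
-- a direct-path insert must be committed (or rolled back) before the
-- same session can read or write meRegionsNow again
COMMIT;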
The execution plan for this is what I expected (it uses the right indexes to speed up the join):
Statement Id=5336  Type=  Cost=2.64022111505165E-308  TimeStamp=25-10-11::15::35:08

(1) SELECT STATEMENT  ALL_ROWS
        Est. Rows: 5,667  Cost: 483
  (4) NESTED LOOPS
        Est. Rows: 5,667  Cost: 483
    (2) INDEX FULL SCAN SCHEMA.METL$MONTHIN [Analyzed]
          Est. Rows: 6  Cost: 1
    (5) TABLE ACCESS BY INDEX ROWID SCHEMA.REGION [Analyzed]
          Blocks: 2,826  Est. Rows: 944 of 377,779  Cost: 80  Tablespace: USERS
      (3) INDEX RANGE SCAN SCHEMA.RGN$MULTI3 [Analyzed]
            Est. Rows: 944  Cost: 72
Here is another way to do this:
EXECUTE IMMEDIATE 'TRUNCATE TABLE meRegionsNow';
...
DECLARE
    CURSOR meTimeline_cur IS
        SELECT monthIn
          FROM meTimeline
         ORDER BY monthIn;
BEGIN
    FOR meTimeline_rec IN meTimeline_cur LOOP
        INSERT INTO meRegionsNow
             ( carrierId
             , region
             , zip
             , side
             , subPlanTypeId
             , monthIn )
        SELECT r.carrierId            AS carrierId
             , r.region               AS region
             , r.zip                  AS zip
             , r.side                 AS side
             , r.subPlanTypeId        AS subPlanTypeId
             , meTimeline_rec.monthIn AS monthIn
          FROM region r
         WHERE meTimeline_rec.monthIn >= r.effective
           AND meTimeline_rec.monthIn <= r.expiry;
        COMMIT;
    END LOOP;
END;
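If the APPEND hint helps, my understanding is that it can be applied inside the loop as well; the COMMIT after each insert is then mandatory anyway, because a direct-path insert leaves the table unusable to the session until the transaction ends. A sketch of the changed loop body:

FOR meTimeline_rec IN meTimeline_cur LOOP
    INSERT /*+ APPEND */ INTO meRegionsNow
         ( carrierId, region, zip, side, subPlanTypeId, monthIn )
    SELECT r.carrierId, r.region, r.zip, r.side, r.subPlanTypeId, meTimeline_rec.monthIn
      FROM region r
     WHERE meTimeline_rec.monthIn >= r.effective
       AND meTimeline_rec.monthIn <= r.expiry;
    COMMIT;  -- required before the next direct-path insert into the same table
END LOOP;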
What is the fastest way? I don't think there is a big difference between the cursor-driven loop of inserts and the single direct SQL insert.
Again, I don't care about logging, rollback, or keeping any undo data. I suspect the real reason this takes so long is that the tablespace data file is autoextending too often, using default increments that are too small. I think the problem will go away as soon as I resize the data file of the tablespace that holds meRegionsNow.
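If the autoextend theory is right, pre-sizing the data file before the load should prove it. This is the kind of thing I mean (the tablespace name comes from the plan above; the file path and sizes are placeholders I would take from DBA_DATA_FILES):

SELECT file_name, bytes / 1024 / 1024 AS size_mb, autoextensible, increment_by
  FROM dba_data_files
 WHERE tablespace_name = 'USERS';

-- grow the file up front and use a larger autoextend increment
ALTER DATABASE DATAFILE '/u01/oradata/ORCL/users01.dbf' RESIZE 20G;
ALTER DATABASE DATAFILE '/u01/oradata/ORCL/users01.dbf' AUTOEXTEND ON NEXT 1G;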