I have some huge tables on a SQL 2005 production database that require a schema update. This is basically adding columns with default values and some column type changes that need a simple conversion. All of this can be done with a single "SELECT INTO", producing a table with the new schema.
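To illustrate, here is a minimal sketch of the kind of statement we run (the table and column names are invented for the example):

    -- Copy into a new table, adding a column with a default value
    -- and widening a column's type (INT -> BIGINT).
    SELECT
        Id,
        CAST(Amount AS BIGINT) AS Amount,   -- simple type conversion
        CAST(0 AS INT)         AS NewFlag   -- new column, filled with its default
    INTO dbo.BigTable_New
    FROM dbo.BigTable;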
Our tests so far show that even this simple operation, performed entirely inside the server (no data is fetched by or sent to a client), can take hours, if not days, on a table with many millions of rows.
Is there a better update strategy for such tables?
edit 1: We are still experimenting and have no final numbers yet. One of my conversions to a new table involves merging every five rows into one, and some code must run for each merge. The best throughput we have achieved so far would still take at least several days to convert a 30M-row table.
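To give an idea of the shape of this conversion, here is a simplified, hypothetical sketch of a batched version (the real merge logic is more involved; this assumes a contiguous integer Id and a batch size that is a multiple of 5, so no group of five straddles a batch boundary):

    DECLARE @LastId INT, @BatchSize INT, @MaxId INT;
    SET @LastId = 0;
    SET @BatchSize = 50000;
    SELECT @MaxId = MAX(Id) FROM dbo.BigTable;

    -- Process the source table in chunks to keep each transaction
    -- (and the log growth) small.
    WHILE @LastId < @MaxId
    BEGIN
        INSERT INTO dbo.BigTable_New (GroupId, MergedValue)
        SELECT
            (Id - 1) / 5 AS GroupId,     -- every five source rows collapse into one
            SUM(Value)   AS MergedValue  -- stand-in for the real per-group code
        FROM dbo.BigTable
        WHERE Id > @LastId AND Id <= @LastId + @BatchSize
        GROUP BY (Id - 1) / 5;

        SET @LastId = @LastId + @BatchSize;
    END;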
Would using SQLCLR in this case (running the conversion code inside the server) give me a significant speedup?