This isn't a complete answer; I hope someone else gives you more useful tips on this. But I can offer at least one piece of advice.
Of course, with the serializable isolation level, the biggest problem you will encounter is with long transactions: there is a high chance the engine will roll them back automatically. Under SERIALIZABLE, if any other transaction commits a change that touches data your transaction has read or written, your transaction is aborted. At this isolation level, you should think of your transaction as something you may have to redo several times before it finally commits; that is normal. So if the transaction is big and you are not alone on the database, it can take a very long time, or perhaps never complete at all.
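The usual way to live with this is a retry loop: run the whole transaction, and if it is aborted with a serialization failure, start it over from the beginning. A minimal sketch in Python, where `SerializationFailure` and `do_transaction` are stand-ins for the real driver error (SQLSTATE 40001 in PostgreSQL) and your real transactional work:

```python
class SerializationFailure(Exception):
    """Stand-in for the driver's serialization-failure error
    (e.g. SQLSTATE 40001 under PostgreSQL SERIALIZABLE)."""

def do_transaction(attempt):
    # Hypothetical transactional work: here it is aborted by the
    # engine on the first two attempts and commits on the third.
    if attempt < 2:
        raise SerializationFailure()
    return "committed"

def run_with_retry(max_retries=5):
    """Redo the whole transaction until it commits or we give up."""
    for attempt in range(max_retries):
        try:
            return do_transaction(attempt)
        except SerializationFailure:
            continue  # the engine rolled us back; redo from scratch
    raise RuntimeError("gave up after %d retries" % max_retries)

print(run_with_retry())  # "committed", after two automatic retries
```

The key point is that the retry restarts the *entire* transaction, not just the failed statement, since all of its reads may now be stale.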
If we are talking about updating all the rows of a multi-million-row table in one serializable transaction, you will almost certainly need an application-level lock or semaphore, something beyond the database transaction itself, to tell the other processes that you are doing a heavy task and that they should wait a bit and let you complete it :-)
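As a single-process illustration of that idea, here is a sketch using a plain in-process lock as the "application-level semaphore" (in PostgreSQL you would more likely use an advisory lock such as `pg_advisory_lock`, or a shared lock service, since the cooperating processes are usually separate programs):

```python
import threading
import time

# Application-level semaphore: NOT a database lock. Every process that
# might conflict with the maintenance job agrees to take it first.
maintenance_lock = threading.Lock()

results = []

def heavy_maintenance():
    with maintenance_lock:
        time.sleep(0.05)          # simulate the long serializable batch
        results.append("maintenance done")

def small_writer():
    # Cooperating writer: waits until the maintenance job releases the lock
    with maintenance_lock:
        results.append("writer ran")

t1 = threading.Thread(target=heavy_maintenance)
t2 = threading.Thread(target=small_writer)
t1.start()
time.sleep(0.01)                  # let the maintenance job grab the lock first
t2.start()
t1.join()
t2.join()
print(results)                    # ['maintenance done', 'writer ran']
```

This only works if every conflicting writer cooperates and takes the lock; the database itself does not enforce it.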
But if you can treat this as a maintenance task performed row by row, and it is acceptable in your environment that for a while some rows are in the new state and others are not yet, then commit a small transaction per row (or per small batch) instead of one huge one. You only need a single large serializable transaction if it is genuinely important that all affected rows switch their status at the same instant (that is atomicity). Most likely that is not the case, is it?
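A minimal sketch of that batched approach, using SQLite from the standard library so it is self-contained (the table name, column, and batch size are made up for the example; the same pattern applies to any SQL database):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, status TEXT)")
conn.executemany("INSERT INTO items (status) VALUES (?)", [("old",)] * 10)
conn.commit()

# Instead of one giant transaction over every row, commit per small batch,
# so each transaction is short-lived and cheap to retry if it is aborted.
BATCH = 3
ids = [r[0] for r in
       conn.execute("SELECT id FROM items WHERE status = 'old'")]
for i in range(0, len(ids), BATCH):
    batch = ids[i:i + BATCH]
    conn.executemany("UPDATE items SET status = 'new' WHERE id = ?",
                     [(x,) for x in batch])
    conn.commit()  # each batch is its own short transaction

done = conn.execute(
    "SELECT COUNT(*) FROM items WHERE status = 'new'").fetchone()[0]
print(done)  # 10
```

The trade-off is exactly the one described above: between commits, readers see a mix of old and new rows, which is fine for a maintenance task but not if the switch must be atomic.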
regilero