DataObjects.Net offers an intermediate solution:
- Currently, it cannot perform server-side deletion of entities selected by a query. This will be implemented eventually, but for now there is an alternative.
- On the other hand, generalized batching is supported: queries are sent to the server in batches of up to 25 statements, when possible. "When possible" means "the query result is not needed right now", which is almost always the case for creates, updates, and deletes. Since such queries always lead to one (or several, if there is inheritance) seek operations, they are quite cheap. When they are sent in batches, SQL Server can cache plans for the whole batch, not just for individual queries.
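To make the batching idea concrete, here is a language-agnostic sketch (plain Python, not DataObjects.Net code): statements whose results are not needed immediately are buffered and flushed to the server in groups of up to 25, so 60 pending deletes cost 3 round trips instead of 60. The function and parameter names are illustrative, not part of any real API.

```python
# Sketch of client-side statement batching, as described above.
# Assumption: execute_batch() sends one batch to the server in a
# single round trip; here it is a stub that just records batch sizes.

BATCH_SIZE = 25  # the batch limit mentioned above

def execute_in_batches(statements, execute_batch, batch_size=BATCH_SIZE):
    """Buffer statements and flush them in batches of batch_size."""
    batch = []
    for stmt in statements:
        batch.append(stmt)
        if len(batch) == batch_size:
            execute_batch(batch)  # one round trip for the whole batch
            batch = []
    if batch:  # flush the remainder
        execute_batch(batch)

# Usage: 60 DELETE statements become 3 round trips (25 + 25 + 10).
round_trips = []
statements = [f"DELETE FROM Customer WHERE Id = {i}" for i in range(60)]
execute_in_batches(statements, lambda b: round_trips.append(len(b)))
print(round_trips)  # -> [25, 25, 10]
```

This is also why plan caching helps: each batch the server receives has the same shape, so its plan can be reused across batches.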
So, this is very fast, although not ideal:
- Currently, DO4 does not use IN (...) to optimize such deletions.
- It does not yet support asynchronous batch execution. Once that is done (I hope within a month or so), its speed on CUD operations (the create/update/delete subset of CRUD) will be almost the same as that of SqlBulkCopy (roughly 1.5-2 times faster than now).
So, bulk removal in DO currently looks like this:

```csharp
var customersToRemove =
  from customer in Query<Customer>.All
  where customer.IsDeleted
  select customer;
foreach (var customer in customersToRemove)
  customer.Remove();
```
One advantage of this approach: each of these objects can react to its own removal, and Session subscribers will be notified of each removal. So any common logic associated with deletion will work as expected. This is not possible when such an operation is performed on the server.
The code for soft deletion should look like this:
```csharp
var customersToRemove =
  from customer in Query<Customer>.All
  where ... // your filter condition
  select customer;
foreach (var customer in customersToRemove)
  customer.IsRemoved = true;
```
Obviously, this approach is slower than a bulk server-side update. According to our estimates, what we have now is about 5 times slower than true server-side deletion in the worst case (a table [bigint Id, bigint Value] with a clustered primary index and no other indexes); in real cases (more columns, more indexes, more data) it should already provide comparable performance (i.e. be 2-3 times slower). Asynchronous batch execution will improve this further.
Btw, we publish tests for bulk CUD operations with entities for various ORM frameworks on ORMBattle.NET. Note that the tests there do not use server-side bulk updates (in fact, such a test would measure database performance rather than ORM performance); instead, they check whether the ORM can optimize this. In any case, the information there, plus the test code, may be useful if you are evaluating several ORM tools.