I have been using LINQ to SQL for quite some time and it works great. However, lately I have been experimenting with using it to pull out really large amounts of data, and I am running into some problems. (Of course, I understand that L2S may not be the best tool for this particular kind of processing, but that's why I'm experimenting - to find its limits.)
Here is some sample code:

```csharp
var buf = new StringBuilder();
var dc = new DataContext(AppSettings.ConnectionString);
var records =
    from a in dc.GetTable<MyReallyBigTable>()
    where a.State == "OH"
    select a;
var i = 0;
foreach (var record in records)
{
    buf.AppendLine(record.ID.ToString());
    i += 1;
    if (i > 3)
    {
        break;
    }
}
```
The query executes as soon as I start iterating over the data. When I step through the code, I enter the loop immediately, which is what I was hoping for - it means that L2S appears to be using a DataReader behind the scenes rather than pulling all the data down first. However, as soon as I hit the `break`, the query keeps running and pulls down all the remaining rows. Here are my questions for the SO community:
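For comparison, the raw ADO.NET pattern I have in mind looks roughly like this (a sketch - the table and column names are placeholders, and I'm assuming `ID` is an `int`); cancelling the command before leaving the loop is what stops the server from streaming the rest of the result set:

```csharp
using (var conn = new SqlConnection(AppSettings.ConnectionString))
using (var cmd = new SqlCommand(
    "SELECT ID FROM MyReallyBigTable WHERE State = @state", conn))
{
    cmd.Parameters.AddWithValue("@state", "OH");
    conn.Open();
    using (var reader = cmd.ExecuteReader())
    {
        var i = 0;
        while (reader.Read())
        {
            Console.WriteLine(reader.GetInt32(0));
            if (++i > 3)
            {
                // Without Cancel, disposing the reader can still drain
                // the remaining rows from the wire.
                cmd.Cancel();
                break;
            }
        }
    }
}
```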
1.) Is there a way to stop a really large LINQ to SQL query in the middle of execution, the way you can with a DataReader?
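(To be clear, when the cutoff is known before the query runs, I can of course push it into the query itself - `Take` gets translated into a `TOP` clause on the server, so that case isn't the problem. My question is about stopping a query that is already streaming.)

```csharp
// Take(4) is translated into SELECT TOP (4) ... by LINQ to SQL,
// so the server never sends more than four rows in the first place.
var firstFour =
    (from a in dc.GetTable<MyReallyBigTable>()
     where a.State == "OH"
     select a).Take(4);
```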
2.) If you are executing a large LINQ to SQL query, is there a way to keep the DataContext from accumulating change-tracking information for every returned object? Basically, instead of filling up memory, can I run a large query where each object is short-lived, the way you can with a DataReader?
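The closest thing I have come across so far is turning off object tracking on the DataContext, though I haven't verified that this alone keeps memory flat over a very large result set:

```csharp
var dc = new DataContext(AppSettings.ConnectionString);
// Makes the context effectively read-only: no change-tracking
// or identity-map entries are kept for materialized objects.
dc.ObjectTrackingEnabled = false;
```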
I am fine if this functionality isn't built into the DataContext and requires extending it with some tweaking. I just want to be able to use the simplicity and power of LINQ for large nightly processing jobs, rather than falling back to T-SQL for everything.