One big call versus several smaller TSQL calls

I have a question about ADO.NET/TSQL performance. There are two options in our application:

1) One big database call returning many result sets, then stepping through each result set in code and populating my objects. This results in a single call to the database.

2) A few small database queries.

Option 2 allows much more code reuse, which is an advantage. But I would like to understand the execution cost: are two small round trips twice as slow as one big round trip to the database, or is it only a minor, say 10%, performance loss? We use C# 3.5 and SQL Server 2008 with stored procedures and ADO.NET.
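For concreteness, option 1 corresponds to a single SqlDataReader advanced with NextResult(), while option 2 issues one command per proc. A minimal sketch of both follows; the proc names (dbo.GetCustomersAndOrders, dbo.GetCustomers, dbo.GetOrders) are hypothetical placeholders, not our real procs:

    using System.Data;
    using System.Data.SqlClient;

    class ResultSetDemo
    {
        // Option 1: one round trip; a master proc returns several result sets.
        public static void LoadInOneTrip(string connectionString)
        {
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand("dbo.GetCustomersAndOrders", conn))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                conn.Open();
                using (SqlDataReader reader = cmd.ExecuteReader())
                {
                    while (reader.Read()) { /* populate Customer objects */ }
                    reader.NextResult();            // advance to the next result set
                    while (reader.Read()) { /* populate Order objects */ }
                }
            }
        }

        // Option 2: one round trip per proc; each call is individually reusable.
        public static void LoadInTwoTrips(string connectionString)
        {
            using (var conn = new SqlConnection(connectionString))
            {
                conn.Open();
                foreach (var proc in new[] { "dbo.GetCustomers", "dbo.GetOrders" })
                {
                    using (var cmd = new SqlCommand(proc, conn))
                    {
                        cmd.CommandType = CommandType.StoredProcedure;
                        using (var reader = cmd.ExecuteReader())
                        {
                            while (reader.Read()) { /* populate objects for this proc */ }
                        }
                    }
                }
            }
        }
    }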

+7
tsql
8 answers

I would think it partly depends on when you need the data. For example, if you return ten result sets to one large process and immediately display all ten on screen, then go for it. But if you return ten result sets and the user may only ever page through to see three of them, then sending the others was a waste of server and network resources. And if the user should only see sets seven and eight after making changes to sets five and six, then returning them too early means showing the wrong information.

If you use a separate stored proc for each result set and call them all from one master proc, there is no reason you cannot reuse that code elsewhere, so code reuse is not really a problem in my mind.
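The same reuse argument holds on the client side: if each result set has its own mapping helper, both the one-trip reader and the individual per-proc calls can share it. A small sketch, where the Customer type and column layout are assumed for illustration:

    using System.Collections.Generic;
    using System.Data.SqlClient;

    static class CustomerMapper
    {
        // Shared by the multi-result-set reader (after the relevant NextResult)
        // and by a standalone dbo.GetCustomers call alike.
        public static List<Customer> MapCustomers(SqlDataReader reader)
        {
            var customers = new List<Customer>();
            while (reader.Read())
            {
                customers.Add(new Customer
                {
                    Id = reader.GetInt32(0),     // column positions assumed
                    Name = reader.GetString(1)
                });
            }
            return customers;
        }
    }

    class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }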

+6

It sounds a little obvious, but just send what you need in one call.

For example, we have a "getStuff" stored proc for presentation. The "updateStuff" proc calls the "getStuff" proc, and the client wrapper method for "updateStuff" expects a "Thing" type back. So: one round trip.
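A sketch of that pattern, assuming a hypothetical dbo.updateStuff proc that ends with a SELECT of the modified row, so the write and the refreshed "Thing" share a single round trip:

    using System.Data;
    using System.Data.SqlClient;

    static class ThingClient
    {
        // Hypothetical wrapper: dbo.updateStuff performs the UPDATE and then
        // SELECTs the updated row, so no second trip is needed to re-read it.
        public static Thing UpdateStuff(string connectionString, int id, string newName)
        {
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand("dbo.updateStuff", conn))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.Parameters.AddWithValue("@Id", id);
                cmd.Parameters.AddWithValue("@Name", newName);
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    reader.Read();
                    return new Thing { Id = reader.GetInt32(0), Name = reader.GetString(1) };
                }
            }
        }
    }

    class Thing
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }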

A chatty interface is something you can prevent up front with minimal effort. You can always tune the database or the client code later as needed... but it is hard to eliminate round trips after the fact, no matter how fast your code runs. In the extreme case, what if your web server sits in a different country from your database server?

Edit: It's interesting to note the SQL folks (HLGEM, observer, me) saying "one trip" and the client-side folks saying "reuse code"...

+1

I am struggling with this problem myself. I have no answer yet, but I do have some thoughts.

Considering the answers others have provided, there is still a third option.

In my application it takes about ten or twelve server calls to get the data I need. Some of the fields are varchar(max) and varbinary(max) (images, large documents, video, and audio files). All my calls are synchronous, that is, while the data is being requested, the user on the client side has no choice but to wait. He may only want to read or view data that makes sense only when ALL of it has arrived, not just part of it.

That approach, I believe, is the slower one, so I am developing an alternative based on asynchronous server calls from a DLL that raises events to report progress to the client. The client handles the DLL's events and sets a client-side variable indicating which requests have completed. The client program can then do whatever it needs to prepare the data received in call #1 while the DLL asynchronously goes off to fetch the data for call #2. When the client is ready to process the data from call #2, it checks the status and waits to continue if necessary (I hope the wait will be short or unnecessary). This way, both the server and the client software work more efficiently.
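A minimal sketch of that event-raising approach on .NET 3.5 (the proc name, event, and row buffering are placeholders; note that BeginExecuteReader on this framework version requires "Asynchronous Processing=true" in the connection string):

    using System;
    using System.Data;
    using System.Data.SqlClient;

    class AsyncDataFetcher
    {
        // The client subscribes to this and flips a "request N is done" flag.
        public event EventHandler RequestCompleted;

        public void BeginFetch(string connectionString, string procName)
        {
            var conn = new SqlConnection(connectionString);
            var cmd = new SqlCommand(procName, conn) { CommandType = CommandType.StoredProcedure };
            conn.Open();
            cmd.BeginExecuteReader(ar =>
            {
                using (conn)
                using (cmd)
                using (var reader = cmd.EndExecuteReader(ar))
                {
                    while (reader.Read()) { /* buffer rows for the client */ }
                }
                var handler = RequestCompleted;
                if (handler != null) handler(this, EventArgs.Empty);
            }, null);
        }
    }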

+1

If you're concerned about performance, try benchmarking both options and see which one works best.

Personally, I prefer the second method. It makes life easier for developers, makes code more reusable, and modularizes things, so changes down the road are easier.
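One rough way to benchmark, assuming the hypothetical LoadInOneTrip/LoadInTwoTrips sketches from the question above; do a warm-up pass first and average over many iterations so connection pooling and plan caching settle:

    using System;
    using System.Diagnostics;

    static class Benchmark
    {
        public static void CompareRoundTrips(string connectionString)
        {
            const int iterations = 100;

            var sw = Stopwatch.StartNew();
            for (int i = 0; i < iterations; i++) ResultSetDemo.LoadInOneTrip(connectionString);
            Console.WriteLine("One trip:  {0:F2} ms avg", sw.ElapsedMilliseconds / (double)iterations);

            sw = Stopwatch.StartNew();
            for (int i = 0; i < iterations; i++) ResultSetDemo.LoadInTwoTrips(connectionString);
            Console.WriteLine("Two trips: {0:F2} ms avg", sw.ElapsedMilliseconds / (double)iterations);
        }
    }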

0

I personally like option two for the reason you stated: code reuse.

But keep this in mind: for small queries, the round-trip wait time can exceed the time the query itself takes. For example, at 1 ms of network latency per round trip, ten tiny queries pay at least 10 ms in travel time alone, while one batched call pays about 1 ms. You have to find the right balance.

0

As an ADO.NET developer, your job is to make the code as correct, understandable, and maintainable as possible. That means you must separate your concerns.

It is the job of the SQL Server connection technology to make it fast.

If you implement a correct, understandable, maintainable application that solves the business problem, and it turns out that database access is the main bottleneck preventing the system from performing within acceptable limits, then and only then should you start trying to solve that problem. That may or may not involve consolidating the database queries.

0

Personally, I would go with one big round trip.

This will, of course, be influenced by exactly how reusable the calling code is and how it might be refactored.

But as others have said, it will depend on your exact situation and on how you weigh maintainability against performance.

-1

Do not optimize for performance until you need to. That means you should analyze your expected usage patterns, determine how often this process will typically run, and measure what latency it adds to the user interface. If the user gets feedback from the application within a few (2-3) seconds, and the load this process places on the server is not excessive, then don't worry about it. If, on the other hand, the user waits an unacceptable amount of time for a response (subjective, but definitely measurable), or the server is overloaded, then it is time to start optimizing. And at that point, which optimization techniques make the most sense, or are the most cost-effective, depends on what your analysis of the problem tells you.

So in the meantime, focus on maintainability. In your case, that means code reuse.

-1
