SQL Performance Question

I have a question about SQL performance. I will illustrate my problem with pseudo-code.

I am wondering which will perform better, and by how much? Say 10 items on each page load, in .NET. Is it much faster? A little bit faster? Or is the difference not even noticeable on the SQL side?

    foreach(item in mylist) { CallSQLStoredProc(item.id); }

vs

    int[] ids; // array of ids
    CallSQLStoredProc(ids); // stored procedure returns more than one row for each id
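
For concreteness, a minimal sketch of what the second option could look like in ADO.NET, passing the ids as one comma-separated parameter in a single round trip; the proc name GetItemsByIds and its @ids parameter are placeholders I made up, not an existing API:

    // Hypothetical sketch only: one round trip for the whole batch of ids.
    using System.Data;
    using System.Data.SqlClient;

    static DataTable CallSQLStoredProc(int[] ids, string connectionString)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("GetItemsByIds", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            // The proc is assumed to split this CSV server-side (see the answers below).
            cmd.Parameters.AddWithValue("@ids", string.Join(",", ids));

            var table = new DataTable();
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                table.Load(reader); // may contain more than one row per id
            }
            return table;
        }
    }
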
+4
8 answers

The second option will, of course, be faster, because it is only one round trip to the database as well as only one call to the SP.

+11

Definitely the second, anywhere from about 10x faster and up.

If everything you are doing with the IDs can be done in a set-based operation, you will get several times the performance of calling the SP separately for each ID.

I regularly have procs that look like this:

    create procedure proc ( @ids varchar(max) ) as
    select *
    from users_tbl u
    inner join spiltCSVs(@ids) c on c.id = u.id
    --so on and so forth

That is a set-based operation, as opposed to the procedural approach of using a cursor inside the proc, or using a for loop to iterate over separate procedure calls, one per ID.
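
On SQL Server 2008 and later, a table-valued parameter is a common alternative to a CSV-splitting function like the one above. A minimal C# sketch of the calling side, where the table type dbo.IdList and the proc name GetUsersByIds are my own assumptions, not part of this answer:

    using System.Data;
    using System.Data.SqlClient;

    static SqlCommand BuildSetBasedCall(SqlConnection conn, int[] ids)
    {
        // Fill a one-column table matching a user-defined table type on the server,
        // e.g. CREATE TYPE dbo.IdList AS TABLE (id int) -- assumed, not from the answer.
        var idTable = new DataTable();
        idTable.Columns.Add("id", typeof(int));
        foreach (var id in ids)
            idTable.Rows.Add(id);

        var cmd = new SqlCommand("GetUsersByIds", conn);
        cmd.CommandType = CommandType.StoredProcedure;

        var p = cmd.Parameters.Add("@ids", SqlDbType.Structured);
        p.TypeName = "dbo.IdList"; // the user-defined table type
        p.Value = idTable;
        return cmd;
    }
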

+2

Since this won't fit in a comment on ocdecio's answer...

Just to expand on it... on most systems I've seen, network traffic is the limiting factor in performance (assuming the database configuration and the calling code are not absolutely terrible). Even if your web server and database server are on the same machine, inter-process communication can be the limiting factor if you make frequent calls between them.

+1

On every page load, or only when the page first loads? I would not want to do this on every postback of an ASP.NET page.

To answer your question more directly: if you can get multiple records back by sending multiple IDs at once, do that. It is more efficient and it scales better if you ever need more than 10 items.

0

It all depends on how the proc is coded. If you pass 10 IDs to the second proc and that proc then uses a cursor to fetch the rows, the first approach may actually be faster.

0

How much faster the second option is really depends on too many things; the network overhead may be small compared to the size of your result sets.

There is another alternative (which may or may not be faster, depending on locking behavior): make all the calls asynchronously, so that your page effectively finishes when the longest call completes. Obviously, this requires some extra coding.

In this example there is the overhead of only one SP call; assume the SP returns either a single rowset that the client splits up and processes, or multiple rowsets:

    int[] ids; // array of ids
    CallSQLStoredProc(ids); // stored procedure returns more than one row for each id

In this example, the SP call overhead is n times that of a single call, and the calls are serialized:

    foreach(item in mylist) { CallSQLStoredProc(item.id); }

In the third alternative:

    foreach(item in mylist) { StartSQLStoredProc(item.id); }
    // Continue building the page until you reach a point where you absolutely have to have the data
    wait();

This still has the overhead of n database calls, but the performance gain depends on how well SQL Server and the network can parallelize the workload. In addition, you get SQL Server working while the page is still being built.

The single-SP solution may still win, especially if it can assemble a single result set with a UNION that SQL Server can parallelize internally. But if the result sets have different schemas, so that a UNION won't work, the asynchronous multi-SP solution may beat it (and it also gives you the chance to do other work on the page in the meantime).
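
A minimal sketch of that asynchronous third alternative, written against the Task-based ADO.NET API (an assumption on my part; it needs .NET 4.5 or later, and the same idea works with BeginExecuteReader on older versions). The proc name GetItemById is a placeholder:

    using System.Data;
    using System.Data.SqlClient;
    using System.Linq;
    using System.Threading.Tasks;

    static async Task<DataTable[]> LoadAllAsync(int[] ids, string connectionString)
    {
        // Fire one call per id without waiting for the previous one to finish.
        var tasks = ids.Select(async id =>
        {
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand("GetItemById", conn))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.Parameters.AddWithValue("@id", id);

                await conn.OpenAsync();
                var table = new DataTable();
                using (var reader = await cmd.ExecuteReaderAsync())
                {
                    table.Load(reader);
                }
                return table;
            }
        }).ToList();

        // The page can keep doing other work here and only block when
        // the data is actually needed, as described above.
        return await Task.WhenAll(tasks);
    }
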

0

Iterating over anything always adds a lot of overhead; there are not many situations where iteration improves performance.

My advice has always been to avoid two things in programming:

  • if-then-else statements
  • iteration

You will always have situations where you need both, but the less you use them, the better your application's chances of running faster and more smoothly.

0

If you want scalability in your application, you will want to use caching as much as possible. You should run any common queries only once and save the results in the cache.
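
As a minimal sketch of that idea, assuming System.Runtime.Caching is available (the key scheme and the five-minute lifetime here are arbitrary choices, not a recommendation):

    using System;
    using System.Data;
    using System.Runtime.Caching;

    static DataTable GetItemsCached(int[] ids, Func<int[], DataTable> loadFromDb)
    {
        string key = "items:" + string.Join(",", ids);
        ObjectCache cache = MemoryCache.Default;

        var cached = cache.Get(key) as DataTable;
        if (cached != null)
            return cached;                  // served from memory, no round trip

        DataTable result = loadFromDb(ids); // hits SQL Server only on a cache miss
        cache.Set(key, result, DateTimeOffset.UtcNow.AddMinutes(5));
        return result;
    }
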

As for your query: as long as the proc does not use a cursor over each ID, the single call should be faster whenever network latency has a significant impact on what you are doing. When in doubt, measure. I have been surprised many times when I actually added timing to my functions to see how long different things took.

In .NET, System.Diagnostics.Stopwatch is your friend :).
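
For instance, a small helper along these lines can time the per-id loop against the batched call; the two delegates stand in for your real data-access code:

    using System;
    using System.Diagnostics;

    static void CompareTimings(Action runPerIdLoop, Action runBatchedCall)
    {
        var sw = Stopwatch.StartNew();
        runPerIdLoop();                    // option 1: one SP call per id
        sw.Stop();
        Console.WriteLine("Per-id loop:  {0} ms", sw.ElapsedMilliseconds);

        sw.Restart();
        runBatchedCall();                  // option 2: one batched SP call
        sw.Stop();
        Console.WriteLine("Batched call: {0} ms", sw.ElapsedMilliseconds);
    }
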

0
