I have a method A() that handles some requests. From its opening brace until the moment it returns, it takes roughly 70 ms. A good 50% of that comes from opening a connection, about 20% from the actual queries, 5-10% from some memory access, and the rest (possibly) from disposing of the connection, command, and reader.
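For context, a breakdown like the one above can be collected with nested stopwatches around each phase. This is a minimal sketch, not the actual code from the question: `SqlConnection`, `SqlCommand`, and the query text are illustrative placeholders.

```csharp
using System;
using System.Data.SqlClient;
using System.Diagnostics;

static void TimePhases(string connectionString)
{
    var total = Stopwatch.StartNew();
    var open  = new Stopwatch();
    var query = new Stopwatch();

    using (var connection = new SqlConnection(connectionString))
    {
        open.Start();
        connection.Open();                 // ~50% of the ~70 ms in the question
        open.Stop();

        using (var command = new SqlCommand("SELECT 1", connection)) // placeholder query
        {
            query.Start();
            using (var reader = command.ExecuteReader())             // ~20%: the real requests
            {
                while (reader.Read()) { /* copy rows: the "memory access" share */ }
            }
            query.Stop();
        }
    }   // disposing connection/command/reader accounts for the remainder

    total.Stop();
    Console.WriteLine($"open={open.ElapsedMilliseconds} ms, " +
                      $"query={query.ElapsedMilliseconds} ms, " +
                      $"total={total.ElapsedMilliseconds} ms");
}
```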
Although the big chunk of time spent on the connection is annoying enough, what worries me more is what happens when I call A() from a method B():
B() {
    var timer = Stopwatch.StartNew();
    A();
    timer.Stop();
}
Another 180 ms of lag is added, and I cannot understand why. I have already tried having A return null, which changes nothing.
The only disk I/O and network access occur in A. I assumed the transfer from disk and network into local memory happens inside A, so calling A from B should not add anything on top of that, but apparently this is not the case? Is it network latency that I am seeing here? If so, why does it also happen when I simply let B return null?
I have no other explanation at the moment ...
- Everything is in one assembly,
- Measuring without an attached debugger doesn’t change anything,
- An immediate `return null;` at the top of A shows 0 ms, while returning null instead of the normal return value changes nothing (which suggests the delay is somehow connected with the work inside A rather than with the returned value).
A is implemented roughly as follows, like any simple database access method. The snippet is contrived, but it shows the main idea and flow:
A() {
    var totalTimer = Stopwatch.StartNew();
    var stuff = new Stuffholder();
    using (connection) {
        using (command) {
            using (reader) {
                // read the results into stuff
            }
        }
    }
    totalTimer.Stop();
    return stuff;
}
Any ideas?
c# sql-server latency networking
Apeiron