[See edit below - the problem may not be what I originally thought]
Hello all,
I am writing a graphics library that handles several filters / effects, including blurring.
I am trying to optimize my code and have come across something I don't understand ...
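For context, the blur itself is a straightforward per-pixel 3x3 box filter, roughly along the lines of the sketch below (names and structure are simplified for illustration, not the exact library code):

```csharp
using System.Drawing;

class BlurSketch
{
    // Simplified sketch of the kind of per-pixel 3x3 box blur the library runs;
    // illustrative only, not the exact library code.
    public static Bitmap BoxBlur3x3(Bitmap source)
    {
        var result = new Bitmap(source.Width, source.Height);

        // Skip the 1-pixel border so the 3x3 window never leaves the image.
        for (int y = 1; y < source.Height - 1; y++)
        {
            for (int x = 1; x < source.Width - 1; x++)
            {
                int r = 0, g = 0, b = 0;

                // Sum the 3x3 neighbourhood around (x, y).
                for (int dy = -1; dy <= 1; dy++)
                {
                    for (int dx = -1; dx <= 1; dx++)
                    {
                        Color c = source.GetPixel(x + dx, y + dy);
                        r += c.R;
                        g += c.G;
                        b += c.B;
                    }
                }

                // Write back the average of the nine samples.
                result.SetPixel(x, y, Color.FromArgb(r / 9, g / 9, b / 9));
            }
        }

        return result;
    }
}
```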
When I run the code without the Performance Wizard attached, a simple 3x3 blur on a small image takes several seconds (far longer than it should). If I break execution during this delay, I get:
No Source Available
System.dll!Microsoft.Win32.SystemEvents.WindowThreadProc() + 0xc2 bytes
The address is consistent between runs / breaks.
If I run the code with the Performance Wizard attached, the blur happens with no noticeable delay.
I see CPU usage sit at around 50% (it's a dual-core CPU and the code isn't multi-threaded at the moment, so it maxes out one core) for the duration of the blur, regardless of how I launch the application.
If I increase the blur complexity enough to get measurable delays, I'd estimate that attaching the profiler improves performance by at least two orders of magnitude.
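For what it's worth, I'm measuring the delay with a plain Stopwatch around the filter call, something like this (illustrative only; the input file name is a placeholder and BoxBlur3x3 refers to the sketch above, not the real library entry point):

```csharp
using System;
using System.Diagnostics;
using System.Drawing;

class TimingSketch
{
    static void Main()
    {
        // Placeholder input image purely for illustration.
        using (var input = new Bitmap("test.png"))
        {
            var sw = Stopwatch.StartNew();
            using (Bitmap blurred = BlurSketch.BoxBlur3x3(input))
            {
                sw.Stop();
                Console.WriteLine("Blur took {0} ms", sw.ElapsedMilliseconds);
            }
        }
    }
}
```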
I tried switching the build configuration from Debug to Release and got the same result.
Can someone explain why my code runs faster with a profiler attached? I feel like I'm making a silly mistake somewhere.
EDIT:
Scenarios / Speed:
- On Windows XP:
  - Debugging in VS: Slow
  - Profiling in VS: Fast
  - Debug / Release builds outside of VS: Fast
Then I switched over to my second machine and got ...
- On Windows 7:
  - Debugging in VS: Fast
  - Profiling in VS: Fast
  - Debug / Release builds outside of VS: Fast
Which seems to indicate that I misidentified the problem. It's not that the profiler makes things faster, it's that debugging in the IDE kills performance ... I didn't suspect debugging as the cause because I started development on the Win7 machine, which had no problems; then I moved to the XP machine and assumed the drop in speed was down to hardware differences. Only when I started profiling did I see how fast it could actually run ...