Large buffers versus large static buffers, is there any advantage?

Consider the following code.

Is DoSomething1() faster than DoSomething2() over 1000 consecutive executions? I would guess that calling DoSomething1() 1000 times is faster than calling DoSomething2() 1000 times.

Is there a drawback that all of my large buffers are static?

    #define MAX_BUFFER_LENGTH 1024*5

    void DoSomething1()
    {
        static char buf[MAX_BUFFER_LENGTH];
        memset( buf, 0, MAX_BUFFER_LENGTH );
    }

    void DoSomething2()
    {
        char buf[MAX_BUFFER_LENGTH];
        memset( buf, 0, MAX_BUFFER_LENGTH );
    }

Thank you for your time.

+3
c++ performance optimization
6 answers

Disadvantages of static buffers:

  • If you need to be thread safe, then using static buffers is probably not a good idea.
  • Memory will not be freed until the end of your program, so memory consumption will be higher.

Advantages of static buffers:

  • Static buffers have slightly less overhead: you do not have to set up the allocation on the stack every time.
  • With a static buffer, there is less risk of overflowing the stack with a very large allocation.
+8

Stack allocation is a bit more expensive if you enable /GS in the VC++ compiler, which adds a security check against buffer overruns (/GS is on by default). Really, you should profile the two options and see which is faster. Things like the cache locality of static memory versus the stack may also matter.

Here is the non-static version compiled with VC++ /O2:

    _main PROC                                          ; COMDAT
    ; Line 5
        mov   eax, 5124                                 ; 00001404H
        call  __chkstk
        mov   eax, DWORD PTR ___security_cookie
        xor   eax, esp
        mov   DWORD PTR __$ArrayPad$[esp+5124], eax
    ; Line 7
        push  5120                                      ; 00001400H
        lea   eax, DWORD PTR _buf$[esp+5128]
        push  0
        push  eax
        call  _memset
    ; Line 9
        mov   ecx, DWORD PTR __$ArrayPad$[esp+5136]
        movsx eax, BYTE PTR _buf$[esp+5139]
        add   esp, 12                                   ; 0000000cH
        xor   ecx, esp
        call  @__security_check_cookie@4
        add   esp, 5124                                 ; 00001404H
        ret   0
    _main ENDP
    _TEXT ENDS

And here is the static version:

    ; COMDAT _main
    _TEXT SEGMENT
    _main PROC                                          ; COMDAT
    ; Line 7
        push  5120                                      ; 00001400H
        push  0
        push  OFFSET ?buf@?1??main@@9@4PADA
        call  _memset
    ; Line 8
        movsx eax, BYTE PTR ?buf@?1??main@@9@4PADA+3
        add   esp, 12                                   ; 0000000cH
    ; Line 9
        ret   0
    _main ENDP
    _TEXT ENDS
    END
+6

There will hardly be any difference in speed between them. Allocating a buffer on the stack is very fast: all it takes is decrementing the stack pointer. If you allocate a very large buffer on the stack, however, there is a chance you will overflow the stack and cause a segfault/access violation. Conversely, if you have a lot of static buffers, you will significantly increase the working set of your program, although this is somewhat mitigated if you have good locality of reference.

Another significant difference is that stack buffers are thread-safe and reentrant, while static buffers are neither.

+4

You might also consider wrapping your code in a class. For example, something like:

    const int MAX_BUFFER_LENGTH = 1024*5;

    class DoSomethingEngine
    {
    private:
        char *buffer;
    public:
        DoSomethingEngine() { buffer = new char[MAX_BUFFER_LENGTH]; }
        virtual ~DoSomethingEngine() { delete[] buffer; }
        void DoItNow()
        {
            memset(buffer, 0, MAX_BUFFER_LENGTH);
            // ...
        }
    };

This is thread-safe as long as each thread simply allocates its own engine, and it avoids putting a large buffer on the stack. Heap allocation has a little overhead, but it is not significant if you reuse instances of the class many times.

+2

Am I the only one working on multithreaded software? Static buffers are an absolute no-no in that situation, unless you want to take on the cost and complexity of locking and unlocking yourself.

+2

As others have said, stack allocation is very fast, so the speedup from reuse is probably greater for more complex objects like ArrayList or HashTable (now List<> and Dictionary<,> in the generic world), where constructor code runs on every creation, and, if the initial capacity is wrong, the container reallocates each time it overflows, copying the contents from the old memory to the new. I often have List<> objects that I allow to grow to whatever size they need, and I reuse them by calling Clear(), which leaves the allocated memory/capacity intact. However, you should be wary of memory-leak-like behavior if a rogue call occasionally allocates a very large amount that is then held onto, even though it is needed rarely or only once.

0
