Will garbage collection in C be faster than in C++?

I have been wondering for some time how to manage memory in my next project, a DSL written in C/C++.

It could be done in one of three ways:

  • Reference counted C or C++ (a rough sketch of this option follows below).
  • Garbage collected C.
  • C++, copying classes and structs from stack to stack, and managing strings separately with some GC.
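
For the first option, here is a minimal sketch of what manual reference counting might look like in C. All names (rc_obj, rc_retain, rc_release) are invented for illustration, not part of any existing library:

```c
#include <stdlib.h>

/* Hypothetical reference-counted object header for DSL values. */
typedef struct rc_obj {
    int refcount;                            /* starts at 1 when the object is created */
    void (*destroy)(struct rc_obj *self);    /* optional type-specific cleanup */
} rc_obj;

static void rc_retain(rc_obj *o)
{
    if (o) o->refcount++;
}

static void rc_release(rc_obj *o)
{
    if (o && --o->refcount == 0) {
        if (o->destroy) o->destroy(o);
        free(o);                             /* assumes the object came from malloc */
    }
}
```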

The community probably already has a lot of experience with each of these methods. Which one will be faster? What are the pros and cons for each?

Related question: will malloc/free be slower than allocating a large chunk at the beginning of the program and running my own memory manager? .NET seems to do this. But I am confused about why we can't count on the OS to do this job better and faster than we can ourselves.

+6
c++ optimization c garbage-collection memory-management
11 answers

It all depends! This is a fairly open-ended question; it needs an essay to answer it!

Luckily, here are a couple someone prepared earlier:

http://lambda-the-ultimate.org/node/2552

http://www.hpl.hp.com/personal/Hans_Boehm/gc/issues.html

It depends on how large your objects are, how many of them there are, how quickly they are allocated and discarded, and how much time you want to invest in tuning and optimization. If you know the limits of how much memory you need, then for raw speed I don't think you can really beat grabbing all the memory you need from the OS up front and managing it yourself.

The reason allocating memory from the OS can be slow is that the OS is juggling many processes and memory both on disk and in RAM, so to satisfy a request it has to decide whether there is enough, and it may even have to page another process's memory out to disk to give you what you asked for. There is a lot going on. So managing memory yourself (or with a built-in GC heap) can be much faster than going to the OS for every request. Also, the OS usually deals in larger chunks of memory, so it may round up the size of your requests, which means you can waste memory.
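
As a hedged illustration of "grab all the memory up front" (assuming a POSIX system; the 64 MiB size and the helper name are arbitrary), you might reserve one large anonymous mapping once and then carve all later allocations out of it with no further system calls:

```c
#include <stdio.h>
#include <sys/mman.h>   /* POSIX mmap; on Windows you would use VirtualAlloc instead */

#define POOL_SIZE (64u * 1024u * 1024u)      /* 64 MiB, chosen arbitrarily for this sketch */

/* Ask the OS for one large anonymous mapping up front; later allocations are
   carved out of this region by your own allocator, not by further OS calls. */
static void *reserve_pool(void)
{
    void *pool = mmap(NULL, POOL_SIZE, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (pool == MAP_FAILED) {
        perror("mmap");
        return NULL;
    }
    return pool;
}
```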

Do you really have stringent performance requirements? Many DSL applications do not need raw performance. I would suggest going with the simplest code you can; you could spend a lifetime writing memory management systems and worrying about which is best.

+8

Er... it depends on how you write the garbage collection system for your DSL. Neither C nor C++ has a built-in garbage collector, but either can be used to write a very efficient or a very inefficient one. Writing such a thing, by the way, is a non-trivial task.

DSLs are often written in higher-level languages such as Ruby or Python, precisely because the language author can lean on garbage collection and other language facilities. C and C++ are great for writing complete industrial-strength languages, but you certainly need to know what you are doing to use them. Knowing yacc and lex is especially useful, and a good understanding of dynamic memory management is also important, as you say. You could also check out keykit, an open-source DSL written in C, if you still like the idea of a DSL in C/C++.

+4

Why would a garbage collector for C be faster than one for C++? The only garbage collectors available for C are pretty inefficient things, more about plugging memory leaks than improving the quality of your code.

In any case, C++ has the potential to achieve better performance with less code (note that this is only potential: it is also very easy to write C++ code that is far slower than the C equivalent).

Given the current state of both languages, GC does not currently improve the performance of your code. GC can be very efficient in languages designed for it; C/C++ are not among them. ;)

Beyond that, it is simply impossible to say. Languages do not have a speed, so it makes no sense to ask which language is faster. It depends on 1) the specific code, 2) the compiler that compiles it, and 3) the system it runs on (hardware as well as OS).

malloc is a rather slow operation, much slower than the .NET equivalents, so yes, if you are doing a lot of small allocations you may be better off allocating a large memory pool once and then handing out chunks of it yourself.

The reason is that malloc has to find a free chunk of memory, essentially by walking a linked list of all the free memory regions. In .NET, calling new() is basically nothing more than moving the heap pointer forward by as many bytes as the allocation needs.
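
A minimal sketch of that pointer-bump idea in C (purely illustrative: a real GC heap also records object metadata and triggers a collection when the pointer reaches the end of the arena):

```c
#include <stddef.h>
#include <stdint.h>

/* A toy bump allocator: allocation is just "advance a pointer", which is
   roughly what the fast path of a compacting GC's new() does. */
static uint8_t heap[1 << 20];                /* 1 MiB arena, size arbitrary */
static size_t  heap_top = 0;

static void *bump_alloc(size_t size)
{
    size = (size + 7u) & ~(size_t)7u;        /* round up to keep 8-byte alignment */
    if (heap_top + size > sizeof heap)
        return NULL;                         /* a real GC would collect/compact here */
    void *p = &heap[heap_top];
    heap_top += size;
    return p;
}
```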

+4

In most garbage collection implementations, allocation itself may get faster, but you then pay the extra cost of the collection phase, which can be triggered at any point during the execution of your program and leads to sudden (seemingly random) pauses.

As for your second question, it depends on your memory management algorithms. You would be safe to stick with your default malloc library, but there are alternatives that have better performance.

+3

Related question: will malloc/free be slower than allocating a large chunk at the beginning of the program and running my own memory manager? .NET seems to do this. But I am confused about why we can't count on the OS to do this job better and faster than we can ourselves.

The problem with relying on the OS for memory allocation is that it introduces non-deterministic timing. There is no way for the programmer to know how long the OS will take to return a new piece of memory; allocation can be quite expensive if memory has to be paged out to disk.

Consequently, pre-allocating memory can be a good idea, especially when using a garbage collector. It increases memory consumption, but allocation will be fast, because in most cases it is just a pointer increment.

+1

As people have pointed out, a GC allocates faster (because it just gives you the next block on its list), but is slower overall (because it has to compact the heap regularly so that allocation stays that fast).

so go for a compromise solution (which is actually pretty damn good):

You create your own heaps, one for each object size you usually allocate (or 4-byte, 8-byte, 16-byte, 32-byte, etc.). When you need a new piece of memory, you grab the next free block from the corresponding heap. Because these heaps are pre-allocated, all an allocation has to do is grab the next free block. This works better than a standard allocator because you happily waste memory: if you want to allocate 12 bytes, you give away a whole 16-byte block from the 16-byte heap. You keep a bitmap of used vs. free blocks so you can allocate quickly, without wasting much extra memory and without needing compaction. A sketch of one such fixed-size pool follows below.
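
A hedged sketch of a single fixed-size pool with a bitmap, in C. In the scheme described above you would keep one of these per size class; all sizes and names are illustrative, and a production version would replace the linear bitmap scan with something faster (scanning a word at a time, or a free list):

```c
#include <stddef.h>
#include <stdint.h>

#define BLOCK_SIZE 16u                       /* one size class, e.g. the 16-byte heap */
#define NUM_BLOCKS 1024u

static uint8_t  pool[NUM_BLOCKS * BLOCK_SIZE];
static uint32_t used[NUM_BLOCKS / 32];       /* bitmap: 1 bit per block, 1 = in use */

static void *pool_alloc(void)
{
    /* Find the first free block in the bitmap and mark it used. */
    for (size_t i = 0; i < NUM_BLOCKS; i++) {
        if (!(used[i / 32] & (1u << (i % 32)))) {
            used[i / 32] |= 1u << (i % 32);
            return &pool[i * BLOCK_SIZE];
        }
    }
    return NULL;                             /* this size class is exhausted */
}

static void pool_free(void *p)
{
    size_t i = (size_t)((uint8_t *)p - pool) / BLOCK_SIZE;
    used[i / 32] &= ~(1u << (i % 32));       /* just clear the bit; no coalescing needed */
}
```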

In addition, because you have several heaps, highly parallel systems work much better, since you do not need to lock as often (i.e. you have one lock per heap, so there is far less contention).

Try it: we used this to replace the standard heap in a very allocation-intensive application, and performance went up significantly.

BTW, the reason standard allocators are slow is that they try hard not to waste memory. If you allocate 5 bytes, 7 bytes and 32 bytes from the standard heap, it keeps track of those exact boundaries; the next time you allocate, it walks through them looking for enough space to give you what you asked for. That worked well for low-memory systems, but you only have to look at how much memory most applications use today to see why GC systems go the other way and try to allocate as quickly as possible, without worrying much about how much memory is wasted.

+1

There are many variables here, but if your application is written with garbage collection in mind, and if you use the Boehm collector's special features, for example the separate allocation calls for blocks that contain no pointers, then as a general rule your application will have simpler interfaces, will run a little faster, and will require 1.2x to 2x the space of a similar application that uses explicit memory management.
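
For reference, a minimal sketch of what using the Boehm collector from C looks like (assuming the library is installed and linked with -lgc; GC_MALLOC_ATOMIC is the pointer-free allocation call mentioned above):

```c
#include <gc.h>        /* Boehm-Demers-Weiser collector */
#include <stdio.h>
#include <string.h>

int main(void)
{
    GC_INIT();         /* initialize the collector before the first allocation */

    /* A block that may contain pointers: the collector will scan it for references. */
    void **table = GC_MALLOC(100 * sizeof *table);

    /* A pointer-free block (e.g. raw string data): GC_MALLOC_ATOMIC tells the
       collector it never needs to scan this block, the kind of hint mentioned above. */
    char *text = GC_MALLOC_ATOMIC(64);
    strcpy(text, "collected automatically, no free() needed");

    table[0] = text;
    printf("%s\n", (char *)table[0]);
    return 0;          /* unreachable blocks are reclaimed by the collector */
}
```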

For documentation and evidence supporting these claims, see the information on the measured cost of conservative garbage collection on Boehm's website, as well as Ben Zorn's work.

Most importantly, you save a ton of effort and don't have to worry about a significant class of memory management errors.

The C vs C++ issue is orthogonal, but GC will definitely be faster than reference counting, especially when there is no compiler support for reference counting.

+1

Neither C nor C++ will give you garbage collection for free. What they will give you are memory allocation libraries (which provide malloc/free, etc.). There are many online resources on writing garbage collection libraries. A good start is link text

0

Most non-GC'd languages allocate and de-allocate memory as it is needed and no longer needed. GC'd languages usually allocate large chunks of memory up front and free memory only when idle, rather than in the middle of an intensive task, so I am going to say yes, provided the GC kicks in at the right time.

The D programming language is garbage-collected and ABI-compatible with C, and partially ABI-compatible with C++. This page shows some string performance benchmarks in C++ and D.

0

I would suggest that if you are writing a program in which allocating and freeing memory (explicit or GC'ed) is the bottleneck, you should rethink your architecture, design, and implementation.

0

If you do not want to manage memory explicitly, do not use C/C++. There are many reference-counted or garbage-collected languages that will likely work much better for you.

C/C++ were designed for an environment in which the programmer manages their own memory. Trying to retrofit GC or reference counting onto them may help some, but you will find that you either have to compromise the performance of the GC (because it has no compiler hints about where pointers might be), or you will discover new and exciting ways to mess up your reference counts or your GC or whatever.

I know how that sounds, but really you just need to pick a language better suited to the task.

0
