C++: multithreading and reference counting

I currently have some reference-counted classes implemented using the following:

 class RefCounted {
 public:
     void IncRef() { ++refCnt; }
     void DecRef() { if (!--refCnt) delete this; }
 protected:
     RefCounted() : refCnt(0) {}
 private:
     unsigned refCnt;
     // not implemented
     RefCounted(RefCounted&);
     RefCounted& operator=(RefCounted&);
 };

I also have a smart pointer class that handles the reference counting, although it is not used uniformly (for example, in one or two bits of performance-critical code where I wanted to minimize the number of IncRef and DecRef calls).

 template<class T> class RefCountedPtr {
 public:
     RefCountedPtr(T *p) : p(p) { if (p) p->IncRef(); }
     ~RefCountedPtr() { if (p) p->DecRef(); }
     RefCountedPtr<T>& operator=(T *newP) {
         if (newP) newP->IncRef();
         if (p) p->DecRef();
         p = newP;
         return *this;
     }
     RefCountedPtr<T>& operator=(RefCountedPtr<T> &newP) {
         if (newP.p) newP.p->IncRef();
         if (p) p->DecRef();
         p = newP.p;
         return *this;
     }
     T& operator*() { return *p; }
     T* operator->() { return p; }
     // comparison operators etc. and some const versions of the above...
 private:
     T *p;
 };

For general use of the classes themselves I plan to use a read/write lock system, but I really do not want to have to take a write lock for every single IncRef and DecRef call.

I have also just thought of a scenario in which a pointer could become invalid just before the IncRef call, for example:

 class Texture : public RefCounted {
 public:
     // ...various operations...
 private:
     Texture(const std::string &file) {
         // ...load texture from file...
         TexPool.insert(this);
     }
     virtual ~Texture() { TexPool.erase(this); }
     friend Texture *CreateTexture(const std::string &file);
 };

 Texture *CreateTexture(const std::string &file) {
     TexPoolIterator i = TexPool.find(file);
     if (i != TexPool.end()) return *i;
     else return new Texture(file);
 }
 Thread A                            Thread B
 t = CreateTexture("ball.png");
 t->IncRef();
 ...use t...                         t2 = CreateTexture("ball.png"); // returns *t
                                     ...thread suspended...
 t->DecRef(); // deletes t
                                     t2->IncRef(); // ERROR

So I think I need to rethink the reference counting model entirely. The reason I designed it so that the reference is added after the object is returned was to support things like the following:

 MyObj->GetSomething()->GetSomethingElse()->DoSomething(); 

instead of:

 SomeObject *a = MyObj->GetSomething();
 AnotherObject *b = a->GetSomethingElse();
 b->DoSomething();
 b->DecRef();
 a->DecRef();

Is there a clean way to do fast reference counting in C++ in a multithreaded environment?

+6
c++ multithreading reference-counting
10 answers

Make the reference counter atomic and you won't need a lock. On Windows, ::InterlockedIncrement and ::InterlockedDecrement can be used. In C++0x you have std::atomic<>.
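As a rough sketch (assuming the C++0x/C++11 <atomic> header; on Windows the same shape works with ::InterlockedIncrement/::InterlockedDecrement on a volatile LONG), the question's base class might become:

 // Sketch only: the question's RefCounted rewritten with an atomic counter,
 // so IncRef/DecRef need no external lock.
 #include <atomic>

 class RefCounted {
 public:
     void IncRef() { refCnt.fetch_add(1, std::memory_order_relaxed); }
     void DecRef() {
         // fetch_sub returns the previous value; if it was 1 we held the
         // last reference and may delete the object.
         if (refCnt.fetch_sub(1, std::memory_order_acq_rel) == 1)
             delete this;
     }
 protected:
     RefCounted() : refCnt(0) {}
     virtual ~RefCounted() {}
 private:
     std::atomic<unsigned> refCnt;
     // not implemented
     RefCounted(const RefCounted&);
     RefCounted& operator=(const RefCounted&);
 };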

+16

Unless you know this is a bottleneck, I would just use boost::shared_ptr.

It is very fast, though there is a bit of extra overhead for the separately allocated reference count. On the other hand, it has many advantages:

  • It is portable
  • It is correct
  • You do not need to spend your mental cycles on it, leaving you time to actually ship your product
  • It is fast
  • It is an industry standard, and other programmers will understand it immediately
  • It gets you using boost, which you should be doing anyway if you are not already

Also note that you probably do not need a read/write lock for the refcounted object. Contention is minimal, and the extra overhead would completely swamp any benefit. shared_ptr's count is maintained with an atomic increment instruction on the chip, which is much faster than a regular mutex, which in turn is much faster than a read/write lock.
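For illustration only, here is roughly what the non-intrusive approach could look like; Texture and CreateTexture are the question's names, everything else is assumed:

 // Sketch only: non-intrusive reference counting with boost::shared_ptr.
 #include <boost/shared_ptr.hpp>
 #include <string>

 class Texture {
 public:
     explicit Texture(const std::string &file) { /* ...load from file... */ }
 };

 typedef boost::shared_ptr<Texture> TexturePtr;

 TexturePtr CreateTexture(const std::string &file)
 {
     return TexturePtr(new Texture(file));      // count starts at 1
 }

 void UseIt()
 {
     TexturePtr t = CreateTexture("ball.png");  // no manual IncRef
     TexturePtr t2 = t;                         // copy bumps the count atomically
 }   // both copies go out of scope, Texture is deleted exactly once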

+10

If you do not want to use boost or C++0x, but you still want a lock-free arrangement, you can do it by including the correct platform-specific atomic increment/decrement assembly routine in your code. For example, here is the AtomicCounter class that I use for reference counting; it works on most OSes:

https://public.msli.com/lcs/muscle/html/AtomicCounter_8h_source.html

Yes, it's an ugly mess of #ifdefs, but it works.
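If you do roll your own, the shape is roughly the following sketch (not the linked AtomicCounter; it assumes only Windows' Interlocked functions and GCC's __sync builtins, whereas the real class covers more platforms):

 // Sketch of a home-grown atomic counter hidden behind #ifdefs.
 #if defined(_WIN32)
   #include <windows.h>
 #endif

 class AtomicCounter {
 public:
     AtomicCounter() : value(0) {}
     // Both return the new value after the operation.
     long Increment() {
 #if defined(_WIN32)
         return ::InterlockedIncrement(&value);
 #else
         return __sync_add_and_fetch(&value, 1);   // GCC builtin
 #endif
     }
     long Decrement() {
 #if defined(_WIN32)
         return ::InterlockedDecrement(&value);
 #else
         return __sync_sub_and_fetch(&value, 1);   // GCC builtin
 #endif
     }
 private:
 #if defined(_WIN32)
     volatile LONG value;
 #else
     volatile long value;
 #endif
 };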

+6

OpenSceneGraph (osg) has such a facility.

You derive your classes from osg::Referenced, and you do not have to worry about the destructor, even in multithreaded mode.

You just create your objects like:

 osg::ref_ptr<MyClass> m = new MyClass(); 

instead of:

 MyClass* m = new MyClass(); 
+2

Do you want thread-safe or atomically thread-safe? boost::shared_ptr is only thread-safe: you still need to "own" a shared_ptr instance to copy it safely.
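To make the distinction concrete, here is a small hypothetical illustration (not code from the linked page); it reflects boost's documented guarantee that distinct shared_ptr instances may be used from different threads, but the same instance must not be accessed concurrently if any access is a write:

 #include <boost/shared_ptr.hpp>

 boost::shared_ptr<int> g(new int(42));

 void ThreadA() {
     boost::shared_ptr<int> mine = g;   // NOT safe if another thread writes g
 }                                      // concurrently; you must "own"/lock g
 void ThreadB() {
     g.reset(new int(7));               // concurrent write to the same instance
 }
 // Safe usage: each thread works with its OWN copy of a shared_ptr; only the
 // reference count shared between those copies is updated atomically.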

There is some experimental stuff I did on atomically thread-safe reference counting here http://atomic-ptr-plus.sourceforge.net/ , which can give you an idea of what is involved.

+2

boost::shared_ptr and Poco::SharedPtr both wrap this idiom in a non-intrusive smart pointer.

If you want to do intrusive reference counting, as you showed above, Poco's AutoPtr is a good, efficient implementation.
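A rough sketch of that intrusive style, assuming Poco's documented RefCountedObject base and AutoPtr's duplicate()/release() convention (Texture is the question's name; the rest is assumption):

 // Sketch only: intrusive counting via Poco.  RefCountedObject provides
 // duplicate()/release(); AutoPtr<T> calls them automatically.
 #include <Poco/AutoPtr.h>
 #include <Poco/RefCountedObject.h>
 #include <string>

 class Texture : public Poco::RefCountedObject {
 public:
     explicit Texture(const std::string &file) { /* ...load from file... */ }
 };

 void UseIt() {
     Poco::AutoPtr<Texture> t(new Texture("ball.png"));  // takes ownership
     Poco::AutoPtr<Texture> t2 = t;                       // duplicate()
 }   // release() called twice; object deleted when the count hits zero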

EDIT: I would add links, but my reputation is too low. Google any of the class names and you will find your way.

+1

The basic problem is that you do not take the reference until after CreateTexture returns. If you use open coding like that, the easiest way to handle it is to take a lock around TexPool, which is also used when releasing the last reference before deleting, for example:

 // PSEUDOCODE WARNING: --refcnt MUST be replaced by an atomic decrement-and-test
 // Likewise, AddRef() MUST use an atomic increment.
 void DecRef() {
     if (!--refcnt) {
         lock();
         if (!refcnt) delete this;
         unlock();
     }
 }

and

 Texture *CreateTexture(const std::string &file) {
     lock();
     TexPoolIterator i = TexPool.find(file);
     if (i != TexPool.end()) {
         (*i)->AddRef();
         unlock();
         return *i;
     }
     unlock();
     return new Texture(file);
 }

However, as others have noted, boost::shared_ptr (aka std::tr1::shared_ptr) implements all of this lock-free and safely, and it also supports weak pointers, which will help with your texture cache.

+1

In your cache you need to use boost::weak_ptr or a similar construct.
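For example, a minimal sketch of such a cache (the names are hypothetical and locking around the map is omitted; weak_ptr::lock() returns an empty shared_ptr once the texture has been destroyed):

 // Sketch only: a texture cache that holds weak_ptrs, so the cache itself
 // never keeps textures alive.
 #include <boost/shared_ptr.hpp>
 #include <boost/weak_ptr.hpp>
 #include <map>
 #include <string>

 class Texture { public: explicit Texture(const std::string &file) { /*...*/ } };

 typedef boost::shared_ptr<Texture> TexturePtr;

 std::map<std::string, boost::weak_ptr<Texture> > TexPool; // guard with a mutex

 TexturePtr CreateTexture(const std::string &file)
 {
     TexturePtr t = TexPool[file].lock();   // empty if absent or expired
     if (!t) {
         t.reset(new Texture(file));
         TexPool[file] = t;                 // cache a non-owning reference
     }
     return t;
 }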

+1

I think you need critical sections for this particular design anyway. One place where they are required is CreateTexture, because otherwise you risk ending up with more than one identical texture object in the system. And in general, if multiple threads can create and destroy the same texture, that makes it mutable shared state.

0

Take a look at this pdf: http://www.research.ibm.com/people/d/dfb/papers/Bacon01Concurrent.pdf

It describes a reference counting system that does not require locking. (Well, you do need to "pause" the threads one at a time, which might be considered a lock.) It also collects garbage cycles. The disadvantage is that it is much more complicated. There are also some important things left as an exercise for the reader, like what happens when a new thread is created or an old one destroyed, or how to handle inherently acyclic objects. (If you decide to do this, let me know how you like it.)

0
