RAII in Python: what's the point of __del__?

At first glance, it seems that Python's special __del__ method offers the same advantages as a destructor in C++. But according to the Python documentation ( https://docs.python.org/3.4/reference/datamodel.html ), there is no guarantee that your __del__ method will ever be called at all!

It is not guaranteed that __del__() methods are called for objects that still exist when the interpreter exits.

In other words, the method is useless, isn't it? A hook function that may or may not be called really isn't much good, so __del__ is worthless as far as RAII is concerned. If I have cleanup that must happen, I don't want it to run at some point, whenever the GC happens to feel like it; I need it to run reliably, deterministically, 100% of the time.

I know that Python provides context managers, which are far more useful for this task, but why does __del__ exist at all? What is the point?

+5
2 answers

__del__ is a finalizer. It is not a destructor. Finalizers and destructors are entirely different animals.

Destructors are called reliably, and they exist only in languages with deterministic memory management (such as C++). Python's context managers (the with statement) can achieve similar effects under certain circumstances. They are reliable because the lifetime of the object is precisely fixed; in C++, objects die when they are explicitly deleted or when a scope is exited (or when a smart pointer deletes them in response to its own destruction), and that is exactly when their destructors run.
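
For example, a context manager gives you exactly the deterministic, 100%-of-the-time cleanup the question asks for. A minimal sketch, where the Resource class and its release() method are invented purely for illustration:

    from contextlib import contextmanager

    class Resource:
        """A made-up resource with an explicit release step."""
        def __init__(self, name):
            self.name = name
            print(f"acquired {self.name}")

        def release(self):
            print(f"released {self.name}")

    @contextmanager
    def acquire(name):
        res = Resource(name)
        try:
            yield res
        finally:
            res.release()  # runs deterministically, even if the body raises

    with acquire("db-connection") as res:
        print(f"using {res.name}")
    # by this point release() has already run, exactly once, at a known time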

Finalizers are not called reliably. The only legitimate use of a finalizer is as an emergency safety net (NB: that article is written from a .NET perspective, but the concepts translate reasonably well). For example, the file objects returned by open() are automatically closed when they are finalized. But you should still close them yourself (for example, by using the with statement). The reason is that objects are destroyed non-deterministically by the garbage collector, which may or may not run right away, and with generational garbage collection it may or may not collect a given object on any particular pass. Since nobody knows what optimizations might be invented in the future, it is safest to assume that you simply cannot know when the garbage collector will get around to your objects. That means you cannot rely on finalizers.
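
To illustrate the safety-net idea, here is a rough sketch; the TempFile class and its behaviour are made up for this answer, not taken from any library:

    import os
    import tempfile
    import warnings

    class TempFile:
        """close() is the real cleanup; __del__ is only an emergency backstop."""
        def __init__(self):
            fd, self.path = tempfile.mkstemp()
            os.close(fd)
            self._closed = False

        def close(self):
            if not self._closed:
                os.remove(self.path)
                self._closed = True

        def __enter__(self):
            return self

        def __exit__(self, *exc):
            self.close()

        def __del__(self):
            # Safety net only: this may run late, or never run at all.
            if not self._closed:
                warnings.warn("TempFile was never closed explicitly", ResourceWarning)
                self.close()

    # Preferred usage: deterministic cleanup via the with statement.
    with TempFile() as tf:
        print("working with", tf.path)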

In the specific case of CPython, you get somewhat stronger guarantees thanks to its use of reference counting (which is much simpler and more predictable than garbage collection). If you can guarantee that you never create a reference cycle involving a given object, that object's finalizer will be called at a predictable point (when the last reference dies). This is true only of CPython, the reference implementation; it does not hold for PyPy, IronPython, Jython, or any other implementation.
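
A small sketch of that CPython-specific behaviour (on other implementations the print may happen much later, or not at all):

    class Noisy:
        def __del__(self):
            print("finalizing Noisy")

    n = Noisy()
    del n   # In CPython the refcount drops to zero here, so "finalizing Noisy"
            # is printed immediately -- but only because there is no reference
            # cycle and no other reference keeping the object alive.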

+9

Because __del__ does get called. It is just unclear when that will happen, because in CPython, if you have circular references, the reference-counting mechanism cannot take care of collecting the object (and therefore finalizing it via __del__) and must delegate that to the garbage collector.

The garbage collector then has a problem: it cannot know in which order to break the circular references, because breaking them in the wrong order can cause additional trouble (for example, freeing memory that is still needed by the finalizer of another object in the same collected cycle, which could trigger a segfault).

What you are pointing out is that the interpreter may exit for reasons that prevent it from performing the cleanup at all (for example, because it segfaults, or because some C module rudely calls exit()).

There is PEP 442 on safe object finalization, which landed in Python 3.4. I suggest you take a look at it.

https://www.python.org/dev/peps/pep-0442/
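
As a rough illustration of the point about cycles (behaviour shown for CPython 3.4+, where PEP 442 applies):

    import gc

    class Node:
        def __del__(self):
            print("Node finalized")

    # Build a reference cycle between two objects that both define __del__.
    a, b = Node(), Node()
    a.other, b.other = b, a
    del a, b          # nothing printed yet: refcounting cannot free a cycle

    gc.collect()      # prints "Node finalized" twice on CPython 3.4+ (PEP 442);
                      # before 3.4, such objects were left uncollected in gc.garbage
    print(gc.garbage) # [] -- the cycle was actually finalized and collected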

0
