This is rather a philosophical question.
In C++, we have a beautiful, brilliant idiom: RAII. But I often find it incomplete: it does not account for the fact that my application can be killed with SIGSEGV.
I know, I know, you will say that such programs are wrong. But the sad fact is that on POSIX (Linux in particular) you can successfully allocate more memory than is physically available and then meet SIGSEGV in the middle of execution, while working with correctly allocated memory.
You may say: "The application is dying anyway, why should you care about those poor destructors that were not called?" Unfortunately, some resources are not automatically released when the application terminates, for example files left on the file system.
I am pretty sick of inventing hacks and breaking good application design to handle this. So I ask: is there a good, elegant solution to such problems?
Edit:
It seems I was wrong, and on Linux such applications are killed by the kernel's OOM killer. In that case the question remains the same, but the cause of the application's death is different.
Code snippet:
#include <string>
#include <unistd.h>  // ::unlink

struct UnlinkGuard
{
    explicit UnlinkGuard(const std::string& path_to_file)
        : _path_to_file(path_to_file)
    { }

    ~UnlinkGuard() {
        unlink();
    }

    // Delete the guarded file now; returns false if ::unlink() failed.
    bool unlink() {
        if (_path_to_file.empty())
            return true;
        if (::unlink(_path_to_file.c_str())) {
            return false;
        }
        disengage();
        return true;
    }

    // Give up ownership: the destructor will no longer delete the file.
    void disengage() {
        _path_to_file.clear();
    }

private:
    std::string _path_to_file;
};

void foo()
{
    std::string path_to_temp_file = "...";
    UnlinkGuard unlink_guard(path_to_temp_file);

    // ... work with the file; this may fail or throw ...

    unlink_guard.disengage();  // success: keep the file
}
If successful, I use the file. On failure, I want this file to be absent.
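On Linux specifically, this "keep on success, vanish on any failure" pattern can be had without a guard object at all: O_TMPFILE creates a file with no name, so the kernel reclaims it automatically however the process dies (including SIGKILL), and linkat() gives it a visible name only on success, via the /proc/self/fd trick described in open(2). A minimal sketch; the directory, the final path, and the payload are assumptions, and O_TMPFILE requires a supporting filesystem (tmpfs, ext4, XFS, ...):

```cpp
#include <cstdio>
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>

// Write into an anonymous file inside `dir`; give it the name
// `final_path` only if everything succeeded. If the process is killed
// at any point before linkat(), no file is ever visible on disk.
bool write_then_publish(const char* dir, const char* final_path)
{
    int fd = ::open(dir, O_TMPFILE | O_RDWR, S_IRUSR | S_IWUSR);
    if (fd < 0)
        return false;  // e.g. the filesystem lacks O_TMPFILE support

    const char data[] = "payload";  // stand-in for the real content
    if (::write(fd, data, sizeof data - 1) !=
        static_cast<ssize_t>(sizeof data - 1)) {
        ::close(fd);   // failure: the file never had a name
        return false;
    }

    // Materialize the file under its real name (trick from open(2)).
    char fd_path[64];
    std::snprintf(fd_path, sizeof fd_path, "/proc/self/fd/%d", fd);
    bool ok = ::linkat(AT_FDCWD, fd_path, AT_FDCWD, final_path,
                       AT_SYMLINK_FOLLOW) == 0;
    ::close(fd);
    return ok;
}
```

A useful side effect: linkat() fails with EEXIST if final_path already exists, so concurrent writers get atomic "first one wins" semantics for free.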