I need to delete 10,000+ files at once, atomically: either every file is removed, or every file stays in place.
The obvious answer is to move all the files into a temporary directory and delete it recursively on success, but that doubles the amount of I/O required.
Compression does not work, because 1) I do not know in advance which files will need to be deleted, and 2) the files need to be edited often.
Is there anything that can lower the I/O cost? Any platform will do.
EDIT: Suppose a power outage can occur at any time.
On *nix, moving a file within the same file system is just a rename(), and making a hard link is equally cheap: both are metadata operations that never touch the file data. So moving everything into a temporary directory and removing that directory afterwards does not actually double your I/O the way a copy would.
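For example (an illustrative fragment, not from the answer; the scratch-directory name and buffer size are made up), moving one file on the same file system is a single rename() call:

    #include <stdio.h>   /* rename(), snprintf() */
    #include <string.h>  /* strrchr() */

    /* Move path into scratch_dir by renaming it. On the same file system this
     * only rewrites directory entries; across file systems it fails with EXDEV. */
    static int stage_for_delete(const char *path, const char *scratch_dir)
    {
        const char *base = strrchr(path, '/');
        base = base ? base + 1 : path;

        char dest[4096];
        snprintf(dest, sizeof dest, "%s/%s", scratch_dir, base);
        return rename(path, dest);
    }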
If you want the file system itself to guarantee atomicity, that takes transaction support, and very few file systems provide it. NTFS on Windows Vista and later does: Transactional NTFS can group a batch of file operations into a single atomic transaction.
Another angle is volume snapshots: Volume Shadow Copy on Windows, or LVM snapshots on Linux. Take a snapshot of the volume, delete the files, and if something goes wrong roll back to the snapshot; if everything succeeds, discard it. On Windows the Shadow Copy service can be driven through WMI from VBScript, or from the C/C++ APIs.
You can also roll your own durability with a simple intent log:

Write the names of the files to be deleted into to_be_deleted.log.
fsync() the log so the list is safely on disk.
Delete the files, then delete to_be_deleted.log; removing the log is the "commit".

If a power outage hits partway through, check for to_be_deleted.log on restart and finish deleting whatever it lists.
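A minimal sketch of that idea in C (the helper names, the plain-text log format, and the error handling are illustrative assumptions; the answer itself only specifies to_be_deleted.log and fsync()):

    #include <stdio.h>
    #include <unistd.h>

    /* Phase 1: record the intent durably before deleting anything. */
    static int write_intent_log(const char *log_path, char **files, size_t n)
    {
        FILE *log = fopen(log_path, "w");
        if (!log)
            return -1;
        for (size_t i = 0; i < n; i++)
            fprintf(log, "%s\n", files[i]);
        fflush(log);
        if (fsync(fileno(log)) != 0) {   /* force the list onto the disk */
            fclose(log);
            return -1;
        }
        return fclose(log);
    }

    /* Phase 2: do the deletes; removing the log is the "commit".
     * Recovery after a power failure simply re-runs this phase for
     * every file still named in the log. */
    static int delete_listed_files(const char *log_path, char **files, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            unlink(files[i]);            /* ENOENT is fine on a re-run */
        return unlink(log_path);
    }

For a fully robust version you would also fsync() the directory that holds the log, and write the log under a temporary name before rename()-ing it into place, so that recovery can never see a half-written list.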
Windows Vista and later support Transactional NTFS, which does exactly this, for example:
    HANDLE txn = CreateTransaction(NULL, 0, 0, 0, 0, NULL /* or timeout */, TEXT("Deleting stuff"));
    if (txn == INVALID_HANDLE_VALUE) { /* explode */ }

    if (!DeleteFileTransacted(filename, txn)) {
        RollbackTransaction(txn); // You saw nothing.
        CloseHandle(txn);
        die_horribly();
    }

    if (!CommitTransaction(txn)) {
        CloseHandle(txn);
        die_horribly();
    }

    CloseHandle(txn);
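For 10,000+ files you would presumably call DeleteFileTransacted() in a loop with the same transaction handle and commit only once at the end, so the whole batch succeeds or fails together. A rough sketch (filenames and file_count are placeholders):

    // Hypothetical batch version: every delete joins the same transaction.
    HANDLE txn = CreateTransaction(NULL, 0, 0, 0, 0, NULL, TEXT("Batch delete"));
    if (txn == INVALID_HANDLE_VALUE) { /* explode */ }

    BOOL ok = TRUE;
    for (size_t i = 0; ok && i < file_count; i++)
        ok = DeleteFileTransacted(filenames[i], txn);

    if (ok && CommitTransaction(txn)) {
        // All of the files are gone, atomically.
    } else {
        RollbackTransaction(txn); // None of them were deleted.
    }
    CloseHandle(txn);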
The short answer to your question is "no." The longer answer is that this requires file system support, and very few file systems have it. Apparently NT has a transactional file system that supports this; perhaps Btrfs on Linux will as well.
In the absence of direct support, I believe that the hardlink, move, remove option is the best thing you are going to get.
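One way to read that hardlink, move, remove idea, as a rough sketch (the scratch-directory layout, the index-based link names, and the recovery policy are my own assumptions; in practice you would also persist the original paths, for example with the to_be_deleted.log approach above, so recovery knows what is left to finish):

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/stat.h>

    /* Phase 1: hard-link every file into scratch_dir. No data is copied,
     * and if this phase fails or is interrupted, removing scratch_dir
     * aborts the whole operation with every original still intact. */
    static int stage_links(char **files, size_t n, const char *scratch_dir)
    {
        char dest[4096];
        if (mkdir(scratch_dir, 0700) != 0)
            return -1;
        for (size_t i = 0; i < n; i++) {
            snprintf(dest, sizeof dest, "%s/%zu", scratch_dir, i);
            if (link(files[i], dest) != 0)
                return -1;
        }
        return 0;
    }

    /* Phase 2: unlink the originals and the staged links, then drop the
     * directory. Re-running this after a power failure is harmless:
     * ENOENT on any name just means that step already happened. */
    static void remove_all(char **files, size_t n, const char *scratch_dir)
    {
        char dest[4096];
        for (size_t i = 0; i < n; i++) {
            unlink(files[i]);
            snprintf(dest, sizeof dest, "%s/%zu", scratch_dir, i);
            unlink(dest);
        }
        rmdir(scratch_dir);
    }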