While the suggestions already offered are good ones, such as using FILE_SHARE_READ or FILE_DELETE_ON_CLOSE, I don't think there is an absolutely safe way to do this.
I have used Process Explorer to close file handles that were being held open precisely to prevent a second process from starting. I did this because the first process was stuck (not killed, not dead, but not responding), so I had a good reason, and I did not want to restart the machine at that particular point because of other processes running on the system.
If someone uses some kind of debugger (including a non-commercial one written specifically for this purpose), attaches to your running process, sets a breakpoint to stop your code, and then closes the file you have open, they can write to the file you just created.
You can make it harder, but you cannot stop someone with sufficient privileges/skills/capabilities from intercepting your program and manipulating the data.
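The same limitation shows up on POSIX systems, where the closest analogue to a Windows sharing mode is an advisory lock. This is a minimal sketch, not part of the original answer, illustrating the point: a process that cooperates by trying to take the lock is refused, but a process that simply ignores the lock can still write to the file.

```python
import fcntl
import os
import tempfile

# Create a scratch file and take an exclusive advisory lock on it.
path = os.path.join(tempfile.mkdtemp(), "guarded.txt")
holder = open(path, "w")
fcntl.flock(holder, fcntl.LOCK_EX)

# A second handle that *cooperates* (tries to take the lock) is refused...
other = open(path, "a")
try:
    fcntl.flock(other, fcntl.LOCK_EX | fcntl.LOCK_NB)
    locked = True
except BlockingIOError:
    locked = False

# ...but a handle that simply ignores the lock can still write,
# because flock() locks are advisory, not mandatory.
other.write("modified anyway\n")
other.close()
holder.close()

print(locked)                 # the polite second locker was refused
print(open(path).read())      # the impolite write went through regardless
```

Windows sharing modes are enforced by the kernel rather than being advisory, but as described above, a debugger or a tool like Process Explorer can still close the handle out from under you, so the protection is a deterrent, not a guarantee.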
Note that file and folder protection only works if you reliably know that users do not have privileged accounts on the machine. Typically, Windows users either are administrators outright or have a separate account for administrative purposes, and I have sudo/root access on almost all the Linux boxes I use at work. There are some file servers where I don't [and shouldn't] have root access, but on every box I use myself, or can borrow for testing purposes, I can get a root environment. This is not at all unusual.
The only solution I can think of is to find another library that uses a different interface [or get the sources of the library and change it so that it does]. Not that this would prevent the stop, modify, and resume attack using the debugger approach described above.