Locking of executing files: Windows does it, Linux does not. Why?

I noticed that when a file is being executed on Windows (.exe or .dll), it is locked and cannot be deleted, moved or modified.

Linux, on the other hand, does not lock executing files, and you can delete, move or modify them.

Why does Windows lock the file when Linux does not? Is there any advantage to locking?

+75
linux windows filesystems locking operating-system
Oct 13 '08 at 7:05
8 answers

Linux has a reference-counting mechanism, so you can delete a file while it is executing, and it will continue to exist as long as some process (which opened it earlier) still holds an open descriptor to it. The directory entry for the file is removed when you delete it, so it can no longer be opened, but processes already using the file can still use it. Once all processes using the file have terminated, the file is deleted automatically.
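
A minimal sketch of this behaviour on a POSIX/Linux system (the file name demo.txt is just a placeholder): the name is removed with unlink(), yet the data stays readable through the open descriptor, and the inode is only freed when the descriptor is closed.

    /* Sketch: unlink a file while a descriptor to it is still open. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        const char *name = "demo.txt";      /* placeholder file name */
        int fd = open(name, O_CREAT | O_RDWR | O_TRUNC, 0600);
        if (fd < 0) { perror("open"); return 1; }

        write(fd, "hello\n", 6);
        unlink(name);                       /* directory entry is gone... */

        struct stat st;
        fstat(fd, &st);
        printf("link count after unlink: %ld\n", (long)st.st_nlink);  /* 0 */

        char buf[16] = {0};
        pread(fd, buf, sizeof buf - 1, 0);  /* ...but the data is still there */
        printf("read back: %s", buf);

        close(fd);                          /* now the inode is actually freed */
        return 0;
    }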

Windows does not have this mechanism, so it is forced to lock the file until all processes executing from it have finished.

I find the Linux behaviour preferable. There are probably some deep architectural reasons, but the main (and simple) reason I find most compelling is that on Windows you sometimes cannot delete a file, you have no idea why, and all you know is that some process is keeping it "in use". On Linux this never happens.

+96
Oct 13 '08 at 7:18

As far as I know, Linux does lock executables while they are running; however, it locks the inode. This means that you can delete the "file", but the inode is still in the file system, untouched, and all you really deleted is a link.

Unix programs use this way of thinking about the file system all the time: create a temporary file, open it, delete the name. Your file still exists, but the name is freed up for others, and nobody else can see it.
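
The C standard library even packages this idiom directly as tmpfile(): on Unix systems it typically creates the file and immediately deletes the name, so the storage is released automatically when the stream is closed or the program exits. A small sketch:

    /* Sketch: tmpfile() gives you a nameless temporary file. */
    #include <stdio.h>

    int main(void)
    {
        FILE *tmp = tmpfile();              /* created and already unlinked */
        if (!tmp) { perror("tmpfile"); return 1; }

        fputs("scratch data nobody else can see\n", tmp);
        rewind(tmp);

        char line[64];
        if (fgets(line, sizeof line, tmp))
            printf("read back: %s", line);

        fclose(tmp);                        /* storage is released here */
        return 0;
    }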

+26
Oct 13 '08 at 7:19

Linux does lock the files. If you try to overwrite an executable that is being executed, you will get "ETXTBSY" (Text file busy). You can, however, remove the file, and the kernel will delete it when the last reference to it is removed. (If the machine was not shut down cleanly, these files are the cause of the "Deleted inode had zero d-time" messages when the file system is checked: they were not fully deleted, because a running process held a reference to them, and now they are.)
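
A quick way to see ETXTBSY for yourself, assuming Linux (it relies on the Linux-specific /proc/self/exe link to the running binary):

    /* Sketch: opening the currently running executable for writing fails. */
    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/proc/self/exe", O_WRONLY);
        if (fd < 0 && errno == ETXTBSY)
            printf("as expected: %s\n", strerror(errno));   /* "Text file busy" */
        else {
            printf("unexpected result: fd=%d errno=%d\n", fd, errno);
            if (fd >= 0) close(fd);
        }
        return 0;
    }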

This has some major advantages: you can upgrade a process's executable by deleting the executable file, replacing it, and then restarting the process. Even init can be upgraded like this: replace the executable and send it a signal, and it will re-exec() itself, without requiring a reboot. (This is normally done automatically by your package management system as part of an upgrade.)

Under Windows, replacing a file that is in use is a major hassle, generally requiring a reboot to make sure no processes are still running it.

There can be some problems, for example if you have an extremely large log file and you remove it, but forget to tell the process that was logging to that file to reopen it: the process keeps holding the reference, and you will wonder why your disk did not suddenly get a lot more free space.

You can also use this trick on Linux for temporary files: open the file, delete it, then continue to use the file. When your process exits (no matter how, even through a power failure), the file is deleted.

Programs such as lsof and fuser (or just poking around in /proc/<pid>/fd) can show you which processes have open files that no longer have a name.
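
A small sketch tying the last two points together, assuming Linux (the /tmp template is just an example): mkstemp() creates the temporary file, unlink() immediately removes its name, and readlink() on /proc/self/fd shows the entry that lsof would report as "(deleted)".

    /* Sketch: anonymous temp file plus the /proc/self/fd view of it. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        char name[] = "/tmp/demo-XXXXXX";   /* template for mkstemp() */
        int fd = mkstemp(name);
        if (fd < 0) { perror("mkstemp"); return 1; }
        unlink(name);                       /* no name, but fd still works */

        write(fd, "scratch data\n", 13);

        char link[64], target[256];
        snprintf(link, sizeof link, "/proc/self/fd/%d", fd);
        ssize_t n = readlink(link, target, sizeof target - 1);
        if (n > 0) {
            target[n] = '\0';
            printf("%s -> %s\n", link, target);   /* "... (deleted)" */
        }

        close(fd);                          /* the blocks are released here */
        return 0;
    }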

+22
Dec 30 '09 at 10:51

I think Linux/Unix does not use the same locking mechanics because these systems were built from the ground up as multi-user systems, which implies the possibility of multiple users using the same file, possibly even for different purposes.

Is there an advantage to locking? Well, it could possibly reduce the number of pointers the OS has to manage, but nowadays the amount saved is pretty negligible. The biggest advantage I can think of for locking is this: you save yourself some user-visible ambiguity. If user A is running a binary and user B deletes it, the actual file has to stick around until user A's process finishes. Yet if user B (or any other user) looks for it in the file system, they will not be able to find it, although it continues to take up space. This is not really a big concern, though.

I think it is more a matter of backwards compatibility with older Windows file systems.

+5
Oct 13 '08 at 7:14

I think you are being too absolute about Windows. Normally, it does not allocate swap space for the code part of an executable. Instead, it keeps a lock on the executable and its DLLs. If discarded code pages are needed again, they are simply reloaded from the file. But with /SWAPRUN, those pages are kept in the swap file. This is used for executables on CDs or network drives, and in that case Windows does not need to lock the files.

For .NET, see Shadow Copy.

+5
Oct 13 '08 at 9:46

Whether the executing code in a file should be locked or not is a design decision, and MS simply decided to lock it, because in practice it has clear advantages: that way you do not need to know which application has which version of which code in use. This is a major problem with the default Linux behaviour, which most people simply ignore. If system-wide libraries are replaced, you cannot easily know which applications are still using the code of the old libraries; in most cases, the best you get is that the package manager knows about some users of those libraries and restarts them. But that only works for common, well-known things such as Postgres and its libraries.

The more interesting scenarios arise when you develop your own application against some third-party libraries and those get replaced, because most of the time the package manager simply does not know about your application. And this is not only a problem with native C code or the like; it can happen with almost anything: just use httpd with mod_perl and some Perl libraries installed via the package manager, and let the package manager update those Perl libraries for whatever reason. It will not restart your httpd, simply because it does not know the dependencies. There are many examples like this, simply because any file can potentially contain code that is in use in memory by some runtime; think of Java, Python and the like.

So there is good reason to consider locking files by default a sensible choice. You do not have to agree with those reasons, though.

So what did MS do? They simply provided an API that lets the calling application decide whether the file should be locked or not, but they decided that the default of this API gives an exclusive lock to the first calling application. Have a look at the CreateFile API and its dwShareMode argument. That is the reason you may not be able to delete files in use by some application: it simply does not care about your use case, uses the default values and therefore gets an exclusive lock from Windows for the file.
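
A minimal Win32 sketch of the difference (data.bin is a placeholder for some existing file; error 32 is ERROR_SHARING_VIOLATION): with dwShareMode = 0 the file cannot even be deleted while the handle is open, whereas FILE_SHARE_DELETE allows it, which is much closer to the Unix behaviour described above.

    /* Sketch: exclusive vs. cooperative share modes with CreateFileA. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* dwShareMode = 0: exclusive, effectively what many applications
         * end up with when they just take the defaults. */
        HANDLE locked = CreateFileA("data.bin", GENERIC_READ, 0, NULL,
                                    OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
        if (locked != INVALID_HANDLE_VALUE) {
            if (!DeleteFileA("data.bin"))
                printf("delete failed while locked, error %lu\n", GetLastError());
            CloseHandle(locked);
        }

        /* Readers, writers and deletion are all allowed alongside this handle. */
        HANDLE shared = CreateFileA("data.bin", GENERIC_READ,
                                    FILE_SHARE_READ | FILE_SHARE_WRITE |
                                    FILE_SHARE_DELETE,
                                    NULL, OPEN_EXISTING,
                                    FILE_ATTRIBUTE_NORMAL, NULL);
        if (shared != INVALID_HANDLE_VALUE) {
            if (DeleteFileA("data.bin"))
                printf("delete succeeded; the name disappears once the last handle closes\n");
            CloseHandle(shared);
        }
        return 0;
    }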

And please do not believe people telling you that Windows does not use reference counting on HANDLEs or does not support hardlinks or the like; that is completely wrong. Almost every API using HANDLEs documents its reference-counting behaviour, and you can easily read in almost any article about NTFS that it does in fact support hardlinks and always has; since Windows Vista it supports symlinks as well, and hardlink support has been improved by providing APIs to read all the hardlinks of a given file, and so on.

In addition, you may simply want to look at the structures used to describe a file in, for example, Ext4 compared to NTFS; they have a lot in common. Both work with the concept of extents, which separates the data from attributes such as the file name, and an inode is pretty much just another name for an older but similar concept. Wikipedia even lists both file systems in the same article.

There really is a lot of FUD on the net around file locking in Windows compared to other operating systems, just as there is around defragmentation. Some of this FUD can be ruled out simply by reading a bit on Wikipedia. ;-)

+2
May 16 '15 at 8:39

NT variants have the

openfiles

command, which will show which processes have handles on which files. It does, however, require enabling the system global flag "maintain objects list".

openfiles /local /?

will tell you how to do this, and also warns that a performance penalty is incurred by doing so.

0
Oct 13 '08 at 9:11

Executable files are progressively mapped into memory as they run. This means that portions of the executable are loaded only as they are needed. If the file is swapped out or replaced before all the sections have been mapped, this could cause major instability.
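
This is ordinary demand paging of a file-backed mapping. A rough POSIX sketch of the same mechanism (not the Windows API this answer refers to; mapping /bin/ls is just an arbitrary example of an existing binary):

    /* Sketch: a read-only file-backed mapping; pages are faulted in on access. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/bin/ls", O_RDONLY);   /* any existing file will do */
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        fstat(fd, &st);

        /* Nothing is read here; pages come in from the file on first access. */
        unsigned char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        printf("first bytes: %02x %02x %02x %02x\n", p[0], p[1], p[2], p[3]);

        munmap(p, st.st_size);
        close(fd);
        return 0;
    }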

0
Oct 21 '14 at 18:00


