Your design is flawed. First, you are using a plain open, which buffers reads and writes; that makes it hard for multiple processes to communicate reliably through a single file.
As you seem to suspect, and as others have commented, on a Unix-like operating system there is no (reasonable) way to force only one process to read from a file at a time. The usual way to handle this is to use a lock file: only the process currently holding the lock reads from the data/connection file. See perldoc -f flock for details.
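The answer points at Perl's flock; the same Unix primitive is available from other languages too. A minimal sketch in Python (Unix only, via the fcntl module; the lock path and the `with_lock` helper are illustrative, not from the original):

```python
import fcntl

def with_lock(lock_path, action):
    """Run action() while holding an exclusive flock on lock_path.

    The lock file itself carries no data; it only serializes access
    to the real data/connection file.
    """
    with open(lock_path, "w") as lock:
        fcntl.flock(lock, fcntl.LOCK_EX)   # blocks until the lock is ours
        try:
            return action()                # safely touch the data file here
        finally:
            fcntl.flock(lock, fcntl.LOCK_UN)  # also released on close/exit
```

Any number of processes can call `with_lock` on the same path; the kernel guarantees only one holds the exclusive lock at a time.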
File locks on Unix have some drawbacks, unfortunately. In particular, they may be unreliable if the lock file lives on a network filesystem. With NFS, for example, working locks depend on every machine that mounts the filesystem running the lock daemon. One somewhat hacky but traditional way around this is to abuse mkdir semantics. If a group of processes all try to create a directory with the same name, exactly one of them is guaranteed to succeed (well, or none of them, but let's ignore that case for now). You can use this to synchronize processes. Before a process begins the work that only one process may do at a time, it tries to create a directory with an agreed-upon name. If that succeeds, great, it can proceed. If it fails, someone else is working, and it must wait. When the active process finishes, it removes the directory so that another process can create it.
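The mkdir trick described above can be sketched as follows (Python, with hypothetical path and function names; in real use the directory must live on the filesystem shared by all the processes):

```python
import os

def try_acquire(lock_dir):
    """Return True if we created the lock directory, i.e. we may proceed."""
    try:
        os.mkdir(lock_dir)       # atomic: exactly one concurrent caller succeeds
        return True
    except FileExistsError:
        return False             # someone else holds the lock; wait and retry

def release(lock_dir):
    os.rmdir(lock_dir)           # let the next process acquire the lock
```

A waiting process would typically loop, sleeping briefly between `try_acquire` attempts. Note the trade-off versus flock: if the holder crashes without calling `release`, the directory stays behind and must be cleaned up by hand or by a staleness check.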
In any case, the main message is that you need two files: one that your processes use to decide who may work (the lock), and one for the real work itself.