ClosedByInterruptException is not thrown

The JDK docs say that if a thread is interrupted while it is blocked in an I/O operation on an interruptible channel, the channel is closed and a ClosedByInterruptException is thrown. However, when using FileChannel, I get different behavior:

import java.io.FileInputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

public class Main implements Runnable {

    public static void main(String[] args) throws Exception {
        Thread thread = new Thread(new Main());
        thread.start();
        Thread.sleep(500);
        thread.interrupt();
        thread.join();
    }

    public void run() {
        try {
            readFile();
        } catch (IOException ex) {
            ex.printStackTrace();
        }
    }

    private void readFile() throws IOException {
        FileInputStream in = new FileInputStream("large_file");
        FileChannel channel = in.getChannel();
        ByteBuffer buffer = ByteBuffer.allocate(0x10000);
        for (;;) {
            buffer.clear();
            // Thread.currentThread().interrupt();
            int r = channel.read(buffer);
            if (Thread.currentThread().isInterrupted()) {
                System.out.println("thread interrupted");
                if (!channel.isOpen())
                    System.out.println("channel closed");
            }
            if (r < 0)
                break;
        }
    }
}

Here, when the thread is interrupted, the read() call returns normally even though the channel has been closed; no exception is thrown. The code prints "thread interrupted" and "channel closed", and the next call to read() then throws a ClosedChannelException.

I am wondering whether this behavior is acceptable. As I understand the docs, read() should either return normally and leave the channel open, or close the channel and throw a ClosedByInterruptException. Returning normally while also closing the channel seems wrong. The problem for my application is that I get an unexpected, seemingly unrelated ClosedChannelException somewhere else when a FutureTask that does I/O is cancelled.

Note: a ClosedByInterruptException is thrown as expected when the thread is already interrupted at the time read() is entered.
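For reference, here is a minimal sketch of that already-interrupted case, where the documented behavior does occur. It uses a temporary file instead of the original "large_file"; the class and method names are my own:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.ClosedByInterruptException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class PreInterruptedRead {

    // Returns true if read() on a channel that is entered with the
    // interrupt status already set throws ClosedByInterruptException.
    static boolean throwsOnPreInterrupt() throws IOException {
        Path file = Files.createTempFile("pre-interrupt", ".bin");
        Files.write(file, new byte[1024]);
        try (FileChannel channel = FileChannel.open(file, StandardOpenOption.READ)) {
            Thread.currentThread().interrupt(); // set the flag before read()
            try {
                channel.read(ByteBuffer.allocate(64));
                return false; // read() returned normally
            } catch (ClosedByInterruptException ex) {
                return true;  // documented behavior: channel closed, exception thrown
            } finally {
                Thread.interrupted(); // clear the flag so later code is unaffected
            }
        } finally {
            Files.deleteIfExists(file);
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println("pre-interrupted read throws: " + throwsOnPreInterrupt());
    }
}
```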

I observed this behavior on a 64-bit server VM (JDK 1.6.0_21, Windows 7). Can anyone confirm it?

2 answers

I remember reading somewhere, though I cannot cite the source, that FileChannel is only nominally interruptible. Once a read/write operation has passed from the JVM to the OS, the JVM cannot really do much about it, so the operation takes as long as it takes. The recommendation was to read/write in manageably sized chunks, so that the JVM can check the thread's interrupt status before handing the next piece of work to the OS.
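A minimal sketch of that recommendation (the helper name and chunk size are my own choices): read in modest chunks and check the interrupt flag between OS-level reads, surfacing the interrupt as an exception yourself rather than relying on read() to throw it:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.ClosedByInterruptException;
import java.nio.channels.FileChannel;

public class ChunkedReader {

    // Reads the channel to EOF in 64 KiB chunks and returns the byte count.
    // Between chunks, converts a pending interrupt into the exception that
    // read() itself may fail to deliver.
    static long readInChunks(FileChannel channel) throws IOException {
        ByteBuffer chunk = ByteBuffer.allocate(64 * 1024);
        long total = 0;
        for (;;) {
            chunk.clear();
            int r = channel.read(chunk); // blocks for at most one chunk
            if (Thread.currentThread().isInterrupted()) {
                throw new ClosedByInterruptException();
            }
            if (r < 0) {
                return total;
            }
            total += r;
        }
    }
}
```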

I think your example is a great demonstration of this behavior.

EDIT

I think the FileChannel behavior you describe violates the principle of least surprise, but from a certain angle it works as expected, even if that angle leaves something to be desired.

Because FileChannel is "interruptible" only in this limited sense and its read/write operations really do block, the blocking operation completes successfully and returns valid data that reflects the state of the file on disk. With a small file, it may even return the entire contents. Since you have valid data in hand, the designers of the FileChannel class apparently felt you should get to use it before the interrupt starts unwinding things.

I think this behavior should be really, really well documented, and you could file a bug for that. However, don't hold your breath waiting for it to be fixed.

I think the only way to tell whether the thread was interrupted during the current iteration of the loop is to do what you are already doing: explicitly check the thread's interrupt flag.


One possibility is that the program actually reaches the end of the file before the interrupt is delivered. Try changing your program to count the total number of bytes read, and print the count when you see that the current thread has been interrupted.
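A sketch of that diagnostic (the class and helper names are mine): count bytes as they are read and record the count at the moment the interrupt flag is first seen, so you can compare it against the file size and rule out a plain EOF:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

public class ByteCounter {

    // Returns {totalBytesRead, bytesReadWhenInterruptSeen}; the second
    // element is -1 if the interrupt flag was never observed. If it equals
    // the file size, the loop may simply have reached EOF before the
    // interrupt arrived.
    static long[] readAndCount(FileChannel channel) throws IOException {
        ByteBuffer buffer = ByteBuffer.allocate(0x10000);
        long total = 0;
        long interruptedAt = -1;
        for (;;) {
            buffer.clear();
            int r = channel.read(buffer);
            if (r > 0) {
                total += r;
            }
            if (interruptedAt < 0 && Thread.currentThread().isInterrupted()) {
                interruptedAt = total;
            }
            if (r < 0) {
                return new long[] { total, interruptedAt };
            }
        }
    }
}
```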

