Netty server does not close / release socket

I am facing a resource problem in a netty server application.

    [io.netty.channel.DefaultChannelPipeline] An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.:
    java.io.IOException: Too many open files
        at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) [rt.jar:1.7.0_60]
        at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:241) [rt.jar:1.7.0_60]
        at io.netty.channel.socket.nio.NioServerSocketChannel.doReadMessages(NioServerSocketChannel.java:135) [netty-all-4.0.25.Final.jar:4.0.25.Final]
        at io.netty.channel.nio.AbstractNioMessageChannel$NioMessageUnsafe.read(AbstractNioMessageChannel.java:69) [netty-all-4.0.25.Final.jar:4.0.25.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511) [netty-all-4.0.25.Final.jar:4.0.25.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468) [netty-all-4.0.25.Final.jar:4.0.25.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382) [netty-all-4.0.25.Final.jar:4.0.25.Final]
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354) [netty-all-4.0.25.Final.jar:4.0.25.Final]
        at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116) [netty-all-4.0.25.Final.jar:4.0.25.Final]
        at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137) [netty-all-4.0.25.Final.jar:4.0.25.Final]
        at java.lang.Thread.run(Thread.java:745) [rt.jar:1.7.0_60]

As a workaround, I increased the maximum number of open files with ulimit -n, but I can still see the number of open files / sockets growing:

    lsof -p 5604 | grep socket | wc -l

It is now over 3000 ...
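
For what it's worth, the same count can also be watched from inside the JVM. This is only a monitoring sketch; it assumes an Oracle/OpenJDK runtime on Unix, where the OperatingSystemMXBean is a com.sun.management.UnixOperatingSystemMXBean, and the class name FdMonitor is made up:

    import java.lang.management.ManagementFactory;
    import java.lang.management.OperatingSystemMXBean;

    import com.sun.management.UnixOperatingSystemMXBean;

    public final class FdMonitor {
        // Logs the open/max file descriptor counts of the current JVM process.
        public static void logOpenFileDescriptors() {
            OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
            if (os instanceof UnixOperatingSystemMXBean) {
                UnixOperatingSystemMXBean unix = (UnixOperatingSystemMXBean) os;
                System.out.println("open fds: " + unix.getOpenFileDescriptorCount()
                        + " / max: " + unix.getMaxFileDescriptorCount());
            }
        }
    }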

netstat does not show any open or dangling connections ...

I use a ReadTimeoutHandler to close unused connections via the exception it fires. Handler code:

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
        if (cause instanceof ReadTimeoutException) {
            logger.debug("Read timeout - close connection");
        } else {
            logger.info(cause.getMessage());
        }
        ctx.close();
    }
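
To rule out that the close itself silently fails, one thing I could also do (just a debugging sketch, reusing the same logger field as above) is attach a listener to the close future:

    // Debugging sketch: requires io.netty.channel.ChannelFuture and
    // io.netty.channel.ChannelFutureListener imports.
    ctx.close().addListener(new ChannelFutureListener() {
        @Override
        public void operationComplete(ChannelFuture future) throws Exception {
            // Confirms whether the close actually completed on this channel.
            logger.debug("close completed, success=" + future.isSuccess());
        }
    });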

The server bootstrap looks like this:

    ServerBootstrap b = new ServerBootstrap();
    b.group(bossGroup, workerGroup)
     .channel(NioServerSocketChannel.class)
     .childHandler(new ChannelInitializer<SocketChannel>() {
         @Override
         public void initChannel(SocketChannel ch) throws Exception {
             ch.pipeline().addLast(new ReadTimeoutHandler(60));
             ch.pipeline().addLast(new LoggingHandler(mySpec.getPortLookupKey().toLowerCase()));
             ch.pipeline().addLast(new RawMessageEncoder());
             ch.pipeline().addLast(new RawMessageDecoder());
             ch.pipeline().addLast(new RequestServerHandler(ctx.getWorkManager(), factory));
         }
     })
     .option(ChannelOption.SO_BACKLOG, 128)
     .childOption(ChannelOption.SO_KEEPALIVE, true);

    ChannelFuture channelFuture = b.bind(port).sync();
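
The bind is then followed by the usual Netty shutdown sequence, sketched below for completeness. It does not affect the per-connection descriptors, but it is what releases the boss/worker event loop groups when the server stops:

    // Typical shutdown sequence (sketch): block until the server socket is
    // closed, then release the event loop groups.
    try {
        channelFuture.channel().closeFuture().sync();
    } finally {
        workerGroup.shutdownGracefully();
        bossGroup.shutdownGracefully();
    }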

Did I miss something? Shouldn't the number of open files decrease after a connection is closed (either by the remote host or by the timeout handler)?

What do I need to change to stop leaking these resources?

Update: I am using Netty 4.0.25.

Update 2: As requested, I moved the LoggingHandler in front of the ReadTimeoutHandler; here are the logs. The usual case, where the client disconnects:

    09:41:39,755 [3-1] [id: 0xca6601a2, /127.0.0.1:64258 => /127.0.0.1:4300] REGISTERED
    09:41:39,756 [3-1] [id: 0xca6601a2, /127.0.0.1:64258 => /127.0.0.1:4300] ACTIVE
    09:41:39,810 [3-1] [id: 0xca6601a2, /127.0.0.1:64258 => /127.0.0.1:4300] RECEIVED(1024B)
    09:41:39,813 [3-1] [id: 0xca6601a2, /127.0.0.1:64258 => /127.0.0.1:4300] RECEIVED(1024B)
    09:41:39,814 [3-1] [id: 0xca6601a2, /127.0.0.1:64258 => /127.0.0.1:4300] RECEIVED(150B)
    09:41:40,854 [3-1] [id: 0xca6601a2, /127.0.0.1:64258 => /127.0.0.1:4300] WRITE(1385B)
    09:41:40,855 [3-1] [id: 0xca6601a2, /127.0.0.1:64258 => /127.0.0.1:4300] FLUSH
    09:41:40,861 [3-1] [id: 0xca6601a2, /127.0.0.1:64258 :> /127.0.0.1:4300] INACTIVE
    09:41:40,864 [3-1] [id: 0xca6601a2, /127.0.0.1:64258 :> /127.0.0.1:4300] UNREGISTERED

The case where the client does NOT disconnect:

    10:04:24,104 [3-1] [id: 0x48076684, /127.0.0.1:50525 => /127.0.0.1:4300] REGISTERED
    10:04:24,107 [3-1] [id: 0x48076684, /127.0.0.1:50525 => /127.0.0.1:4300] ACTIVE
    10:04:24,594 [3-1] [id: 0x48076684, /127.0.0.1:50525 => /127.0.0.1:4300] RECEIVED(1024B)
    10:04:24,597 [3-1] [id: 0x48076684, /127.0.0.1:50525 => /127.0.0.1:4300] RECEIVED(1024B)
    10:04:24,598 [3-1] [id: 0x48076684, /127.0.0.1:50525 => /127.0.0.1:4300] RECEIVED(150B)
    10:04:25,638 [3-1] [id: 0x48076684, /127.0.0.1:50525 => /127.0.0.1:4300] WRITE(1383B)
    10:04:25,639 [3-1] [id: 0x48076684, /127.0.0.1:50525 => /127.0.0.1:4300] FLUSH
    10:05:25,389 [3-1] [id: 0x48076684, /127.0.0.1:50525 => /127.0.0.1:4300] CLOSE()
    10:05:25,390 [3-1] [id: 0x48076684, /127.0.0.1:50525 :> /127.0.0.1:4300] CLOSE()
    10:05:25,390 [3-1] [id: 0x48076684, /127.0.0.1:50525 :> /127.0.0.1:4300] INACTIVE
    10:05:25,394 [3-1] [id: 0x48076684, /127.0.0.1:50525 :> /127.0.0.1:4300] UNREGISTERED

So there is a 60-second delay before the close, as expected from the ReadTimeoutHandler.

After some analysis, I get the impression that the number of open files increases even when the client disconnects normally! Also, in that case there is no CLOSE() event ...
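
To check whether Netty itself still considers those channels open, I could count ACTIVE/INACTIVE events with a small sharable handler and add one shared instance as the first handler in each child pipeline (e.g. ch.pipeline().addLast(channelCounter) at the top of initChannel). This is only a sketch; the class name and the System.out logging are made up. If this counter goes back down while lsof keeps climbing, the leaked descriptors are not open Netty channels:

    import java.util.concurrent.atomic.AtomicInteger;

    import io.netty.channel.ChannelHandler.Sharable;
    import io.netty.channel.ChannelHandlerContext;
    import io.netty.channel.ChannelInboundHandlerAdapter;

    @Sharable
    public class ChannelCounter extends ChannelInboundHandlerAdapter {
        private final AtomicInteger open = new AtomicInteger();

        @Override
        public void channelActive(ChannelHandlerContext ctx) throws Exception {
            // One more channel that Netty considers open.
            System.out.println("channels open: " + open.incrementAndGet());
            super.channelActive(ctx);
        }

        @Override
        public void channelInactive(ChannelHandlerContext ctx) throws Exception {
            // Channel closed, either by the peer or by ctx.close().
            System.out.println("channels open: " + open.decrementAndGet());
            super.channelInactive(ctx);
        }
    }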

1 answer

Perhaps this is due to this Netty issue: https://github.com/netty/netty/issues/1731

This is the expected behavior and cannot really be changed. The JVM is signalling that it cannot accept the channel, so the connection is never established and no response can be sent. The client will see a connection failure. If you have a load balancer in front, it should retry against an alternative host or return a 503 on its own, on behalf of the application.
