Reusing NioWorkerPool for multiple server and client bootstraps

Can I create one instance of Netty's NioWorkerPool and share it across multiple instances of ServerBootstrap and ClientBootstrap? My application has far too many threads, and at the moment each Bootstrap creates its own NioWorkerPool with 2 * the number of cores on my machine as its worker count.

If I share the pool, what are the consequences? Will every channel get a fair share of I/O time in the end, or will the server-side and client-side connections end up starving each other?

Even having one NioWorkerPool for all servers and one for all clients would be better than what I have now.

As far as I can tell, this is not a duplicate question. I have seen others talk about sharing the Executor, which I already do; I am more interested in sharing the actual NioWorkerPool instance. I have a thread dump of my process, and there are about 3,000 threads, most of which are waiting on Netty NIO events.
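
Roughly, what I have at the moment looks like the sketch below (simplified for illustration; the cached thread pools and the single server/client pair stand in for the real setup):

import java.util.concurrent.Executors;
import org.jboss.netty.bootstrap.ClientBootstrap;
import org.jboss.netty.bootstrap.ServerBootstrap;
import org.jboss.netty.channel.socket.nio.NioClientSocketChannelFactory;
import org.jboss.netty.channel.socket.nio.NioServerSocketChannelFactory;

// Each factory builds its own internal boss and worker pools;
// the worker count defaults to 2 * the number of available cores.
ServerBootstrap sb = new ServerBootstrap(
        new NioServerSocketChannelFactory(
                Executors.newCachedThreadPool(),    // boss threads
                Executors.newCachedThreadPool()));  // worker threads

ClientBootstrap cb = new ClientBootstrap(
        new NioClientSocketChannelFactory(
                Executors.newCachedThreadPool(),
                Executors.newCachedThreadPool()));

// Repeated across many bootstraps, this is where the thread count explodes.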

1 answer

Yes, you can. Here is an example:

ExecutorService executor = Executors.newCachedThreadPool();
NioClientBossPool clientBossPool = new NioClientBossPool(executor, clientBossCount);
NioServerBossPool serverBossPool = new NioServerBossPool(executor, serverBossCount);
NioWorkerPool workerPool = new NioWorkerPool(executor, workerCount);

ChannelFactory cscf = new NioClientSocketChannelFactory(clientBossPool, workerPool);
ChannelFactory sscf = new NioServerSocketChannelFactory(serverBossPool, workerPool);
...
ClientBootstrap cb = new ClientBootstrap(cscf);
ServerBootstrap sb = new ServerBootstrap(sscf);

Note that you should not create a new ChannelFactory for every bootstrap instance you create. You must reuse the factory.
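
To make that concrete, a small sketch (the bootstrap names are made up) of several bootstraps built from the two factories created above, all drawing on the single shared workerPool:

// All of these share the one workerPool created earlier via their factories.
ClientBootstrap cb1 = new ClientBootstrap(cscf);
ClientBootstrap cb2 = new ClientBootstrap(cscf);
ServerBootstrap sb1 = new ServerBootstrap(sscf);
ServerBootstrap sb2 = new ServerBootstrap(sscf);

On shutdown, keep in mind that because the pools are shared, calling releaseExternalResources() on one factory may also release the resources the other factory still depends on, so release them only after everything that uses them has been closed.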

Sharing a worker pool between different ChannelFactories means that a client socket and a socket accepted by the server can be handled by the same I/O thread belonging to the worker pool. This is usually fine, as long as the handlers in those channels' pipelines do not spend too much time when they are invoked from an I/O thread.

However, if the handlers of one type of channel take much longer than the handlers of the other channels, you may see delayed responses on the channels that did not get their turn soon enough. This can be avoided by making sure that all handlers are non-blocking and return as quickly as possible.
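
If some handlers really do have to block (for example on a database call), one option in Netty 3 is to push them onto a separate pool with an ExecutionHandler so the shared NIO worker threads stay free. This is only a sketch: the pool size, memory limits, and MySlowBusinessHandler below are illustrative assumptions.

import org.jboss.netty.channel.ChannelPipeline;
import org.jboss.netty.channel.ChannelPipelineFactory;
import org.jboss.netty.channel.Channels;
import org.jboss.netty.handler.execution.ExecutionHandler;
import org.jboss.netty.handler.execution.OrderedMemoryAwareThreadPoolExecutor;

// One shared ExecutionHandler for all pipelines; events for the handlers
// added after it run on this pool instead of the shared NIO worker threads.
final ExecutionHandler executionHandler = new ExecutionHandler(
        new OrderedMemoryAwareThreadPoolExecutor(16, 1048576, 1048576));

sb.setPipelineFactory(new ChannelPipelineFactory() {
    public ChannelPipeline getPipeline() {
        ChannelPipeline pipeline = Channels.pipeline();
        pipeline.addLast("executor", executionHandler);
        pipeline.addLast("handler", new MySlowBusinessHandler()); // hypothetical blocking handler
        return pipeline;
    }
});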
