Pipelining in Tomcat - in parallel?

I am writing a service on Tomcat and trying to understand the HTTP/1.1 pipelining feature and its implementation in Tomcat.

Here are my questions:

1] Is pipelined processing performed in parallel in Tomcat? That is, after receiving a batch of pipelined requests, does it split them into separate requests and dispatch them all in parallel? It looks that way from the little test below, but I'm trying to find an official document, etc.

    import java.io.DataInputStream;
    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.net.Socket;

    public class PipelineTest {
        public static void main(String[] args) throws IOException, InterruptedException {
            Socket socket = new Socket();
            socket.connect(new InetSocketAddress("ServerHost", 2080));
            int bufferSize = 166;
            byte[] reply = new byte[bufferSize];
            DataInputStream dis = null;

            // First without pipelining - TEST 1
            // socket.getOutputStream().write(
            //         ("GET URI HTTP/1.1\r\n" +
            //          "Host: ServerHost:2080\r\n" +
            //          "\r\n").getBytes());
            //
            // final long before = System.currentTimeMillis();
            // dis = new DataInputStream(socket.getInputStream());
            // Thread.sleep(20);
            // final long after = System.currentTimeMillis();
            //
            // dis.readFully(reply);
            // System.out.println(new String(reply));

            // Now pipeline 3 requests back-to-back on one connection - TEST 2
            byte[] request = ("GET URI HTTP/1.1\r\n" +
                    "Host: ServerHost:2080\r\n" +
                    "\r\n" +
                    "GET URI HTTP/1.1\r\n" +
                    "Host: ServerHost:2080\r\n" +
                    "\r\n" +
                    "GET URI HTTP/1.1\r\n" +
                    "Host: ServerHost:2080\r\n" +
                    "\r\n").getBytes();
            socket.getOutputStream().write(request);

            bufferSize = 1000;
            reply = new byte[bufferSize];
            final long before = System.currentTimeMillis();
            dis = new DataInputStream(socket.getInputStream());
            Thread.sleep(20);
            final long after = System.currentTimeMillis();
            dis.readFully(reply); // blocks until 1000 bytes of the 3 responses have arrived
            System.out.println(new String(reply));

            long time = after - before;
            System.out.println("Request took: " + time + " milli secs");
        }
    }

In TEST 2 above, the measured time is not 20 * 3 = 60+ ms; the actual GET requests come back very fast. Does this hint that Tomcat is parallelizing them, or am I missing something?
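Since the before/after pair in the code above only brackets the 20 ms sleep, here is a rougher but more direct sketch that timestamps the moment each pipelined response's status line arrives, assuming the same hypothetical ServerHost/URI as above. It does not parse response bodies, so it is only good enough to see relative ordering:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.InetSocketAddress;
    import java.net.Socket;

    public class PipelineTiming {
        public static void main(String[] args) throws Exception {
            Socket socket = new Socket();
            socket.connect(new InetSocketAddress("ServerHost", 2080));

            // Send three pipelined GETs back-to-back on one connection.
            String get = "GET URI HTTP/1.1\r\nHost: ServerHost:2080\r\n\r\n";
            socket.getOutputStream().write((get + get + get).getBytes());

            long start = System.currentTimeMillis();
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(socket.getInputStream()));
            String line;
            int responses = 0;
            // Crude measurement: log the moment each status line arrives.
            while (responses < 3 && (line = in.readLine()) != null) {
                if (line.startsWith("HTTP/1.1")) {
                    responses++;
                    System.out.println("Response " + responses + " started at +"
                            + (System.currentTimeMillis() - start) + " ms: " + line);
                }
            }
        }
    }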

2] What is the default pipeline depth in Tomcat? How can I control it?
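The closest setting I could find is the Connector's maxKeepAliveRequests attribute in server.xml, which the Tomcat docs describe as the maximum number of HTTP requests which can be pipelined until the connection is closed by the server (default 100; 1 disables keep-alive and pipelining, -1 means unlimited). A sketch, with the port made up to match my test:

    <Connector port="2080" protocol="HTTP/1.1"
               maxKeepAliveRequests="100"
               connectionTimeout="20000" />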

3] If I enable server-side pipelining for my service, is there anything else I need to consider, assuming the client follows the http://www.w3.org/Protocols/rfc2616/rfc2616-sec8.html#sec8.1.4 spec for pipelining? Any insights are welcome.

2 answers

I had a similar question about how Apache behaves, and after several tests I can confirm that Apache does in fact wait for each request to be fully processed before it starts processing the next one, i.e. pipelined requests are handled SEQUENTIALLY.
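To make the sequencing visible, I used a deliberately slow endpoint. A minimal sketch, assuming a standard javax.servlet deployment (the servlet class and the 500 ms delay are made up for the test): if the server handled pipelined requests in parallel, three pipelined GETs would complete in roughly 500 ms total, while sequential handling takes roughly 1500 ms.

    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Hypothetical test servlet: every request takes ~500 ms to serve,
    // so the total time for three pipelined GETs reveals whether they
    // were processed in parallel (~500 ms) or sequentially (~1500 ms).
    public class SlowServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            try {
                Thread.sleep(500);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            resp.setContentType("text/plain");
            resp.getWriter().println("done");
        }
    }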


The concept of pipelining says that the server should be able to accept requests at any time, but the processing of those requests happens in the order in which they were received. In other words, no parallel processing is performed.
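A minimal sketch of what that per-connection contract looks like, not Tomcat's actual code: requests may already be buffered on the socket, but each one is read, processed and answered before the next is touched, so responses go out in request order.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.OutputStream;
    import java.net.ServerSocket;
    import java.net.Socket;

    // Toy single-threaded server illustrating sequential, in-order handling
    // of pipelined requests (GETs without bodies only).
    public class SequentialServer {
        public static void main(String[] args) throws Exception {
            try (ServerSocket server = new ServerSocket(2080)) {
                while (true) {
                    try (Socket conn = server.accept()) {
                        BufferedReader in = new BufferedReader(
                                new InputStreamReader(conn.getInputStream()));
                        OutputStream out = conn.getOutputStream();
                        String line;
                        while ((line = in.readLine()) != null) {
                            if (line.isEmpty()) {
                                // A blank line ends one request's headers:
                                // answer it before touching the next buffered request.
                                String body = "ok\n";
                                out.write(("HTTP/1.1 200 OK\r\n"
                                        + "Content-Length: " + body.length() + "\r\n"
                                        + "\r\n" + body).getBytes());
                                out.flush();
                            }
                        }
                    }
                }
            }
        }
    }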

