I want to create a C++ server/client that maximizes throughput over TCP socket communication on localhost. As preparation, I used iperf to find out the maximum bandwidth on my i7 MacBook Pro.
    ------------------------------------------------------------
    Server listening on TCP port 5001
    TCP window size:  256 KByte (default)
    ------------------------------------------------------------
    [  4] local 127.0.0.1 port 5001 connected with 127.0.0.1 port 51583
    [  4]  0.0-120.0 sec   329 GBytes  23.6 Gbits/sec
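For reference, the run above is a plain loopback test. The invocation was essentially the following (the exact flags are reconstructed from the output above, so treat them as an assumption rather than a verbatim transcript):

    # terminal 1: start the iperf server with default settings
    iperf -s
    # terminal 2: run the client against localhost for 120 seconds
    iperf -c 127.0.0.1 -t 120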
Without any special configuration, iperf showed that I can achieve at least 23 Gbit/s. I then wrote my own server/client implementation in C++; the full code is here: https://gist.github.com/1116635
In this code, each read/write call transfers an array of 256 ints, i.e. 1024 bytes. My send loop on the server looks like this:
    int n;
    int x[256];
    // fill int array
    for (int i=0; i<256; i++) {
        x[i] = i;
    }
    for (int i=0; i<(4*1024*1024); i++) {
        n = write(sock, x, sizeof(x));
        if (n < 0) error("ERROR writing to socket");
    }
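For context, the write loop above runs on an already-accepted connection. The setup is the usual socket/bind/listen/accept sequence; a minimal sketch (not the exact code from the gist, error checks omitted) looks like this:

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <unistd.h>

    // Create a listening TCP socket on port 1234 and accept one client.
    // 'sock' is the descriptor used by the write loop above.
    int listenfd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in serv_addr;
    memset(&serv_addr, 0, sizeof(serv_addr));
    serv_addr.sin_family = AF_INET;
    serv_addr.sin_addr.s_addr = INADDR_ANY;
    serv_addr.sin_port = htons(1234);
    bind(listenfd, (struct sockaddr *) &serv_addr, sizeof(serv_addr));
    listen(listenfd, 5);
    int sock = accept(listenfd, NULL, NULL);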
My receive loop on the client looks like this:
    int n;
    int x[256];
    for (int i=0; i<(4*1024*1024); i++) {
        n = read(sockfd, x, (sizeof(int)*256));
        if (n < 0) error("ERROR reading from socket");
    }
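One detail I am not sure about: on a stream socket, read() may return fewer bytes than requested, so counting iterations instead of bytes could skew the measurement. A variant that tallies the bytes actually received (my own sketch, not the code from the gist) would be:

    int x[256];
    long long total = 0;                                  // bytes actually received
    const long long target = 4LL * 1024 * 1024 * 1024;    // 4 GiB in total
    while (total < target) {
        ssize_t n = read(sockfd, x, sizeof(x));
        if (n < 0) error("ERROR reading from socket");
        if (n == 0) break;                                 // peer closed the connection
        total += n;                                        // count what read() really returned
    }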
As mentioned in the title, running this (compiled with -O3) gives the following runtime, which works out to roughly 3.6 Gbit/s:
    ./client 127.0.0.1 1234
    Elapsed time for Reading 4GigaBytes of data over socket on localhost: 9578ms
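The elapsed time is measured around the read loop with a wall-clock timer, and the throughput figure follows from 4 GiB divided by the measured time. Sketched out (my own arithmetic, not the exact timer code from the gist):

    #include <chrono>
    #include <cstdio>

    // Wrap the client read loop in a wall-clock timer and convert
    // 4 GiB / elapsed seconds into Gbit/s.
    auto t0 = std::chrono::steady_clock::now();
    // ... client read loop from above ...
    auto t1 = std::chrono::steady_clock::now();
    double secs = std::chrono::duration<double>(t1 - t0).count();
    double gbits = (4.0 * 1024 * 1024 * 1024 * 8) / (secs * 1e9);
    printf("Elapsed: %.0f ms, throughput: %.2f Gbit/s\n", secs * 1000.0, gbits);
    // With the 9578 ms measured above this gives roughly 3.6 Gbit/s.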
Where am I losing bandwidth, and what am I doing wrong? Again, the full code is here: https://gist.github.com/1116635
Any help is appreciated!