We recently completed a multicast performance analysis. Fortunately, Java and C performed almost identically when we tested different traffic rates on Windows and Solaris.
However, we noticed that the time it takes to send a multicast message increases with the time between sends: the more often we call send, the less time each call takes to complete.
The test application lets us control how long we wait between send calls, and the table below shows that the cost of a send call grows as the delay between packets grows. When sending 1,000 packets per second (a 1 ms wait), a call to send takes only about 13 microseconds. At 1 packet per second (a 1,000 ms wait), this rises to about 21 microseconds.
Wait time (ms)   Avg. send time (µs)
0                 8.67
1                12.97
10               13.06
100              18.03
1000             20.82
10000            57.20
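To make the measurement concrete, here is a minimal sketch of the kind of timing loop described above. The group address, port, payload size, and iteration counts are hypothetical placeholders; the actual harness we used is linked at the end of this post.

```java
import java.net.DatagramPacket;
import java.net.InetAddress;
import java.net.MulticastSocket;

public class MulticastSendBenchmark {
    public static void main(String[] args) throws Exception {
        // Hypothetical group/port/payload; the real test parameters are in the linked code.
        InetAddress group = InetAddress.getByName("239.1.2.3");
        int port = 9999;
        long delayMs = args.length > 0 ? Long.parseLong(args[0]) : 1;
        int iterations = 1000;

        byte[] payload = new byte[64];
        DatagramPacket packet = new DatagramPacket(payload, payload.length, group, port);

        try (MulticastSocket socket = new MulticastSocket()) {
            // Warm up so JIT compilation of the send path does not skew the timed loop.
            for (int i = 0; i < 1000; i++) {
                socket.send(packet);
            }

            long totalNanos = 0;
            for (int i = 0; i < iterations; i++) {
                if (delayMs > 0) {
                    Thread.sleep(delayMs); // simulate the configured inter-packet wait
                }
                long start = System.nanoTime();
                socket.send(packet);       // time only the send() call itself
                totalNanos += System.nanoTime() - start;
            }
            System.out.printf("delay %d ms: avg send() time %.2f us%n",
                    delayMs, totalNanos / (iterations * 1000.0));
        }
    }
}
```

Note that only the send() call is inside the timed region; the sleep that sets the inter-packet delay is excluded, so the numbers reflect the cost of the send itself at each rate.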
We see this phenomenon with both Java and C, and on both Windows and Solaris. Tests were conducted on a Dell 1950 server with a dual-port Intel PRO/1000 network card. Microbenchmarking is difficult, especially in Java, but we don't think this is due to JITing or GC.
The Java code and command line we use for the tests are available at: http://www.moneyandsoftware.com/2009/09/18/multicast-send-performance/