This is my first question here on Stack Overflow.
I am writing a monitoring tool for some VoIP production servers: a sniffing tool that captures all the traffic (VoIP calls) matching a given pattern, using the pcap library from Perl.
I can't use a poorly selective filter such as "udp" and then do all the filtering in application code, because the traffic volume would be too high and the kernel would end up reporting dropped packets.
What I do instead is build a more selective filter iteratively, to be used during the capture. At first I capture only (all) SIP signaling traffic plus IP fragments (pattern matching has to be done at application level anyway); then, when I find RTP information in the SIP packets, I append "or" clauses with the specific IP and port to the current filter string and reinstall the filter with setfilter().
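The filter-growing step can be sketched as follows (a Python sketch of the string-building logic only; the actual tool is written in Perl, and `build_filter` is a hypothetical name):

```python
def build_filter(rtp_endpoints):
    # Base filter: all SIP signaling (port 5060) plus IP fragments
    # (nonzero fragment offset in the low 13 bits of bytes 6-7).
    clauses = ["(udp and port 5060) or (udp and ip[6:2] & 0x1fff != 0)"]
    # One "or" clause per RTP endpoint learned from the SIP signaling.
    for ip, port in rtp_endpoints:
        clauses.append("(host %s and port %d)" % (ip, port))
    return " or ".join(clauses)

# Each time the SIP signaling reveals a new RTP endpoint, rebuild the
# string and reinstall it with setfilter() on the live capture handle.
print(build_filter([("10.0.0.1", 8000)]))
```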
So basically something like this:
Starting filter: "(udp and port 5060) or (udp and ip[6:2] & 0x1fff != 0)" -> captures all SIP traffic and IP fragments
Updated filter: "(udp and port 5060) or (udp and ip[6:2] & 0x1fff != 0) or (host IP and port PORT)" -> also captures the RTP for a specific IP, PORT
Updated filter: "(udp and port 5060) or (udp and ip[6:2] & 0x1fff != 0) or (host IP and port PORT) or (host IP2 and port PORT2)" -> captures the second RTP stream as well
Etc.
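For reference, the ip[6:2] & 0x1fff != 0 clause tests the IPv4 fragment-offset field (the low 13 bits of bytes 6-7 of the IP header), so it matches only non-first fragments; the first fragment still carries the UDP header and is caught by the port clauses. A minimal Python illustration of the same test (synthetic headers, byte layout per RFC 791):

```python
import struct

def is_noninitial_fragment(ip_header):
    # Same test as the BPF clause "ip[6:2] & 0x1fff != 0":
    # a nonzero fragment offset means a non-first fragment.
    flags_and_offset = struct.unpack("!H", ip_header[6:8])[0]
    return (flags_and_offset & 0x1FFF) != 0

# First fragment: offset 0, MF flag (0x2000) set.
first = bytes(6) + struct.pack("!H", 0x2000) + bytes(12)
# Later fragment: offset 185 (in 8-byte units), MF still set.
later = bytes(6) + struct.pack("!H", 0x2000 | 185) + bytes(12)
print(is_noninitial_fragment(first), is_noninitial_fragment(later))
```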
This works very well, since I can obtain the "real" packet loss of the RTP streams for monitoring purposes, whereas with the poorly-selective-filter version of my tool the RTP packet loss percentage was unreliable: some packets were missing not because of network loss, but because the kernel dropped them.
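The loss figure meant here is the one derived from the RTP sequence numbers of the captured stream; a hypothetical helper sketching that computation (ignoring 16-bit sequence wraparound for brevity):

```python
def rtp_loss_percent(received_seqs):
    # Expected packets = span of the RTP sequence numbers seen;
    # anything in that span that was never captured counts as lost.
    expected = max(received_seqs) - min(received_seqs) + 1
    lost = expected - len(set(received_seqs))
    return 100.0 * lost / expected

print(rtp_loss_percent([1, 2, 3, 5, 6, 7, 8, 9, 10]))  # one gap out of 10
```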
But here comes the downside of this approach.
Calling setfilter() while capturing implies that libpcap discards the packets that arrive "while the filter is being changed", as stated in the code comments of the set_kernel_filter() function in pcap-linux.c (I checked libpcap versions 0.9 and 1.1).
So whenever I call setfilter() while IP fragments are arriving, I lose some of those fragments, and this loss is not reported in the libpcap statistics at the end: I noticed it by digging into the traces.
Now, I understand why libpcap does this, but in my case I definitely must not drop any packet (I don't mind receiving some unrelated traffic during the filter switch).
Do you have any ideas on how to solve this problem without modifying the libpcap code?