Packet drop measured by ethtool, tcpdump and ifconfig
I have a question regarding packet drops.
I am running a test to determine when packet drops occur. I'm using a Spirent TestCenter through a switch (necessary to aggregate Ethernet traffic from 5 ports to one optical link) to a server using a Myricom card.
While running my test, if the input rate is below a certain value, ethtool does not report any drops (except dropped_multicast_filtered, which increments at a very slow rate). However, tcpdump reports X packets "dropped by kernel". Then, if I increase the input rate, ethtool reports drops but "ifconfig eth2" does not; in fact, ifconfig doesn't seem to report any packet drops at all. Do they all measure packet drops at different "levels", i.e. ethtool at the NIC level, tcpdump at the kernel level, etc.?
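That does seem to be the case: each tool reads counters kept at a different layer. A minimal sketch of where each one looks (assuming the eth2 name from the thread; NIC stat names vary per driver, so the grep is only illustrative):

```shell
IFACE=eth2   # interface name from the thread; substitute your own

# 1) NIC/driver level: counters kept by the card's driver/firmware
#    (names are driver-specific, e.g. the Myricom driver's dropped_* stats)
ethtool -S "$IFACE" 2>/dev/null | grep -i drop || true

# 2) Kernel interface level: this is what ifconfig prints; its source
#    is /proc/net/dev, which can be read directly (column 5 after the
#    interface name is RX drops)
awk 'NR > 2 { gsub(":", " "); print $1, "rx_dropped =", $5 }' /proc/net/dev

# 3) Capture-socket level: tcpdump's "dropped by kernel" summary counts
#    packets that overflowed its own capture buffer -- packets the NIC
#    and IP stack received fine but the capture socket could not keep
#    up with. It is printed when tcpdump exits.
```

So the three counters are not expected to agree: a packet can clear the NIC and the interface counters and still be "dropped by kernel" from tcpdump's point of view.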
And am I right to say that in the journey of an incoming packet, the NIC is the first level, then the kernel, then the user application? So any packet drop is likely to happen first at the NIC, then in the kernel, then in the user application? And if there are no drops at the NIC but there are drops in the kernel, then the bottleneck is not the NIC?
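On that reading, "dropped by kernel" at a low input rate points at the capture path rather than the NIC. One hedged way to check is to enlarge tcpdump's capture buffer (-B, in KiB) and cut per-packet work, then see whether the drop figure shrinks; this is a sketch only, run on lo so it is harmless (the thread's interface would be eth2):

```shell
# Bigger capture buffer (-B 8192 KiB), no DNS lookups (-n), raw write
# to /dev/null instead of formatting each packet. If "dropped by
# kernel" shrinks under the same load, the bottleneck was the capture
# socket, not the NIC.
timeout 2 tcpdump -i lo -B 8192 -n -w /dev/null 2>/dev/null || true
```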
I forget the name at the moment, but if you could find out for one program which library it uses, then you could probably find out for the rest.
In that case, though, the different drop counts wouldn't really make sense ...
Are those drops generally due to a certain ring buffer size, or rather to processing speed?
The NIC probably has the smallest buffer, but might be faster than the kernel.
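Both buffer sizes can be inspected, which helps separate the two causes. A sketch, again assuming the eth2 name from the thread (ethtool -G needs root and driver support, so it is left commented out):

```shell
IFACE=eth2   # from the thread; substitute your own

# NIC RX/TX ring sizes, current settings vs. hardware maximums:
ethtool -g "$IFACE" 2>/dev/null || true
# Growing the RX ring toward the reported maximum (root required):
# ethtool -G "$IFACE" rx 4096

# Kernel-side backlog queue between the driver and the protocol stack:
cat /proc/sys/net/core/netdev_max_backlog
```

If drops stop after enlarging the ring, the buffer was too small for the burst size; if they continue regardless, processing speed is the more likely limit.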
I remember once using a program named "lookbusy", small and easily portable, to put some load on the kernel, which is otherwise more difficult to do.