- 10-28-2011 #1
iptables latency evaluation
I'm currently working on my master's thesis, whose subject is the evaluation of a virtual firewall in a cloud environment. To do so, I installed my own cloud using OpenNebula (as the frontend) and Xen (as a node) on two different machines. The Xen machine acts as my virtual firewall, thanks to iptables.
I am running a number of different performance tests against the Xen machine to evaluate the performance of iptables. One of these tests is the latency introduced by packet processing in iptables, and this is where I'm having trouble.
Here are the different ideas I had so far, and their problems:
- ICMP Timestamp pinging. An ICMP Timestamp reply contains three timestamps: the originate timestamp, which is the time the sender last touched the message; the receive timestamp, which is the time the receiver first touched the message; and the transmit timestamp, which is the time the receiver last touched the message before sending it back. By subtracting the receive timestamp from the transmit timestamp, we get the processing latency of the packet. The problem is that these timestamps are in milliseconds, which is not precise enough, since the latency (at least when very few rules are active in iptables) is lower than 1 ms.
- Normal ping run twice: once with the firewall on, then with it off. The processing time is the difference between these two measurements, divided by 2 (because of the round trip). A little more precise, as it is in microseconds, but still not enough (nanoseconds would be good). And I fear all this calculation adds too much approximation anyway...
- Wireshark timestamp calculation: doesn't work at all, as Wireshark captures packets before they enter iptables.
- Normal ping run once, reporting the round-trip latency. I won't get the processing latency itself, but I will still be able to graph the effect of rule count and throughput level on the overall latency of a connection going through the firewall. That's my "best" plan so far, but it's unsatisfying because it departs from the original goal of measuring the firewall latency alone.
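For what it's worth, the on/off ping-difference idea (the second bullet) can be sketched as follows. The RTT samples here are placeholder values, not real measurements, and using the median rather than the mean is just one way to damp outliers:

```python
from statistics import median

def firewall_latency_us(rtts_fw_on, rtts_fw_off):
    """Estimate the one-way processing latency added by the firewall,
    in microseconds, from two sets of round-trip times (also in us).
    Medians damp outliers; the result is divided by 2 because each
    RTT crosses the firewall twice (request and reply)."""
    return (median(rtts_fw_on) - median(rtts_fw_off)) / 2

# Placeholder RTT samples in microseconds (not real measurements):
rtt_on  = [412, 405, 420, 409, 415]   # iptables rules loaded
rtt_off = [398, 395, 401, 396, 400]   # iptables flushed (ACCEPT all)

print(firewall_latency_us(rtt_on, rtt_off))  # (412 - 398) / 2 = 7.0
```

Note this still folds in all the approximation you mention: any jitter between the two runs lands directly in the estimate, so you'd want many samples per run.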
Do you guys have any comments on my ideas, or better yet, a solution to accurately measure firewall latency?
- 10-30-2011 #2
Modify the ICMP source code on your system to use a finer-grained clock (microseconds)? One of the nice things about open source tools is that you can do that... Sometimes, real fast is almost as good as real time.
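To illustrate how much resolution a finer clock buys, here's a minimal sketch using Python's nanosecond-resolution monotonic clock; the same idea applies if you patch a ping implementation's C source to use `clock_gettime(CLOCK_MONOTONIC)` instead of a millisecond-granularity time source (this is a generic timing demo, not a patch for any particular ping):

```python
import time

# Take two nanosecond-resolution timestamps around the operation
# you want to measure (here just a trivial computation standing in
# for "packet processed by the firewall").
t0 = time.monotonic_ns()
checksum = sum(range(1000))
t1 = time.monotonic_ns()

elapsed_ns = t1 - t0
# A sub-millisecond event is invisible to a millisecond clock but
# perfectly measurable here.
print(elapsed_ns >= 0, elapsed_ns < 1_000_000)
```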
Just remember, Semper Gumbi - always be flexible!