I am working on a project related to Multipath TCP (MPTCP), and I want to measure the goodput and the RTT at the application level. I don't want to use a traditional tool (netperf, iperf, ...); I want to build my own.

So far, I have thought of several methods to do this. In each case a fixed number of send() calls is made per second (I intend to increase the number of send() calls per second and plot a graph of the network performance):

  • request-response tests (the client does one send() with a fixed amount of data to the server, and the server sends that data back) - the RTT is the elapsed time at the client between the send() call and the return of the corresponding recv()
  • burst mode (the client does a fixed number of successive send() calls, each carrying its timestamp, and the server sends each timestamp back) - the RTT is computed at the client as the difference between the moment the recv() call returns and the timestamp it carries (the one the server echoed back)
  • stream tests (a continuous stream of send() calls)
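For the request-response case, here is a minimal sketch in Python of what I have in mind (names like `echo_server` and `measure_rtt` are just illustrative; it uses plain TCP over loopback, but the same logic would run over an MPTCP socket where the kernel supports one):

```python
import socket
import threading
import time

PAYLOAD = b"x" * 1024  # fixed amount of data per request (size is an assumption)

def echo_server(listener):
    # Accept one client and echo everything it sends back.
    conn, _ = listener.accept()
    with conn:
        while True:
            data = conn.recv(65536)
            if not data:
                break
            conn.sendall(data)

def measure_rtt(host, port, requests=10):
    # RTT per request = elapsed time between send() and the moment
    # recv() has returned the complete echoed payload.
    rtts = []
    with socket.create_connection((host, port)) as c:
        for _ in range(requests):
            start = time.perf_counter()
            c.sendall(PAYLOAD)
            received = 0
            while received < len(PAYLOAD):
                chunk = c.recv(65536)
                if not chunk:
                    raise ConnectionError("server closed early")
                received += len(chunk)
            rtts.append(time.perf_counter() - start)
    return rtts

if __name__ == "__main__":
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    port = srv.getsockname()[1]
    threading.Thread(target=echo_server, args=(srv,), daemon=True).start()
    samples = measure_rtt("127.0.0.1", port)
    print(f"avg RTT: {sum(samples) / len(samples) * 1e6:.0f} us")
```

Note the inner recv() loop: TCP is a byte stream, so a single recv() is not guaranteed to return the whole echoed payload in one call.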


In each case the goodput is computed as the number of send() calls multiplied by the number of bytes per call, divided by the elapsed time.
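As a sketch, the goodput calculation would look like this (the function name and the example figures are made up for illustration; one caveat is that this counts bytes handed to send(), not bytes confirmed received by the peer):

```python
def goodput_bps(send_calls, bytes_per_call, elapsed_seconds):
    # Goodput = application payload transferred per unit time, in bits/s.
    # Only payload bytes are counted, so TCP/IP header overhead and
    # retransmissions are excluded by construction.
    return send_calls * bytes_per_call * 8 / elapsed_seconds

# e.g. 1000 send() calls of 1460-byte payloads over 2 seconds:
print(goodput_bps(1000, 1460, 2.0))  # 5840000.0 bits/s
```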
I would like to ask if these methods are considered correct.

Thank you.