- 11-18-2009 #1
Minimising WAN response times
Client server calls are made only when the user runs a query.
The client request is approx 100KB and the server response is about the same size.
Some background: the client app currently hits a server in the same global region, so the impact of the network on response times is minimal. However, I would like to retire the regional servers and instead have all global traffic coming to a single server instance in London.
In this case many of my clients would be making WAN round trips.
My target response time for the round trip from a client app in Singapore to the server in London is sub-second: a 100KB message to the server and a 100KB message back to the client.
I did a few experiments using TCP and UDP with the client in Singapore and the server in London.
FYI RTT from ping was 200ms.
With a TCP client/server it was taking 1.5 seconds from the moment the client started sending its query to the last read() of the response completing.
When I did a similar test using UDP the response time was about 205ms, which is close to the RTT.
Does the 1.5 sec TCP round trip seem reasonable, or is it likely I've not tuned TCP adequately? If so, what should I be looking at?
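(For reference, 1.5 seconds is roughly what TCP slow start would predict on a cold connection: the sender starts with a small congestion window and doubles it each RTT, so a 100KB burst needs several round trips before the window opens up. A back-of-envelope model, assuming ~1460-byte segments and an initial window of 3 segments; real stacks vary, and this ignores delayed ACKs and loss:)

```c
/* Rough slow-start model: how many RTTs to deliver 'bytes' when the
 * congestion window starts at 'initial_cwnd' segments and doubles each
 * RTT. Ballpark only; real stacks differ in initial window and pacing. */
int slow_start_rtts(long bytes, int mss, int initial_cwnd)
{
    long segments = (bytes + mss - 1) / mss; /* 100KB / 1460 -> 71 segments */
    long sent = 0, cwnd = initial_cwnd;
    int rtts = 0;
    while (sent < segments) {
        sent += cwnd;
        cwnd *= 2;
        rtts++;
    }
    return rtts;
}
```

With those assumptions the model gives 5 RTTs for the data, plus one more if a connection is set up per query: 6 x 200ms = 1.2s, in the ballpark of the observed 1.5s. So keeping one warm, long-lived connection per client, rather than connecting per query, is the first thing to look at.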
Now I understand that UDP is unreliable and that's bad for my application, but at the same time the 1.5 second TCP timing is a killer for the responsiveness of my app across the WAN.
What if anything are my options to reduce the response times in this scenario?
FYI the TCP client/server had their SO_SNDBUF/SO_RCVBUF set to 1MB.
What I noticed for TCP across the WAN was that for a single 100KB send() call by the client I got lots of small reads (mostly 1.4KB but some a lot bigger) at the server.
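(Those small reads are expected behaviour rather than a tuning problem: TCP is a byte stream, so a single 100KB send() arrives as many MTU-sized chunks, and the receiver must loop until it has the whole message, usually with a length prefix so it knows how much to expect. A minimal receive loop, assuming a blocking socket; read_full is a hypothetical helper name:)

```c
#include <unistd.h>
#include <sys/types.h>

/* Read exactly 'len' bytes from fd, looping over short reads.
 * Returns len on success, 0 on EOF before len bytes, -1 on error. */
ssize_t read_full(int fd, void *buf, size_t len)
{
    char *p = buf;
    size_t got = 0;
    while (got < len) {
        ssize_t n = read(fd, p + got, len - got);
        if (n < 0)
            return -1; /* error (a caller may want to retry on EINTR) */
        if (n == 0)
            return 0;  /* peer closed before the full message arrived */
        got += (size_t)n;
    }
    return (ssize_t)len;
}
```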
I noticed that the UDP client was able to send datagrams up to about 60K (I think there is a 64K limit), so I had to send a couple to handle 100KB.
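(Splitting the 100KB payload under that limit is a simple loop; send_chunked below is a hypothetical helper, assuming a connected datagram socket, and it does nothing about ordering or reassembly on the far side:)

```c
#include <sys/socket.h>
#include <sys/types.h>

/* Split a large buffer into datagrams of at most 'max_dgram' bytes and
 * send each one on a connected datagram socket.
 * Returns the number of datagrams sent, or -1 on error. */
int send_chunked(int sock, const char *buf, size_t len, size_t max_dgram)
{
    int count = 0;
    while (len > 0) {
        size_t n = len < max_dgram ? len : max_dgram;
        if (send(sock, buf, n, 0) < 0)
            return -1;
        buf += n;
        len -= n;
        count++;
    }
    return count;
}
```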
One option I had considered for UDP reliability was sending the pair of datagrams from the client to the server, then after a short delay of say 50ms sending them again in case the first pair didn't make it. If the client doesn't get its response within say 400ms of the start, it tries one more time before giving up.
If this all fails then give the user an error message about the network being down.
The hope here is that 99.99% of the time one of the three attempts will succeed, so I get response times between 200 and 600ms.
This is still well short of the 1.5 seconds I get from TCP.
But I have no idea whether this approach is likely to be at all reliable. How many attempts to send a UDP datagram does one need to make before one is likely to be successful, and how many attempts before it's worth giving up and signalling a comms error?
I was really hoping that the 'simpler' TCP approach would give response times closer to the RTT. And to be honest, all that ad hoc reliability stuff I describe above is a pain.
If UDP is a sensible option for my use case (i.e. occasional 100KB round trips across the WAN) then does anyone have any advice on making it more reliable?
Do things like UDT help here? UDT manual: www.cs.uic.edu/~ygu1/
All help / advice appreciated.