  1. #1

    Does the tcp_slow_start option work at all?

    I'm trying to disable TCP slow start.
    Writing to /proc/sys/net/ipv4/tcp_slow_start_after_idle has no effect.
    Any ideas?

  2. #2
    I know this is a relatively old post but did you get a response? I'm seeing the same thing and would like to disable slow start.


  3. #3
    Quote Originally Posted by oldhoot View Post
    I know this is a relatively old post but did you get a response? I'm seeing the same thing and would like to disable slow start.

    Nope, but we managed to find a combination of technical and administrative solutions that works.

  5. #4
    Is your solution sharable?

  6. #5
    Sure, why not, but the story is a bit long.
    First, Linux implements a (non-standard) socket option: TCP_QUICKACK. In brief, the Linux TCP stack delays ACKs for up to 40 ms; the idea is simple: the hope is that the application generates a reply shortly, so the ACK can be sent along with that data.
    Unfortunately, when this algorithm meets TCP slow start on the peer's end (meaning no new data is sent until an ACK is received), and since a TCP session enters slow start occasionally, our data got delayed by 40 ms. It happened several times a day, on about 1% of the session's messages, but we couldn't afford such a large transaction latency.
    To resolve the issue we set the option in our code:
    setsockopt(_socket, IPPROTO_TCP, TCP_QUICKACK, (int[]){1}, sizeof(int));
    However, the option is dynamic: the flag is periodically reset inside the TCP stack's adaptive algorithm, so we had to make this call at every read/write. While a little clumsy, it nevertheless worked for the data sent from us to the peer. For the data sent to us by the peer we still had that 40 ms delay, so my idea was to disable slow start on our side to avoid changing the client's code.
    And finally, while we were trying to disable slow start on our side, the client reported that they had also implemented the TCP_QUICKACK option, and we have never seen the 40 ms issue since.
    Does that help? If you have access to the code on both sides of your TCP session, you can easily do the same thing.
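    The per-read re-arm described above might look like the sketch below. Here set_quickack and recv_quickack are hypothetical helper names, not part of any API; the pattern is simply to set TCP_QUICKACK = 1 again after every read, because the kernel clears the flag as part of its adaptive delayed-ACK logic:

```c
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

/* Re-arm quick ACKs on a TCP socket. Returns 0 on success. */
int set_quickack(int fd)
{
    int one = 1;
    return setsockopt(fd, IPPROTO_TCP, TCP_QUICKACK, &one, sizeof(one));
}

/* Hypothetical wrapper: read, then immediately set TCP_QUICKACK again,
 * since the kernel may have cleared the flag in the meantime. */
ssize_t recv_quickack(int fd, void *buf, size_t len)
{
    ssize_t n = recv(fd, buf, len, 0);
    set_quickack(fd);  /* best effort; failure is ignored here */
    return n;
}
```

    In practice you would do the same re-arm around send() as well, since (as noted above) the flag can be cleared by the stack at any point.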

  7. #6
    Thank you for all the detail. Before I take off and start implementing, perhaps I should get your take as to whether I'm really fighting a slow_start issue. Here's the output of our net testing benchmark for one of our 10GE interfaces...

    hoot@ubuntu1-1004:~/nuttcp$ sudo ./nuttcp -T60 -i3
    [sudo] password for hoot:
    1638.8125 MB / 3.00 sec = 4582.1693 Mbps 0 retrans
    2931.7500 MB / 3.00 sec = 8197.8164 Mbps 0 retrans
    3359.4375 MB / 3.00 sec = 9393.7182 Mbps 0 retrans
    3366.8125 MB / 3.00 sec = 9414.2744 Mbps 0 retrans
    3367.0000 MB / 3.00 sec = 9414.8018 Mbps 0 retrans
    3367.0000 MB / 3.00 sec = 9414.7265 Mbps 0 retrans

    You can see it starts out at about half speed but works its way up to where it should be.


  8. #7
    Difficult to say. It looks like slow start, but it may just as well be nuttcp's ramp-up time, or anything else; the measured data are too coarse.
    If you trace the session with a sniffer like tcpdump you will spot slow start immediately; the key sign is that a new portion of data is sent only after the previous one is acknowledged.
    And does it really bother you? Slow start should take only a few milliseconds, and it happens only at the beginning of a connection or after a retransmit. Unless you have a high-frequency, transaction-oriented data flow with harsh latency requirements, it shouldn't affect performance.

  9. #8
    Admittedly, I'm not exactly sure what I'm looking at. Attached is the screenshot from Wireshark. Does it meet your criteria for slow start?
    Attached Images

  10. #9
    I can't read it; it's too small and the resolution is too low, sorry.
    Did you read this? (Stevens, TCP/IP Illustrated, Vol. 1) It describes the pattern in detail:

    20.6 Slow Start
    In all the examples we've seen so far in this chapter, the sender starts off by injecting
    multiple segments into the network, up to the window size advertised by the receiver. While
    this is OK when the two hosts are on the same LAN, if there are routers and slower links
    between the sender and the receiver, problems can arise. Some intermediate router must
    queue the packets, and it's possible for that router to run out of space. [Jacobson 1988]
    shows how this naive approach can reduce the throughput of a TCP connection drastically.
    TCP is now required to support an algorithm called slow start. It operates by observing that
    the rate at which new packets should be injected into the network is the rate at which the
    acknowledgments are returned by the other end.
    Slow start adds another window to the sender's TCP: the congestion window, called cwnd.
    When a new connection is established with a host on another network, the congestion
    window is initialized to one segment (i.e., the segment size announced by the other end).
    Each time an ACK is received, the congestion window is increased by one segment. (cwnd
    is maintained in bytes, but slow start always increments it by the segment size.) The sender
    can transmit up to the minimum of the congestion window and the advertised window. The
    congestion window is flow control imposed by the sender, while the advertised window is
    flow control imposed by the receiver.
    The sender starts by transmitting one segment and waiting for its ACK. When that ACK is
    received, the congestion window is incremented from one to two, and two segments can be
    sent. When each of those two segments is acknowledged, the congestion window is
    increased to four. This provides an exponential increase.
    At some point the capacity of the internet can be reached, and an intermediate router will
    start discarding packets. This tells the sender that its congestion window has gotten too
    large. When we talk about TCP's timeout and retransmission algorithms in the next chapter,
    we'll see how this is handled, and what happens to the congestion window. For now, let's
    watch slow start in action.
    An Example
    Figure 20.8 shows data being sent from the host sun to a remote host. The data traverses a slow SLIP link, which should be the
    bottleneck. (We have removed the connection establishment from this time line.)
    Figure 20.8 Example of slow start.
    We see the sender transmit one segment with 512 bytes of data and then wait for its ACK.
    The ACK is received 716 ms later, which is an indicator of the round-trip time. The
    congestion window is then increased to two segments, and two segments are sent. When the
    ACK in segment 5 is received, the congestion window is increased to three segments.
    Although three more could be sent, only two are sent before another ACK is received.
    We'll return to slow start in Section 21.6 and see how it's normally implemented with
    another technique called congestion avoidance.
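    The exponential growth Stevens describes can be sketched as a toy calculation (not kernel code; the 512-byte MSS matches his example, while the 8192-byte advertised window is an assumed value):

```c
#define MSS  512    /* segment size, as in the Stevens example */
#define AWND 8192   /* assumed receiver advertised window */

/* Count round trips until cwnd reaches the advertised window, assuming
 * no loss: cwnd starts at one segment and gains one MSS per ACK, so it
 * doubles every round trip. */
int slow_start_rounds(void)
{
    int cwnd = MSS;               /* congestion window, in bytes */
    int rounds = 0;
    while (cwnd < AWND) {
        int acks = cwnd / MSS;    /* one ACK per in-flight segment */
        cwnd += acks * MSS;       /* +1 MSS per ACK => cwnd doubles */
        rounds++;
    }
    return rounds;
}
```

    With these numbers cwnd grows 512 -> 1024 -> 2048 -> 4096 -> 8192 bytes, i.e. the sender becomes limited by the advertised window after four round trips, which is why slow start normally only costs a few milliseconds on a short round-trip path.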
