  1. #11
    Linux Newbie
    Join Date
    Jun 2012
    Location
    SF Bay area
    Posts
    173

    Well, that's interesting, and not what I expected... I believe it means the connections are setting up just fine, but as soon as the client (curl) sits and waits for data from the server, it hangs for ~75 seconds in your example. So something is happening after the client establishes a working connection that stalls the delivery of the first packet of content from the server. And once that starts, the external IP needed 3 seconds to receive the data. The internal connection didn't see any significant delay after the connection, and only needed 0.082 seconds to receive the whole payload.

    A delay of 75 seconds isn't a number that screams out any obvious timeout condition to me, so I'm really not sure what's up. Maybe try doing reverse DNS lookups manually on the server for the IP presented by the client where you ran the "curl" test? Also try to do a reverse DNS lookup for the IP address the server is listening on for external requests. Something like "time dig -x IP-ADDRESS-HERE" would give you wallclock time to do the lookups.
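
    For example, replacing the placeholders with the client's IP and the server's external IP:

    Code:
    time dig -x CLIENT-IP-HERE
    time dig -x SERVER-EXTERNAL-IP-HERE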

    Also, is there anything else in the HTTP path, like load balancers or firewalls?

  2. #12
    Just Joined!
    Join Date
    Jul 2012
    Posts
    9
    Quote Originally Posted by cnamejj View Post
    Well, that's interesting, and not what I expected... I believe it means the connections are setting up just fine, but as soon as the client (curl) sits and waits for data from the server, it hangs for ~75 seconds in your example. So something is happening after the client establishes a working connection that stalls the delivery of the first packet of content from the server. And once that starts, the external IP needed 3 seconds to receive the data. The internal connection didn't see any significant delay after the connection, and only needed 0.082 seconds to receive the whole payload.

    A delay of 75 seconds isn't a number that screams out any obvious timeout condition to me, so I'm really not sure what's up. Maybe try doing reverse DNS lookups manually on the server for the IP presented by the client where you ran the "curl" test? Also try to do a reverse DNS lookup for the IP address the server is listening on for external requests. Something like "time dig -x IP-ADDRESS-HERE" would give you wallclock time to do the lookups.

    Also, is there anything else in the HTTP path, like load balancers or firewalls?
    Thanks for the reply, I will look into that ASAP.

    I'm wondering if the 75 seconds is because of the processing? In a simplified flow, the PHP script does the following:

    It pulls the staff numbers of certain members of staff, pulls their times (in/out) from another table, and uses those times to work out certain criteria. It works out more criteria depending on whether it's a normal shift or a night shift, then it pulls future holidays. But this is the exact same code as internal users use... which doesn't take so long :S

  3. #13
    Linux Engineer Kloschüssel's Avatar
    Join Date
    Oct 2005
    Location
    Italy
    Posts
    773
    It would probably be a good idea to place milestones in the PHP script to see how long distinct parts of it take to execute. You may be able to narrow down the bottleneck this way. Maybe some code is only executed for external calls, or parts of the code take longer than expected when invoked externally.

  4. #14
    Linux Newbie nplusplus's Avatar
    Join Date
    Apr 2010
    Location
    Charlotte, NC, USA
    Posts
    106
    Quote Originally Posted by cnamejj View Post
    The first thing that comes to mind is that the IP address you used probably changed when you switched ISPs (meaning the DSL upgrade). So it's possible that something odd is happening at the connection setup, like reverse DNS lookups taking longer. I'd try using "curl" to get timing metrics for the different phases of the HTTP fetch from both an internal and external client and compare the two. Here's a sample command you can use. Just replace "URL-GOES-HERE" with the real URL for your fetches.

    Code:
    curl -w '\nTotal=%{time_total} DNS=%{time_namelookup} Conn=%{time_connect} AppConn=%{time_appconnect} PreTrans=%{time_pretransfer} Redir=%{time_redirect} DatStart=%{time_starttransfer}\n' -o /dev/null --url URL-GOES-HERE
    That is an awesome use of curl that I have never encountered before! Thanks, cnamejj! I am sure I will definitely have a use for this!

    N

  5. #15
    Linux Newbie nplusplus's Avatar
    Join Date
    Apr 2010
    Location
    Charlotte, NC, USA
    Posts
    106
    I am with cnamejj in leaning toward a networking problem here. I mean, internal users making requests to the internal IP address work fine, but internal or external users making requests to the public IP are hosed...

    Do you have full (i.e. root) access to the gateway? I am wondering if some packet captures aren't in order. If so, you are probably best off breaking this down into three distinct test cases: 1) Private IP to Private IP, 2) Private IP to Public IP, and 3) Public IP to Public IP.

    N

  6. #16
    Linux Newbie
    Join Date
    Jun 2012
    Location
    SF Bay area
    Posts
    173
    Here's a couple other ideas to help debug this.

    First, to make a clear distinction between an application code problem and something in the OS or webserver, put some static content on the webserver and fetch it both internally and externally. Use the same "curl" test or the Firefox plugin, since they give you the same timing breakdown of the HTTP transaction. If there's no delay for static content, then it's almost certainly something in the PHP code.
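
    For example, something along these lines - I'm assuming a docroot of /var/www/html and the externalip:8080 URL from your earlier post, so adjust the path, host and port to match your setup. Drop a test file on the webserver, then run the fetch from both an internal and an external client:

    Code:
    echo 'static test' > /var/www/html/static-test.html
    curl -w '\nTotal=%{time_total} DatStart=%{time_starttransfer}\n' -o /dev/null --url http://EXTERNAL-IP-HERE:8080/static-test.html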

    Second, you can make the HTTP request from the external IP through an SSH tunnel. That would appear to be an internal HTTP fetch to the webserver in terms of the client IP, but the data would still flow back across the external network path. To do that, run an ssh client similar to this on the external site:

    Code:
    ssh -L 8888:WEBSERVER-DOMAIN-NAME-HERE:80 user-AT-WEBSERVER-DOMAIN-HERE
    I had to change the at symbol to "-AT-" in the example above since I'm not allowed to post URLs on this forum yet.

    Then aim "curl" or Firefox at "localhost:8888/REST-OF-URL-HERE" and see if there's any delay.
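
    With the tunnel up, the test from the external machine would look something like this (same timing write-out as before, same REST-OF-URL-HERE placeholder):

    Code:
    curl -w '\nTotal=%{time_total} DatStart=%{time_starttransfer}\n' -o /dev/null --url http://localhost:8888/REST-OF-URL-HERE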

    Third, you could run "socat" or some other tool to forward TCP connections on the webserver from another port to the webserver port 80 and run the external/internal tests again. That does the same test as the second option, sends the data over the same network path, but gives the webserver the impression that the request was from a local address.
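
    I haven't tried this on your setup, but the "socat" version would be something like the following, run on the webserver itself (listen on 8888, relay to the local port 80, or 8080 if that's where the site actually lives):

    Code:
    socat TCP4-LISTEN:8888,fork,reuseaddr TCP4:127.0.0.1:80
    Then point the external "curl" test at the server's external IP on port 8888.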

    And FYI, for both the second and third experiments you could use a server other than the webserver as the SSH or "socat" termination point. That would simulate the internal test really well, since the webserver would see the HTTP request coming from another machine's internal IP address.

    Finally, if this is a PHP application issue, I don't think it will be related to the database calls. It would have to be some processing that uses the IP address of the client, so I'd start by checking the application code for any place it uses the client IP address.

    As a last resort, you could always run a packet capture on the webserver (or networking gear in the path) and compare the results from an internal and external fetch. But if the server is busy at all that could be pretty tedious, especially since you need a 75+ second packet capture for the external one.
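
    If it comes to that, something like this on the webserver would capture just the relevant traffic into a file you can compare later (swap in the real port and the client IP you're testing from):

    Code:
    tcpdump -i any -s 0 -w external-fetch.pcap port 8080 and host CLIENT-IP-HERE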

  7. #17
    Just Joined!
    Join Date
    Jul 2012
    Posts
    9
    Quote Originally Posted by cnamejj View Post
    Here's a couple other ideas to help debug this.

    First, to make a clear distinction between an application code problem and something in the OS or webserver, put some static content on the webserver and fetch it both internally and externally. Use the same "curl" test or the Firefox plugin, since they give you the same timing breakdown of the HTTP transaction. If there's no delay for static content, then it's almost certainly something in the PHP code.

    Second, you can make the HTTP request from the external IP through an SSH tunnel. That would appear to be an internal HTTP fetch to the webserver in terms of the client IP, but the data would still flow back across the external network path. To do that, run an ssh client similar to this on the external site:

    Code:
    ssh -L 8888:WEBSERVER-DOMAIN-NAME-HERE:80 user-AT-WEBSERVER-DOMAIN-HERE
    I had to change the at symbol to "-AT-" in the example above since I'm not allowed to post URLs on this forum yet.

    Then aim "curl" or Firefox at "localhost:8888/REST-OF-URL-HERE" and see if there's any delay.

    Third, you could run "socat" or some other tool to forward TCP connections on the webserver from another port to the webserver port 80 and run the external/internal tests again. That does the same test as the second option, sends the data over the same network path, but gives the webserver the impression that the request was from a local address.

    And FYI, for both the second and third experiments you could use a server other than the webserver as the SSH or "socat" termination point. That would simulate the internal test really well, since the webserver would see the HTTP request coming from another machine's internal IP address.

    Finally, if this is a PHP application issue, I don't think it will be related to the database calls. It would have to be some processing that uses the IP address of the client, so I'd start by checking the application code for any place it uses the client IP address.

    As a last resort, you could always run a packet capture on the webserver (or networking gear in the path) and compare the results from an internal and external fetch. But if the server is busy at all that could be pretty tedious, especially since you need a 75+ second packet capture for the external one.
    Firstly, thanks for the input, really appreciated!

    I will do the above tests, but I'm unsure about one thing.

    When you say:
    Code:
    ssh -L 8888:WEBSERVER-DOMAIN-NAME-HERE:80 user-AT-WEBSERVER-DOMAIN-HERE
    is that 8888 a new port number, or should that be 8080 (how I currently access the content)?

    Also, is that a permanent change, i.e. can I undo the above if it doesn't work, or is it a one-off change?

    Thanks

  8. #18
    Linux Newbie
    Join Date
    Jun 2012
    Location
    SF Bay area
    Posts
    173
    Quote Originally Posted by MercJones View Post
    Firstly, thanks for the input, really appreciated!

    I will do the above tests, but I'm unsure about one thing.

    When you say:
    Code:
    ssh -L 8888:WEBSERVER-DOMAIN-NAME-HERE:80 user-AT-WEBSERVER-DOMAIN-HERE
    is that 8888 a new port number, or should that be 8080 (how I currently access the content)?

    Also, is that a permanent change, i.e. can I undo the above if it doesn't work, or is it a one-off change?

    Thanks
    That ssh command is temporary. It establishes a connection from the machine where you run it to port 22 (the default SSH server port) on the webserver. If all you typed was,

    Code:
    ssh user-AT-webserver-domain-name
    that's all it would do. Again, replace "-AT-" with the at symbol... The ssh login to the webserver would only be around until you log out of the webserver or somehow stop/kill the "ssh" command you started.

    The "-L 8888:webserver-domain-name:80" option just adds a little something extra to the ssh connection. It's essentially mapping connections to "localhost 8888" to "webserver-domain-name 80" and using the ssh connection you setup as a tunnel. So you can use any port you want in place of "8888" and it will forward connections to that port on your localhost (the one where you ran the ssh client) to whatever host/port you list in the "webserver-domain-name:80" part of the command. The reason it's worth trying is that since the sshd (yes "d") process on the webserver is the one that initiates the connection to the webserver on behalf of the client. So it looks like a local connection to the webserver and any weirdness with the client IP address in the code shouldn't be an issue, since it doesn't know the IP address of the real client.

    So that's a long way of saying: it's temporary until the ssh client dies, and it's effectively a port forward from localhost:8888 to webserver-domain-name:80.

    Finally, if you're using webserver:8080 now to pull content, then the tunnel option should be "8888:webserver-domain-name:8080" in the example I gave.
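
    So the full command would be something like:

    Code:
    ssh -L 8888:WEBSERVER-DOMAIN-NAME-HERE:8080 user-AT-WEBSERVER-DOMAIN-HERE
    and then "curl" or Firefox would still point at "localhost:8888/REST-OF-URL-HERE".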

  9. #19
    Just Joined!
    Join Date
    Jul 2012
    Posts
    39
    Quote Originally Posted by MercJones View Post
    Our secondary site accesses the internal intranet via a link, which is basically:

    externalip:8080/test/test/test/index.php
    This is absolutely a network problem. I suppose you are using a public WAN here; you'll have to coordinate with your network team to investigate. They are responsible for making sure you get the bandwidth you paid for.

  10. #20
    Just Joined!
    Join Date
    Jul 2012
    Posts
    9
    Quote Originally Posted by ilesterg View Post
    This is absolutely a network problem. I suppose you are using public WAN here, you have to coordinate with your network team to investigate. They are responsible with making sure you get the bandwidth you paid for.
    There are no bandwidth limits imposed on us, so it's not that.
