  1. #1

    Help a Noob use WGET

    Heya guys.

    I'm trying to download all the data under this directory using wget:


    From what I've read, this should be possible with the --recursive flag, but I've had no luck so far.

    The only files that get downloaded are robots.txt and index.html (which doesn't actually exist on the server); wget does not follow any of the links in the directory listing.

    The code I've been using is:

    wget -r *ttp://***/components/game/mlb/year_2010/
    Please help me out guys.

    Last edited by odinswand; 07-02-2010 at 04:45 AM. Reason: Adjusting URLs..
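
    The symptom above (only robots.txt and index.html arrive, and no links are followed) is often caused by wget honoring the server's robots.txt during a recursive crawl. A minimal sketch of a recursive fetch that ignores robots rules, using a placeholder example.com URL since the thread's real host is redacted; the command is echoed so it can be inspected before running:

    ```shell
    # Placeholder URL; the thread's real host was redacted.
    BASE="http://example.com/components/game/mlb/year_2010/"

    # -r            recurse into linked pages
    # -np           never ascend above year_2010/
    # -e robots=off do not honor robots.txt exclusions
    CMD="wget -r -np -e robots=off $BASE"
    echo "$CMD"
    ```

    Drop the echo (or run the printed line) to perform the actual download.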

  2. #2
    Hi & welcome odinswand!

    I can suggest typing in terminal:

    $ man wget
    Linux User #489667

  3. #3
    Quote Originally Posted by nujinini View Post
    Hi & welcome odinswand!

    I can suggest typing in terminal:

    Thanks for your suggestion. I've read the man pages and nothing has helped. Perhaps I am missing something.

  5. #4
    theNbomr (Linux Newbie; BC Canada; joined May 2007)
    It is possible that the server, or an application running on it, has identified your wget client as a non-browser and is returning a subset of the site or alternate content to prevent site copying. You may be able to spoof the server with wget options such as --user-agent and --random-wait.
    It is also possible that the site you see in your browser contains a lot of JavaScript and/or Java applet content that wget cannot cope with: links generated by scripts are invisible to wget's HTML parser.
    --- rod.
    Stuff happens. Then stays happened.
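
    The options rod mentions can be sketched as below, again with a placeholder URL; the user-agent string is just an example of a browser-like value, and --random-wait varies the delay set by --wait between requests:

    ```shell
    BASE="http://example.com/components/game/mlb/year_2010/"  # placeholder URL
    UA="Mozilla/5.0 (X11; Linux x86_64)"                      # example browser-like agent

    # --user-agent  present a browser identity instead of "Wget/..."
    # --wait=1 --random-wait  pause a randomized ~0.5-1.5 s between requests
    CMD="wget -r -np --user-agent=\"$UA\" --wait=1 --random-wait $BASE"
    echo "$CMD"
    ```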

  6. #5

    Try this?

    wget *ttp://***/components/game/mlb/year_2010/*
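
    Note that wget only expands a trailing * glob for FTP URLs; over HTTP the * is sent literally and will likely 404. For HTTP, recursion with an accept filter gets a similar effect; a sketch with a placeholder URL, where the ".xml" suffix is only a hypothetical example, not something known from the thread:

    ```shell
    BASE="http://example.com/components/game/mlb/year_2010/"  # placeholder URL

    # -r -np  recurse without ascending; -A keeps only matching file names
    # '*.xml' is an illustrative pattern, not taken from the original post
    CMD="wget -r -np -A '*.xml' $BASE"
    echo "$CMD"
    ```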
