I want to retrieve some datasets from a particular website. Normally you open the URL, select options from drop-down boxes, radio buttons, check boxes etc., and submit the request. A dialog box then pops up in Windows asking you to save or open the file (a .txt file).

Now, the six-million-dollar question: how do I retrieve a large amount of data, based on several parameters that I can specify in the URL, in a non-interactive way, e.g. using wget?

This is what I tried.

I right-clicked to view the page source to see how the input arguments are supplied, and appended them to the URL as a query string. This works perfectly in IE (Internet Explorer), which promptly shows the open/save dialog when I paste the full URL into the address bar, without me having to make the individual selections on the page. But wget does not seem to return anything useful: it only retrieves something of type 'file' (containing only URL headers) and not the .txt file as expected.
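For reference, this is roughly what I ran. The base URL and parameter names below are made up (the real ones come from the form's page source). One thing I checked: an unquoted `&` is treated by the shell as a background operator, so the whole URL has to be quoted or wget only sees the part before the first `&`:

```shell
#!/bin/sh
# Hypothetical base URL and query parameters -- substitute the real
# names found in the form's page source.
BASE='http://www.example.com/cgi-bin/getdata'
QUERY='dataset=temp&year=2010&format=txt'

# Quote the whole URL so the shell does not split it at '&'.
URL="${BASE}?${QUERY}"
echo "$URL"

# The actual fetch (commented out here since the URL is made up):
# wget -O data.txt "$URL"
```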

Trying to retrieve the .txt file directly, without appending arguments to the URL (i.e. using ftp), returns an empty file. This approach probably makes no sense anyway, since I believe the .txt file does not exist beforehand; it is only generated and returned after the user has supplied a query in the form of arguments.
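One more thing I came across while reading about this: if the form submits with method="post" rather than "get", then appending the parameters to the URL won't work at all, and wget would need its --post-data option instead. A sketch, with the URL and parameter names again hypothetical:

```shell
#!/bin/sh
# Hypothetical form fields; check the <form method="..."> attribute in
# the page source to see whether the site expects GET or POST.
POSTDATA='dataset=temp&year=2010&format=txt'
echo "$POSTDATA"

# If the form uses method="post", wget can submit it like this
# (commented out because the URL is made up):
# wget --post-data="$POSTDATA" -O data.txt 'http://www.example.com/cgi-bin/getdata'
```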

Any replies/help much appreciated.