  1. #11

    As a sanity check, I went to my samba server which has a large number of files on it, and did the following commands, with the following results:
    smbserver:/myfiles/users # echo */* */*/* | wc
          1  300273 4932638
    smbserver:/myfiles/users # ls */*
    -bash: /bin/ls: Argument list too long
    */* alone has about 30,000 files in it; */* plus */*/* together come to 300,273 files.
    In my book, that's as close to unlimited as I need to get.
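    For reference, the kernel limit that ls runs into here can be queried directly; a quick sketch (the exact value varies by kernel and shrinks with the size of your environment):

```shell
# ARG_MAX is the kernel's cap on the combined size of the argument
# list plus the environment passed to execve(). External programs such
# as /bin/ls are subject to it; shell builtins like echo never call
# execve(), so the cap does not apply to them.
getconf ARG_MAX
```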

  2. #12
    Linux User, Tokyo, Japan (Join Date: Nov 2008)
    Quote Originally Posted by John Rodkey View Post
    I'm sure that there are limits to echo *.
    However, my experience is that when ls * gives too many arguments, echo * does not in the same directory.

    Is it possible that echo, being a 'built-in' to the shell, uses a different method for processing its arguments than ls, which is an external program?
    Oh yes! I didn't think of that, but you're right. Bash doesn't limit the arguments to its built-in commands: a builtin never goes through the kernel's execve() call, so the kernel's argument-size limit (ARG_MAX) simply never applies to it. Your experience with ls versus echo fits that exactly.

    I guess I would only avoid listing files with "echo *" because it separates arguments with spaces rather than newlines, so filenames with spaces in them can cause confusion, especially when piping the output of echo to a filtering program. But if you only want to see what is there, then your technique would work just fine.
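    A minimal sketch of that space-versus-newline confusion, using two throwaway files (the names are made up for illustration):

```shell
# Set up a scratch directory with one plain name and one name
# containing a space (both names are arbitrary examples).
dir=$(mktemp -d)
touch "$dir/plain.txt" "$dir/name with spaces.txt"

# echo * joins everything with single spaces, so the boundary between
# the two filenames is invisible in the output:
( cd "$dir" && echo * )

# printf is also a builtin, but prints one name per line:
( cd "$dir" && printf '%s\n' * )

rm -rf "$dir"
```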

  3. #13
    OK, just tested echo * on 3.5 million files, and it didn't barf.
    Looked into it further: the limit isn't really in ls itself (or in getopt, which just parses whatever arguments arrive). Because ls is an external program, every ls * has to pass the expanded argument list through the kernel's execve() call, which fails with E2BIG ("Argument list too long") once the list exceeds ARG_MAX. The builtin echo never execs anything, so it never hits that limit.
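    One way to see the builtin/external split without creating millions of files is to hand both echos a single oversized argument. A sketch, assuming a typical Linux kernel (which also caps any single execve() argument, at roughly 128 KiB):

```shell
# Build one word of 200,000 characters (the size is an arbitrary pick,
# chosen only to exceed the usual per-argument cap).
big=$(printf 'x%.0s' $(seq 1 200000))

# The builtin echo never calls execve(), so it handles the word fine:
echo "$big" | wc -c

# The external binary must be exec'ed, and the kernel may refuse it:
/bin/echo "$big" >/dev/null 2>&1 || echo "/bin/echo: Argument list too long"
```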
