  1. #1 - blackx (Just Joined!; Join Date: Oct 2011; Posts: 3)

    Cannot delete files created by a script


    I have a machine where I run a script that generates logs in the (root)\tmp\TEST folder. But for some reason, since last week, these files are undeletable after they are created.

    If I rename the tmp\TEST folder then I can (apparently) delete them, but if I recreate a folder named 'TEST' in the tmp directory, the files 'magically' reappear.

    I thought of unmounting the folder and running a disk check, but only on this folder (the machine is actually hosting a firewall system and I want to be sure not to compromise the other files on the system).

    As I'm really at the first steps of learning Linux/FreeBSD commands:

    how can I unmount the tmp folder, and how can I do a disk check only on the tmp folder?
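
    (What I have in mind is something like this, assuming tmp really is a separate filesystem; the device name below is only a guess and would need to be replaced with whatever mount actually shows:)
    Code:
    # First check whether /tmp is its own filesystem at all
    mount | grep /tmp
    df /tmp

    # If it is, unmount it, check it, and remount it.
    # /dev/ada0s1d is a PLACEHOLDER device name -- use the one shown by "mount".
    umount /tmp
    fsck /dev/ada0s1d
    mount /tmp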

  2. #2 - scathefire (Linux Enthusiast; Join Date: Jan 2010; Location: Western Kentucky; Posts: 626)
    You should post your script.
    linux user # 503963

  3. #3 - Just Joined! (Join Date: Sep 2008; Posts: 8)
    Two thoughts:
    1. Are you using backslashes ("\") in your paths, as in your post? The Linux path separator is a forward slash ("/"); the backslash is an escape character.
    2. "test" is the name of a built-in command in the shell. But "TEST" isn't, as far as I know.

  4. #4 - ramin.honary (Linux User; Join Date: Nov 2008; Location: Tokyo, Japan; Posts: 260)
    Do NOT use the back-slash \ character; use the forward-slash /.
    Code:
    rm /tmp/TEST    # Yes
    
    rm \tmp\TEST    # NO!!!
    Another possibility is that your "/tmp/TEST" folder does not have "writable" file-access permissions. If a directory is not writable, the "rm" command will return an error when you try to delete files inside of it ("rm -f /tmp/TEST/file" only suppresses prompts; it cannot override the directory's permissions). You can set the file-access permissions of the "/tmp/TEST" directory to writable using this command:
    Code:
    chmod a+w /tmp/TEST
    rm /tmp/TEST/*
    This means "change permission mode" "a" means "all people on the system" and "+w" means "will now be granted write access to this directory".

    Another possibility is that your script executed as a different user, and created files with different file-access permission settings.

    For example, if the script ran as the "root" user and created files in "/tmp", then ONLY "root" can delete those files. This is because the "/tmp" filesystem has the "sticky" file-access setting enabled on it. If a folder has "sticky" access control, then only the user who created a file (or root) can delete that file. You need to run the command:
    Code:
    sudo -u other-user rm -r /tmp/TEST
    Where "other-user" is "root" or whatever. To find out which user owns the file, run "ls -l /tmp". The output should look something this:
    Code:
    bash-4.1$ ls -l /tmp
    -rw-r--r-- 1 user2   user2   16 2011-10-28 07:42 foo-bar.txt
    -rw-r--r-- 1 user1   user1    0 2011-10-28 07:42 file1.txt
    -rw-r--r-- 1 root    root    16 2011-10-28 07:42 TEST
    bash-4.1$
    The third column shows you who owns (who created) that file. You can also use "ls -l" to display information on only one file:
    Code:
    bash-4.1$ ls -l /tmp/TEST
    -rw-r--r-- 1 root    root    16 2011-10-28 07:42 TEST
    bash-4.1$
    Again, look at the owner column.
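
    You can also check whether the "sticky" setting is enabled on a directory with "ls -ld"; a "t" at the end of the permission bits means it is set (the output below is only an example):
    Code:
    bash-4.1$ ls -ld /tmp
    drwxrwxrwt 12 root root 4096 2011-10-28 07:42 /tmp
    bash-4.1$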
    Last edited by ramin.honary; 10-28-2011 at 06:25 AM.

  5. #5 - Toadbrooks (Just Joined!; Join Date: Jul 2008; Posts: 54)

    Cron job?

    In addition to the other excellent answers above: if you run the script from cron, it runs as root unless you specified that it should run as another user. It shouldn't be a surprise that, as a regular user, you cannot delete a file or directory owned by root.

    If you'd post the output of "ls -la" we can help you confirm or rule out this possibility.
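
    (For reference, the system-wide crontab does let you pick which user a job runs as; this is only a sketch, and the script path is a placeholder:)
    Code:
    # /etc/crontab -- the sixth field names the user the command runs as
    # minute hour day month weekday  user  command
    0 2 * * *  nsm  /path/to/your/script.sh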

  6. #6 - blackx (Just Joined!; Join Date: Oct 2011; Posts: 3)
    Ramin (first I have to thank you for the very detailed information).


    Then I have to say that my first explanation wasn't very detailed.

    I have 2 scripts that run in sequence:

    In the first script I extract many logs into a tmp/testlog folder.

    In the second script I do several actions:
    2a) I extract some fields from the previous logs and write them all into a single file.

    Then, after I have copied that file to a remote folder, at the very end of the script I clean my tmp/testlog folder with the command rm -f.

    What is very strange is:

    1) The command deletes only some of the previously created files.
    2) If I try to delete them the day after they were created, I can delete them all without problems. (It seems the process that created the files keeps them locked for 'x' hours.)

    Other information: the scripts run as root, but the files are created by another user (it's a special user that virtually administrates the software the script belongs to).

    This is the output of ls -o:


    Code:
    -rw-r--r--  1 nsm  8981497 Oct 28 14:44 20111027-CSM00.log
    -rw-r--r--  1 nsm  8516029 Oct 28 14:43 20111027-CSM01.log
    -rw-r--r--  1 nsm  8608245 Oct 28 14:44 20111027-CSM02.log
    -rw-r--r--  1 nsm  9205203 Oct 28 14:44 20111027-CSM03.log
    -rw-r--r--  1 nsm  8053867 Oct 28 14:41 20111027-CSM04.log
    -rw-r--r--  1 nsm  9294391 Oct 28 14:44 20111027-CSM05.log
    -rw-r--r--  1 root      97 Oct 28 14:10 ftp_log_fase2

  7. #7 - blackx (Just Joined!; Join Date: Oct 2011; Posts: 3)
    Toadbrooks (thank you for answering): no, the script is not in cron. I'm still testing it manually.
    I'm executing it as root and I'm trying to delete the files as root.

  8. #8 - Toadbrooks (Just Joined!; Join Date: Jul 2008; Posts: 54)
    Quote Originally Posted by blackx
    1) The command deletes only some of the previously created files.
    2) If I try to delete them the day after they were created, I can delete them all without problems. (It seems the process that created the files keeps them locked for 'x' hours.)

    Other information: the scripts run as root, but the files are created by another user (it's a special user that virtually administrates the software the script belongs to).

    This is the output of ls -o:


    Code:
    -rw-r--r--  1 nsm  8981497 Oct 28 14:44 20111027-CSM00.log
    -rw-r--r--  1 nsm  8516029 Oct 28 14:43 20111027-CSM01.log
    -rw-r--r--  1 nsm  8608245 Oct 28 14:44 20111027-CSM02.log
    -rw-r--r--  1 nsm  9205203 Oct 28 14:44 20111027-CSM03.log
    -rw-r--r--  1 nsm  8053867 Oct 28 14:41 20111027-CSM04.log
    -rw-r--r--  1 nsm  9294391 Oct 28 14:44 20111027-CSM05.log
    -rw-r--r--  1 root      97 Oct 28 14:10 ftp_log_fase2
    Based on the size of the files and statement 2) above, I wonder if the files are still open for writing at the time you try to delete them. You can't do that, of course, even if you are running the script as root.

    Suggestion: include a sleep statement in your script to give the write routine enough time to complete and close the files before attempting to delete them.
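
    Something along these lines, for example (only a sketch; adjust the path and the interval to your setup):
    Code:
    # Wait until no process still has the log files open, then remove them.
    # (On Linux, fuser exits non-zero when none of the files is open.)
    while fuser /tmp/testlog/*.log >/dev/null 2>&1; do
        sleep 10
    done
    rm -f /tmp/testlog/*.log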

  9. #9 - ramin.honary (Linux User; Join Date: Nov 2008; Location: Tokyo, Japan; Posts: 260)
    Quote Originally Posted by blackx
    Then, after I have copied that file to a remote folder, at the very end of the script I clean my tmp/testlog folder with the command rm -f. What is very strange is:
    1) The command deletes only some of the previously created files.
    2) If I try to delete them the day after they were created, I can delete them all without problems. (It seems the process that created the files keeps them locked for 'x' hours.)

    Other information: the scripts run as root, but the files are created by another user (it's a special user that virtually administrates the software the script belongs to).
    I see. Then could it be that the scripts which create these files are doing many short append operations on your files over a long period of time? As soon as you delete one file, information is appended to the same file path, and the file is thus created again.

    Maybe your first script is reading from a fifo or a socket? If so, the script will continue to run until the fifo or socket is closed by its parent process, which could be several hours later, and during that time it will execute many more appends to the file paths in the /tmp/testlog directory. This is especially common for logging programs.

    Try running tail on the files, then deleting them, then running tail again to see if the contents of the files are different. If they are different, the file was re-created with new information after you deleted it. Another thing you can do is run the sudo fuser command on the files in the /tmp/testlog directory, to see if they are currently opened by any processes.

    Also, check what kinds of files your scripts are reading. Are they ordinary files, or fifos, or sockets? Look at the output of ls -l; the very first character before the permission bits is one of these: [-dlpsbc]. If it is "p" (a fifo, or "named pipe") or "s" (a socket), you could be reading from a stream of information rather than a static file. Your script will slowly execute until the stream is closed, appending information line by line to files in the /tmp/testlog directory.
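
    For example (a sketch only; the directory your script reads from is a placeholder here):
    Code:
    # Which processes (if any) still have the extracted log files open?
    sudo fuser /tmp/testlog/*

    # What kind of files does the script read from?  The first character of
    # each line is '-' for a regular file, 'p' for a fifo, 's' for a socket.
    ls -l /path/to/your/input/logs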

  10. #10 - ramin.honary (Linux User; Join Date: Nov 2008; Location: Tokyo, Japan; Posts: 260)
    Quote Originally Posted by Toadbrooks
    Based on the size of the files and statement 2) above, I wonder if the files are still open for writing at the time you try to delete them. You can't do that, of course, even if you are running the script as root.
    On my system, if you delete a file that is still opened by a process, the directory entry is removed from the containing directory. The file still exists and is still being written to by the owning process, but as soon as that process closes it (or exits), the file is deleted permanently. So you can rm -f a file that is still open, and it will no longer appear in the output of ls -l.

    I think the scripts may be opening, appending, and closing the file paths many times. So as soon as the files are deleted, they are created once more (because of append) by the still-running script.
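
    You can reproduce that behaviour with a small test; the file name here is made up, it just shows how a repeated append re-creates a deleted path:
    Code:
    # Background loop that appends a line every second, the way a logger would
    ( while true; do echo "line" >> /tmp/testlog/demo.log; sleep 1; done ) &

    rm -f /tmp/testlog/demo.log   # the file disappears from the directory...
    sleep 2
    ls -l /tmp/testlog/demo.log   # ...and is back, recreated by the next append

    kill %1                       # stop the demo loop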
