  1. #11 - nplusplus (Linux Newbie), Charlotte, NC, USA; joined Apr 2010; 106 posts

    You might also look at logrotate, assuming your system has it. If so, you could schedule a regular rotation of the file. The configuration directive you would want is "copytruncate", which copies the log aside and then truncates the original in place; that is effectively the "> filename" approach, with a copy kept for you.
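
    As an illustration only, a minimal sketch of what such a logrotate entry could look like; the path and the rotation schedule are assumptions, not something given in this thread:

    Code:
    # hypothetical drop-in file, e.g. /etc/logrotate.d/myapp
    /var/log/myapp/big.log {
        # rotate weekly and keep four compressed copies
        weekly
        rotate 4
        compress
        missingok
        notifempty
        # copy the log aside, then truncate the original in place
        copytruncate
    }

    The trade-off, which the logrotate man page points out, is that the few lines written between the copy and the truncate can be lost; in exchange, the writing application never has to be restarted or told to reopen its log.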

    -N

  2. #12 - VirtualLinuxUser (Just Joined!), Pietermaritzburg, KwaZulu Natal, South Africa; joined Mar 2008; 28 posts

    Suspend Process First

    Quote Originally Posted by Irithori:
    Code:
    cat /dev/null > BIGFILE
    Quote Originally Posted by clowenstein:
    To quote _Unix Power Tools_ (O'Reilly 2003):
    . . .
    You can also "almost" empty the file, leaving just a few lines, this way:
    $ tail afile > tmpfile
    $ cat tmpfile > afile
    $ rm tmpfile
    Wouldn't it be better to suspend the process first (using [Ctrl]+[Z] or some such combo)?
    Last edited by VirtualLinuxUser; 07-16-2010 at 08:22 PM. Reason: Quoting clowenstein

  3. #13 - Linux Newbie, Clinton Township, MI; joined Apr 2005; 103 posts

    Deleting files will reclaim space, but could raise problems

    Quote Originally Posted by hristo77:
    Thanks for everybody's help.

    I have deleted these kinds of files before, and when using "du -sk *" the space seems to be reclaimed.

    Regards
    H
    Space isn't the issue; the issue is ending up with a log file, assuming that's what this is, that is no longer usable.

    Instead of removing it, CLEARING it is the better approach, as stated, and that is why any one of three options, all of which effectively do the same thing, is appropriate.

    The logrotate approach typically appends .n, where n is a number, to however many rotated logs it keeps. I've seen it keep 3 versions, but some setups keep more; check the logrotate documentation for details. That would be the standard systems administration approach.

    The second option is to explicitly write null or empty data, which you can do with the echo or cat approach using /dev/null; that is clear and obvious. Using tail to keep the last ten lines is a good idea too if you want to keep some context from what was previously there. Finally, redirecting nothing into the file with > replaces whatever is there, emptying it out and accomplishing what you intended.
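
    To make those options concrete, a short sketch of the corresponding commands; the file name bigfile.log is only a placeholder:

    Code:
    # write empty data over the file (the cat /dev/null variant quoted above)
    cat /dev/null > bigfile.log

    # keep the last ten lines for context, then put them back
    tail bigfile.log > /tmp/bigfile.tail
    cat /tmp/bigfile.tail > bigfile.log
    rm /tmp/bigfile.tail

    # plain redirection of nothing; ":" is a shell no-op, so this just truncates
    : > bigfile.log

    All three leave the file itself (and the writer's open file descriptor) in place; only the contents go away, which is exactly the point.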

    The choice is yours; the ONE thing I would NOT do is remove the file WHILE the application writing to it still has it open. That effectively gives you a zombie file handle: the inode still exists and the process keeps writing to it, but the name is gone from the directory, so the typical tools won't see it and the space is not reclaimed until the process closes the file. Only low-level tools that work at the inode or file-descriptor level will even find it, so that's not good.
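
    If you do end up with such a deleted-but-still-open file, a sketch of how you might track it down on Linux; lsof is assumed to be installed, and the PID 1234 is only illustrative:

    Code:
    # list open files whose directory entry has been removed (link count 0)
    lsof +L1

    # or, knowing the writer's PID, inspect its descriptors;
    # deleted targets are flagged with "(deleted)"
    ls -l /proc/1234/fd | grep deleted

    On Linux you can even truncate such a file through the matching /proc/<pid>/fd/<n> entry to reclaim the space without touching the process, although at that point simply clearing the file instead of removing it would have been the easier route.
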
    Brian Masinick
    masinick AT yahoo DOT com

  4. #14 - unlimitedscolobb (Linux Newbie); joined Jan 2008; 120 posts
    Quote Originally Posted by VirtualLinuxUser:
    Wouldn't it be better to suspend the process first (using [Ctrl]+[Z] or some such combo)?
    I don't think this is required. I am not sure whether all filesystems allow parallel writes, or whether only some of them do while the others block processes attempting to write in parallel, but the worst thing that can happen if parallel writes are allowed is that one log message will be corrupted. As far as I know, this is not a problem in the majority of situations.

  5. #15 - mititelu (Just Joined!); joined Feb 2009; 1 post

    Trouble

    Clearing with > file, or any other type of write, will just cause you BIG issues.

    To prove my point, do this:
    1. Have an app write continuously to a file.
    2. Clear the file 10-20 times very fast...

    You will see that the file gets interpreted as binary and becomes unreadable.
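
    A minimal sketch of one way to reproduce this at a shell prompt; the file name, timings, and PID handling are only illustrative. The writer below holds the file open without O_APPEND, so after the truncation it keeps writing at its old offset, the start of the file becomes a run of NUL bytes, and tools then treat it as binary:

    Code:
    # background writer that opens busy.log once, without append mode
    ( while true; do echo "log line $(date)"; sleep 1; done ) > busy.log &
    WRITER=$!

    sleep 5          # let some lines accumulate
    : > busy.log     # clear the file while the writer keeps running
    sleep 5

    file busy.log            # typically reports "data" rather than ASCII text
    od -c busy.log | head    # the leading bytes show up as \0
    kill $WRITER

    Writers that open their log with O_APPEND always write at the current end of the file, so truncation does not leave that gap; that is one reason long-running daemons usually open logs in append mode or are told to reopen them after rotation.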

    I would use logrotate; check out its man page. It has a safe option for clearing logs that are in use.

  6. #16 - VirtualLinuxUser (Just Joined!), Pietermaritzburg, KwaZulu Natal, South Africa; joined Mar 2008; 28 posts

    On Windows no {Ctrl} + {Z} equivalent "Considered Harmful"

    Quote Originally Posted by unlimitedscolobb:
    I don't think this is required. I am not sure whether all filesystems allow parallel writes, or whether only some of them do while the others block processes attempting to write in parallel, but the worst thing that can happen if parallel writes are allowed is that one log message will be corrupted. As far as I know, this is not a problem in the majority of situations.
    Thanks for that, u*scolobb. While informative, that doesn't technically or completely answer my question. That's my own fault for not asking it precisely. What I meant was this:

    Wouldn't it be better to suspend the process first {...} or would suspending the process first also be harmful (although I can't see how)? Please keep in mind that I'm migrating to GNU-Linux from Windows and have been in a situation where Windoze doesn't perform a file lock, with the net result that two processes can open a file, one (or both) can write to it and then save it (without checking for modifications or size differences), so that the data is either corrupted or erased completely! Anyone who doesn't believe me should do this:

    1. Create a new plain-text document (winsucks.txt) with CRLF line endings.
    2. Open the document with two instances of MS Notepad.
    3. Leave one instance of the document with the line of text in it.
    4. In the other instance of the document, write a short essay on why you like GNU-Linux.
    5. Save and exit from the essay.
    6. Save and exit from the document containing only one line of text. (Notepad won't prompt you to confirm that you want to overwrite the saved text.)
    7. Open "winsucks.txt". It will contain only one line of text.

    I've been using versions of Windows for ~10 years. Please give me some time to adjust my state of mind.

    Edit: Please note that the Windows versions I am referring to are all those from 3.x to XP (inclusive). I do not know if this occurs in Vista (codename Longhorn) or Windows 7 (codename Vienna). If this problem doesn't occur in Win 7:

    If anybody can show me conclusive evidence of the statements about Win < 7 being bogus (a screencast would be nice), please send it to me by email (nigel.nq.ngw[at]gmail[dot]com) and I will insist on dual booting Windows 7 and Fedora on the laptop I'm planning to buy.
    Last edited by VirtualLinuxUser; 07-19-2010 at 09:28 PM. Reason: Adding information about Windoze version

  7. #17 - unlimitedscolobb (Linux Newbie); joined Jan 2008; 120 posts
    Quote Originally Posted by mititelu:
    To prove my point, do this:
    1. Have an app write continuously to a file.
    2. Clear the file 10-20 times very fast...

    You will see that the file gets interpreted as binary and becomes unreadable.
    Please note that log files are usually never cleared that fast. What I suspect to be the reason for the behaviour you describe is that the file is modified by several writers in parallel, and the data going into it gets corrupted with unreadable characters. This should be especially observable if one uses a multibyte charset. Nevertheless, to me this situation looks somewhat artificial.

  8. #18 - unlimitedscolobb (Linux Newbie); joined Jan 2008; 120 posts
    Quote Originally Posted by VirtualLinuxUser:
    Wouldn't it be better to suspend the process first {...} or would suspending the process first also be harmful (although I can't see how)?
    Of course, explicitly sequencing the actions to prevent parallel writes would be best. However, suppose you have a webserver which writes its logs itself (not through an independent log server process). In this case, suspending the server will cause your site to go down for a while.

    Another reason suspension can be bad is that some processes won't react well to being suspended. This mainly concerns processes doing real-time work, like video or audio streaming.
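
    For completeness, a sketch of what suspending a background writer from the shell could look like; [Ctrl]+[Z] only reaches a foreground job in a terminal, so for a daemon you would use the corresponding signals. The PID 1234 and the log path are only illustrative:

    Code:
    kill -STOP 1234                # pause the writer (Ctrl+Z sends the similar SIGTSTP to a foreground job)
    : > /var/log/myapp/big.log     # clear the log while nothing is writing
    kill -CONT 1234                # let the writer continue

    As noted above, this stops the service for the duration, so in practice the truncate-in-place or logrotate approaches are usually preferred.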

    Quote Originally Posted by VirtualLinuxUser:
    Please keep in mind that I'm migrating to GNU-Linux from Windows and have been in a situation where Windoze doesn't perform a file lock, with the net result that two processes can open a file, one (or both) can write to it and then save it (without checking for modifications or size differences), so that the data is either corrupted or erased completely! Anyone who doesn't believe me should do this:
    In my opinion, the situation you are describing is not Windows' fault but Notepad's, because it doesn't check file times. In Windows, it is perfectly possible (and actually implemented in advanced editors, like the one built into Visual Studio) to check for modifications and tell the user.

    Also, one normally locks files while one has an open file descriptor, which Notepad most likely has only during the save operation. Thus, the OS sees two processes writing to a file sequentially (so it's not even about parallel writes).

    Note that I am not a fan of Windows, and I didn't mean to offend anybody; I just wanted to keep the discussion as true to fact as possible.

  9. #19 - VirtualLinuxUser (Just Joined!), Pietermaritzburg, KwaZulu Natal, South Africa; joined Mar 2008; 28 posts

    What's True at the Micro Level is Often True at the Macro Level

    Quote Originally Posted by unlimitedscolobb:
    Of course, explicitly sequencing the actions to prevent parallel writes would be best. However, suppose you have a webserver which writes its logs itself (not through an independent log server process). In this case, suspending the server will cause your site to go down for a while.

    Another reason suspension can be bad is that some processes won't react well to being suspended. This mainly concerns processes doing real-time work, like video or audio streaming.

    In my opinion, the situation you are describing is not Windows' fault but Notepad's, because it doesn't check file times. In Windows, it is perfectly possible (and actually implemented in advanced editors, like the one built into Visual Studio) to check for modifications and tell the user.

    Also, one normally locks files while one has an open file descriptor, which Notepad most likely has only during the save operation. Thus, the OS sees two processes writing to a file sequentially (so it's not even about parallel writes).

    Note that I am not a fan of Windows, and I didn't mean to offend anybody; I just wanted to keep the discussion as true to fact as possible.
    Thank you for that information. I wasn't sure about the process suspension deal, so that's now clarified for me.

    As for parallel writes: now that you mention that some apps do check for external file changes, I have to agree with you (but reluctantly). So far, the only apps I've used that do this are written by third parties (NetBeans kicks Visual Studio's donkey!), so I've always assumed that this was an implementation flaw in the Windows OS itself. After all, what is true on the micro (hehe) scale is often true on the macro scale. Well-known scientific principle, that.

    Objectivity is a highly valued commodity, in my opinion. I've used Windows long enough to be able to find fault with it without resorting to fabrication, so I stand (or sit, rather) corrected. However, one little flaw that varies from app to app is not going to get me to gloss over the other glaring flaws.

    Anyway, this post is now off-topic; I'm not going to post any more about Windoze woes in a thread about parallel file writes.

  10. #20 - VirtualLinuxUser (Just Joined!), Pietermaritzburg, KwaZulu Natal, South Africa; joined Mar 2008; 28 posts

    It is Possible to Find Flaws in Everything if You Look Hard Enough

    Quote Originally Posted by mititelu:
    Clearing with > file, or any other type of write, will just cause you BIG issues.

    To prove my point, do this:
    1. Have an app write continuously to a file.
    2. Clear the file 10-20 times very fast...

    You will see that the file gets interpreted as binary and becomes unreadable.

    I would use logrotate; check out its man page. It has a safe option for clearing logs that are in use.
    Technically, binary is readable, although not necessarily parsable. Open the file with a hex editor, and invest in a hexadecimal-to-ASCII/Unicode chart.
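
    Even without a dedicated hex editor, the standard dump tools will do; the file name busy.log is just a placeholder:

    Code:
    hexdump -C busy.log | head    # hex bytes alongside their ASCII rendering
    od -c busy.log | head         # or od from coreutils, printing each byte as a character or escape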

    I'm not implying that this would be a worthwhile thing to do, but neither is continuously writing to a file and clearing it repeatedly in quick succession just to prove a point (however valid). It's a bit like putting motion-charging batteries in a vibrator and not using it. That seems a bit contrived to me.
