  #1: morgolis (Just Joined!, Mar 2007, 5 posts)

    Log Clean Up script - Suggestions


    Hello folks,

    I'm looking for a log cleaner script to remove entries in a log that are clogging things up.

    Basically it's for my FTP server where I've restricted people to only 3 concurrent connections to the server. So if they try to download more than 3 files at a time, they get a message "Connection refused: too many sessions for this address."

    Well, it's clogging my log file and it chokes awstats.

    Does anyone know a good script that can parse a log file, find that phrase (or any phrase I enter), and then rewrite the file with those entries removed?

    Thx ahead of time!

  #2: likwid (Linux Enthusiast, Dec 2006, MA, 649 posts)
    Are the errors consecutive?

  #3: morgolis (Just Joined!, Mar 2007, 5 posts)
    likwid,

    What the logs show are those errors, then a couple of downloads start, then the errors again, then more uploads/downloads, and so on.

    So if you mean are they all in a row, the answer is no. I can grep them out of the log file, but what I would like to do is remove them from the log file so it stops choking awstats.

    Never been very good at scripting, but I've been learning.
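For what it's worth, the grep route hinted at here can also do the rewrite: grep -v prints only the lines that do not match. A minimal sketch, using hypothetical file names and sample log contents (not the real FTP log):

```shell
# Build a small stand-in log (hypothetical contents for illustration):
printf '%s\n' \
  'Connection refused: too many sessions for this address.' \
  'GET file.iso completed' > /tmp/ftp.log

# grep -v keeps only the lines that do NOT match the pattern:
grep -v '^Connection refused' /tmp/ftp.log > /tmp/ftp.log.clean
mv /tmp/ftp.log.clean /tmp/ftp.log

cat /tmp/ftp.log   # only the transfer line remains
```

Writing to a separate file and then moving it over avoids reading and writing the same file in one pipeline, which would truncate it.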

  #4: likwid (Linux Enthusiast, Dec 2006, MA, 649 posts)
    Well, I was going to say use uniq if they are consecutive, but you can do what you want with sed, something like:

    Code:
    sed -e '/^Connection refused: too many sessions/d'
    I'm sure there's a better regular expression for it, but I am not the uber regexp hacker. You'll want to copy the log to somewhere like /tmp, cat it and pipe it to that sed command, then output to the original file.
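Spelled out, that copy-then-filter flow might look like the following. The file names and log contents are made up for illustration; substitute your real FTP log path:

```shell
# Create a throwaway sample log standing in for the real FTP log:
printf '%s\n' \
  'Connection refused: too many sessions for this address.' \
  '226 Transfer complete.' \
  'Connection refused: too many sessions for this address.' > /tmp/demo.log

# Work from a copy so we never read and write the same file at once:
cp /tmp/demo.log /tmp/demo.work
sed -e '/^Connection refused/d' /tmp/demo.work > /tmp/demo.log
rm -f /tmp/demo.work

cat /tmp/demo.log   # 226 Transfer complete.
```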

  #5: likwid (Linux Enthusiast, Dec 2006, MA, 649 posts)
    A little bored at work and looking for scripting exercises, so I threw this together; it seems to work on a small test. I accept no responsibility for problems with this:

    Code:
    #!/bin/bash
    
    if [ $# -lt 2 ]; then
       printf "Usage: %s full_path_to_log_file start_of_lines_to_be_removed\n" "$(basename "$0")"
       exit 1
    elif [ ! -f "$1" ]; then
       printf "%s is either nonexistent or is not a regular file.\n" "$1"
       exit 1
    else
       /bin/cp "$1" /tmp/cleanup.work && sed -e "/^$2/d" /tmp/cleanup.work > "$1"
       rm -f /tmp/cleanup.work
       exit 0
    fi
    Remember to put the start_of_lines argument in double quotes if it contains whitespace.

  #6: morgolis (Just Joined!, Mar 2007, 5 posts)
    That's great, likwid. However, please forgive my ignorance, but how do I feed the log file to the script? I've tried "cat logname.log | script.sh" but I don't get anything.

  #7: likwid (Linux Enthusiast, Dec 2006, MA, 649 posts)
    This script takes the logfile as the first argument. So say you wanted to remove "This is a duplicate line" from the file /var/log/fakelog; you would type:


    whatever.sh /var/log/fakelog "This is a duplicate line"


    The script will check to make sure fakelog exists, copy it to /tmp, run the copy through a sed one-liner that removes lines starting with the second argument, then output the result to the original fakelog.


    You can test it out by creating a file called /var/log/fakelog, making the contents something like
    Code:
    Duplicate Line
    Duplicate Line
    Duplicate Line
    Unique Line 1
    Duplicate Line
    Duplicate Line
    Unique Line 2
    and running

    Code:
    ./cleanup.sh /var/log/fakelog "Duplicate Line"
    If it doesn't spew an error, you should then see
    Code:
    Unique Line 1
    Unique Line 2
    as the contents of /var/log/fakelog

    Take note that whatever you name the script, it has to be marked executable with something like chmod 755 script.

  #8: morgolis (Just Joined!, Mar 2007, 5 posts)
    Thanks again for replying. I've created the bash script and tried to run it as you suggested, but I don't get the desired results.

    After reading the script, I see what you're trying to do and that's this:

    1. Copy log file to /tmp
    2. Cat it and then pipe it to sed to remove the line we don't want.
    3. Remove the copy in /tmp and should have new log from /var/log

    Unfortunately, this doesn't seem to be working. A couple of questions:
    1. What should I put into the script for "full_path_to_log_file start_of_lines_to_be_removed"?
    2. In "if [ $# -lt 2 ]; then", should the $# really be $1?

    When I run the script, it appears to run, but not for very long, and then it just returns to the command line. The log file is over 1 MB in size, so I would think it should take longer than a few seconds to copy, parse, and replace.

    Thanks again for all your help, it's greatly appreciated.

  #9: masam (Just Joined!, Nov 2006, Martinsburg, WV, 47 posts)


    http://www.linuxforums.org/forum/ubu...er-script.html
    Wow, that's about the same issue. Dude, I feel like "Neo" watching the black cat...
    or George Carlin talking about "Vuja De": "I don't remember none of this sh!t."

    love, light, and laterz.
    tank's...

  #10: likwid (Linux Enthusiast, Dec 2006, MA, 649 posts)
    Quote Originally Posted by morgolis
    Thanks again for replying, i've created the bash script and tried to run it as you suggested but I don't get the desired results.
    How so? The lines starting with argument 2 are not removed? I've tested here and it works fine.

    Quote Originally Posted by morgolis
    After reading the script, I see what you're trying to do and that's this:

    1. Copy log file to /tmp
    2. Cat it and then pipe it to sed to remove the line we don't want.
    3. Remove the copy in /tmp and should have new log from /var/log

    Unfortunately, this doesn't seem to be working. Couple questions. 1. what should i put in to the script for "full_path_to_log_file start_of_lines_to_be_removed"?
    The path to the log file is the ABSOLUTE path to the log file you want these lines removed from. Start of lines to be removed is how the line starts, IN QUOTES. If the line you want removed starts with "This is a duplicate line", then for argument 2 you put "This is a duplicate line".
    Quote Originally Posted by morgolis
    2. if [ $# -lt 2 ]; then -- Should the $# really be $1?
    No. $# is a variable that holds the argument count. This logic says that if the number of arguments is less than 2, print a usage message and exit with an error status of 1.
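A quick way to see the difference between the two, independent of the script itself (set -- just simulates positional arguments in an interactive shell):

```shell
# set -- replaces the positional parameters, simulating script arguments:
set -- /var/log/fakelog "Duplicate Line"
echo $#    # argument count: 2
echo "$1"  # first argument: /var/log/fakelog
```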

    Quote Originally Posted by morgolis
    When I run the script, it appears to run but not very long and then just returns to the command line. The log file is over 1 meg in size so I would think it should take longer than a few seconds to copy, parse, and replace.

    Thanks again for all your help, it's greatly appreciated.
    Normally in Unix, returning to the command line without printing an error means success. You can check the return status of your last command in bash by typing
    Code:
    echo $?
    What makes you think this didn't work? Lines beginning with argument 2 in /full/path/to/log aren't removed?
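For example, $? after a command that succeeds versus one that fails (true and false are standard commands that do nothing but set the exit status):

```shell
true       # a command that always succeeds
echo $?    # prints 0
false      # a command that always fails
echo $?    # prints 1
```

Any nonzero value means the last command reported an error.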

