  1. #1
    Linux User
    Join Date: May 2008
    Location: NYC, moved from KS & MO
    Posts: 251

    non-repeat grepping a growing log file


    Greetings. I have been working on a bash script to capture SMTP brute force attempts and send an email alert when certain criteria are met. The email server is Postfix.

    Here are the approaches I've tried so far:
    1. [ pseudo code ]
    Code:
    get the current date/time, up to the minute
    grep that timestamp against the log file, searching for sasl login attempts
    if the number of lines found is larger than a pre-set number, say 15, then
        send out an email alert to me (or, if the count is even bigger, say 30, add the offending IP to the postfix access file to block it)
    fi
    add that script to the crontab to run every minute.

    This works great, except that it can take up to a full minute before any action is taken, due to the way cron works. I want the action to be executed in a more timely fashion.
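
    In runnable form it looks roughly like this (the log path, the match pattern, and the alert address are simplified here):
    Code:
    #!/bin/bash
    # Minimal sketch of approach 1, meant to run from cron every minute.
    # Assumed: syslog-style timestamps like "May 27 14:03" in /var/log/maillog.
    LOG=/var/log/maillog
    THRESHOLD=15
    STAMP=$(date '+%b %e %H:%M')      # current date/time, up to the minute
    COUNT=$(grep -c "^$STAMP.*SASL LOGIN authentication failed" "$LOG")
    if [ "$COUNT" -gt "$THRESHOLD" ]; then
        echo "$COUNT SASL login failures at $STAMP" \
            | mail -s "smtp brute force alert" admin@example.com
    fi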

    2. [ pseudo code ]
    Code:
    get the initial number of lines in the log file: OLD (using wc)
    while true; do
        sleep 15
        get the new number of lines in the log file: NEW (using wc again)
        NumLines=NEW-OLD
        search for the pattern in the last NumLines lines
        take action if a match is found
        OLD=NEW
    done
    There is also some code that deals with the situation where the log gets rotated, but that's not the important issue here. The problem with this approach is that the performance of counting lines with wc suffers as the log grows. It's not that elegant, since the script has to count the total number of lines from the beginning of the log on every loop.
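
    A literal implementation of that loop looks roughly like this (same simplified path and pattern as above):
    Code:
    #!/bin/bash
    # Sketch of approach 2: only search the lines added since the last pass.
    LOG=/var/log/maillog
    OLD=$(wc -l < "$LOG")
    while true; do
        sleep 15
        NEW=$(wc -l < "$LOG")
        [ "$NEW" -lt "$OLD" ] && OLD=0          # crude reset after log rotation
        NumLines=$((NEW - OLD))
        if [ "$NumLines" -gt 0 ] && \
           tail -n "$NumLines" "$LOG" | grep -q "SASL LOGIN authentication failed"; then
            echo "match in the last $NumLines lines"    # take action here
        fi
        OLD=$NEW
    done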

    3. The newest approach:
    Code:
    get the initial modification timestamp IT of the log file (using stat)
    while true; do
        sleep 15
        get the current modification timestamp CT of the log file (using stat again)
        if CT differs from IT
            grep the pattern against the last N lines of the log
            if found: take action
        fi
    done
    [ N is a fixed number that is reasonably large, to make sure that no log lines are missed ]

    This also works fine, except that each new cycle's search sometimes overlaps the previous one's, because the number of lines searched is fixed. The advantage of this approach is that it greps the log file only when the log has actually been modified.
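
    In runnable form (assuming GNU stat, whose -c %Y prints the modification time as seconds since the epoch):
    Code:
    #!/bin/bash
    # Sketch of approach 3: only grep when the log's mtime has changed.
    LOG=/var/log/maillog
    N=200                           # fixed window, large enough to miss nothing
    IT=$(stat -c %Y "$LOG")
    while true; do
        sleep 15
        CT=$(stat -c %Y "$LOG")
        if [ "$CT" -ne "$IT" ]; then
            tail -n "$N" "$LOG" | grep "SASL LOGIN authentication failed" \
                && echo "take action here"
            IT=$CT
        fi
    done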

    I am thinking about combining approaches 1 and 3, so that I could use the timestamp to solve the overlap problem, but that introduces a new problem: postfix's log timestamps only resolve to the second, so it is quite common for a number of lines to share the same timestamp.

    I wonder if anybody has a better solution for this task. Thanks for reading.

  2. #2
    Linux Guru Rubberman
    Join Date: Apr 2009
    Location: I can be found either 40 miles west of Chicago, in Chicago, or in a galaxy far, far away.
    Posts: 11,572
    Why not just use tail -f logfile | grep ...
    That will continuously scan new data appended to the log file and pipe that to grep. You can pipe the output of grep to some process of yours that acts upon the matched data. This would be a continuous process until you stopped it or the system was shut down.
    Code:
    tail --retry --follow=name logfile | grep pattern | process_log_results
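    One caveat: with grep in the middle of a pipeline like this, GNU grep block-buffers its output, so matches may only reach your process in delayed bursts. Its --line-buffered option flushes each matching line as soon as it is found:
    Code:
    tail --retry --follow=name logfile | grep --line-buffered pattern | process_log_results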
    Sometimes, real fast is almost as good as real time.
    Just remember, Semper Gumbi - always be flexible!

  3. #3
    Linux User
    Join Date: May 2008
    Location: NYC, moved from KS & MO
    Posts: 251
    Man! That was simple yet brilliant. I modified the method you suggested a bit and here's the example code:
    Code:
    tail -f log | while read -r line; do if echo "$line" | grep -q "pattern"; then echo "matched: $line"; fi; done
    The quoted "pattern" and the echo action will be replaced with the actual pattern and action.

    Thanks a lot, Rubberman.
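
    For completeness, here is a rough sketch of how the per-IP counting and thresholds from my first post could sit on top of this pipeline. The log format, the thresholds, and both actions are simplified placeholders (and bash 4 is assumed, for the associative array):
    Code:
    #!/bin/bash
    # Count SASL failures per source IP as lines arrive.
    declare -A FAILS                    # bash 4: ip -> failure count
    tail --retry --follow=name /var/log/maillog | while IFS= read -r line; do
        case $line in
            *"SASL LOGIN authentication failed"*)
                # pull the client IP out of "... unknown[1.2.3.4]: ..."
                ip=$(echo "$line" | sed -n 's/.*\[\([0-9.]*\)\].*/\1/p')
                [ -n "$ip" ] || continue
                FAILS[$ip]=$(( ${FAILS[$ip]:-0} + 1 ))
                if [ "${FAILS[$ip]}" -eq 15 ]; then
                    echo "15 SASL failures from $ip" \
                        | mail -s "smtp brute force alert" admin@example.com
                elif [ "${FAILS[$ip]}" -eq 30 ]; then
                    echo "$ip REJECT" >> /etc/postfix/access    # then run postmap on it
                fi
                ;;
        esac
    done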

  4. #4
    Linux Guru Rubberman
    Join Date: Apr 2009
    Location: I can be found either 40 miles west of Chicago, in Chicago, or in a galaxy far, far away.
    Posts: 11,572
    Quote Originally Posted by secondmouse View Post
    Man! That was simple yet brilliant. [...] Thanks a lot, Rubberman.
    Cool. Nice solution. I suggested the --retry option so that if the log file is missing, tail will keep trying until it comes online, and if the file is swapped out and a new log file of the same name is created, it will switch to the new file automatically. (GNU tail's -F is shorthand for --follow=name --retry.)
    Sometimes, real fast is almost as good as real time.
    Just remember, Semper Gumbi - always be flexible!

  5. #5
    Linux User
    Join Date: May 2008
    Location: NYC, moved from KS & MO
    Posts: 251
    Suggestion taken. Another useful application of your method is importing a log into a database in real time. I have used a program called MySar, which imports the squid log into its database every minute to generate up-to-the-minute Internet usage reports; it uses crontab and the timestamp of the last imported entry as a marker. I think tail -f would make the data-importing part much easier. A prototype example of this approach is:

    Code:
    tail --retry --follow=name log | while read -r line; do <import "$line" into the database>; done &
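
    As a rough illustration, assuming a MySQL table raw_log(line TEXT) and credentials in ~/.my.cnf, the import placeholder could be filled in along these lines:
    Code:
    #!/bin/bash
    # Hypothetical importer sketch -- the database and table names are made up.
    tail --retry --follow=name /var/log/squid/access.log | while IFS= read -r line; do
        esc=${line//\'/\'\'}            # naive quoting; '' escapes ' in MySQL
        mysql mysar -e "INSERT INTO raw_log (line) VALUES ('$esc');"
    done &
    Spawning one mysql process per line is expensive, though; batching the INSERTs would be the natural next step.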
