  1. #1
    Just Joined!
    Join Date
    Aug 2007
    Posts
    3

    Kernel Fingerprinting


    Hi All,

    I recently had a serious break-in on a mailserver I administer. The system had to be wiped and re-installed from scratch. On re-installation I looked at various intrusion detection systems and tools like tripwire, etc and decided to write my own. This may sound like an ambitious project but it is as much a learning exercise as anything.

    Anyway, my home-made system monitoring daemon is started via Upstart on Ubuntu 10.04 Server and constantly loops through a series of checks aimed at tamper detection - e.g. MD5 checks on itself and key system files, and monitoring running processes with ps.
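    A baseline check along these lines might look like the sketch below (the function names and the baseline format are illustrative, not the poster's actual daemon):

```python
import hashlib

def md5_of(path):
    """Hex MD5 digest of a file, read in chunks to handle large files."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def changed_files(baseline):
    """Compare files on disk against a {path: digest} baseline recorded
    on a known-clean system; returns the paths whose digest differs."""
    return [p for p, digest in baseline.items() if md5_of(p) != digest]
```

    In practice the baseline itself has to live somewhere the attacker can't rewrite (read-only media, a remote host), or the check proves nothing.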

    So far so good. However, it occurs to me that although my running-process monitor seems very good, it's only as good as the ps command. What if, for example, a more sophisticated attacker were able to compromise my system at the kernel level with some sort of rootkit and run processes that ps doesn't reveal? That possibility leads me into an area where I have no knowledge.
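    One hedge against a ps that lies is to cross-check it against /proc directly, the way tools like unhide do. A rough sketch (assuming a Linux /proc and a standard procps ps; a kernel-level rootkit can of course hide from /proc too):

```python
import os
import subprocess

def proc_pids():
    """PIDs found by listing /proc directly."""
    return {int(d) for d in os.listdir("/proc") if d.isdigit()}

def ps_pids():
    """PIDs as reported by ps, which a userland rootkit may filter."""
    out = subprocess.run(["ps", "-e", "-o", "pid="],
                         capture_output=True, text=True, check=True).stdout
    return {int(tok) for tok in out.split()}

def hidden_pids():
    """Processes visible in /proc but absent from ps output.
    Short-lived processes can race the two snapshots, so only
    alarm on PIDs that show up as hidden repeatedly."""
    return proc_pids() - ps_pids()
```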

    So my question is: given a known clean system to start with, what would I monitor in order to have any chance of detecting such an intrusion from an ordinary running program? So far the only idea I have is to monitor kernel modules with lsmod.
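    Monitoring the module list is a reasonable start. lsmod just formats /proc/modules, so a sketch could read that file directly and diff it against a snapshot taken on the known-clean system (a rootkit that hooks the kernel can hide from /proc/modules as well, so this only catches the lazier ones):

```python
def module_names(proc_modules_text):
    """Module names from /proc/modules content: first field of each line
    (the same data lsmod pretty-prints)."""
    return {line.split()[0]
            for line in proc_modules_text.splitlines() if line.strip()}

def module_changes(baseline, current):
    """(newly_loaded, unloaded) module sets relative to a clean baseline."""
    return current - baseline, baseline - current

# Typical use on a live system:
# with open("/proc/modules") as f:
#     current = module_names(f.read())
```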

    I'm not looking for watertight perfection, just a quick and dirty hack.

    Thanks

  2. #2
    Just Joined!
    Join Date
    Aug 2009
    Posts
    83
    Quote Originally Posted by prsjm3qf
    I recently had a serious break-in on a mailserver I administer.
    What was the root cause if I may ask?


    Quote Originally Posted by prsjm3qf
    I looked at various intrusion detection systems and tools like tripwire, etc and decided to write my own.
    For what particular reasons? Or in other words: why re-invent the wheel (Samhain, AIDE, OSSEC HIDS, et cetera)?


    Quote Originally Posted by prsjm3qf
    my running process monitor seems very good it's only as good as the ps command. What if, for example, a more sophisticated attacker were able to compromise my system at the kernel level with some sort of rootkit and run processes that ps doesn't reveal? (..) given a known clean system to start with, what would I monitor in order to have any chance of detecting such an intrusion (..)
    These days the majority of (successful) attacks:
    - (still) are the result of a lack of proper hardening (little Linux, let alone admin, knowledge),
    - (still) occur due to running vulnerable versions of (web stack) software or misconfiguration,
    - do not attempt to gain root, since running as almost any service account is enough to send spam and the like,
    - are preceded by "noise" (reconnaissance),
    - could be mitigated quickly by looking at and learning from any early warnings,
    - do not use "traditional" rootkits.

    What you could conclude from this is that proper hardening (conditions which can be tested for?) should always precede deployment and that vigilance pays (grep logs for anomalous strings?). So it isn't so much binaries being replaced as it is (being allowed to) brute-force accounts, drop files in writable directories, local and remote file inclusions, SQL injections, etc. FWIW that's just what *I've* seen over the past years, so feel free to ignore it.
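    "Grep logs for anomalous strings" can be as simple as a pattern list run over the auth log; the patterns below are illustrative and would need tuning to the distribution's actual log format:

```python
import re

# Crude "early warning" patterns for an sshd/auth log; extend to taste.
SUSPECT = re.compile(r"Failed password|Invalid user|authentication failure")

def suspicious_lines(log_lines):
    """Return the lines matching any of the suspect patterns."""
    return [line for line in log_lines if SUSPECT.search(line)]
```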


    What I'd monitor are (in no particular order):
    - any change in LKMs, core system binaries and /etc,
    - any anomalies Logwatch filters,
    - fail2ban logging,
    - any increase in MAC policy violations,
    - created files in writable directories by web stack accounts,
    - sudden increase in activity (logins) or erratic behaviour (file dropping, recon) of human account owners,
    - any root account access,
    - outbound connections to ports associated with IRC, or a sudden increase in HTTP, SMTP or SSH traffic,
    - any cronjob change.
    There's more but that's about it for now.
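    The outbound-connection item on that list can be checked without trusting netstat by parsing /proc/net/tcp, where addresses and ports are hex-encoded; the watchlist below is illustrative:

```python
def remote_ports(proc_net_tcp_text):
    """Decimal remote ports parsed from /proc/net/tcp content
    (third column holds hex address:port pairs; first line is a header)."""
    ports = set()
    for line in proc_net_tcp_text.splitlines()[1:]:
        fields = line.split()
        if len(fields) > 2 and ":" in fields[2]:
            ports.add(int(fields[2].split(":")[1], 16))
    return ports

IRC_PORTS = {6667, 6697}  # common IRC ports; botnets often phone home here

def flagged_ports(proc_net_tcp_text, watchlist=IRC_PORTS):
    """Remote ports currently in use that appear on the watchlist."""
    return remote_ports(proc_net_tcp_text) & watchlist
```

    On a live box you would feed it `open("/proc/net/tcp").read()` on each pass of the monitoring loop.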

  3. #3
    Just Joined!
    Join Date
    Aug 2007
    Posts
    3
    The root cause was mainly my slackness.
    A couple of years ago I experimented with a PHP-based e-commerce system, left it running unattended and unpatched on this server, and forgot about it. The attackers uploaded and ran their code through this system, then used cron and a chat program to do their dirty deeds, running their own mini webserver and subverting a non-bash shell I didn't even know I had.
    My system was deeply involved in a criminal botnet for about 10 weeks. It all left me feeling rather foolish because the whole thing was easily detectable and easily preventable. My little tool, as it stands now, would have easily detected this penetration within seconds and alerted me immediately.

    So why re-invent the wheel?
    Basically to learn more about how wheels work and because I enjoy doing it. I spent a fascinating day tracing their activities and learning how they did what they did. As you point out, it wasn't so much a case of them breaking in as me leaving the door wide open.
    Also, I learned a very important lesson from this episode. Large, complex, standardised, commonly available systems are hard to truly secure if you don't know them at a source-code level. They have standard, commonly known weaknesses that are always being scanned for - you can patch and update but you never REALLY know. Additionally, some of my servers are vulnerable to tampering by people who have physical access to them. I want to know what's going on using my own lightweight automated tools that I understand intimately, can easily extend, adapt and re-purpose, and that any attacker, internal or external, would also need to know intimately in order to be sure of bypassing them.
    Much of my work involves using Linux in areas other than a traditional client/server, internet-facing scenario, and I need a system monitoring tool that I can combine with other non-standard functionality as needed. Not all security is client/server network security, and not all possible scenarios are covered by existing tools.
    Last edited by prsjm3qf; 04-10-2011 at 06:49 AM.

  4. #4
    Just Joined!
    Join Date
    Aug 2009
    Posts
    83
    Quote Originally Posted by prsjm3qf
    Large and complex, standardised, commonly available systems are hard to truly secure if you don't know them at a sourcecode level. (..) you can patch and update but you never REALLY know..
    With all due respect, I disagree: hardening docs for systems and services have been around for years, as have benchmarks (CIS, DISA STIG), regulatory guidelines (PCI-DSS, SOX), and locally and remotely run system checkers and vulnerability scanners (GNU Tiger, LSAT, OpenVAS, Nessus, Nikto, WebScarab, etc.). Any change in configuration, access rights, et cetera is something that can be tested to see if it enhances security or makes things less secure.


    Quote Originally Posted by prsjm3qf
    Additionally, some of my servers are vulnerable to tampering by people who have physical access to them.
    A combination of a properly hardened machine, a decent MAC (SELinux, AppArmor, grsecurity, etc.), the audit service, a logging shell (rootsh), plus an impenetrable remote syslog host may not thwart root changing things, but given the amount of logging that precedes it you'd have plenty of early warnings to act on.


    Quote Originally Posted by prsjm3qf
    Not all security is client/server network security, and not all possible scenarios are covered by existing tools.
    Interesting. Care to elaborate? Just curious.

  5. #5
    Just Joined!
    Join Date
    Aug 2007
    Posts
    3
    Well, I see what you are saying but I still partially disagree. Yes, all this stuff has been around for years and yet large organisations with full access to it all are regularly proven to be vulnerable. Codes of practice, vulnerability scanners and certification don't mean jack when you are dealing with the unknown. They merely allow someone to claim they took every reasonable care. There is ultimately always someone who knows something you don't. I think that such standardisation can in itself present vulnerabilities that come from shared knowledge and predictability.

    Having said that I'm quite happy to use any available tool that suits the situation, however for my current project such tools are (I think) largely pointless.

    My task is to safeguard intellectual property within industrial control and machine vision applications on turnkey systems where an attacker is assumed to have unrestricted physical access. On first hearing this sounds like a hopeless task and I'm not massively optimistic. But I do have one angle that may give me an opportunity to inconvenience an attacker. I can probably arrange it so that to gain access to these algorithms the original system and specific programs on it must be run. I may be able to ensure that the only way to steal this intellectual property is directly from the memory of a running computer. If this is the case then running intrusion detection and process monitoring code that is tightly integrated with the valuable algorithms as they execute may offer some opportunity for security on a computer totally owned by a malicious party.

    I'm not using intrusion detection with the intention of preventing intrusion and I'm not trying to prevent access because I can't. I'm trying to stop certain parts of specially designed executables from ever executing on a compromised system.

    In this scenario standard tools are useless because they will be totally owned by any attacker and he can do whatever he likes with them. But I'm not trying to preserve the integrity of the OS, in fact I don't really care about it directly, it is of no value in itself.

    There are a lot of big ifs, as I'm sure you realise.

  6. #6
    Just Joined!
    Join Date
    Aug 2009
    Posts
    83
    Quote Originally Posted by prsjm3qf
    My task is to safeguard intellectual property within industrial control and machine vision applications on turnkey systems where an attacker is assumed to have unrestricted physical access.
    Ouch. Indeed, deploying regular tools here wouldn't make sense at all. As guarding intellectual property (IP) in this way isn't my forte, I'll refrain from making further suggestions. I do hope, though, that in time you'll have your setup tested by skilled penetration testers. It may cost a buck, but it would cost more if competitors got hold of parts of your company's IP. Good luck with it!

  7. #7
    Linux Guru Rubberman's Avatar
    Join Date
    Apr 2009
    Location
    I can be found either 40 miles west of Chicago, in Chicago, or in a galaxy far, far away.
    Posts
    11,448
    Well prsjm3qf, as they say, you live and learn! Your experience is so common that it could be used as a textbook case. In any event, I would recommend that you write up your experience as a magazine article and submit it to one of the online rags such as Network World, Computer World, or Linux.com. You will help inform a lot of others about unintended vulnerabilities, as well as how to deal with them. Thanks for your posting - it is an eye-opener for many, I am sure! You might even get paid for your efforts!
    Sometimes, real fast is almost as good as real time.
    Just remember, Semper Gumbi - always be flexible!
