  1. #1

    Document Linux server configuration


    We have a product for documenting network systems such as Windows, SQL Server, Exchange, etc., and we want to be able to document Unix/Linux servers too.

    I'm having trouble finding any kind of common standard/API and wanted to know what people thought. The information I'm looking for is:

    Configuration Files
    Server Hardware, manufacturer, model
    Serial Number
    Installed RAM
    Hard Disks
    Installed Software
    Configured users and groups
    Network Interfaces

    Looking at the APIs available, it's pretty difficult to get this information reliably, particularly for installed software and network interfaces.

    dmidecode seems to get server serial numbers, but it isn't installed by default, so it would be a prerequisite. I'm also not sure whether it's available on all Unix platforms.

    This also means parsing the returned text, which is quite fragile. Most examples on the net seem to grep for strings like "Model"; what happens on a non-English installation? I assume that wouldn't work.
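    A locale-independent alternative (a sketch; the sysfs paths below are standard on modern Linux kernels but not on other Unixes, and product_serial is usually readable by root only):

```shell
# Read DMI fields straight from sysfs instead of parsing dmidecode
# output; the file names are fixed, so there is no localized label
# like "Model:" to grep for. Missing/unreadable fields print "unknown".
dmi_field() {
  f="/sys/class/dmi/id/$1"
  if [ -r "$f" ]; then cat "$f"; else echo "unknown"; fi
}

echo "manufacturer: $(dmi_field sys_vendor)"
echo "model:        $(dmi_field product_name)"
echo "serial:       $(dmi_field product_serial)"
```

    Where dmidecode is installed, "dmidecode -s system-serial-number" likewise prints just the bare value, with no label to parse.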

    Reading /etc directly is great for picking up configuration files. It seems any user has read access to most of /etc, which makes things easy. I was hoping Linux would write more hardware information to /proc or /sys so I could just read it from there.

    Many Thanks,


  2. #2
    Trusted Penguin Irithori
    Join Date: May 2009

    As you have experienced, Linux/*BSD/Solaris/etc. are not uniform.
    They are always evolving, and imho this is ultimately a good thing.
    Just not for your use case.

    Let's go through your list:
    - Server Hardware, manufacturer, model
    - Serial Number
    - Installed RAM
    - Hard Disks
    - Network Interfaces

    lshw could be used.

    This produces a nice human-readable HTML page:
    lshw -html > output.html
    Similarly, this is better for further processing:
    lshw -json
    - Installed software
    Always ask the package manager.

    rpm/yum on Fedora/CentOS/RedHat/Scientific Linux,
    dpkg/aptitude on a Debian/Ubuntu-based machine.
    "rpm -qa" or "dpkg -l" are the most basic commands to get a package list.
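    The distro check itself can be a small sketch (only the two package-manager families above are handled; everything else is left out on purpose):

```shell
# Pick the native package-listing command at runtime.
if command -v dpkg >/dev/null 2>&1; then
  pkglist=$(dpkg -l)
elif command -v rpm >/dev/null 2>&1; then
  pkglist=$(rpm -qa)
else
  pkglist=""
  echo "no supported package manager found" >&2
fi

# e.g. report how many entries were found
printf '%s\n' "$pkglist" | wc -l
```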

    My 2c, or <small rant>:
    The usual approach from commercial vendors is to offer a behemoth (sometimes several hundred megabytes) of a tar.gz with a suspicious "", which is a big red minus in my book.
    - There are a lot of unnecessary, redundant and sometimes blocking bits and pieces in it:
    Multiple JREs, statically compiled binaries for all platforms, pre-configured special-purpose Apaches, etc.
    - The helper software (JRE, Apache, etc.) is often hopelessly outdated.
    - It is hard to tell what it does.
    I do not want it to distribute its files all over the system, shotgun-style.
    - These usually try to figure out which system they are on and take "smart" actions accordingly.
    a) The detection is seldom reliable.
    b) No, I do not want software X to mess with e.g. sysctl or other parts of the system config.
    Rather, tell me the requirements of the software so that I can judge and define the config myself:
    software X might not be the only app running on the host,
    and a small VM may need different values than an 8-core physical machine.

    Imho software *NEEDS* to be packaged and under the *native* package manager control, for various reasons:
    - install/update/removing is done via standard tools and procedures
    - only reliable way to be able to deploy hundreds or thousands of servers with the very same software
    - software dependencies are sorted out and will be installed if necessary
    - the package only contains what is needed for *this* distribution and the files/dirs are (hopefully) in the right places. See here for details: Filesystem Hierarchy Standard
    - easy to investigate what is where and why:
    rpm -qf /bin/ls
    rpm -qivl coreutils
    rpm -q --scripts coreutils

    In consequence, you might want to consider building standard deb/rpms of your software for each distribution you want to support.
    An rpm/deb can have a dependency on other rpms/debs.
    This solves your "lshw might not be available" problem.

    Of course, the devil is in the details:
    - On {Ubuntu,Fedora,Debian}, lshw is available via the standard repositories.
    So just require it as a dependency in your package and that's it.
    - On {CentOS,RedHat,Scientific Linux} {5,6} it is not.
    For those distributions, a way would be to package it yourself (which is possible as it is GPL v2, but you need to comply with the rules for distributing open source).

    </small rant>

    - Configured users and groups
    This is a bit tricky, because potential additional user/group/password databases like ldap depend on (correct) configuration.
    /etc/{passwd,shadow,group} are a relatively safe fallback, though.

    In general, the "getent" command should help.
    It is part of the glibc-common package and as such most probably available.
    getent passwd
    getent group
    getent shadow
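    For example, to pull out just the regular accounts (a sketch; the UID >= 1000 cutoff is an assumption, as some older distributions start regular users at 500):

```shell
# Enumerate accounts via getent, which consults whatever databases
# NSS is configured for (files, LDAP, NIS, ...); fall back to
# /etc/passwd on systems without getent.
if command -v getent >/dev/null 2>&1; then
  accounts=$(getent passwd)
else
  accounts=$(cat /etc/passwd)
fi

# Print "regular" users only (the UID cutoff is an assumption).
printf '%s\n' "$accounts" | awk -F: '$3 >= 1000 {print $1 " uid=" $3}'
```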

    Which brings us to the last point:
    - Configuration Files
    The approach of copying config files via SFTP and a regular user account will not work:
    a) Some files are not readable by a regular user, and rightly so. Think about private keys, pre-shared keys, passwords, etc.
    b) Even within one distribution, there are literally thousands of ways to configure a system.
    e.g. config can be in /etc/apache or /etc/apache2 or /etc/http or /etc/nginx or..
    Nowadays, config files can include other config files, for various reasons:
    - e.g. I have a bacula backup system with several hundred clients. I decided to have the "Backup Job" description per host in a file each.
    Another sysadmin might not do so or solve it by groups.
    - Another example: There is /etc/cron.d, which essentially extends /etc/crontab. However, in /etc/cron.d you just place a file for a new cronjob.
    You do not have to edit /etc/crontab. This difference makes it way easier to deploy specific crons to group A and other crons to group B.
    c) Configs do not necessarily need to be in /etc.
    e.g. the Postgres DB has its config in its datadir, and the datadir is machine-dependent.
    d) Apart from system config, there is also user-specific config. This is usually done via dotfiles in home directories. Oracle is an example of that.
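    One way to sidestep some of that guesswork, at least for packaged software, is to ask the package manager which files a package declares as configuration. A sketch, covering only the dpkg and rpm families; "bash" is used purely as a package that is almost certainly installed:

```shell
pkg=bash   # illustrative package; substitute the one you care about
conf=""
if command -v dpkg-query >/dev/null 2>&1; then
  # ${Conffiles} lines look like " /path/to/file <md5sum>"
  conf=$(dpkg-query -W -f='${Conffiles}\n' "$pkg" 2>/dev/null | awk 'NF {print $1}')
elif command -v rpm >/dev/null 2>&1; then
  conf=$(rpm -qc "$pkg" 2>/dev/null)
fi
printf '%s\n' "$conf"
```

    This only sees files the package itself registered, of course; locally added includes (the cron.d and bacula cases above) stay invisible to it.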

    Imho, it is better to turn that around:
    I wouldn't try to figure out a server config after the fact (after the deploy).
    One approach is to define the config *beforehand* and then just deploy the machines according to such manifests.
    puppet, cfengine and chef are ways to accomplish that.

    A manifest is an abstract way to describe the desired state of a machine.
    And e.g. puppet will make sure that this state is reached.
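    A hypothetical minimal Puppet manifest, just for flavour (the ntp package/file/service names are illustrative, not from this thread):

```puppet
# Declare the desired state; Puppet converges the machine to it.
package { 'ntp':
  ensure => installed,
}

file { '/etc/ntp.conf':
  ensure  => file,
  source  => 'puppet:///modules/ntp/ntp.conf',
  require => Package['ntp'],
}

service { 'ntpd':
  ensure    => running,
  enable    => true,
  subscribe => File['/etc/ntp.conf'],
}
```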

    The modules and manifests would of course be files in a repository of a version control system.
    So your software would just need to be able to talk to e.g. git, svn, etc.

    I know, easier said than done,
    especially as not all of your future customers will have such a config/system management tool in place.

    Good luck
    You must always face the curtain with a bow.

  3. #3

    Thanks for the reply.

    I wanted to avoid just storing human-readable HTML output, as we want to be able to report against it etc.

    www.centrel-solutions.com/XIAConfiguration

    I guess parsing /etc/passwd should be suitable to record "local users" then perhaps...

    Regarding configuration files, I was thinking I could configure a file with a name and possible locations, such as the following. At least the product could then detect changes within the configuration files.

    Samba Configuration
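    That lookup could be as simple as this sketch (the candidate paths are illustrative, not an exhaustive list):

```shell
# Probe a configured list of candidate locations for a named config
# (Samba here) and report the first one that exists.
found=""
for p in /etc/samba/smb.conf /etc/smb.conf /usr/local/samba/lib/smb.conf; do
  if [ -f "$p" ]; then found="$p"; break; fi
done
echo "Samba configuration: ${found:-not found}"
```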

    What a dreadful nightmare. It's such a shame the Unix operating systems never adopted something like the VMware Web Service API for system management. It would make life so much easier.

    Thanks again I'll let you know how I get on...


  5. #4
    Quote Originally Posted by davidhomer View Post
    What a dreadful nightmare. It's such a shame the Unix operating systems never adopted something like the VMware Web Service API for system management. It would make life so much easier.
    And do all other virtualisation implementations support the same API?
    That would make life easier too...

  6. #5
    No, Hyper-V uses WMI, and Citrix uses... er, something else; I think it's Web Services based.

    Point is that at least they have APIs.

