
Integrity Checkers


Integrity checkers are very useful for finding Trojan programs and backdoors such as rootkits. Theoretically they are also useful for maintenance, but in reality this goal is pretty difficult to achieve. Please note that many existing programs can be used to build a poor man's integrity checker. Tar with its -d option is an obvious candidate (star is actually a better tool), but even ls -l combined with diff is extremely useful. Integrity checkers written in Perl (or Python) are more flexible and thus have an edge over C-based tools like Tripwire. Mixed Tcl+C tools are probably the most flexible approach, but both Perl and Python are fine "as is". Perl-based checkers have been moved to a separate page: Perl-based Integrity checkers. Fcheck is a fairly simple Perl script that works on both Unix and Windows.
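For example, a poor man's integrity check with tar's -d (--diff) option takes only a few lines. The /tmp paths and file names below are purely illustrative:

```shell
# Poor man's integrity checker: snapshot a tree into a tar archive,
# then later compare the filesystem against the archive with -d.
mkdir -p /tmp/tar-demo && cd /tmp/tar-demo
mkdir -p tree
printf 'v1\n' > tree/app.conf

# Take the baseline snapshot (contents plus metadata)
tar -cf baseline.tar tree

# Simulate a change (different size, so the comparison cannot miss it)
printf 'version-two\n' > tree/app.conf

# -d (--diff) compares the archive against the filesystem and reports
# every file whose size, mtime, or contents differ
tar -df baseline.tar || echo "changes detected"
```

The same idea works with ls -lR output saved to a file and compared later with diff; tar just has the advantage of also keeping the original file contents for comparison.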

The most popular integrity checker for Unix, Tripwire, never outgrew its origins as a student project. The free version of Tripwire can realistically be used only in a limited way, on static servers such as appliances. The commercial version is a little more flexible, but not by much, and it is overly complex for the rather primitive functionality it provides. Moreover, the introduction of a central console for all Tripwire instances created an additional security risk.

The fairy tales that Tripwire can detect or prevent host intrusions as a standalone application are not credible. Theoretically it can, but the tool itself is so inflexible that this largely defeats its purpose. As a result its reports are by and large ignored. This is a classic "crying wolf" situation, and it is typical of many other integrity checkers. On Linux, in most cases you may have more success with RPM-based checking than with Tripwire.

If you really have some time to spend you can install Tripwire on a regular server; just do not try to write an all-encompassing rulebase, as this is a proven road to nowhere ;-). Older versions of Tripwire are strictly file-oriented: you need to list every file and directory you want checked. Newer versions (commercial version 4.0 and later) permit specifying all files in a directory (better late than never ;-)

Still, the best policy when using Tripwire is to limit yourself to a few critical system files (for example, those targeted by rootkits) plus several critical configuration files. Control of configuration files is actually the more important part, and here Tripwire, while weak, can at least provide some return on investment. If you are thinking about using Tripwire for tracking changes, please think again: it is possible by writing custom scripts, but there are better tools for that purpose. One problem is that unless you compare against a baseline, you are merely comparing against a set of attributes. Also, if you monitor both a directory and a file in that directory, Tripwire will complain twice about each change. Older versions had a -loosedir option which prevented Tripwire from complaining about directory modification-time updates and could filter out some of this noise; later it became a configuration option.
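Such a limited rulebase might look like the following sketch, loosely modeled on the twpol.txt syntax of open-source Tripwire. The rule names, severities, and file list are illustrative, not a recommended policy:

```text
# Illustrative sketch of a deliberately small Tripwire policy.
# $(SEC_CRIT) is the stock "critical files" property mask from twpol.txt.
(
  rulename = "Critical configuration files",
  severity = 100
)
{
  /etc/passwd        -> $(SEC_CRIT) ;
  /etc/shadow        -> $(SEC_CRIT) ;
  /etc/inetd.conf    -> $(SEC_CRIT) ;
}

(
  rulename = "Binaries commonly replaced by rootkits",
  severity = 100
)
{
  /bin/ls            -> $(SEC_CRIT) ;
  /bin/ps            -> $(SEC_CRIT) ;
  /usr/bin/netstat   -> $(SEC_CRIT) ;
}
```

Check the twpol.txt shipped with your version for the exact variable names; the point is the small, focused file list, not the specific entries.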

The real challenge for integrity checkers is not ever-fancier change detection (the path taken by many useless security tools) but intelligent reporting of changes. That is where most of these programs fail, and that is why they never gained sufficient traction despite an extremely promising idea.



Old News ;-)


[Dec 15, 2010] Another File Integrity Checker 2.16-1

Some bugs were fixed in checksum computing.

[Mar 31, 2010] Samhain Labs file integrity checkers

Remarks on individual programs

The table shows results (averaged over 5 runs each) for initializing the baseline database and running a check. All data are for a 1.2 GB dataset.

File integrity checking is essentially I/O-bound, i.e. most of the time is spent waiting for data from the disk, so most of the tested programs run at similar speeds.

[Mar 31, 2010] Yet Another File Integrity Checker

yafic is Yet Another File Integrity Checker, similar to programs like Tripwire, integrit, and AIDE. I created yafic because no existing file integrity checker did all the things I wanted. I wanted something fast and simple, yet flexible enough to be used in different situations. yafic uses NIST's SHA-1 hash algorithm to fingerprint files. The latest stable version of yafic is 1.2.2. You will need Berkeley DB 1.85 from Sleepycat Software to build it. Note that *BSD and most distributions of Linux seem to have it by default.

In case you're wondering, I couldn't think of any good names. :)

yafic's feature set is relatively small compared to other integrity checkers. It gets done what I need done, so it's enough for me. If you like simple, you just might like yafic. :)

[Mar 31, 2010] integrit File integrity checker, like Tripwire (installed binaries and support files)

    Sun Mar  2 19:56:04 2008              0 etc/
    Sun Mar  2 19:56:04 2008              0 etc/postinstall/
    Sun Mar  2 19:56:04 2008            220 etc/postinstall/
    Sun Mar  2 19:55:10 2008              0 usr/
    Sun Mar  2 19:55:10 2008              0 usr/share/
    Sun Mar  2 19:55:10 2008              0 usr/share/doc/
    Sun Mar  2 19:55:10 2008              0 usr/share/doc/integrit-4.1/
    Sun Mar  2 19:55:12 2008            104 usr/share/doc/integrit-4.1/AUTHORS
    Sun Mar  2 19:55:12 2008            941 usr/share/doc/integrit-4.1/HACKING
    Sun Mar  2 19:55:12 2008          18007 usr/share/doc/integrit-4.1/LICENSE
    Sun Mar  2 19:55:12 2008            287 usr/share/doc/integrit-4.1/NEWS
    Sun Mar  2 19:55:12 2008          12748 usr/share/doc/integrit-4.1/README
    Sun Mar  2 19:55:14 2008           4809 usr/share/doc/integrit-4.1/ChangeLog
    Sun Mar  2 19:55:14 2008           1287 usr/share/doc/integrit-4.1/todo.txt
    Fri Feb 29 16:27:18 2008              0 usr/share/doc/integrit-4.1/examples/
    Sat Jun  2 21:41:38 2007           1669 usr/share/doc/integrit-4.1/examples/integrit_check
    Sat Jun  2 21:41:38 2007           1626 usr/share/doc/integrit-4.1/examples/install_db
    Sat Jun  2 21:41:38 2007            376 usr/share/doc/integrit-4.1/examples/README
    Sat Jun  2 21:41:38 2007            546 usr/share/doc/integrit-4.1/examples/crontab
    Sat Jun  2 21:41:38 2007            646 usr/share/doc/integrit-4.1/examples/src.conf
    Sat Jun  2 21:41:38 2007            612 usr/share/doc/integrit-4.1/examples/usr.conf
    Sat Jun  2 21:41:38 2007           8846 usr/share/doc/integrit-4.1/examples/viewreport
    Sat Jun  2 21:41:38 2007           3804 usr/share/doc/integrit-4.1/examples/integrit-run.c
    Sat Jun  2 21:41:38 2007           2380 usr/share/doc/integrit-4.1/examples/root.conf
    Sun Mar  2 19:55:24 2008              0 usr/share/doc/Cygwin/
    Sun Mar  2 19:55:24 2008           1570 usr/share/doc/Cygwin/integrit-4.1.README
    Sun Mar  2 19:55:22 2008              0 usr/share/info/
    Sun Mar  2 19:55:22 2008          50464 usr/share/info/
    Sun Mar  2 19:55:50 2008              0 usr/share/man/
    Sun Mar  2 19:55:50 2008              0 usr/share/man/man1/
    Sun Mar  2 19:55:50 2008           1085 usr/share/man/man1/i-ls.1.gz
    Sun Mar  2 19:55:50 2008           1064 usr/share/man/man1/i-viewdb.1.gz
    Sun Mar  2 19:55:50 2008           1310 usr/share/man/man1/integrit.1.gz
    Sun Mar  2 19:55:52 2008              0 usr/sbin/
    Sun Mar  2 19:55:52 2008           7168 usr/sbin/i-viewdb.exe
    Sun Mar  2 19:55:56 2008          48640 usr/sbin/integrit.exe
    Sun Mar  2 19:55:54 2008              0 usr/bin/
    Sun Mar  2 19:55:54 2008          17920 usr/bin/i-ls.exe

[Mar 31, 2010] Monitoring your filesystem for unauthorised change

If you're running a stable server and are worried about an intruder replacing your system binaries with corrupted versions, you should be using a filesystem integrity checker.

There are several available as part of Debian's stable and unstable archive.

The most widely known integrity checker is tripwire, but several other packages are available to do the same job including integrit and aide.

All these tools work in the same way, and it's mostly a matter of personal preference which one you choose to install.

When they are first installed you use them to build up a database of all your important files, and a corresponding checksum of their contents.

Later you can recompute the checksums of the binaries and compare them against those you stored in your initial database. This will allow you to detect any binaries, or files, on your local filesystem which have been modified.
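The build-then-verify cycle described above can be sketched with standard tools. The paths under /tmp/fia-demo are purely illustrative, and sha1sum's check mode stands in for a real checker's database:

```shell
# Minimal baseline-and-verify cycle using coreutils sha1sum.
mkdir -p /tmp/fia-demo/bin && cd /tmp/fia-demo
printf 'binary one' > bin/prog1
printf 'binary two' > bin/prog2

# Build the baseline "database": one checksum line per file
find bin -type f -exec sha1sum {} + > baseline.sha1

# Simulate tampering with one binary
printf 'trojaned' > bin/prog1

# Verify: -c re-hashes each file and compares it with the stored baseline
sha1sum -c baseline.sha1 || echo "integrity check failed"
```

Real integrity checkers add metadata (permissions, ownership, timestamps) on top of this, but the comparison logic is the same.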

It is important that you store the database of the filesystem securely - such that any malicious intruder couldn't update it to hide their tracks.

A good way of doing this if you have physical access is to burn it onto CD-ROM as this allows you to mount it without the ability to write to it.

However each time you legitimately update your system you must rebuild your pristine database - so this might get expensive fast.

As a simple guide we'll walk through installing both integrit and aide, the latter seems to be the most popular integrity checker available.

Installing integrit is very straightforward. Download and install it via apt-get and you'll be presented with a couple of simple questions.

apt-get install integrit
When it is installed you must be careful not to tell the software to update or create its database, because we've not configured it yet. All the other questions may be safely answered with the defaults.

Once installed you'll find a configuration file /etc/integrit/integrit.conf.

This configuration file contains a list of directories, or paths, which are checked.

Every file beneath the named directories will be checksummed using the SHA-1 hash, and its details will be stored in the integrit database located at /var/lib/integrit.

The configuration file contains a list of example directories along with a brief explanation of how to add new entries.

A minimal configuration for my machine looks like this:

# Global settings

# Entries prefixed with '!' are ignored, because we don't care if their contents are modified.
Once this is setup you can create the initial database:

integrit -C /etc/integrit/integrit.conf -u
This saves the current state of the system into the file /var/lib/integrit/current.cdb. We need to move this into place as the known state, and also take a copy offsite.

mv /var/lib/integrit/current.cdb /var/lib/integrit/known.cdb
Mailing a copy of this file offsite to a safe location is useful as it allows you to test again later - even if you think your database might have been modified by a local user.
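One low-tech way to implement the offsite verification described here is to checksum the database itself and keep only that one line elsewhere. The paths below are illustrative stand-ins for the real database:

```shell
# Record the checksum of the baseline database itself, so the database
# can be verified before it is trusted for a check run.
mkdir -p /tmp/dbsum-demo && cd /tmp/dbsum-demo
printf 'pretend integrit database' > known.cdb   # stands in for known.cdb

# This single line is what you mail or copy off-host
md5sum known.cdb > known.cdb.md5

# Later, before running a check, verify the local database first
md5sum -c known.cdb.md5 && echo "database unmodified"
```

This does not protect against a tampered checker binary (see the discussion below the article), but it is much cheaper than re-burning a CD-ROM after every legitimate update.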

To check the filesystem for changes we can now run:

integrit -C /etc/integrit/integrit.conf -c
As you've just created a pristine database you should see no errors.

To test that the system is working run:

touch /bin/ls # Modify a file
integrit -C /etc/integrit/integrit.conf -c
This time you should see an error message:

changed: /bin/ls m(20020318-151001:20041130-142618) c(20031107-102841:20041130-142618)
(Here m is the file's modification time and c its inode change time, ctime, each shown as an old:new pair of timestamps.)

The Debian package will mail you every day if files have changed - and even if they haven't. There is a cron job setup by the file /etc/cron.daily/integrit. You can edit that file if you only wish to see an email in the case of differences, the comments explain how to do so:

# * UNCOMMENT the two following lines marked with `# !' if you don't
# * want to receive reports if no mismatches were found

# ! if [ "$(echo "$output" | egrep -v '^integrit: ')" ]; then
message=$(echo "$message" && echo "$output")
# ! fi
This overview really showed you the kind of thing that you will have to do with any integrity checking system:

Create an initial database.
Move it somewhere safe. (So that it can be used if you don't trust the local copy).
Run regular checks of the current system against that database.
All the systems we've mentioned so far (aide, integrit, and tripwire) use exactly this mode of operation.

Looking at aide next we can work through the same example.

First of all install it, and when prompted decline the opportunity to create the initial database:

apt-get install aide
aide is configured by the file /etc/aide/aide.conf and the process is mostly the same as that shown for integrit already.

The configuration file defines a list of checks, such as the following:

Binlib = p+i+n+u+g+s+b+m+c+md5+sha1
Here we see the check called Binlib is defined as a combination of different tests from the following table:

# Here are all the things we can check - these are the default rules
#p: permissions
#i: inode
#n: number of links
#u: user
#g: group
#s: size
#b: block count
#m: mtime
#a: atime
#c: ctime
#S: check for growing size
#md5: md5 checksum
#sha1: sha1 checksum
#rmd160: rmd160 checksum
#tiger: tiger checksum
#R: p+i+n+u+g+s+m+c+md5
#L: p+i+n+u+g
#E: Empty group
#>: Growing logfile p+u+g+i+n+S
There are a number of tests defined for different purposes, such as ConfFiles designed to cover things in /etc, Logs for logfiles, etc.

Then these tests are applied to a group of directories.

So my previous example covering most of the important directories looks like this for aide:

# Binaries
/bin Binlib
/sbin Binlib
/usr/bin Binlib
/usr/sbin Binlib
/usr/local/bin Binlib
/usr/local/sbin Binlib
/usr/games Binlib

# Libraries
/lib Binlib
/usr/lib Binlib
/usr/local/lib Binlib

# Logfiles
/var/log$ StaticDir
/var/log Logs

# Things to ignore
Once this is done you can initialise the database with the following command:

aideinit

The database, by default, will be placed in /var/lib/aide/. If you're happy with the output you can copy it to the real location for running tests against:

mv /var/lib/aide/ /var/lib/aide/aide.db
As before we'll modify a file and then run a test:

touch /bin/ls
aide --check
This gives the following output:

AIDE found differences between database and filesystem!!
Start timestamp: 2004-11-30 14:39:45
Total number of files=11247,added files=0,removed files=0,changed files=1

Changed files:

Detailed information about changes:

File: /bin/ls
Mtime : 2004-11-30 14:26:18 , 2004-11-30 14:39:39
Ctime : 2004-11-30 14:26:18 , 2004-11-30 14:39:39
As you can see this is more readable than the example we showed previously with integrit, but this is offset by the trickier setup required.

If you're happy to acknowledge the change, just re-run the aideinit command and move the new database into the live location; this will cause future checks to be error-free.

I hope this was a useful comparison of two local filesystem integrity checkers.



In the article you mention

>> It is important that you store the database of the filesystem securely - such that any malicious intruder couldn't update it to hide their tracks.

>> A good way of doing this if you have physical access is to burn it onto CD-ROM as this allows you to mount it without the ability to write to it.

Wouldn't just storing the md5sum of the database be good enough? I just store this md5sum on the desktop of one of my other machines (not the server that aide is running on). This way when I ssh in I do an md5sum on the aide database make sure it matches then run aide -check.

Second: the one problem that you don't mention is that while storing this db elsewhere keeps people from modifying the database to hide their tracks, couldn't they just modify the aide binary instead, so that it gives the appropriate answers?


Yes, if they can modify the binary then you would not spot anything; the perfect solution is to burn the binary onto the CD-ROM too, and use that for the checking.

Or boot from a known-good kernel, Knoppix etc.

Then any kernel rootkits will be unable to disguise the tampering.


Anonymous :

Mailing a copy of this file offsite to a safe location is useful as it allows you to test again later - even if you think your database might have been modified by a local user.

Won't simply doing md5sum "$known" and storing the checksum securely suffice?


[Jan 23, 2010] Samhain Labs file integrity checkers

Interesting but semi-useless study of integrity checkers, with an equally semi-useless comparison table ;-)

What is the focus of this study ?

There are many reviews which focus on features and tell colourful stories of guys sitting in a server room and watching alerts whooshing over the terminal screen. This one is hopefully different.

This study compares eight free (open-source) host/file integrity checkers (file integrity monitoring programs) with respect to the implementation of the core functionality, i.e. questions like:

The results presented here are based on test runs, and sometimes also on investigation of the source code. All tests were performed on an Ubuntu Linux system (6.06 for most programs, 9.04 for OSSEC, which was tested at the end of 2009). In general, tests were performed only with console logging (stdout/stderr). Client/server systems (Osiris, OSSEC, Samhain) were tested in a client/server setup with client and server on the same machine.

Thus, while some "features" of these programs are mentioned that may be of interest for usability, the focus of the study was on testing the scanner's functionality, not on listing and/or comparing their features.

Explanation of table rows

Version
The version number of the file integrity scanner.
Date
The release date of the file integrity scanner. For PGP-signed source code, this is the date of the PGP signature; otherwise the date listed on the web site, or in the source.
PGP signature
Is the distributed source code PGP-signed ?

If there is no signature, it may be possible to put a trojan into the source code (this has happened in the past with several high-profile security-related programs)!

Language
The programming language of the file integrity scanner.
Required
Requirements (other than a compiler or interpreter).
Log Options
What channels are supported for logging?
DB sign/crypt
Does the scanner support signed or encrypted baseline databases?
Conf sign/crypt
Does the scanner support signed or encrypted configuration files?
Name Expansion
Does the scanner support expansion of file names (shell-style patterns or regular expressions) in the configuration file?
Duplicate Path
Does the scanner check the configuration file for duplicate entries of files/directories (possibly with a different checking policy for the duplicate)? Strict checking of the configuration file can help to avoid user errors.
Max Path
Can the scanner handle a file whose path has the maximum allowed length (4095 on Linux)?
Root Inode
Can the scanner handle the "/" directory inode? This is the file with the shortest possible path, and also the only one with a "/" in its filename, so it may expose programming bugs (and you do want to check that inode).
Non-printable
Can the scanner handle filenames with weird or non-printable characters? And if it can handle them internally, can it report results in a useful way? The checked filenames were:

bash$ ls -l --quoting-style=c /
-rwxr-xr-x 1 root root 0 Feb 11 20:16 "\002\002\002\002"

As "\002" is non-printable, incorrect reporting will result in a report about removal of the root directory ("/"), if this file is removed ...

bash$ ls -l --quoting-style=c /opt
-rwxr-xr-x 1 root root 0 Feb 11 19:51 "this is_a_love_song\b\b\b\b\b\b\b\b\bwrong_filename"

As "\b" is backspace, incorrect reporting will result in a report for the non-existing file "this is_a_wrong_filename"

bash$ ls -l --quoting-style=c /opt
-rwxr-xr-x 1 root root 0 Feb 11 19:51 "this_has\n_a_newline"

If filenames are not properly encoded, the newline may easily corrupt the baseline database.

No User
Can the scanner handle files owned by a non-existing user (UID with no entry in /etc/passwd)?
No Group
Can the scanner handle files owned by a non-existing group (GID with no entry in /etc/group)?
Lock
Can the scanner handle a file on which another process has acquired a mandatory (kernel-enforced) lock (yes, Linux has that kind of lock)? It is possible to open() such a file for reading, but the read() itself will block, so the scanner will hang indefinitely unless precautions are taken. On Linux, mandatory locking requires a special mount option, thus it cannot usually be enforced by unprivileged users.
Race
File integrity scanners first lstat() a file to determine whether it is a regular file, then open() it to read it for checksumming. In between these two calls, a user with write access to the directory may replace the file with a named pipe. As a result, the open() call will block and the scanner may hang indefinitely, unless precautions are taken.
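The named-pipe trap is easy to reproduce. The sketch below (with illustrative /tmp paths) shows a naive checksummer blocking on a FIFO, and one possible precaution: bounding the operation with timeout(1), roughly what a careful scanner does internally with non-blocking opens or alarms.

```shell
# Demonstrate the lstat()/open() trap: checksumming a named pipe blocks.
mkdir -p /tmp/race-demo && cd /tmp/race-demo

# The "attacker" swaps a regular file for a named pipe between the
# scanner's lstat() and its open()
mkfifo trap.pipe

# A naive checksummer would block forever at open(); bounding the
# operation with timeout(1) turns the hang into a reportable failure
timeout 1 sha1sum trap.pipe || echo "scan aborted: timed out"
```

Exit status 124 from timeout(1) means the command was killed after the deadline, which is exactly the "hangs" behavior the table rows above probe for.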
/proc
Is the scanner able to scan the /proc directory? On Linux, at least some files in /proc are writeable and can be used to configure the kernel at runtime, so you may want to check these files. However, files in /proc may be listed with zero filesize, even if you can read plenty of data from them. Almost all scanners "optimize" by not checksumming zero-length files, which is incorrect for the Linux /proc filesystem. Additionally, some files may block on an attempt to read from them.
/dev
Does the scanner have problems with the /dev directory? Does it allow checking device files (e.g. for correct permissions)?
Missing/new files
Can the scanner report on missing (deleted) or newly created files?

[Oct 23, 2008] Another File Integrity Checker 2.12 by Gerbier Eric

Overly complex. The author tried to do too much, and the complexity defeated him. Because of this complexity the tool is not modifiable by a regular sysadmin.

About: afick is another file integrity checker, designed to be fast and fully portable between Unix and Windows platforms. It works by first creating a database that represents a snapshot of the most essential parts of your computer system. You can then run the script to discover all modifications made since the snapshot was taken (i.e. files added, changed, or removed). The configuration syntax is very close to that of aide or tripwire, and a graphical interface is provided.

Changes: The code now works with perl 5.10. On Windows, afick_planning now sends a report instead of a summary and uses the "LINES" macro. On Unix, a new MOUNT macro allows you to use a remote database in afick_cron. Udev files were removed from scan. The tar.gz installer was recoded to display better diagnostics.

[Aug 10, 2007] Host Integrity Monitoring Best Practices for Deployment by Brian Wotring

Brian Wotring is the lead developer for the Osiris project and CTO of Host Integrity
Mar 31, 2004

This article is written with the open source host integrity applications Osiris and Samhain in mind, however the material presented is certainly not unique to these applications.

... ... ...

The basic idea behind host integrity monitoring applications is that they detect and report on change to the system. It gets most interesting when a change is unauthorized or unwanted. Much of the monitoring is focused on the file system. However, other environmental vectors can be monitored as well. For example, Samhain has the ability to search for rootkits and monitor login and logout activities. Osiris has the ability to monitor the state of loaded kernel extensions and the details of changes to the local user and group databases. Detected change is reported in the form of log files, syslog, the Windows Event Viewer, and possibly emailed to an administrator.

... ... ...

To better appreciate the role a host integrity system serves, imagine you find a new link to /etc/passwd that has been created in /tmp, a new kernel module that gets loaded without your knowledge, or a new user gets mysteriously created. How would you know if and when these types of changes occurred? There are commands that can be used to look for these happenings, but how would you know if and when to run them? What if these commands that you depend on for finding such changes were altered to hide specific information? Now, imagine you have hundreds of hosts that need to be monitored regularly to look for changes such as these.

Table One, below, compares features of some popular host integrity applications.

                          Samhain    Osiris     integrit   AIDE
Monitors files            yes        yes        yes        yes
Monitors kernel           yes        yes        no         no
Multiple administrators   no         yes        no         no
Supports modules          no         yes        no         no
License                   GPL        BSD-style  GPL        GPL
Centralized management    yes        yes        no         no
Signed databases          yes        no         no         no
Database integration      yes        no         no         no

Platforms:
Samhain: Linux, FreeBSD, AIX 4.x, HP-UX 10.20, Unixware 7.1.0, Solaris 2.6, 2.8, and Alpha/True64
Osiris: Windows NT/2k/XP, Mac OS X, Linux, Solaris, FreeBSD, OpenBSD
integrit: Linux, FreeBSD, Solaris, HP-UX, Cygwin
AIDE: Linux, FreeBSD, OpenBSD, AIX, Unixware 7.1.0, Solaris, True64, BSDi, Cygwin

Table One: a comparison of popular host integrity applications

More information on the above products can be found on their websites:

Samhain -
Osiris -

[Jan 26, 2006] Sys Admin v15, i02 File Integrity Assessment via SSH

This is a logical centralization approach. Without some kind of management tool like Tivoli, however, the suggested implementation is not scalable.

... security incident and as a host-based intrusion detection tool to help detect unauthorized file system changes (this also makes them useful monitoring tools for existing change control procedures, though that is not the focus of this article). The concept is simple: the administrator creates a configuration file that lists the critical system files and directories that the FIA tool should monitor, then uses the FIA tool to create a database that tracks common parameters of those files, such as permissions and ownerships, file size, and MAC times, along with one or more cryptographic checksums over the file contents (typically via common hashing algorithms like MD5, SHA-1, etc.). The FIA tool is then re-run periodically, and the current state of the file system is compared to the values stored for the various files in the database -- if there are any discrepancies, the files are flagged as having been modified and a report is generated.

The canonical problem with FIA tools, however, is protecting the database generated by the FIA tool, as well as the binary for the FIA tool itself, from unauthorized tampering by attackers who gain root access to the system. After all, if the attacker installs a rootkit and then updates the FIA database for the system to reflect the new state of the file system, then the administrator may be unaware of the attacker's changes. Similarly, the attacker could modify the FIA tool binary to either ignore or lie about the state of files installed by the attacker.

Lately, however, I've been experimenting with a different approach. The concept is to store FIA databases and binaries for the various systems being monitored on a central, highly secure system on a protected network -- call this the FIA management server. Periodically from cron, we run a script on the FIA management server that uses scp to copy the FIA database and binary to a given system and then runs the FIA tool on that remote system via SSH. Any report output and/or updates to the FIA database can then be pulled back to the FIA management server via scp, and all traces of the FIA tool can then be removed from the remote system.
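A minimal sketch of that cron-driven push model might look like the script below. The host name, directory layout, and the choice of integrit as the FIA tool are all illustrative assumptions, not part of the original article:

```shell
# Write out a sketch of the FIA management-server push script described
# above: copy the trusted binary and database out, run the check over
# SSH, capture the report, then remove all traces from the remote host.
cat > /tmp/fia-push.sh <<'EOF'
#!/bin/sh
HOST=web01.example.com                 # monitored host (illustrative)
TOOL=/opt/fia/bin/integrit             # trusted binary kept on this server
DB=/opt/fia/db/web01.known.cdb
CONF=/opt/fia/conf/web01.conf
REPORT=/opt/fia/reports/web01.$(date +%Y%m%d)

scp -q "$TOOL" "$DB" "$CONF" "$HOST:/tmp/" || exit 1
ssh "$HOST" '/tmp/integrit -C /tmp/web01.conf -c' > "$REPORT"
# Remove all traces of the FIA tool from the monitored host
ssh "$HOST" 'rm -f /tmp/integrit /tmp/web01.known.cdb /tmp/web01.conf'
EOF
chmod +x /tmp/fia-push.sh
sh -n /tmp/fia-push.sh && echo "script syntax OK"
```

Because the binary and database only ever live on the monitored host for the duration of one run, an attacker has a much smaller window in which to tamper with them.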

Slashdot Host Integrity Monitoring Using Osiris and Samhain

I've been using Tripwire (and Tripwire Portable) for years. Recently I have started using Samhain in its place and have been quite happy with it. Some useful features it has which Tripwire doesn't are the ability to monitor kernel system call tables for changes (a common attack vector), and to run as a daemon to alert on changes immediately. It's definitely worth a look.

Bob Bruen's review of Host Integrity Monitoring Using Osiris and Samhain by Wotring, Brian and Potter, Bruce, Syngress 2005, IEEE Cipher, E68 Sep 19, 2005

Syngress 2005.
ISBN 1-597490-18-0. $44.95. Index, three appendices.

Reviewed by Bob Bruen 09/16/05

We all know and love Tripwire, even if we still use the non-commercial version. It was one of the first, if not the first, complete tools for managing file integrity. The name comes from the alarm set off when a file is changed in some way. There are other tools that are popular and still free. This book covers two of those tools, Osiris and Samhain.

Host intrusion detection systems are not quite as widespread as network intrusion detection systems, generally because the number of hosts can be large. Each one requires time and attention, making a network approach more efficient, but not necessarily more effective. A blended approach would seem useful, and that appears to be common: the important machines on the network get good host monitoring and the less important are left to fend for themselves. The minimal blending is simply running an NIDS and some HIDS.

A still better approach might be to really integrate the two, so that there is a distributed set of HIDS with a central management system. Perhaps this could be part of the non-commercial version. Moreover, it would be nice to have a sophisticated method to manage change on a host; after all, system file changes should be thoughtfully managed anyway, and intruder changes can then be caught by the relentless change management process. Without any criticism of file integrity monitoring, it is not enough by itself, because other things happen on a system and intruder code can be hidden in places other than the disks.

Osiris came into being as a few Perl scripts, which eventually evolved into an extensive and sophisticated package. The architecture is geared toward central management with encrypted communications. The hosts naturally require a client to be installed, which is a drawback of any HIDS, but otherwise nothing is stored on the host. The central manager does the heavy lifting. This is not unlike an application which needs to be installed on every computer on the LAN, something that is done all the time, but it is still overhead.

Samhain is similar to Osiris in its architecture of client, server, and manager, but its clients initiate communication with the server, whereas the Osiris manager initiates communication with the clients, which therefore must keep a port open. The local situation will probably dictate which is the preferred method. Samhain offers the ability to run different scans on different schedules instead of running everything at one time. Both Osiris and Samhain run on Linux, BSD, and Windows, but not with exactly the same feature set or ease of configuration. This can be a bit of a problem, but it is still helpful in a heterogeneous environment.

Although the book is about Osiris and Samhain, there is a wealth of information about host integrity monitoring systems (HIMS). The advances in rootkits, intrusion techniques, and defenses require us to update what we know. Anything learned ten years ago may still be valuable, just incomplete for today's environment. I always like books that are well written and provide good information, so I recommend Wotring's work to help bring you up to date.

[May 2, 2005] The Second Commandment of system administration (NewsForge)

NewsForge takes a look at integrity checkers. "Each integrity checker is a little different, so do some research before deciding on one. There are many excellent integrity checking applications out there, but the one I recommend and prefer is called afick (Another File Integrity ChecKer).

Afick offers several advantages over integrity checkers such as Tripwire and AIDE.

The first and foremost difference is that afick is written in Perl, which gives it the advantage of speed. Afick finishes the initialization of the database that stores filesystem attributes almost a minute faster than AIDE.

Being written in Perl also means that afick is highly portable between operating systems."

A comparison of several host/file integrity checkers (scanners) By Rainer Wichmann

Table of results (alphabetical order). Columns: AIDE | FCheck | Integrit | Nabou | Osiris | Samhain | Tripwire

Version:         0.10 | 2.07.59 | 3.02 | 2.4 | 4.0.5 | 1.8.4 | 2.3.1-2
Date:            Nov 30, 2003 | May 03, 2001 | Sep 08, 2002 | Aug 30, 2004 | Sep 27, 2004 | Mar 17, 2004 | Mar 04, 2001
Language:        C | Perl | C | Perl | C | C | C++
Required:        libmhash | md5sum (or md5) | — | PARI/GP library + about 11 Perl modules | OpenSSL 0.9.6j or newer | GnuPG (only if signed config/database used) | —
Log Options:     stdout, stderr, file, file descriptor | stdout, syslog | stdout | stdout, email | central log server (email+file on server side) | stderr, email, file, pipe, syslog, RDBMS, central log server, prelude, external script, IPC message queue | stdout, file, email, syslog
DB sign/crypt:   NO | NO | NO | sign | NO | sign | sign+crypt
Conf sign/crypt: NO | NO | NO | NO | NO | sign | sign+crypt
Name Expansion:  regex | NO | NO | see remarks | regex | glob (shell-style) | NO
Duplicate Path:  NO | NO | NO | NO | NO | Warns | Exits
Root Inode:      see remarks | NO | OK | NO | OK | OK | OK
Non-printable:   NO | NO | NO | NO | NO | OK | OK
No User:         OK | OK | OK | see remarks | OK | OK | OK
No Group:        OK | OK | OK | see remarks | OK | OK | OK
Lock:            OK | Hangs | Hangs | Hangs | Hangs | Times out | Hangs
Race:            Hangs | Hangs | Hangs | Hangs | Hangs | OK | Hangs
/proc:           NO | NO | Hangs | Hangs | NO | OK | NO
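All seven tools in the table implement the same baseline-and-compare model. A minimal sketch of that model with standard utilities follows; the demo directory and file contents are illustrative, not taken from any of the tools:

```shell
#!/bin/sh
# Baseline-and-compare sketch. A throwaway demo directory stands in for
# a real filesystem so the example is safe to run anywhere.
set -e
TARGET=/tmp/integrity-demo
BASELINE=/tmp/integrity-demo.baseline

mkdir -p "$TARGET"
echo "root:x:0:0::/root:/bin/sh" > "$TARGET/passwd"

# 1. Initialization: record one checksum line per regular file.
find "$TARGET" -type f -exec sha256sum {} + | sort > "$BASELINE"

# 2. Simulated unauthorized change.
echo "eviluser:x:0:0::/:/bin/sh" >> "$TARGET/passwd"

# 3. Compare mode: sha256sum -c flags every file whose hash changed.
if ! sha256sum -c --quiet "$BASELINE" 2>/dev/null; then
    echo "CHANGES DETECTED"
fi
```

Real tools add attribute checks (inode, owner, mtime), exclusion rules, and tamper protection for the baseline itself, which is where the differences in the table come from.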

Host Integrity Monitoring Using Osiris and Samhain

By Brian Wotring

Excellent one-of-a-kind book on an overlooked security discipline, August 7, 2005
Reviewer: Richard Bejtlich (Washington, DC)
Host Integrity Monitoring Using Osiris and Samhain (HIM) is an excellent book on a frequently overlooked security discipline. Most people who hear about host integrity monitoring nod their heads and agree that performing it is a good idea. These same people usually don't implement HIM, and frequently cannot count the number of hosts, operating systems, and applications working in their enterprise. Thankfully, HIM provides a way to use open source tools to help remedy this situation. Consistent with the Visible Ops methodology, HIM provides guidance on how to keep track of host integrity.

When writing HIM, author Brian Wotring could have easily concentrated on the program he coded -- Osiris. Luckily for readers, Brian chose to address his program and another open source host integrity monitor -- Samhain. By comparing and contrasting these two programs, readers learn more about each and understand the capabilities and limitations of each application's approach to the HIM problem. Consistent with this dual methodology, Brian explains how to install Osiris on both Unix and Windows platforms. (Samhain is mainly a Unix solution.)

The first third of the book provides background information on HIM rationales and planning. I was initially inclined to skip ahead, but I found the explanations of monitoring various system elements to be helpful. Brian's view of security closely mirrors my own, but he approaches it from a host-centric view. He still accepts that prevention eventually fails and that preparation for incident response is a necessity, not a luxury. Brian also correctly uses the term "threat" and recognizes threats are not vulnerabilities. Bravo.

The middle third and some of the final third of the book deal exclusively with installing and configuring Osiris and Samhain. The instructions are wise and very thorough. I was impressed by guidance on how to compile and install Osiris on Windows from source, using MinGW and MSYS. I also liked the book's frequent use of FreeBSD as a Unix reference platform.

I found a few minor issues with HIM, and one major drawback that prevented a five-star review. First, I disagree with the statement on p 19 that "most attacks originate from within the network by authorized users." The annual CSI/FBI study has repeatedly shown this not to be true; rather, insider attacks, when they do occur, are typically more damaging than those perpetrated by outsiders. Second, I found some minor rough editing, e.g., "Nimbda" repeatedly used in place of "Nimda." Third, and most important, it would have been extremely helpful to show case studies of Osiris and Samhain in action detecting configuration changes and/or intrusions. I left the book with a lot of ideas on installation and configuration, but it would have been helpful to see case studies on using host-based data to identify intrusions.

I am adding HIM to my recommended reading list for system administrators. HIM gives administrators the documentation and theory they need to add another critical tool to their security arsenal. I would like to see a second edition that adds case studies, and perhaps chapters on using Radmind for open source change management.

[Feb 20, 2005] Avoiding Trojans and Rootkits

Trojans, rootkits, and DDoS agents are a sad reality. It's a little disheartening to think that software exists which, given a chance, can install unwanted files on your system, overwrite or destroy your own files, send your data or user input elsewhere, or use your computer to attack another system.

The more advanced among you may be smiling and smugly thinking, "that's why I run a Unix system." True, there are fewer nasties out there that target Unix systems, but they do exist. Further, as the Unix user base increases, so will the number and frequency of exploits against Unix systems. Fortunately, as a FreeBSD user, you have many utilities available to you, as well as many good habits that you can teach yourself. The next two articles will discuss these utilities and habits.

[Feb 20, 2005] SANS Intrusion Detection FAQ How to Examine a Unix Box for Possible Compromise

Identifying a potentially compromised Unix box is somewhat of an arcane art, though there are some simple things to look for. Always keep an eye on the unseen trust relationships. Who mounts whom via NFS? Who has whom in their .rhosts, .shosts, or hosts.equiv? Who has a .netrc for that host?
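The trust-relationship checks described above are easy to script. A sketch follows, pointed at a throwaway directory tree (an assumption, so it is safe to try anywhere) rather than the live system:

```shell
#!/bin/sh
# Trust-relationship audit sketch, run against a scratch tree instead
# of / so it can be experimented with safely.
set -e
ROOT=/tmp/audit-demo
mkdir -p "$ROOT/home/alice" "$ROOT/etc"
echo "badhost trusted" > "$ROOT/home/alice/.rhosts"   # simulated trust file
echo "/export *(rw)"   > "$ROOT/etc/exports"          # simulated NFS export

# Who has a .rhosts, .shosts, .netrc, or hosts.equiv?
find "$ROOT" -type f \( -name .rhosts -o -name .shosts \
    -o -name .netrc -o -name hosts.equiv \) |
while read -r f; do
    echo "trust file present: $f"
done

# Who mounts whom via NFS? (non-comment export lines)
grep -v '^#' "$ROOT/etc/exports"
```

Against a real system you would point ROOT at / (or at the mounted image of a suspect disk) and review every hit by hand.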

[Feb 18, 2005] NewsForge Check your filesystems' integrity with afick

Title Check your filesystems' integrity with afick
Date 2005.02.18 3:00
Author Brian Warshawsky

With new threats showing up every day, administrators find it increasingly hard to establish continued trust with their filesystems. Luckily, it's easier than you might think to maintain omniscient control of your filesystem. Through effective use of a filesystem integrity checker, you can keep a watchful eye on every aspect of an important machine's filesystem.

There are several filesystem integrity checker applications, both commercial and open source. I chose to deploy afick, because it is written in Perl, which makes it lightweight and easily portable between different operating systems. Though by nature designed for the command line, afick also has an optional Webmin module and a graphical interface written in perl-Tk.

For this article we will focus on the command-line implementation on a SUSE Enterprise 8.0 server, but what you see here should be applicable to just about any *nix distribution.


You can either download and build afick from the source code, or, if you're using a package-based distribution, you can install from an RPM or Debian binary package. Let's walk through building it from source. To begin, download and unpack the latest version, navigate to the afick directory, and run the following command:

# perl

Since you aren't installing the GUI, you can safely ignore any errors about missing perl-Tk modules.

Next, type make install to install the console tool. When your machine finishes copying all the necessary files, afick will be installed on your system. Installation is only half the battle, though. The real fun lies in the configuration and testing phase.


Afick begins by creating a database of values representing the current state of your filesystem. When you run it later in update mode, afick compares the current filesystem to the original and notes changes.

The exact attributes of your filesystem that are checked are controlled by a configuration file. Afick provides a default configuration file for both Linux and Windows in the directory where it was originally unpacked. In our case, we are interested in the file linux.conf. You can modify a wealth of options in this file, but we will focus only on the essentials.

Afick provides multiple ways to check every file and directory on your system, so no one configuration file is going to work for everyone. For instance, I am running PHPNuke on a Web server that includes forums, which are going to change constantly as users post items and change their preferences. I don't want those changes dumped into my mailbox every day, possibly burying something important. Someone else with a static Web site, however, might want to see that that content is never changed unexpectedly, and would therefore closely monitor that directory. It takes a bit of trial and error to fine-tune afick (or any other integrity checker for that matter) for your specific needs.

Let's begin by initializing the database with the command:

# afick -c /[path_to_linux.conf]/linux.conf -i

The -c tells afick where to find the configuration file it should use, while the -i instructs it to make an initial copy of the filesystem database. This process will take a few minutes. Once it has finished, let your server run for a while, then compare the databases with this slightly different command:

# afick -c /[path_to_linux.conf]/linux.conf -k

The -k argument tells afick to compare the current filesystem against the database specified in the first line of the linux.conf file, which is the initial database. Any changes will be noted on stdout.

Repeat this process a few times until you get a feel for what is changing and how. For instance, if you've got busy log files somewhere outside of /var, they might produce a bundle of changes every time you run afick, creating white noise around potentially useful data. After you've got a list of repeat offenders, you can tune the linux.conf file.

The linux.conf file actually has a decent description of all the file attributes you can monitor, including device, inode, permissions, owner, last time modified, and several others. You can even create your own rule sets for certain types of files and directories. For instance, you don't want afick reporting warnings about the individual files in /var/log being modified, as these files are going to be modified almost constantly on some systems. To create a rule set that would check the user and group ownership, the device they reside on, and the permissions, you would first add:

specialrule = u+g+d+p

Then, to apply this custom rule to the /var/log directory, you would add the following line to the =/Dir section of the conf file:

/var/log specialrule

If you want to define a rule set that ignores the /tmp directory, checks only the files ending in .backup in /root, and ignores all files in /home/user that end in .old, you would add the following lines to the alias section of the config file.

/root/*.backup special_rule
exclude_re := /home/user/.*\.old

Afick recognizes standard Unix wildcards and regular expressions in rule sets. With a little bit of tweaking, you can tune afick to completely monitor your filesystem in all the necessary places, while ignoring the spots that would generate useless noise.

After you've spent some time tweaking your configuration file, you need to ensure that afick itself cannot be modified. The most secure way to accomplish this is to put the database, found by default at /var/lib/afick/afick.pag, on read-only media. Unfortunately, a diskette isn't an option because of the size of the afick database; mine is roughly 15MB for a 4GB server. I recommend using an Iomega Zip disk for a couple of reasons. Primarily, you can switch a Zip disk from read-only to read-write with the flip of a switch. This is convenient because every time you make a change to your filesystem you'll need to update the database to clear the warnings that afick will otherwise produce on every subsequent run, which could waste a lot of CDs if you tried to keep the database on CD-R.

No matter where you store your database, you still need to tell the configuration file where to find it. Mount your Zip drive (or your CD) and copy /var/lib/afick/afick.pag to your mount point. Then change the entry for the database location in the first section of the config file to represent the path to your removable media.


In addition to storing the database on read-only media, I also choose to err on the side of paranoia, and I keep my config file on read-only media as well. In this case, it's a diskette that I've write-protected. This doesn't prevent afick from being run with a separate config file created locally on the server, but it does allow me to be sure that no small detail within the file can be changed without someone physically touching my servers.

Automating it

The final step in configuring afick is automating it. (Note that this is not strictly necessary if you wish to run afick only after certain tasks.) The easiest way to automate afick is with the afick.cron script included in the original directory where you unpacked the source code. If you installed via an RPM, then afick.cron was set up during installation and should be emailing root as changes occur. If you followed the instructions here and installed from source, you have to add it manually. At your command prompt, just type:

crontab -e
0 */2 * * * /[path_to_afick.cron]/afick.cron

Then save and exit the editor. This tells your operating system to run afick once every two hours and mail the results, if any, to root.

Updating it

Every time you make a change to a filesystem, you'll want to update the afick database. Unfortunately, this is a point where security can begin to become an inconvenience. You must physically be at the server to change the Zip disk to read-write mode so the database can be updated.

To initiate update mode, simply replace the -k in the afick command line with -u. This will update the database to include any changes that have happened since the last time afick was run. Since afick will continue using the new database to ensure file integrity, you should always run afick once in compare mode before updating it, to be sure that the database you are about to create won't report recently compromised system commands or files as legitimate. As a further level of protection, afick has built-in integrity checks it performs on its own executables to ensure that afick itself hasn't been modified.

On a short side note, you can also use the update feature of afick to monitor exactly what changes a program's installation procedure makes to a filesystem by using the update feature immediately before and immediately after the installation. This is extremely useful in situations where you cannot verify the authenticity of an application before you install it. Just make sure when you test it with afick that you don't do so on a mission-critical machine. You can also use afick for retroactive testing to ensure uninstalling the software actually returns your filesystem to its previous state.
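The before-and-after technique can be sketched with standard tools instead of afick, so the idea can be tried without installing anything; the "installation" below is simulated by two echo commands:

```shell
#!/bin/sh
# Before/after install tracking with standard tools. Paths and the
# simulated installer are illustrative.
set -e
APPDIR=/tmp/install-demo
mkdir -p "$APPDIR"
echo "v1" > "$APPDIR/app.conf"

# Snapshot immediately before the installation...
find "$APPDIR" -type f -exec md5sum {} + | sort > /tmp/before.sum

# ...the "installation" modifies one file and drops another...
echo "v2" > "$APPDIR/app.conf"
echo "x"  > "$APPDIR/dropped.file"

# ...snapshot again; the diff shows exactly what was touched.
find "$APPDIR" -type f -exec md5sum {} + | sort > /tmp/after.sum
diff /tmp/before.sum /tmp/after.sum || true
```

Afick's update mode does the same comparison but also tracks inode, permission, and ownership changes, which checksums alone miss.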

The future of afick

Afick is a work in progress. In recent conversations, developer Eric Gerbier said he intends to include in future releases a daemon-enabled version that doesn't rely on cron to run afick, thus delivering real-time filesystem monitoring. An option to export afick's results in HTML/XML is in the works for version 3.0, due out sometime in the next few months.

No system will ever be completely safe from malicious users and unauthorized access. If a machine under your control becomes compromised, you must have the proper precautions in place to quickly mitigate the damage and restore services. With a file integrity checker such as afick in place, when that dreaded day comes, you will be prepared to determine exactly what has happened (or is happening) and react accordingly.

In order for afick, or any other file integrity checker, to work as needed, you'll need to take special care in observing the general actions and changes in your filesystem in order to correctly and efficiently craft your configuration rules. Once you've done that, it's just a matter of staying on top of the ever-changing filesystem. Update regularly and update often, so as to catch problems as they begin, and not when it's too late. If you make these practices a regular part of your daily administration routines, you'll be prepared to react efficiently to a breach in security should the need arise.

Brian Warshawsky has built, supported, and administered mission-critical IT infrastructure for the United States Naval Research Laboratory, Virginia Commonwealth University, and RichmondAir Wireless, and is currently employed by Sungard Collegis at Virginia State University.

[Dec 2, 2004] Sys Admin Magazine v13, i12 Entrap A File Integrity Checker

Ed Schaefer and John Spurgeon

Verifying the integrity of files is an important systems administration task. Well-known systems administration authority Æleen Frisch says that "minimally, you should periodically check the ownership and permissions of important system files and directories." One method for verifying files is to take a snapshot of the system in a pristine state and compare it against subsequent snapshots.

You can use a product such as Tripwire, or create your own, such as our Entrap utility. Entrap is a suite of Korn shell scripts that compares two snapshots of a system and reports the differences. When two snapshots are compared, Entrap reports information about files that have been added, deleted, and modified.

An Entrap snapshot includes the file characteristics displayed by the command ls -ild, as well as optional file signatures, such as md5. Filtering rules may be set up to instruct Entrap to ignore specific files and/or attributes when comparing two snapshots.
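The snapshot idea behind Entrap can be sketched directly with `ls -ild` plus md5sum and a diff between two points in time; the demo directory and file names below are illustrative:

```shell
#!/bin/sh
# Entrap-style snapshot: ls -ild attributes plus an md5 signature per
# file, diffed between two points in time. Paths are illustrative.
set -e
DIR=/tmp/entrap-demo
mkdir -p "$DIR"
echo hello > "$DIR/file"

snapshot() {
    find "$DIR" -exec ls -ild {} + | sort    # inode, perms, owner, size, mtime
    find "$DIR" -type f -exec md5sum {} + | sort
}

snapshot > /tmp/snap1
chmod 600 "$DIR/file"          # simulated attribute change
echo tampered >> "$DIR/file"   # simulated content change
snapshot > /tmp/snap2
diff /tmp/snap1 /tmp/snap2 || true
```

Entrap adds what this sketch lacks: filtering rules to ignore chosen files or attributes, and structured reporting of additions, deletions, and modifications.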

In this column, we'll explain Entrap's configuration file. We'll discuss the commands used to take a snapshot, filter snapshots, and compare two snapshots. We'll review the directory structure, present an Entrap example, and include a high-level description of the Entrap scripts. We conclude with what's in the tarball and possible Entrap enhancements.

homepage - RPM Package Manager

The stated early (10 November 2002) roadmap for that new rpm-4.2 release is to include:

a) file classes (think: sanitized file (1) output in dictionary, per-file index).

b) file color bits (think: 1=elf32, 2=elf64).

c) attaching dependencies to files, so that a refcount is computable.

d) replacing find-{provides,requires} with internal elfutils/file-3.39.

e) install policy based on file color bits

f) --excludeconfig like --excludedocs with the added twist that an internal Provides: will be turned off, exposing a Requires:. This will provide a means to install all %config files from a separate package if/when necessary.

and teaching tripwire to read file MD5's from an rpm database.

rpm-4.2 will be the next release of rpm.
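Even before Tripwire learns to read MD5s from the rpm database, rpm's own verify mode can serve as a baseline on RPM-based systems, since the package database already records a digest, size, mode, and owner for every packaged file. A sketch (the package name is illustrative; the block exits quietly where rpm is absent):

```shell
#!/bin/sh
# rpm's verify mode compares installed files against the digests,
# sizes, modes, and owners recorded in the package database.
command -v rpm >/dev/null 2>&1 || { echo "rpm not installed"; exit 0; }

# One package: in the output, '5' (third column) marks a digest
# mismatch, 'S' a size change, 'M' a mode change, 'U'/'G' owner/group.
rpm -V coreutils || true

# Whole system (slow); keep only lines showing a digest mismatch.
rpm -Va 2>/dev/null | grep '^..5' || true
```

The obvious limitation is that an intruder with root can alter the rpm database itself, so this is a convenience check, not a substitute for an offline baseline.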

changetrack is a program to monitor changes to files. If files are modified one day, and the machine starts working incorrectly some days later, changetrack can provide information on which files were modified, and help locate the problem. Changetrack will also allow recovery of the files from any stage.


From: [email protected]
Date: Thu Oct 18 2001 - 12:29:56 PDT

  1. Writing Tripwire Policy Files


    Tripwire Policy File Generator 0.62
      by Pjotr Prins
      Monday, October 15th 2001 13:11
    Categories: System :: Installation/Setup, System :: Logging, System ::
    Systems Administration
    About: Tripwire Policy File Generator uses a Perl script that writes a
    Tripwire policy file from an existing Linux setup. While it targets ROCK
    Linux, it can really be used for any system. The script reads its commands
    from a Tripwire policy file 'template' using name expansions. It has a few
    nice facilities like list expansion, directory walks, variable adding, etc.
    Changes: Support for tripwire installations in /opt/tripwire/sbin/.
    License: GNU General Public License (GPL)

    Useful tripwire script

    Sasha Pachev [email protected]
    Mon, 28 Apr 2003 21:40:31 -0600

    Hello, everyone:
    For those not familiar with Tripwire, it is an intrusion detection tool that
    tracks unauthorized modification of critical system files. The home site of
    Tripwire is
    I have been setting it up on our company's systems at my new job, and ran
    into a problem. The default policy file listed so many non-existent files
    that it would have been very tedious to go through every one of them and
    comment it out by hand. To my surprise, I could not find a tool that would do
    this automatically, so I quickly put together a simple perl script to solve
    the problem:
    #!/usr/bin/perl
    # Comment out policy entries that refer to files which do not exist.
    while (<>) {
        # Grab the first path-like token, unless a '#' precedes it.
        if ($_ =~ /(\/[\w\-\.\/]+)/ && !($` =~ /#/)) {
            $_ = "#$_" if (! -e $1);
        }
        print $_;
    }
    Input is the default policy source file on STDIN or as the first argument;
    output on stdout is the new policy source file. Some pedants will probably
    comment that my regexp does not cover every legal character you could
    possibly find in a filename. However, such characters are not found in the
    default policy file names, so I did not worry about it.
    Let's hope somebody finds this useful. If you know of an easier solution for
    this problem, I would like to hear about it.
    Sasha Pachev
    Slipwire is a simple filesystem integrity checker implemented in Perl. It compares the SHA-1 hashes of files to an initial state and alerts the user of any changes. Slipwire also records extensive file information, such as inode number, last-modified time, file size, uid, and gid, and can report changes in any of these.


    Nabou is a system integrity monitor: it runs every night and watches for changes to files. If a file has changed in any way, it will inform you by email (if you prefer that). Besides this, it can also look for changed or added user accounts, cron jobs, weird processes, and suid files, and you can define your own checks using inline scriptlets.

    It stores the properties of each file in a dbm database and will warn you if something about a file has changed. The most important thing to check is the MD5 checksum, which will change whenever the file content changes, even by a single letter. But you can also check other properties, like ownership or file mode. See the configuration section for more details.

    You can use nabou as an Intrusion Detection System or simply as a system monitor.

    Besides filesystem integrity, you can use nabou as a process monitor as well; in this special mode it can run as a daemon in the background and inform you if it finds a weird process. Take a look at the sample process-monitoring config.

    Nabou can also monitor crontab entries, UID 0 user accounts, User accounts and Listening TCP/UDP ports.

    Nabou requires perl and some Perl Modules.

    If you are interested, here is a sample report generated by a nabou check run.

    Project details for Integrity Checking Utility -- Perl

    ICU (Integrity Checking Utility) is a Perl program used for executing AIDE filesystem integrity checks on remote hosts from an ICU server and sending reports via email. Remote sessions are initiated via ssh.

    fcheck is GPL-licensed Perl code written by Michael A. Gumienny.

    Its source can be downloaded from various sites (Debian).

    The current version is 2.7.59

    Description: IDS filesystem baseline integrity checker. The fcheck utility is an IDS (Intrusion Detection System) which can be used to monitor changes to any given filesystem.
    Essentially, fcheck has the ability to monitor directories, files, or complete filesystems for any additions, deletions, and modifications. It is configurable to exclude active log files, and can be run as often as needed from the command line or cron, making it extremely difficult to circumvent.

    Paranoid Penguin Verifying Filesystem Integrity with CVS by Michael Rash

    ...CVS is a tool used primarily by software developers to help organize and provide a version structure to a set of source code. One of its most useful features is the ability to keep track of all changes in a piece of source code (or ordinary text) over time, starting from when the code initially was checked into the CVS repository. When a developer wishes to make changes, the code is checked out of the repository, modifications are made and the resulting code is committed back into the repository. CVS automatically increments the version number and keeps track of the changes. After modifications have been committed to the repository, it is possible to see exactly what was changed in the new version relative to the previous (or any other) version by using the cvs diff command.

    Now that we have a basic understanding of the steps used to administer Tripwire for a system, we will show how CVS can be leveraged in much the same way, with one important enhancement: difference tracking for plain-text configuration files. If one of your machines is cracked, and a user is added to the /etc/passwd file, Tripwire can report the fact that any number of file attributes have changed, such as the mtime or one-way hash signature, but it cannot tell you exactly which user was added.

    A significant problem for intrusion detection in general is that the number of false positives generated tends to be very high, and host-based intrusion-detection systems (HBIDS) are not exempt from this phenomenon. Depending on the number of systems involved, and hence the resulting complexity, false positives can be generated so frequently that the data produced by HBIDS becomes overwhelming for administrators to handle. Thus, in the example of a user being added to the /etc/passwd file, if the HBIDS could report exactly which user was added, it would help determine whether the addition was part of a legitimate system administration action or something else. This could save hours of time, since once it has been determined that a system has been compromised, the only way to guarantee returning it to its normal state is to reinstall the entire operating system from scratch.
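    The payoff described above, seeing exactly which user was added rather than just "file changed", needs only a versioned copy and diff. A sketch using plain files in place of a repository (a real setup would commit to CVS as the article describes):

```shell
#!/bin/sh
# Difference tracking sketch: two "revisions" of a passwd file and a
# diff that pinpoints the added account, as `cvs diff` would between
# committed revisions. Plain files stand in for the repository.
set -e
D=/tmp/hbids-demo
mkdir -p "$D"
printf 'root:x:0:0::/root:/bin/sh\n' > "$D/passwd.v1"

# Simulated intrusion: a second UID-0 account appended.
cp "$D/passwd.v1" "$D/passwd.v2"
echo 'eviluser:x:0:0::/:/bin/bash' >> "$D/passwd.v2"

# The diff shows the exact line that changed, not just that a
# checksum no longer matches.
diff "$D/passwd.v1" "$D/passwd.v2" || true
```

    A version control system adds what this sketch lacks: history across many revisions, automatic version numbering, and off-host storage of the baseline.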

    Collecting the Data

    In keeping with the Tripwire methodology of storing the initial filesystem snapshot on media separate from the system being monitored, we need a way to collect data from remote systems and archive it within a local CVS repository. To accomplish this, we set up a dedicated CVS ``collector box'' within our network. All filesystem monitoring functions will be executed by a single user from this machine. Monitoring functions include the collecting of raw data, synchronizing the data with the CVS repository and sending alerts if unauthorized changes are found. We will utilize the ssh protocol as the network communication vehicle to collect data from the remote machines. To make things easier we put our HBIDS user's RSA key into root's authorized_keys file on each of the target systems. This allows the HBIDS user to execute commands as root on any target system without having to supply a password via ssh. Now that we have a general idea for the architecture of the collector box, we can begin collecting the data.

    We need to collect two classes of data from remote systems: ASCII configuration files and the output of commands. Collecting the output of commands is generally a broader category than collecting files, because we probably are not going to want to replicate all remote filesystems in their entirety. Commands that produce important monitoring output include md5sum (which generates MD5 signatures for files) and find / -type f -perm +6000 (which finds all files that have the setuid and/or setgid bits set in their permissions). In Listings 1 and 2, we illustrate Perl code that collects data from remote systems and monitors changes to this data over time.
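    The two collection commands can be demonstrated on a scratch tree (paths are illustrative). Note that modern GNU find spells the permission test -perm /6000; the article's +6000 is the older syntax:

```shell
#!/bin/sh
# The two collection commands, demonstrated on a scratch tree.
set -e
T=/tmp/collect-demo
mkdir -p "$T"
echo data > "$T/bin1"
chmod 4755 "$T/bin1"            # simulated setuid binary

# md5 signatures of "important binaries"
md5sum "$T/bin1"

# files with the setuid and/or setgid bit set; GNU find wants
# -perm /6000, older finds accept -perm +6000
find "$T" -type f -perm /6000 2>/dev/null || find "$T" -type f -perm +6000
```

    In the article's setup these commands run on the remote host over ssh and their output is what gets committed to CVS.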

    Listing 1.

    In Listing 1, we have a piece of Perl code that makes use of the Net::SSH::Perl module to collect three sets of data from the host whose address is, md5 hash signatures of a few important system binaries, a tar archive of a few OS configuration files and a listing of all files that have the setuid and/or setgid permission bits set. Lines 7 and 8 define the IP address of the target machine as well as the remote user to log in as. Recall that we have the local user's preshared key on the remote machine, so we will not have to supply a password to log in. Lines 10-13 define a small sample list of system binaries for which md5 hash signatures will be calculated, and similarly, lines 15-18 define a list of files that will be archived locally from the remote system. Lines 20-24 build a small hash to link a local filename to the command that will be executed on the remote system, and the output of each command will be stored locally within this file. Lines 27-33 comprise the meat of the collection code and call the run_command() subroutine (lines 36-49) for each command in the %Cmds hash. Each execution of run_command() will create a new Net::SSH::Perl object that will be used to open an ssh session to the remote host, log in and execute the command that was passed to the subroutine.

    Listing 2.

    In Listing 2, we illustrate a piece of Perl code that is responsible for generating e-mail alerts if any of the files collected by change. This is accomplished first by checking the current module out of the CVS repository (line 12), executing (line 14) to install fresh copies of the remote command output and files within the local directory structure, and then committing each (possibly modified) file back into the repository (line 20). By checking the return value of the cvs commit command for each file (line 20), we can determine if changes have been made to the file, as cvs automatically increments the file's version number and keeps track of exactly what has changed. If a change is detected in a particular file, calculates the previous version number (lines 27-36), executes the cvs diff command against the previous revision to get the changes (lines 39-40) and e-mails the contents of the change (lines 47-48) to the e-mail address defined in line

    Detecting Intrusions

    Now let's put the two scripts into action with a couple of intrusion examples. Assume the target system is a Red Hat 6.2 machine; HBIDS data has been collected from this machine before any external network connection was established, and the target has an IP address of

    Example 1

    Suppose machine is cracked, and the following command is executed as root:

    # cp /bin/sh /dev/... && chmod 4755 /dev/...

    This will copy the /bin/sh shell to the /dev/ directory as the file ``...'' and will set the uid bit. Because the file is owned by root, and we made it executable by any user on the system, the attacker only needs to know the path /dev/... to execute any command as root. Obviously, we would like to know if something like this has happened on the system. Now, on the collector box, we execute, and the following e-mail is sent to root@localhost, which clearly shows /dev/... as a new suid file:

    From: hbids@localhost
    Subject: Changed file on suidfiles
    To: root@localhost
    Date: Sat, 10 Nov 2001 17:35:13 -0500 (EST)
    Index: /home/mbr/
    RCS file: /usr/local/hbids_cvs/,v
    retrieving revision 1.3
    retrieving revision 1.4
    diff -r1.3 -r1.4
    > -rwsr-xr-x 1 root root 512668 Nov 10 18:40 /dev/...
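    The comparison behind this alert is simply a diff of two setuid listings; a minimal illustration with invented file contents:

```shell
T=$(mktemp -d)
printf '/bin/su\n/bin/ping\n'           > "$T/suid.old"   # previous listing
printf '/bin/su\n/bin/ping\n/dev/...\n' > "$T/suid.new"   # current listing

# Lines prefixed with '>' are files that are setuid now but were not before.
diff "$T/suid.old" "$T/suid.new" | grep '^>'    # prints: > /dev/...
```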

    Example 2

    Now suppose an attacker is able to execute the following two commands as root:

    # echo "eviluser:x:0:0::/:/bin/bash" >> /etc/passwd
    # echo "eviluser::11636:0:99999:7:::" >> /etc/shadow

    Note that the uid and gid for eviluser are set to 0 and 0 in the /etc/passwd entry, and also that there is no encrypted password string in the /etc/shadow entry. Hence, any user on the system could become root without supplying a password simply by typing su - eviluser. As in the previous example, after running the alerting script, we receive the following e-mails in root's mailbox:

    From: hbids@localhost
    Subject: Changed file on /etc/passwd
    Delivered-To: root@localhost
    Date: Sat, 10 Nov 2001 17:43:17 -0500 (EST)
    Index: /home/mbr/
    RCS file: /usr/local/hbids_cvs/,v
    retrieving revision 1.2
    retrieving revision 1.3
    diff -r1.2 -r1.3
    > eviluser:x:0:0::/:/bin/bash


    From: hbids@localhost
    Subject: Changed file on /etc/shadow
    Delivered-To: root@localhost
    Date: Sat, 10 Nov 2001 17:43:18 -0500 (EST)
    Index: /home/mbr/
    RCS file: /usr/local/hbids_cvs/,v
    retrieving revision 1.2
    retrieving revision 1.3
    diff -r1.2 -r1.3
    > eviluser::11636:0:99999:7:::
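    Both symptoms can also be checked for directly with awk; a minimal sketch against scratch copies of the two files (entries invented to match the example):

```shell
T=$(mktemp -d)
printf 'root:x:0:0::/:/bin/bash\neviluser:x:0:0::/:/bin/bash\n' > "$T/passwd"
printf 'root:ab.xyz:11636:0:99999:7:::\neviluser::11636:0:99999:7:::\n' > "$T/shadow"

# Any uid-0 account other than root is suspicious.
awk -F: '$3 == 0 && $1 != "root" { print "uid-0 account: " $1 }' "$T/passwd"

# An empty second field in shadow means no password is required.
awk -F: '$2 == "" { print "no password set: " $1 }' "$T/shadow"
```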


    Finding changes in the filesystem can be an effective method for detecting intruders. In this article we have illustrated some simple Perl code that bends CVS into a homegrown, host-based intrusion-detection system. At my current place of employment, USinternetworking, Inc., a large ASP in Annapolis, Maryland, we use a similar (although greatly expanded) custom system called USiOasis to help verify filesystem integrity across several hundred machines in our network infrastructure. The machines are loaded with various operating systems that include Linux, HPUX, Solaris and Windows, and run many different types of server applications. The system includes a MySQL database back end, a rather large CVS repository and a custom web/CGI front end written mostly in Perl. Making use of a CVS repository to perform difference tracking also comes with an important additional benefit: an excellent visualization tool written in Python called ViewCVS. Storing operating system and application configuration files within CVS also aids several areas outside of detecting intrusions, such as troubleshooting network and application-level outages, disaster recovery and tracking system configurations over time.

    samhain labs - samhain

    The samhain file integrity / intrusion detection system

    Executive summary

    samhain is an open source file integrity and host-based intrusion detection system for Linux and Unix. It can run as a daemon process and thus can remember file changes: unlike a tool run from cron, if a file is modified you will get only one report, and subsequent checks of that file will ignore the modification because it has already been reported (unless the file is modified again).

    samhain can optionally be used as a client/server system to provide centralized monitoring for multiple hosts. Logging to a (MySQL or PostgreSQL) database is supported.



    samhain has been tested on Linux, FreeBSD, AIX 4.x, HP-UX 10.20, Unixware 7.1.0, Solaris 2.6, 2.8, and Alpha/Tru64. We have reports of smooth installation on OpenBSD and HP-UX 11 systems as well. samhain builds cleanly on Mac OS X, but is not tested by us on this platform. If you have a platform that is more or less POSIX-compliant but is not listed here, we may help you to get samhain running. Just send a mail to [email protected] or use our contact form.

    samhain is reported to build and run on Windows 2000 in the Cygwin environment (Cygwin is a free POSIX emulation for Windows). However, please note that Cygwin "uses shared memory areas to store information on Cygwin processes. Because these areas are not protected in any way, in principle a malicious user could modify them to cause unexpected behaviour in Cygwin processes" (from the Cygwin User Guide).

    [Jan 25, 2002] BSI Software Toby

    (Perl alternative to Tripwire) v. 1.1 was released on 6-Dec-2001: a major rewrite of the internal code, plus fixes for a few minor bugs.

    Toby is another reimplementation of the ever-useful tripwire program. The original tripwire-1.3 is available for free, but ran a bit slow in my test comparisons. Also, newer versions of tripwire are not free for commercial users (no longer true!), but include much cooler cryptographic signatures and such. My feeling was that it would be nice to have a GPL version of tripwire to use with some of my clients.

    The first major difference from tripwire is that toby is written in perl. Cryptographic modules from CPAN are used, hopefully ensuring that as better algorithms are found for some routines (e.g., MD5), toby will inherit those improvements.

    Currently, there are a few deficiencies in the program. There is no -loosedir option, so directories are strictly checked in the same way as files. Also, the configuration file is really just a Perl script.

    [Jan 25, 2002] Solaris[tm] Fingerprint Database

    A new version of The Solaris[tm] Fingerprint Database Companion (1.2) is available. Download: sfpc-1.2.tar.Z and
    The Solaris[tm] Fingerprint Database - A Security Tool for the Solaris Operating Environment
    sfpc-1.2.tar.Z - The Solaris[tm] Fingerprint Database Companion (sfpC) is a tool designed to automate the process of querying the Solaris Fingerprint Database (sfpDB). SideKick is a tool developed to automate the collection of MD5 file signatures.

    Sun BluePrints OnLine - Articles May 2001

    Verifying whether system executables, configuration files, and startup scripts have been modified by a user has always been a difficult task. Security tools attempting to address this issue have been around for many years. These tools typically generate cryptographic checksums of files when a system is first installed.

    The Solaris Fingerprint Database (sfpDB) is a free SunSolve Online[sm] service that enables users to verify the integrity of files distributed with the Solaris[tm] Operating Environment (Solaris OE). Examples of these files include the /bin/su executable file, Solaris OE patches, and unbundled products such as Sun Forte[tm] Developer Tools. The list of checksums generated for a system must be updated whenever the system is modified by patch or software installations. The issue with such tools has always been verifying that the files used to generate the baseline checksums are correct and current.

    The Solaris Fingerprint Database addresses the issue of validating the base Sun provided files. This includes files distributed with Solaris OE media kits, unbundled software, and patches. The sfpDB provides a mechanism to verify that a true file in an official binary distribution is being used, and not an altered version that compromises system security and causes other problems.

    FCheck by Michael A. Gumienny

    This is an integrity checker written in Perl.

    FCheck v2.07.59 provides intrusion detection and policy enforcement through the use of comparative system snapshots. Similar to Tripwire but less cumbersome to operate, FCheck can provide notification of any differences found through use of an event management system, printer, and/or email when any monitored system files or directories are modified, including any additions and/or deletions. Tested on AIX, BSD, HP/UX, Linux, SCO, Solaris, SunOS, and Windows95/98/NT/3.x systems all running PERL 4.0.x or better.

    rkdet - rootkit detector for Linux

    This program is a daemon intended to catch someone installing a rootkit or running a packet sniffer. It is designed to run continually with a small footprint under an innocuous name. When triggered, it sends email, appends to a logfile, and disables networking or halts the system. It is designed to install with a minimum of disruption to a normal multiuser system, and should not require rebuilding with each kernel change or system upgrade.

    Re:Some Advice. (Score:2, Insightful)
    by dennisp ([email protected]) on Tuesday September 28, @02:09PM EDT (#48)
    (User Info)
    Actually, I use a slightly modified version of AIDE. It has the same feature set as tripwire as well as a couple of other features. Search on freshmeat with the keyword 'tripwire' and you'll get many similar utilities.

    As well as that, I run a tty watching daemon that monitors user commands and pages me and mails a message if someone tries something especially stupid. I also have the regular userland jailed to prevent regular users from doing something stupid as well.

    Another tip for the real sysadm nazi is to mount the filesystem with system commands as read only so that idgits can't install their handy rootkit that prevents you noticing what they are doing.

    As well as that, you could also set it up to log certain breaking attempts and to automatically send mail to the set arin administrator of that ip range. That sometimes works and is satisfying when I get a reply back stating that the user doing such a thing was deleted (though you'll never get a response if it is a large ISP and doesn't care at all [see uunet dialup users and no responses for repeated spammers and breakin attempts]).
    DISCLAIMER: Opinions subject to change if realized idiotic. Therefore flames are not appreciated.

    [Dec. 14, 2001] O'Reilly Network Understanding Rootkits

    TLSA2001017 Adore Worm Advisory

    The four security flaws sought out by the Adore worm are:

    a. wu-ftpd: Buffer overrun; due to improper bounds checking, SITE EXEC may
    enable remote root execution, without having any local user account

    b. nfs-utils: Flaw in the rpc.statd daemon can lead to remote root break in.
    Please note that the nfs-utils package will replace the
    packages knfsd and knfsd-client. The package knfsd-client
    contains the rpc.statd daemon.

    c. LPRng: Vulnerability due to incorrect usage of the syslog() function.
    Local and remote users can send string-formatting operators to the print-
    er daemon to corrupt the daemon's execution, potentially gaining root

    d. bind: Buffer overflow in transaction signature (TSIG) handling code.
    This vulnerability may allow an attacker to execute code with the same
    privileges as the BIND server. Because BIND is typically run by a
    superuser account, the execution would occur with superuser privileges.

    SANS Global Incident Analysis Center Adore Worm

    The Adore worm replaces only one system binary (ps) with a trojaned version and moves the original to /usr/bin/adore. It installs its files in /usr/lib/lib. It then sends an email to the following addresses: [email protected], [email protected], [email protected], [email protected]
    Attempts have been made to get these addresses taken offline, but no response so far from the provider. It attempts to send the following information:

    • /etc/ftpusers
    • ifconfig
    • ps -aux (using the original binary in /usr/bin/adore)
    • /root/.bash_history
    • /etc/hosts
    • /etc/shadow

    Adore then runs a package called icmp. With the options provided with the tarball, it by default sets the port to listen on and the packet length to watch for. When it sees this information, it starts a root shell to allow connections. It also sets up a cronjob in cron.daily (which runs at 04:02 a.m. local time) to remove all traces of its existence and then reboot the system. However, it does not remove the backdoor.

    New worm targets unprotected Linux systems


    At risk, SANS said, are Linux systems that haven't been protected against vulnerabilities known as rpc-statd, wu-ftpd, LPRng and the Berkeley Internet Name Domain (BIND) software. LPRng is installed by default on servers running Red Hat 7.0, according to SANS, while BIND refers to a series of holes in the Redwood City, Calif.-based Internet Software Consortium's BIND server software.

    All of those vulnerabilities are well-known and can be blocked by readily available patches. But Adore and other worms like it can easily find exposed systems because IT managers frequently don't have time to install every security patch and bug fix that's released, said Eric Hemmendinger, an analyst at Aberdeen Group Inc. in Boston.

    F-Secure Computer Virus Information Pages Adore

    Adore is a worm that spreads on Linux systems using four different, known vulnerabilities already used by the Ramen and Lion worms. These vulnerabilities concern the BIND named, wu-ftpd, rpc.statd and lpd services.

    When Adore is running, it scans for vulnerable hosts in random Class B subnets on the network. If a vulnerable host is found, it attempts to download the main worm part from a web server located in China, in a similar way to the Lion worm.

    After the worm has been downloaded to the victim machine, it is stored in the "/usr/local/bin/lib/" directory and "" is executed, launching the worm.

    At the start, "" replaces "/bin/ps" with a trojanized version that does not show processes that are part of the worm. The original "/bin/ps" command is copied to "/usr/bin/anacron".

    The script also replaces "/sbin/klogd" with a version that has a backdoor. The backdoor activates when it receives a ping packet of the correct size, and opens a shell on port 65535. The original "klogd" will be saved to "/usr/lib/klogd.o".

    The worm sends sensitive system data, including contents of the "/etc/shadow" file to four different email addresses.

    Adore also creates a script file "/etc/cron.daily/0anacron". This file will be executed by the cron daemon with the next daily run. At this time, the worm will remove itself from the system and restore the original "/bin/ps". All worm related processes except the backdoor will be shut down, and the system will be restarted if "/sbin/shutdown" exists. The backdoor will start after the system has been restarted as the "/sbin/klogd" still contains the backdoor.
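    A quick hand check for the worm's leftover files is possible with the names given above. In the sketch below, ROOT would be / on a live system; here it points at a scratch tree with one planted artifact so the loop has something to find. (Remember that the trojaned ps hides the worm's processes, so file checks should use a known-good shell and utilities.)

```shell
ROOT=$(mktemp -d)
mkdir -p "$ROOT/usr/lib"
touch "$ROOT/usr/lib/klogd.o"           # planted for the demonstration

# File names taken from the advisory text above.
for f in /usr/local/bin/lib /usr/lib/klogd.o /etc/cron.daily/0anacron; do
    if [ -e "$ROOT$f" ]; then
        echo "possible Adore artifact: $f"
    fi
done
```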

    All four vulnerabilities have been already fixed by different Linux vendors. Further information is available at:

    Debian GNU/Linux:

    Linux Mandrake:


    RedHat Linux:

    F-Secure Anti-Virus detects the Adore worm with the current updates.

    [Analysis: Sami Rautiainen, F-Secure; April 2001]

    chkrootkit -- locally checks for signs of a rootkit

    ID FAQ - Analysis of N.F.O hacking- - rootkit

    The detection of this intrusion was fairly easy, but it shows that a skilled administrator knows what's happening on his machine. The administrator found an application named "bnc" running as uid=0 (root), and he simply did "find / -name *bnc*" and found the secret directory I mentioned before. He noticed that he had been compromised and handled it very well.

    [Oct 1, 2000] FEATURES System Security

    Linux Magazine

    Integrity Checking

    So all this process accounting and system auditing is fine, but how do you know that there's not something really sneaky going on? For example, how do you know that the new programs you are installing are not secretly e-mailing sensitive information to some system cracker somewhere? Or, how can you tell if someone has broken into your system and quietly replaced a key system binary, such as the login program, with a "special" version of that program that records your username and password and forwards the information along to someone else?

    The answer to these questions is that you need to be able to verify the integrity of your key system files. There are a number of tools available that can do this. When it comes to installing new software, many Linux distributions use the RPM (Red Hat Package Manager) system for their package management. RPM comes with integrity-checking built in. Every package has a unique "checksum" that the package manager can verify in order to determine whether a package has been modified or not. See the man page on rpm for more details.
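    RPM's verify mode re-checks each installed file against the size, md5 checksum, owner, group, mode and mtime recorded in the RPM database; a silent run means nothing changed. A guarded sketch (rpm may not be present on a given system, and the package name is just an example):

```shell
# rpm -V <package> verifies one package; rpm -Va verifies everything installed.
if command -v rpm >/dev/null 2>&1; then
    result=$(rpm -V coreutils 2>&1 || true)
else
    result="rpm not available on this system"
fi
echo "$result"
```

    In rpm -V output, each flagged file gets a column string such as "S.5....T": S means the size differs, 5 the md5 digest, M the mode, T the mtime.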

    Another utility that is very useful for verifying the integrity of key system files is the md5sum command. md5sum can be used to create a "fingerprint" of a file. The fingerprint is strongly dependent upon the contents of the file to which md5sum is applied. Any changes to the file will result in a completely different fingerprint. md5sum is commonly used to verify the integrity of package updates that are distributed by a vendor. For example, a typical RPM package update could look something like the following:

    f380646e78a1f463c2d2cc855d3ccb2b package-2.2.1-1.i386.rpm

    The md5sum-generated fingerprint belonging to the file package-2.2.1-1.i386.rpm is the 128-bit number printed above (f380646e78a1f463c2d2cc855d3ccb2b). The fingerprint can be used to verify the integrity of the updated package before it is installed. To verify this file's integrity, you would type:

    [dave@magneto ~]$ md5sum package-2.2.1-1.i386.rpm
    f380646e78a1f463c2d2cc855d3ccb2b package-2.2.1-1.i386.rpm

    Running the md5sum command on the file package-2.2.1-1.i386.rpm produces output indicating that the fingerprints match. This means that the file has not been tampered with. Keep in mind that this method doesn't take into account the possibility that the same person who may have modified a particular package might have also modified the fingerprint.
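    The whole round trip, record, verify, then detect tampering, looks like this; the "package" here is a scratch file, not a real rpm:

```shell
T=$(mktemp -d)
echo 'package payload' > "$T/package.rpm"

# Record the fingerprint, then check it with md5sum -c.
( cd "$T" && md5sum package.rpm > package.rpm.md5 )
( cd "$T" && md5sum -c package.rpm.md5 )          # prints: package.rpm: OK

# Any change to the file makes the check fail.
echo 'tampered' >> "$T/package.rpm"
( cd "$T" && md5sum -c package.rpm.md5 ) || echo "fingerprint mismatch detected"
```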

    Another very useful program for verifying the integrity of a number of important system files is named tripwire. Tripwire computes checksums over all of your important system binaries and configuration files and compares them against a database of earlier, known-to-be-valid checksum values. Thus, changes in any of the files will be flagged.

    Setting up and configuring tripwire is not terribly difficult. However, managing tripwire requires daily monitoring of the checksum database and coordination with other users on the system to make sure that authorized changes to configuration files are properly accounted for, while unauthorized changes will be flagged.

    It is also a good idea to make a copy of critical system files and store them on removable media to be used as a form of integrity checking. Programs such as /bin/ps and /sbin/ifconfig should always be readily available.

    [Apr 07, 2000] checking rpm integrity

    SuSE Security mailinglist
    Date: Fri, 07 Apr 2000 16:56:20 +1200 (NZST)
    From: Volker Kuhlmann <[email protected]>
    Message-id: <[email protected]>
    Subject: checking rpm integrity
    Stupid question: when I download an updated rpm for SuSE, how do I check
    whether it's realy come from SuSE???

    There is md5sum - but arrrrrrrrgggggggggg it's tedious!!!

    Copy the relevant lines out of the SuSE advisory into a new file, edit
    out the "ftp://..." part at the front, save, run md5sum -c.
    That can't be it, can it?

    It does not seem to be a very reliable way to go. I find that

    > md5sum -c ~/t/m
    update/6.4/kpa1/kreatecd-0.3.8b-0.i386.rpm: FAILED



    > md5sum update/6.4/kpa1/kreatecd-0.3.8b-0.i386.rpm
    ec64fd1187373f48c02922eb71ae2f7a update/6.4/kpa1/kreatecd-0.3.8b-0.i386.rpm

    I know SuSE has published bogus md5 sums before. Has it happened
    again? Seems like it. See:

    out of the gpm advisory.

    When I was still using Red Hat, the whole job for any number of downloaded
    rpms was done with "rpm -Kv *.rpm".

    Question: why does SuSE not pgp/gpg sign their rpms? It would be much
    more user-friendly as well as less error-prone. Or does it take that
    much more effort to organise on SuSE's part?

    (This is what I was meaning to gripe about for a while :-( )


    integrity checking

    Re: [Re: Security Thread Revival] integrity checking

    • To: ma-linux@tux.org
    • Subject: Re: [Re: Security Thread Revival] integrity checking
    • From: Peter W <peterw@usa.net>
    • Date: 29 Dec 00 13:00:35 EST
    • Sender: owner-ma-linux@tux.org

    Chuck Moss <[email protected]> wrote:
    > Yeah I use tripwire a lot now.
    > Doesn't help with an audit after the fact.
    Sure it does! You can know exactly what was changed.
    > What I am thinking about is some kind of rescue floppy/CD that has a
    > database of chksums or something like that.  Maybe I can pull them out of
    > the original RPMS and build a table or something.
    > The trick is to build a good database and have a controlled clean
    > independent boot.
    Right. If your box is thoroughly cracked, you can't trust the scheduled
    tripwire runs. A tripwire/AIDE/etc. cron job is better than nothing, but far
    from perfect.
    > Anybody done this yet?
    Few things:
    A rescue CD + floppy is tough, as you can't hold much data on a floppy. Jay
    Beale wrote an article on using Tripwire, and setting up a second config for a
    smaller, floppy-based database:
    [I've had better luck with floppy tripwire databases for Solaris, but I expect
    that's because Solaris includes so little software. ;-)]
    A better option would be having a Zip/LS-120 or both a CD and a CD-R[W] drive,
    so you could boot off the CD and use the Zip/LS-120/CD-R[W] to hold a full
    integrity-checking database.
    RPM verification: Sweth Chandramouli (a DC-area open systems guru) put
    together a Perl script that uses RPM's --verify option to check package
    integrity. There were a few things I wanted to add, but didn't. :-( If you
    rely heavily|exclusively on RPM packages, this is a great way to check things
    out if you failed to set up a tripwire/AIDE/etc. database. Not perfect, as 1)
    your configuration files are sure to change from the standard package, and
    this can't discern your changes from an attacker's changes 2) obviously does
    no good for non-RPM software 3) wouldn't catch the addition of new software
    that didn't conflict with RPMs, e.g. SUID root shells. Sweth's script should
    be on the Bastille-linux-discuss archives,, but I'm
    having trouble reaching that site at the moment.
    AIDE is a tripwire-like app that is truly Free Software:
    Two things about AIDE bug me:
     - configuration doesn't seem as flexible as Tripwire, e.g.
       having it check /foo but not /foo/bar or anything under /foo/bar
     - it doesn't monitor directory stat() info, e.g. an attacker could
       chmod a+w /etc and AIDE wouldn't catch it
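    For reference, AIDE is driven by a configuration file of selection lines. The fragment below is a minimal sketch based on the AIDE manual; directive and rule-letter names vary somewhat between versions:

```
database=file:/var/lib/aide/aide.db
database_out=file:/var/lib/aide/aide.db.new

# Check /etc recursively: permissions, owner, group, size, mtime, md5.
/etc    p+u+g+s+m+md5

# A '!' line excludes a path from checking.
!/etc/mtab
```

    A run of aide --init builds the baseline database, and aide --check compares the filesystem against it.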

    Recommended Links

    Recommended Articles

    Host Integrity Monitoring Best Practices for Deployment by Brian Wotring

  • Installwatch

    installwatch 0.5
    Installwatch is very useful when you install a new package you've just compiled and want to keep track of changes in your file system. It monitors created and modified files, directories, permissions. It's very fast because it does not need a ``pre-install'' phase and it's not fooled by files added or modified by concurrent installations. It works with dynamically linked ELF programs.
  • MD5 Tools

    Claymore 0.2 by Sam Carter, Platforms: Linux
    Size: 4.75Kb

    Claymore is an intrusion detection and integrity monitoring system. This release is a little more polished than before, but it still assumes that you know how to set up a crontab file. It is still fully functional, :-). To accomplish its design goals, it reads in a list of files stored in flat ASCII, and uses md5sum to check their integrity against that recorded earlier in a database. If the database is placed on a read-only medium such as a write-protected floppy, then it should provide an infallible record against remotely installed trojan horses. Thus by monitoring the integrity of the file system, claymore will serve as an aid in intrusion detection.
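    The Claymore scheme can be sketched in a few lines of shell: a flat ASCII list of files, an md5sum database built from it, and a re-check that any tampering trips. Paths are a scratch tree standing in for the real filesystem:

```shell
T=$(mktemp -d)
echo 'important configuration' > "$T/target"
echo "$T/target" > "$T/filelist"                 # the flat ASCII file list

# Baseline database -- in Claymore's scheme this lives on read-only media.
xargs md5sum < "$T/filelist" > "$T/database"

# Clean re-check: diff is empty, so integrity holds.
xargs md5sum < "$T/filelist" | diff - "$T/database" \
    && echo "integrity OK"

# Simulated tampering is caught on the next run.
echo 'trojan payload' >> "$T/target"
xargs md5sum < "$T/filelist" | diff - "$T/database" > /dev/null \
    || echo "ALERT: checksum mismatch"
```

    Run from cron with the output mailed to the administrator, this is essentially what Claymore automates.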

    Random Findings

    Integrity checking utility (ICU) 0.1

    Andreas Östling

    ICU (Integrity Checking Utility) is a Perl program used for executing AIDE filesystem integrity checks on remote hosts from an ICU server and sending reports via email. This is done with help from SSH.

    Sentinel (development status: active?)

    Download Sentinel for Linux (tar.gz file) - Available [ Sentinel v1.2.1c+ ] [ Sentinel v1.2.1c ] Sentinel is a fast file scanner similar to Tripwire or Viper with built-in authentication using the RIPEMD 160-bit MAC hashing function. It uses a single database similar to Tripwire, maintains file integrity using the RIPEMD algorithm and also produces secure, signed logfiles. Its main design goal is to detect intruders modifying files. It also prevents intruders with root/superuser permissions from tampering with its log files and database. Disclaimer: this is not a security toolkit; it is a single-purpose file/drive scanning program. Available versions are for Linux (tested on all current Slackware and RedHat releases), with Irix versions soon to be added. The tool can be downloaded from:




    Copyright © 1996-2021 by Softpanorama Society. The site was initially created as a service to the (now defunct) UN Sustainable Development Networking Programme (SDNP) without any remuneration. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License. Original materials copyright belongs to respective owners. Quotes are made for educational purposes only in compliance with the fair use doctrine.

    FAIR USE NOTICE This site contains copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available to advance understanding of computer science, IT technology, economic, scientific, and social issues. We believe this constitutes a 'fair use' of any such copyrighted material as provided by section 107 of the US Copyright Law according to which such material can be distributed without profit exclusively for research and educational purposes.

    This is a Spartan WHYFF (We Help You For Free) site written by people for whom English is not a native language. Grammar and spelling errors should be expected. The site contains some broken links as it develops like a living tree...

    You can use PayPal to buy a cup of coffee for the authors of this site


    The statements, views and opinions presented on this web page are those of the author (or referenced source) and are not endorsed by, nor do they necessarily reflect, the opinions of the Softpanorama society. We do not warrant the correctness of the information provided or its fitness for any purpose. The site uses AdSense, so you need to be aware of the Google privacy policy. If you do not want to be tracked by Google, please disable JavaScript for this site. This site is perfectly usable without JavaScript.

    Last modified: July 28, 2019