
Perl Integrity Checkers


Integrity checkers that support MD5 seem preferable, as MD5 checksums are often available from vendors. With today's huge hard drives, storing a copy of all system files in an archive is also a viable strategy.
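The checksum idea can be sketched with nothing but the stock md5sum utility: build a manifest once, verify against it later. This is a minimal illustration on a scratch file standing in for a real system file; in real use you would list actual system files and keep the manifest on read-only media.

```shell
# build a checksum manifest for a file, then verify it after tampering;
# the scratch file stands in for a real system file such as /etc/passwd
work=$(mktemp -d)
echo "root:x:0:0:root:/root:/bin/bash" > "$work/passwd"
md5sum "$work/passwd" > "$work/manifest"
md5sum -c "$work/manifest"                       # prints "...: OK" while unchanged
echo "evil:x:0:0::/:/bin/sh" >> "$work/passwd"   # simulated tampering
md5sum -c "$work/manifest" || echo "integrity check failed"
```

The same two-step pattern (record, then compare) is what every checker in this survey implements, with more attributes and better reporting.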

Perl implementations are generally more flexible than those written in C. Integrity checkers are not a complete answer to the problem of detecting unauthorized changes. In practice they should probably be used together with a configuration manager such as Subversion. In day-to-day usage integrity checker reports often degenerate into spam, so the configuration requires intelligent tuning. For example, SUSE SLES 10 SP3 often writes to many of its configuration files at once when YaST is used; even without YaST it updates zmd.conf for some reason, and several files are updated on every reboot.

In general the proper place for integrity checking is the OS kernel. Both Linux and Windows are making steps in this direction (Windows Authenticode is a real breakthrough, while the Linux SOFFIC patch is more experimental).

But the problem of information overload is present regardless of the type of implementation and needs to be dealt with separately. Typically integrity checkers produce too much noise and are soon abandoned by their users.

That means that the most important thing with integrity checkers is to adhere to the principle "not too much zeal". It is better to check less but avoid spamming yourself than to check too much and subvert the whole idea (crying wolf). Start with a dozen files and two directories (/root and /etc) and increase the dose little by little as you understand better what you are doing. Never follow the sample configuration files provided, especially in tripwire analogs like aide :-)

Another serious problem is that "one size fits all" integrity checkers (tripwire and friends) are almost useless, as they are unable to answer the most important questions unless they internally distinguish different types of files. First of all, there are several major types of files in an operating system.

Often creators and users of integrity checkers become too paranoid to accomplish anything useful (this first happened with tripwire many years ago, so it can be called the "Tripwire effect"). For configuration files, a simple configuration manager like Subversion can be more useful, because the questions you need to ask about changes are impossible to answer with simple integrity checkers: they do not store the baseline files corresponding to their MD5 or other cryptographic checksums. A nice example of the paranoid approach is the Samhain Labs file integrity checkers.

In reality, if you are really paranoid you should simply run the integrity checker (and its databases) from a CD. Beyond basic precautions, elaborate efforts to prevent tampering with the integrity checker database are overkill and make the implementation considerably more complex and less flexible. That includes all the games with cryptographic signing of the integrity checker's configuration files and executables, and to a lesser extent of the database (which can simply be compressed and encrypted at the end of the run using some kind of archiver). While theoretically useful (and probably a must for military organizations), for regular companies they distract from the task at hand and often contribute to making the integrity checker unusable (the Tripwire effect).

Almost no effort has been made to make integrity checkers intelligent and suppress typical spam. This is a huge drawback that can partially be solved by using open source (for example Perl) integrity checkers, where you can not only create an elaborate configuration file but also gradually adapt the code to your needs. It is very important to distinguish configuration files from executables: different strategies should be employed for checking those two types of files. For configuration files, storing the previous version is a better strategy than calculating a cryptographic checksum. In Windows, the registry represents a formidable challenge for integrity checkers.
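The "store the previous version" strategy for configuration files can be sketched in a few lines of shell. This is a toy illustration on a scratch file (the file name is made up for the example); in real use you would iterate over the files under /etc and keep the copies somewhere the intruder cannot reach.

```shell
# keep a stored copy of a config file and diff against it on each run,
# instead of recording only a checksum; uses a scratch file for the demo
work=$(mktemp -d)
echo "setting=1" > "$work/app.conf"
cp -p "$work/app.conf" "$work/app.conf.prev"   # stored previous version
echo "setting=2" > "$work/app.conf"            # simulated change
if ! cmp -s "$work/app.conf" "$work/app.conf.prev"; then
  diff -u "$work/app.conf.prev" "$work/app.conf" || true  # show the change
  cp -p "$work/app.conf" "$work/app.conf.prev"            # refresh the copy
fi
```

Unlike a checksum mismatch, the diff tells you what changed, which is usually the question you actually need answered.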

Among the available integrity checkers written in Perl, two deserve mention:


Fcheck is the current favorite among sysadmins. It is really simple (and that means understandable and modifiable by an average sysadmin). Written in Perl, licensed under the GPL. Author: Michael A. Gumienny.

It consists of a single 72K Perl script. Its source can be downloaded from various sites (e.g., Debian). The current version is 2.7.59.

Fcheck has the ability to monitor directories, files, or complete filesystems for any additions, deletions, and modifications. It can be configured to exclude active log files, and can be run as often as needed from the command line or cron, making it difficult to circumvent. You can install and run it in a couple of hours: all you need is to modify the location of the config file directly in the script and run fcheck -ac to create the initial database. The script works both in Windows and Unix.
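Once the initial database exists, fcheck lends itself to unattended runs from cron. The entry below is a hypothetical example: the install path, the schedule, and the use of -a for a comparison run are assumptions that you should verify against your fcheck version.

```shell
# hypothetical /etc/crontab entry: compare against the baseline nightly
# at 03:00 and mail any differences to root
0 3 * * * root /usr/local/bin/fcheck -a 2>&1 | mail -s "fcheck report" root
```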

For more information see Fcheck


afick is a more complex tool that has a Tk interface but at the same time has some bugs. It can be recommended only to sysadmins with an above-average understanding of Perl. It should work both in Windows and Unix, but in reality it works well only in Linux due to the complex calls/macros used. Be skeptical about positive press for afick: it is a complex and capricious beast, with most of the complexity unrelated to the core functionality.


Afick can be installed on Windows with ActivePerl, but at least in older versions the results were far from encouraging: it was just too buggy to be usable.



Old News ;-)

[Oct 10, 2011] Another File Integrity Checker 2.18

afick is another file integrity checker, designed to be fast and fully portable between Unix and Windows platforms. It works by first creating a database that represents a snapshot of the most essential parts of your computer system. You can then run the script to discover all modifications made since the snapshot was taken (i.e. files added, changed, or removed). The configuration syntax is very close to that of aide or tripwire, and a graphical interface is provided.

[Oct 23, 2008] Another File Integrity Checker 2.12 by Gerbier Eric

Perl-based, so modifiable by admin. See also Integrity Checkers

About: afick is another file integrity checker, designed to be fast and fully portable between Unix and Windows platforms. It works by first creating a database that represents a snapshot of the most essential parts of your computer system. You can then run the script to discover all modifications made since the snapshot was taken (i.e. files added, changed, or removed). The configuration syntax is very close to that of aide or tripwire, and a graphical interface is provided.

Changes: The code now works with perl 5.10. On Windows, afick_planning now sends a report instead of a summary and uses the "LINES" macro. On Unix, a new MOUNT macro allows you to use a remote database in afick_cron. Udev files were removed from scan. The tar.gz installer was recoded to display better diagnostics.

O'Reilly -- News -- Detecting Local Filesystem Changes with Perl


by David N. Blank-Edelman, author of Perl for System Administration

It is not uncommon for system administrators to have to drop whatever they are working on to deal with the security problem du jour. Some of these problems involve serious breaches of security. In these cases, the first question asked is often, "What has the intruder done?" In my recently released O'Reilly book, Perl for System Administration, I begin the chapter on security and network monitoring with a discussion of some of the available Perl tools that can help answer this question. Here's an excerpt from that chapter, which deals with finding changes made to a local filesystem:

Filesystems are an excellent place to begin our exploration into change-checking programs. We're going to explore ways to check if important files like operating system binaries and security-related files (e.g., /etc/passwd or msgina.dll) have changed. Changes to these files made without the knowledge of the administrator are often signs of an intruder. There are some relatively sophisticated cracker tool-kits available on the Net that do a very good job of installing Trojan versions of important files and covering up their tracks. That's the most malevolent kind of change we can detect. On the other end of the spectrum, sometimes it is just nice to know when important files have been changed (especially in environments where multiple people administer the same systems). The techniques we're about to explore will work equally well in both cases.

The easiest way to tell if a file has changed is to use the Perl functions stat() and lstat(). These functions take a filename or a filehandle and return an array with information about that file. The only difference between the two functions manifests itself on operating systems like Unix that support symbolic links. In these cases lstat() returns information about the symbolic link itself rather than the file the link points to. On all other operating systems the information returned by lstat() should be the same as that returned by stat(). Using stat() or lstat() is easy:

@information = stat("filename");
As demonstrated in Chapter 3, "User Accounts," we can also use Tom Christiansen's File::Stat module to provide this information using an object-oriented syntax. The information returned by stat() or lstat() is operating-system dependent. stat() and lstat() began as Unix system calls, so the Perl documentation for these calls is skewed towards the return values for Unix systems. See Table 10-1 in Perl for System Administration for a comparison between the values returned by stat() under Unix and those returned by stat() on Windows NT/2000 and MacOS.

In addition to stat() and lstat(), other non-Unix versions of Perl have special functions to return attributes of a file that are peculiar to that OS. See Chapter 2, "Filesystems," for discussions of functions like MacPerl::GetFileInfo() and Win32::FileSecurity::Get().

Once you have queried the stat()-ish values for a file, the next step is to compare the "interesting" values against a known set of values for that file. If the values changed, something about the file must have changed. Here's a program that both generates a string of lstat() values and checks files against a known set of those values. We intentionally exclude last access time from the comparison because it changes every time a file is read. This program takes either a -p filename argument to print lstat() values for a given file or a -c filename argument to check the lstat() values of all of the files listed in filename.

use Getopt::Std;

# we use this for prettier output later in &printchanged()
@statnames = qw(dev ino mode nlink uid gid rdev size mtime
                ctime blksize blocks);

getopts('p:c:');    # sets $opt_p and $opt_c
die "Usage: $0 [-p <filename>|-c <filename>]\n" unless ($opt_p or $opt_c);

if ($opt_p){
    die "Unable to stat file $opt_p:$!\n" unless (-e $opt_p);
    print $opt_p,"|",join('|',(lstat($opt_p))[0..7,9..12]),"\n";
    exit;
}

if ($opt_c){
    open(CFILE,$opt_c) or die "Unable to open check file $opt_c:$!\n";
    while (<CFILE>){
        chomp;
        @savedstats = split('\|');
        die "Wrong number of fields in line beginning with $savedstats[0]\n"
            unless ($#savedstats == 12);
        @currentstats = (lstat($savedstats[0]))[0..7,9..12];
        # print the changed fields only if something has changed
        &printchanged(\@savedstats,\@currentstats)
            if ("@savedstats[1..12]" ne "@currentstats");
    }
    close(CFILE);
}

# iterates through the attribute lists and prints any changes between
# the two
sub printchanged{
    my($saved,$current) = @_;
    # print the name of the file after shifting it off of the array read
    # from the check file
    print shift @{$saved},":\n";
    for (my $i=0; $i <= $#{$saved}; $i++){
        if ($saved->[$i] ne $current->[$i]){
            print "\t".$statnames[$i]." is now ".$current->[$i];
            print " (should be ".$saved->[$i].")\n";
        }
    }
}

To use this program, we might type checkfile -p /etc/passwd >> checksumfile. checksumfile should then contain a line consisting of the filename followed by the twelve |-separated lstat() values.

We would then repeat this step for each file we want to monitor. Then, running the script with checkfile -c checksumfile will show any changes. For instance, if I remove a character from /etc/passwd, this script will complain like this:

size is now 606 (should be 607)
mtime is now 921020731 (should be 921016509)
ctime is now 921020731 (should be 921016509)
There's one quick Perl trick in this code to mention before we move on. The following line demonstrates a quick-and-dirty way of comparing two lists for equality (or lack thereof):
if ("@savedstats[1..12]" ne "@currentstats");
The contents of the two lists are automatically "stringified" by Perl by concatenating the list elements with a space between them, as if we typed:
join(" ", @savedstats[1..12])
and then the resulting strings are compared. For short lists where the order and number of the list elements is important, this technique works well. In most other cases, you'll need an iterative or hash solution like those documented in the Perl FAQs.

Now that you have file attributes under your belt, I've got bad news for you. Checking to see that a file's attributes have not changed is a good first step, but it doesn't go far enough. It is not difficult to alter a file while keeping attributes like the access and modification times the same. Perl even has a function, utime(), for changing the access or modification times of a file. Time to pull out the power tools.
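How easy it is to defeat a timestamp-only check can be demonstrated with the shell equivalent of utime(). This is a minimal sketch on a scratch file, using GNU coreutils' stat and touch:

```shell
# edit a file, then put its modification time back, as utime() would;
# a timestamp-only check now sees nothing wrong
work=$(mktemp -d)
f="$work/target"
echo "original" > "$f"
stamp=$(stat -c %Y "$f")            # remember the mtime (GNU stat)
echo "tampered" > "$f"              # change the contents
touch -d "@$stamp" "$f"             # restore the old mtime
[ "$(stat -c %Y "$f")" = "$stamp" ] && echo "mtime unchanged despite edit"
```

The contents are different, but size and mtime can be made to match the baseline, which is exactly why content checksums are needed.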

Detecting change in data is one of the fortes of a particular set of algorithms known as "message-digest algorithms." Here's how Ron Rivest describes a particular message-digest algorithm called the "RSA Data Security, Inc. MD5 Message-Digest Algorithm" in RFC1321:

The algorithm takes as input a message of arbitrary length and produces as output a 128-bit "fingerprint" or "message digest" of the input. It is conjectured that it is computationally infeasible to produce two messages having the same message digest, or to produce any message having a given pre-specified target message digest.
For our purposes this means that if we run MD5 on a file, we'll get a unique fingerprint. If the data in this file were to change in any way, no matter how small, the fingerprint for that file will change. The easiest way to harness this magic from Perl is through the Digest module family and its Digest::MD5 module.

The Digest::MD5 module is easy to use. You create a Digest::MD5 object, add the data to it using the add() or addfile() methods, and then ask the module to create a digest (fingerprint) for you. To compute the MD5 fingerprint for a password file on Unix, we could use something like this:

use Digest::MD5 qw(md5);

$md5 = new Digest::MD5;
open(PASSWD,"/etc/passwd") or die "Unable to open passwd:$!\n";
$md5->addfile(*PASSWD);   # feed the file's contents to the digest
close(PASSWD);
print $md5->hexdigest."\n";
The Digest::MD5 documentation demonstrates that we can string methods together to make the above program more compact:

use Digest::MD5 qw(md5);

open(PASSWD,"/etc/passwd") or die "Unable to open passwd:$!\n";
print Digest::MD5->new->addfile(*PASSWD)->hexdigest,"\n";
close(PASSWD);

Both of these code snippets print out the same 32-character hexadecimal fingerprint. If I transpose just two characters in the password file, the digest comes out completely different.
Any change in the data now becomes obvious. Let's extend our previous attribute-checking program to include MD5:

use Getopt::Std;
use Digest::MD5 qw(md5);

@statnames = qw(dev ino mode nlink uid gid rdev size mtime
                ctime blksize blocks md5);

getopts('p:c:');    # sets $opt_p and $opt_c
die "Usage: $0 [-p <filename>|-c <filename>]\n"
    unless ($opt_p or $opt_c);

if ($opt_p){
    die "Unable to stat file $opt_p:$!\n" unless (-e $opt_p);
    open(F,$opt_p) or die "Unable to open $opt_p:$!\n";
    $digest = Digest::MD5->new->addfile(*F)->hexdigest;
    close(F);
    print $opt_p,"|",join('|',(lstat($opt_p))[0..7,9..12]),"|$digest","\n";
    exit;
}

if ($opt_c){
    open(CFILE,$opt_c) or die "Unable to open check file $opt_c:$!\n";
    while (<CFILE>){
        chomp;
        @savedstats = split('\|');
        die "Wrong number of fields in \'$savedstats[0]\' line.\n"
            unless ($#savedstats == 13);

        @currentstats = (lstat($savedstats[0]))[0..7,9..12];
        open(F,$savedstats[0])
            or die "Unable to open $savedstats[0]:$!\n";
        push(@currentstats, Digest::MD5->new->addfile(*F)->hexdigest);
        close(F);
        &printchanged(\@savedstats,\@currentstats)
            if ("@savedstats[1..13]" ne "@currentstats");
    }
    close(CFILE);
}

sub printchanged {
    my($saved,$current) = @_;
    print shift @{$saved},":\n";
    for (my $i=0; $i <= $#{$saved}; $i++){
        if ($saved->[$i] ne $current->[$i]){
            print "    ".$statnames[$i]." is now ".$current->[$i];
            print " (".$saved->[$i].")\n";
        }
    }
}

System Configuration Collector for Windows

Written in Perl

System Configuration Collector for Windows collects configuration data from Windows systems and compares the data with the previous run. Differences are added to a logbook, and all data can be sent to the server part of SCC.

Release focus: Minor bugfixes

The current snapshot of scc-log is removed after all data is collected correctly, and the syntax of a snapshot is checked on the first installation. The DIV tag with class SCC_LOG_ENTRY in scc-log2html was corrected. Nested DIV tags were added in scc-snap2html to support formatting of each section of the snapshot.

Siem Korteweg

AFICK (Another File Integrity CHecker)

Afick is a security tool, very close to the well-known tripwire. It allows you to monitor changes on your filesystems, and so can detect intrusions.

It's designed to be quick and portable. It has been tested on a number of platforms, and it should work on any computer with Perl and its standard modules.

[May 2, 2005] The Second Commandment of system administration (NewsForge)

NewsForge takes a look at integrity checkers. "Each integrity checker is a little different, so do some research before deciding on one. There are many excellent integrity checking applications out there, but the one I recommend and prefer is called afick (Another File Integrity ChecKer). Afick offers several advantages over integrity checkers such as Tripwire and AIDE. The first and foremost difference is that afick is written in Perl, which gives it the advantage of speed. Afick finishes the initialization of the database that stores filesystem attributes almost a minute faster than AIDE. Being written in Perl also means that afick is highly portable between operating systems."

[Feb 20, 2005] A Perl script to watch files for changes


OS X applications make use of various support files and these files get read and written quite frequently behind the scenes. Especially when troubleshooting some problem, it is often useful to be notified when relevant files get modified. I wrote a Perl script (which I call watchfile [4kb download from]) that prints a message when one of the specified files gets modified. As usual, you need to make the script file executable (with chmod +x) and install the script in a folder that is in your shell's execution path.

You execute this script in a Terminal window, specifying the file or files you want it to watch in the usual way (a space-separated list of filenames). For example, if you want it to watch a file in ~/Library/Preferences, you would run the command:

 % watchfile ~/Library/Preferences/
Or if you wanted to watch all of the preference files, you would run the command:
 % watchfile ~/Library/Preferences/*
The script will print a message "info stored" when it starts watching each file and then it will check on the file each 10 seconds after that and print a message if anything about the file has changed. The messages are somewhat cryptic, indicating which things have changed since the last check. Read the rest of the hint for a breakdown on the messages you might see...
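The core of such a script is just a stat comparison in a loop. The same idea can be sketched in shell (a toy illustration on a scratch file, not the actual watchfile script; the preference file name is made up):

```shell
# compare two snapshots of `ls -l` output for a file; in watchfile the
# equivalent comparison runs every 10 seconds
work=$(mktemp -d)
f="$work/com.example.plist"        # hypothetical preference file
echo "key=1" > "$f"
before=$(ls -l "$f")
echo "key=2" >> "$f"               # simulated write by an application
after=$(ls -l "$f")
[ "$before" = "$after" ] || echo "changed: $f"
```

Because the append changes the file's size, the two ls -l lines differ and the change is reported.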

[robg adds: I tried this script, and it's really pretty interesting. It's amazing to see how much stuff is changed, and how often it's changed.]

Here's a list of the things that might be reported:

Each time a change is reported, the script also shows the result of doing ls -l on the file.


In my testing, I found it very interesting to watch the preference files, especially with the -atime option. I was surprised how many preference files are being accessed and how often they get written to (and often recreated).

One final note: Although I haven't yet tried it, I think the script should need only small changes to get it to work on other operating systems (anywhere that Perl is installed). It should work as is on any Linux system. After changing the line that does the ls -l, it should even work on Windows if you install Perl.

[Feb 18, 2005] Check your filesystems' integrity with afick


With new threats showing up every day, administrators find it increasingly hard to establish continued trust with their filesystems. Luckily, it's easier than you might think to maintain omniscient control of your filesystem. Through effective use of a filesystem integrity checker, you can keep a watchful eye on every aspect of an important machine's filesystem.

There are several filesystem integrity checker applications, both commercial and open source. I chose to deploy afick, because it is written in Perl, which makes it lightweight and easily portable between different operating systems. Though by nature designed for the command line, afick also has an optional Webmin module and a graphical interface written in perl-Tk.

For this article we will focus on the command-line implementation on a SUSE Enterprise 8.0 server, but what you see here should be applicable to just about any *nix distribution.


You can either download and build afick from the source code, or, if you're using a package-based distribution, you can install from an RPM or Debian binary packages. Let's walk through building it from source. To begin, download and unpack the latest version, navigate to the afick directory, and run the following command:

# perl

Since you aren't installing the GUI, you can safely ignore any errors about missing perl-Tk modules.

Next, type make install to install the console tool. When your machine finishes copying all the necessary files, afick will be installed on your system. Installation is only half the battle, though. The real fun lies in the configuration and testing phase.


Afick begins by creating a database of values representing the current state of your filesystem. When you run it later in update mode, afick compares the current filesystem to the original and notes changes.

The exact attributes of your filesystem that are checked are controlled by a configuration file. Afick provides a default configuration file for both Linux and Windows in the directory where it was originally unpacked. In our case, we are interested in the file linux.conf. You can modify a wealth of options in this file, but we will focus only on the essentials.

Afick provides multiple ways to check every file and directory on your system, so no one configuration file is going to work for everyone. For instance, I am running PHPNuke on a Web server that includes forums, which are going to change constantly as users post items and change their preferences. I don't want those changes dumped into my mailbox every day, possibly burying something important. Someone else with a static Web site, however, might want to see that that content is never changed unexpectedly, and would therefore closely monitor that directory. It takes a bit of trial and error to fine-tune afick (or any other integrity checker for that matter) for your specific needs.

Let's begin by initializing the database with the command:

# afick -c /[path_to_linux.conf]/linux.conf -i

The -c tells afick where to find the configuration file it should use, while the -i instructs it to make an initial copy of the filesystem database. This process will take a few minutes. Once it has finished, let your server run for a while, then compare the databases with this slightly different command:

# afick -c /[path_to_linux.conf]/linux.conf -k

The -k argument tells afick to compare the current filesystem against the database specified in the first line of the linux.conf file, which is the initial database. Any changes will be noted on stdout.

Repeat this process a few times until you get a feel for what is changing and how. For instance, if you've got busy log files somewhere outside of /var, they might produce a bundle of changes every time you run afick, which will create white noise around potentially useful data. After you've got a list of repeat offenders, you can tune the linux.conf file.

The linux.conf file actually has a decent description of all the file attributes you can monitor, including device, inode, permissions, owner, last time modified, and several others. You can even create your own rule sets for certain types of files and directories. For instance, you don't want afick reporting warnings about the individual files in /var/log being modified, as these files are going to be modified almost constantly on some systems. To create a rule set that would check the user and group ownership, the device they reside on, and the permissions, you would first add:

specialrule = u+g+d+p

Then, to apply this custom rule to the /var/log directory, you would add the following line to the =/Dir section of the conf file:

/var/log specialrule

If you want to define a rule set that checks only the files ending in .backup in /root and ignores all files in /home/user that end in .old, you would add the following lines to the alias section of the config file.

/root/*.backup specialrule
exclude_re := /home/user/.*\.old

Afick recognizes the standard Unix wild cards and regular expressions in rule sets. With a little bit of tweaking, you can tune afick to completely monitor your filesystem in all the necessary places, while ignoring the spots that would generate useless noise.

After you've spent some time tweaking your configuration file you need to ensure that afick itself cannot be modified. The most secure way to accomplish this is to put the database, found by default at /var/lib/afick/afick.pag, somewhere that is write-only. Unfortunately a diskette isn't an option because of the size of the afick database; my database is roughly 15MB for a 4GB server. I recommend using an Iomega Zip disk for a couple reasons. Primarily, you can switch a Zip disk from read-only to read-write with the flip of a switch. This is convenient because every time you make a change to your filesystem you'll need to update the database to clear the warnings that afick will produce every time it runs thereafter, which could lead to wasting a lot of CDs if you tried to deploy the database on a CD-R.

No matter where you store your database, you still need to tell the configuration file where to find it. Mount your Zip drive (or your CD) and copy /var/lib/afick/afick.pag to your mount point. Then change the entry for the database location in the first section of the config file to represent the path to your removable media.


In addition to storing the database on read-only media, I also choose to err on the side of paranoia, and I keep my config file on read-only media as well. In this case, it's a diskette that I've write-protected. This doesn't prevent afick from being run with a separate config file created locally on the server, but it does allow me to be sure that no small detail within the file can be changed without someone physically touching my servers.

Automating it

The final step in configuring afick is automating it. (Note that this is not strictly necessary if you wish to run afick only after certain tasks.) The easiest way to automate afick is with the afick.cron script included in the original directory where you unpacked the source code. If you installed via an RPM, then afick.cron was set up during installation, and should be emailing root as changes occur. If you followed the instructions here and installed from source, you have to add it manually. At your command prompt, just type:

crontab -e
0 */2 * * * /[path_to_afick.cron]/afick.cron

Then save and exit the editor. This tells your operating system to run afick once every two hours and mail the results, if any, to root.

Updating it

Every time you make a change to a filesystem, you'll want to update the afick database. Unfortunately, this is a point where security can begin to become an inconvenience. You must physically be at the server to change the Zip disk to read-write mode so the database can be updated.

To initiate update mode, simply replace the -k in the afick command line with -u. This will update the database to include any changes that have happened since the last time afick was run. Since afick will continue using the new database to ensure file integrity, you should always run afick once in compare mode before updating it, to be sure that the database you are about to create won't report recently compromised system commands or files as legitimate. As a further level of protection, afick has built-in integrity checks it performs on its own executables to ensure that afick itself hasn't been modified.
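The compare-before-update discipline is easy to script. Here is a sketch in shell with afick stubbed out so the control flow is visible without the real tool installed; the config path is a placeholder:

```shell
# stub afick so the sequence can be demonstrated without the real tool
afick() { echo "afick $*"; }

conf="/path/to/linux.conf"          # placeholder for your config file
# run a compare first; refresh the baseline only if the compare succeeded
# and its report has been reviewed
afick -c "$conf" -k && afick -c "$conf" -u
```

With the real binary, the && ensures the database is never rewritten after a failed comparison run, though reviewing the compare report before updating still requires a human.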

On a short side note, you can also use the update feature of afick to monitor exactly what changes a program's installation procedure makes to a filesystem by using the update feature immediately before and immediately after the installation. This is extremely useful in situations where you cannot verify the authenticity of an application before you install it. Just make sure when you test it with afick that you don't do so on a mission-critical machine. You can also use afick for retroactive testing to ensure uninstalling the software actually returns your filesystem to its previous state.

The future of afick

Afick is a work in progress. In recent conversations, developer Eric Gerbier said he intends to include in future releases a daemon-enabled version that doesn't rely on cron to run afick, thus delivering real-time filesystem monitoring. An option to export afick's results in HTML/XML is in the works for version 3.0, due out sometime in the next few months.

No system will ever be completely safe from malicious users and unauthorized access. If a machine under your control becomes compromised, you must have the proper precautions in place to quickly mitigate the damage and restore services. With a file integrity checker such as afick in place, when that dreaded day comes, you will be prepared to determine exactly what has happened (or is happening) and react accordingly.

In order for afick, or any other file integrity checker, to work as needed, you'll need to take special care in observing the general actions and changes in your filesystem in order to correctly and efficiently craft your configuration rules. Once you've done that, it's just a matter of staying on top of the ever-changing filesystem. Update regularly and update often, so as to catch problems as they begin, and not when it's too late. If you make these practices a regular part of your daily administration routines, you'll be prepared to react efficiently to a breach in security should the need arise.

Brian Warshawsky has built, supported, and administered mission-critical IT infrastructure for the United States Naval Research Laboratory, Virginia Commonwealth University, and RichmondAir Wireless, and is currently employed by Sungard Collegis at Virginia State University.

Recommended Links


Nabou Advanced Host Intrusion Detection System

Nabou is a Perl program which can be used to monitor files and directories on your system for changes, using MD5 checksums. It can also monitor crontab entries, suid files, user accounts, listening TCP/UDP ports, and processes. Nabou stores all data in standard dbm databases; encrypted databases are supported using RSA public key encryption. Nabou is highly configurable: you can exclude files from being checked, configure which file attributes it should look for, use custom checks, and much more.
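Nabou's core technique, MD5 checksums kept in a dbm database, can be sketched in a few lines of Python (an illustration only, not Nabou's code; the function names are invented):

```python
import dbm
import hashlib

def file_md5(path):
    """MD5 digest of a file's contents, hex-encoded."""
    with open(path, "rb") as fh:
        return hashlib.md5(fh.read()).hexdigest()

def init_db(paths, db):
    """Record the current MD5 of every listed file in a dbm database."""
    with dbm.open(db, "c") as store:
        for p in paths:
            store[p] = file_md5(p)

def check_db(paths, db):
    """Return the files whose MD5 no longer matches the stored value."""
    changed = []
    with dbm.open(db, "r") as store:
        for p in paths:
            key = p.encode()
            if key not in store or store[key].decode() != file_md5(p):
                changed.append(p)
    return changed
```

A dbm file gives cheap key lookup without a real database server, which is exactly why these small Perl checkers favored it.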

Toby IDS -- Perl reimplementation of Tripwire
(development status: stalled)

BSI Software Toby. The Toby intrusion detection system is an attempt to reimplement tripwire-1.3 (ASR) in Perl, and is probably the best free Perl integrity checker around. The current version is 1.1 (Jan-2002). It maintains a database of file properties to detect alterations and supports MD5 and SHA-1 checksums of file contents. Its configuration file is actually a Perl script, with the attendant power, flexibility, and difficulty.

Afick -- Another File Integrity Checker

afick is another file integrity checker, designed to be fast and fully portable between Unix and Windows platforms. It works by first creating a database that represents a snapshot of the most essential parts of your computer system. You can then run the script to discover all modifications made since the snapshot was taken (i.e., files added, changed, or removed). The configuration syntax is very close to that of aide or tripwire, and a graphical interface is provided.

Viper -- Perl script

ViperDB is a small Perl script that stores its database in each watched directory. This is a very questionable approach because it clutters the filesystem; moreover, if a directory is writable, deletion of the database file is always possible.
Version 0.9.2: viperdb.conf, viperdb.ignore and


0.9.2 - Changed the entire way config-file parsing is done
      - Added capability to check directories recursively
      - Added different LogLevels: 1=basic, 2=extensive
      - Fixed a bug in Cleanup() which was leaving
        ViperDB.tmp files everywhere
      - Removed CTIME monitoring and left only MTIME
      - Misc other cleanup/bugfixes

This version has some critical race condition fixes.

Version 0.9.1: viperdb.conf, viperdb.ignore and

NOTE: When you first run this new version, be sure to first run with the -init option to re-create all databases, as the structure for storing file information has been completely rewritten.


0.9.1 - Fixed some nasty race conditions
      - Changed almost all system() calls to Perl equivalents
      - Cleaned up -checkstrict code to handle changing perms/owner/group
        back to original; now uses the numeric UID/GID instead of the names
      - Simplified reporting (lumped all perms together and uid/gid together)
      - Cleaned up a bug with chattr and the -init runtype
      - Added ctime & mtime checking

This version just has some added features.

Version 0.9: viperdb.conf, viperdb.ignore and


0.9 - Added ignore functionality which allows the user to specify
      filenames to explicitly ignore, so they will not be monitored
    - Updated code for better compatibility with Solaris
      (LS_OPTIONS were -laAS and Solaris needs -lAcr)
    - Cleaned up code having to do with splitting perms out into owner,
      group, and all perms

This version just has some added features.

Version 0.8: viperdb.conf and


0.8 - Added email notification system to send summary
      emails when changes have been detected
    - Made database(s) immutable and undeletable between checks,
      so that even if someone manages to bust root, removing/modifying
      the databases between checks is considerably harder

ViperDB 0.7 has been released. This new version has numerous bugfixes as well as many new features, including the ability not only to monitor but to "protect" your files. ViperDB 0.7 can change owner/group back to the original values on files whose owner/group has been changed, and can likewise restore original permissions. Another feature of ViperDB 0.7: if a change is detected to a SUID/SGID file, all permissions are taken away from that file until the admin can review it.
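The "protect" reactions described above (restore permissions and ownership, and strip all permissions from a tampered set-id file) can be sketched in Python. This is a simplified illustration under stated assumptions, not ViperDB's actual code, and the function names are invented:

```python
import os
import stat

def record(path):
    """Capture the permission bits, uid, and gid of a file."""
    st = os.stat(path)
    return {"mode": stat.S_IMODE(st.st_mode), "uid": st.st_uid, "gid": st.st_gid}

def restore(path, saved):
    """Put permissions (and, when running as root, ownership) back."""
    os.chmod(path, saved["mode"])
    if os.geteuid() == 0:  # chown back to the original uid/gid requires privilege
        os.chown(path, saved["uid"], saved["gid"])

def quarantine_if_setid(path, saved):
    """ViperDB-style reaction: strip ALL permissions from a changed set-id file."""
    if saved["mode"] & (stat.S_ISUID | stat.S_ISGID):
        os.chmod(path, 0)
```

Note that ViperDB stores the numeric UID/GID rather than names (see the 0.9.1 changelog), which is what `os.chown` takes anyway.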

This is an ongoing project and all comments and suggestions are welcomed at [email protected]

Version 0.7: viperdb.ini and

Documentation coming shortly.


  • Adding a more complex "system status" function: when a change is detected, it would grab info that might be helpful in determining what caused the change (i.e., processes running, users logged in, last few lines from logfiles, etc.)


    0.7 - Changed logging mechanism from logging to an individual file to
          the standard logging facility (calls on 'logger')
        - Added '-checkstrict' functionality which changes permissions back
          to what they were before the change was made to the file
        - Added exception(s) to '-checkstrict': all permissions are removed
          from the changed file if it was originally SUID/SGID
        - Changed the way changes are reported: a change now sends an alert
          to the logs only once instead of repeatedly

    ViperDB was created as a smaller and faster alternative to Tripwire. Tripwire, while being a great product, leaves something to be desired in the speed department; also, by default Tripwire generates a report every time it runs and directs that report to an email address, which hinders most people from running it every few minutes to do a system check. ViperDB is the answer to this problem. ViperDB does not use a fancy all-in-one database to keep records; instead, to keep it fast, I opted for a plaintext db which is stored in each "watched" directory. This way there is no single attack point for an attacker to focus his attention on. Coupled with running ViperDB every 5 minutes (via a root cron job), this decreases the likelihood that an attacker will be able to modify your "watched" filesystem while ViperDB is monitoring your system.
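The "every 5 minutes via a root cron job" schedule mentioned above can be expressed as a single crontab entry. The script path below is illustrative, not taken from the ViperDB documentation:

```
# /etc/crontab entry: run ViperDB's check every 5 minutes as root
*/5 * * * *  root  /usr/local/bin/viperdb.pl -check
```

Any changes found on each run are then logged (by default to /var/log/ViperDB.Log, per the setup notes below).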

    As of now, I am tired of waiting around for the beta testers to get back to me and let me know what they think, so I am doing a general release of the source for all to check out. If you like it, please tell me; I am always pleased to hear that I have helped someone out. But more than hearing about how great it is, I would like to hear about problems that you encountered. I have a few more things I would like to add to this script before version 1.0 (official public release & posting on , etc.), which are listed below. If you can think of anything to add to this script that is not listed below, please email me.

    Just before I tell you how to set up viperdb, let me save myself some flamemail. I am in no way stating that my lil perl script is better than Tripwire; I am simply saying that (in my personal opinion) Tripwire leaves something to be desired, and whatever I feel Tripwire lacks, I will try to make up for by adding into my script. Once again, Tripwire is a GREAT product and I am not trying to knock them. My script is VERY simple, no complex databases, nothing suppah dupah l33t, just me dinking around, and some people have shown interest in possibly having me develop this further (no profit of course... ain't OSS great *sigh* =P).

    Currently there are 2 files that you need viperdb.ini and

    As of now, there is no real documentation on this. Basically, you put viperdb.ini in /usr/local/etc/viperdb.ini (you can change the ini file location in the script) and edit the ini file to contain the directories that you would like to "watch" (presently the trailing / on the path is REQUIRED for the script to work, so /bin in the ini file wouldn't work but /bin/ would). Then run with the '-init' option; this creates the 'database' and stores it. When you want to check for changes, just run with '-check' and any changes will be recorded to /var/log/ViperDB.Log by default (again, you can change the logfile from within the script).
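Going by the setup notes above, a minimal viperdb.ini is just a list of watched directories, one per line, each with the required trailing slash (the paths here are illustrative):

```
/bin/
/sbin/
/usr/local/etc/
```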


  • Slipwire -- Perl script (SHA-1 hashes, Berkeley DBM). Slipwire is a simple filesystem integrity checker: it compares the SHA-1 hashes of files to an initial state and alerts the user to any changes. (Development status: stalled)

    Version 1.4

    This script started off as an attempt to capture some of the basic functions of the Tripwire product, and ended up as a decent learning experience in Perl, DBM files, and digest hashing.

    I implemented some additional features based on feedback I received (see CHANGES), and this is actually turning into quite a useful little tool.

    I would advise caution if you're thinking about using this in any sort of production environment. In fact, I don't warrant that this script will work at all. Don't use it. Anywhere. It's been tested under (and is known to run on) FreeBSD 3.4-STABLE.

    In any event, I'm continuing to make it available because it might be useful to other folks and Lord knows, I've certainly made liberal use of other people's work in my perl education.

    Questions, comments, concerns, rants or raves are welcome. The latest version will always be available at the URL below.


    James Quinby
    [email protected]
    22 Feb 2000


    ./slipwire [options] database [file_list.txt|file1 file2 file3...]



    The -create option creates the initial database of filenames and their SHA-1 hashes. You want to do this the first time you run this program and whenever you want to update the database after changing files.


    ./slipwire -create test.dbm list_example.txt

    ...will read the list of paths in list_example.txt and store the SHA-1 hashes of all of the files in test.dbm. Note that you'll need read permissions on the directories (and all the files therein). You can alternatively use

    ./slipwire -only test.dbm file1 file2 file3... to store only the hashes of file1, file2, and file3.

    More complete usage instructions are in the comments of the script.



    Copyright © 1996-2021 by Softpanorama Society. was initially created as a service to the (now defunct) UN Sustainable Development Networking Programme (SDNP) without any remuneration. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License. Original materials copyright belong to respective owners. Quotes are made for educational purposes only in compliance with the fair use doctrine.

    Last modified: March 12, 2019