May the source be with you, but remember the KISS principle ;-)
Bigger doesn't imply better. Bigger often is a sign of obesity, of lost control, of overcomplexity, of cancerous cells
Due to the size of this page, the introductory note was converted to an Editorial on a separate page. Please read it, as it might help you avoid typical hardening mistakes. It is not very current: it needs to be updated for Solaris 10, as zones and privilege sets changed the Solaris security landscape significantly.
It is very important to understand that security is a human-related feature (or, more correctly, a feature related to organizational IQ: organizations with incompetent management usually do not have great security), and Solaris admins are often more qualified and more professionally trained than Linux admins. Large corporations often require them to be certified (although Red Hat certification is better than Sun certification). They also tend to be older and have more years under the belt, although this is both an advantage and a disadvantage (see the remark about firewalls below).
The Solaris security advantage rests on a combination of unique features: RBAC (which now includes the concept of privileges) and zones. In addition, ACLs are more widely used in Solaris, although they are now fully available in Linux as well; Linux admins typically do not know the feature and as a result do not use it.
There are also some minor factors, such as the fact that the primary CPU for Solaris is a rather obscure RISC CPU (UltraSPARC), which kills most exploits dead (security via obscurity). This advantage is now being matched by IBM, which is trying to promote Linux on Power CPUs.
Also, Linux is now the Microsoft of the Unix world, which means that most exploits are directed at popular Linux distributions, especially Red Hat.
Solaris filesystem security is weaker than in FreeBSD, but somewhat better than in Linux. For example, you can make /usr read-only in Solaris, and JASS (the standard hardening toolkit) does exactly that. A Sun BluePrints article (PDF) describes the Solaris Fingerprint Database (sfpDB), a security tool that enables users to verify the integrity of files distributed with the Solaris OS. It is different from, and better than, RPM-based security checking.
Linux has a distinct advantage in the wider and more established use of a local firewall (Red Hat training actually presupposes that this feature is enabled; Solaris training does not).
The networking stack is better engineered in Solaris and as such is more secure. Solaris implemented IPv6 earlier than Linux, and its implementation is more mature and less prone to problems.
As for application security, Suse AppArmor is superior to anything Solaris has in the area of application hardening. See http://en.opensuse.org/Apparmor
An internal firewall, which has now become an integral component of any more or less secure server, changed hardening priorities quite significantly. Firewall-based hardening is easier for entry-level administrators because the issues and tradeoffs are easier to understand and more transparent.
Still, it is worth noting that beginners and entry-level admins usually show excessive zeal in hardening and can hose a server (or a dozen ;-) almost in no time, due to insufficient testing of a feature (or firewall rules) on test servers, insufficient understanding of the limitations of the hardening packages, or both.
Although such an experiment represents a tremendous learning opportunity, it is better to avoid it. Somebody said, "Experience keeps a very expensive school, but fools can learn in no other," and George Bernard Shaw aptly added: "There are two tragedies in life. One is to lose your heart's desire. The other is to gain it."
Those two quotes are fully applicable to hardening. Naive enthusiasm after some trade presentation (or a minor bribe from security snake-oil salesmen) and attempts to implement such ideas on production servers have probably caused ten times more damage than all hacker attacks together.
It is very important to understand that many more servers were hosed due to hardening mistakes than by hacker attacks. That does not mean that hardening is unimportant or that it is better ignored. What it means is that you should not overdo it. Excessive zeal really hurts here. Each change that supposedly increases the level of protection should be weighed against the convenience of working with the server. The level of hardening should correspond to the general level of security in the company. If holes are everywhere and nobody pays attention to problems such as role-based access and authentication, hardening does not increase the overall level of security: security is only as good as the weakest link.
The situation with hardening tools on Solaris looks like a one-man game: JASS is still maintained, but Titan (although I like Titan's simple approach to writing hardening modules better) is not. Unless you want to improve it yourself (and Titan is more suitable for adaptation than JASS), it does not make much sense to use it (it never made sense to use Titan blindly anyway). Google lists Comparison of Solaris Hardening Scripts among the top findings on the "Solaris hardening" topic. Ignore it; it is a very old paper that has outlived its usefulness. Another entry, YASSP, has been dead for so long that I start to be wary about the Google algorithm that keeps it in the top findings (YASSP: Hardening Script for Solaris - stable beta).
Also, the availability of the internal Solaris firewall creates a new situation for the old à la Joseph Stalin recipe of hardening: "if you have a network service, you have a problem; if you do not have a network service, you have no problems" ;-). Now you can skip removing vulnerable services if they are important and raise productivity, and instead just limit the IP ranges of servers that can connect to them. Excessive zeal in eliminating /etc/inetd.conf entries is now much less justified.
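For example, with the IP Filter firewall bundled with Solaris 10, a couple of rules in /etc/ipf/ipf.conf can restrict a legacy service to a trusted subnet instead of removing it outright (the subnet and port below are purely illustrative):

    # allow telnet only from the admin subnet; drop it for everyone else
    pass in quick proto tcp from 10.10.1.0/24 to any port = 23 keep state
    block in quick proto tcp from any to any port = 23

The ruleset is activated with svcadm enable network/ipfilter, or reloaded on a running system with ipf -Fa -f /etc/ipf/ipf.conf.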
Dr. Nikolai Bezroukov
From: Gideon T. Rasmussen, CISSP, CISA, CISM, CFSO, SCSA <lists_at_infostruct.net>
Date: Sun, 30 Jan 2005 14:58:08 -0500
I just sent an e-mail to a gent I met at a UNIX auditing course. Thought it might be of interest...
To take a quick Solaris security audit, use the CIS Solaris benchmarking tool (http://www.cisecurity.org/bench_solaris.html). It produces a vulnerability assessment report. There is a corresponding Solaris hardening standard on the same page.
My Solaris hardening recommendations can be found at: http://www.sun.com/bigadmin/content/submitted/Solaris_build_document.pdf
Additional Solaris hardening resources can be found at:
The usual hardening disclaimers apply here. Test in a non-production environment and conduct thorough functionality testing...
You may also want to take a look at my INFOSEC site (http://www.ussecurityawareness.org). It has auditing resources you may find of interest.
Contact me if you have any questions or comments.
Gideon T. Rasmussen
CISSP, CISA, CISM, CFSO, SCSA
Boca Raton, FL
Received on Jan 31 2005
On a FreeBSD system, you can set the "immutable flag" on a file. Given a high enough system securelevel, that file will be completely resistant to change (including unsetting the flag itself). This is extremely handy for locking down file signature databases, kernel files, and other likely targets for stealth modification. So long as that portion of the kernel stands intact, the system can never be completely clandestinely owned.
Very interesting. This FAQ [osxfaq.com] suggests that OS X retains BSD's immutable flag. In theory, the only way to change this flag in OS X is to reboot into single-user mode. I wonder if a rootkit could force a reboot into single-user mode, change these flags, and reboot back to remotely own an OS X machine? I would assume that unless the rootkit can insert something into the single-user mode start-up sequence, the system immutable flag should be fairly safe. The big downside would be that Software Update would cease to work (and probably create a corrupt partial update) if the wrong file were locked in this way (security vs. ease-of-use again!).
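For reference, a hedged sketch of how this looks on FreeBSD (the path is just an example):

    # chflags schg /usr/local/sbin/tripwire    (set the system immutable flag)
    # ls -lo /usr/local/sbin/tripwire          (the flags column should show "schg")
    # sysctl kern.securelevel                  (at securelevel 1 or higher, schg cannot be cleared)

To unset the flag, the system has to be dropped to securelevel -1 or 0 (typically single-user mode) and chflags noschg run on the file.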
First, let's provide a little background. TCP Wrappers has been around for many, many years (see Wietse Venema's FTP archive). It is used to restrict access to TCP services based on host name, IP address, network address, and so on. For more details on what TCP Wrappers is and how you can use it, see tcpd(1M). TCP Wrappers was integrated into the Solaris Operating System starting in the Solaris 9 release, where both Solaris Secure Shell and inetd-based (streams, nowait) services were wrapped. Bonus points are awarded to anyone who knows why UDP services are not wrapped by default.

TCP Wrappers support in Secure Shell was always enabled, since Secure Shell always called the TCP Wrappers function host_access(3) to determine whether a connection attempt should proceed. If TCP Wrappers was not configured on that system, access would be granted by default. Otherwise, the rules defined in the hosts.allow and hosts.deny files would apply. For more information on these files, see hosts_access(4). Note that this and all of the TCP Wrappers manual pages are stored under /usr/sfw/man in the Solaris 10 OS. To view this manual page, you can use the following command:

    $ man -M /usr/sfw/man -s 4 hosts_access
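As a quick illustration of those rule files (the subnet is hypothetical), a conservative policy denies everything by default and then allows specific services from specific networks:

    /etc/hosts.allow:
        sshd: 192.168.10. LOCAL
    /etc/hosts.deny:
        ALL: ALL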
inetd-based services use TCP Wrappers in a different way. In the Solaris 9 OS, to enable TCP Wrappers for inetd-based services, you must edit the /etc/default/inetd file and set the ENABLE_TCPWRAPPERS parameter to YES. By default, TCP Wrappers was not enabled for inetd-based services.
In the Solaris 10 OS, two new services were wrapped: sendmail and rpcbind. sendmail works in a way similar to Secure Shell: it always calls the host_access function, and therefore TCP Wrappers support is always enabled. Nothing else needs to be done to enable TCP Wrappers support for that service. On the other hand, TCP Wrappers support for rpcbind must be enabled manually using the new Service Management Facility (SMF). Similarly, inetd was modified to use an SMF property to control whether TCP Wrappers is enabled for its services.
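As far as I recall, the relevant Solaris 10 commands look like the following (verify the property names against your release):

    # inetadm -M tcp_wrappers=TRUE                    (enable wrapping for all inetd-managed services)
    # svccfg -s network/rpc/bind setprop config/enable_tcpwrappers=true
    # svcadm refresh network/rpc/bind                 (make rpcbind pick up the change)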
The topic for this article is the Solaris 10 Reduced Networking Software Group (also commonly known as the Solaris 10 Reduced Networking Meta Cluster). This software group is new and joins the five existing software groups available in Solaris today: Core, End User, Developer, Entire, and Entire + OEM. The Reduced Networking Software Group is positioned as a subset of Core and represents the smallest amount of Solaris that can or should be installed to have a working and supported system. (Note that for support reasons, it is not advised to remove packages installed by the Reduced Networking Software Group.)
To install the Reduced Networking Software Group, simply select it from the list when doing a graphical installation. If you are using JumpStart, then you should use the cluster keyword with the new value SUNWCrnet. The following is a sample JumpStart profile that uses the Reduced Networking Software Group. This profile was also used to build the system used as an example in this article.
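A minimal JumpStart profile selecting this software group might look like the following sketch (the disk layout is hypothetical and must be adapted to your hardware):

    install_type    initial_install
    system_type     standalone
    cluster         SUNWCrnet
    partitioning    explicit
    filesys         c0t0d0s1 512 swap
    filesys         c0t0d0s0 free /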
The Solaris Security Toolkit, formerly known as the JumpStart Architecture and Security Scripts (JASS) toolkit, provides a flexible and extensible mechanism to harden and audit Solaris Operating Systems (OSs). The Solaris Security Toolkit simplifies and automates the process of securing Solaris Operating Systems and is based on proven security best practices and practical customer site experience gathered over many years. This toolkit can be used to secure SPARC(R)-based and x86/x64-based systems.
The Solaris Security Toolkit 4.2 release is available now and the toolkit is fully supported as part of Solaris Software Support Service Plans or the SunSpectrum(SM) Service Plan contract. For more information on Solaris Support go to:
The Solaris Security Toolkit 4.2 software is fully supported on the following SPARC and x86/x64 Solaris Operating System releases:
- Solaris 10
- Solaris 9
- Solaris 8
Today is the big day! The Solaris Security Toolkit version 4.2 has been released. The biggest change in this new release is its support of the Solaris 10 OS (global and local zones). You can read all about the changes in this new update in the Release Notes. With this release, you have a fully documented and supported tool for hardening the Solaris 10 OS (as well as previous releases) on SPARC, Intel, and AMD platforms!
Commentor: Casper Dik
Added: September 7, 2004
It is rather pointless to install TCP Wrappers on Solaris 9 and later, as the version included in the OS is exactly the same as the one available on porcupine. That version has also been revved twice because of bugs we ran into. Solaris 9 SSH already has libwrap support compiled in. In S10 and later we also provide rpcbind linked with libwrap.
[Apr 3, 2005] Conversion of application accounts to roles is a simple but effective hardening technique. See Security/RBAC/conversion_of_application_accounts_to_roles
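A hedged sketch of the technique (account and user names are hypothetical): converting an application account into a role means nobody can log into it directly any more; it can only be assumed via su by explicitly authorized users:

    # usermod -K type=role appacct        (turn the existing account into a role)
    # usermod -R appacct jsmith           (authorize user jsmith to assume the role)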
by Glenn Brunette
This Sun BluePrints Cookbook describes how to centralize and automate the collection of file integrity information using the following Solaris features:
* Secure Shell
* Role-based Access Control (RBAC)
* Process Privileges
* Basic Auditing and Reporting Tool (BART)
Each of these features can be quickly and easily integrated to centralize and automate the process of collecting file fingerprints across a network of Solaris 10 systems.
Note: This article is available in PDF Format only.
This Tech Tip explains how to use NFS to inspect the underlying directory structure if the reported disk usage seems inconsistent.
Read about the build, configuration, and subsequent hardening of UNIX servers that constitute a secured FTP solution.
So what makes Solaris Privileges different? Why didn't we copy something else like Trusted Solaris Privileges or "POSIX" capabilities?
Let's start from what we formulated as our requirements near the beginning of our project.
One of the important features of Solaris is complete binary backward compatibility; in order to offer that, we needed to design the privilege subsystem in such a manner that current practices, binaries, and products would continue to work. Of course, some have solved this issue by providing a system-wide knob to turn: root / root + privileges / just privileges. We don't like knobs in our OS, specifically not ones which drastically alter the behavior of a system. They make software harder to develop: it needs to work for all settings, certain products may require conflicting settings, and so on. So we decided on a "per-process" knob which is largely automatic.
With backward compatibility comes the onus on the software developer to develop future-proof interfaces; that ruled out all other interfaces, as they all have fixed bitmaps, fixed privilege/capability numbers, and fixed structure sizes in the programmer-visible parts of the system. Solaris Privileges have none of that. And while we could safely reuse the names of the Trusted Solaris interfaces, we cannot redefine interfaces even from a defunct standard. So we have interfaces which smell like Trusted Solaris but with a completely new userland representation of privileges and privilege sets. We can never have more signals; but we can have more privileges and more privilege sets!
The privileges and privilege sets in Solaris 10 are represented to userland processes and non-core kernel modules as strings; privilege sets are bitmasks of undetermined size; they can only be allocated through the C library routines. Privilege set names are also strings and not plain integer indices; this gives us even more flexibility. A Solaris binary compiled for 4 privilege sets of each 32 privileges will continue to work on a Solaris system with 5 privilege sets each of which can contain 64 privileges and with all the privileges having their internal representation renumbered.
... Many software exploits count on this escalated privilege to gain superuser access to a machine via bugs like buffer overflows and data corruption. To combat this problem, the Solaris 10 Operating System includes a new least privilege model, which gives a specified process only a subset of the superuser powers and not full access to all privileges.
The least privilege model evolved from Sun's experiences with Trusted Solaris and the tighter security model used there. The Solaris 10 OS least privileged model conveniently enables normal users to do things like mount file systems, start daemon processes that bind to lower numbered ports, and change the ownership of files. On the other hand, it also protects the system against programs that previously ran with full root privileges because they needed limited access to things like binding to ports lower than 1024, reading from and writing to user home directories, or accessing the Ethernet device. Since setuid root binaries and daemons that run with full root privileges are rarely necessary under the least privilege model, an exploit in a program no longer means a full root compromise. Damage due to programming errors like buffer overflows can be contained to a non-root user, which has no access to critical abilities like reading or writing protected system files or halting the machine.
The Solaris 10 OS least privilege model includes nearly 50 fine-grained privileges as well as the basic privilege set.
- The defined privileges are broken into the groups
- The basic privilege set includes all privileges granted to unprivileged processes under the traditional security model:
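You can explore the privileges on a live Solaris 10 system with ppriv(1); for example:

    $ ppriv -l basic      (expand the basic privilege set into its member privileges)
    $ ppriv $$            (show the effective, inheritable, permitted, and limit sets of the current shell)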
Increasing life expectancy
The past 12-24 months have seen a significant downward shift in successful random attacks against Linux-based systems. Recent data from our honeynet sensor grid reveals that the average life expectancy to compromise for an unpatched Linux system has increased from 72 hours to 3 months. This means that an unpatched Linux system with commonly used configurations (such as server builds of RedHat 9.0 or Suse 6.2) has an online mean life expectancy of 3 months before being successfully compromised. Meanwhile, the time to live for unpatched Win32 systems appears to continue to decrease. Such observations have been reported by various organizations, including Symantec, Internet Storm Center and even USAToday. The few Win32 honeypots we have deployed support this. However, Win32 compromises appear to be based primarily on worm activity.
THE DATA
Background

Our data is based on 12 honeynets deployed in eight different countries (US, India, UK, Pakistan, Greece, Portugal, Brazil and Germany). Data was collected from the calendar year of 2004, with most of the data collected in the past six months. Each honeynet deployed a variety of different Linux systems accessible from anywhere on the Internet. In addition, several Win32 based honeypots were deployed, but these were limited in number and could not be used to identify widespread trends.

A total of 24 unpatched Unix honeypots were deployed, of which 19 were Linux, primarily Red Hat. These unpatched honeypots were primarily default server installations with additional services enabled (such as SSH, HTTPS, FTP, SMB, etc). In addition, on several systems insecure or easily guessed passwords were used. In most cases, host based firewalls had to be modified to allow inbound connections to these services. These systems were targets of little perceived value, often on small home or business networks. They were not registered in DNS or any search engines, so the systems were found primarily by random or automated means.

Most were default Red Hat installations. Specifically, one was RH 7.2, five RH 7.3, one RH 8.0, eight RH 9.0, and two Fedora Core 1 deployments. In addition, there were one Suse 7.2 and one Suse 6.3 Linux distribution, two Solaris Sparc 8, two Solaris Sparc 9, and one Free-BSD 4.4 system. Of these, only four Linux honeypots (three RH 7.3 and one RH 9.0) and three Solaris honeypots were compromised. Two of the Linux systems were compromised by brute password guessing and not a specific vulnerability.

Keep in mind, our data sets are not based on targets of high value, or targets that are well known. Linux systems that are of high value (such as company webservers, CVS repositories or research networks) potentially have a shorter life expectancy.
The science is methodical, premeditated action to gather and analyze evidence. The technology, in the case of computers, consists of programs that suit particular roles in the gathering and analysis of evidence. The crime scene is the computer and the network (and other network devices) to which it is connected.
Your job, as a forensic investigator, is to do your best to comb through the sources of evidence -- disc drives, log files, boxes of removable media, whatever -- and do two things: preserve as much of this data as possible in its original form, and try to reconstruct the events that occurred during a criminal act, producing a meaningful starting point for police and prosecutors to do their jobs.
Every incident will be different. In one case, you may simply assist in the seizure of a computer system, which is analyzed by law enforcement agencies. In another case, you may need to collect logs, file systems, and first hand reports of observed activity from dozens of systems in your organization, wade through all of this mountain of data, and reconstruct a timeline of events that yields a picture of a very large incident.
In addition, when you begin an incident investigation, you have no idea what you will find, or where. You may at first see nothing (especially if a "rootkit" is in place.) You may find a process running with open network sockets that doesn't show up on a similar system. You may find a partition showing 100% utilization, but adding things up with du only comes to 50%. You may find network saturation, originating from a single host (by way of tracing its ethernet address or packet counts on its switch port), a program eating up 100% of the CPU, but nothing in the file system with that name.
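On a Linux victim, a quick (and deliberately superficial) first triage pass for symptoms like these might use standard tools:

    # lsof +L1                   (open but unlinked files -- a classic place to hide data)
    # du -sk /var ; df -k /var   (a large discrepancy can indicate hidden or deleted-but-open files)
    # netstat -anp               (network sockets together with the processes that own them)

Remember that on a rootkitted host the system binaries themselves may lie; trusted static copies of such tools are preferable.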
The steps taken in each of these instances may be entirely different, and a competent investigator will use experience and hunches about what to look for, and how, in order to get to the bottom of what is going on. They may not necessarily be followed 1, 2, 3. They may be way more than is necessary. They may just be the beginning of a detailed analysis that involves de-compilation of recovered programs and correlation of packet dumps from multiple networks.
Instead of being a "cookbook" that you follow, consider this a collection of techniques that a chef uses to construct a fabulous and unique gourmet meal. Once learned, you'll discover there are plenty more steps than just those listed here.
It's also important to remember that the steps in preserving and collecting evidence should be done slowly, carefully, methodically, and deliberately. The various pieces of data -- the evidence -- on the system are what will tell the story of what occurred. The first person to respond has the responsibility of ensuring that as little of this evidence as possible is damaged, since damaged evidence is useless in contributing to a meaningful reconstruction of what occurred.
One thing is common to every investigation, and it cannot be stressed enough. Keep a regular old notebook handy and take careful notes of what you do during your investigation. These may be necessary to refresh your memory months later, to tell the same long story to a new law enforcement agent who takes over the case, or to refresh your own memory when/if it comes time to testify in court. It will also help you accurately calculate the cost of responding to the incident, avoiding the potentially exaggerated estimates that have been seen in some recent computer crime cases. Crimes deserve justice, but justice should be fair and reasonable.
As for the technology aspect, the description of basic forensic analysis steps provided here assumes Red Hat Linux on i386 (any Intel compatible motherboard) hardware. The steps are basically the same with other versions of Unix, but certain things specific to i386 systems (e.g., use of IDE controllers, limitations of the PC BIOS, etc.) will vary from other Unix workstations. Consult system administration or security manuals specific to your version of Unix.
It is helpful to set up a dedicated analysis system on which to do your analysis. An example analysis system in a forensic lab might be set up as follows:
- Fast i386 compatible motherboard with 2 IDE controllers
- At least two large (>8GB) hard drives on the primary IDE controller (to fit the OS and tools, plus have room to copy partitions off tape or recover deleted file space from victim drives)
- Leave second IDE cable empty. This means you won't need to mess with jumpers on discs -- just plug them in and they will show up as /dev/hdc (master) or /dev/hdd (slave)
- SCSI interface card (e.g., Adaptec 1542)
- DDS-3 or DDS-4 4mm tape drive (you need enough capacity to handle the largest partitions you will be backing up)
- If this system is on the network, it should be FULLY PATCHED and have NO NETWORK SERVICES RUNNING except SSH (for file transfer and secure remote access) -- Red Hat Linux 6.2 with Bastille-Linux hardening is a good choice
(It can be argued that no services should be running, not even SSH, on your analysis systems. You can use netcat to pipe data into the system, encrypting it with DES or Blowfish stream cyphers for security. This is fine, provided you do not need remote access to the system.)
Another handy analysis system is a new laptop -- an excellent way of taking the lab to the victim. A fast laptop with a 10/100 combo ethernet card, an 18+GB hard drive, and a backpack with a padded case allows you to easily carry everything you need to obtain file system images (later written to tape for long-term storage), analyze them, display the results, crack intruders' crypt() passwords you encounter, and so on.
A cross-over 10Base-T cable allows you to get by without a hub or switch, and to still use the network to communicate with the victim system on an isolated mini-network of two systems. (You will need to set up static route table entries in order for this to work.)
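A hedged sketch of such an isolated two-host setup (all addresses hypothetical), assuming the victim keeps its original address on another subnet:

    # ifconfig eth0 10.0.0.1 netmask 255.255.255.0 up              (address the analysis laptop)
    # route add -net 192.168.5.0 netmask 255.255.255.0 dev eth0    (static route toward the victim's subnet)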
A Linux analysis system will work for analyzing file systems from several different operating systems that have supported file system under Linux, e.g., Sun UFS. You simply need to mount the file system with the proper type and options, e.g. (Sun UFS):
# mount -r -t ufs -o ufstype=sun /dev/hdd2 /mnt
Another benefit of Linux is "loopback" devices, which allow you to mount a file containing an image copy (obtained with dd) into the analysis system's file system. See Appendices A and B.
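A sketch of the loopback technique (file names are hypothetical):

    # dd if=/dev/hdc1 of=/evidence/victim-hdc1.dd              (take an image of the suspect partition)
    # mount -t ext2 -o ro,loop /evidence/victim-hdc1.dd /mnt   (mount the image read-only for analysis)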
The next item of my list of lesser known and/or publicized security enhancements to the Solaris 10 OS is account lockout. Account lockout is the ability of a system or service to administratively lock an account after that account has suffered "n" consecutive failed authentication attempts. Very often "n" is three hence the "three strikes" reference.
Recall from yesterday's entry on non-login and locked accounts that there is in fact a difference. Locked accounts are not able to access any system services whether interactively or through the use of delayed execution mechanisms such as cron(1M). So, when an account is locked out using this capability, only a system administrator is able to re-enable the account, using the passwd(1) command with the "-u" option.
Account lockout can be enabled in one of two ways. The first way enables account lockout globally for all users. The second method allows more granular control over which users will or will not be subject to the account lockout policy. Note that the account lockout capability applies only to accounts local to the system. We will look at both in a little more detail below.
Before we look at how to enable or disable the account lockout policy, let's first take a look at how you configure the number of consecutive, failed authentication attempts that will serve as your line in the sand. Any number of consecutive, failed attempts beyond the number selected will result in the account being locked. This number is based on the RETRIES parameter in the /etc/default/login file. By default, this parameter is set to 5. You can certainly customize this parameter based on your local needs and policy. By default, the Solaris Security Toolkit will set the RETRIES parameter to 3.
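As far as I recall, the knobs involved are these (verify the names against your release; the account name is hypothetical):

    /etc/default/login:           RETRIES=3                 (the threshold itself)
    /etc/security/policy.conf:    LOCK_AFTER_RETRIES=YES    (global method: lock local accounts on failure)

    # usermod -K lock_after_retries=no appadmin             (per-user override, e.g., exempting a critical account)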
The Solaris Service Manager
To better handle software faults, Sun has redesigned the way it starts and monitors services. Instead of the traditional /etc/init.d startup scripts, many programs in the Solaris 10 OS have been converted to use the service management framework (smf) of the Solaris Service Manager to start, stop, modify, and monitor programs. The service manager is also used to identify software interdependencies and ensure that services are started in the correct order. Should a service, such as sendmail, suddenly die, the service manager automatically verifies that all of the requirements for the sendmail service are running and respawns the necessary programs. When a hardware fault occurs and hardware is offlined, the service manager can restart any programs under service manager control that needed to be stopped to remove the hardware from service.
Each service under the control of the service manager is controlled by an XML configuration file, called a manifest, that defines the name of the service, the type, any dependencies, and other important information. These manifests are stored in a repository and can be viewed and modified by the repository daemon,
svc.configd(1M). The repository is read by the master restarter daemon,
svc.startd(1M), which evaluates the dependencies and initiates the services as needed. Traditional inetd services are now part of the service manager as well. Any of the inetd services can be enabled, disabled, or restarted via the same mechanism as any other service manager-enabled program.
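For example (FMRIs as shipped in Solaris 10; exact names may vary by release):

    $ svcs -l svc:/network/smtp:sendmail        (show state, dependencies, and restarter of a service)
    # svcadm restart svc:/network/smtp:sendmail (restart the service and its requirements)
    # inetadm -l svc:/network/telnet:default    (show the properties of an inetd-managed service)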
Itch scratching, and audit (Score:3, Interesting)
by RedPhoenix (124662) on Tuesday September 14, @09:15PM (#10251879)
At the risk of the post sounding like a discussion at a head-lice convention, everyone has their own personal itch to scratch.
Several posts thus far have questioned the viability of establishing yet another secure-Debian project, similar to other existing projects, and have indicated that there would be a better use of available resources if everyone would just get along and work together (or at least form under a single project). Fair enough.
However, there are a whole range of reasons why diversity and natural selection with respect to many competing projects can provide benefits over and above a single large project -- organizational inertia, effective and efficient communication, and development priority differences, for example.
'Organizational inertia' in particular -- whereby the larger an organization/project gets, the slower it can react to changing requirements -- is a good reason why this effort-amalgamation can potentially be a bad thing.
Each of these projects probably has a slightly different 'itch' to 'scratch'. There's no reason why, later on down the track, that the best elements of each of these projects cannot be merged into something cohesive.
A good example is the current situation in Linux auditing (as in C2/CAPP-style auditing and event logging, not code verification) and host-based audit-related intrusion detection. Over time, we've had Snare (http://www.intersectalliance.com), SLES (http://www.suse.com), and Riks Audit Daemon (http://www.redhat.com). Each project had a slightly different focus, and each development team has come up with some great solutions to the problems of auditing and event logging.
The developers of each of these projects are now communicating and collaborating, with a view to bringing an effective audit subsystem to Linux that incorporates the best ideas from each approach.
BTW: How about auditing in this project? Here's a starting point:
Red. (Snare Developer)
About: pam_passwdqc is a simple password strength checking module for PAM-aware password changing programs, such as passwd(1). In addition to checking regular passwords, it offers support for passphrases and can provide randomly generated passwords. All features are optional and can be (re-)configured without rebuilding.
Changes: The module will now assume invocation by root only if both the UID is 0 and the PAM service name is "passwd". This should fix changing expired passwords on Solaris and HP-UX and make "enforce=users" safe. The proper English explanations of requirements for strong passwords will now be generated for a wider variety of possible settings.
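For reference, a typical pam_passwdqc stanza on a Linux system looks like the following (the exact file and companion module vary by distribution; the values shown are illustrative, not recommendations):

```
# /etc/pam.d/passwd -- illustrative settings
password  requisite  pam_passwdqc.so  min=disabled,24,12,8,7 max=40 enforce=users retry=3
password  required   pam_unix.so      use_authtok shadow
```

The five min= values set length thresholds for the five password classes the module distinguishes, and enforce=users is the setting the changelog above refers to.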
Each CERT Security Improvement module addresses an important but narrowly defined problem in network security. It provides guidance to help organizations improve the security of their networked computer systems.
Each module page links to a series of practices and implementations. Practices describe the choices and issues that must be addressed to solve a network security problem. Implementations describe tasks that implement recommendations described in the practices. For more information, read the section about module structure.
- List of modules
- List of practices
- List of implementations
- Configuring NCSA httpd and Web-server content directories on a Sun Solaris 2.5.1 host
- Enabling process accounting on systems running Solaris 2.x
- Installing, configuring, and using tcp wrapper to log unauthorized connection attempts on systems running Solaris 2.x
- Configuring and using syslogd to collect logging messages on systems running Solaris 2.x
- Using newsyslog to rotate files containing logging messages on systems running Solaris 2.x
- Installing, configuring, and using logdaemon to log unauthorized login attempts on systems running Solaris 2.x
- Installing, configuring, and using logdaemon to log unauthorized connection attempts to rshd and rlogind on systems running Solaris 2.x
- Understanding system log files on a Solaris 2.x operating system
- Installing, configuring, and using swatch to analyze log messages on systems running Solaris 2.x
- Installing, configuring, and using logsurfer on systems running Solaris 2.x
- Configuring and installing lsof 4.50 on systems running Solaris 2.x
- Configuring and installing top 3.5 on systems running Solaris 2.x
- Installing, configuring, and using npasswd to improve password quality on systems running Solaris 2.x
- Installing and configuring sps to examine processes on systems running Solaris 2.x
- Installing and securing Solaris 2.6 servers
- Installing, configuring, and operating the secure shell (SSH) on systems running Solaris 2.x
- Characterizing files and directories with native tools on Solaris 2.x
- Detecting changes in files and directories with native tools on Solaris 2.x
- Installing and operating lastcomm on systems running Solaris 2.x
- Installing, configuring, and using spar 1.3 on systems running Solaris 2.x
- Installing and operating tcpdump 3.5.x on systems running Solaris 2.x
- Installing, configuring, and using argus to monitor systems running Solaris 2.x
- Using newarguslog to rotate log files on systems running Solaris 2.x
- Installing libpcap to support network packet tools on systems running Solaris 2.x
- Writing rules and understanding alerts for Snort, a network intrusion detection system
- Disabling network services on systems running Solaris 2.x
- Installing noshell to support the detection of access to disabled accounts on systems running Solaris 2.x.
- Disabling user accounts on systems running Solaris 2.x
- Installing OpenSSL to ensure availability of cryptographic libraries on systems running Solaris 2.x.
- Installing and operating ssldump 0.9 Beta 1 on systems running Solaris 2.x.
Linux sources that might be useful (some Linux HOW-TOs are not bad and are largely applicable to other Unix environments):
FAQs and RFCs
A word of thanks is due Dr. Cohen for making this valuable tool freely available. Check it out !
This DTK is remarkable. Within three hours of successful installation, I was able to interdict a vicious (and persistent) little ankle-biter who has been troubling me for weeks.
Some insecurely configured Web proxy servers can be exploited by a remote attacker to make arbitrary connections to unauthorized hosts. Two common abuses of a misconfigured proxy server are using it to bypass firewall restrictions and using it to send spam email. A server is used to bypass a firewall by connecting to the proxy from outside the firewall and then opening a connection to a host inside the firewall. A server is used to send spam by connecting to the proxy and then having it connect to an SMTP server. It has been reported that many Web proxy servers are distributed with insecure default configurations.
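As one hedged illustration of a locked-down configuration (Squid syntax; the network ranges and ports are placeholders, adjust to your environment), a safely configured proxy denies by default, restricts which ports clients may reach, and limits CONNECT tunnels:

```
# squid.conf sketch -- addresses and ports are placeholders
acl localnet src 192.168.0.0/16          # only the internal network may use the proxy
acl Safe_ports port 80 443
acl SSL_ports port 443

http_access deny !Safe_ports             # no connecting to arbitrary ports (e.g. 25/SMTP)
http_access deny CONNECT !SSL_ports      # CONNECT tunnels only to HTTPS
http_access allow localnet
http_access deny all                     # default deny for everyone else
```

The deny-by-default final rule is what closes the two abuses described above: an outside host never matches localnet, and a spam relay attempt to port 25 is stopped by the Safe_ports rule.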
Users should carefully configure Web proxy servers to prevent unauthorized connections. It has been reported that http://www.monkeys.com/security/proxies/ contains secure configuration guidelines for many Web proxy servers. We can not verify the accuracy of this information, and if there are any questions users should contact their vendors.
Solaris Fingerprint Database Companion & Sidekick
Sun Managers Mailing List Archive
Yassp Development Mailing List Archive
A stack smashing attack is most typical for C programs. Many C programs have buffer overflow vulnerabilities, both because the C language lacks array bounds checking and because the culture of C programmers encourages a performance-oriented style that avoids error checking... Several papers contain "cook book"-style descriptions of stack smashing exploitation. If an attacker has access to a non-privileged account, then unless the server has hardware or software protection, the only remaining work for a would-be attacker is to find a suitable unpatched utility and download or write an exploit. Hundreds of such exploits have been reported in recent years.
Aleph One's "Smashing The Stack For Fun And Profit" from Phrack 49
Mudge's "How to write Buffer Overflows"
Richard Jones and Paul Kelly's bounds checking patches to GCC
Solar Designer's Non-executable user stack area -- Linux kernel patch
Miller, Fredriksen and So's "An Empirical Study of the Reliability of UNIX Utilities"
*** SecurityPortal.com Securing your File System in Linux. Average discussion.
Best practices in Linux file system security dictate a philosophy of configuring file system access rights in the most restrictive way possible that still allows legitimate users and processes to function properly. However, even with the most careful planning and restrictive settings, successful file system attacks and corruption can occur. To have the most comprehensive plan for Linux file system security, a system administrator needs to modify a default installation's settings, proactively monitor and audit file system changes and have multiple methods to recover from a file system attack.
In configuring file system security, the key areas to be concerned about are: access rights granted to legitimate users to create/modify files and execute programs, access to the file system granted to remote machines, and any area of the file system designated as world-writable.
To quickly review Linux permissions for files and directories, there are three basic types: read (numerically represented as 4), write (2), and execute (1). The values are summed to determine the permissions for the file or directory: a value of 4 means read-only, while a value of 7 means read, write, and execute are all allowed. A file or directory is assigned three such sets of permissions: one for the owner, one for the associated group, and one for everyone else.
umask. A common occurrence over time on Linux systems is that as files get created or modified, the permissions become significantly more liberal than what was originally intended. When new files are created by users, administrators, or processes, the default permissions granted are determined by the umask. By using umask to set a restrictive default, newly created files and directories retain more restrictive permissions unless they are manually changed with chmod. Umask defaults for all users are set in /etc/profile. Default permissions are determined by clearing the umask's bits from the base mode (777 for directories, 666 for regular files, since the execute bit is never set on new files by default). Files created by a user with a umask of 037 would have permissions of 640, which means the owner can read and write the file, the group can read it, and everyone else has no access. Setting the umask to 077 means no one other than the owner has any access to newly created files.
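The arithmetic is bitwise masking rather than literal subtraction, which a short sketch makes concrete (the helper name is ours, for illustration):

```python
# Sketch of how umask shapes default permissions.
# New regular files start from base mode 666 (no execute bit),
# new directories from 777; the umask's bits are cleared from the base.

def effective_mode(base: int, umask: int) -> int:
    """Clear every bit set in umask from the base mode."""
    return base & ~umask

# umask 037: owner rw-, group r--, others ---
assert effective_mode(0o666, 0o037) == 0o640
# umask 077: owner-only access
assert effective_mode(0o666, 0o077) == 0o600
# directories with the common umask 022: rwxr-xr-x
assert effective_mode(0o777, 0o022) == 0o755
```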
... ... ...
NFS, Samba - The "Not For Security" file system should be avoided where possible on Linux boxes directly connected to the Internet. NFS requires a high degree of trust in the peer machine that will be mounting your partitions. You must be very careful about granting anything beyond read access to the hosts listed in /etc/exports. Samba, while not using a peer-trust system, can nonetheless make maintaining user rights complex. Both are network file services, and the only way to be sure that your file system is not at risk is to run them in a completely trusted environment.
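A restrictive /etc/exports along these lines (Linux NFS server syntax; the hosts and paths are hypothetical) keeps exports read-only where possible and never exports to the world:

```
# /etc/exports sketch -- hosts and paths are hypothetical
/export/docs  192.168.1.0/24(ro,root_squash,sync)
/export/home  trusted.example.com(rw,root_squash,sync,no_subtree_check)
```

root_squash maps remote root to an unprivileged user, which is the minimum you should insist on for any export.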
Auditing your file system regularly is a must. You should look for files with the permissions anomalies described above. You should also be looking for changes in standard files and packages. At a minimum, you can use the find command to search for questionable file permissions:
Suid & sgid: find / \( -perm -2000 -o -perm -4000 \) -ls (You can add -o -perm -1000 to catch sticky bit files and directories)
World-writable files: find / -perm -2 ! -type l -ls
Files with no owner: find / \( -nouser -o -nogroup \) -print (thanks to Michael Wood for correcting this)
You can create a cron job for a simple script that directs this output to a file, compares it with the file created by the previous day's search, and mails the difference to you.
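A minimal sketch of such a cron script (the snapshot paths and the mailx invocation are assumptions; adapt to your system):

```shell
#!/bin/sh
# Daily setuid/setgid audit: diff today's snapshot against yesterday's.
TODAY=/var/adm/fsaudit.today
YESTERDAY=/var/adm/fsaudit.yesterday

[ -f "$TODAY" ] && mv "$TODAY" "$YESTERDAY"
touch "$YESTERDAY"                       # first run: empty baseline
find / \( -perm -4000 -o -perm -2000 \) -ls 2>/dev/null > "$TODAY"

# Mail only when something changed.
if ! diff "$YESTERDAY" "$TODAY" > /dev/null; then
    diff "$YESTERDAY" "$TODAY" | mailx -s "file permission audit changes" root
fi
```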
As you might guess, several people have written simple to complex tools that check for files with questionable permissions, checksum binaries to detect tampering and a host of other functions. Here are a few:
Remote audit services
Copyright © 1996-2018 by Dr. Nikolai Bezroukov. www.softpanorama.org was initially created as a service to the (now defunct) UN Sustainable Development Networking Programme (SDNP) in the author free time and without any remuneration. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License. Original materials copyright belong to respective owners. Quotes are made for educational purposes only in compliance with the fair use doctrine.
FAIR USE NOTICE This site contains copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available to advance understanding of computer science, IT technology, economic, scientific, and social issues. We believe this constitutes a 'fair use' of any such copyrighted material as provided by section 107 of the US Copyright Law according to which such material can be distributed without profit exclusively for research and educational purposes.
The statements, views and opinions presented on this web page are those of the author (or referenced source) and are not endorsed by, nor do they necessarily reflect, the opinions of the author present and former employers, SDNP or any other organization the author may be associated with. We do not warrant the correctness of the information provided or its fitness for any purpose.
Last modified: August 11, 2010