May the source be with you, but remember the KISS principle ;-)
(slightly skeptical) Educational society promoting "Back to basics" movement against IT overcomplexity and  bastardization of classic Unix

Open Source and Free Software for Solaris



"The rumors of SPARC's death have been greatly exaggerated"

This page is not about quantity, but quality. It tries to answer which open source software is most beneficial for Solaris, and which is not. Not all open source software is created equal. Some products have a solid architecture and wide appeal; other OSS products have a flaky architecture but still wide appeal; and a third, most questionable category has both problematic architecture and problematic usefulness (the open source version of Tripwire is one such example).

Open source software on Solaris suffers from differences (often subtle) between Solaris and Linux (Linux became the main platform for OSS development, although FreeBSD is still used by many developers too). Most GNU utilities are incompatible with RBAC and ACLs. Bash is semi-compatible (I do not know if you can use it as a role shell).

An example of a package that has its share of problems with Solaris is mc, although it can be argued that it has its share of problems with Linux as well ;-). Part of those problems stems from the fact that mc is implicitly biased toward bash as the underlying shell. Here is the list of OSS software that Sun preinstalls during a regular installation of Solaris 9 4/04:

Solaris packages are available as the Solaris 10 11/06 OS Companion Software DVD image download; they can also be downloaded separately:

There is also a set of Studio 11-compiled packages called Cool Tools.

A minimal recommended set of OSS utilities that might be helpful in Unix administration includes:

  1. mc -- not on the Software Companion CD, but version 6.1 is available for download. GNU Midnight Commander (also referred to as MC) is a user shell with a text-mode full-screen interface; it installs in /usr/local. It suffers from poor keyboard compatibility. Yes, mc development is stalled and the codebase is horrible, but it is still a very useful tool. I just pity those hard-line Unix administrators who try to accomplish daily tasks using a plain-vanilla shell, be it bash or ksh93. That is so archaic and unproductive that it is really amazing that outside Eastern Europe and Germany administration of Unix servers using Orthodox file managers (OFM) never got traction. But as the USA still uses the Fahrenheit scale for measuring temperature (while the rest of the world uses Celsius) and miles for measuring distance (while the rest of the world switched to the metric system long ago), this might not be that surprising ;-)

  2. VNC. It is more convenient to work with VNC if your primary desktop is Windows, and in most corporations the desktop is Windows-based. VNC also preserves the session, and that means big time savings for administrators who need to deal with more than a dozen servers on a daily basis. BTW, a Unix desktop with VNC can serve as a multiplexer for multiple ssh or telnet sessions, similar to the way old-style Unix administrators use screen. You can easily have all your servers as icons, and the terminal for each machine is always just one click away. If you administer more than a dozen servers that's very, very important. For security reasons VNC should be used via ssh. See VNC on Solaris 10 on how to install and configure VNC from the Solaris software companion CD...
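The "VNC via ssh" advice can be captured once in ~/.ssh/config; a hedged sketch (host alias and domain name below are placeholders, and VNC display :1 is assumed to listen on TCP port 5901):

```
# ~/.ssh/config -- hypothetical host entry
Host solarisbox
    HostName solaris.example.com
    LocalForward 5901 localhost:5901
```

After `ssh solarisbox`, pointing a VNC viewer at localhost:1 reaches the remote display through the encrypted tunnel.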

  3. Screen  Screen is a terminal multiplexer. Using it, you can run any number of console-based applications--interactive command shells, curses-based applications, text editors, etc.--within a single terminal. The desire to do this is what gets most people hooked on screen. I used to start a half-dozen terminal emulators when I logged into my machine: I wanted one to read my email, one to edit my code, one to compile my code, one for my newsreader, one for a shell into my web host, and so on. Now I start one terminal emulator, and run screen in it. Problem solved. Just two packages need to be installed on Solaris 10 and 9:
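Once screen is on the box, a small ~/.screenrc makes it much more pleasant; a minimal sketch (these are common generic settings, not Solaris-specific requirements):

```
# ~/.screenrc -- minimal example
startup_message off                            # skip the copyright page
defscrollback 5000                             # keep a decent scrollback buffer
hardstatus alwayslastline "%H  %-w%n %t%+w"    # host name plus window list
```

Sessions are then started with `screen -S name`, detached with C-a d, and resumed later with `screen -r name`.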

  4. tcp_wrappers (included with the Solaris 9 and 10 distributions; the distribution-supplied package should probably be used).
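tcp_wrappers policy lives in /etc/hosts.deny and /etc/hosts.allow; a deny-by-default sketch (the subnet is only an example):

```
# /etc/hosts.deny
ALL: ALL

# /etc/hosts.allow -- allow ssh only from the management subnet
sshd: 192.168.10.0/255.255.255.0
```

With this pair in place, everything not explicitly allowed is refused, which is the usual starting point for wrapped services.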

  5. ip filter (standard on Solaris 10)

  6. sudo (on Solaris 10 it is largely redundant because of RBAC; before that it is an essential tool).
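On pre-Solaris 10 boxes a minimal /etc/sudoers might look like the fragment below (group and user names are hypothetical; always edit with visudo):

```
# /etc/sudoers fragment
%sysadm  ALL=(ALL) ALL                     # full sudo for the admin group
jdoe     ALL=(root) /usr/sbin/shutdown     # one user, one command
```

Granting single commands, as in the second line, is usually preferable to handing out a full root shell.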

  7. perl 5.8  (Installed and supported by Sun, but many modules are missing. See BigAdmin Submitted Tech Tip Installing Non-Core Perl Modules  on how to install them)

  8. ksh93-s or better

  9. GNU make

  10. bash 3.1 or better (installed by default) -- actually, bash is a considerably more convenient interactive shell than ksh, but the problem is that it is somewhat buggy in the Solaris environment (for example, with long commands the tail does not wrap to the next line). You should add the bash debugger.

  11. gzip, gunzip, gzcat (gzip is now installed by default and supported by Sun)
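The gzip trio is handy for quick round-trips like the one below (the scratch file name is arbitrary):

```shell
# Compress a file and read it back without creating an uncompressed copy.
printf 'solaris companion cd\n' > /tmp/gzdemo.txt
gzip -c /tmp/gzdemo.txt > /tmp/gzdemo.txt.gz   # -c writes to stdout, keeps the original
gzip -dc /tmp/gzdemo.txt.gz                    # prints: solaris companion cd
```

`gzip -dc` is the portable spelling of gzcat, useful on systems where the gzcat name is not installed.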

  12. bzip2, bunzip2, bzcat

  13. lynx -- important for browsing man pages; see man2html below.

  14. wget -- useful for getting HTML documents onto the server, as well as for a better man-page viewing setup.

  15. wput -- wput-0.6-sol10-sparc-local.gz. Wput is a command-line FTP client similar to wget, but it uploads files or directories to remote FTP servers; installs in /usr/local. Dependencies: zlib, libgpg-error, libgcrypt, and gnutls; to populate /usr/local/lib, install either libgcc-3.4.6 or gcc-3.4.6 or later.

  16. vim -- a more powerful version of vi that can save the administrator some time. It is also more customizable, and there is an implementation of XEDIT's "all" command for it.

  17. gawk -- probably better than nawk, and more portable.

  18. expect -- a must for any thinking administrator. Requires Tcl.

  19. rsync -- useful to have in case you need to sync two directories and NFS is not the way to go.

  20. fileutils -- nice to have, but does not work with RBAC

  21. findutils -- nice to have, but does not work with RBAC.

  22. dig -- not on Software Companion CD. Needs to be extracted from the bind package.

  23. man2html -- it is really stupid to use the man format (and the man utility) in the XXI century. Converting to HTML and using lynx or another browser creates a much better environment when Internet access is slow or absent. Solaris used to have on-the-fly conversion capability but lost it due to a security problem with the software. That's bad, and conversion to HTML is one way to restore part of the previous functionality.

  24. TCL and associated packages. (Expect by Don Libes, TkMan  by Tom Phelps )

    "I encourage you to use TkMan for reading man pages. ... TkMan provides an extremely pleasant GUI for browsing man pages. I cannot describe all the nice features of TkMan in this small space. Instead I will merely say that I now actually look forward to reading man pages as long as I can do it with TkMan."
    -- Don Libes, Exploring Expect, page 21

  25. The Hessling Editor. This is a personal favorite, and you probably need some previous experience with VM/CMS to appreciate the features it provides, but IMHO it still makes sense to try. The command "All" is probably the most simple and powerful implementation of folding that I know. There is a reimplementation for VIM (see the "all" command).

  26. Synergy. Synergy lets you easily share a single mouse and keyboard between multiple computers with different operating systems, each with its own display, without special hardware. It's intended for users with multiple computers on their desks, since each system uses its own monitor(s). Redirecting the mouse and keyboard is as simple as moving the mouse off the edge of your screen. Synergy also merges the clipboards of all the systems into one, allowing cut-and-paste between systems. Furthermore, it synchronizes screen savers so they all start and stop together and, if screen locking is enabled, only one screen requires a password to unlock them all.

  27. TKman and TKdiff
  28. sysstat for Solaris

    "sysstat" complements Solaris' system tools for performance analysis. It presents all key performance metrics on a VT100 terminal and has the possibility to toggle its view between different hosts.

Here are some packages that I do not recommend installing:

  1. top (can be replaced by a simple alias: top='prstat -s cpu'). The open source top provides misleading data on Solaris, or at least used to provide misleading data in older versions.
  2. gnome -- used to be bloated and rather buggy. It is also no more flexible as a desktop environment than CDE, and I doubt that programmers and system administrators will be more productive using it; if so, why change?

Note that the people cooperating with Sun to offer the set of the most popular open source software for Solaris paradoxically produce packages in slightly different formats: the packages in the Companion CD archives are compressed with the zip program rather than gzip, and while the Companion CD files install in subdirectories of the /opt/sfw file system, packages from other sources install in /usr/local (which creates problems in Solaris 10). You may consider linking those directories to ease upgrades, as Sun is currently bad with this (they are non-supported packages, after all).



Old News ;-)

[Apr 20, 2009] Esar 1.0.8

Esar is a replacement for the sar utility on Solaris. In addition to all the standard reporting features of sar, esar can also report network usage (UDP, TCP, NFS, and RPC traffic) and the processor load average.

[Feb 18, 2009] seccheck for Solaris 10

On reviewing the excellent security benchmarks available over at CI Security, I wanted to automate the security checks of my Solaris 10 servers and produce a highly detailed report listing all security warnings, together with recommendations for their resolution. The solution was seccheck, a modular host-security scanning utility. It is easily expandable and feature rich, although at the moment it is only available for Solaris 10.

This doesn't cover 100% of the checks recommended by CI Security, but has 99% of them - the ones that I consider important. For example, I don't check X configuration because I always ensure my servers don't run X.


The source distribution should be unpacked to a suitable location. I suggest doing something like the following:

     # mkdir /usr/local/seccheck
     # chown root:root /usr/local/seccheck
     # chmod 700 /usr/local/seccheck
     # cd /usr/local/seccheck
     # mkdir bin output
     # cd /wherever/you/downloaded/seccheck
     # gzip -dc ./seccheck-0.7.6.tar.gz | tar xf -
     # cd seccheck-0.7.6
     # mv modules.d /usr/local/seccheck/bin

Everything is implemented as bash shell scripts, so there are no really strict installation guidelines; place the files wherever you wish. You can specify an alternate location for the modules directory with the -m option anyway.

Using seccheck

By default, seccheck will search for a modules.d directory in the same directory in which the script is located. If your modules are not located there, you can use the -m option to specify an alternate module location, for example:

        # ./ -m /security/seccheck/mymodules 

seccheck will then scan through the modules.d for valid seccheck modules (determined by filename). A seccheck module filename should be of the following format:

Where nn is a two digit integer that determines the order in which modules should be executed. For example, included with the current seccheck distribution you'll find the following files in modules.d:

        # ls -1 modules.d 

Modules are processed in order of their numeric prefix. You can disable a module by renaming it to something that does not follow the convention, for example by appending a .NOT suffix to the module filename.

A template is provided so that you can write your own seccheck modules.
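Modules are plain shell scripts that print warnings for whatever they find. A hypothetical sketch of the kind of check a module might perform (the file name, function name, and check are invented for illustration and are not from the seccheck distribution):

```shell
# 50_root_path.sh -- hypothetical module: warn if a PATH value contains '.'
check_root_path() {
    case ":$1:" in
        *:.:*) echo "WARNING: PATH contains the current directory" ;;
    esac
}
check_root_path "/usr/bin:/usr/sbin:."   # prints the warning
check_root_path "/usr/bin:/usr/sbin"     # prints nothing
```

A real module would inspect the live system (here the PATH value is passed in as an argument so the check is easy to exercise) and write its findings to stdout for the report.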

By default, seccheck will write everything out to STDOUT and STDERR. If you want to redirect to an output file, just use the -o option and specify an output directory. After running the script, you'll be left with a file such as:


containing the output of your modules.


You can download the latest seccheck distribution, including all current modules, below:


User Contributed Modules

Please feel free to submit your own seccheck modules - send them through to [email protected]. Bear in mind that any scripts submitted will be distributed freely under the terms of the GPL. Also please note that these are user contributed modules, and as such are unsupported by me!

Module Name / Author / Date Added / Description:

    Scott Everard, 26/05/07 -- Check Solaris Audit Daemon configuration
    Scott Everard, 26/05/07 -- Check Solaris Zones configuration

[Sep 11, 2008] The LXF Guide: 10 tips for lazy sysadmins (Linux Format)

A lazy sysadmin is a good sysadmin. Time spent in finding more-efficient shortcuts is time saved later on for that ongoing project of "reading the whole of the internet", so try Juliet Kemp's 10 handy tips to make your admin life easier...

  1. Cache your password with ssh-agent
  2. Speed up logins using Kerberos
  3. screen: detach to avoid repeat logins
  4. screen: connect multiple users
  5. Expand Bash's tab completion
  6. Automate your installations
  7. Roll out changes to multiple systems
  8. Automate Debian updates
  9. Sanely reboot a locked-up box
  10. Send commands to several PCs

[Sep 9, 2008] GNU ddrescue 1.9-pre1 (Development) by Antonio Diaz Diaz

About: GNU ddrescue is a data recovery tool. It copies data from one file or block device (hard disc, cdrom, etc) to another, trying hard to rescue data in case of read errors. GNU ddrescue does not truncate the output file if not asked to. So, every time you run it on the same output file, it tries to fill in the gaps. The basic operation of GNU ddrescue is fully automatic. That is, you don't have to wait for an error, stop the program, read the log, run it in reverse mode, etc. If you use the logfile feature of GNU ddrescue, the data is rescued very efficiently (only the needed blocks are read). Also you can interrupt the rescue at any time and resume it later at the same point.

Changes: The new option "--domain-logfile" has been added. This release is also available in lzip format. To download the lzip version, just replace ".bz2" with ".lz" in the tar.bz2 package name.

[Sep 9, 2008] safe-rm 0.3 by Francois Marier

About: safe-rm is intended to prevent the accidental deletion of important files by replacing /bin/rm with a wrapper which checks the given arguments against a configurable blacklist of files and directories that should never be removed. Users who attempt to delete one of these protected files or directories will not be able to do so and will be shown a warning message instead. Protected paths can be set both at the site and user levels.

Changes: This release fixes a bug which caused safe-rm to skip the full blacklist checks when dealing with certain files and directories in the working directory. Previously, unless the argument you passed to safe-rm contained a slash, it would not get the real (absolute) path of the file before checking against the blacklist.
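The idea behind safe-rm can be sketched in a few lines of shell. This is only an illustration of the approach, not the actual safe-rm code; the blacklist contents and function name are examples:

```shell
# Refuse to delete blacklisted paths; hand everything else to rm.
BLACKLIST="/etc /usr /bin"

safe_rm() {
    for f in "$@"; do
        # resolve each argument to an absolute path before checking
        d=$(cd -- "$(dirname -- "$f")" 2>/dev/null && pwd) || return 1
        case "$d" in
            /) abs="/$(basename -- "$f")" ;;
            *) abs="$d/$(basename -- "$f")" ;;
        esac
        for p in $BLACKLIST; do
            if [ "$abs" = "$p" ]; then
                echo "safe-rm: refusing to remove $abs" >&2
                return 1
            fi
        done
    done
    rm -- "$@"
}
```

With this in place, `safe_rm /etc` prints a refusal and leaves /etc alone, while ordinary files are removed as usual. Resolving to an absolute path first is the key step; the bug fixed in this release of the real safe-rm was precisely a missed resolution for relative arguments.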

[Jul 18, 2008] Documentation at

The Image Packaging System (IPS) software was developed by the Indiana OpenSolaris project team and is part of the OpenSolaris 2008.05 release. The IPS software is a prototype. Not all features are complete, and only experimental deployments should be attempted.

Download the OpenSolaris 2008.05 release

Go to the Getting Started with IPS Guide

The project team has been explaining some of the ongoing assumptions behind the project in a series of blog posts:

[Jul 18, 2008] sysstat for Solaris 20080718 by Thomas Maier-Komor

About: sysstat complements Solaris' system tools for performance analysis. It presents all key performance metrics on a VT100 terminal and has the possibility to toggle its view between different hosts.

Changes: The default install prefix should now be SUS compliant, as it was changed to /opt/local. A minor Solaris 8 fix was made. Zone awareness was changed to be dynamic.

[Jun 26, 2008] Project details for Patch Check Advanced

Patch Check Advanced (pca) is a Perl-based tool that generates lists of installed and missing patches for Sun Solaris systems and optionally downloads patches. It resolves dependencies between patches and installs them in the correct order. It works on all versions of Solaris and on both SPARC and x86.

Release focus: Minor feature enhancements

Checks for patches 137112, 119252, and 119253 have been added. Ignorable error messages from showrev are no longer shown. The list of contributors has been updated.

[Jun 6, 2008] Six free security tools you shouldn't live without


I can't tell you how much I love this little program. KeePass is a free, open-source password management application. Using KeePass, you can store all of your credentials in a single secure database that can only be accessed by using a master password, key (a file), master password + key, or Windows credentials. Here are some reasons to use this utility:

[May 24, 2008] Jython Reborn By Chris McAvoy

Here's a fun game to play at your next developer cocktail party. If someone asks you what you're interested in, tell them "Jython." Inevitably, their response will be, "Jython is dead." Six months ago, I would have agreed, but with recent investment by Sun Microsystems, it appears that Jython is very much alive and ready to be used. I'll spend a little bit of time talking about some recent Jython history, and then compile the latest revision of Jython from source and play around with it a bit.

Jython is one of the many dynamic languages built on top of the Java Virtual Machine. The Jython project began in 1997. The original release was written by Jim Hugunin, who worked on it until 1999. Jim currently develops IronPython, the .NET implementation of Python.

Between 1999 and 2005, Jython passed among several lead developers, finally ending up with Frank Wierzbicki. In March of this year, Frank was hired by Sun Microsystems to work on Jython full time. Sun's recent hiring of two lead JRuby developers, plus the hiring of Ted Leung as "Principal Engineer, Dynamic Languages and Tools," shows just how serious Sun appears to be in its future support of dynamic languages on the JVM. Jython and JRuby aren't alone; the JVM language space is getting crowded, Scala, Rhino (a Javascript implementation written by the Mozilla Foundation), Clojure (a Lisp dialect), and Groovy all have their proponents.

The current release of Jython is 2.2, which conforms to the CPython 2.2 release. However, after 2.2, the Jython developers started to move in a new direction for the implementation, choosing ASM and ANTLR to speed development time. Although it's still in development, I'll focus on the 2.3 trunk version of Jython. A 2.5 implementation (again, one that conforms to the 2.5 release of CPython) is slated to be the next official release. To get started on the latest version of Jython, you'll have to check it out of the Sourceforge repository.

[May 7, 2008] Patch Check Advanced 20080507 by Dagobert Michelsen

About: Patch Check Advanced (pca) generates lists of installed and missing patches for Sun Solaris systems and optionally downloads patches. It resolves dependencies between patches and installs them in the correct order. It works on all versions of Solaris and on both SPARC and x86.

Changes: HTML tags in patchdiag.xref are ignored. This change from Sun to patchdiag.xref breaks compatibility with all previous versions of PCA and makes updating mandatory. An option for concurrent patch downloads was added. A new option to set sunsolve access protocol to HTTPS was added. wgetproxy options for non-SunSolve URLs are honored as well. The file ../etc/pca-proxy.conf is read in proxy mode. Checks for several patches were added.

[May 06, 2008] ack! - Perl-based grep replacement

There are some tools that look like you will never replace them. One of those (for me) is grep. It does what it does very well (remarks about the shortcomings of regexen in general aside). It works reasonably well with Unicode/UTF-8 (a great opportunity to Fail Miserably for any tool, viz. a2ps).

Yet, the other day I read about ack, which claims to be "better than grep, a search tool for programmers". Woo. Better than grep? In what way?

The ack homepage lists the top ten reasons why one should use it instead of grep. Actually, it's thirteen reasons but then some are dupes. So I'd say "about ten reasons". Let's look at them in order.

  1. It's blazingly fast because it only searches the stuff you want searched.

    Wait, how does it know what I want? A DWIM-Interface at last? Not quite. First off, ack is faster than grep for simple searches. Here's an example:

    $ time ack 1Jsztn-000647-SL exim_main.log >/dev/null
    real    0m3.463s
    user    0m3.280s
    sys     0m0.180s
    $ time grep -F 1Jsztn-000647-SL exim_main.log >/dev/null
    real    0m14.957s
    user    0m14.770s
    sys     0m0.160s

    Two notes: first, yes, the file was in the page cache before I ran ack; second, I even made it easy for grep by telling it explicitly I was looking for a fixed string (not that it helped much, the same command without -F was faster by about 0.1s). Oh and for completeness, the exim logfile I searched has about two million lines and is 250M. I've run those tests ten times for each, the times shown above are typical.

    So yes, for simple searches, ack is faster than grep. Let's try with a more complicated pattern, then. This time, let's use the pattern (klausman|gentoo) on the same file. Note that we have to use -E for grep to use extended regexen, which ack in turn does not need, since it (almost) always uses them. Here, grep takes its sweet time: 3:56, nearly four minutes. In contrast, ack accomplished the same task in 49 seconds (all times averaged over ten runs, then rounded to integer seconds).

    As for the "being clever" side of speed, see below, points 5 and 6

  2. ack is pure Perl, so it runs on Windows just fine.

    This isn't relevant to me, since I don't use windows for anything where I might need grep. That said, it might be a killer feature for others.

  3. The standalone version uses no non-standard modules, so you can put it in your ~/bin without fear.

    Ok, this is not so much a feature as a hard criterion. If I needed extra modules for the whole thing to run, that'd be a deal breaker. I already have tons of libraries, I don't need more undergrowth around my dependency tree.

  4. Searches recursively through directories by default, while ignoring .svn, CVS and other VCS directories.

    This is a feature, yet one that wouldn't pry me away from grep: -r is there (though it distinctly feels like an afterthought). Since ack ignores a certain set of files and directories, its recursive capabilities were there from the start, making it feel more seamless.

  5. ack ignores most of the crap you don't want to search

    To be precise:

    • VCS directories
    • blib, the Perl build directory
    • backup files like foo~ and #foo#
    • binary files, core dumps, etc.

    Most of the time, I don't want to search those (and have to exclude them with grep -v from find results). Of course, this ignore-mode can be switched off with ack (-u). All that said, it sure makes command lines shorter (and easier to read and construct). Also, this is the first spot where ack's Perl-centricism shows. I don't mind, even though I prefer that other language with P.

  6. Ignoring .svn directories means that ack is faster than grep for searching through trees.

    Dupe. See Point 5

  7. Lets you specify file types to search, as in --perl or --nohtml.

    While at first glance, this may seem limited, ack comes with a plethora of definitions (45 if I counted correctly), so it's not as perl-centric as it may seem from the example. This feature saves command-line space (if there's such a thing), since it avoids wild find-constructs. The docs mention that --perl also checks the shebang line of files that don't have a suffix, but make no mention of the other "shipped" file type recognizers doing so.

  8. File-filtering capabilities usable without searching with ack -f. This lets you create lists of files of a given type.

    This mostly is a consequence of the feature above. Even if it weren't there, you could simply search for "."

  9. Color highlighting of search results.

    While I've looked upon color in shells as kinda childish for a while, I wouldn't want to miss syntax highlighting in vim, colors for ls (if they're not as sucky as the defaults we had for years) or match highlighting for grep. It's really neat to see that yes, the pattern you grepped for indeed matches what you think it does. Especially during evolutionary construction of command lines and shell scripts.

  10. Uses real Perl regular expressions, not a GNU subset

    Again, this doesn't bother me much. I use egrep/grep -E all the time, anyway. And I'm no Perl programmer, so I don't get withdrawal symptoms every time I use another regex engine.

  11. Allows you to specify output using Perl's special variables

    This sounds neat, yet I don't really have a use case for it. Also, my perl-fu is weak, so I probably won't use it anyway. Still, might be a killer feature for you.

    The docs have an example:

    ack '(Mr|Mr?s)\. (Smith|Jones)' --output='$&'
  12. Many command-line switches are the same as in GNU grep:

    Specifically mentioned are -w, -c and -l. It's always nice if you don't have to look up all the flags every time.

  13. Command name is 25% fewer characters to type! Save days of free-time! Heck, it's 50% shorter compared to grep -r

    Okay, now we have proof that not only the ack webmaster can't count, he's also making up reasons for fun. Works for me.

Bottom line: yes, ack is an exciting new tool which partly replaces grep. That said, a drop-in replacement it ain't. While the standalone version of ack needs nothing but a perl interpreter and its standard modules, for embedded systems that may not work out (vs. the binary with no deps beside a libc). This might also be an issue if you need grep early on during boot and /usr (where your perl resides) isn't mounted yet. Also, default behaviour is divergent enough that it might yield nasty surprises if you just drop in ack instead of grep. Still, I recommend giving ack a try if you ever use grep on the command line. If you're a coder who often needs to search through working copies/checkouts, even more so.
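Plain GNU grep can approximate ack's ignore behaviour with --exclude-dir; a quick demonstration (scratch paths are arbitrary; GNU grep 2.5.3 or later is assumed for the option):

```shell
# Two copies of a string: one in the working tree, one under .svn.
mkdir -p /tmp/ackdemo/.svn /tmp/ackdemo/src
echo 'needle' > /tmp/ackdemo/.svn/junk.txt
echo 'needle' > /tmp/ackdemo/src/code.c
grep -rn --exclude-dir=.svn needle /tmp/ackdemo   # reports only src/code.c
```

This gets you ack's point 4/5 behaviour for one directory name at a time, though ack's built-in list of VCS directories and file types is still more convenient.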


I've written a followup on this, including some tips for day-to-day usage (and an explanation of grep's sucky performance).


René "Necoro" Neumann writes (in German, translation by me):

Stumbled across your blog entry about "ack" today. I tried it and found it to be cool :). So I created two ebuilds for it:

Just wanted to let you know (there is no comment function on your blog).

[Feb 16, 2008] Google: The new Sourceforge (The Open Road)

The Business and Politics of Open Source by Matt Asay - CNET Blogs

Sourceforge boasts 169,282 registered projects. The actual number of active projects may be as low as 15,000. This is still an impressive number, but it may not be enough to stave off the Google threat.

Just two years after Google kicked off project hosting on its Google Code site, Google is reporting that it now hosts over 80,000 projects. Given how new it is (and how infrequently Sourceforge prunes its projects, if at all), it may well be that Google now has more active projects hosted on its Google Code site than Sourceforge.

The real question, of course, is how important or relevant these projects are. I've not heard of many (any?) high-profile open-source projects moving to Google Code, though there certainly are some making the move.

[Feb 4, 2008] SchilliX 0.6.1 by Jörg Schilling

About: SchilliX is an OpenSolaris-based live CD and distribution that is intended to help people discover OpenSolaris. When installed on a hard drive, it also allows developers to develop and compile code in a pure OpenSolaris environment. SchilliX tries to be as Sun Solaris compatible as possible and to be the optimum development platform for Solaris and OpenSolaris.

Changes: This version was upgraded to use Nevada build 81. A SVR4 package data base was added. /bin/sh is now a Bourne Shell with added file name completion and cursor editable history.

[Jan 17, 2008] Staf Wagemakers chpasswd implementation (changepass)

I tested on Solaris 10 and 9. Works as expected.
changepass manpage

changepass − update a user's password

changepass is a chpasswd clone; it might be useful on platforms that don't have such a command, like Solaris.
Most GNU/Linux distributions have chpasswd(8), and on FreeBSD you can use "pw usermod name -h 0", but many commercial Un*xes don't have a tool like this. An alternative is to update the user's password in a script with usermod, but then it is possible to see the encrypted password in the process list, which is not very secure.
changepass reads a list of user name and password pairs from stdin and updates the users' passwords.
Each line has the format:


-h,--help print this help
-n,--nopam don't use pam
-p,--pam use pam (default)
-e,--encrypt password is already encrypted, this option will disable pam
-m,--md5 use md5 encryption, this option will disable pam
-v,--verbose enable verbose output

[Dec 22, 2007] libev 2.0 by Marc A. Lehmann

About: Libev is a high-performance event loop for C (with optional and separate interfaces for C++ and Perl), featuring support for I/O, timers (relative and absolute, cron-like ones), signals, process status changes, and other types of events. It has both a fast native API and libevent emulation to support programs written using the libevent API. The libev distribution consists of libevent with the core event handling parts replaced by the libev embedded event loop. Differences to libevent include higher speed, simpler design, more features, less memory usage, embeddability, and no arbitrary limits. libev supports epoll, kqueue, Solaris event ports, poll, and select.

Changes: Embed watchers are now functional as documented. A memleak in ev_loop_destroy has been fixed. Epoll has been removed from the embeddable backends set. Export symbol lists that might help embedders have been added. The documentation has been improved greatly to include more portability hints and background information. Functions for finer control over the event loop block and waiting time have been added. A great number of minor portability and compile issues have been fixed.

[Dec 11, 2007] Using Unison to synchronize files between Windows and Solaris

This document describes how to set up Unison to perform synchronization between a Windows laptop and a Solaris system.

What I am trying to achieve is to use the Windows version of Unison, as compiled by Max Bowsher. This version unfortunately has a problem asking for password for the ssh account but following this document should provide an acceptable alternative.

What I do is run Unison on the laptop and make it ssh to the solaris system where the remote files are stored (and backed up).

For this to work, you will need to install a few Cygwin packages (for ssh) and manually install Unison for Windows, and finally set it up so that we can avoid the bug mentioned above.
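The resulting setup usually amounts to a Unison profile with one local root and one ssh root; a hedged sketch (the profile name, paths, and host alias are placeholders):

```
# ~/.unison/work.prf -- hypothetical profile
root = C:/Users/me/work
root = ssh://solarisbox//export/home/me/work
sshcmd = ssh
```

Running `unison work` then synchronizes the two roots over the ssh channel, with the Solaris side holding the backed-up copy.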

[Dec 1, 2007] solaris-friendly pam_cracklib

solaris-friendly pam_cracklib is a reimplementation of pam_cracklib that builds and runs on Solaris as well as Linux. It is a PAM module for checking passwords with cracklib.

[Nov 30, 2007] Project details for Expect-lite

Expect-lite is a wrapper for expect, created to make expect programming even easier. The wrapper permits the creation of expect script command files by using special characters at the beginning of each line to indicate the expect-lite action. Basic expect-lite scripts can be created by simply cutting and pasting text from a terminal window into a script and adding the special characters (such as '>' for a line to send and '<' for output to expect) at the beginning of each line.

Release focus: Major feature enhancements

The entire command script read subsystem has changed. The previous system read directly from the script file. The new system reads the script file into a buffer, which can be randomly accessed. This permits looping (realistically only repeat loops). Infinite loop protection has been added. Variable increment and decrement have been added to support looping.
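As an illustration of the cut-and-paste style described above, a minimal script might look like this. This is a hedged sketch: the exact directive set should be checked against the expect-lite documentation, but '>' marks a line to send and '<' marks output to expect.

```
# hypothetical expect-lite script: verify the OS name and release
>uname -s
<SunOS
>uname -r
<5.10
```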

Craig Miller [contact developer]

[Nov 6, 2007] Project details for sarvant

sarvant analyzes files from the sysstat utility "sar" and produces graphs of the collected data using gnuplot. It supports user-defined data source collection, debugging, start and end times, interval counting, and output types (Postscript, PDF, and PNG). It's also capable of using gnuplot's graph smoothing capability to soften spiked line graphs. It can analyze performance data over both short and long periods of time.

[Nov 5, 2007] OpenSolaris Forums Project Indiana milestone reached! ...

From: Glynn Foster
To: OpenSolaris, OpenSolaris Announce, Indiana Discuss, advocacy-discuss
Subject: [indiana-discuss] Project Indiana milestone reached!
Date: Thu, 01 Nov 2007 16:32:34 +1300

I'm very pleased to announce that the first milestone of Project Indiana is now
available - called OpenSolaris Developer Preview.

It's available for download at

This is an x86-based LiveCD install image containing some new and emerging OpenSolaris technologies. Because this is preview software, it may exhibit instabilities that lead to system panics or data corruption.

Among the features contained in this release are

  o Single CD download, with LiveCD 'try before you install' capabilities

  o Caiman installer, with significantly improved installation experience

  o ZFS as the default filesystem

  o Image packaging system, with capabilities to pull packages from
    network repositories

  o GNU utilities in the default $PATH

  o bash as the default shell

  o GNOME 2.20 desktop environment

For more details about the system requirements along with some basic user
documentation, see -

and the release notes

This milestone preview shows the results of many months of engineering work
through the collaboration of several projects on I would like to
thank those people who have been involved, and to offer my congratulations
on reaching this successful milestone.

Report Bugs
We are very interested in hearing feedback about your experiences with this
release. In particular, if you have issues installing on your hardware we would
love to know.

If you would like to provide feedback, see our bug reporting page for details on
how to do that -

About Project Indiana
Project Indiana is working towards creating a binary distribution of an
operating system built out of the OpenSolaris source code. The distribution is a
point of integration for several current projects on, including
those to make the installation experience easier, to modernize the look and feel
of OpenSolaris on the desktop, and to introduce a network-based package
management system into Solaris.

Rock on!

On behalf of Project Indiana Team

[Aug 8, 2007] BigAdmin Feature Article PostgreSQL in the OpenSolaris OS

Abstract: This article describes key features of PostgreSQL 8.2, which has been available in OpenSolaris since build 66.


[Aug 7, 2007] Expect plays a crucial role in network management by Cameron Laird

31 Jul 2007

If you manage systems and networks, you need Expect.

More precisely, why would you want to be without Expect? It saves the hours that common tasks otherwise demand. Even if you already depend on Expect, though, you might not be aware of the capabilities described below.

Expect automates command-line interactions

You don't have to understand all of Expect to begin profiting from the tool; let's start with a concrete example of how Expect can simplify your work on AIX® or other operating systems:

Suppose you have logins on several UNIX® or UNIX-like hosts and you need to change the passwords of these accounts, but the accounts are not synchronized by Network Information Service (NIS), Lightweight Directory Access Protocol (LDAP), or some other mechanism that recognizes you're the same person logging in on each machine. Logging in to a specific host and running the appropriate passwd command doesn't take long (probably only a minute, in most cases). And you must log in "by hand," right, because there's no way to script your password?

Wrong. In fact, the standard Expect distribution (full distribution) includes a command-line tool (and a manual page describing its use!) that takes over precisely this chore. passmass (see Resources) is a short script written in Expect that makes it as easy to change passwords on twenty machines as on one. Rather than retyping the same password over and over, you can launch passmass once and let your desktop computer take care of updating each individual host. You save yourself enough time to get a bit of fresh air, and you spare yourself multiple opportunities for the frustration of mistyping something you've already entered.
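A typical invocation looks something like this (the hostnames are placeholders; passmass prompts for the old and new passwords once and then applies the change on each host):

```
passmass host1 host2 host3
```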

The limits of Expect

This passmass application is an excellent model, as it illustrates many of Expect's general properties:

[Jul 16, 2007] ISECOM - OSSTMM Open Source Security Testing Methodology Manual by Pete Herzog

The Open Source Security Testing Methodology Manual (OSSTMM) is a peer-reviewed methodology for performing security tests and metrics. The OSSTMM test cases are divided into five channels (sections) which collectively test: information and data controls, personnel security awareness levels, fraud and social engineering control levels, computer and telecommunications networks, wireless devices, mobile devices, physical security access controls, security processes, and physical locations such as buildings, perimeters, and military bases.

The OSSTMM focuses on the technical details of exactly which items need to be tested, what to do before, during, and after a security test, and how to measure the results. New tests for international best practices, laws, regulations, and ethical concerns are regularly added and updated.

Provided here is the latest public release. To receive OSSTMM development status, notes, and betas, become part of the team. Subscribe now to join the ISECOM Gold or Silver Team or contact us at osstmm<at> with how you can help OSSTMM development and earn a place on the core development team.

[Jul 10, 2007] BigAdmin Feature Article Installing and Using GNU Screen by Amy Rich

A precompiled package for screen is available from

Common Screen Tasks

Starting and Selecting Windows

Once screen is installed, it can be used without any further configuration. First run /usr/local/bin/screen to start a session. As mentioned previously, this starts one virtual shell window. Generally, additional shell windows are desirable and can be started with the prefix key followed by C-c, C-a C-c. This runs the screen command screen. Additional windows can also be run by entering screen's command mode and entering the command there. The command mode is entered by the key sequence C-a :. Once in command mode, type in screen and hit return. Every command that can be run by entering a key sequence can also be run by name from screen's command mode.

In addition to shell windows, screen can also attach directly to serial devices. This is quite useful when installed on a machine acting as a console server to a number of other machines or on a machine directly attached to a modem. To attach directly to /dev/ttyb, for example, enter command mode and give the screen command port as an argument: C-a : screen /dev/ttyb. This is shorthand for C-a : screen cu -l /dev/ttyb.

Once a screen session has multiple virtual windows, the user needs to easily switch between them. Like a TV remote, screen can access windows by using a wraparound previous/next mechanism or by specifying the window directly. Each window has an associated number, which gives it its place in the ring. To obtain a listing of all windows, enter the key sequence C-a w. To obtain information about the current window, enter C-a i.

To switch to the next window in sequence, enter the key sequence C-a C-n, and to switch to the previous numbered window, C-a C-p. To hop directly to a window, enter the key sequence C-a # where # is the number of the window. For example, if there were a shell running in window 2, switch to it using C-a 2. To see a listing of all virtual windows and select one to switch to, enter C-a ". To hop back to the window last displayed, enter C-a C-a.

History, Cut and Paste, Logging, and Monitoring

When the user is working on a terminal that has no mouse, screen offers the capability to cut and paste by using a virtual clipboard. The key sequence C-a C-[ enters copy/history scrollback mode and allows the use of (mostly) vi-style syntax to navigate through the scrollback buffer. The motion options available in copy/history scrollback mode are covered in detail in the man page. The copy range is specified by setting two marks. The text between these marks will be highlighted and stored into the paste buffer. Press the space bar to set the first and second marks, respectively. To paste the text just saved to the buffer, go to the appropriate location in the desired window and enter the key sequence C-a C-].

Activity in a screen virtual window can be logged to a file, much like the UNIX script command does for an interactive session. To toggle logging of activity in the window to the file screenlog.#, where # is a number starting at 0, enter the key sequence C-a H. Along the same lines, a window can be watched for any activity. If the user is in window 3 and activity occurs in window 2, a message will be displayed at the bottom of the screen session if window 2 is being monitored. To toggle monitoring of the current window on the fly, use the key sequence C-a M.

Locking and Detaching, and Reattaching

Perhaps the two most useful features of screen are the ability to lock the terminal and the ability to disconnect the session and later reconnect. To lock the terminal (or xterm, if called from one), enter the key sequence C-a C-x. This runs /usr/bin/lock or an internal function and does not pass any input to the screen session from that terminal until the user's password is entered correctly. Processes in each window continue to run just as if the screen session were detached.

There are two ways to detach a screen session, power detach and a regular detach. In a regular detach (C-a C-d), the screen session is detached and the user is returned to the shell from which screen was invoked. In a power detach (C-a D D), the screen session is detached and the user is logged out of the calling shell. The user can also kill all windows and terminate screen instead of detaching by entering the key sequence C-a C-\.

Screen sessions can also be detached from outside the screen session, which is useful for stealing a session after changing physical locations. Again, sessions can be detached regularly or power detached, but if a user runs more than one screen session, the correct session to detach must first be determined. This is done by issuing screen -ls from the command line. On a machine called hostname where the user username is attached to two sessions, the output of the aforementioned command will look like:

% screen -ls
There are screens on:
        651.pts-5.hostname  (Attached)
        16405.pts-12.hostname       (Attached)
2 Sockets in /tmp/screens/S-username.

To detach the session 651.pts-5.hostname run one of the following commands, the first being a regular detach and the second being a power detach:

screen -d 651.pts-5.hostname
screen -D 651.pts-5.hostname

If there was only one active session, screen could be called without the session name:

screen -d 
screen -D

There are a variety of ways to reattach to a detached session, some of which will even detach the session first if needed. Each of the following is a command-line option to the screen program:
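From the screen(1) man page, the usual reattach variants are:

```
screen -r               # reattach to a detached session
screen -r sessionname   # reattach to a specific detached session
screen -d -r            # detach the session elsewhere first, then reattach
screen -D -R            # power detach elsewhere if needed; create a session if none exists
screen -x               # attach without detaching (share the session)
```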

When a screen session dies, either because the machine rebooted or the process got killed or corrupted, the dead socket file can be left in the screen directory. A dead screen session cannot be reattached, and the sockets should be cleaned up. These dead screens are visible with the screen -ls command and can be cleaned out with the screen -wipe command.

Customizing Screen

Screen can be effectively run without any configuration at all, but most sysadmins will want to change some default behaviors and create shortcuts with key bindings. Most customization occurs via the screen resource files, though sessions can also be customized from the invoking command line or on the fly. Directives in the resource files set options, bind functions to keys, and automatically start virtual windows at the beginning of the session. Each directive is listed one per line with the arguments separated by tabs or spaces. The pound sign (#) acts as the comment delimiter, and any text appearing on a line after one is ignored. Any blank lines in the file are ignored. The arguments section of each directive can contain references to environment variables as well as plain text.

Here's a short example $HOME/.screenrc file containing comments for each directive:

# set some options
activity "activity: window ~%"  # Message when activity occurs in a window
vbell_msg "bell: window ~%"     # Message for visual bell
vbellwait 2                     # Seconds to pause the screen for visual bell
allpartial off                  # Refresh entire screen on window change
autodetach on                   # Autodetach session on hangup instead of
                                # terminating screen completely
bufferfile /tmp/screen-buffer   # Filename for the paste buffer
chdir                           # Change to the home directory
escape "``"                     # Redefine the prefix key to ` and define a
                                # literal ` as ``
shelltitle $HOST                # Set the title of all shell windows
defflow off                     # Set the default flow control mode
defmode 0620                    # Set the default mode of each pseudo tty
defscrollback 200               # Set the default number of scrollback lines
deflogin off                    # Do not register the window in utmp
startup_message off             # Disable startup messages

# virtual windows to start when screen starts
screen -t emacs@$HOST -h 0 1 /usr/local/bin/emacs -nw
                                # Start emacs in window 1 with a scrollback
                                # buffer of 0
screen -t tcsh@$HOST -ln -h 100 2
                                # Start a shell with the title tcsh@$HOST.
                                # Turn off login mode (remove the window
                                # from utmp), use a scrollback of 100 lines,
                                # and start the shell in window 2 (or the
                                # next available window)
monitor on                      # Monitor the above shell window

# keymap for use with the prefix key (backquote)
bind ' ' windows                # Show listing of all windows
bind 'a' prev                   # Previous window
bind 'c' copy                   # Copy paste buffer
bind 'e' screen -t emacs@$HOST -h 0 1 /usr/local/bin/emacs -nw
                                # Create new emacs window
bind 'i' info                   # Show info about the current window
bind 'n' next                   # Next window
bind 's' screen -t tcsh@$HOST -ln -h 100  # Create new shell window

As shown above, one very common modification is changing the prefix key from C-a (used in emacs to go to the beginning of the line) to something less frequently used. Picking an alternate prefix key can be difficult if the user makes full use of all of the keys; the alternate is usually a seldom-used combination involving the escape or control key. This makes for extra typing, of course, so one-key prefixes are optimal if the prefix key sees a lot of use.


The screen(1) man page contains a wealth of information for the power-user as well as the novice. It lists the defaults for the large number of customizable options, key bindings, and command-line arguments, as well as providing a few examples. Other resources include:

[Jul 10, 2007] chandanlog(3C) The Story of OpenGrok - the Wicked Fast Source Browser

The initial version of OpenGrok was a Perl script named that extracted the above 5 streams and piped them to a Lucene search engine. had become more intelligent. It was now running each file through ctags and extracting definitions. It also parsed out program identifiers. It would run dis(1) on ELF files and extract labels and call-statement symbols.

I called it the Universal Program Search Engine. I was using this on my machine for quite some time. This system was used to confirm or deny the existence of several vulnerabilities. For example, I used it to confirm that no code in Solaris 7 was calling gzprintf(), which was the cause of CVE-2003-0107. Now I could pinpoint affected areas in Solaris for each newly discovered security hole.

Perl to Java

I chose Perl because it was very easy and quick to code, and I could use its efficient data structures. It was really quick to prototype a design and make sure it actually worked. But I realized that choosing Perl for a long-term solution was a mistake. Perl is great for one-time, use-and-throw applications. When I profiled the processes, the Java process was mostly waiting for Perl to parse the text. Processing the entire program tree, source and binaries, took almost half a day. After profiling the Perl code and making some optimizations, I could reduce the time to about 8-9 hours. Perl was consuming too many compute cycles, despite my script being only a couple of hundred lines.

[Jul 10, 2007] OpenGrok at

OpenGrok is a fast and usable source code search and cross reference engine. It helps you search, cross-reference and navigate your source tree. It can understand various program file formats and version control histories like SCCS, RCS, CVS, Subversion and Mercurial. In other words it lets you grok (profoundly understand) the open source, hence the name OpenGrok. It is written in Java.

OpenGrok is the tool used for the OpenSolaris source browser and search.



[Jul 2, 2007] BigAdmin Solaris Information Center - How To Solaris Express, Developer Edition Installation

The Solaris Express, Developer Edition release provides a quick installation program that steps you through an installation. This release includes the latest tools, technologies, and platforms to create applications for Solaris, Java, and Web 2.0 for x86 based systems.

" A 10-minute video steps you through the installation
" Frequently Asked Questions (FAQ)
" Getting Started Guide

[Jun 27, 2007] BigAdmin Submitted Article A Script Template and Useful Techniques for ksh Scripts by Bernd Schemmer

This article discusses a script template for ksh scripts. I use this script template for nearly all the scripts I write for doing day-to-day work. I'm pretty sure that every system administrator who is responsible for more than a few machines running the Solaris Operating System has her own bag of scripts for maintaining the machines. Nevertheless, the script template and the programming techniques discussed in this article might be useful for them also.

The script template is released under the Common Development and Distribution License, Version 1.0; a link to download the script is at the end of this article.
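The template itself must be downloaded from the article, but the general techniques it describes can be sketched in a few lines of ksh. This is an illustrative sketch only: the function names and messages below are made up, not taken from Schemmer's template.

```shell
#!/bin/ksh
# Illustrative sketch of common script-template techniques (names are
# hypothetical): fail on unset variables, log with timestamps, and run
# a cleanup handler on exit.
set -u

LogMsg() {
    printf '%s %s\n' "$(date '+%Y-%m-%d %H:%M:%S')" "$*"
}

cleanup() {
    LogMsg "INFO: cleaning up before exit"
}
trap cleanup EXIT

LogMsg "INFO: script started"
```

The EXIT trap fires whether the script ends normally or is interrupted, which is what makes this pattern useful for day-to-day maintenance scripts.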

[Jun 22, 2007] Automated FTP using expect - BigAdmin Description

This is a pretty trivial example which probably could be done better with Perl, but the key idea is right: Expect should be a standard tool in any decent sysadmin's arsenal.
Automated FTP (expect)

Description: This is an expect script to automate ftp'ing a file to a host.

Shows how to script using expect.

Contact: N/A

Submitter: Niranjan Reddy
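The script itself must be downloaded from BigAdmin, but the core of any such script is a simple spawn/expect/send sequence. The sketch below is hedged: the host, user, password, and file names are placeholders (and hard-coding a cleartext password is one reason sftp or scp is usually preferable):

```
#!/usr/local/bin/expect -f
# hypothetical automated ftp upload; all names below are placeholders
set timeout 30
spawn ftp ftp.example.com
expect "Name*:"    { send "jdoe\r" }
expect "Password:" { send "s3cret\r" }
expect "ftp>"      { send "put report.txt\r" }
expect "ftp>"      { send "bye\r" }
expect eof
```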

[Jun 21, 2007] New release of ksh93 -- release s+

The AT&T Software Technology ast-ksh package from AT&T Research contains ksh and support libraries. This is the minimal set of components needed to build ksh.

2007-03-28 BASE * SOURCE 1646999 9227250fa6ad2235cc8665bb664cc740 INIT
2007-03-28 BASE * sol8.i386 5009924 771a2be637c187cc8b169b316fffd5ec INIT
2007-03-28 BASE * sol8.sun4 4207051 084f75d377ee8c0c895db11d94a89a05 INIT
2007-03-28 BASE * sol9.sun4 3867447 f32b5b96ae0070565f10bce5e32a3937 INIT
Yes -- a new release in only 3 months. This release contains fixes and features that address the issues raised on the { ast-users uwin-users ksh-solaris-integration } lists. Thanks to all who helped. A summary of recent ksh93 changes:
  1. Double precision floating point arithmetic with full C99 arithmetic support on systems that provide the C99 arithmetic functions. The numbers Inf and NaN can be used in arithmetic expressions.
  2. TAB-TAB completion generates a numbered list of completions from which the user can select.
  3. Support for processing/handling multibyte locales (e.g., en_US.UTF-8, hi_IN.UTF-8, ja_JP.eucJP, zh_CN.GB18030, zh_TW.BIG5 etc.) has been extensively revised, tested, and is now supported even on the language level (e.g. variable and function identifiers may contain locale specific codeset characters).
  4. /dev/(tcp|udp|sctp)/host/service now handles IPv6 addresses on systems that provide getaddrinfo(3).
  5. The ability to seek on a file by offset or content with new redirection operators.
  6. A new --showme option which allows portions of a script to behave as if -x were specified while other parts execute as usual. This simplifies the coding of make -n style semantics at the script level by eliminating code replication. In particular, io redirections are handled by --showme.
  7. The [[...]] operator =~ has been added which compares the string to an extended regular expression rather than == which compares against a shell pattern.
  8. The printf(1) builtin has been extended to support the = flag for centering a field. The # flag when used with %d and %i provides values in units of thousands or 1024 respectively with an appropriate suffix added.
  9. Example screenshots from joint work with the Solaris ksh93 integration project are available here.
See the release change log for details.
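Item 7 above can be tried directly; the following sketch also runs under bash 3.x, which shares the operator (the version string and pattern are illustrative):

```shell
ver="ksh93s+"
re='^ksh[0-9]+[a-z][+]?$'

# =~ matches the string against an extended regular expression ...
if [[ $ver =~ $re ]]; then
    echo "ERE match"        # prints: ERE match
fi

# ... while == matches against a shell pattern
if [[ $ver == ksh* ]]; then
    echo "pattern match"    # prints: pattern match
fi
```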
This release, almost a year from the last big release, contains changes based on feedback from the { ast-users ast-developers uwin-users uwin-developers } lists and the ongoing ksh93-solaris integration project. Thanks to all who helped. Our resolution this year is to increase release frequency to keep internal and external source/binaries more in sync. See the release change log for details.
This release fixes a few packaging missteps from 2006-01-24 and syncs the ast and uwin source releases. See the release change log for details. The download site is being serviced by a new host. The intention is to preserve URLs, but intervening caches may foil that intent. Details of the server change follow in case you run into trouble. The old host www was SGI, the new one public is Linux. Both run Apache. URLs prefixed by will go to the old server, which will map the prefix to the new one. Eventually the mapping will disappear when www is retired and public takes on the name www.
It's been almost a year since the last release, but we haven't been idle:
  • ksh(1) release 93r new features:
    1. The brace expansion option (-B, --braceexpand) expands {first..last[..incr][%fmt]} sequences.
    2. Redirection operators can be immediately preceded by {vname}, as in {n}>file, which allows the shell to choose the file descriptor number and store it in vname.
    3. Redirection syntax <# ((expr)) added to position file descriptor at offset specified by evaluating arithmetic expression expr.
    4. Shell pattern matching extension for matching nested groups while skipping quoted strings.
    5. The multiline option (--multiline) allows lines longer than the column width to be edited using multiple lines.
    6. The integer and float aliases now default to the longest integral and floating types on the system.
  • ast-open sort(1) now supports plugins, including -lsum for record summation, -lsync for IBM dfsort (aka mainframe syncsort), and -lvcodex for intermediate and output file compression.
  • The ast-open vczip(1) command and vcodex(3) base library have been added. vcodex is a grand unification of compression, encryption and data transformation methods. Software the way it should be -- small, composable, influencing paradigms in unexpected ways.
  • The ast-open dss(1) command, base library, and plugins have been added. dss supports efficient data stream scanning, schema specification, and dynamic data types. dss dynamic data types will be integrated into ksh(1) extensible types in the next release.
  • And, not to be left out of the latest fad, not one but two command line sudoku solver/generator programs in ast-sudoku to burn cycles and brain cells. There is some good math in there, including respectable order N QWH (quasigroup with holes / latin square completion) results.
  • Finally, see the release change log for details.
  • ast and UWIN source and binaries are now (finally) covered by the OSI-approved Common Public License Version 1.0.
  • The license agreement prompt is back -- it's either that or we don't post source. The prompt mechanism works with text-only and command-line browsers -- see the second paragraph of the main download page for details.
  • If the file $INSTALLROOT/bin/.paths contains the line BUILTIN_LIB=cmd then the ast libcmd enters the ksh(1) command $PATH search when $INSTALLROOT/bin is hit. i.e., if you place $INSTALLROOT/bin before /bin or /usr/bin in $PATH then builtin ast libcmd versions of cp, rm etc. will be run instead of standalone executables. This may provide significant speedups for some shell script applications.
  • After 20 years AT&T nmake(1) finally has regression tests -- up to now packaging, bootstrapping and building ast packages was the only test.
  • cp(1), date(1), ls(1), nmake(1), pax(1), and touch(1) now support nanosecond time resolution, due mostly to the fact that most of the new nmake regression tests would have failed to detect sub-second changes from one test to the next. As it is we have some machines that get > 10 compiles per second.
  • This release has quite a few malloc and ksh/malloc bug fixes. Thanks to the users who provided detailed bug reports through many rounds of testing.

readme for ast-ksh package

07-03-08 --- Release ksh93s+ ---
07-03-08 A bug in which set +o output command line options has been fixed.
07-03-08 A bug in which an error in read (for example, an invalid variable
name) could leave the terminal in raw mode has been fixed.
07-03-06 A bug in which read could core dump when specified with an array
variable with a subscript that is an arithmetic expression has
been fixed.
07-03-06 Several serious bugs with the restricted shell were reported and fixed.
07-03-02 If a job is stopped, and subsequently restarted with a CONT
signal and exits normally, ksh93 was incorrectly exiting with
the exit status of the stop signal number.
07-02-26 M-^L added to emacs mode to clear the screen.
07-02-26 A bug in which setting a variable readonly in a subshell would
cause an unset error when the subshell completed has been fixed.
07-02-19 The format with printf uses the new = flag to center the output.
07-02-19 A bug in which ksh93 did not allow multibyte characters in
identifier names has been fixed.
07-02-19 A bug introduced in ksh93 that causes global compound variable
definitions inside functions to exit with "no parent" has been fixed.
07-02-19 A bug in which using compound commands in process redirection
arguments would give syntax errors <(...) and >(...) has been fixed.
07-01-29 A bug that could cause the shell to core dump when a built-in
exits without closing files that it opened has been fixed.
07-01-26 A bug in which ~(E) in patterns containing that are not inside ()
has been fixed.

[Jun 18, 2007] Some tuning and optimization tools

[Jun 7, 2007] MultiTail

For Solaris, up-to-date binaries can be retrieved from:

MultiTail lets you view one or multiple files like the original tail program. The difference is that it creates multiple windows on your console (with ncurses). It can also monitor wildcards: if another file matching the wildcard has a more recent modification date, it will automatically switch to that file. That way you can, for example, monitor a complete directory of files. Merging of 2 or even more logfiles is possible.

It can also use colors while displaying the logfiles (through regular expressions), for faster recognition of what is important and what not.

It can also filter lines (again with regular expressions). It has interactive menus for editing given regular expressions and deleting and adding windows. One can also have windows with the output of shell scripts and other software.

When viewing the output of external software, MultiTail can mimic the functionality of tools like 'watch' and such.

For a complete list of features, look here.
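A few hedged invocation examples (the log file paths are typical Solaris ones, and the flags should be verified against the MultiTail man page):

```
multitail /var/adm/messages /var/log/syslog      # two files in two windows
multitail /var/adm/messages -I /var/log/authlog  # merge two logs into one window
multitail -l "iostat -x 5"                       # window showing a command's output
```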

[Jun 6, 2007] Project details for Zonestats

Zonestats creates an RRD database with the values of CPU and memory (RSS) usage per Solaris 10 zone. It requires only the RRDs Perl module.

[Jun 1, 2007] Sun hopes Project Indiana will help OpenSolaris


Indiana will fit on a single CD and be updated every six months, Foster said. "With a focus on the user experience, it is hoped that with wide distribution, the OpenSolaris ecosystem will grow, providing valuable feedback to the project."

And although Foster said the project is intended to be grassroots and consensus-driven, "there may be a real need for a sole arbiter, Ian Murdock," who is Sun's chief operating systems officer and a founder of the Debian version of Linux.

[May 22, 2007] Solaris Express, Developer Edition 02-07 in VMware " Dave's Blogs

On your first boot, you might want to log in to the console as root so that the system automatically goes through "post install setup". You might also want to install VMware Tools. If you need help with that, see Appendix A in this blog entry. I had to comment out unwanted extra-large resolution listings in /etc/X11/xorg.conf in order to keep the resolution at 1024×768. Perhaps this could be a minor RFE for the VMware team.

[May 22, 2007] Solaris Express Developer Edition

It does not make sense to spend a significant part of your life installing open source applications -- you might be better off developing a new one ;-). And even the time spent installing an OS can be spent more productively on developing applications. See also VMware AMD64 Image Download.

Solaris Express Developer Edition is an OpenSolaris-based distribution for x86 that includes the latest tools, technologies, and platforms to create applications for Solaris OS, Java Application Platform, and Web 2.0.

Available at no cost, Solaris Express Developer Edition is regularly updated to incorporate new functionality that helps application developers create better applications -- faster. Developers can create high-performance applications using this distribution and deploy them to the Solaris 10 OS.

Develop your applications using Solaris Express Developer Edition and deploy to Solaris 10. For applications that use Solaris APIs, we encourage you to download and use the Solaris Ready Test Suite to verify use of Solaris 10 APIs. In addition, you should do your final build on a Solaris 10 server before deploying.

Developer support options are available for code support, programming and technical assistance. Recognized industry-wide, Sun offers developer training and certification courses for Solaris, Java, and Web 2.0 developers.

The 2/07 release of Solaris Express Developer Edition is only for x86-based laptops and desktops. Developers on SPARC systems can obtain similar functionality by downloading the latest Solaris Express Community Edition build 55 (CD) or (DVD) and installing Sun Studio 11 for OpenSolaris and NetBeans IDE 5.5 with NetBeans Enterprise Pack 5.5. Future Solaris Express Developer Edition releases will include support for both x86 and SPARC platforms. VMware for Solaris Express Developer Edition is also available.

[Apr 22, 2007] Solaris Express, Developer Edition Desktop by Solaris Express Developer Edition Team

The Solaris Express, Developer Edition desktop contains Vino-server -- a remote desktop server based on the VNC protocol that allows remote desktop takeover and remote desktop viewing. That saves you the task of manually installing and configuring a VNC server. For more information, see the vino-preferences and vino-server man pages.
See also

[Apr 21, 2007] Unplugged Sun chief engineer Rob Gingell - News - Builder AU By David Berlind, Special to ZDNet | 2002/08/30 12:20:01

Rob Gingell is accustomed to herding cats.

He has spent much of his 17-year career at Sun Microsystems trying to get the other technology gurus at the company to follow his lead. As the chief technologist for Sun's system software group, Gingell ran herd on Solaris, Java, and the entire portfolio of servers and development tools. Four months ago he was appointed Sun's chief engineer, and now is responsible for crafting a cohesive strategy as Sun moves from its first-generation systems based on Unix to a second generation oriented around Java.

Gingell talks about his desire to open source Solaris and intermarry it with Linux. He also discusses his focus on other parts of the software stack, especially Java, and why he believes Sun will succeed at a time when Solaris and SPARC are no longer the company's crown jewels. Get an inside look at Sun's strategy in this first instalment of the two-part interview with Gingell. When you're done with Part I, be sure to check out Part II, in which Gingell talks about how he thinks history will repeat itself--to Sun's benefit.

As Sun's chief engineer, what do you do that's different from the other technology chiefs in the company?

My charter is conceptual integrity. Until a couple of months ago, we never had a chief engineer, so this job is different in that respect. Prior to that, I was the chief technologist for the software systems group, which included Solaris, Java, the iPlanet products, and the development tools.

Within Sun, we have a bunch of chiefs primarily because the structure of the company is a little recursive. Every manager of a large staff has their person responsible for representing the technology interests and portfolio of that division. As a software chief technologist, my job was portfolio management, but I did the architectural stuff as well, so I was fairly unique in that regard. Yeah, we do have a lot of chiefs. I've been on a lot of annoying panels where you start to wonder if anyone is doing any work with all these chiefs.

What do you mean by "conceptual integrity?"

That's a short description that I use. If I'm successful, then when customers buy a stream of our products and slap them together, they ought to be working. If it happens that they slap them together and they don't work, or they stop working, or work in unexpected ways, that's probably a failure of architecture. At some level, I need to figure out why that happened and make sure we put things in place to put it back together.

My goal in life is to make sure that all the brains in all these buildings [at the various Sun campuses] are effectively employed and create as much as they can. If only one person creates the ideas, you only get one person's worth of ideas. I'd much rather have 30,000 people's worth of ideas. It's always much more powerful, although you have to deal with the arbitration between the conflicting ideas.

Company officials that I've met with in the past have talked about how running Sun was like herding cats, with a lot of diverse interests running in different directions. How much of what you do is focused on keeping the company going in one direction so others can see what the mission is and see what the future is like?

A lot of it is like that. I actually hope that it's never true that the herding cats phenomenon vanishes from Sun. Some of the chaos you're referring to is what makes us interesting and vital, and keeps us from getting locked into a "we're doing this because we did it last week" mentality. That level of chaos, while it's annoying at times, is also fairly powerful because it's the product of having all those brains usefully applied. Where it's a negative is when you have no way of arbitrating the chaos. That goes back to my arbitration role, which I did locally in the software group for many years. It's a new scope expansion to consider doing it for everything all at once.

If I'm successful, we'll more efficiently surf the froth off that chaos, mine it more effectively, and more quickly translate it into "OK, this is where we are going and how that idea over there contributed. Next idea, please."

Where you have this chaos and you see it as a positive, your customers certainly don't necessarily feel the same way. Is there a disconnect?

I haven't personally run into that many customers who are confused about what we're doing. Some of the publications are more confused about what we're doing than some of the customers are, although I certainly don't talk to all of our customers. I've been at Sun for 17 years and I haven't woken up on any day confused about what we're doing or why we're doing it. What is going on is there are a lot of people at Sun who have not been there as long.

There's a lot of primate behaviours in any large organization. The things that everyone works on--those trees that they're staring at a lot--are sometimes confused for the forest. I won't predict that if you talk to a random selection of employees in the hall that they'll all tell you the same thing, but I'll bet most of what they'll tell you can be mapped to the same essential thing.

The way I think of it is that we're moving from our first generation of systems to our second generation of systems. Our first generation of systems was designed to run the Unix application base, and the second generation of systems is designed to run the Java application base. They incorporate the Unix base into it, but that's a definite shift in the structure of what our products are.

[Apr 9, 2007] BigAdmin Feature Article Deploying JBoss Application Server on Sun Fire T2000 Servers by Viet Pham (Sun Microsystems) and Phillip Thurmond (JBoss), April 2007

It looks like JBoss performance can be significantly improved by proper tuning of TCP/IP stack and system parameters.

[Apr 5, 2007] LinuxWorld preview Sun dishes on Linux identity management

In a recent article on synching Linux servers with AD, users told us that using Sun NIS was a "big no-no" in terms of Sarbanes-Oxley compliance. What have you heard on this?

Sigle: Many of those users are today moving from [Network Information Service] to LDAP. This is because with LDAP you get native security built into it, like SSL. With a customer I visited just last week, a large telecom, they had 20 different NIS domains, and they were planning on consolidating those into one infrastructure. They were putting those all into LDAP, centralized LDAP. Their domains will all still have multiple domain names but will instead be centralized into an LDAP tree.

Preview what you're going to be doing at LinuxWorld and why IT managers might want to attend.

Sigle: Basically it will be [about] identity management and access management options in the open source space conducted in a panel format with Gianluca Brigandi, Founder and System Architect of the JOSSO Project, and Anthony Nadalin from IBM. We will cover enterprise-to-customer relationships, business-to-business relationships, and so on in the ID space. Eventually, the conversation will end with a slide that I call the alphabet soup. It will list all the current identity standards like OpenSSO, JOSSO, OpenID -- all the buzzwords.

Could you provide a little perspective on some of these standards, like Sun's OpenSSO for example?

Sigle: Sun has been an industry leader with Directory Server, all the way back to the Netscape days. Many enterprises and many telecoms in the market run Directory Server for their customers, and many large telecoms run millions of identities in Directory Server. About a year ago, Sun architected a new Directory Server with all those aforementioned standards in mind. Sun then donated the code to the open source community. At some point in time, the plan is for Sun to take a snapshot of the open code, wrap support around it, and that will most likely be the next version of a directory server we support as a company. That's one to two years away however. For now with OpenSSO, we took [Directory Server] and its access management capabilities and basically released all the source code. Going forward it will be the same scenario as Directory Server: we'll release a commercial snapshot of OpenSSO in the future.

Has there been any user confusion regarding the number of standards?

Sigle: In the past I worked with telecom customers and I heard that complaint all the time. Customers wanted to know how all these standards were going to talk to one another. Even if we delve into one of the collaborative efforts like the Liberty Alliance [which is comprised of Sun Java System Access Manager, OpenSSO, Lasso (Liberty single sign on) and HP Select Federation], there are different phases and specs. There's a bunch of stuff in there, and a lot of these standards drive toward the same goal. Recently we have started to get clarity with standards like the Security Assertion Markup Language from the OASIS, which has risen to the top. But customers are still asking when to use one over the other. When you are talking standards, there is no real company that is trying to appease all the standards at once.

[Apr 2, 2007] VNC on Solaris 10 Installing and configuring VNC from the software companion CD

See also VNC on Solaris

Solaris 10 can be downloaded from sun's web site. Also available from the download pages is an image of a "Software Companion CD".

This CD contains extra freeware products which can be added post-installation. One of these is the excellent Virtual Network Computing (VNC) package.

VNC allows remote desktop access to computers over a network. It can be used between Microsoft and Unix/Linux operating systems.


Insert the Solaris Companion CD and allow the volume management daemon to mount it. There are three packages which are required. In this example they are converted into package datastreams on the hard disk:

# cd /cdrom/cdrom0/sparc/Packages
# pkgtrans . /opt/SFWgcmn.pkg SFWgcmn
# pkgtrans . /opt/SFWgcc34l.pkg SFWgcc34l
# pkgtrans . /opt/SFWvnc.pkg SFWvnc
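The datastream files produced above can then be installed with pkgadd; a sketch of the usual sequence (run as root on the Solaris box, answering the installation prompts as needed):

```shell
# Install each package from its datastream file (pkgadd -d <datastream> <pkginst>)
pkgadd -d /opt/SFWgcmn.pkg SFWgcmn
pkgadd -d /opt/SFWgcc34l.pkg SFWgcc34l
pkgadd -d /opt/SFWvnc.pkg SFWvnc
```

The companion-CD packages install under /opt/sfw by default, so you may want to add /opt/sfw/bin to your PATH afterward.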

[Apr 2, 2007] BigAdmin Submitted Tech Tip Installing Non-Core Perl Modules

Stuart Abrams-Humphries, March 2007

This document shows how to install non-core Perl modules in environments running the Solaris Operating System and Linux. This procedure can give application and development teams more control over what Perl modules they use, upgrade, and remove. (Note: This should work with all versions of the Solaris OS and Linux.)


To enable a simple installation of Perl on all machines, application and development teams can install non-core Perl modules in directories they control. Installing non-core Perl modules lets teams use, upgrade, and remove non-core modules without affecting any extra Perl modules that have been added to a machine. It also means that the modules can be used with an upgraded version of Perl, if necessary. Additionally, in a clustered environment, the non-core Perl modules can be installed on a shared disk so they will be consistent across all nodes in a cluster when a service group fails over between machines.


The non-core Perl module needs to be built with the same version of the C compiler as Perl itself was built with. To see which compiler Perl was built with, type the following command:

perl -V

Then ensure that the same compiler version is installed on the machine, whether it's the GNU Compiler Collection (GCC) or the Sun Studio C compiler (or an older version).


Follow these steps to install a non-core Perl module on a machine:

1. Download the Perl module source (for example, from CPAN, the Comprehensive Perl Archive Network web site).

2. Normally, you will have a <modulename>.tar.gz file.

3. Extract the module file:

tar -xvzf <modulename>.tar.gz

Alternatively, you can decompress the file using gzip -d and then extract the file using tar -xvf.

4. Create a makefile for the module by typing the following command:

perl Makefile.PL PREFIX=<moduleinstalldir> LIB=<moduleinstalldir>/lib

5. Run the make command:

make
6. Install the module:

make install
Using the New Module

To use the newly installed Perl module, insert the following lines at the top of your Perl scripts:


use lib '<moduleinstalldir>/lib';
use mod::bang;

[Mar 27, 2007] Ian Murdock Making Solaris more like Linux Between the Lines by Dan Farber & Larry Dignan

March 27th, 2007

.... He noted that the Web as a platform pushes the importance of an operating system down the stack--most cool, new applications today are not written for a specific operating system. However, the Linux community can provide valuable lessons about how software evolves under shifts in architecture, monetization, and technology.

Over the years, the Linux community has been adept at building a developer ecosystem, and at articulating, packaging and integrating the technology, Murdock said. "The Web as a platform has to do a lot of the same things the OS had to do--build an ecosystem, beyond people willing to roll up their sleeves and backing their way into how the system works. It's how to make it more of a coherent platform," Murdock said. "Guess what--the OS guys know how to do that, and the open source OS guys have an advantage."

As a brand-new Sun employee, Murdock is thinking about making Solaris more Linux-like. "When people say Linux, what do they mean? Linux is a kernel. Cool apps are not written to the kernel. The OS powers higher levels of the stack. What we want is an open OS platform and to make sure that the existing skill sets and knowledge and training investments are leveraged. We don't want to make them learn a new product or rip and replace," Murdock said. "You can make a real argument that Solaris innovated more than Linux in the last few years--such as DTrace and ZFS--but usability stands in the way of appreciating that," Murdock said. "Part of what we are working on is closing the usability gap so that it doesn't stand in the way."

"There is no reason we can't make Solaris look and feel more like Linux," he continued. "There are a couple of ways we could do it. We could stick a penguin on it or take a Linux distribution and put a Solaris kernel in it. There are a few Solaris-based distros that have done that. Personally, as the person charting the course and looking at the strategy question, it becomes how to keep the competitive differentiation of Solaris while closing the usability gap."

"As someone on the other side of the table not long ago, you can't just rip the guts out of Linux and put in Solaris. If you do that, you are leaving out many of the compelling differentiations [of Solaris]. It will take some time to figure out precisely what the answer is."

[Mar 26, 2007] Freeware for Solaris/Solview

solview-0.3.tar.gz The solview application, written by Peter Tribble, is a simple java utility to display information about a Solaris 10 system. It gives information on installed software (clusters and packages), services running, and output from useful commands. After downloading the file, gunzip it, untar it into the solview-0.3 directory, and then run ./solview &. See [Details] for more information.

[Mar 26, 2007] Freeware for Solaris/Shmux

[Mar 25, 2007] Freeware for Solaris/Hping

[Mar 25, 2007] Freeware for Solaris/X11vnc

[Mar 25, 2007] Freeware for Solaris/Xbindkeys

[Mar 25, 2007] Project details for sysstat for Solaris

sysstat for Solaris 20070323 released
"sysstat" complements Solaris' system tools for performance analysis. It presents all key performance metrics on a VT100 terminal and can toggle its view between different hosts.

Release focus: Major bugfixes

Changes: This release uses the 32-bit libncurses instead of the 64-bit libcurses, resulting in much better terminal support (i.e. sysstat on dtterm works again).

Author: Thomas Maier-Komor [contact developer]

[Mar 24, 2007] Project details for Tcpreplay

Tcpreplay 3.0.beta13 released

Tcpreplay is a set of Unix tools which allows the editing and replaying of captured network traffic in pcap (tcpdump) format. It can be used to test a variety of passive and inline network devices, including IPSes, UTMs, routers, firewalls, and NIDSes.

Release focus: Major bugfixes

This release fixes some serious regression bugs that prevented tcprewrite from editing most packets on Intel and other little-endian systems. Some smaller bugfixes and tweaks to improve replay performance were made.

Aaron Turner [contact developer]

[Mar 23, 2007] BalanceNG A simple approach to load balancing by Anže Vidmar

Load balancing software uses multiple hardware devices to spread work around and thereby speed performance. While Linux Virtual Server may be the best-known option for Linux networks, another alternative, BalanceNG, a simple, lightweight utility, may be a better choice for some organizations.

BalanceNG (Balance Next Generation) is user-mode load balancing software with its own network stacks that runs over Linux and Solaris. All the work is done by the software; the operating system is used only for accessing the physical network interfaces and TCP/IP functions. It supports many different load balancing methods, including round robin, random, hash, and least resource. The load-balancing service takes around 400KB of disk space and the agent takes around 100KB. You need to run the balancing service on a machine that will act as your virtual server, and a balancing agent on all nodes that are part of the cluster, which are called targets in BalanceNG parlance. The software generates minimal network (UDP) traffic. The software can provide load balancing not only for Web servers, but almost any kind of service, including HTTP, FTP, SQL, POP3, IMAP, and SMTP.

BalanceNG is not open source. You can download and use the software for free on one virtual server and two targets, which is enough for a small or home business. If you need more, you can upgrade the basic license so that you can have up to 512 virtual servers and up to 1,024 target servers.

Getting and installing BalanceNG

To get started, download BalanceNG. In the tarball you'll find an executable file called bng, which you need to copy to your /etc/rc.d folder and start with the command /etc/init.d/bng start. This is the only file that needs to be started for the BalanceNG server. You can install a second executable, bngagent, on your target servers. If you want the service or agent started automatically, add an invocation to a startup script such as /etc/rc.local so it runs at boot time.

BalanceNG has two ways of configuring load balancing. You can use the standard method and edit the BalanceNG default configuration file /etc/bng.conf, or configure load balancing in real time using the bng console, which you invoke with the command /etc/init.d/bng control. The BalanceNG console functions are well-documented in the software's User and Reference Manual.

Because I needed to integrate BalanceNG into an existing network installation, I chose the easy way: I used a "single-legged" configuration, but BalanceNG's site provides several possible configuration scenarios. You can take an example, paste it in your /etc/bng.conf file, and edit it to suit your network environment.

If your configuration file contains errors (meaning you have misconfigured your network settings), you will still be able to successfully start the bng service. You can see the error messages by looking at /var/log/syslog, or go into the bng console. When you enter the console, the software informs you of any errors during software startup. You should see:

BalanceNG: connected to PID 872
*WARNING*: Errors in /etc/bng.conf, type "show log" for details

Type show log to see what the problem is:

2007/03/08 00:57:59 3 ERROR /etc/bng.conf line 14: gateway address not directly reachable
2007/03/08 00:58:00 3 ERROR /etc/bng.conf line 22: WARNING: server 1 has no matching network

The software's error-reporting tool tells you what went wrong, and on what line number you made an error. In this case you can see that I misconfigured the gateway address. To correct mistakes, leave the bng console by pressing Ctrl-D, edit the /etc/bng.conf file, and restart the service. Enter the bng console again to see if you have any more errors. If not, and you have a working load balancing configuration, the log should look something like this:

2007/03/08 00:57:47 6 BalanceNG 1.795: starting background operation
2007/03/08 00:57:47 6 loading /etc/bng.conf
2007/03/08 00:57:47 6 configuration taken Wed Mar 7 00:57:08 2007
2007/03/08 00:57:47 6 configuration saved by BalanceNG 1.795 (created 2007/02/26)
2007/03/08 00:57:48 6 /etc/bng.conf successfully loaded

Run bngagent on each target server with the command bngagent 439, where 439 is the default UDP port; you can choose any available UDP port for communication. Bngagent is a small UDP server program that communicates with the BalanceNG server. It also comes with source code, so if the binary doesn't work on your distribution, you can always compile an agent by yourself.

To test whether the virtual server is performing IP load balancing, open a Web browser and type in the IP address of your virtual server. You should see the normal Web page that is running on the target servers. Now create another connection to the server from another machine and watch the Apache logs on both targets. You should see that the virtual server is distributing the incoming connections equally to the targets. If not, recheck your configuration to find the cause of the misbehavior.

As you can see from the examples, BalanceNG's config file is simple, well-structured, and well-documented. Also, there is no need to interact with a command-line utility to get things done. Everything can be configured by editing the /etc/bng.conf file.

It took me only 20 minutes to download and install BalanceNG and configure it as a single-legged direct server return application.

BalanceNG includes alerting notification scripts, so if a node goes down, the administrator is immediately notified by email. You can also use SNMP traps to send messages to the network management system.

The project offers free one-year email support as well as a Professional Software Maintenance and Support service.

BalanceNG is a reliable load balancing application that is easy to learn and master. It does its job right, and can be used anywhere from the home to an enterprise environment.

[Mar 20, 2007] Debian founder Ian Murdock joins Sun

In an entry on his blog, Murdock announced that he is joining Sun Microsystems as their chief operating platforms officer. As he put it in his opensolaris post, this "...basically means I'll be in charge of Sun's operating system strategy, spanning Solaris and Linux." In all likelihood one of his first priorities will be "closing the usability gap" between Solaris and Linux.

Re:Debian on Solaris?

by kindbud (90044) on Monday March 19, @10:27PM (#18409579)
If not that, then at least an upgrade of the current Solaris userland to make it more Linux-like.

You mean it would have all the inconsistencies and inscrutability of the System V and BSD userland inherited from SunOS, PLUS all the additional inconsistencies Linux has contributed? I can hardly wait.

Do I use a dash or a double-dash? Will the man page refer me to the info docs? Or will it refer me to the command line help? Or was that --help?

One of the things I dislike about Linux userland is that it is such a bastard of every other userland out there. Cacophony cannot be emulated, it can only be shouted down.

Re:Shooting too low, again.
(Score:4, Interesting)

by caseih (160668) on Monday March 19, @11:32PM (#18410127)

I think Sun should buy Apple and rename themselves as Apple. Then Mac OS X gets a much better kernel, and Sun gets all of Apple's nice unix userspace (Solaris 10's userspace is awful). Mac OS X server becomes Solaris 11 and all of apple's good ideas like OpenDirectory, their management GUIs for open source apps, etc become a part of solaris. Already technology transfer is happening. My local Apple rep said a lot of core technologies are being licensed from Sun including ZFS.

It would be a clear win for both companies. Apple gets instant access to the enterprise, and Sun will make sure the acquisition means that Apple's technologies will get the enterprise-level support they deserve. Currently Apple's so-called enterprise offerings are really not very serious, although they have improved their support with Tiger. Sun can finally sell desktop machines sporting an amazing OS and desktop (under the Apple Macintosh brand) and have a server OS that's powerful and easy to setup and administer and with the better BSD userspace that Apple has.

Re:Shooting too low, again.
(Score:3, Insightful)

by cheshire_cqx (175259) on Monday March 19, @09:34PM (#18409161)

Real apt-get with dist-upgrade for Solaris would be great. Blastwave seems like a stop-gap in comparison. Reinstalling from the DVD every time is a pain, and BFU isn't as comprehensive. In this respect OpenSolaris can learn usability from Debian, and I'd love to see it.

[Mar 19, 2007] Simon Phipps, SunMink Charting the Next 25 Years

I'm delighted to be able to welcome a new colleague who's starting with Sun today. He is starting a newly-defined role as Chief Operating Platforms Officer at Sun, and is responsible for building a new strategy to evolve both Sun's Solaris and GNU/Linux strategies. The appointment is at the same time both brilliant and controversial, but is the logical next step as far as I am concerned.

Sun bootstrapped the commercial Unix industry 25 years ago. Solaris offers both an unbeatable promise of binary compatibility, so that your current binaries are guaranteed to run on your Solaris system when you upgrade, every time, and an extraordinary level of innovation that has made ZFS, DTrace, SMF and Zones the talk (and envy) of the operating systems scene.

Meanwhile, the combination of the GNU operating system pioneered by Richard Stallman with the inclusive development delivered around the Linux kernel by Linus Torvalds has brought a new life and energy to the extended family tree of Unix. The popularity of GNU/Linux bears testament to the vision and skill Stallman and Torvalds exhibit.

And now there is OpenSolaris, bringing the potential to weave a new cloth from both the Solaris and the GNU heritage, albeit with both cultural and licensing challenges to overcome. Today my new colleague is here to perhaps guide the combination of the brilliance of Solaris and the pervasive and seductive character of GNU/Linux to start the next wave. Please welcome the founder of Debian GNU/Linux, chair of the Linux Standards Base and outgoing CTO of the Linux Foundation, Ian Murdock (click that link and read his own words). Welcome, Ian! It's going to be an interesting year!

[Mar 19, 2007] tecosystems " Apt-get Install Ian Murdock The Q&A

Q: What do you see as the primary risks to the hire?
A: They're two sides to the same coin: Ian meets significant internal resistance and is unable to effect necessary change, or Ian meets resistance and chooses to boomerang out of Sun. Sun has long been a relatively political organization, even for its size, and the divide within the Solaris community in particular is - at times - quite wide. In short, the question to me will be can Ian win over or persuade the old school factions within Sun? The new schoolers seem to be presold. Rightly or wrongly, Ian is going to be viewed by many within the Solaris and OpenSolaris communities as the "Linux guy," and that lens alone is likely to provoke some non-rational, emotional responses given the antipathy a fair percentage of members of the Solaris community have for Linux. Dealing with that will be a continuing challenge. Ian's no stranger to such friction, of course, as Linux communities are not known for their tendency to hold hands and sing kumbaya, but there are significant organizational and governance differences between Linux and Solaris/OpenSolaris.
Q: Can you be more specific regarding the divide between the two schools?
A: The most visible example of the divide probably came in the form of the original decision to open source the operating system in the first place. Many engineers and executives had been agitating on behalf of an open Solaris for years within Sun, with little success. One of the difficulties, apart from the cost and effort required to open source such a sizable project, was the entrenched strength of those who opposed such a move. Eventually, of course, the new schoolers prevailed and we now have OpenSolaris. Despite the apparent and evident success of that project, there remain pockets within Sun and without that remain decidedly old school.
Q: Is the open source vs closed source divide the primary point of contention between the new and old schools?
A: No. The tension between new and old manifests itself in a variety of other areas, including binary compatibility, choice of shell, installation experience, licensing, package management, userland tools, and so on. And frankly, it's not unique either to Solaris or to Sun - it's natural to have factions pulling in different directions. It's just that within Sun, leadership changes at the highest levels have changed the dynamics between the factions quite dramatically in recent years.
Q: Do you view this as an offensive strike at the Linux Foundation?
A: No. First of all, Sun is, by virtue of its membership with the LSB I believe, a member of the Linux Foundation. Second, Ian will continue to chair the LSB. But last, this is about hiring the right person at the right time - not trying to damage another organization, despite some of the comments that have come out of the LF that dismiss Solaris as real competition.
Q: Do you think this is a mandate for change for Solaris?
A: Mandate? No. From the conversations I've had, the Sun folks are already quite clear on where they're scoring a Needs Improvement on the report card. Recent chats with Solaris folks have indicated an impressive willingness to change, to adapt, to evolve. Simon, Tim, and many others across Sun are quite aware that package management, as an example, is a significant advantage for Linux at the current time. So no, I don't think Ian's hiring can be read as a mandate for change.

... ... ...

Q: What do you anticipate Ian will be working on?
A: The expectation in most quarters - see Jason's comment here or Bryan's final comment here - is that Ian will be tasked with remedying the lack of package management functionality. That expectation, of course, is quite logical given Ian's background, and I share it. I've argued on behalf of apt-get to Solaris engineers and executives on multiple occasions (more on a popularity basis than a technical one), and one would be foolish to read nothing into Ian's hiring in that regard. Beyond that, I'm interested to see whether or not the GNU userland can make inroads into Solaris, and what Ian can bring to the table from a package standardization perspective.
Q: What would you like to see happen from this hiring?
A: In a perfect world, as Mark says, this hiring would result in some convergence between the two operating systems. One of my stated regrets regarding the original Linux Foundation announcements, in fact, was the unhelpful wedge some of the surrounding rhetoric seemed to drive between the Linux and Solaris worlds. Perhaps, with Ian's move, that divide need not be permanent.

[Feb 23, 2007] BigAdmin Submitted Tech Tip How to Send Email Without Using sendmail by Ross Moffatt

A useful tip, especially the section on sending attachments.

If you need to send emails from a host but don't want to run sendmail, this tech tip explains how to use Perl to send emails. This procedure can be used on a host such as a Sun Fire V120 server running the Solaris 9 OS.



Sending Simple Email

The following section describes what is required to send a simple email from a host.

Required Perl Modules

In addition to a standard Perl installation, the script uses the following modules (the original tip linked to their CPAN pages): Net::SMTP and Getopt::Std, both of which are bundled with recent Perl releases.

Ideally these modules would be loaded into the standard Perl library. However, for simplicity I will load them into my current directory by copying them into the directories that follow. (Note that the directory structure is required.)


The use lib definition in the script will also need to be changed to indicate the location of these Perl modules. The my $SMTP_SERVER = definition will need to be changed to the name of the SMTP server where you are going to send your mail.

Note: If the SMTP server can't be contacted or won't accept email, then the following error occurs:

Can't call method "mail" on an undefined value at ./ line 33.

Simple Email Script:

use lib "<home>";
use strict;
use warnings;
use Net::SMTP;
use Getopt::Std;

my $SMTP_SERVER     = '<smtp server>';
my $DEFAULT_SENDER  = 'False';
my $DEFAULT_SUBJECT = 'False';

my %o;
getopts('hf:t:s:', \%o);
$o{f} ||= $DEFAULT_SENDER;
$o{t} ||= 'False';
$o{s} ||= $DEFAULT_SUBJECT;
if ($o{h} or $o{f} =~ /^False$/ or $o{t} =~ /^False$/ or
        $o{s} =~ /^False$/) {
    die "usage:\n\tbody | $0 [-h] [-f from (required)] [-t to (required)] [-s subject (required)]\n";
}

# Net::SMTP->new() returns undef if the server can't be contacted,
# which is what produces the 'Can't call method "mail"' error.
my $mailmsg = Net::SMTP->new($SMTP_SERVER, Timeout => 60);
$mailmsg->mail($o{f});
$mailmsg->to($o{t});
$mailmsg->data();
$mailmsg->datasend("From: $o{f}\n");
$mailmsg->datasend("To: $o{t}\n");
$mailmsg->datasend("Subject: $o{s}\n\n");
$mailmsg->datasend($_) while <STDIN>;   # message body comes from stdin
$mailmsg->dataend();
$mailmsg->quit;
exit 0;
Sending Email Attachments

This section explains how to send email attachments from a host without using sendmail.

Required Perl Modules

For ease of reading, all modules required in addition to a standard Perl installation are listed here (the original tip linked to their CPAN pages). MIME::Lite is the only extra module required, compared with sending simple emails.
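Unlike Net::SMTP, MIME::Lite is not bundled with perl, so it is worth checking for it before running the script. A minimal sketch, assuming perl is on the PATH:

```shell
# Print the MIME::Lite version if it is installed; otherwise fall back
# to a reminder to fetch it from CPAN.
perl -MMIME::Lite -le 'print "MIME::Lite $MIME::Lite::VERSION"' 2>/dev/null \
    || echo "MIME::Lite not installed - fetch it from CPAN"
```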

Ideally these modules would be loaded into the standard Perl library. However, for simplicity I will load them into my current directory by copying them into the following directories:


The use lib definition in the script will also need to be changed to indicate the location of these Perl modules. The my $SMTP_SERVER = definition will need to be changed to the name of the SMTP server where you are going to send your mail.

Note: If the SMTP server can't be contacted or won't accept email then the following error occurs:

SMTP Failed to connect to mail server: Invalid argument

The email attachment Perl script follows:

use lib ".";
use strict;
use warnings;
use MIME::Lite;
use Getopt::Std;

my $SMTP_SERVER     = '<smtp server>';
my $DEFAULT_SENDER  = 'False';
my $DEFAULT_SUBJECT = 'False';

# Route all MIME::Lite sends through the SMTP server.
MIME::Lite->send('smtp', $SMTP_SERVER, Timeout => 60);

my (%o, $msg);
getopts('hf:t:s:', \%o);
$o{f} ||= $DEFAULT_SENDER;
$o{t} ||= 'False';
$o{s} ||= $DEFAULT_SUBJECT;
if ($o{h} or !@ARGV or $o{f} =~ /^False$/ or $o{t} =~ /^False$/ or
        $o{s} =~ /^False$/) {
    die "usage:\n\t$0 [-h] [-f from (required)] [-t to (required)] [-s subject (required)] file [file] ... [file]\n";
}

# Build a multipart message; "Hi" becomes the text part of the body.
$msg = MIME::Lite->new(
    From    => $o{f},
    To      => $o{t},
    Subject => $o{s},
    Data    => "Hi",
    Type    => "multipart/mixed",
);

# Attach every remaining command-line argument as a base64-encoded file.
while (@ARGV) {
    $msg->attach(
        Type     => 'application/octet-stream',
        Encoding => 'base64',
        Path     => shift @ARGV,
    );
}

$msg->send('smtp', $SMTP_SERVER);
exit 0;

[Jan 15, 2007] GNU Screen

See also Screen -- an Orthodox Windows Manager. Binaries are available from

[Jan 15, 2007] Synergy

Binaries are available from

Synergy lets you easily share a single mouse and keyboard between multiple computers with different operating systems, each with its own display, without special hardware. It's intended for users with multiple computers on their desk since each system uses its own monitor(s).

Redirecting the mouse and keyboard is as simple as moving the mouse off the edge of your screen. Synergy also merges the clipboards of all the systems into one, allowing cut-and-paste between systems. Furthermore, it synchronizes screen savers so they all start and stop together and, if screen locking is enabled, only one screen requires a password to unlock them all. Learn more about how it works.
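The screen layout Synergy uses is described in a small text configuration file on the server. A minimal sketch (the screen names desk and laptop are hypothetical; consult the Synergy documentation for the full syntax):

```
section: screens
    desk:
    laptop:
end

section: links
    desk:
        right = laptop
    laptop:
        left = desk
end
```

With a file like this, the server is typically started with synergys -c <config file> on the machine whose keyboard and mouse are shared, and each client runs synergyc pointed at the server's hostname.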

Synergy is open source and released under the GNU General Public License (GPL).

System Requirements

All systems must support TCP/IP networking.

"Unix" includes Linux, Solaris, Irix and other variants. Synergy has only been extensively tested on Linux and may not work completely or at all on other versions of Unix. Patches are welcome (including patches that package binaries) at the patches page.

Recommended Links



Selected products

Solaris open source software precompiled packages search

Supported open source software included with the Solaris 9 OS

Network Servers and Clients
Scripting Language
Security Tools
Commands and Tools
grep (GNU)
tar (GNU)

Sun-packaged but unsupported open source software from the Software Companion CD

freetts 1.1.1
yasr 0.6.4
bluefish 0.12
emacs 21.3
sed-3.02 (GNU)
ethereal 0.10.4
fetchmail 6.2.5
hpijs 1.6
nmap 3.5
Open LDAP 2.1.22
Open SLP 1.0.1
rsync 2.5.7
graphviz 1.10
xpdf 2.03
expect 5.39
Foomatic filters 3.0.1
Foomatic-ppds 3.0.1
findutils 4.1.20
gkrellm 2.1.19
gnuplot 3.7.3
MySQL 4.0.15
screen 4.0.2
serweb 2004-01-04
teTeX 2.0.2
gcc-3.3 3.3.2
libtool 1.5.2
m4-1.4 (GNU)
MySQL python API 0.9.2
GD graphics library 2.0.15
Perl-compatible reg. expr. library
autoconf 2.5.9
automake 1.8.3
cvs 1.11.17
ddd 3.3.7
gdb 5.3
make-3.80 (GNU)
imap2002d (UW)
proftpd 1.2.10rc1
ser 0.8.12
squid 2.5.STABLE5
xmcd 3.2.1
xmms 1.2.8
xterm-175 (XFree86)
X/Window Managers


Index of /mirror/Mozilla/releases/mozilla1.7/contrib




PluginDoc: Mozilla Plugin Support on Solaris (SPARC)

OISec » Mozilla 1.7a for Solaris-sparc

Another build of Mozilla for Solaris/SPARC was built by me this morning. It's version 1.7a and was compiled on my Sun Ultra 60 using GCC 3.3. It can be found here.
It's a 32-bit version built against a default Solaris install with the default Sun Freeware packages plus the additional Sun Freeware GCC package.

Random Findings Outside Links

DVD Player: Ogle is a DVD player that allegedly runs on both Solaris and Linux. I haven't tried it myself - I have a standalone DVD player and a Playstation 2, so I don't need to watch DVDs on my computer :-)

But for general playing of MOV, MPEG, and DivX format files, I like Xine. It's an easy compile, and it plays most anything, including mp3 and wav. Trouble is, the controls are quirky, and don't always STOP :-) So for audio, I prefer...

MP3/WAV/audio Player

XMMS, the best audio player available for Solaris, IMO. The only drawback is that it doesn't handle CD playing properly, in my experience. To jazz it up, try the XLiquid "Skin" for it. BTW: a binary for XMMS is available via pkg-get




The Last but not Least: Technology is dominated by two types of people: those who understand what they do not manage and those who manage what they do not understand ~ Archibald Putt, Ph.D.

Copyright © 1996-2021 by Softpanorama Society. The site was initially created as a service to the (now defunct) UN Sustainable Development Networking Programme (SDNP) without any remuneration. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License. Original materials copyright belongs to respective owners. Quotes are made for educational purposes only in compliance with the fair use doctrine.

FAIR USE NOTICE This site contains copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available to advance understanding of computer science, IT technology, economic, scientific, and social issues. We believe this constitutes a 'fair use' of any such copyrighted material as provided by section 107 of the US Copyright Law according to which such material can be distributed without profit exclusively for research and educational purposes.

This is a Spartan WHYFF (We Help You For Free) site written by people for whom English is not a native language. Grammar and spelling errors should be expected. The site contains some broken links, as it develops like a living tree...

You can use PayPal to buy a cup of coffee for the authors of this site


The statements, views, and opinions presented on this web page are those of the author (or referenced source) and are not endorsed by, nor do they necessarily reflect, the opinions of the Softpanorama society. We do not warrant the correctness of the information provided or its fitness for any purpose. The site uses AdSense, so you need to be aware of Google's privacy policy. If you do not want to be tracked by Google, please disable JavaScript for this site. This site is perfectly usable without JavaScript.

Last modified: March 12, 2019