
Solaris Patching


Applying patches has evolved into a complex, time-consuming process. Knowing which software updates and patches to use, and when to apply them, has become an important cost-saving measure. To minimize cost while maintaining a reasonable level of risk, Sun provides Recommended Patch Clusters, which are the most common patching solution for the enterprise (so-called "blind patching"). While this strategy does address patching and compliance issues, it also introduces more change to the system than is necessary. Therefore, for critical servers you might choose to apply patches only to address specific issues or needs, and not merely to keep current. Without understanding what those patches provide and what production or security problems they fix, the jury is out on the benefits of "blind patching" for critical servers.

But the fashion now is to patch everything, and this trend has accelerated with SOX (Sarbanes-Oxley).

Solaris Recommended Patch Clusters do not upgrade Solaris to the next minor revision (for example, from 04/04 to 04/08); you stay on the same revision as you were.

The most common commands for managing patches are:
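On Solaris these are patchadd (install a patch), patchrm (back one out), and showrev -p (list what is installed). A small wrapper like the following can make ad-hoc checks easier; the function name and usage are hypothetical, not part of the OS:

```shell
# Core Solaris patch commands:
#   patchadd <dir>   - install a patch
#   patchrm  <id>    - back a patch out
#   showrev -p       - list installed patches
# Hypothetical convenience wrapper around showrev:
patch_installed() {
    # Succeeds if the given patch (e.g. 116268-05) is installed.
    showrev -p | grep -q "Patch: $1 "
}
```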

Important directories:
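On a stock Solaris system, the locations that matter most during patching include:

```shell
# Key Solaris patch-related locations:
#   /var/sadm/patch    - per-patch install logs and backout data
#   /var/sadm/pkg      - package database consulted by patchadd/patchrm
#   /var/tmp           - usual staging area for downloaded patch clusters
```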

The Recommended patch cluster can be downloaded via FTP or HTTP from a browser. HTTP might be helpful if you are behind a firewall. Use install_cluster to install the Recommended patch cluster.

Installing individual patches is a more tedious and time-consuming task. Each patch must be downloaded, placed on each server, uncompressed, untarred, installed, and the files removed afterward.
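The per-patch cycle looks roughly like this; the patch ID and staging paths below are placeholders, and patchadd must run as root on the Solaris host:

```shell
# Hypothetical per-patch cycle; 116268-05 stands in for a real patch ID.
cd /var/tmp
unzip 116268-05.zip                        # newer patches ship zipped; older ones as .tar.Z
patchadd /var/tmp/116268-05                # install (logs to /var/sadm/patch)
rm -rf /var/tmp/116268-05 116268-05.zip    # clean up the staging area
```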

The recommended patch cluster is a zip file named with the OS version as a prefix.

Before installing any patches, verify that the /var filesystem has sufficient space. A Recommended patch cluster may need up to 0.5 GB.

The availability of free space can be checked by executing df -k /var. It might make sense to remove extra logs from this filesystem to free more space.
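A quick pre-flight check along these lines can be scripted; the 500,000 KB threshold is an assumption based on the ~0.5 GB figure above:

```shell
# Verify /var has roughly 0.5 GB free before unpacking a patch cluster.
REQUIRED_KB=500000
avail_kb=$(df -k /var | awk 'NR==2 {print $4}')
if [ "$avail_kb" -lt "$REQUIRED_KB" ]; then
    echo "only ${avail_kb} KB free in /var -- clean out old logs first" >&2
else
    echo "/var has ${avail_kb} KB free -- OK to proceed"
fi
```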

Generally you should download the Recommended Patch Cluster into a partition that has plenty of space, for example /home or /opt. It is a bad idea to download it to the /tmp partition, as that is mapped to memory, or to the root partition, as it is often (and should be) quite small (less than 1 GB). Some administrators allocate several gigabytes to the root partition, but that is generally a bad idea. Moreover, I would recommend converting the root home directory to /root, as is done in Red Hat, but that's another story (see hardening for details).

To determine the current set of patches installed, you can use the command:

showrev -p | more

The resulting long listing shows each patch and its revision level. To find out whether a particular patch is installed, use grep, for example:

showrev -p | grep 116268

Each patch is also identified by a revision number, separated from the patch number by a dash. It is only necessary to install the most current revision. 116268-05 indicates a patch revision of 05, and all revisions lower than 05 are considered obsolete. The patch number typically does not change, but the revision number changes with every new release, which makes it easy to identify new releases.
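For example, the highest installed revision of a given patch can be extracted from showrev output with a short pipeline; two sample `showrev -p` lines are embedded here so the sketch is self-contained:

```shell
# Find the highest installed revision of patch 116268.
printf '%s\n' \
  'Patch: 116268-03 Obsoletes:  Requires:  Incompatibles:  Packages: SUNWcsu' \
  'Patch: 116268-05 Obsoletes:  Requires:  Incompatibles:  Packages: SUNWcsu' |
awk -F'[ -]' '$1 == "Patch:" && $2 == "116268" { if ($3+0 > max) max = $3+0 }
              END { print "latest rev:", max }'
# prints: latest rev: 5
```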

The kernel patch level can be verified by typing uname -a. This command shows the current OS release and the current kernel patch installed. The command can be executed by any user and does not require root privileges.

Determining patch completeness and kernel revision status across dozens of servers is a time-consuming process. You can split the audit into two parts: data collection and report generation. Data collection can be made into a monthly cron job.
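A minimal sketch of the collection half; the script path, output location, and schedule are all assumptions, and the script must run on the Solaris host itself:

```shell
#!/bin/sh
# Hypothetical collection script; schedule from root's crontab, e.g.:
#   0 2 1 * * /usr/local/adm/collect_patches.sh
# Snapshots the data the report generator needs into a dated per-host file.
host=`hostname`
out=/var/adm/patch-audit/$host.`date +%Y%m`
{ uname -a; showrev -p; } > "$out" 2>&1
```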

Using the SunSolve Web site one can try to describe a problem and locate patches that might solve it, or at least list current bug reports about the problem. Once you have a list of possible patches, use the showrev -p command to see which patches are currently installed on the system and remedy any differences. As always, be sure you have a working backup of your system before making major changes (such as installing patches). Note that, rarely, patches that are installed do not appear in the showrev -p list. These are patches which include only firmware, and make no system code changes. Because there are no software changes, they are not recorded as an installed patch. The Web site can send a notification when any document of interest changes.

Another means of gaining patch information is to join Sun's mailing lists. 

Now let's look at the individual tools.

One lesson system administrators learn the hard way is to open a support ticket, no matter what. For instance, a search may reveal a patch that looks like an exact remedy for a problem. Great, but a quick exchange via a ticket can sometimes reveal that the problem is a hardware issue, or that there is a procedure to follow that isn't indicated in the patch README. The folks at technical support have more information available than you'll find on your own. Use them whenever you have any doubts. Also, be sure to review any README files that come with the patches; they often contain crucial information!

Patching production systems more often than once a month is a questionable practice. Typically a once-a-quarter schedule is adequate, as it gives time to test the patch cluster. Here, definitely, less is more.

One thing to look out for with firmware patches (PROM patches in particular) is that some don't like having a serial-port-attached terminal for a console. I just installed such a patch, and it took quite some time before this was discovered. Make sure you read the postscript files before you install patches, because the README files don't have this information. A little FYI from a slightly burnt admin.

Some patches replace binaries wholesale, and most recommended and suggested clusters include updates to Sendmail and BIND. Many people run later versions of Sendmail and BIND than those supplied by Sun. When you apply the recommended patch cluster, these binaries are overwritten. I usually edit the patch list to not patch Sendmail and/or BIND. This is important because otherwise one can spend a lot of time trying to troubleshoot mail and/or DNS. Be sure to save such binaries before patching and restore them afterward, or edit the patch list, or both, to avoid this problem.
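Trimming the cluster's patch_order file might look like the following; 123456 and 234567 are placeholder IDs standing in for the actual Sendmail/BIND patch numbers listed in the cluster README, and the demo runs in a scratch directory rather than a real unpacked cluster:

```shell
# Demo in a scratch directory; in practice, run inside the unpacked cluster.
cd "$(mktemp -d)"
printf '118833-36\n123456-01\n234567-02\n' > patch_order   # toy patch_order file
cp patch_order patch_order.orig                            # keep the original
grep -v -e '^123456' -e '^234567' patch_order.orig > patch_order
cat patch_order                                            # only 118833-36 remains
```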

With zones one can use preconfigured, prepatched and stringently tested Solaris images.

UX-admin on July 25, 2007

"How do you *know* what you can delete? Given a running/production system, how do you determine which of the multitude of installed packages can be removed without adverse impact?"

Long story short: preconfigured, prepatched and stringently tested Solaris images.
This is what system engineering and platform lifecycle management are all about. At the very core, we can summarize the above terms to standardized Flash(TM) builds, preconfigured, prepatched and rigorously tested Solaris images. No ad-hoc changes are ever allowed in such a setup; configurations come on as tightly controlled package payload; and adding or removing something from the image is a matter of request/bug tracking, specification (in writing), panel approval, and finally, if the request / fix is approved, integration into the next platform release cycle.

Note that I'm not referring to Sun development, although one can surmise they have similar practices.

Also, ad-hoc changes, and this includes patching, are strictly banned, prohibited, and forbidden, unless engineering tested them and approved them. For example, it would be strictly forbidden for an SA to log into the system and start modifying configuration files with `vi` (or `emacs`, or whichever editor); changes would come in form of revised package payload, so they would be reproducible and uniform across systems.





Old News ;-)

[May 29, 2013] Rethinking patching (The view from the Engine Room) By barts

Jul 25, 2007
As Stephen mentioned recently, several of us have been thinking about revising the way we manage software change in Solaris. I've been particularly focused on the difficulties Sun and its customers have with the patching process, and the kinds of changes we need to make as a result in our technology and development processes.

Today, most customers don't run OpenSolaris; they run a supported version of Solaris such as Solaris 8, 9 or 10. A supported release means that someone will answer the phone, and that patches for problems are available.

Patches are a separate software change control mechanism distinct from package versions in Solaris. Each patch may affect portions of several packages; patches are intended to include all the files necessary to fix one or more problems, either directly or by specifying dependencies. If a patch affects packages which are not installed on this system (typically because it has been minimized), those portions of the patch are not installed. If the administrator later adds the missing package, he must remember (good luck) to re-apply the patches since the packaging code knows nothing of patches.

Customers are today free to install whichever patches they feel are appropriate for their environment, consistent with the built-in dependency requirements. This customization is a technique I refer to as Dim Sum patching, and is a major cause of patching difficulties. Many customers pick and choose amongst the thousands of patches available for Solaris 10, for example; this means that customers are often pioneering new configurations. Note that each Solaris release consists of a single source base; all Solaris 10 updates, for example, are but snapshots of the same Solaris patch gate at different times. As a result, the developers are working on a cumulative set of all previous changes; when a new patch is created, the files in the patch not only contain the desired fix, but all previous fixes as well. Thus, the software change is constructed as a linear stream of change, but customers install selected binaries from the various builds via patches.

When I've discussed the hazards of Dim Sum patching with customers, the reasons given can typically be characterized as:

  1. we don't need all those patches, we don't have those drivers loaded
  2. we're reducing downtime by not installing so many patches
  3. the less change, the less risk.

To these, I reply:

  1. If you don't need those drivers, then remove them with pkgrm rather than leaving them in an unpatched state, awaiting the introduction of new hardware or software to expose problems. Minimization, not spotty patching, is the answer. This is akin to disposing of an unused car, rather than simply leaving it unmaintained.
  2. Today, you should be using Live Upgrade and patching the alternate boot image to reduce downtime. This allows machines in production to be safely patched, and will not leave the system in an inconsistent or unbootable state in the case of power failure during patching operations. In the future, the new packaging system will always patch a clone of the current system to avoid the potential for disaster in case of power failure.
  3. Our experience has been that customers running all of the changes in an update are generally far less likely to experience problems than those who select only the fixes and features that appeal to them, and hope that our QA processes found all hidden dependencies on previous changes.
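The Live Upgrade flow mentioned in point 2 looks roughly like this; the boot environment name, patch directory, and patch ID below are assumptions for illustration, and the commands must run as root on the Solaris host:

```shell
# Create an alternate boot environment, patch it, then activate it.
lucreate -n patched-be                                  # clone the current BE
luupgrade -t -n patched-be \
    -s /var/tmp/10_Recommended 118833-36                # apply patch(es) to the clone
luactivate patched-be                                   # switch BEs on next reboot
init 6                                                  # reboot into the patched BE
```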

For our new packaging system, there is a powerful incentive to eliminate Dim Sum patching: since we wish to use a single version numbering space for any package, attempting to support fine-grain Dim Sum patching would require very small packages - affecting the performance of packaging operations, and significantly increasing the workload of OpenSolaris developers. Instead, we can set package boundaries according to what makes sense for minimization purposes.

This implies that future (post-Solaris 11) patches will be completely cumulative (aside from some exceptions for urgent security fixes), at least for the core OS. Your system will be able to determine automatically what is needed to bring the installed software up to the desired revision level; needing to pick and choose patches will be a thing of the past.

[Sep 9, 2008] Patch Check Advanced 20080909-02 (Stable) by Dagobert Michelsen

About: Patch Check Advanced (pca) generates lists of installed and missing patches for Sun Solaris systems and optionally downloads patches. It resolves dependencies between patches and installs them in the correct order. It works on all versions of Solaris and on both SPARC and x86.

Changes: Checks for patches that are only partly installed have been added. A bug with missing patches with multiple versions of the same package has been fixed. Failed patch installs are logged to syslog. Minage option being one day off has been fixed. Certificates for HTTPS downloads from SunSolve and PCA home have been added. The code was simplified in several places. Patch handling has been added for several new patches.
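Typical pca invocations, from its documented options (verify against your version's own help output), look like:

```shell
pca missing       # list patches not yet installed on this system
pca -d missing    # download the missing patches from SunSolve
pca -i missing    # download and install them, resolving dependencies
```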

[Dec 8, 2006] BigAdmin Sun Connection

Get information about how Sun Connection can help you to deploy, manage, and track software updates on your systems that use the Solaris OS or Linux OS.

Sun Connection offers three options for managing your systems:

Learn More

For information about the features and value offered by each product, go to


This community web site is the place to share and view tips and tricks from other sys admins who are using Sun Connection. To see contributions from the community, or to submit content, click Participate.

[Dec 7, 2006] BigAdmin Feature Article Using Sun Update Connection System and a Local Patch Server to Patch Systems With a Fixed Baseline by Barry Greenberg

November 2006(Bigadmin)

Would you like to use Sun Update Connection System and the Sun Update Connection Proxy to manage your systems against fixed "baseline" patch sets representing a point in time, using a local source of patches? Although Sun's tools don't directly support this, it can be done. The following write-up explains how.

Sun Update Connection uses a patch set file that it retrieves from its patch source, either directly from Sun's patch server or indirectly through a Sun Update Connection Proxy (also known as a local patch server or LPS). The default patch set contains all of the currently available patches for that day. The file is cached locally on the LPS and returned when a local Sun Update Connection client requests it during a patch analysis operation.

The file is produced once a day and contains all of the current patches. If you copy today's file to a new name, you can later refer to that copy as the baseline for that day. For example, to create a baseline for November 10, 2006, copy the file produced on that day to a name that records the date.

Perform these tasks to configure a proxy server for your environment and establish a patch baseline:

  1. Install the Sun Update Connection Proxy Software
  2. Configure the Sun Update Connection Proxy
  3. Establish a Fixed Patch Baseline
Install the Sun Update Connection Proxy Software

The system that you designate to be your local patch server must be using the Solaris 10 Operating System (Solaris OS).

  1. Install the current Sun Update Connection client on the Solaris 10 system that will be used as the local patch server. Note that the Solaris 10 1/06 and later releases include the client software; however, it's always a good idea to install the latest revision, as fixes to the client might affect performance.
  2. Register the system with a service plan so your patch server will have access to all patches, including the Sun Update Connection Proxy patch.
  3. Install the Sun Update Connection Proxy patch.
Configure the Sun Update Connection Proxy

The server can be configured in either connected or standalone mode. If the server is configured in connected mode, it will send requests to Sun to get patches and patch metadata that are not in its local cache. The requests can be sent to Sun through an intervening server. If a server is configured in standalone mode, it does not have any connections to Sun and is limited to the data that is available in its local repository. Once your proxy is configured, point your clients to the proxy server.

Sun Update Connection Proxy Setup - Connected Server

The Sun Update Connection Proxy is configured in connected mode when its patch source URL is Sun's server.

  1. Set the server's network proxy, if needed:
    patchsvr setup -x network-proxy-name:port
  2. Set the server's patch source:
    patchsvr setup -p

    Note: This is the default setting.

Sun Update Connection Proxy Setup - Standalone Server

The Sun Update Connection Proxy is configured in standalone mode when its patch source is on the local system, typically a directory full of patches or a CD-ROM. Set the server's patch source:

patchsvr setup -p file:/patchsvr

In this example, there must be a /patchsvr directory on the host, and you must populate that directory as discussed below.

Sun Update Connection Proxy Setup - Client

Set clients to use the proxy server as their patch source (on each client):

smpatch set patchpro.patch.source=http://server-name:3816/solaris/

Note: Don't omit the trailing "/".

Congratulations, you now have a local patch server. If you're running in connected mode there is really nothing more you need to do but get to work. If you are running in standalone mode you need to plan how you will populate and manage the local repository.

If you are running in standalone mode you must manually populate your server's repository with patches and patch metadata files. The format of the local repository is as follows. If the patch source URL is file:/patchsvr, under the /patchsvr directory you will need to create the following directories:

The most straightforward way of populating a standalone patch server is to alternate between connected and disconnected modes. When connected, the Proxy can retrieve the current patch metadata file and patches into a similar structure under /var/sadm/spool/patchsvr. You will need to rename the files as you copy them into /patchsvr. If you know the specific patches that your environment will need, you can also use the smpatch download command with the -f option to download the specific patches. You can use other methods to get the right set of patches into the cache that might be needed in your environment. For example, after running an analysis on the Proxy host (smpatch analyze > patchlist) you will have the set of patches needed by that host. You could then do this same operation on your local client systems to produce a master list of patches needed for these systems. Once you have a master patch list, you can include it in the smpatch command as follows:

smpatch download -f -x idlist=master_list
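Merging the per-host lists produced by `smpatch analyze` into a single de-duplicated master list might be done as below; the directory and file names are assumptions, and the demo builds sample lists in a scratch directory:

```shell
# Merge per-host analysis output into one de-duplicated master list.
dir=$(mktemp -d)
printf '118833-36 SunOS 5.10: kernel patch\n' > "$dir/host1.patchlist"
printf '118833-36 SunOS 5.10: kernel patch\n119254-45 patch utilities\n' > "$dir/host2.patchlist"
cat "$dir"/*.patchlist | awk '{print $1}' | sort -u > "$dir/master_list"
cat "$dir/master_list"    # one line per unique patch ID
```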

When your local patch source is populated with a current detectors.jar, the metadata file, and the patches you intend to deploy to your local systems, you can disconnect your Proxy and run it in standalone mode to serve patches to your local clients.

Establish a Fixed Patch Baseline

Sun updates the database of patches available to the Sun Update Connection software daily. This database, also known as the patch metadata file, patch set, or collection, represents the current set of patches available on Sun's external patch server. That file changes daily. If you would like to represent a fixed point in time, you can manually create a baseline metadata file, appropriately named, and refer to that when performing patch analysis on your local systems. Note that this strategy is not without pitfalls: specifically, it does not address the treatment of withdrawn patches. Sun does periodically withdraw patches for a variety of reasons. By freezing a file for later use you might be including some of those withdrawn patches. If you decide to use this baseline strategy, be aware that you may be applying patches that Sun has withdrawn as defective or obsolete.

Here's how to create a baseline:

  1. Copy the file and save it with a new file name that makes sense for your environment:
    cp /patchsvr/Database/ /patchsvr/Database/
  2. Add this baseline file to the collections file by adding a new entry as follows: Available Updates for 7/31/06

You may notice other patch sets in this file. You can leave them or remove those if not appropriate to your environment. What is on the right side of the equals sign is the user-visible text string that will appear in the Update Manager Patch Collections selection box.

Now you must configure a local client to use the baseline. If using Update Manager, open the Patch Collections drop-down box and select the desired baseline.

Note: You may need to manually remove the client's cached versions of these files. Simply remove the files under /var/sadm/spool/cache/collection and /var/sadm/spool/cache/Database.

If using smpatch, use the following command:

smpatch set patchpro.patchset=baseline060731

Subsequent host analysis will be performed using this baseline. As long as the desired patches are available in the Proxy's repository then everything should work as expected.

BigAdmin Submitted Article Automating patch installation Mike Myers, July, 2004

Like other computing sites, we've seen security become more and more important in our environment. One aspect of security that we had done a particularly poor job on was regular (proactive) system patching. Mostly, this was because the system admins, after working a long hard day, had little desire to work late at night or on weekends to execute system patching. Also, there was nothing to "trigger" a system to get patched. We only patched when we did major work on systems or upgraded them, or when a system crashed and support identified a patch to fix the issue -- generally, that was all. Sadly, we had systems that went for over three years without patching, and we had dozens of different patch sets on our servers. Consistent it was not.

Because we carry a Platinum contract from Sun and run Sun Explorer Software on all our systems (about 115 of them), we receive a monthly status report. This showed us just how badly we were doing -- lots of red. (As an aside, I'd highly recommend these reports to anyone who's not getting them -- talk to your account rep.) We would try to fix the worst of these, but without an actual policy for this, the fixes tended to get preempted by "real" work.

This obviously wasn't acceptable.

To solve this, we first created a "security" policy saying that systems must be patched every 90 days (security is in quotes because this also has a major effect on availability -- far fewer system crashes). This gave us a requirement to meet and allowed us to build a process to do just that.

The process we developed included the following points:

Fortunately for our site, we have a 7x24 operations staff. These folks are not admins (even junior level) but are very good at following rigidly designed procedures. The boot time script has just such a well-documented, rigid flow.

Once all that was in place and tested in our lab, we unleashed it on our little world.

Things were a bit rocky to start -- system owners were not used to taking outages so often and often complained long and loud about how we were driving down availability. We stuck to our guns, mostly playing the security card, and eventually people got used to this new procedure. Some folks even gave us a schedule of outages for the next 12 months.

Some of the admins were a little disgruntled, too. They believed that patching was a critical aspect of their jobs and were uncomfortable with others taking on this role. This didn't last long once it became clear that the process was reliable and gave them a few weekends back.

Things have improved in our environment dramatically since then. Our monthly reports from Sun are now mostly green. We can now let the regular patching cycle address certain less critical security announcements. We still patch the more critical announcements outside of this schedule; fortunately, many of those don't require a reboot. Unscheduled system outages have dropped dramatically. I don't think we've had a single service call to Sun that ended with, "That problem is addressed by a patch," since our first complete pass through the systems.

We've had to use a few tricks to make things run smoothly. The rc script has to be smart enough to know if it's patching the central NFS server itself, in which case it seeks out the files locally instead of trying NFS. Obviously we have to make sure to schedule the NFS server patching at a time when no one else is patching.

We use a heavily modified version of the install_cluster script that Sun ships with the patch clusters. It scans the patch_order file and eliminates patches that are already on the system. This decreases the time to apply a patch significantly.

Since all patches must be tested in the lab before we release them to the system owners, we built a facility into the rc script to pull the patches from a "staging" area (and turn on a bunch of debugging) when appropriate responses are provided (for example, answering "debug" to the "shall we patch" question).

To keep our processes clean and consistent, the same script can be run from the command line (as root) during system build to patch the new system to the same cluster that is being deployed in our environment.

The rc script uses /etc/nologin to block logins while the system is being patched. It also sends a local alert to a console in our operations room to let the operators know that the process has stopped and is waiting for some type of response (either the initial "shall we patch" questions or an error later on). The full transcript of the patching session is captured and emailed to the list of system admins for post-patching review (either immediately if there's a problem, or the next business day otherwise).

One trick: If you're running from an rc script, you cannot just say read foobar in your script -- you must explicitly specify /dev/console as your tty:

	mytty=/dev/console
	read foobar < $mytty

The structure on the central patch server is that all patches are in a share called /patches. Under these are directories for each OS release that is supported, which are called Sun<release>_DATE where <release> is whatever is returned by uname -r, and DATE is a date stamp of the form DDMMYYYY. These directories in turn hold a _Recommended and _Addon directory that hold the Recommended and security bundle and a local bundle of your own choosing. The install_cluster script in each directory is what gets executed to drive the actual patching.

There are also directories called VxVM_DATE/vx_release/OS_RELEASE and VxFS_DATE/vx_release/OS_RELEASE, which have patches for VERITAS Volume Manager and File System, respectively (these also are driven by install_cluster scripts so they can house multiple patches).
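Putting the naming conventions above together, the share might look like the following; the dates, releases, and VERITAS versions shown are examples, not values from the original site:

```shell
# Example layout of the central /patches share, per the conventions above:
#
# /patches/
#   Sun5.10_15082004/                          # Sun<uname -r>_DDMMYYYY
#     _Recommended/install_cluster             # Recommended & security bundle
#     _Addon/install_cluster                   # local bundle of your own choosing
#   VxVM_15082004/4.0/5.10/install_cluster     # VERITAS Volume Manager patches
#   VxFS_15082004/4.0/5.10/install_cluster     # VERITAS File System patches
```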

Here is a link to the autopatching rc script, which will require some customization for each environment.

Recommended Links



BugTraq Archive Re Checking for most recent Solaris Security

Well, Sun patch reports are quite strange. I remember last year the syslog patch wasn't recommended. Now the ToolTalk patch is recommended for 2.5.1, 2.5.1_x86, and 2.6_x86, but not for 2.6. Was it because the public ttdb exploit was only for 2.5.1? Little work is needed to make it work on 2.6. Too bad for people who blindly install the recommended patch cluster. Anyway, I parsed the latest Sun patch report for easy reading and retrieving; I'll add cross links to advisories later.

Enclosed is a script that checks if your Solaris system has the latest security patches applied.


patchdiag (from Sun) does much the same as what this script does (at least, by what you have said). It also notifies which patches are "Recommended" and "Security" patches. You can get it at (this is the UK URL).

Unfortunately, this is only available for contract customers.

There was a blurb about it in a ;login: last year. http://www.cs.duke.eduwjspr.html

Or use the automated email patch status robot. -- Our (my) service makes no pretence of being a service that extremely vulnerable machines should use. But then again, the mail you send doesn't need to identify _which_ machine the showrev output is from. Just take the showrev/pkginfo output from one machine, put it into a file, and email it from another machine (with the correct subject). So any eavesdropper would only know that somewhere (in the world) there is a Sun/Solaris machine with this software/patch level.

It doesn't sound very good to send the configuration of your machine over the internet by email. What if someone gets it and uses that information to learn the vulnerabilities of your server? Using your service, he would know:

* Which Software you have installed in your server
* Which patches you have applied (and what's more interesting, which patches you *haven't* applied)
* The OS version, platform, etc...
* Your server's name

Mmmmmmm... Just the information someone would need to hack your system :) What about making public the program you use, to run it locally? (showrev -p ; pkginfo -l)|yourniceprog

Patch Tools

Sun makes patch update tasks more bearable in a few ways.

GASP! -- Georgetown Automated Solaris Patchtool

SAGE ;login - ToolMan Meets PatchReport

One of my co-workers, Joe Shamblin, is our OS and security guru and is generally the one burdened with the chore of keeping our systems up-to-date with the latest patches. He has brilliantly automated and simplified this odious task.

Joe created, and regularly employs, a not-so-trivial Perl script called PatchReport, so named because in its initial incarnation its sole function was to produce a report à la patchdiag, but with two improvements: it automatically downloaded the xref file, and it provided a more usable, consolidated report. The table produced by PatchReport combines the information from all three sections of the patchdiag report, indicating in a single line for each patch whether it is Recommended, Security, and/or whether a prior version is already installed. This makes it much easier to scan the list and decide which patches are appropriate for your system.

But PatchReport has evolved into a full-fledged patch analyzer (the report), downloader, and installer. As noted earlier, it can accomplish all of this and leave a Solaris system completely up to date via a single command. Invoked with appropriate options specified, PatchReport will: connect to the Sunsolve site, download the patchdiag.xref and CHECKSUMS files, analyze patches on your system and produce a report, download selected patches, check the md5 checksums, and install the patches. You can even tell it to shut down and reboot the system when it's done! Listing 2 shows an example of a PatchReport run.

By the way, if PatchReport finds that your system is completely up to date, well, you'll just have to find out for yourself what happens.

You can also use PatchReport in conjunction with Sun's JumpStart suite (used to install new systems). PatchReport can download appropriate patches to a directory. Then JumpStart can be configured to install these patches to the OS during the initial installation. Alternatively, JumpStart could be configured to call PatchReport directly.

PatchReport is written for Solaris and not extended to other operating systems because Sun has a very open and well-documented patch system that makes programs like patchdiag and PatchReport possible. Other vendors, please take note.

Solaris Patch Levels by Thomas Knox

...The first thing to do is to install the Sun patchdiag tool onto your servers. I like to install it into /usr/local/patchdiag so I always know where it is, no matter what system I might be on. The patchdiag tool can be found at:


and the most recent version (as of this writing) is 1.0.4.

... ... ...

After you have downloaded the patchdiag tool, install it into a uniform place. All of my scripts assume /usr/local/patchdiag; change yours accordingly:

cd /usr/local
zcat patchdiag-1.0.4.tar.Z | tar -xvf -
ln -s patchdiag-1.0.4 patchdiag
cd patchdiag

I also make a user (called "patches") who owns the patchdiag directory on each machine. This account is used to automate pushing the patchdiag.xref file to all of the servers:

cd /usr/local
chown -R patches patchdiag-1.0.4
chmod 700 patchdiag-1.0.4
The Automation Process

The first script (Listing 1) will go out to the SunSolve FTP site and download the current patchdiag.xref file for system analysis. After downloading it, it will push it to all of your other servers.

Replace host1 login_id password ... hostX login_id password with your server names and the login information (i.e., sunbox1 patches patchpw /usr/local/patchdiag). Since this script will have live account information, it is a good idea to keep it owned by root with permissions 700, and in a private directory.

I initially used ncftpget to FTP the patchdiag.xref file, but Sun changed how the file was stored (it is now listed as a 0 byte file), and ncftpget will no longer retrieve this file, even with command-line arguments to "force" a RETR.

This script was designed to run as a cron entry. How frequently you check your patch levels should help you determine how often to run this script. Running it at off-peak hours will endear you to the Sun administrators.
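The push step of that first script could be sketched as follows. The host names, the "patches" account, and the use of ssh key authentication (in place of the plaintext FTP passwords the article's script embeds) are all assumptions for illustration; the copy commands are only printed so the list can be checked first.

```shell
#!/bin/sh
# Sketch: push a freshly downloaded patchdiag.xref to each server.
# HOSTS holds "host remote_dir" pairs; sunbox1/sunbox2 are placeholders.
XREF=/usr/local/patchdiag/patchdiag.xref
HOSTS="sunbox1 /usr/local/patchdiag
sunbox2 /usr/local/patchdiag"

echo "$HOSTS" | while read host dir; do
    [ -n "$host" ] || continue
    echo "scp $XREF patches@$host:$dir/"   # replace echo with scp to actually copy
done > /tmp/push_cmds
cat /tmp/push_cmds
```

Key-based ssh/scp avoids keeping live passwords in a mode-700 script at all, which sidesteps the security caveat mentioned above.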

The next phase of automation involves determining which patches need to be downloaded, retrieving them, and prepping them for installation. This script (Listing 2) uses wget, available from:

or precompiled from:

Follow the installation instructions that come with wget and install it.

Replace my_login_id with your SunSolve login ID, and my_passwd with your SunSolve password. Again, because Listing 2 contains live passwords, keep it in a private directory with permissions 700.

patch.ignore is a list of patch IDs that you do not want to get. For example, if you're running a headless Solaris 8 server, you probably do not want patch 108576 to support Expert3D IFB Graphics. List the patches without revision numbers. A patch.ignore file that contained the following:

108569
108576
108864
would not download patches 108569, 108576, or 108864.
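The ignore logic can be approximated in plain shell: strip the revision suffix from each candidate patch ID and drop the candidate if its base ID appears in patch.ignore. The file locations under /tmp and the candidate list are fabricated for illustration.

```shell
#!/bin/sh
# Filter candidate patch IDs (form 108569-08) against patch.ignore,
# which lists base IDs without revision numbers.
cat > /tmp/patch.ignore <<'EOF'
108569
108576
108864
EOF
cat > /tmp/candidates <<'EOF'
108569-08
108576-15
108652-27
108864-03
EOF
# Keep only candidates whose base ID is not in the ignore list.
while read id; do
    base=`echo "$id" | sed 's/-.*//'`
    grep -x "$base" /tmp/patch.ignore >/dev/null || echo "$id"
done < /tmp/candidates > /tmp/wanted
cat /tmp/wanted
```

With the example data only 108652-27 survives, matching the behavior described above.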

If your server is behind a proxy, add the flags:


to the wget statement above, thus supplying your correct proxy user id and password. Be sure to add the line:

http_proxy =

to your ~/.wgetrc file, or define the environment variable http_proxy in the script (e.g., http_proxy=http://proxyhost:8080/; export http_proxy, where proxyhost is a placeholder for your proxy server).
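For reference, the corresponding ~/.wgetrc fragment might look like the sketch below. The proxy host, port, and credentials are placeholders; depending on the wget version, the password key is spelled proxy_password or, in older releases, proxy_passwd.

```
# ~/.wgetrc -- proxy settings (placeholder values)
http_proxy = http://proxy.example.com:8080/
proxy_user = my_proxy_id
proxy_password = my_proxy_pw
```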

This script will get all current patches for your system that were not explicitly excluded by the patch.ignore file, and their associated readme files. It will also expand the patches for easy installation. This script can also be run from cron, preferably after the first one runs.


Now that the patches have been placed on your systems, it is up to you to determine their applicability and install them by hand. It would be easy to automate the installation as well; a simple for j in `ls -d 1*-*`; do patchadd $j; done would do the job. However, it is highly recommended not to do so. Rather, review each patch's .readme and PATCH-ID/README files to determine applicability, special requirements, and whether a specific installation order is needed.
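A slightly safer alternative to blind patchadd-in-a-loop is to build a review plan that pairs every unpacked patch directory with its README, so order and applicability can be checked before anything is installed. The /tmp/patchdl download area and the README.<patch-id> naming here are assumptions for illustration.

```shell
#!/bin/sh
# Build a review plan from unpacked patch directories (named 108652-27
# style). The two directories and README files are fabricated stand-ins
# for a real download area; nothing is installed.
mkdir -p /tmp/patchdl/108652-27 /tmp/patchdl/109320-04
touch /tmp/patchdl/108652-27/README.108652-27
touch /tmp/patchdl/109320-04/README.109320-04

cd /tmp/patchdl
for p in [0-9]*-[0-9]*; do
    [ -d "$p" ] || continue
    echo "read $p/README.$p, then: patchadd `pwd`/$p"
done > /tmp/review_plan
cat /tmp/review_plan
```

Working from such a plan keeps a human in the loop, which is exactly what the paragraph above recommends.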

Using these scripts on a regular basis on my servers has enabled me to be much more proactive in keeping my systems up to date and preventing problems before they become major issues. It has also reduced the usual hassle in finding new patches and retrieving them, thus saving my time for other tasks.

patchfetch2 - report patch status, select wanted patches and download

Solaris -- Patch Management (04-Dec-2000) -- These are essentially two Bourne shell scripts, CheckPatches and GetApplyPatch.

Patch management is fundamental to security. Two simple tools we've developed for patch management are presented -- CheckPatches to list outstanding patches and GetApplyPatch to apply them. Traditional Unix tar kit available.

Solaris PatchReport 2.x (Solaris patch tool) PatchReport 2.x is ready for download. The latest update contains support for Casper Dik's fastpatch and the ability to exclude certain patches based on patch IDs. PatchReport can be found at the following URL(s):


Solaris-specific resources on things I can do to secure boxes before putting them into production. I was specifically having problems with patches and packages being on the box that would never be used and were security risks.

Many good responses:

1. Casper Dik <[email protected]>

I want to make sure that by installing the patches I don't end up installing a service that I didn't previously have. Will running the cluster patch script install a patch for lpd, for example, if the lpd package hasn't previously been installed?

If you removed the entire package, the patch won't install ("no applicable packages found"). If you just removed a few files, and those files are included in the patch, the files will be restored.

Is this the most prudent way to upgrade a box with security and keeping current in mind? I understand there is a program out there that will monitor the current state of your machine with respect to the currently available patches? Does anyone know anything more about this?

Monitor the security patches and download them. I'd also suggest taking all of the recommended patches. Also, check the SunWorld Online columns (including back issues) off Sun's home page, in particular the security column.

2. Gnuchev Fedor <[email protected]>

There is a patchdiag xref on SunSolve, a tool with a daily-updated database of patches, so you can check which patches are required according to what packages are actually loaded on your machine. A patch will not be applied if the package is not installed. What is more unpleasant, it will also fail to apply if you have made changes to the system and replaced files within a package, since patchadd checks against the /var/sadm/install/contents database.
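The contents-database check described here can be seen directly: every installed file is recorded in /var/sadm/install/contents along with the package that owns it, and patchadd backs out when reality no longer matches. A mock lookup (the two-line contents file below is fabricated, since the real database only exists on Solaris):

```shell
#!/bin/sh
# Mock lookup against a /var/sadm/install/contents-style database.
# Fields: path, ftype, class, mode, owner, group, size, cksum, modtime,
# package. These two entries are fabricated for illustration.
cat > /tmp/contents <<'EOF'
/usr/lib/sendmail f none 2511 root bin 1541 12345 987654321 SUNWsndmu
/usr/sbin/in.named f none 0755 root bin 820 54321 987654321 SUNWinamd
EOF
# Which package claims /usr/lib/sendmail?
awk '$1 == "/usr/lib/sendmail" { print $NF }' /tmp/contents > /tmp/owner
cat /tmp/owner
```

If you replace /usr/lib/sendmail with a hand-built binary, the size and checksum no longer match what this database records, which is exactly why the patch then refuses to apply cleanly.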

Should I just remove the entire existing named and sendmail packages before proceeding? Is that generally the proper approach?

Well, no - do not remove them. Move the files aside as *.bak or *.orig, but keep them (for no good reason, actually, except the possibility to bail out).
//frankly - never had to bail out - ISC tools are fine on Solaris :-)





Copyright © 1996-2021 by Softpanorama Society. The site was initially created as a service to the (now defunct) UN Sustainable Development Networking Programme (SDNP) without any remuneration. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License. Original materials copyright belongs to respective owners. Quotes are made for educational purposes only in compliance with the fair use doctrine.

FAIR USE NOTICE This site contains copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available to advance understanding of computer science, IT technology, economic, scientific, and social issues. We believe this constitutes a 'fair use' of any such copyrighted material as provided by section 107 of the US Copyright Law according to which such material can be distributed without profit exclusively for research and educational purposes.

This is a Spartan WHYFF (We Help You For Free) site written by people for whom English is not a native language. Grammar and spelling errors should be expected. The site contains some broken links as it develops like a living tree...

You can use PayPal to buy a cup of coffee for the authors of this site.


The statements, views and opinions presented on this web page are those of the author (or referenced source) and are not endorsed by, nor do they necessarily reflect, the opinions of the Softpanorama society. We do not warrant the correctness of the information provided or its fitness for any purpose. The site uses AdSense, so you need to be aware of the Google privacy policy. If you do not want to be tracked by Google, please disable Javascript for this site. This site is perfectly usable without Javascript.

Last modified: March 12, 2019