
Disk Partitions in Solaris


One of the most common problems that system administrators face in the Unix world is disk space. Whether it's running out of space, or just making sure that no one user is hogging all your resources, exploring how the hard disks are allocated and utilized on your system is a critical skill.

In the last few years, hard disks have become considerably bigger than most operating systems can comfortably manage. Indeed, most file systems have a maximum size for individual files and a maximum number of files and/or directories that can live on a single physical device, and it's those constraints that collide with larger and larger drives.

As a result, most modern operating systems support taking a single physical disk and splitting it into multiple virtual disks, or partitions. Windows and Macintosh systems have supported this for a few years, but usually on a personal desktop system you don't have to worry too much about disks that are too big, or, worse, running out of disk space and having the system crash.

Unix is another beast entirely. In the world of Unix, you can have hundreds of different virtual disks and not even know it -- even the contents of your home directory might be spread across two or three partitions.

For example, on my main Web server, I have a log file that's currently growing about 140K/day and is 19 megabytes. Doesn't sound too large when you think about 50 gigabyte disks for $100 at the local electronics store, but having big disks at the store doesn't mean that they're installed in your server!

In fact, Unix behaves very poorly when it runs out of disk space, and can become sufficiently corrupted that it essentially stops and requires an expert sysadmin to resurrect. To avoid this horrible fate, it's crucial to keep an eye on how full your partitions are growing, and to know how to prune large files before they become a serious problem.
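Keeping that eye on things is easy to automate. Here's a minimal sketch of a watchdog (my own, not from the text) that parses POSIX `df -P` output and warns when any partition crosses a usage threshold; the 90% default is an arbitrary assumption:

```shell
#!/bin/sh
# Warn when any mounted partition exceeds a usage threshold.
# Sketch only: the threshold and message format are assumptions.
THRESHOLD=${1:-90}

df -P | awk -v max="$THRESHOLD" 'NR > 1 {
    sub(/%/, "", $5)                      # fifth column is "Use%"
    if ($5 + 0 >= max)
        printf "WARNING: %s is %s%% full (mounted on %s)\n", $1, $5, $6
}'
```

Run daily from cron, a script like this turns the "expert resurrection" scenario into a routine early warning.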

Exploring Partitions

  1. The command we'll be exploring in this section is df, a command that reports disk space usage. Without any arguments at all, it offers lots of useful information:
    # df
    Filesystem           1k-blocks      Used Available Use% Mounted on
    /dev/sda5               380791    108116    253015  30% /
    /dev/sda1                49558      7797     39202  17% /boot
    /dev/sda3             16033712     62616  15156608   1% /home
    none                    256436         0    256436   0% /dev/shm
    /dev/sdb1             17245524   1290460  15079036   8% /usr
    /dev/sdb2               253871     88384    152380  37% /var
    Upon first glance, it appears that I have five different disks connected to this system. In fact, I have two.
  2. I'm sure you already know this, but it's worth pointing out that all devices hooked up to a computer, whether for input or output, require a specialized piece of code called a device driver to work properly. In the Windows world, they're typically hidden away, and you have no idea what they're even called.

    Device drivers in Unix, however, are files. They're special files, but they show up as part of the file system along with your e-mail archive and login scripts.

    That's what the /dev/sda5 is on the first line, for example. We can have a look at this file with ls to see what it is:

    # ls -l /dev/sda5
    brw-rw----    1 root     disk       8,   5 Aug 30 13:30 /dev/sda5
    The leading 'b' is something you probably haven't seen before. It denotes that this device is a block-special device.


    TIP If you ever have problems with a device, use ls -l to make sure it's configured properly. If the listing doesn't begin with a 'c' (for a character-special device) or a 'b' (for a block-special device), something's gone wrong and you need to delete it and rebuild it with mknod.

    Here's a nice thing to know: the device names in Unix have meaning. In this case, "sd" denotes a SCSI disk, the next letter identifies which physical disk it is ("a" for the first), and the trailing digit is the partition number on that disk (here "5"). The actual major and minor device numbers are the "8,   5" pair shown in the ls -l listing above.

    From this information, we can glean that sda1, sda3, and sda5 are three partitions on the same disk, and that sdb1 and sdb2 are two partitions on a second disk.
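If you'd like the system to do that grouping for you, a short df-plus-awk pipeline (a sketch of mine, assuming Linux-style sdXN names) collects the mounted partitions by physical disk:

```shell
# Group mounted /dev partitions from 'df -P' by physical disk name,
# stripping the trailing partition number (sda5 -> sda).
df -P | awk 'NR > 1 && $1 ~ /^\/dev\// {
    disk = $1
    sub(/[0-9]+$/, "", disk)              # drop the partition number
    parts[disk] = parts[disk] " " $1
}
END { for (d in parts) printf "%s:%s\n", d, parts[d] }'
```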

  3. How big is the disk? Well, in some sense it doesn't really matter in the world of Unix, because Unix only cares about the partitions that are assigned to it. If the second disk is 75GB, but we only have a 50MB partition that's available to Unix, the vast majority of the disk is untouchable and therefore doesn't matter.

    If you really want to figure it out, you could add up the size of each partition (the 1k-blocks column), but let's dissect a single line of output first, so you can see what's what:

    /dev/sda5               380791    108116    253015  30% /
    Here you're shown the device ID (sda5), then the size of the partition (in 1K blocks under Linux). This partition is 380,791KB, or roughly 380MB. The second number shows how much of the partition is used -- 108,116KB -- and the next how much is available -- 253,015KB. That works out to 30% of the partition in use and 70% available.

    NOTE Those purists among you will spot the error in that calculation: converting 1K blocks to megabytes requires dividing 380,791 by 1024, not a simple division by 1000. To keep everyone happy: this partition is really about 371.9MB.
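That conversion is easy to delegate to awk; a one-liner of my own that turns df's 1K-block figure into binary megabytes:

```shell
# Convert a 1K-block count into binary megabytes (divide by 1024).
awk -v kb=380791 'BEGIN { printf "%.1fMB\n", kb / 1024 }'
# prints 371.9MB
```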

    The last value is perhaps the most important because it indicates where the partition has been connected to the Unix file system. Partition "sda5" is the root partition, as can be seen by the '/'.
  4. Let's look at another line from the df output:
    /dev/sda3             16033712     62616  15156608   1% /home
    Here notice that the partition is considerably bigger! In fact, it's 16,033,712KB, or roughly 16GB (15.3GB for purists). Unsurprisingly, very little of this is used -- less than 1% -- and it's mounted to the system as the /home directory.

    In fact, look at the mount points for all the partitions for just a moment:

    # df
    Filesystem           1k-blocks      Used Available Use% Mounted on
    /dev/sda5               380791    108116    253015  30% /
    /dev/sda1                49558      7797     39202  17% /boot
    /dev/sda3             16033712     62616  15156608   1% /home
    none                    256436         0    256436   0% /dev/shm
    /dev/sdb1             17245524   1290460  15079036   8% /usr
    /dev/sdb2               253871     88389    152375  37% /var
    We have the top-most root partition (sda5), then we have additional small partitions for /boot, /usr, and /var. The two really big spaces are /home, where all the individual user files will live, and /usr, where I have all the Web sites on this server stored.

    This is a very common configuration, where each area of Unix has its own "sandbox to play in," as it were. This lets you, the sysadmin, manage file usage quite easily, ensuring that running out of space in one directory (say, /home) doesn't affect the overall system.

  5. Solaris 8 has a df command that offers very different information, focused more on files and the file system than on disks and disk space used:
    # df
    /                  (/dev/dsk/c0d0s0   ):  827600 blocks   276355 files
    /boot              (/dev/dsk/c0d0p0:boot):   17584 blocks       -1 files
    /proc              (/proc             ):       0 blocks     1888 files
    /dev/fd            (fd                ):       0 blocks        0 files
    /etc/mnttab        (mnttab            ):       0 blocks        0 files
    /var/run           (swap              ): 1179992 blocks    21263 files
    /tmp               (swap              ): 1179992 blocks    21263 files
    /export/home       (/dev/dsk/c0d0s7   ): 4590890 blocks   387772 files
    It's harder to see what's going on, but notice that the order of information presented on each line is the mount point, the device identifier, the amount of free space (in 512-byte blocks), and the number of files that can still be created on that device.

    There's no easy way to see how much of each disk is in use relative to its total size, so the default df output isn't very helpful for a system administrator.

    Fortunately, there's the '-t' totals option that offers considerably more helpful information:

    # df -t
    /                  (/dev/dsk/c0d0s0   ):   827600 blocks   276355 files
                                      total:  2539116 blocks   320128 files
    /boot              (/dev/dsk/c0d0p0:boot):    17584 blocks       -1 files
                                      total:    20969 blocks       -1 files
    /proc              (/proc             ):        0 blocks     1888 files
                                      total:        0 blocks     1932 files
    /dev/fd            (fd                ):        0 blocks        0 files
                                      total:        0 blocks      258 files
    /etc/mnttab        (mnttab            ):        0 blocks        0 files
                                      total:        0 blocks        1 files
    /var/run           (swap              ):  1180000 blocks    21263 files
                                      total:  1180008 blocks    21279 files
    /tmp               (swap              ):  1180000 blocks    21263 files
                                      total:  1180024 blocks    21279 files
    /export/home       (/dev/dsk/c0d0s7   ):  4590890 blocks   387772 files
                                      total:  4590908 blocks   387776 files
    Indeed, when I've administered Solaris systems, I've usually set up an alias df="df -t" to always have this more informative output.

    NOTE If you're trying to analyze the df output programmatically so you can flag when disks start to get tight, you'll immediately notice that there's no percent-used summary in the df output in Solaris. Extracting just the relevant fields of information is quite tricky, too, because you want to glean the free-block count from one line, then the total number of blocks from the next. It's a job for Perl or awk (or even a small C program).
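As a sketch of the awk job that note describes (the layout assumptions are mine, taken from the sample 'df -t' output above): pair each mount's free-block line with the 'total:' line that follows it and compute the percentage used:

```shell
# Compute percent-used from Solaris-style 'df -t' output: a free-blocks
# line for each mount, followed by an indented 'total:' line.
df -t | awk '
/^[ \t]*total:/ {
    total = $2
    if (total > 0)
        printf "%s %d%% used\n", mount, 100 * (total - free) / total
    next
}
{ mount = $1; free = $(NF - 3) }'
```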

  6. By way of contrast, Darwin has a very different output for the df command:
    # df
    Filesystem              512-blocks     Used    Avail Capacity  Mounted on
    /dev/disk1s9              78157200 29955056 48202144    38%    /
    devfs                           73       73        0   100%    /dev
    fdesc                            2        2        0   100%    /dev
    <volfs>                       1024     1024        0   100%    /.vol
    /dev/disk0s8              53458608 25971048 27487560    48%    /Volumes/Macintosh HD
    automount -fstab [244]           0        0        0   100%    /Network/Servers
    automount -static [244]          0        0        0   100%    /automount
    About as different as it could be, and notice that it suggests that just about everything is at 100% capacity.

    Uh oh!

    A closer look, however, reveals that the devices at 100% capacity are devfs, fdesc, <volfs> and two automounted services. In fact, they're related to the Mac OS running within Darwin, and, really, the only lines of interest in this output are the two proper /dev/ devices:

    /dev/disk1s9              78157200 29955056 48202144    38%    /
    /dev/disk0s8              53458608 25971048 27487560    48%    /Volumes/Macintosh HD
    The first of these, identified as /dev/disk1s9, is the hard disk where Mac OS X is installed, and it has 78,157,200 blocks. However, they're not 1K blocks as in Linux; they're 512-byte blocks, so you need to factor that in when you calculate the size in GB:
                    78,157,200 / 2 = 39,078,600 1K blocks
                    39,078,600 / 1024 = 38,162.69MB
                    38,162.69MB / 1024 = 37.26GB
    In fact, this is a 40GB disk, so we're right on with our calculations, and we can see that 38% of the disk is in use, leaving us with 48,202,144 / (2 * 1024 * 1024) = 22.9GB free.

    TIP Wondering what happened to the 2.74GB difference between the manufacturer's claim of a 40GB disk and the 37.26GB we just calculated? Mostly it's a difference in units: disk makers count decimal gigabytes (10^9 bytes), while we just counted binary ones, and 40,000,000,000 bytes / 1024^3 works out to about 37.25GB. The remaining sliver is consumed by formatting and file system overhead, which is why manufacturers talk about "unformatted capacity."

    Using the same math, you can calculate that the second disk is 25GB, of which about half (48%) is in use.
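Rather than doing the 512-byte-block arithmetic by hand each time, you can let awk do it; a throwaway one-liner of my own:

```shell
# Convert Darwin's 512-byte block count to gigabytes:
# blocks / 2 -> 1K blocks, / 1024 -> MB, / 1024 -> GB.
awk -v blocks=78157200 'BEGIN { printf "%.2fGB\n", blocks / 2 / 1024 / 1024 }'
# prints 37.27GB (rounded; the text truncates to 37.26)
```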
  7. Linux has one more very nice df flag worth mentioning: use '-h' and you get:
    # df -h
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/sda5             372M  106M  247M  30% /
    /dev/sda1              48M  7.7M   38M  17% /boot
    /dev/sda3              15G   62M   14G   1% /home
    none                  250M     0  250M   0% /dev/shm
    /dev/sdb1              16G  1.3G   14G   8% /usr
    /dev/sdb2             248M   87M  148M  37% /var
    A much more human-readable format. Here you can see that /home and /usr both have 14GB unused. Lots of space!
This section has given you a taste of the df command, but we haven't spent too much time analyzing the output and digging around trying to ascertain where the biggest files live. That's what we'll consider next.

A Closer Look with du

The df command is one you'll use often as you get into the groove of system administration work. In fact, some sysadmins have df e-mailed to them every morning from cron so that they can keep a close eye on things. Others have it as a command in their .login or .profile configuration file so that they see the output every time they connect.
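A root crontab entry for that morning report might look like this (the schedule and mail address are illustrative assumptions, not from the text):

```
# min hour dom mon dow  command -- mail a disk report at 6:00 every day
0 6 * * *  df -k | mail -s "daily disk report" admin@example.com
```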

Once you're familiar with how the disks are being utilized in your Unix system, however, it's time to dig a bit deeper into the system and ascertain where the space is going.

Task 3.2: Using du to Ascertain Directory Sizes

The du command shows you disk usage, helpfully enough, and it has a variety of flags that are critical to using this tool effectively.

  1. There won't be a quiz on this, but see if you can figure out what the default output of du is here when I use the command while in my home directory:
    # du
    12      ./.kde/Autostart
    16      ./.kde
    412     ./bin
    36      ./CraigsList
    32      ./DEMO/Src
    196     ./DEMO
    48      ./elance
    16      ./Exchange
    1232    ./Gator/Lists
    4       ./Gator/Old-Stuff/Adverts
    8       ./Gator/Old-Stuff
    1848    ./Gator/Snapshots
    3092    ./Gator
    160     ./IBM/i
    136     ./IBM/images
    10464   ./IBM
    76      ./CBO_MAIL
    52      ./Lynx/WWW/Library/vms
    2792    ./Lynx/WWW/Library/Implementation
    24      ./Lynx/WWW/Library/djgpp
    2872    ./Lynx/WWW/Library
    2880    ./Lynx/WWW
    556     ./Lynx/docs
    184     ./Lynx/intl
    16      ./Lynx/lib
    140     ./Lynx/lynx_help/keystrokes
    360     ./Lynx/lynx_help
    196     ./Lynx/po
    88      ./Lynx/samples
    20      ./Lynx/scripts
    1112    ./Lynx/src/chrtrans
    6848    ./Lynx/src
    192     ./Lynx/test
    13984   ./Lynx
    28484   .
    If you guessed that it's the size of each directory, you're right! Notice that the sizes are cumulative, because each directory's figure sums the sizes of all files and subdirectories within it. So the "Lynx" directory is 13,984 somethings, which includes the subdirectory "Lynx/src" (6,848), which itself contains "Lynx/src/chrtrans" (1,112).

    The last line is a summary of the entire current directory ("."), which has a combined size of 28484.

    And what is that pesky unit of measure? Unfortunately, it's different in different implementations of Unix, so I always check the man page before answering this question. Within Red Hat Linux 7.2, the man page for du frustratingly never states the unit. However, it shows that there's a '-k' flag that forces the output into 1KB blocks, so a quick check:

    # du -k | tail -1
    28484   .
    produces the same number as the preceding, so we can safely conclude that the unit in question is a 1KB block. Therefore, you can see that "Lynx" takes up 13.6MB of space, and that the entire contents of my home directory consume 27.8MB. A tiny fraction of the 15GB /home partition!

    NOTE Of course, I can recall when I splurged and bought myself a 20MB external hard disk for an early computer. I couldn't imagine that I could even fill it, and it cost more than $200 too! But I'll try not to bore you with the reminiscence of an old-timer, okay?

  2. The recursive listing of subdirectories is useful information, but the higher up you go in the file system, the less helpful that information proves to be. Imagine if you were to type du / and had to wade through the output:
    # du / | wc -l
    That's a lot of output!

    Fortunately, one of the most useful flags to du is '-s', which summarizes disk usage by only reporting the files and directories that are specified, or '.' if none are specified:

    # du -s
    28484   .
    # du -s *
    4       badjoke
    4       badjoke.rot13
    412     bin
    4       buckaroo
    76      CBO_MAIL
    36      CraigsList
    196     DEMO
    48      elance
    84      etcpasswd
    16      Exchange
    3092    Gator
    0       gif.gif
    10464   IBM
    13984   Lynx
    Note in the latter case that because I used the '*' wildcard, it matched directories and files in my home directory. When given the name of a file, du dutifully reports the size of that file in 1KB blocks. You can force this behavior with the '-a' flag if you want.

    TIP The summary vanishes from the bottom of the du output when I specify directories as parameters, and that's too bad, because it's very helpful. To request a summary at the end, simply add the '-c' flag.

  3. While we're looking at the allocation of disk space, don't forget to check the root level, too. The results are interesting:
    # du -s /
    1471202 /
    Oops! We don't want just a one-line summary, but rather all the directories contained at the topmost level of the file system. Oh, and do make sure that you're running these as root, or you'll see all sorts of permission errors. Indeed, even as root the /proc file system will sporadically generate errors as du tries to calculate the size of a fleeting process-table entry or similar. Errors in /proc can be ignored in any case.

    One more try:

    # du -s /*
    5529    /bin
    3683    /boot
    244     /dev
    4384    /etc
    29808   /home
    1       /initrd
    67107   /lib
    12      /lost+found
    1       /misc
    2       /mnt
    1       /opt
    1       /proc
    1468    /root
    8514    /sbin
    12619   /tmp
    1257652 /usr
    80175   /var
    0       /web
    That's what I seek. Here you can see that the largest directory, by a significant margin, is /usr, weighing in at 1,257,652KB.
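Incidentally, those sporadic /proc and permission complaints arrive on stderr, so a standard sh redirection (my own refinement, not shown in the text) silences them while leaving du's real output untouched:

```shell
# Send stderr (the /proc and permission complaints) to /dev/null;
# du's per-directory figures arrive on stdout as usual.
du -s /* 2>/dev/null | sort -rn | head -5
```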

    Rather than calculate sizes by hand, I'm going to use another du flag, '-h', to ask for human-readable output:

    # du -sh /*
    5.4M    /bin
    3.6M    /boot
    244k    /dev
    4.3M    /etc
    30M     /home
    1.0k    /initrd
    66M     /lib
    12k     /lost+found
    1.0k    /misc
    2.0k    /mnt
    1.0k    /opt
    1.0k    /proc
    1.5M    /root
    8.4M    /sbin
    13M     /tmp
    1.2G    /usr
    79M     /var
    0       /web
    Much easier. Now you can see that /usr is 1.2GB in size, which is quite a lot!
  4. Let's use du to dig into the /usr directory and see what's so amazingly big, shall we?
    # du -sh /usr/*
    121M    /usr/bin
    4.0k    /usr/dict
    4.0k    /usr/etc
    40k     /usr/games
    30M     /usr/include
    3.6M    /usr/kerberos
    427M    /usr/lib
    2.7M    /usr/libexec
    224k    /usr/local
    16k     /usr/lost+found
    13M     /usr/sbin
    531M    /usr/share
    52k     /usr/src
    0       /usr/tmp
    4.0k    /usr/web
    103M    /usr/X11R6
    It looks to me like /usr/share is responsible for more than half the disk space consumed in /usr, with /usr/bin and /usr/X11R6 the next largest directories.

    You can easily step into /usr/share and run du again to see what's inside, but before we do, let's take a short break to talk about sort and how it can make the analysis of du output considerably easier.

  5. Before we leave this section to talk about sort, though, let's have a quick peek at du within the Darwin environment:
    # du -sk *
    5888    Desktop
    396760  Documents
    84688   Library
    0       Movies
    0       Music
    31648   Pictures
    0       Public
    32      Sites
    Notice that I've specified the '-k' flag here to force 1KB blocks (as with df, du's default on Darwin is 512-byte blocks). Otherwise, it's identical to Linux.

    The du output on Solaris is also reported in 512-byte blocks unless, as on Darwin, you force 1KB blocks with the '-k' flag:

    # du -sk *
    1       bin
    1689    boot
    4       cdrom
    372     dev
    13      devices
    2363    etc
    10      export
    0       home
    8242    kernel
    1       lib
    8       lost+found
    1       mnt
    0       net
    155306  opt
    1771    platform
    245587  proc
    5777    sbin
    32      tmp
    25      TT_DB
    3206    users
    667265  usr
    9268    var
    0       vol
    9       xfn
This section has demonstrated the helpful du command, showing how '-a', '-s', and '-h' can be combined to produce a variety of different output. You've also seen how successive du commands can help you zero in on disk space hogs, foreshadowing the "diskhogs" shell script we'll be developing later in this hour.

Simplifying Analysis with sort

The output of du has been very informative, but it's difficult to scan a listing to ascertain which are the four or five largest directories, particularly as more and more directories and files are included in the output. The good news is that the Unix sort utility is just the tool we need to sidestep this problem.

Task 3.3: Piping Output to sort

Why should we have to go through all the work of eyeballing page after page of listings when there are Unix tools to easily let us ascertain the biggest and smallest? One of the great analysis tools in Unix is sort, even though you rarely see it mentioned in other Unix system administration books.
  1. At its most obvious, sort alphabetizes output:
    # cat names
    # sort names
    No rocket science about that! However, what happens if the output of du is fed to sort?
    # du -s * | sort
    0       gif.gif
    10464   IBM
    13984   Lynx
    16      Exchange
    196     DEMO
    3092    Gator
    36      CraigsList
    412     bin
    48      elance
    4       badjoke
    4       badjoke.rot13
    4       buckaroo
    76      CBO_MAIL
    84      etcpasswd
    Sure enough, it's sorted. But probably not as you expected: the values are sorted as sequences of characters, not by their numeric value! Not good.
  2. That's where the '-n' flag is a vital addition here: with '-n' specified, sort will assume that the lines contain numeric information and sort them numerically:
    # du -s * | sort -n
    0       gif.gif
    4       badjoke
    4       badjoke.rot13
    4       buckaroo
    16      Exchange
    36      CraigsList
    48      elance
    76      CBO_MAIL
    84      etcpasswd
    196     DEMO
    412     bin
    3092    Gator
    10464   IBM
    13984   Lynx
    A much more useful result, if I say so myself!
  3. One more flag and we'll go back to analyzing disk usage. The only thing I'd like to change in the sorting here is that I'd like to have the largest directory listed first, and the smallest listed last.

    The order of a sort can be reversed with the '-r' flag, and that's the magic needed:

    # du -s * | sort -nr
    13984   Lynx
    10464   IBM
    3092    Gator
    412     bin
    196     DEMO
    84      etcpasswd
    76      CBO_MAIL
    48      elance
    36      CraigsList
    16      Exchange
    4       buckaroo
    4       badjoke.rot13
    4       badjoke
    0       gif.gif
    One final concept and we're ready to move along: if you want to see only the five largest files or directories in a specific directory, all you need to do is pipe the command sequence to head:
    # du -s * | sort -nr | head -5
    13984   Lynx
    10464   IBM
    3092    Gator
    412     bin
    196     DEMO
    This sequence of sort|head will prove very useful later in this hour.
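Since you'll reach for this pipeline constantly, it's worth wrapping it in a tiny sh function. The name "biggest" and its defaults are my own invention, not the book's:

```shell
# biggest [dir] [count] -- show the largest entries under a directory.
biggest() {
    dir=${1:-.}
    count=${2:-5}
    du -s "$dir"/* 2>/dev/null | sort -rn | head -"$count"
}

biggest . 3    # the three largest entries in the current directory
```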
A key concept with Unix is to begin understanding how the commands are all essentially Lego pieces that you can combine in any number of ways to get exactly the results you seek. In this vein, sort -rn is a terrific piece, and you'll find yourself using it again and again as you learn more about system administration.

Identifying the Biggest Files

We've explored the du command, sprinkled in a wee bit of sort for zest, and now it's time to accomplish a typical sysadmin task: Find the biggest files and directories in a given area of the system.

Task 3.4: Finding Big Files

The du command offers the capability to find either the largest directories, or the largest files and directories combined, but it doesn't offer a way to examine just files. Let's see what we can do to solve this problem.
  1. First off, it should be clear that the following command will produce a list of the five largest directories in my home directory:
    # du | sort -rn | head -5
    28484   .
    13984   ./Lynx
    10464   ./IBM
    6848    ./Lynx/src
    3092    ./Gator
    In a similar manner, the five largest directories in /usr/share and in the overall file system (ignoring the likely /proc errors):
    # du /usr/share | sort -rn | head -5
    543584  /usr/share
    200812  /usr/share/doc
    53024   /usr/share/gnome
    48028   /usr/share/gnome/help
    31024   /usr/share/apps
    # du / | sort -rn | head -5
    1471213 /
    1257652 /usr
    543584  /usr/share
    436648  /usr/lib
    200812  /usr/share/doc
    All well and good, but how do you find and test just the files?
  2. The easiest solution is to use the find command. Find will be covered in greater detail later in the book, but for now, just remember that find lets you quickly search through the entire file system, and performs the action you specify on all files that match your selection criteria.

    For this task, we want to isolate our choices to all regular files, which will omit directories, device drivers, and other unusual file system entries. That's done with "-type f".

    In addition, we're going to use the "-printf" option to find to produce exactly the output that we want from the matched files. In this instance, we'd like the file size, in kilobytes, and the fully-qualified file name. That's surprisingly easy to accomplish with a printf format string of "%k %p".

    TIP Don't worry too much if this all seems like Greek to you right now. Hour 12, "Quoting and Finding Files," will talk about the many wonderful features of find. For now, just type in what you see here in the book.

    Put all these together and you end up with the command:

    find . -type f -printf "%k %p\n"
    The two additions here are the '.', which tells find to start its search in the current directory, and the \n sequence in the format string, which is translated into a newline after each entry.
  3. Let's see it in action:
    # find . -type f -printf "%k %p\n" | head
    4 ./.kde/Autostart/Autorun.desktop
    4 ./.kde/Autostart/.directory
    4 ./.emacs
    4 ./.bash_logout
    4 ./.bash_profile
    4 ./.bashrc
    4 ./.gtkrc
    4 ./.screenrc
    4 ./.bash_history
    4 ./badjoke
    You can see where the sort command is going to prove helpful! In fact, let's preface head with a sort -rn to identify the ten largest files in the current directory:
    # find . -type f -printf "%k %p\n" | sort -rn | head
    8488 ./IBM/j2sdk-1_3_0_02-solx86.tar
    1812 ./Gator/Snapshots/MAILOUT.tar.Z
    1208 ./IBM/fop.jar
    1076 ./Lynx/src/lynx
    1076 ./Lynx/lynx
    628 ./Gator/Lists/Inactive-NonAOL-list.txt
    496 ./Lynx/WWW/Library/Implementation/libwww.a
    480 ./Gator/Lists/Active-NonAOL-list.txt
    380 ./Lynx/src/GridText.c
    372 ./Lynx/configure
    Very interesting information to be able to ascertain, and it'll even work across the entire file system (though it might take a few minutes, and, as usual, you might see some /proc hiccups):
    # find / -type f -printf "%k %p\n" | sort -rn | head
    26700 /usr/lib/libc.a
    19240 /var/log/cron
    14233 /var/lib/rpm/Packages
    13496 /usr/lib/netscape/netscape-communicator
    12611 /tmp/partypages.tar
    9124 /usr/lib/librpmdb.a
    8488 /home/taylor/IBM/j2sdk-1_3_0_02-solx86.tar
    5660 /lib/i686/
    5608 /usr/lib/qt-2.3.1/lib/
    5588 /usr/lib/qt-2.3.1/lib/
    Recall that the output is in 1KB blocks, so libc.a is pretty huge at more than 26 megabytes!
  4. You might find that your version of find doesn't include the snazzy new GNU find "-printf" flag (neither Solaris nor Darwin does, for example). If that's the case, you can at least fake it in Darwin with the somewhat more convoluted:
    # find . -type f -print0 | xargs -0 ls -s | sort -rn | head
    781112 ./Documents/Microsoft User Data/Office X Identities/Main Identity/Database
    27712 ./Library/Preferences/Explorer/Download Cache
    20824 ./.Trash/palmdesktop40maceng.sit
    20568 ./Library/Preferences/America Online/Browser Cache/IE Cache.waf
    20504 ./Library/Caches/MS Internet Cache/IE Cache.waf
    20496 ./Library/Preferences/America Online/Browser Cache/IE Control Cache.waf
    20496 ./Library/Caches/MS Internet Cache/IE Control Cache.waf
    20488 ./Library/Preferences/America Online/Browser Cache/cache.waf
    20488 ./Library/Caches/MS Internet Cache/cache.waf
    18952 ./.Trash/Palm Desktop Installer/Contents/MacOSClassic/Installer
    Here we not only have to print the filenames and feed them to the xargs command, we also have to compensate for the fact that many of the file names contain spaces, which would break the normal whitespace-delimited pipe. To cope, find has a "-print0" option that terminates each filename with a null character, and the "-0" flag tells xargs to expect null-terminated filenames.
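You can see the difference with a file name that contains a space. This little demonstration (my own, using a throwaway directory) shows the naive pipe losing the file while the null-terminated version handles it:

```shell
# Create a file whose name contains a space.
tmp=$(mktemp -d)
printf 'hello' > "$tmp/two words"

# Broken: the whitespace-delimited pipe splits one name into two bogus
# arguments, and ls fails on both (errors discarded here).
find "$tmp" -type f -print | xargs ls -s 2>/dev/null

# Safe: null-terminated names survive the pipe intact.
find "$tmp" -type f -print0 | xargs -0 ls -s

rm -rf "$tmp"
```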

    WARNING Actually, Darwin doesn't really like this kind of command at all. If you want to ascertain the largest files, you'd be better served to explore the "-ls" option to find, followed by an awk to chop out the file size, like this:

    find /home -type f -ls | awk '{ print $7" "$11 }'
    Of course, this is a slower alternative, but it'll work on any Unix system if you really want.
  5. To calculate the sizes of all files on a Solaris system, you can't use -printf or -print0, but if you set aside the concern about filenames with spaces (considerably less likely in a more traditional Unix environment like Solaris anyway), you'll find that the following works fine:
    # find / -type f -print | xargs ls -s | sort -rn | head
    55528 /proc/929/as
    26896 /proc/809/as
    26832 /usr/j2se/jre/lib/rt.jar
    21888 /usr/dt/appconfig/netscape/.netscape.bin
    21488 /usr/java1.2/jre/lib/rt.jar
    20736 /usr/openwin/lib/locale/zh_TW.BIG5/X11/fonts/TT/ming.ttf
    18064 /usr/java1.1/lib/
    16880 /usr/sadm/lib/wbem/store
    16112 /opt/answerbooks/english/solaris_8/SUNWaman/books/REFMAN3B/index/index.dat
    15832 /proc/256/as
    Actually, you can see that the memory allocation space for a couple of running processes has snuck into the listing (the /proc directory). We'll need to screen those out with a simple grep:
    # find / -type f -print | xargs ls -s | sort -rn | grep -v '/proc' | head
    26832 /usr/j2se/jre/lib/rt.jar
    21888 /usr/dt/appconfig/netscape/.netscape.bin
    21488 /usr/java1.2/jre/lib/rt.jar
    20736 /usr/openwin/lib/locale/zh_TW.BIG5/X11/fonts/TT/ming.ttf
    18064 /usr/java1.1/lib/
    16880 /usr/sadm/lib/wbem/store
    16112 /opt/answerbooks/english/solaris_8/SUNWaman/books/REFMAN3B/index/index.dat
    12496 /usr/openwin/lib/llib-lX11.ln
    12160 /opt/answerbooks/english/solaris_8/SUNWaman/books/REFMAN3B/ebt/REFMAN3B.edr
    9888 /usr/j2se/src.jar
The find command is somewhat like a Swiss Army Knife. It can do hundreds of different tasks in the world of Unix. For our use here, however, it's perfect for analyzing disk usage on a per-file basis.

Keeping Track of Users: diskhogs

Let's put all the information in this hour together and create an administrative script called "diskhogs". When run, this script will report the users with the largest /home directories, and then report the five largest files in each of their homes.

Task 3.5: This Little Piggy Stayed Home?

This is the first shell script presented in the book, so a quick rule of thumb: write your shell scripts in sh rather than csh. It's easier, more universally recognized, and most shell scripts you'll encounter are also written in sh. Also, keep in mind that just about every shell script discussed in this book will expect you to be running as root, since they'll need access to the entire file system for any meaningful or useful system administration functions.

In this book, all shell scripts will be written in sh, which is easily verified by the fact that they all have #!/bin/sh as their first line.

  1. Let's put all this together. To find the five largest home directories, you can use:
    du -s /home/* | sort -rn | cut -f2 | head -5
    For each directory, you can find the largest files within by using:
    find /home/loginID -type f -printf "%k %p\n" | sort -rn | head
    Therefore, we should be able to identify the top home directories, then step one-by-one into those directories to identify the largest files in each. Here's how that code should look:
    for dirname in `du -s /home/* | sort -rn | cut -f2- | head -5`
    do
      echo ""
      echo Big directory: $dirname
      echo Four largest files in that directory are:
      find $dirname -type f -printf "%k %p\n" | sort -rn | head -4
    done
    exit 0
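    Before pointing the du pipeline at /home, you can sanity-check it on a scratch directory (the directory names below are made up for the test):

    ```shell
    # Two throwaway "home" directories of obviously different sizes.
    home=$(mktemp -d)
    mkdir "$home/big" "$home/small"
    dd if=/dev/zero of="$home/big/blob" bs=1024 count=64 2>/dev/null
    dd if=/dev/zero of="$home/small/blob" bs=1024 count=1 2>/dev/null

    # Same pipeline as the script uses: biggest directory first, names only.
    biggest=$(du -s "$home"/* | sort -rn | cut -f2- | head -1)
    rm -rf "$home"
    ```

    The pipeline should hand back the "big" directory's name first, confirming the sort order and the cut field are right.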
  2. This is a good first stab at this shell script. Let's save it as diskhogs, run it, and see what we find:
    # sh diskhogs
    Big directory: /home/staging
    Four largest files in that directory are:
    423 /home/staging/waldorf/big/DSCF0165.jpg
    410 /home/staging/waldorf/big/DSCF0176.jpg
    402 /home/staging/waldorf/big/DSCF0166.jpg
    395 /home/staging/waldorf/big/DSCF0161.jpg
    Big directory: /home/chatter
    Four largest files in that directory are:
    1076 /home/chatter/comics/lynx
    388 /home/chatter/logs/access_log
    90 /home/chatter/logs/error_log
    64 /home/chatter/responding.cgi
    Big directory: /home/cbo
    Four largest files in that directory are:
    568 /home/cbo/financing.pdf
    464 /home/cbo/investors/CBO-plan.pdf
    179 /home/cbo/Archive/cbofinancial-modified-files/CBO
    77 /home/cbo/Archive/cbofinancial-modified-files/CBO Financial Incorporated.doc
    Big directory: /home/sherlockworld
    Four largest files in that directory are:
    565 /home/sherlockworld/originals-from gutenberg.txt
    56 /home/sherlockworld/speckled-band.html
    56 /home/sherlockworld/copper-beeches.html
    54 /home/sherlockworld/boscombe-valley.html
    Big directory: /home/launchline
    Four largest files in that directory are:
    151 /home/launchline/logs/access_log
    71 /home/launchline/x/submit.cgi
    71 /home/launchline/x/admin/managesubs.cgi
    64 /home/launchline/x/status.cgi
    As you can see, the results are good, but the order of the output is perhaps less than we'd like. Ideally, I'd like to have all the disk hogs listed first, then their largest files listed. To do this, we'll either have to store all the directory names in a variable that we parse subsequently, or write the information to a temporary file.

    Because it shouldn't be too much information (five directory names), we'll save the directory names as a variable. To do this, we'll use the nifty backquote notation.

    TIP Unix old-timers often refer to backquotes as 'backticks', so a wizened Unix admin might well say "stick the dee-ewe in back ticks" at this juncture.

    Here's how things will change. First off, let's load the directory names into the new variable:

    bigdirs="`du -s /home/* | sort -rn | cut -f2- | head -5`"
    then we'll need to change the for loop to reflect this change, which is easy:
    for dirname in $bigdirs ; do
    Notice I've also pulled the do line up to shorten the script. You'll recall that a semicolon indicates the end of a command in a shell script, so we can pull the next line up without any further ado.
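    Here's the semicolon trick in isolation -- a throwaway loop you can paste at any shell prompt:

    ```shell
    # "do" may share a line with "for" because the semicolon ends the word list.
    lines=$(for f in alpha beta gamma ; do echo "$f" ; done | wc -l)
    ```

    Three words in the list, three lines of output.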
  3. Now let's not forget to output the list of big directories before we list the big files per directory. In total, our script now looks like this:
    #!/bin/sh
    echo "Disk Hogs Report for System `hostname`"
    bigdirs="`du -s /home/* | sort -rn | cut -f2- | head -5`"
    echo "The Five biggest home directories are:"
    echo $bigdirs
    for dirname in $bigdirs ; do
      echo ""
      echo Big directory: $dirname
      echo Four largest files in that directory are:
      find $dirname -type f -printf "%k %p\n" | sort -rn | head -4
    done
    exit 0
    This is quite a bit closer to the finished product, as you can see from its output:
    Disk Hogs Report for System
    The Five biggest home directories are:
    /home/staging /home/chatter /home/cbo /home/sherlockworld /home/launchline
    Big directory: /home/staging
    Four largest files in that directory are:
    423 /home/staging/waldorf/big/DSCF0165.jpg
    410 /home/staging/waldorf/big/DSCF0176.jpg
    402 /home/staging/waldorf/big/DSCF0166.jpg
    395 /home/staging/waldorf/big/DSCF0161.jpg
    Big directory: /home/chatter
    Four largest files in that directory are:
    1076 /home/chatter/comics/lynx
    388 /home/chatter/logs/access_log
    90 /home/chatter/logs/error_log
    64 /home/chatter/responding.cgi
    Big directory: /home/cbo
    Four largest files in that directory are:
    568 /home/cbo/financing.pdf
    464 /home/cbo/investors/CBO-plan.pdf
    179 /home/cbo/Archive/cbofinancial-modified-files/CBO
    77 /home/cbo/Archive/cbofinancial-modified-files/CBO Financial Incorporated .doc
    Big directory: /home/sherlockworld
    Four largest files in that directory are:
    565 /home/sherlockworld/originals-from gutenberg.txt
    56 /home/sherlockworld/speckled-band.html
    56 /home/sherlockworld/copper-beeches.html
    54 /home/sherlockworld/boscombe-valley.html
    Big directory: /home/launchline
    Four largest files in that directory are:
    151 /home/launchline/logs/access_log
    71 /home/launchline/x/submit.cgi
    71 /home/launchline/x/admin/managesubs.cgi
    64 /home/launchline/x/status.cgi
    This is a script you could easily run every morning in the wee hours with a line in cron (which we'll explore in great detail in Hour 15, "Running Jobs in the Future"), or you can even put it in your .profile to run automatically each time you log in.
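    For example, a root crontab entry along these lines would run the report nightly; the path /usr/local/bin/diskhogs is an assumption -- substitute wherever you saved the script:

    ```shell
    # Hypothetical root crontab line (added with "crontab -e" as root):
    # minute hour day month weekday   command
    # 20     4    *   *     *         sh /usr/local/bin/diskhogs | mail -s "Disk Hogs Report" root
    ```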
  4. One final nuance: To have the output e-mailed to you, simply append the following:
    | mail -s "Disk Hogs Report" your-emailaddr
    If you've named this script like I have, then you could have the output e-mailed to you (as root) with:
    sh diskhogs | mail -s "Disk Hogs Report" root
    Try that, then check root's mailbox to see if the report made it.
  5. For those of you using Solaris, Darwin, or another Unix, the nifty -printf option probably isn't available in your version of find. As a result, the more generic version of this script is rather more complex, because we not only have to sidestep the lack of -printf, but also address the challenge of embedded spaces in many directory names (on Darwin). To accomplish the latter, we use sed to change all spaces to double underscores, then back again when we feed the argument to the find command (awk then extracts the size field):
    #!/bin/sh
    echo "Disk Hogs Report for System `hostname`"
    bigdir2="`du -s /Library/* | sed 's/ /__/g' | sort -rn | cut -f2- | head -5`"
    echo "The Five biggest library directories are:"
    echo $bigdir2
    for dirname in $bigdir2 ; do
      echo ""
      echo Big directory: $dirname
      echo Four largest files in that directory are:
      find "`echo $dirname | sed 's/__/ /g'`" -type f -ls | \
        awk '{ print $7" "$11 }' | sort -rn | head -4
    done
    exit 0
    The good news is that the output ends up being almost identical, which you can verify if you have an OS X or other BSD system available.

    Of course, what would be smart would be to replace the native version of find with the more sophisticated GNU version, but changing essential system tools is more than most Unix users want!

    TIP If you do want to explore upgrading some of the Unix tools in Darwin to take advantage of the sophisticated GNU enhancements, then you'd do well to start by looking on for ported code. The site also includes download instructions.

    If you're on Solaris or another flavor of Unix that isn't Mac OS X, then check out the main GNU site for tool upgrades at:

This shell script evolved in a manner that's quite common for Unix tools -- it started out life as a simple command line; then, as the sophistication of the tool increased, the complexity of the command sequence increased to the point where it was too tedious to type in directly, so it was dropped into a shell script. Shell variables then offered the capability to save interim output, fine-tune the presentation, and more, so we exploited them to build a more powerful tool. Finally, the tool itself was added to the system as an automated monitoring task via the root cron job.


This hour has not only shown you two of the basic Unix commands for analyzing disk usage and utilization, but it's also demonstrated the evolution and development of a useful administrative shell script, diskhogs.

This sequence of command to multi-stage command to shell script will be repeated again and again as you learn how to become a powerful system administrator.


The Workshop offers a summary Q&A and poses some questions about the topics presented in this chapter. It also provides you with a preview of what you will learn in the next hour.


This section contains common questions and answers about the topic covered in this hour. If you have additional questions that aren't covered, send me e-mail and maybe it'll show up in the next edition!

Q Why are some Unix systems built around 512-byte blocks, whereas others are built around 1024-byte blocks?

A This is all because of the history and evolution of Unix systems. When Unix was first deployed, disks were small, and it was important to squeeze as many bytes out of the disk as possible. As a result, the file system was developed with a fundamental block size of 512 bytes (that is, the space allocated for files was always in 512-byte chunks). As disks became bigger, millions of 512-byte blocks proved more difficult to manage than their benefit of allowing more effective utilization of the disk was worth. As a result, the block size doubled, to 1KB, and has remained there to this day. Some Unix systems have stayed with the historical 512-byte block size, whereas others use the more modern 1KB block size.

Q Do all device names have meaning?

A As much as possible, yes. Sometimes you can't help but end up with a /dev/fd13x4s3, but even then, there's probably a logical explanation behind the naming convention.

Q If there's a flag to du that causes it to report results in 1KB blocks on a system that defaults to 512-byte blocks, why isn't there a flag on 1KB systems to report in 512-byte blocks?

A Ah, you expect everything to make sense? Maybe you're in the wrong field after all...

  1. Why do most Unix installations organize disks into lots of partitions, rather than a smaller number of huge physical devices?
  2. When you add up the size of all the partitions on a large hard disk, there's always some missing space. Why?
  3. If you see devices /dev/sdb3, /dev/sdb4, and /dev/sdc1, what's a likely guess about how many physical hard disks are referenced?
  4. Both Solaris and Darwin offer the very helpful '-k' flag to the df command. What does it do, and why would it be useful?
  5. Using the '-s' flag to ls, the '-rn' flags to sort and the '-5' flag to head, construct a command line that shows you the five largest files in your home directory.
  6. What do you think would happen to our script if a very large file was accidentally left in the /home directory overnight?
  1. By dividing a disk into multiple partitions, you create a more robust system, because one partition can fill up without affecting the others.
  2. The missing space is typically allocated for low-level disk format information. On a typical 10GB disk, perhaps as much as 2-4% of the disk space might not be available after the drive is formatted.
  3. This probably represents two drives: /dev/sdb and /dev/sdc.
  4. The '-k' flag makes a system that defaults to 512-byte blocks report sizes in 1KB blocks.
  5. This one should be easy: ls -s $HOME | sort -rn | head -5.
  6. The script, as written, would flag the very large file as one of the largest home directories, then it would fail when it tried to analyze the files within. It's an excellent example of the need for lots of error-condition code and some creative thought while programming.
Next Hour

The next hour will continue to build the foundations of sysadmin knowledge with the oft-convoluted file ownership model. This will include digging into both the passwd and groups files and learning how to safely change them to create a variety of different permission scenarios.


Adding a SCSI Harddrive under Solaris

Adding a new disk to a system involves a number of steps:

  1. Connecting the disk: it can be an internal or external hard drive. We assume that the SCSI controller is already installed and functional.
  2. Creating the device files required to access the disk. In order to access a disk device, the proper device nodes must exist in /dev. For most versions of Unix, these were already created during the installation of the operating system. Details can be found in the operating-system-specific sections below.
  3. Partitioning the disk. Although it is possible to use a disk drive as one large filesystem, this is generally a bad idea. Partitioning is the process of splitting a disk up into several smaller sections, or partitions. Each partition is treated as an independent filesystem. This increases disk efficiency and organization by localizing data, makes it easier to back up sections of the filesystem, and helps to keep damage to one partition from affecting the entire drive.
  4. Making a new filesystem on each partition. This is done with the newfs command.
  5. Checking the integrity of the new filesystems with fsck.
  6. Creating a mount point and mounting the new filesystem.
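In command form, the six steps boil down to the following outline. The device c0t5d0 is this chapter's example, while the mount point /export/newdisk is an assumption; every command here is destructive or system-level, so treat this as a checklist, not a paste-ready recipe:

```shell
# 1-2. Attach the drive, then reconfigure-reboot so the /dev nodes appear:
#        ok boot -r                        (from the boot monitor)
# 3.   Partition and label the disk (interactive):
#        format /dev/rdsk/c0t5d0s2
# 4.   Make a filesystem on each data slice:
#        newfs /dev/rdsk/c0t5d0s3
# 5.   Check its integrity:
#        fsck -y /dev/rdsk/c0t5d0s3
# 6.   Create a mount point and mount the new filesystem:
#        mkdir /export/newdisk && mount /dev/dsk/c0t5d0s3 /export/newdisk
```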

After you connect an external drive to the controller or install a new internal drive, the system should recognize the new device on the SCSI bus. After powering up the system, hold down the Stop key and press the A key to enter the boot monitor. At the boot monitor, probe-scsi can be used to list the SCSI devices the system recognizes:

Type 'go' to resume
Type help for more information
ok probe-scsi
Target 5
  Unit 0  Disk     HP        C37245       5153

After verifying that the new disk is recognized by the system, reboot the machine by issuing "boot -r" from the boot monitor. The -r option tells the system to reconfigure for the new device.

During the boot process, the new disk should be recognized and a message should be printed to the console. (On some Suns, it may not be printed to the screen, but will be written to the system log -- in this case, the dmesg command should be used to review the boot messages). The messages should be similar to this:

   sd5 at esp0: target 5 lun 0
   sd5 is /iommu@f,e0000000/sbus@f,e0001000/espdma@f,400000/esp@f,800000/sd@5,0
   WARNING: /iommu@f,e0000000/sbus@f,e0001000/espdma@f,400000/esp@f,800000/sd@5,0 (sd5):  
    corrupt label - wrong magic number
    Vendor 'HP', product 'C3724S', 2354660 512 byte blocks 

In this example, the disk is located on controller 0, SCSI ID 5. The "corrupt label" warning means that the disk doesn't have a Solaris label on it yet.

Device nodes

The correct device nodes for the disk are added automatically when a "boot -r" is issued, or when the file /reconfigure is created and the system is rebooted.

 Partitioning and Labeling

The format utility is used to format, partition, and label disks. It is menu driven. The raw disk device is given as an argument; if no argument is given, format will print a list of available disks and ask the user to pick one.

# format /dev/rdsk/c0t5d0s2
selecting /dev/rdsk/c0t5d0s2
[disk formatted]
        disk       - select a disk
        type       - select (define) a disk type
        partition  - select (define) a partition table
        current    - describe the current disk
        format     - format and analyze the disk
        repair     - repair a defective sector
        label      - write label to the disk
        analyze    - surface analysis
        defect     - defect list management
        backup     - search for backup labels
        verify     - read and display labels
        save       - save new disk/partition definitions
        inquiry    - show vendor, product and revision
        volname    - set 8-character volume name

Typing format at the prompt will perform a low-level format on the disk. Please note that a low-level format is not the same as a format in the DOS sense; DOS format corresponds more to the newfs operation on Solaris than to this format operation (although format does check for bad sectors). Newer SCSI drives, such as Seagate's, are preformatted and do not need additional formatting. If something does go wrong, you can always restore the formatting with the manufacturer's tool, for example the Seagate enterprise tools.

The next step is to partition the drive. Type partition at the prompt to switch to the partition menu:

format> partition

        0      - change `0' partition
        1      - change `1' partition
        2      - change `2' partition
        3      - change `3' partition
        4      - change `4' partition
        5      - change `5' partition
        6      - change `6' partition
        7      - change `7' partition
        select - select a predefined table
        modify - modify a predefined partition table
        name   - name the current table
        print  - display the current table
        label  - write partition map and label to the disk

Since this is not the primary disk, you can use the select option and pick one of the predefined tables (option 1 here). After that, use the print option to get a listing of the current partition table. Note that partition 2 (the backup slice) represents the entire disk:

partition> print
Current partition table (original):
Total disk cylinders available: 3361 + 2 (reserved cylinders)
Part      Tag    Flag     Cylinders        Size            Blocks
  0 unassigned    wm       0               0         (0/0/0)          0
  1 unassigned    wm       0               0         (0/0/0)          0
  2     backup    wu       0-3360          1.12GB    (3361/0/0) 2352700
  3 unassigned    wm       0               0         (0/0/0)          0
  4 unassigned    wm       0               0         (0/0/0)          0
  5 unassigned    wm       0               0         (0/0/0)          0
  6 unassigned    wm       0               0         (0/0/0)          0
  7 unassigned    wm       0               0         (0/0/0)          0

Let's assume that you want to split the disk up into two equal partitions, numbers 0 and 1. The partition size can be specified in blocks, cylinders, or megabytes by using the b, c, and mb suffixes when entering the size. When splitting the disk in half, half of the total number of cylinders is easy to calculate and can be used; otherwise the megabyte size is easier to use.

partition> 0
Part      Tag    Flag     Cylinders        Size            Blocks
  0 unassigned    wm       0               0         (0/0/0)          0
Enter partition id tag[unassigned]:  
Enter partition permission flags[wm]: 
Enter new starting cyl[0]: 0
Enter partition size[0b, 0c, 0.00mb]: 1680c
partition> 1
Enter partition id tag[unassigned]: 
Enter partition permission flags[wm]: 
Enter new starting cyl[0]: 1681
Enter partition size[0b, 0c, 0.00mb]: $

Note Entering a dollar sign ($) as the value for the last partition size means "automatically assign the remaining space on the disk to this slice".
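The 1680c figure above is simple arithmetic, and it also predicts the slice's size; the 700 sectors/cylinder and 512 bytes/sector figures appear in the prtvtoc output shown below:

```shell
total_cyl=3361                            # accessible cylinders on this disk
half_cyl=$((total_cyl / 2))               # 1680 -- the value entered for slice 0
sectors=$((half_cyl * 700))               # 700 sectors per cylinder
mbytes=$((sectors * 512 / 1024 / 1024))   # 512-byte sectors, whole megabytes
```

Those 1,176,000 sectors (about 574MB) are exactly what newfs reports later when a slice of this size is formatted.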

Once the disk has been partitioned, the label should be written to the disk:

partition> label
Ready to label disk, continue? y

The new partition table can be printed from the format utility, or may be viewed using the prtvtoc command:

# prtvtoc /dev/rdsk/c0t5d0s2
* /dev/rdsk/c0t5d0s2 partition map
* Dimensions:
*     512 bytes/sector
*     140 sectors/track
*       5 tracks/cylinder
*     700 sectors/cylinder
*    3363 cylinders
*    3361 accessible cylinders
* Flags:
*   1: unmountable
*  10: read-only
* Unallocated space:
*       First     Sector    Last
*       Sector     Count    Sector 
*     1176000       700   1176699
*                          First     Sector    Last
* Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
       2      5    01          0   2352700   2352699
       3      0    00          0   1176000   1175999
       4      0    00    1176700   1176000   2352699

It makes sense to save the partition table somewhere, for example on a floppy or CD, or at least on another (in this example, the primary) hard drive. To write it to a file you can use the command:

prtvtoc /dev/rdsk/c0t5d0s2 > /home/system/external_scsii_vtoc_backup

Later, if things go wrong, you can restore the label using this saved information with the fmthard command.
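Restoring from that file is a one-liner with fmthard; it is shown as a comment because writing a label is destructive:

```shell
# Write the saved VTOC back to the raw device:
#   fmthard -s /home/system/external_scsii_vtoc_backup /dev/rdsk/c0t5d0s2
```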

Formatting (creating) new filesystems:
newfs and fsck

A new filesystem (a format in the DOS sense) can be created on the disk using the newfs command. Each partition (slice) must be formatted separately.

# newfs /dev/rdsk/c0t5d0s3
newfs: construct a new file system /dev/rdsk/c0t5d0s3: (y/n)? y
/dev/rdsk/c0t5d0s3:     1176000 sectors in 1680 cylinders of 5 tracks, 140 sectors
        574.2MB in 105 cyl groups (16 c/g, 5.47MB/g, 2624 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
 32, 11376, 22720, 34064, 45408, 56752, 68096, 79440, 89632, 100976, 112320,
 123664, 135008, 146352, 157696, 169040, 179232, 190576, 201920, 213264,
 224608, 235952, 247296, 258640, 268832, 280176, 291520, 302864, 314208,
 325552, 336896, 348240, 358432, 369776, 381120, 392464, 403808, 415152,
 426496, 437840, 448032, 459376, 470720, 482064, 493408, 504752, 516096,
 527440, 537632, 548976, 560320, 571664, 583008, 594352, 605696, 617040,
 627232, 638576, 649920, 661264, 672608, 683952, 695296, 706640, 716832,
 728176, 739520, 750864, 762208, 773552, 784896, 796240, 806432, 817776,
 829120, 840464, 851808, 863152, 874496, 885840, 896032, 907376, 918720,
 930064, 941408, 952752, 964096, 975440, 985632, 996976, 1008320, 1019664,
 1031008, 1042352, 1053696, 1065040, 1075232, 1086576, 1097920, 1109264,
 1120608, 1131952, 1143296, 1154640, 1164832,

You can check the results with the fsck command:

# fsck -y /dev/rdsk/c0t5d0s3

** /dev/rdsk/c0t5d0s3
** Last Mounted on 
** Phase 1 - Check Blocks and Sizes
** Phase 2 - Check Pathnames
** Phase 3 - Check Connectivity
** Phase 4 - Check Reference Counts
** Phase 5 - Check Cyl groups
2 files, 9 used, 551853 free (13 frags, 68980 blocks, 0.0% fragmentation)

BigAdmin Feature Article: The "format" Utility in the Solaris Operating System

Greg (shoe) Schuweiler, November, 2004

To many, a hard disk is a "black box" and is thought of as a small device that somehow stores data, programs and/or an operating system. Nothing is wrong with this approach, of course, as long as that is all you care about. But as a system administrator, one of your primary concerns should be the protection of the data. Another concern way up there in the high-priority range should be the efficient movement of data between memory and the physical disk. In this article I would like to investigate one of the basic utilities that is available to us in the Solaris OS: format.

The format utility that is used to manage slices on a disk was originally written to administer SCSI-connected disks, so your mileage may vary with disks connected via IDE. If you have the proper drivers installed and configured correctly, you should be able to administer Fibre Channel-attached drives or LUNs presented by RAID engines as well.

Along with format, I cover some other commands, of two types: non-destructive and destructive. I always put the destructive commands in bold and italics and precede them with the word Warning. For example: Running Warning: cd / ; rm -r * as root will really destroy your system disk.

Another word of warning: The non-destructive commands should be just that, but it is up to you to decide whether to run them on your system or not. The destructive commands will destroy data on a disk; run these commands only if you are sure you know what you are doing.

Here are the commands I use throughout this article: format, prtvtoc, dd, od, cat, and fmthard. To start out I would like to define some of the disk terminology that I use here.

Disk Label:
This special area contains information about the disk, such as the geometry and slices. It is also referred to as the volume table of contents (VTOC). The disk label is the first 512 bytes on a disk. Most disks now come from the factory already labeled.

Defect List:
This is a list of areas on the disk that cannot be written to or read from. There is always a manufacturer's defect list and, as we shall see, a 'grown' list, which is a list of defects that grows as time goes by.

Partition Table:
Part of the disk VTOC is the partition table, which contains the slices on the disk (also known as the partitions), the boundaries of the slices, and the sizes of the slices. A slice is composed of a contiguous range of blocks on a disk. There are eight slices on a disk [0-7] unless you label the disk with an Extensible Firmware Interface (EFI) label -- a little more on that later. In most cases we don't use slice 2, as it represents the whole disk.

As you read through this article, keep in mind the following:

  • Each disk slice can only hold one file system.
  • A file system cannot span multiple slices (assuming no logical volume manager is being used).
  • After a file system is created, its size cannot be changed without repartitioning the entire disk.
  • Slices cannot span multiple disks. (In the case of a RAID engine taking n disks and presenting them to the system as one disk, the format utility sees only one disk.)

I hope you have a system with an attached disk that you can play with, as I would like this to be an interactive article. First pick the disk you are going to use with format:

r_gps@holstein: format
Searching for disks...done

       0. c0t0d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>  boot
       1. c0t1d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>  home
       2. c2t1d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>  trashme
Specify disk (enter its number):

You can type 'quit' to exit or back up one menu, or press <Ctrl-D>, which will exit the format utility completely. In my case I am going to use "AVAILABLE DISK SELECTION 2". As mentioned before, if you are purchasing disks from Sun or a third-party vendor selling Sun equipment, the disks you purchased should already have Sun labels on them. But if you are in a heterogeneous environment and moving SCSI disks around, then obviously a 36-Gbyte drive from an HP, AIX, or Windows server has a disk label that is unrecognizable (at least to the Solaris OS), and you will need to add a label. So to start with the basics and have a little fun, I am going to destroy the disk label on the disk I am working with:

 Warning: echo "adios data" | dd of=/dev/dsk/c2t1d0s2 bs=1 count=512 

So now the format command will give us the following:

r_gps@holstein: format
Searching for disks...done

c2t1d0: configured with capacity of 33.92GB

       0. c0t0d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>  boot
       1. c0t1d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>  home
       2. c2t1d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>
Specify disk (enter its number): 2
selecting c2t1d0
[disk formatted]
Disk not labeled. Label it now?

Here you would type 'y' if you wish to label the disk, which is nice since that makes it usable for the Solaris OS. Then exit from the format command and use prtvtoc to look at information on the disk geometry and partitioning:

r_gps@holstein: prtvtoc /dev/dsk/c2t1d0s2                            
* /dev/dsk/c2t1d0s2 partition map
* Dimensions:
*       512 bytes/sector
*       107 sectors/track
*         27 tracks/cylinder
*     2889 sectors/cylinder
*   24622 cylinders
*   24620 accessible cylinders
* Flags:
*   1: unmountable
*  10: read-only
*                          First        Sector       Last
* Partition  Tag  Flags    Sector        Count       Sector     Mount Directory
   0        2    00            0      262899       262898
   1        3    01       262899      262899       525797
   2        5    01            0    71127180     71127179
   6        4    00       525798    70601382     71127179
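The three sector columns in the table above are self-consistent: in every row, First Sector + Sector Count - 1 = Last Sector. Checking the backup row (partition 2):

```shell
# Partition 2: starts at sector 0 and is 71127180 sectors long.
first=0
count=71127180
last=$((first + count - 1))
echo "$last"    # 71127179, matching the Last Sector column
```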

You can learn some of the same information from the format command, but from this output we can see that four partitions exist. The Tag column shows us that we have root (2), swap (3), backup (5), and usr (4) partitions. The Flags column shows us that we have two partitions that are mountable, with read and write (00), and two partitions that are not mountable (01). For each partition, the First Sector column shows where the partition starts, the Sector Count shows the number of sectors, and the Last Sector shows the location of the last sector in the partition. If we had a file system mounted, the Mount Directory would show us where the partition was mounted. Before we get too far into breaking and fixing things, let's see what the format command shows us on the newly labeled disk. Run the following command:

r_gps@holstein: format /dev/rdsk/c2t1d0s2

At the format prompt, type partition, and then at the partition menu, type print. This will give you the following table:

Current partition table (original):
Total disk cylinders available: 24620 + 2 (reserved cylinders)

Part       Tag    Flag     Cylinders         Size            Blocks
  0       root     wm          0 -    90    128.37MB    (91/0/0)       262899
  1       swap     wu         91 -   181    128.37MB    (91/0/0)       262899
  2     backup     wu          0 - 24619     33.92GB    (24620/0/0)  71127180
  3 unassigned     wm          0                   0    (0/0/0)             0
  4 unassigned     wm          0                   0    (0/0/0)             0
  5 unassigned     wm          0                   0    (0/0/0)             0
  6        usr     wm        182 - 24619     33.67GB    (24438/0/0)  70601382
  7 unassigned     wm          0                   0    (0/0/0)             0


We have some of the same information as with the prtvtoc command. It is in a little different format, and we see the unused partitions. You might have noticed that I am telling format which disk I will be using. I am doing this to avoid accidentally causing problems with one of the other disks on the system I am experimenting with. One more way of looking at the disk label is to dump it out using the dd command:

r_gps@holstein: dd if=/dev/dsk/c2t1d0s2 of=wart.bin bs=512 count=1  
1+0 records in
1+0 records out

This gives us a binary file that we can examine with the od command.

r_gps@holstein: od -x wart.bin
0000000 5355 4e33 3647 2063 796c 2032 3436 3230
0000020 2061 6c74 2032 2068 6420 3237 2073 6563
0000040 2031 3037 0000 0000 0000 0000 0000 0000
0000060 0000 0000 0000 0000 0000 0000 0000 0000
0000200 0000 0001 0000 0000 0000 0000 0008 0002
0000220 0000 0003 0001 0005 0001 0000 0000 0000
0000240 0000 0000 0000 0004 0000 0000 0000 0000
0000260 0000 0000 0000 0000 0000 0000 600d deee
0000300 0000 0000 0000 0000 0000 0000 0000 0000
0000640 0000 0000 2729 602e 0000 0000 0000 0001
0000660 602c 0002 001b 006b 0000 0000 0000 0000
0000700 0004 02f3 0000 005b 0004 02f3 0000 0000
0000720 043d 508c 0000 0000 0000 0000 0000 0000
0000740 0000 0000 0000 0000 0000 0000 0000 00b6
0000760 0435 4aa6 0000 0000 0000 0000 dabe 4297

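Two fixed byte patterns in this sector identify it as a Sun disk label (both are called out in the discussion that follows). This sketch extracts them with od; the label here is a fabricated 512-byte file so the example is self-contained, but on a real system you would run the same od commands against the wart.bin dump made above:

```shell
#!/bin/sh
# Extract VTOC_SANE (4 bytes at offset 0xbc = 188) and DKL_MAGIC
# (2 bytes at offset 0x1fc = 508) from a label sector. A dummy label
# is fabricated here so the example can run anywhere.
label=$(mktemp)
dd if=/dev/zero of="$label" bs=512 count=1 2>/dev/null
# Plant the two signatures (octal escapes for 0x600ddeee and 0xdabe).
printf '\140\015\336\356' | dd of="$label" bs=1 seek=188 conv=notrunc 2>/dev/null
printf '\332\276'         | dd of="$label" bs=1 seek=508 conv=notrunc 2>/dev/null

# -j skips to the offset, -N limits the byte count.
sane=$(od -An -tx1 -j 188 -N 4 "$label" | tr -d ' \n')
magic=$(od -An -tx1 -j 508 -N 2 "$label" | tr -d ' \n')
echo "VTOC_SANE=$sane DKL_MAGIC=$magic"
rm -f "$label"
```

If either value is missing from a real sector 0, what you are looking at is not a valid SMI label.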
There is a lot of information in the octal dump, and a very good Sun document covers it -- I do not wish to duplicate that information here. Search for Document ID 74087 at SunSolve. A few things to note: od always collapses repeating lines (marked with a *), the VTOC_SANE value is always 0x600ddeee at offset 0xbc, and the DKL_MAGIC is always 0xdabe at offset 0x1fc, just before the checksum.

So now that we have our disk labeled, what can we (and the format command) do with it? First look at the menu listing below. With the exception of the volname option, I will cover the non-destructive format menu options first.

r_gps@holstein: format /dev/rdsk/c2t1d0s2
selecting /dev/rdsk/c2t1d0s2
[disk formatted]

        disk       	- select a disk
        type       	- select (define) a disk type
        partition  	- select (define) a partition table
        current    	- describe the current disk
        format     	- format and analyze the disk
        repair     	- repair a defective sector
        label      	- write label to the disk
        analyze    	- surface analysis
        defect     	- defect list management
        backup     	- search for backup labels
        verify     	- read and display labels
        save       	- save new disk/partition definitions
        inquiry    	- show vendor, product and revision
        volname    	- set 8-character volume name
        !<cmd>     	- execute <cmd>, then return

I always like to give each disk a volume name, as it makes the system more personable. It is also a great help if you have more than one system looking at the same disk drives, which can happen in a highly available cluster. I've seen 120-plus disks presented by a pair of RAID engines, all visible to 15 systems in a VERITAS Cluster. Giving each disk a volume name helps identify disks that have already been used; disks without a volume name are unused. There is one thing to note when using the volname menu option, shown in the following example:

format> volname
Enter 8-character volume name (remember quotes)[""]:"pigsnot"
Ready to label disk, continue? y


The volname option writes the new name out to the disk label. I have done this on disks with mounted file systems and with valid data on them. The first time I did this, it was by accident. The world didn't come crashing down, and gravity was still a valid law. So I did some testing on disk drives, changing the volume name on disks with mounted file systems and disks with unmounted file systems. Although the label changed, I have never lost any data. Of course, the standard disclaimers apply: "Your mileage may vary -- proceed at your own risk -- not responsible for typos, and so on and so forth."

The disk option lets you change disks within the format utility, but because I selected a single disk on the command line, it only shows me that disk.

format> disk

       0. /dev/rdsk/c2t1d0s2 <SUN36G cyl 24620 alt 2 hd 27 sec 107> pigsnot
Specify disk (enter its number)[0]:

The current option gives us the current disk selected after starting the format command. I have been working with disk 2 for this article, and the current option shows the following:

Current Disk = c2t1d0: pigsnot
<SUN36G cyl 24620 alt 2 hd 27 sec 107>

The current option gives us the physical location of the disk in the last line. You need to precede the physical location with /devices, and there is a letter that represents the partition number.

r_gps@holstein: ls /devices/pci@1f,4000/pci@5/SUNW,isptwo@4/sd@1,0*


The physical device name with the word "raw" is the character device, and the other is the block device. Letter 'a' is partition 0, letter 'b' is partition 1, and so forth.
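That letter-to-slice mapping is regular enough to put in a two-line helper; the function name here is my own invention, and the device paths in the comments are just the pattern from the ls example above:

```shell
#!/bin/sh
# Map a slice number (0-7) to the letter used in the /devices entry:
# slice 0 -> a, slice 1 -> b, ... slice 7 -> h.
slice_letter() {
    printf 'abcdefgh' | cut -c$(( $1 + 1 ))
}

slice_letter 0    # prints a  (sd@1,0:a is slice 0)
slice_letter 6    # prints g  (sd@1,0:g is slice 6)
```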

The defect option opens the Defect Menu, which can be used to see how many defects the disk had when it left the factory; use the primary option to see this. The three disks I have on my desktop have primary defect counts ranging from 72 to 2922. But we are more interested in the grown option. The grown defect list accumulates over the life of the disk, and our concern is the rate of growth: if defects are appearing at a rapid rate, you may be looking at a pending disk failure.

The print option gives you a list of the defects and their locations on the disk. You can also dump (save) the disk's defect list to a file. I have done this over short periods of time on a suspected disk.

The verify option gives a lot of information that we have seen before with the prtvtoc command and from displaying the partitions with the format command.

format> verify

Primary label contents:

Volume name 	= < pigsnot>
ASCII name  	= <SUN36G cyl 24620 alt 2 hd 27 sec 107>
pcyl        	= 24622
ncyl        	= 24620
acyl        	=    2
nhead       	=   27
nsect       	=  107
Part        Tag    Flag     Cylinders        Size            Blocks
  0        root     wm        0 -    90    128.37MB    (91/0/0)        262899
  1        swap     wu       91 -   181    128.37MB    (91/0/0)        262899
  2      backup     wu        0 - 24619     33.92GB    (24620/0/0)   71127180
  3  unassigned     wm        0              0         (0/0/0)              0
  4  unassigned     wm        0              0         (0/0/0)              0
  5  unassigned     wm        0              0         (0/0/0)              0
  6         usr     wm      182 - 24619     33.67GB    (24438/0/0)   70601382
  7  unassigned     wm        0              0         (0/0/0)              0

The save option will write out a format.dat file (or whatever name you give it). This dat file contains information that the format command can use for drive configuration. (See man -s4 format.dat for more information on this file.) If you cat the newly created format.dat file in one window and use the verify command in another, you will see much of the same information, but in a different -- to some, more readable -- format. The format.dat file created with the save option also reveals one more bit of information: the rpm of the disk.

# New disk/partition type  saved on Fri Aug  6 06:34:05 2004
disk_type = "SUN36G" \
         : ctlr = SCSI : ncyl = 24620 : acyl = 2 : pcyl = 24622 \
         : nhead = 27 : nsect = 107 : rpm = 10025

And finally, on our list of options used for gathering information on the disk, we have the inquiry option. It gives pretty basic information, as noted below.

format> inq
Vendor:   FUJITSU 
Product:  MAN3367M SUN36G 
Revision: 1502

One thing to take note of is the revision level. Firmware updates do come out for disk drives, and this option lets you compare the revision levels on your disks with what is available from the disk vendor. I have had vendors update drive firmware on the disks in their large RAID arrays, but they do it on the fly. I updated firmware on SCSI disks once or twice a long time ago. I don't bother anymore -- mainly because of the number of disks in today's environments and the amount of downtime that would be required. (And that old saying, "If it ain't broke, don't fix it," sometimes makes sense.) I haven't figured out how to upgrade without the downtime. Yet.

One of the nice things about the format command is that you can feed it a command file. For example, we could use a command file (here named c2t1d0.cmd) containing the following to dump the defect list out to a file:

dump /disks/c2t1d0-defect.dat

r_gps@holstein: format -f c2t1d0.cmd /dev/rdsk/c2t1d0s2

But because of the destructive power of the format command, we cannot pass it a command file if the disk has partitions mounted. If you would find this capability useful, I suggest using Perl and Expect to get the same information.

Warning: I have reproduced the format menu below. The remaining options are the ones that can destroy data on a disk. Remember to proceed with caution, as the data you destroy is your own.

r_gps@holstein: format /dev/rdsk/c2t1d0s2
selecting /dev/rdsk/c2t1d0s2
[disk formatted]

        disk       	- select a disk
        type       	- select (define) a disk type
        partition  	- select (define) a partition table
        current    	- describe the current disk
        format     	- format and analyze the disk
        repair     	- repair a defective sector
        label      	- write label to the disk
        analyze    	- surface analysis
        defect     	- defect list management
        backup     	- search for backup labels
        verify     	- read and display labels
        save       	- save new disk/partition definitions
        inquiry    	- show vendor, product and revision
        volname    	- set 8-character volume name
        !<cmd>     	- execute <cmd>, then return

The type option is seldom used nowadays. It was used a lot when (mainly non-SCSI) disks didn't carry information about themselves on board. Older system admins remember entering items like the number of cylinders, alternate cylinders, physical cylinders, number of heads, physical number of heads, number of data sectors per track, and a host of others. For many of these, we took the default because sometimes we just couldn't find the information. Disk vendors -- far more numerous back then -- were very protective of their proprietary information, so sometimes we experimented with the values until we got them correct.

You can play with this option, but before doing so you should save all the information about your disk that was collected above. Yes, this information should live in some sort of read-only area on the disk, but it is better to be safe than at work until the wee hours of the morning. You may need it to set the disk parameters back to the correct values, particularly if you are using an older disk as you go through this article.

The drives for which we used to have to do this were called IPI and SMD drives. Try a search on Google for 'ipi smd disk drive' for a bit of history, and to find out where you can still purchase these types of drives if you're interested.

The partition option is probably the most used option within the format utility. Selecting it starts the Partition Menu, from which we can modify the partitions on a disk. Keep in mind that the second partition (partition number 2), called the backup partition, is the whole disk, and we do not want to modify it. Rare instances exist in which you create a file system on the backup partition and mount only that -- usually with databases -- but be aware that this is not a good idea. If you are using software that ties itself to a partition on a disk, you may have problems. I have worked with one application that did this, and it was a nightmare for maintenance, upgrades, and so on. If vendors that require their product to use slice 2 are not careful, the software can trash the VTOC.

Here is an area in which I strongly disagree with Sun. Lately, Sun and the Sun Systems Engineers (SEs) that I know have been recommending one root partition and one other partition for everything else on the system disk. Balderdash! Each slice on a disk is seen as a separate disk by the OS, so before we slice up our test disk, let's look at why I disagree with Sun on this point. You can disagree with me if you like -- better yet, write your own article!

What if / (root) fills up because an application goes haywire (in turn filling /var/tmp)? This won't hurt the OS, and it might not even hurt the offending application. But then again, it might stop everything from doing any further processing until the offending application is corrected. So we need a partition to build a file system and mount /. Sun and I agree here: one partition. But we need to get that /var on a separate file system, too. So we need another partition for /var. Two partitions. I also put /usr in a separate partition. The /usr file system should only contain executables and ASCII files. Three partitions. Since the stuff I put in /opt is not needed to run the system, I make this a separate file system, too. Four partitions. I also create a file system for /tmp, which should only contain system-generated temporary files. Five partitions. This is our system disk so we need a swap partition. That's six. That leaves me one partition free for whatever. If I am locking the system down for security reasons, I make all file systems read-only except /tmp and /var/tmp.

The operating system sees each partition as a separate file system. That means it creates cache and buffers for each of these file systems. The extra cache and buffers will spread the I/O load out a little bit. Yes, you are limited by backup, interconnect speed, and disk controller. But in moments of heavy I/O, you are less likely to run into problems.

OK, now let me step down from my soap box, and let's get on with the Partition Menu. This opens up another menu that looks like the following:

        0	- change `0' partition
        1	- change `1' partition
        2	- change `2' partition
        3	- change `3' partition
        4	- change `4' partition
        5	- change `5' partition
        6	- change `6' partition
        7	- change `7' partition
        select	- select a predefined table
        modify	- modify a predefined partition table
        name	- name the current table
        print	- display the current table
        label	- write partition map and label to the disk
        !<cmd>	- execute <cmd>, then return

The easiest way I have found to work with partitioning is this: once I am in the Partition Menu, I just type 'p' for the print option, which shows what we will be starting with. This gives us the following output, which we've also seen above.

Current partition table (original):
Total disk cylinders available: 24620 + 2 (reserved cylinders)

Part        Tag    Flag     Cylinders        Size            Blocks
  0        root     wm        0 -    90    128.37MB    (91/0/0)        262899
  1        swap     wu       91 -   181    128.37MB    (91/0/0)        262899
  2      backup     wu        0 - 24619     33.92GB    (24620/0/0)   71127180
  3  unassigned     wm        0              0         (0/0/0)              0
  4  unassigned     wm        0              0         (0/0/0)              0
  5  unassigned     wm        0              0         (0/0/0)              0
  6         usr     wm      182 - 24619     33.67GB    (24438/0/0)   70601382
  7  unassigned     wm        0              0         (0/0/0)              0

This is the default partition table and we need to change it, recalling that we definitely do not want overlapping partitions. The safest way to modify the partitions is with the modify option. This involves something called the Free Hog Slice, which is a temporary partition that "automagically" expands and shrinks to accommodate the partitioning options. The Free Hog Slice only exists when you run the format utility.

When you enter modify, you have the option of selecting a partition base, and you can modify the current partition or the All Free Hog partition.

partition> modify
Select partitioning base:
        0. Current partition table (original)
        1. All Free Hog
Choose base (enter number) [0]?

By default you are going to modify the current partition, and this is just fine. There really isn't much of a difference when you're done. When you select the Current partition table, the format utility first displays the current partitioning. If you select All Free Hog, you will see that the only partition that has allocated space is the backup partition. When using the modify option, you will have the option of modifying all partitions except the backup partition. One thing I dislike about the modify option is that I cannot give the partitions I create a tag or set the flag. I can do this afterward by selecting each partition individually, but that seems redundant to me.

If I choose not to use the modify option, I can change each partition individually. Since I am working with a 36-Gbyte drive, I might as well make the root partition a little bigger. I start by selecting the partition I wish to modify, setting its tag and permission flags, then giving it a starting cylinder, and finally a size.

partition> 0
Part      Tag    Flag     Cylinders         Size      		Blocks
  0       root    wm       0 -    90      128.37MB    (91/0/0)      262899

Enter partition id tag[root]: 
Enter partition permission flags[wm]: 
Enter new starting cyl[0]: 
Enter partition size [262899b, 91c, 90e, 128.37mb, 0.13gb]: 256mb

You can also enter a '?' at the partition id tag and permission flag prompts to see the acceptable responses. If I type 'p' again, we can see that I have a problem to fix.

Part        Tag    Flag     Cylinders        Size            Blocks
  0        root     wm        0 -   181    256.74MB    (182/0/0)       525798
  1        swap     wu       91 -   181    128.37MB    (91/0/0)        262899


Partition 0 is overlapping partition 1. As you go through the creation of your partitions, ensure that you do not overlap any of the partitions you create. How many partitions you create and what sizes they are really depends on your site, your needs, experiences, and so on. When I finish, I have the following:


Current partition table (original):
Total disk cylinders available: 24620 + 2 (reserved cylinders)

Part        Tag    Flag     Cylinders        Size            Blocks
  0        root     wm        0 -   181    256.74MB    (182/0/0)       525798
  1        swap     wu      182 -   272    128.37MB    (91/0/0)        262899
  2      backup     wu        0 - 24619     33.92GB    (24620/0/0)   71127180
  3  unassigned     wm     3540 -  4265      1.00GB    (726/0/0)      2097414
  4  unassigned     wm        0              0         (0/0/0)              0
  5  unassigned     wm     4266 -  4991      1.00GB    (726/0/0)      2097414
  6         usr     wm      273 -  2087      2.50GB    (1815/0/0)     5243535
  7         var     wm     2088 -  3539      2.00GB    (1452/0/0)     4194828

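Checking a layout like this for overlaps by eye gets tedious, but the cylinder ranges can be verified mechanically. Here is a sketch: an awk pass over the table above (slice 2 excluded, since it legitimately spans the whole disk) that reports any pair of slices whose ranges intersect:

```shell
#!/bin/sh
# Pairwise overlap check on the cylinder ranges of a partition table.
# Slice 2 (backup) is skipped because it covers the whole disk by design.
out=$(awk '
$1 ~ /^[0-7]$/ && $1 != 2 && $5 == "-" {
    start = $4 + 0; end = $6 + 0
    for (i in s)
        if (start <= e[i] && end >= s[i])
            printf "slices %s and %s overlap\n", i, $1
    s[$1] = start; e[$1] = end
}
END { print "check complete" }
' <<'EOF'
  0        root wm    0 -   181
  1        swap wu  182 -   272
  3  unassigned wm 3540 -  4265
  5  unassigned wm 4266 -  4991
  6         usr wm  273 -  2087
  7         var wm 2088 -  3539
EOF
)
echo "$out"    # prints only "check complete" when no slices overlap
```

Feed it the earlier table where root spanned 0-181 and swap 91-181, and it flags the pair immediately.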
We still need to write the partition table out. This is done by just typing in label and answering the continue question with a 'Y'. You are probably wondering where /home is. I rarely put user stuff on the system disk. It makes going from one OS level to another a little easier.

You shouldn't really need to use the format option if you have purchased your disks from Sun. If you purchase used disks, or move disks from a different OS, or have a high defect list, the format option will prepare the disk for the Solaris OS. In the case of a high defect list, it might clean the disk up some. One reason you may need this option is if your system is connected to RAID engines that are not "smart" enough to present the created LUNs so the Solaris OS can understand them. If you have hardware like this around, you should probably look for newer equipment. Depending on the size, you may want to start the format of a disk before you go home or when you start work for the day. A 36-Gbyte drive on my Ultra Enterprise workstation (UE-60) has an estimated time of 332 minutes to completion.

The repair option can be used to repair a defective block on the disk -- maybe. I haven't used this in years. In fact, until I started writing this, I forgot about that option. With the low cost of disk drives nowadays, it is safer and probably cheaper to replace a disk than to spend hours fiddling with it, trying to recover a block here and a block there. Not as much fun, mind you, but probably safer.

I think I wrote more than enough about the label option earlier, so I am going to skip over it and go right to the analyze option. Like the defect option, the analyze option opens another menu. I rarely use the analyze option anymore except maybe when I'm playing around or writing an article. You can use the items in the Analyze Menu to, well, analyze a disk. You will notice that some selections state that they corrupt data.

The backup option searches for backup labels. First though, it checks for a primary label, and if it finds one, format will ask you if you want to continue. If you continue, this causes the backup option to replace the primary label with the backup one it finds. I have never had to do this, and I wonder how it might be used -- as far as I know, without the primary label you cannot get to the backup option in the format command. I suspect that Sun support may be able to tell you how to use the dd command to move a backup label to the primary label position on the disk, but that is just a guess on my part.

Well, wasn't that fun? Formatting and setting partitions on a couple of disks isn't a problem -- it requires a bit of typing, but it's bearable. But what if you've just hooked up to a large RAID array, and you need to configure 10 or even 200 disks identically? I have a good friend who needed to do this for a large imaging project. She used fmthard and a little scripting to take care of it in one fell swoop.

As always, man -s1m fmthard will give you more information, but in a nutshell, here is how to use fmthard. First you need an ASCII data file that tells fmthard how to set up the partitions on the target disks. You can create one either with your favorite editor or with the fmthard command itself. Looking at the text file, you will notice that you need to give the first sector and the sector count for each partition you want on each disk. As you recall from earlier, we definitely do not want partitions to overlap.

* Partition    Tag     		Flag   		First Sector  Sector Count
  0		2		00		4194828		1048707
  1		7		00		5243535		4145715
  2		5		00		0		71127180
  3		3		01		0		4194828
  4		0		00		9389250		2097414
  5		0		00		11486664	8389656
  6		4		00		19876320	6292242

So the best thing would be to use the format utility to partition up one of the disks for the configuration you need, and then use the fmthard command to create the data file to use for partitioning the rest of your disks.

fmthard -i -n "" /dev/rdsk/c2t1d0s2 > ./mypartition.dat

The preceding command writes the disk partitioning of c2t1d0 out to the file mypartition.dat. You then use this dat file as input to the fmthard command for all the disks that you wish to partition in the same manner.

 fmthard -s mypartition.dat -n "volumename" /dev/rdsk/cxtydzs2 

The downside to the fmthard command is that it only updates the VTOC, so if your disks do not have valid labels to start with, fmthard does not work. If you plan to set the volume name with fmthard, you could end up with 200 lines (or however many disks you're modifying) in your script. But it's still better than slowly setting the partitions on a bunch of disks through the format utility.
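To get a feel for what such a script looks like, here is a minimal sketch that generates one fmthard invocation per disk. The disk names and volume names are made up, and the echo keeps it a dry run -- drop the echo on a real Solaris box to actually write the VTOCs:

```shell
#!/bin/sh
# Dry-run generator: one fmthard command per target disk, each with
# its own volume name. fmthard itself exists only on Solaris, so this
# just prints the commands that would be run.
i=0
for disk in c2t2d0 c2t3d0 c2t4d0; do
    i=$((i + 1))
    echo "fmthard -s mypartition.dat -n img$i /dev/rdsk/${disk}s2"
done
```

In practice the disk list would come from something like `ls /dev/rdsk/*s2`, filtered down to the targets you actually mean to relabel.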

Now if you have actually read the manual page for format, you'll notice the following:

-e    Enable SCSI expert  menu.  Note  this  option  is  not
      recommended for casual use.

Well, I don't know about you, but I just had to play with this option. When you enter the format utility, you get two more lines in the menu:

scsi	- independent SCSI mode selects
cache	- enable, disable or query SCSI disk cache

You even get a neat warning paragraph when you enter the scsi option! In reality, if you are not SCSI-protocol-literate, you will probably want a good book that explains the SCSI protocol, particularly the mode selects that you can change under the format utility's scsi option. I would recommend The SCSI Bus and IDE Interface: Protocols, Applications, and Programming by Friedhelm Schmidt. I have played with this some, but I never put my tinkering into a production server. Sometimes things are best left unaltered.

One word of warning if you intend to play in the scsi option area: first start the format utility with the logging option (-l c2t1d0.log), and then go through each display in the format utility so that everything is saved to the log file. Also note that the format selection under the scsi option is not the same as the format option one menu level up.

Now the cache option gives us a menu for the disk read and write cache. Not all SCSI disks have cache, and of those that do, not all allow you to change their cache settings. This cache is a small amount of memory on the disk itself -- it has nothing to do with system memory. That means each disk may behave a little differently when you play with these settings. I have found that the read cache is usually turned on, which makes sense, as nothing is lost if power goes down during a read operation; the data should still be out on the disk. Likewise, I have always found the write cache turned off. This also makes sense, because if you lose power, you lose whatever is in the write cache. When I have turned it on and compared I/O loads using IOzone, the improvement in write operations ranged from very little to quite marked, varying from disk model to disk model and from vendor to vendor.

Since you are in the format utility with the -e option, you have one more item to look at. Type in label, and you get the following response:

format> label
[0] SMI Label
[1] EFI Label
Specify Label type[0]:

I believe that SMI stands for Sun Microsystems, Inc. This is the default, and it is the label type you get if you enter the format utility without the -e option. The SMI label gives you the standard configuration of eight partitions, with partition two being the backup partition.

If you select the EFI (Extensible Firmware Interface) label, you will get another partition as shown below.

ascii name  = <FUJITSU  MAN3367M SUN36G  1502 43d671f>
bytes/sector    =  512
sectors = 71132958
accessible sectors = 71132925
Part        Tag    Flag     First Sector        Size     Last Sector
  0        root     wm                34    128.35MB          262898
  1        swap     wu            262899    128.37MB          525797
  2  unassigned     wm                 0        0                  0
  3  unassigned     wm                 0        0                  0
  4  unassigned     wm                 0        0                  0
  5  unassigned     wm                 0        0                  0
  6         usr     wm            525798     33.66GB        71116540
  7  unassigned     wm                 0        0                  0
  8    reserved     wm          71116541      8.00MB        71132924


You can find a lot more information about EFI on the Extensible Firmware Interface page on the Intel web site.

So where does this leave us? I hope I have cleared up a few things about what you can do with the format utility -- and what it can do for you.

About the Author

Greg (shoe) Schuweiler has worked in the friendly Midwest (U.S.) for the last 20 years as a consultant, an embedded software designer, Oracle DBA, and a host of other strange titles. He has had the noble profession of UNIX SA for the past eight years. He can be reached at: [email protected].




Last modified: March 12, 2019