
tar -- Unix Tape Archiver




Tar is a very old zero-compression archiver (it dates back to Version 6 of AT&T UNIX, circa 1975). Yet it is not easy to replace it with any other archiver that has a zero-compression option (for example zip), because over time it acquired some unique capabilities and became the de facto standard zero-compression archiver.

Unlike most archivers, tar can be used as the head or tail of a pipeline. It is very convenient for moving directory hierarchies between servers while preserving all attributes, including symbolic and hard links.

An archive created by tar is usually called a tarball. A tarball historically was a magnetic tape, but now it is usually a disk file. The default device, /dev/rmt0, is seldom used today; the most common practice is to archive into a file that is often additionally compressed with gzip or, for text files, xz (option -J).

The tar command can specify a list of files or directories and can include name-substitution characters. The basic form is

tar keystring options -f tarball filenames...

The keystring is a string of characters starting with one function letter (c, r, t, u, or x) and zero or more function modifiers (letters or digits), depending on the function letter used. The tarball is specified after the option -f. Omitting -f is the most common mistake made by novice users of tar.

You can specify keystring values as options too, because earlier versions of tar did not have the idea of a keystring, and this form was preserved for compatibility:

tar options files_to_include 

You can perform several operations on a tarball that are standard for archivers; in this sense tar is just another member of the family.
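Both invocation styles can be seen side by side in this sketch (the directory and file names are made up for the demonstration):

```shell
# Scratch directory with a test file (hypothetical paths, demonstration only)
mkdir -p /tmp/tar_syntax_demo && cd /tmp/tar_syntax_demo
echo hello > a.txt

# Old "keystring" form: function letter and modifiers, no dash
tar cf old.tar a.txt

# Modern option form: same effect
tar -cf new.tar a.txt

# Both archives contain the same member
tar tf old.tar
tar -tf new.tar
```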

We will discuss the GNU tar implementation. There is an alternative implementation called star that many prefer, but GNU tar is the standard in commercial Linux distributions and is available for all other flavors of Unix, so we prefer it. GNU tar has several interesting additional features.

Tar is often used along with gzip and bzip2. Not all tar implementations are created equal. In the past Solaris tar understood ACLs before GNU tar did, but GNU tar later caught up and is now the best implementation.

The current version of tar as of August 2012 was 1.26 (dated 2011-03-13). The version used differs between Linux distributions.

While it is just a zero-compression archiver, tar has several capabilities that regular archivers usually do not have, and as such it is very convenient for many tasks such as backups and replication of a filesystem from one server to another.

File limits differ between OSes. Older Unixes often had a 2 GB file limit. Obviously, older tar programs also won't be able to handle files or archives larger than 2^31-1 bytes (about 2.1 GB). Try running tar --version. If the first line indicates you are using GNU tar, then any version newer than 1.12.64 is able to work with large files. You can also try the command:

strings `which tar` | grep 64

You should see some lines mentioning lseek64, creat64, fopen64. If so, your tar contains support for large files. The GNU tar docs also mention that the official POSIX tar spec limits files to 8 GB, but GNU tar will generate non-POSIX (and therefore possibly non-portable) archives with member sizes up to something like 2^88 bytes. Still, formally, tarballs that you want to use on any other POSIX computer are limited to 8 GB files.
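Both checks can be combined in a small sketch (the grep for "64" is a rough heuristic, and strings comes from binutils, which may be absent):

```shell
# Print the tar version line
ver_line=$(tar --version | head -n 1)
echo "$ver_line"

# Heuristic: look for 64-bit file I/O symbols in the tar binary
lfs="unknown"
if command -v strings >/dev/null 2>&1; then
  if strings "$(command -v tar)" | grep -q 64; then
    lfs="likely present"
  else
    lfs="possibly absent"
  fi
fi
echo "large file support: $lfs"
```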

Since the majority of tarballs are gzipped, the maximum file size may be limited by gzip. Newer versions of gzip (1.3.5 and above) support large files; before that the limit was 2 GB. To check the gzip version use

gzip --version

Unlike many other utilities, tar does not assume that the first argument is the tar archive to operate on. This creates problems for novices. For example, unlike most Unix and DOS utilities, listing a tar archive requires two options: t and f (the file to be listed).

tar tf myfiles.tar
tar tvf myfiles.tar

See also the Google Answers question on the UNIX tar size constraint.

The limit on the length of file names is around 256 characters, but can be as low as 100 on older OSes.


In old versions of tar, because of limitations on header block space, user identification numbers (UIDs) and group identification numbers (GIDs) larger than 65,535 are corrupted when restored.

Tar is one of the few backup programs that is pipeable.

Standard operations on tarballs


Main options:

  1. -A, --catenate, --concatenate append tar files to an archive
  2. -c, --create create a new archive
  3. -d, --diff, --compare find differences between archive and file system
  4. --delete delete from the archive (not on mag tapes!)
  5. -r, --append append files to the end of an archive
  6. -t, --list list the contents of an archive
  7. --test-label test the archive volume label and exit
  8. -u, --update only append files newer than copy in archive
  9. -x, --extract, --get extract files from an archive
  10. -C, --directory=DIR change to directory DIR
  11. -f, --file=ARCHIVE use archive file or device ARCHIVE
  12. -j, --bzip2 filter the archive through bzip2
  13. -J, --xz filter the archive through xz
  14. -p, --preserve-permissions extract information about file permissions (default for superuser)
  15. -v, --verbose verbosely list files processed
  16. -z, --gzip filter the archive through gzip

Creation of the tar archives

You need to use the option c (create), which implies that writing begins at the beginning of the tarball instead of at the end. The tar command for creating an archive takes a string of options as well as a list of the files to be included, and names the device to be written to. The default device, /dev/rmt0, is seldom used today; most commonly you archive into a file that is often additionally compressed with gzip.

tar cvf myarchive.tar *

The cvf argument string specifies that you are writing files (c for create), providing feedback to the user (v for verbose), and specifying the device rather than using the default (f for file). You can pipe the archive through gzip on the fly by specifying the z option:

tar cvzf myarchive.tgz  *

The convention is to use the extension .tar for tar archives and .tgz for tarred and gzipped archives. Please note that tgz archives are monolithic bricks designed just to store files, while with a regular tarball you can perform several operations. In the past, instead of gzip, the standard compress utility was used; in this case archives have the suffix .tar.Z.

To create an archive, you use the command

tar -cvf archivename.tar  directory_or_list_of_files_or_regular_expression

Always use the option -v to see what is happening. To put files in an archive, you need at least read permission on the files. For example, to archive your home directory and put the resulting tarball in /tmp you can use the command

tar -cvf /tmp/joeuser.tar /home/joeuser

Notice the options that are used; their order is important: the last one should be the option f, which specifies where to put the archive we are creating.
Similarly you can create an archive of /etc:

tar -cvf  /root/etc181006.tar /etc

Originally, tar did not use the dash (-) in front of its options. Modern tar implementations use that dash, as do all other Linux programs, but they still allow the old usage without a dash for backward compatibility.

While managing archives with tar, it is also possible to add a file to an existing archive, or to update an archive. To add files to an archive, you use the -r option. For example:

tar -rvf /root/etc181006.tar /root/*cfg /root/.bash*

To update a currently existing archive file, you can use the -u option: 

tar -uvf /root/etc181006.tar /root/*cfg /root/.bash*

This writes newer versions of the specified files to the end of the archive and updates the archive directory with the new entries. Older copies are not deleted, so the archive grows in size.

Archives of /etc and home directories are usually compressed with gzip, achieving a compression ratio of roughly 3:1 (the compressed file is about one third of the size of the directory).

# mkdir /root/Archive 
# tar -cv /etc | gzip > /root/Archive/etc`date +"%y%m%d"`.tgz
# du -sh /etc
30M     /etc
# ll /root/Archive
total 9568
-rw-r--r--. 1 root root 9795626 Oct  7 17:47 etc181007.tgz

Binary files can also be compressed, to at least 50% of the original size. So you can save a tremendous amount of space used for storing those archives (and typically archives are stored for a couple of years in enterprise environments).
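The dated-archive idiom used above can be written as a self-contained sketch (the source and destination paths here are made up for the demonstration):

```shell
# Hypothetical source tree and archive directory
src=/tmp/demo_etc
dest=/tmp/demo_archive
mkdir -p "$src" "$dest"
echo 'setting=1' > "$src/app.conf"

# Create a gzip-compressed tarball whose name carries today's date
tar -czf "$dest/etc$(date +%y%m%d).tgz" -C /tmp demo_etc

ls -l "$dest"
```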

You can compress a tarball after it was created, or on the fly.

Compression after creation is typically done via a pipe. This makes sense only for parallel versions of the compressors, as the regular versions can be invoked via tar options:

tar -cv  /etc |  bzip2 > /root/etc181006.tar.bgz

On the /etc directory bzip2 provides ~12% better compression than gzip, while xz provides ~30% better compression, so you save the space of roughly one archive every ten days if you archive daily from cron. For very large archives (say over 500 MB) it makes sense to use only parallel versions of the compressors, pigz and pbzip2, especially the latter, as it provides a better compression ratio while maintaining adequate (although slightly slower than pigz) speed.
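A sketch of the pipe-based approach; since pigz may not be installed, this falls back to plain gzip (the output format is identical, so the result is readable either way):

```shell
mkdir -p /tmp/pipe_demo && echo data > /tmp/pipe_demo/f.txt

# Use pigz if available, otherwise fall back to gzip (same output format)
comp=gzip
command -v pigz >/dev/null 2>&1 && comp=pigz

tar -c -C /tmp pipe_demo | "$comp" > /tmp/pipe_demo.tgz
ls -l /tmp/pipe_demo.tgz
```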

On-the-fly compression is specified via one of the tar options: -z for gzip, -j for bzip2, and -J for xz. For example

tar -cvzf  /root/etc181006.tgz /etc

To decompress files that have been compressed with gzip or bzip2, you can use the gunzip and bunzip2 utilities. Decompression speed is approximately the same for all three compression programs we discussed.

Here are the compressed sizes of the default content of the /etc directory in RHEL 7 using these three compression programs:

[root@test01 Archive]# ll
total 24600
-rw-r--r--. 1 root root 8580949 Oct  7 17:56 etc181006.tbz
-rw-r--r--. 1 root root 9795626 Oct  7 17:47 etc181006.tgz
-rw-r--r--. 1 root root 6808940 Oct  7 18:04 etc181006.txz

For large directories it does not make sense to use the built-in options, and you are better off compressing the tarball via a pipe using parallel versions of the compressors. For xz it makes sense to do this in all cases, as the non-parallel version is excruciatingly slow even on the /etc directory.

There is no need to specify these options while extracting. The tar utility recognizes the type of the compressed contents and decompresses it automatically.
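For example, an archive created with -z can be listed and extracted without any compression flag (made-up paths):

```shell
mkdir -p /tmp/auto_demo && echo hi > /tmp/auto_demo/note.txt
tar -czf /tmp/auto.tgz -C /tmp auto_demo

# No -z needed: tar detects the gzip compression itself
tar -tf /tmp/auto.tgz
mkdir -p /tmp/auto_out
tar -xf /tmp/auto.tgz -C /tmp/auto_out
```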

Listing of the archives

To list the contents of a tar file without extracting, use the t option as shown below. Including the v option as well results in a long listing.

You also need to specify the option -f to list the files in the tarball. The most common problem that novices experience is forgetting to use the option -f to specify the file.

TIP: As most people forget to specify the option f, it is better to create an alias, for example

alias tls='tar -tf'

and put it in your .bash_profile file

This is one of tar's idiosyncrasies and a source of a lot of grief for system administrators, who type tar -t myfiles.tar and receive nothing, as tar tries to read standard input.

tar tf myfiles.tar
tar tvf myfiles.tar

Viewing individual files from the tarball

To extract a file from an archive to standard output you can use -O or --to-stdout, for example

tar -xOf myfiles.tar hosts | more
However, --to-command may be more convenient for use with multiple files.
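A sketch of --to-command: each member's contents are piped to the given command instead of being written to disk (GNU tar also exports the member name in the TAR_FILENAME environment variable; file names here are made up):

```shell
mkdir -p /tmp/tocmd_demo
printf 'alpha\n' > /tmp/tocmd_demo/one.txt
printf 'beta\n'  > /tmp/tocmd_demo/two.txt
tar -cf /tmp/tocmd.tar -C /tmp/tocmd_demo one.txt two.txt

# Count the bytes of each member without extracting anything to disk
out=$(tar -xf /tmp/tocmd.tar --to-command='wc -c')
echo "$out"
```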

Extraction of the tar archives

Archives created with tar include the file ownership, file permissions, and access and creation dates of the files. The p (preserve) option restores file permissions to their original state. This is usually good, since you'll ordinarily want to preserve permissions as well as dates, so that executables will execute and you can determine how old files are. In some situations, you might not want the original owners restored, since the original owners may be people at some other organization altogether. The tar command sets ownership according to the numeric UID of the original owner. If someone in your local passwd file or network information service has the same UID, that person will become the owner; otherwise the owner will display numerically. Obviously, ownership can be altered later.

tar xvpf myarchive.tar

Extract files from a shell prompt by typing tar xvzf file.tar.gz from the directory where you saved the file.

I would like to remind you again that you can also extract individual file to standard output (via option -O) and redirect it, for example

tar -xvOf /root/etc_baseline110628_0900.tar  hosts > /etc/hosts110628

Comparing the content of a tarball with a directory

One of the rarely mentioned capabilities of tar is its ability to serve as a diff between a directory and a tarball.

The command is (assuming the root of the tree was the current directory when the tarball was created):

cd /etc && tar -df /tmp/etc20141020.tar 2>/dev/null

Unfortunately, if some files are missing from the directory, this comparison does not handle them well: in some cases tar just lists the missing files but not the differing files. This looks like a bug in the current version.
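One way to also catch files that exist on disk but are absent from the archive (which tar -d ignores by design) is to compare sorted listings. A sketch with made-up paths:

```shell
rm -rf /tmp/cmp_demo
mkdir -p /tmp/cmp_demo && cd /tmp/cmp_demo
echo a > a.txt
tar -cf /tmp/cmp.tar .
echo b > b.txt                       # created after the archive

# Normalize both listings: strip leading ./, drop empty lines, sort
tar -tf /tmp/cmp.tar | sed 's|^\./||' | grep -v '^$' | sort > /tmp/in_tar.lst
find . -type f | sed 's|^\./||' | sort > /tmp/on_disk.lst

# Lines only in the second file: on disk but not in the archive
comm -13 /tmp/in_tar.lst /tmp/on_disk.lst
```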

From tar manual

4.2.6 Comparing Archive Members with the File System

The `--compare'  (`-d'), or `--diff'  operation compares specified archive members against files with the same names, and then reports differences in file size, mode, owner, modification date and contents. You should only specify archive member names, not file names. If you do not name any members, then tar will compare the entire archive. If a file is represented in the archive but does not exist in the file system, tar reports a difference.

You have to specify the record size of the archive when modifying an archive with a non-default record size.

tar ignores files in the file system that do not have corresponding members in the archive.

The following example compares the archive members `rock', `blues' and `funk' in the archive `bluesrock.tar' with files of the same name in the file system. (Note that there is no file, `funk'; tar will report an error message.)

$ tar --compare --file=bluesrock.tar rock blues funk
tar: funk not found in archive

The spirit behind the `--compare'  (`--diff', `-d') option is to check whether the archive represents the current state of files on disk, more than validating the integrity of the archive media. For this latter goal, see Verifying Data as It is Stored.



Excluding files from archive

This is a somewhat tricky feature with a lot of gotchas. They are discussed in the "Old News" section below.

Tar also has a very useful but rarely used and poorly understood feature called tag files: any directory tagged with such a file is automatically excluded.

Exclude long option

The simplest way to avoid operating on files whose names match a particular pattern is to use `--exclude=pattern', which causes tar to ignore files that match the pattern.

A leading forward slash is OK if you use it with this option, for example

--exclude=/proc --exclude=/sys

The `--exclude=pattern'  option prevents any file or member whose name matches the shell wildcard (pattern) from being operated on. For example, to create an archive with all the contents of the directory `src' except for files whose names end in `.o', use the command

tar -cf src.tar --exclude='*.o' src

You may give multiple `--exclude'  options.

`-X file'
Causes tar to ignore files that match the patterns listed in file.

The tar command also enables you to create lists of files that should be excluded from the tarball. The option -X accomplishes that, and it can be used together with the option -T (include files from a list; see below).

For example

ls *.zip > exclude.lst
echo exclude.lst >> exclude.lst
echo myarchive.tar >> exclude.lst
tar cvf myarchive.tar -X exclude.lst *

In this case zip files will not be included in the archive. It's a good idea to exclude the exclude file itself, as well as the tar file that you are creating, in your exclude file; notice that this has been done in the example above.

When archiving directories that are under some version control system (VCS), it is often convenient to read exclusion patterns from the VCS' ignore files (e.g. `.cvsignore', `.gitignore', etc.) The following options provide such possibility:

`--exclude-vcs-ignores'
Before archiving a directory, see if it contains any of the following files: `.cvsignore', `.gitignore', `.bzrignore', or `.hgignore'. If so, read ignore patterns from these files.

The patterns are treated much as the corresponding VCS would treat them, i.e.:

`.cvsignore'
Contains shell-style globbing patterns that apply only to the directory where this file resides. No comments are allowed in the file. Empty lines are ignored.

`.gitignore'
Contains shell-style globbing patterns. Applies to the directory where `.gitignore' is located and all its subdirectories. Any line beginning with a `#' is a comment. Backslash escapes the comment character.

`.bzrignore'
Contains shell globbing patterns and regular expressions (if prefixed with `RE:'). Patterns affect the directory and all its subdirectories. Any line beginning with a `#' is a comment.

`.hgignore'
Contains POSIX regular expressions. The line `syntax: glob' switches to shell globbing patterns. The line `syntax: regexp' switches back. Comments begin with a `#'. Patterns affect the directory and all its subdirectories.

`--exclude-ignore=file'
Before dumping a directory, tar checks if it contains file. If so, exclusion patterns are read from this file. The patterns affect only the directory itself.

`--exclude-ignore-recursive=file'
Same as `--exclude-ignore', except that the patterns read affect both the directory where file resides and all its subdirectories.

`--exclude-vcs'
Exclude files and directories used by the following version control systems: `CVS', `RCS', `SCCS', `SVN', `Arch', `Bazaar', `Mercurial', and `Darcs'.

As of version 1.29, the files excluded by this option include the administrative directories of these systems (such as `CVS/' and `.git/') and their ignore files.

`--exclude-backups'
Exclude backup and lock files, i.e. files that match shell globbing patterns such as `*~', `#*#', and `.#*'.

When creating an archive, the `--exclude-caches' option family causes tar to exclude all directories that contain a cache directory tag. A cache directory tag is a short file with the well-known name `CACHEDIR.TAG' and a standard header. Various applications write cache directory tags into directories they use to hold regenerable, non-precious data, so that such data can be more easily excluded from backups.

There are three `exclude-caches' options, each providing a different exclusion semantics:

`--exclude-caches'
Do not archive the contents of the directory, but archive the directory itself and the `CACHEDIR.TAG' file.

`--exclude-caches-under'
Do not archive the contents of the directory, nor the `CACHEDIR.TAG' file; archive only the directory itself.

`--exclude-caches-all'
Omit directories containing a `CACHEDIR.TAG' file entirely.
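A sketch showing the tag in action (the directory names are made up; the Signature line is the standard cache-directory-tag header):

```shell
rm -rf /tmp/cache_demo
mkdir -p /tmp/cache_demo/data /tmp/cache_demo/cache
echo keep > /tmp/cache_demo/data/keep.txt
echo junk > /tmp/cache_demo/cache/junk.txt

# The well-known header line that marks a cache directory
printf 'Signature: 8a477f597d28d172789f06886806bc55\n' \
  > /tmp/cache_demo/cache/CACHEDIR.TAG

tar -cf /tmp/cache.tar --exclude-caches -C /tmp cache_demo
tar -tf /tmp/cache.tar    # junk.txt is gone; the tag file itself is kept
```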

Exclude file option ( --exclude-from  option)

tar has the ability to ignore specified files and directories listed in a special file. The location of this file is specified with the option -X or `--exclude-from'.

The syntax is one pattern per line. Each line is interpreted as a shell globbing pattern.

Thus if tar is called as `tar -c -X foo .'  and the file `foo' contains a single line `*.o', no files whose names end in `.o' will be added to the archive.

Note that lines from the file are read verbatim. One of the frequent errors is leaving some extra whitespace after a file name, which is difficult to catch using text editors.
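A quick way to spot such invisible trailing whitespace is grep with an end-of-line anchor (the exclude file here is made up):

```shell
# An exclude file where the second line has a stray trailing space
printf '*.o\n*.tmp \n' > /tmp/exclude_check.lst

# Show offending lines with their line numbers
grep -n '[[:space:]]$' /tmp/exclude_check.lst
```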

However, empty lines are OK and are ignored. They can be used for readability.

For example:

# Not old backups                                                               
# Not temporary files                                                           

# Not the cache for pacman

see BackupYourSystem-TAR - Community Help Wiki

Exclude tag option

Another option family, `--exclude-tag', provides a generalization of this concept. It takes a single argument, a file name to look for. Any directory that contains this file will be excluded from the dump. Similarly to `exclude-caches', there are three options in this option family:

`--exclude-tag=file'
Do not dump the contents of the directory, but dump the directory itself and the file.

`--exclude-tag-under=file'
Do not dump the contents of the directory, nor the file; archive only the directory itself.

`--exclude-tag-all=file'
Omit directories containing the file entirely.

Multiple `--exclude-tag*'  options can be given.

For example, given this directory:

$ find dir

The `--exclude-tag'  will produce the following:

$ tar -cf archive.tar --exclude-tag=tagfile -v dir
tar: dir/folk/: contains a cache directory tag tagfile;
  contents not dumped

Both the `dir/folk' directory and its tagfile are preserved in the archive, however the rest of files in this directory are not.

Now, using the `--exclude-tag-under'  option will exclude `tagfile' from the dump, while still preserving the directory itself, as shown in this example:

$ tar -cf archive.tar --exclude-tag-under=tagfile -v dir
tar: dir/folk/: contains a cache directory tag tagfile;
  contents not dumped

Finally, using `--exclude-tag-all'  omits the `dir/folk' directory entirely:

$ tar -cf archive.tar --exclude-tag-all=tagfile -v dir
tar: dir/folk/: contains a cache directory tag tagfile;
  directory not dumped

Problems with using the exclude options

From: Problems with Using the exclude Options

Some users find the `exclude' options confusing; the GNU tar manual lists several common pitfalls.

Including files from the list

The include file can be used to specify which files should be included. 

Combining tar with the find utility, you can archive files based on many criteria, including such things as how old the files are, how big, and how recently used. The following sequence of commands locates files that are newer than a particular file and creates an include file of files to be backed up.

find . -newer lastproject -print >> include.lst
tar cvf myfiles.tar -T include.lst

That is especially important if you bring an archive to a new place after modifying some files in the morning and need to merge the changes back in the evening.

Here both an include and an exclude file are used. Any file that appears in both lists, by the way, will be included.

tar cvf myarchive.tar -X exclude.lst -T include.lst

Notice how we use options that require parameters: the first such option is used as the last one in the first string of options (cvf), and then each subsequent option is specified separately with its parameter.

GNU tar has some features that enable it to mimic the behavior of find and tar in a single command.
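For example, GNU tar's --newer-mtime option can replace the find -newer step above, selecting files by modification date during creation (GNU tar and GNU touch assumed; the file names and dates are made up):

```shell
rm -rf /tmp/newer_demo && mkdir -p /tmp/newer_demo && cd /tmp/newer_demo
touch -d '2020-01-01' old.txt      # backdated file (GNU touch -d)
echo fresh > new.txt

# Archive only files modified after the given date
tar -cf /tmp/recent.tar --newer-mtime='2021-01-01' old.txt new.txt
tar -tf /tmp/recent.tar
```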

Transfer of tarballs between servers

Tar archives can be transferred with remote copy commands such as scp, sftp, ftp, or kermit; these utilities know how to deal with binary data. To mail a tarball, you first need to encode the file. The uuencode command turns the contents of files into printable characters using a fixed-width format that allows them to be mailed and subsequently decoded easily. The resultant file will be larger than the original: uuencode must map a larger character set of printable and nonprintable characters onto a smaller one of printable characters, so it uses extra bits, and the file will be about a third larger than the original.
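uuencode itself is often absent on modern systems; base64 from coreutils does the same job, as in this round-trip sketch (all paths are made up):

```shell
mkdir -p /tmp/mail_demo && echo payload > /tmp/mail_demo/p.txt
tar -czf /tmp/mail.tgz -C /tmp mail_demo

# Encode to printable text (mailable), then decode and verify
base64 /tmp/mail.tgz > /tmp/mail.tgz.b64
base64 -d /tmp/mail.tgz.b64 > /tmp/mail_roundtrip.tgz
cmp /tmp/mail.tgz /tmp/mail_roundtrip.tgz && echo 'round trip OK'
```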

Tar Usage in Pipes, "Back-to-back tar"

The mv command cannot be used to move directories across file systems. A file system can be thought of as a hard drive or hard drive partition. The mv command works fine when you want to move a directory between different locations on the same file system (hard drive), but it doesn't work well when you want to move a directory across file systems. Depending on your version of mv, an error message could be generated when you try to do this.

For example, consider this directory:

ls -F /tmp/ch2 
ch2-01.doc ch2.doc@

If you use the mv command to move this directory to the directory /home/mybook on a different file system, an error message similar to the following is generated:

mv: cannot move 'ch2' across filesystems: Not a regular file

Some UNIX versions implement a workaround inside mv  that executes the following commands:

rm -rf destination
cp -rPp source destination
rm -rf source

Here source and destination are directories.

The main problem with this strategy is that while symbolic links are copied, hard links in the source directory are not always copied correctly. Sometimes copying hard links is just impossible, as the target of the move operation can be in a different filesystem.

In addition to this, there are two other minor problems with using cp.

The workaround for these problems is to use the tar (as in tape archive) command to copy directories. This is usually accomplished with a pipe, and such an operation became a Unix idiom for copying a large tree of files, possibly a whole filesystem:

(cd mydata; tar cvf - *) | tar xpBf -

What this command does is move to a subdirectory and read files, which it then pipes to an extract at the current working directory. The parentheses group the cd and tar commands so that you can be working in two directories at the same time. The two - characters in this command represent standard output and standard input, informing the respective tar commands where to write and read data. The - designator thereby allows tar commands to be chained in this way.

You can also use the cd command with the target directory instead of the source directory. For example:
tar cvzf - joeuser | (cd /Archive && tar xpzf -)

You can also move files from one server to another. See Cross network copy below.

Archives created with tar preserve the file ownership, file permissions, and access and creation dates of the files. Once the files are extracted from a tar file they look the same in content and description as they did when archived. The p (preserve) option will restore file permissions to the original state. This is usually a good idea since you'll ordinarily want to preserve permissions as well as dates so that executables will execute and you can determine how old they are.

In some situations, you might not want the original owners restored, since the original owners may be people at some other organization altogether. The tar command will set up ownership according to the numeric UID of the original owner. If someone in your local /etc/passwd file or network information service has the same UID, that person will become the owner; otherwise the owner will display numerically. Obviously, ownership can be altered later, but in this case you may want to unpack the archive as a regular user instead of root. If you unpack the archive with non-root privileges, all UIDs and GIDs will be replaced with the UID and GID of that user.

Keep that in mind: if you make backups or restores, you practically always need to do it as UID 0 (root).

Cross network copy

No tar tutorial is complete without an example of a cross-network copy, often called a tar-to-tar file transfer. One tar command creates an archive while the other extracts from it, without ever creating a *.tar file. The only problem with this command is that it looks a bit awkward. This capability depends on the presence of ssh or rsh.

To copy a directory hierarchy using this technique, first position yourself in the source directory:

cd fromdir

Next, tar the contents of the directory using the create option (the "c" option). Pipe the output to a tar extract (the "x" option) command enclosed in parentheses.

If you are not sure how a tar archive was created, it is safer to expand it first in /tmp and inspect the results before restoring it over the original tree (especially if it is a system directory).

For example:

tar cBf - * | (cd todir; tar xvpBf -)

The hyphens in the tar commands inform tar that no file is involved in the operation. The option B forces multiple reads and allows the command, as needed, to work across a network. The p is the preserve option, generally the default when the superuser uses this command.

Using back-to-back tar commands across the network you can move a tree of directories from one server to another with a single command:

cd /home/joeuser
tar cBf - * | ssh new_server "cd /home/joeuser; tar xvBf -"

Notice how we group the remote commands to clearly separate what we are running on the remote host from what we are doing locally. You might also use tar in conjunction with dd to read files from, or write files to, a tape device on a remote system. In the following command, we copy the files from the current directory and write them to a tape device on a remote host.

tar cvfb - 20 * | ssh boson dd of=/dev/rmt0 obs=20b

Back-to-back tar commands have been used for many years. Remember that if you unpack the archive with non-root privileges, all UIDs and GIDs will be replaced with the UID and GID of that user. Keep that in mind: for backups and restores you practically always need to run tar as UID 0 (root).

If directories contained in the tar archive already exist, their permissions will be changed to those in the archive, and files will be overwritten. If this is an important system directory like /etc, the net result can be a large SNAFU.



The use of the -f option to specify the file on which tar operates is the source of a lot of grievances. This is not a standard arrangement, and as such it takes time to get used to it.

Exclusion of files is tricky, and you need to understand that the patterns are essentially applied to the member names as they appear in the tar listing. If you did not use the absolute-path option when creating the archive, there is no leading slash in those names, and you cannot use one in your patterns.
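A quick demonstration of how member names are stored (made-up paths):

```shell
rm -rf /tmp/names_demo
mkdir -p /tmp/names_demo/sub && echo x > /tmp/names_demo/sub/file.txt
tar -cf /tmp/names.tar -C /tmp names_demo

# Member names are relative: no leading slash
tar -tf /tmp/names.tar
# So a matching exclusion pattern would be e.g. 'names_demo/sub/*',
# not '/tmp/names_demo/sub/*'
```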

If a tarball becomes corrupted you can still recover most of the files because of the zero-compression format, but if the file was compressed with gzip recovery is trickier.



Old News ;-)

[Sep 10, 2018] How to Exclude a Directory for TAR

Feb 05, 2012 |
Frankly speaking, I did not want to waste time and bandwidth downloading images. Here is the syntax to exclude a directory.

# tar cvfp mytarball.tar /mypath/Example.com_DIR --exclude=/mypath/Example.com_DIR/images

Tar everything in the current directory but exclude two files

# tar cvpf mytar.tar * --exclude=index.html --exclude=myimage.png

[Apr 28, 2018] Linux server backup using tar

Apr 28, 2018 |

up vote 2 down vote favorite 3
I'm new to Linux backup.
I'm thinking of a full system backup of my Linux server using tar. I came up with the following command:

tar -zcvpf /archive/fullbackup.tar.gz \
--exclude=/archive \
--exclude=/mnt \
--exclude=/proc \
--exclude=/lost+found \
--exclude=/dev \
--exclude=/sys \
--exclude=/tmp \
/

and if in need of any hardware problem, restore it with

cd /
tar -zxpvf fullbackup.tar.gz

But does my above code back up MBR and filesystem? Will the above code be enough to bring the same server back?

But does my above code back up MBR and filesystem?

Hennes:

No. It backs up the contents of the filesystem.

Not the MBR which is not a file but is contained in a sector outside the file systems. And not the filesystem with it potentially tweaked settings and or errors, just the contents of the file system (granted, that is a minor difference).

and if in need of any hardware problem, restore it with

cd /
tar -zxpvf fullbackup.tar.gz

Will the above code be enough to bring the same server back?

Probably, as long as you use the same setup. The tarball will just contain the files, not the partition scheme used for the disks. So you will have to partition the disk in the same way. (Or copy the old partition scheme, e.g. with dd if=/dev/sda of=myMBRbackup bs=512 count=1 ).
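The dd one-liner above is easy to rehearse without touching real hardware. In this sketch, disk.img is a hypothetical stand-in for /dev/sda; on a real machine you would run the same dd as root against the device and store the copy off the box:

```shell
work=$(mktemp -d) && cd "$work"
# disk.img stands in for a real device such as /dev/sda
dd if=/dev/urandom of=disk.img  bs=512 count=4 2>/dev/null
dd if=disk.img    of=mbr.backup bs=512 count=1 2>/dev/null
# restoring is the same command with if= and of= swapped (destructive!)
head -c 512 disk.img > first512
cmp -s mbr.backup first512 && echo "MBR copy verified"
```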

Note that there are better ways to create backups, some of which already have been answered in other posts. Personally I would just backup the configuration and the data. Everything else is merely a matter of reinstalling. Possibly even with the latest version.

Also note that tar will back up all files. The first time that is a good thing.

But if you run that weekly or daily you will get a lot of large backups. In that case look at rsync (which does incremental changes) or one of the many other options.


Using tar to backup/restore a system is pretty rudimentary, and by that I mean that there are probably more elegant ways out there to backup your system... If you really want to stick to tar, here's a very good guide I found (it includes instructions on backing up the MBR; grub specifically). While it's on the Ubuntu wiki website, there's no reason why it wouldn't work on any UNIX/Linux machine.

You may also wish to check out this:

If you'd like something with a nice web GUI that's relatively straightforward to set up and use:

Floyd Feb 14 '13 at 6:40

Using remastersys :

You can create a live ISO of your existing system: install all the required packages on your Ubuntu box, then build an ISO using remastersys. Then, using a startup disk creator, you can create a bootable USB from this ISO.

Edit your /etc/apt/sources.list file and add the following line at the end of the file:

deb precise main

Then run the following command:

sudo apt-get update

sudo apt-get install remastersys

sudo apt-get install remastersys-gui

sudo apt-get install remastersys-gtk

To run the remastersys in gui mode, type the following command:

sudo remastersys-gui

[Apr 28, 2018] How to properly backup your system using TAR

This is mostly incorrect if we are talking about bare metal restore ;-). Mostly correct for your data. The thinking is very primitive here, which is the trademark of Ubuntu.
Only some tips are useful: you are warned
Notable quotes:
"... Don't forget to empty your Wastebasket, remove any unwanted files in your /home ..."
"... Depending on why you're backing up, you might ..."
"... This will not create a mountable DVD. ..."
Apr 28, 2018 |

Preparing for backup

Just a quick note. You are about to back up your entire system. Don't forget to empty your Wastebasket, remove any unwanted files in your /home directory, and cleanup your desktop.

.... ... ...

[Apr 28, 2018] tar exclude single files/directories, not patterns

The important detail about this is that the excluded file name must match exactly the notation reported by the tar listing.
Apr 28, 2018 |

Udo G ,May 9, 2012 at 7:13

I'm using tar to make daily backups of a server and want to avoid backup of /proc and /sys system directories, but without excluding any directories named "proc" or "sys" somewhere else in the file tree.

For, example having the following directory tree (" bla " being normal files):

# find

I would like to exclude ./sys but not ./foo/sys .

I can't seem to find an --exclude pattern that does that...

# tar cvf /dev/null * --exclude=sys


# tar cvf /dev/null * --exclude=/sys

Any ideas? (Linux Debian 6)

drinchev ,May 9, 2012 at 7:19

Are you sure there is no exclude? If you are using MAC OS it is a different story! Look here. – drinchev May 9 '12 at 7:19

Udo G ,May 9, 2012 at 7:21

Not sure I understand your question. There is a --exclude option, but I don't know how to match it for single, absolute file names (not any file by that name) - see my examples above. – Udo G May 9 '12 at 7:21

paulsm4 ,May 9, 2012 at 7:22

Look here. – paulsm4 May 9 '12 at 7:22

CharlesB ,May 9, 2012 at 7:29

You can specify absolute paths to the exclude pattern, this way other sys or proc directories will be archived:
tar --exclude=/sys --exclude=/proc /

Udo G ,May 9, 2012 at 7:34

True, but the important detail about this is that the excluded file name must match exactly the notation reported by the tar listing. For my example that would be ./sys - as I just found out now. – Udo G May 9 '12 at 7:34

pjv ,Apr 9, 2013 at 18:14

In this case you might want to use:
--anchored --exclude=sys/\*

because in case your tar does not show the leading "/" you have a problem with the filter.
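The "./sys matches only the top-level sys" behaviour is easy to verify on a throwaway tree (GNU tar assumed; directory names here are invented):

```shell
work=$(mktemp -d) && cd "$work"
mkdir -p tree/sys tree/foo/sys
touch tree/sys/bla tree/foo/sys/bla
# member names start with "./", so './sys' matches only the top-level sys,
# while a bare 'sys' pattern would match ./foo/sys as well
tar cf t.tar -C tree --exclude='./sys' .
tar tf t.tar | grep sys    # only the ./foo/sys entries appear
```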

Savvas Radevic ,May 9, 2013 at 10:44

This did the trick for me, thank you! I wanted to exclude a specific directory, not all directories/subdirectories matching the pattern. bsdtar does not have "--anchored" option though, and with bsdtar we can use full paths to exclude specific folders. – Savvas Radevic May 9 '13 at 10:44

Savvas Radevic ,May 9, 2013 at 10:58

ah found it! in bsdtar the anchor is "^": bsdtar cvjf test.tar.bz2 --exclude myfile.avi --exclude "^myexcludedfolder" * – Savvas Radevic May 9 '13 at 10:58

Stephen Donecker ,Nov 8, 2012 at 19:12

Using tar you can exclude directories by placing a tag file in any directory that should be skipped.

Create tag files,

touch /sys/.exclude_from_backup
touch /proc/.exclude_from_backup


tar -czf backup.tar.gz --exclude-tag-all=.exclude_from_backup *
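A self-contained illustration of the tag-file approach, using a hypothetical cache/ directory as the one to skip:

```shell
work=$(mktemp -d) && cd "$work"
mkdir -p proj/src proj/cache
echo 'int main(){}' > proj/src/main.c
touch proj/cache/big.bin proj/cache/.exclude_from_backup
# --exclude-tag-all drops the tagged directory entirely, tag file included
tar czf backup.tar.gz --exclude-tag-all=.exclude_from_backup proj
tar tzf backup.tar.gz    # proj/cache/ never appears
```

(The related --exclude-tag variant would keep the directory entry and the tag file itself, skipping only the other contents.)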

pjv ,Apr 9, 2013 at 17:58

Good idea in theory but often /sys and /proc cannot be written to. – pjv Apr 9 '13 at 17:58

[Apr 27, 2018] Shell command to tar directory excluding certain files-folders

Highly recommended!
Notable quotes:
"... Trailing slashes at the end of excluded folders will cause tar to not exclude those folders at all ..."
"... I had to remove the single quotation marks in order to exclude the directories successfully ..."
"... Exclude files using tags by placing a tag file in any directory that should be skipped ..."
"... Nice and clear thank you. For me the issue was that other answers include absolute or relative paths. But all you have to do is add the name of the folder you want to exclude. ..."
"... Adding a wildcard after the excluded directory will exclude the files but preserve the directories: ..."
"... You can use cpio(1) to create tar files. cpio takes the files to archive on stdin, so if you've already figured out the find command you want to use to select the files for the archive, pipe it into cpio to create the tar file: ..."
Apr 27, 2018 |

deepwell ,Jun 11, 2009 at 22:57

Is there a simple shell command/script that supports excluding certain files/folders from being archived?

I have a directory that need to be archived with a sub directory that has a number of very large files I do not need to backup.

Not quite solutions:

The tar --exclude=PATTERN command matches the given pattern and excludes those files, but I need specific files & folders to be ignored (full file path), otherwise valid files might be excluded.

I could also use the find command to create a list of files, exclude the ones I don't want to archive, and pass the list to tar, but that only works for a small number of files. I have tens of thousands.

I'm beginning to think the only solution is to create a file with a list of files/folders to be excluded, then use rsync with --exclude-from=file to copy all the files to a tmp directory, and then use tar to archive that directory.

Can anybody think of a better/more efficient solution?

EDIT: cma's solution works well. The big gotcha is that the --exclude='./folder' MUST be at the beginning of the tar command. Full command (cd first, so backup is relative to that directory):

cd /folder_to_backup
tar --exclude='./folder' --exclude='./upload/folder2' -zcvf /backup/filename.tgz .

Rekhyt ,May 1, 2012 at 12:55

Another thing caught me out on that, might be worth a note:

Trailing slashes at the end of excluded folders will cause tar to not exclude those folders at all. – Rekhyt May 1 '12 at 12:55
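That trailing-slash gotcha is easy to reproduce (GNU tar; the skipme directory is invented for the demo):

```shell
work=$(mktemp -d) && cd "$work"
mkdir -p d/skipme && touch d/skipme/f d/keep
tar cf bad.tar  -C d --exclude='./skipme/' .  # trailing slash: NOT excluded
tar cf good.tar -C d --exclude='./skipme'  .  # excluded as intended
tar tf bad.tar  | grep skipme                 # dir and file still present
tar tf good.tar | grep skipme || echo "excluded"
```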

Brice ,Jun 24, 2014 at 16:06

I had to remove the single quotation marks in order to exclude the directories successfully. ( tar -zcvf gatling-charts-highcharts-1.4.6.tar.gz /opt/gatling-charts-highcharts-1.4.6 --exclude=results --exclude=target ) – Brice Jun 24 '14 at 16:06

Charles Ma ,Jun 11, 2009 at 23:11

You can have multiple exclude options for tar so
$ tar --exclude='./folder' --exclude='./upload/folder2' -zcvf /backup/filename.tgz .

etc will work. Make sure to put --exclude before the source and destination items.

shasi kanth ,Feb 27, 2015 at 10:49

As an example, if you are trying to backup your wordpress project folder, excluding the uploads folder, you can use this command:

tar -cvf wordpress_backup.tar wordpress --exclude=wp-content/uploads

Alfred Bez ,Jul 16, 2015 at 7:28

I came up with the following command: tar -zcv --exclude='file1' --exclude='pattern*' --exclude='file2' -f /backup/filename.tgz . Note that the -f flag needs to directly precede the tar file name.

flickerfly ,Aug 21, 2015 at 16:22

A "/" on the end of the exclude directory will cause it to fail. I guess tar thinks an ending / is part of the directory name to exclude. BAD: --exclude=mydir/ GOOD: --exclude=mydir – flickerfly Aug 21 '15 at 16:22

NightKnight on ,Nov 24, 2016 at 9:55

> Make sure to put --exclude before the source and destination items. OR use an absolute path for the exclude: tar -cvpzf backups/target.tar.gz --exclude='/home/username/backups' /home/username – NightKnight on Nov 24 '16 at 9:55

Johan Soderberg ,Jun 11, 2009 at 23:10

To clarify, you can use full path for --exclude. – Johan Soderberg Jun 11 '09 at 23:10

Stephen Donecker ,Nov 8, 2012 at 0:22

Possible options to exclude files/directories from backup using tar:

Exclude files using multiple patterns

tar -czf backup.tar.gz --exclude=PATTERN1 --exclude=PATTERN2 ... /path/to/backup

Exclude files using an exclude file filled with a list of patterns

tar -czf backup.tar.gz -X /path/to/exclude.txt /path/to/backup

Exclude files using tags by placing a tag file in any directory that should be skipped

tar -czf backup.tar.gz --exclude-tag-all=exclude.tag /path/to/backup
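The exclude-file variant (-X, i.e. --exclude-from) can be sketched end to end; the site/ layout and patterns below are made up:

```shell
work=$(mktemp -d) && cd "$work"
mkdir -p site/logs && touch site/index.html site/debug.tmp site/logs/x.log
# one pattern per line, same syntax as --exclude
printf '%s\n' 'site/logs' '*.tmp' > exclude.txt
tar czf backup.tar.gz -X exclude.txt site
tar tzf backup.tar.gz    # only site/ and site/index.html remain
```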

Anish Ramaswamy ,May 16, 2015 at 0:11

This answer definitely helped me! The gotcha for me was that my command looked something like tar -czvf mysite.tar.gz mysite --exclude='./mysite/file3' --exclude='./mysite/folder3' , and this didn't exclude anything. – Anish Ramaswamy May 16 '15 at 0:11

Hubert ,Feb 22, 2017 at 7:38

Nice and clear thank you. For me the issue was that other answers include absolute or relative paths. But all you have to do is add the name of the folder you want to exclude. – Hubert Feb 22 '17 at 7:38

GeertVc ,Dec 31, 2013 at 13:35

Just want to add to the above that it is important that the directory to be excluded should NOT contain a trailing slash. So, --exclude='/path/to/exclude/dir' is CORRECT, --exclude='/path/to/exclude/dir/' is WRONG. – GeertVc Dec 31 '13 at 13:35

Eric Manley ,May 14, 2015 at 14:10

You can use standard "ant notation" to exclude directories by relative path.
This works for me and excludes any .git or node_modules directories:
tar -cvf myFile.tar --exclude=**/.git/* --exclude=**/node_modules/*  -T /data/txt/myInputFile.txt 2> /data/txt/myTarLogFile.txt

myInputFile.txt Contains:


not2qubit ,Apr 4 at 3:24

I believe this requires that the Bash shell option globstar be enabled. Check with shopt -s globstar. I think it is off by default on most Unix-based OSes. From the Bash manual: " globstar: If set, the pattern ** used in a filename expansion context will match all files and zero or more directories and subdirectories. If the pattern is followed by a '/', only directories and subdirectories match. " – not2qubit Apr 4 at 3:24

Benoit Duffez ,Jun 19, 2016 at 21:14

Don't forget COPYFILE_DISABLE=1 when using tar, otherwise you may get ._ files in your tarball. – Benoit Duffez Jun 19 '16 at 21:14

Scott Stensland ,Feb 12, 2015 at 20:55

This exclude pattern handles filename suffix like png or mp3 as well as directory names like .git and node_modules
tar --exclude={*.png,*.mp3,*.wav,.git,node_modules} -Jcf ${target_tarball}  ${source_dirname}

Alex B ,Jun 11, 2009 at 23:03

Use the find command in conjunction with the tar append (-r) option. This way you can add files to an existing tar in a single step, instead of a two pass solution (create list of files, create tar).
find /dir/dir -prune ... -o etc etc.... -exec tar rvf ~/tarfile.tar {} \;

carlo ,Mar 4, 2012 at 15:18

To avoid possible 'xargs: Argument list too long' errors due to the use of find ... | xargs ... when processing tens of thousands of files, you can pipe the output of find directly to tar using find ... -print0 | tar --null ... .
# archive a given directory, but exclude various files & directories 
# specified by their full file paths
find "$(pwd -P)" -type d \( -path '/path/to/dir1' -or -path '/path/to/dir2' \) -prune \
   -or -not \( -path '/path/to/file1' -or -path '/path/to/file2' \) -print0 | 
   gnutar --null --no-recursion -czf archive.tar.gz --files-from -
   #bsdtar --null -n -czf archive.tar.gz -T -

Znik ,Mar 4, 2014 at 12:20

You can quote the exclude string, like this: 'somedir/filesdir/*'. Then the shell isn't going to expand the asterisks and other wildcard characters.

Tuxdude ,Nov 15, 2014 at 5:12

xargs -n 1 is another option to avoid xargs: Argument list too long error ;) – Tuxdude Nov 15 '14 at 5:12

Aaron Votre ,Jul 15, 2016 at 15:56

I agree the --exclude flag is the right approach.
$ tar --exclude='./folder_or_file' --exclude='file_pattern' --exclude='fileA'

A word of warning for a side effect that I did not find immediately obvious: The exclusion of 'fileA' in this example will search for 'fileA' RECURSIVELY!

Example: a directory with a single subdirectory containing a file of the same name (data.txt):

  |  data.txt
  |  config.docx

Mike ,May 9, 2014 at 21:26

After reading this thread, I did a little testing on RHEL 5 and here are my results for tarring up the abc directory:

This will exclude the directories error and logs and all files under the directories:

tar cvpzf abc.tgz abc/ --exclude='abc/error' --exclude='abc/logs'

Adding a wildcard after the excluded directory will exclude the files but preserve the directories:

tar cvpzf abc.tgz --exclude='abc/error/*' --exclude='abc/logs/*' abc/

camh ,Jun 12, 2009 at 5:53

You can use cpio(1) to create tar files. cpio takes the files to archive on stdin, so if you've already figured out the find command you want to use to select the files for the archive, pipe it into cpio to create the tar file:
find ... | cpio -o -H ustar | gzip -c > archive.tar.gz

frommelmak ,Sep 10, 2012 at 14:08

You can also use one of the "--exclude-tag" options depending on your needs:

The folder hosting the specified FILE will be excluded.

Joe ,Jun 11, 2009 at 23:04

Your best bet is to use find with tar, via xargs (to handle the large number of arguments). For example:
find / -print0 | xargs -0 tar cjf tarfile.tar.bz2

jørgensen ,Mar 4, 2012 at 15:23

That can cause tar to be invoked multiple times - and will also pack files repeatedly. Correct is: find / -print0 | tar -T- --null --no-recursion -cjf tarfile.tar.bz2 – jørgensen Mar 4 '12 at 15:23

Stphane ,Dec 19, 2015 at 11:10

I read somewhere that when using xargs, one should use the tar r option instead of c, because when find actually finds loads of results, xargs will split those results (based on the local command-line argument limit) into chunks and invoke tar on each part. This will result in an archive containing only the last chunk returned by xargs and not all results found by the find command. – Stphane Dec 19 '15 at 11:10

Andrew ,Apr 14, 2014 at 16:21

With GNU tar v1.26, the --exclude needs to come after the archive file and backup directory arguments, should have no leading or trailing slashes, and prefers no quotes (single or double). So, relative to the PARENT directory to be backed up, it's:

tar cvfz /path_to/mytar.tgz ./dir_to_backup --exclude=some_path/to_exclude

Ashwini Gupta ,Jan 12 at 10:30

tar -cvzf destination_folder source_folder -X /home/folder/excludes.txt

-X indicates a file which contains a list of filenames which must be excluded from the backup. For instance, you can specify *~ in this file to not include any filenames ending with ~ in the backup.

Georgios ,Sep 4, 2013 at 22:35

Possible redundant answer but since I found it useful, here it is:

While a FreeBSD root (i.e. using csh) I wanted to copy my whole root filesystem to /mnt but without /usr and (obviously) /mnt. This is what worked (I am at /):

tar --exclude ./usr --exclude ./mnt --create --file - . | (cd /mnt && tar xvf -)

My whole point is that it was necessary (by putting the ./ ) to specify to tar that the excluded directories were part of the greater directory being copied.

My €0.02
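The same create-pipe-extract pattern can be checked locally; this sketch (made-up paths) shows the p flag carrying permissions across the pipe:

```shell
work=$(mktemp -d) && cd "$work"
mkdir -p src dst
echo data > src/secret && chmod 640 src/secret
# head of the pipeline archives, tail extracts in another directory
(cd src && tar cf - .) | (cd dst && tar xpf -)
stat -c '%a' dst/secret    # 640 -- permissions survived the copy
```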

user2792605 ,Sep 30, 2013 at 20:07

I had no luck getting tar to exclude a 5 Gigabyte subdirectory a few levels deep. In the end, I just used the unix Zip command. It worked a lot easier for me.

So for this particular example from the original post
(tar --exclude='./folder' --exclude='./upload/folder2' -zcvf /backup/filename.tgz . )

The equivalent would be:

zip -r /backup/filename.zip . -x upload/folder/**\* upload/folder2/**\*

(NOTE: Here is the post I originally used that helped me )

t0r0X ,Sep 29, 2014 at 20:25

Beware: zip does not pack empty directories, but tar does! – t0r0X Sep 29 '14 at 20:25

RohitPorwal ,Jul 21, 2016 at 9:56

Check it out
tar cvpzf zip_folder.tgz . --exclude=./public --exclude=./tmp --exclude=./log --exclude=fileName

James ,Oct 28, 2016 at 14:01

The following bash script should do the trick. It uses the answer given here by Marcus Sundman.

echo -n "Please enter the name of the tar file you wish to create with out extension "
read nam

echo -n "Please enter the path to the directories to tar "
read pathin

excludes=`find $pathin -iname "*.CC" -exec echo "--exclude \'{}\'" \;|xargs`

echo tar -czvf $nam.tar.gz $excludes $pathin

This will print out the command you need and you can just copy and paste it back in. There is probably a more elegant way to provide it directly to the command line.

Just change *.CC for any other common extension, file name or regex you want to exclude and this should still work.


Just to add a little explanation: find generates a list of files matching the chosen pattern (in this case *.CC). This list is passed via xargs to the echo command. This prints --exclude 'one entry from the list'. The backslashes (\) are escape characters for the ' marks.

tripleee ,Sep 14, 2017 at 4:27

Requiring interactive input is a poor design choice for most shell scripts. Make it read command-line parameters instead and you get the benefit of the shell's tab completion, history completion, history editing, etc. – tripleee Sep 14 '17 at 4:27

tripleee ,Sep 14, 2017 at 4:38

Additionally, your script does not work for paths which contain whitespace or shell metacharacters. You should basically always put variables in double quotes unless you specifically require the shell to perform whitespace tokenization and wildcard expansion. – tripleee Sep 14 '17 at 4:38

> ,Apr 18 at 0:31

For those who have issues with it, some versions of tar would only work properly without the './' in the exclude value.
$ tar --version

tar (GNU tar) 1.27.1

Command syntax that work:

tar -czvf ../allfiles-butsome.tar.gz * --exclude=acme/foo

These will not work:

$ tar -czvf ../allfiles-butsome.tar.gz * --exclude=./acme/foo
$ tar -czvf ../allfiles-butsome.tar.gz * --exclude='./acme/foo'
$ tar --exclude=./acme/foo -czvf ../allfiles-butsome.tar.gz *
$ tar --exclude='./acme/foo' -czvf ../allfiles-butsome.tar.gz *
$ tar -czvf ../allfiles-butsome.tar.gz * --exclude=/full/path/acme/foo
$ tar -czvf ../allfiles-butsome.tar.gz * --exclude='/full/path/acme/foo'
$ tar --exclude=/full/path/acme/foo -czvf ../allfiles-butsome.tar.gz *
$ tar --exclude='/full/path/acme/foo' -czvf ../allfiles-butsome.tar.gz *

[Jul 20, 2017] Server Backup Procedures

Jul 20, 2017 |
Backing up with ``tar'':

If you decide to use ``tar'' as your backup solution, you should probably take the time to get to know the various command-line options that are available; type " man tar " for a comprehensive list. You will also need to know how to access the appropriate backup media; although all devices are treated like files in the Unix world, if you are writing to a character device such as a tape, the name of the "file" is the device name itself (eg. `` /dev/nst0 '' for a SCSI-based tape drive).

The following command will perform a backup of your entire Linux system onto the `` /archive/ '' file system, with the exception of the `` /proc/ '' pseudo-filesystem, any mounted file systems in `` /mnt/ '', the `` /archive/ '' file system (no sense backing up our backup sets!), as well as Squid's rather large cache files (which are, in my opinion, a waste of backup media and unnecessary to back up):

tar -zcvpf /archive/full-backup-`date '+%d-%B-%Y'`.tar.gz \
    --directory / --exclude=mnt --exclude=proc --exclude=var/spool/squid .

Don't be intimidated by the length of the command above! As we break it down into its components, you will see the beauty of this powerful utility.

The above command specifies the options `` z '' (compress; the backup data will be compressed with ``gzip''), `` c '' (create; an archive file is being created), `` v '' (verbose; display a list of files as they get backed up), and `` p '' (preserve permissions; file protection information will be "remembered" so it can be restored). The `` f '' (file) option states that the very next argument will be the name of the archive file (or device) being written. Notice how a filename which contains the current date is derived, simply by enclosing the ``date'' command between two back-quote characters. A common naming convention is to add a `` tar '' suffix for non-compressed archives, and a `` tar.gz '' suffix for compressed ones.

The `` --directory '' option tells tar to first switch to the following directory path (the `` / '' directory in this example) prior to starting the backup. The `` --exclude '' options tell tar not to bother backing up the specified directories or files. Finally, the `` . '' character tells tar that it should back up everything in the current directory.

Note: It is important to realize that the options to tar are cAsE-sEnSiTiVe! In addition, most of the options can be specified as either single mnemonic characters (eg. ``f''), or by their easier-to-memorize full option names (eg. ``file''). The mnemonic representations are identified by prefixing them with a ``-'' character, while the full names are prefixed with two such characters. Again, see the "man" pages for information on using tar.

Another example, this time writing only the specified file systems (as opposed to writing them all with exceptions as demonstrated in the example above) onto a SCSI tape drive follows:

tar -cvpf /dev/nst0 --label="Backup set created on `date '+%d-%B-%Y'`." \
    --directory / --exclude=var/spool/ etc home usr/local var/spool

In the above command, notice that the `` z '' (compress) option is not used. I strongly recommend against writing compressed data to tape, because if data on a portion of the tape becomes corrupted, you will lose your entire backup set! However, archive files stored without compression have a very high recoverability for non-affected files, even if portions of the tape archive are corrupted.

Because the tape drive is a character device, it is not possible to specify an actual file name. Therefore, the file name used as an argument to tar is simply the name of the device, `` /dev/nst0 '', the first tape device on the SCSI bus.

Note: The `` /dev/nst0 '' device does not rewind after the backup set is written; therefore it is possible to write multiple sets on one tape. (You may also refer to the device as `` /dev/st0 '', in which case the tape is automatically rewound after the backup set is written.)

Since we aren't able to specify a filename for the backup set, the `` --label '' option can be used to write some information about the backup set into the archive file itself.

Finally, only the files contained in the `` /etc/ '', `` /home/ '', `` /usr/local '', and `` /var/spool/ '' (with the exception of Squid's cache data files) are written to the tape.

When working with tapes, you can use the following commands to rewind, and then eject your tape:

mt -f /dev/nst0 rewind

mt -f /dev/nst0 offline

Tip: You will notice that leading `` / '' (slash) characters are stripped by tar when an archive file is created. This is tar's default mode of operation, and it is intended to protect you from overwriting critical files with older versions of those files, should you mistakenly recover the wrong file(s) in a restore operation. If you really dislike this behavior (remember, it's a feature !) you can specify the `` --absolute-names '' option to tar, which will preserve the leading slashes. However, I don't recommend doing so, as it is Dangerous !

[Feb 20, 2017] Stupid tar Tricks

Aug 26, 2010 |

One of the most common programs on Linux systems for packaging files is the venerable tar. tar is short for tape archive, and originally, it would archive your files to a tape device. Now, you're more likely to use a file to make your archive. To use a tarfile, use the command-line option -f . To create a new tarfile, use the command-line option -c. To extract files from a tarfile, use the option -x. You also can compress the resulting tarfile via two methods. To use bzip2, use the -j option, or for gzip, use the -z option.

Instead of using a tarfile, you can output your tarfile to stdout or input your tarfile from stdin by using a hyphen (-). With these options, you can tar up a directory and all of its subdirectories by using:

tar cf archive.tar dir

Then, extract it in another directory with:

tar xf archive.tar

When creating a tarfile, you can assign a volume name with the option -V . You can move an entire directory structure with tar by executing:

tar cf - dir1 | (cd dir2; tar xf -)

You can go even farther and move an entire directory structure over the network by executing:

tar cf - dir1 | ssh remote_host "( cd /path/to/dir2; tar xf - )"

GNU tar includes an option that lets you skip the cd part, -C /path/to/dest. You also can interact with tarfiles over the network by including a host part to the tarfile name. For example:

tar cvf username@remotehost:/path/to/dest/archive.tar dir1

This is done by using rsh as the communication mechanism. If you want to use something else, like ssh, use the command-line option --rsh-command CMD. Sometimes, you also may need to give the path to the rmt executable on the remote host. On some hosts, it won't be in the default location /usr/sbin/rmt. So, all together, this would look like:

tar -c -v --rsh-command ssh --rmt-command /sbin/rmt \
    -f username@host:/path/to/dest/archive.tar dir1

Although tar originally used to write its archive to a tape drive, it can be used to write to any device. For example, if you want to get a dump of your current filesystem to a secondary hard drive, use:

tar -cvzf /dev/hdd /

Of course, you need to run the above command as root. If you are writing your tarfile to a device that is too small, you can tell tar to do a multivolume archive with the -M option. For those of you who are old enough to remember floppy disks, you can back up your home directory to a series of floppy disks by executing:

tar -cvMf /dev/fd0 $HOME

If you are doing backups, you may want to preserve the file permissions. You can do this with the -p option. If you have symlinked files on your filesystem, you can dereference the symlinks with the -h option. This tells tar actually to dump the file that the symlink points to, not just the symlink.
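The -h behaviour is easy to verify with a couple of scratch files:

```shell
work=$(mktemp -d) && cd "$work"
echo data > real.txt
ln -s real.txt link.txt
tar cf plain.tar link.txt     # stores the symlink itself
tar chf deref.tar link.txt    # -h: stores the file it points to
mkdir p d
tar xf plain.tar -C p && tar xf deref.tar -C d
[ -L p/link.txt ] && echo "plain: symlink"
[ -f d/link.txt ] && [ ! -L d/link.txt ] && echo "deref: regular file"
```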

Along the same lines, if you have several filesystems mounted, you can tell tar to stick to only one filesystem with the option -l (spelled --one-file-system in current GNU tar, where -l now means --check-links). Hopefully, this gives you lots of ideas for ways to archive your files.

[Feb 04, 2017] How do I fix mess created by accidentally untarred files in the current dir, aka tar bomb

In such cases the UID of the file is often different from uid of "legitimate" files in polluted directories and you probably can use this fact for quick elimination of the tar bomb, But the idea of using the list of files from the tar bomb to eliminate offending files also works if you observe some precautions -- some directories that were created can have the same names as existing directories. Never do rm in -exec or via xargs without testing.
Notable quotes:
"... You don't want to just rm -r everything that tar tf tells you, since it might include directories that were not empty before unpacking! ..."
"... Another nice trick by @glennjackman, which preserves the order of files, starting from the deepest ones. Again, remove echo when done. ..."
"... One other thing: you may need to use the tar option --numeric-owner if the user names and/or group names in the tar listing make the names start in an unpredictable column. ..."
"... That kind of (antisocial) archive is called a tar bomb because of what it does. Once one of these "explodes" on you, the solutions in the other answers are way better than what I would have suggested. ..."
"... The easiest (laziest) way to do that is to always unpack a tar archive into an empty directory. ..."
"... The t option also comes in handy if you want to inspect the contents of an archive just to see if it has something you're looking for in it. If it does, you can, optionally, just extract the file(s) you want. ..."
Feb 04, 2017 |

linux - Undo tar file extraction mess - Super User

first try to issue

tar tf archive
tar will list the contents line by line.

This can be piped to xargs directly, but beware : do the deletion very carefully. You don't want to just rm -r everything that tar tf tells you, since it might include directories that were not empty before unpacking!

You could do

tar tf archive.tar | xargs -d'\n' rm -v
tar tf archive.tar | sort -r | xargs -d'\n' rmdir -v

to first remove all files that were in the archive, and then the directories that are left empty.

sort -r (glennjackman suggested tac instead of sort -r in the comments to the accepted answer, which also works since tar 's output is regular enough) is needed to delete the deepest directories first; otherwise a case where dir1 contains a single empty directory dir2 will leave dir1 after the rmdir pass, since it was not empty before dir2 was removed.
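The whole procedure can be rehearsed on a throwaway tar bomb before you trust it on real data (all names below are invented; the || true guards let the deliberately failing rm/rmdir calls pass):

```shell
work=$(mktemp -d) && cd "$work"
mkdir -p pre && echo keep > pre/mine        # pre-existing data to protect
mkdir -p bombsrc/dir1 && touch bombsrc/file1 bombsrc/dir1/file2
(cd bombsrc && tar cf ../bomb.tar .)
tar xf bomb.tar                             # the accident: unpacks into cwd
tar tf bomb.tar | xargs -d'\n' rm -f 2>/dev/null || true           # files first
tar tf bomb.tar | sort -r | xargs -d'\n' rmdir 2>/dev/null || true # then dirs
ls   # bomb.tar bombsrc pre -- the bomb's droppings are gone, pre/ survives
```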

This will generate a lot of

rm: cannot remove `dir/': Is a directory


rmdir: failed to remove `dir/': Directory not empty
rmdir: failed to remove `file': Not a directory

Shut this up with 2>/dev/null if it annoys you, but I'd prefer to keep as much information on the process as possible.

And don't do it until you are sure that you match the right files. And perhaps try rm -i to confirm everything. And have backups, eat your breakfast, brush your teeth, etc.


List the contents of the tar file like so:

tar tzf myarchive.tar.gz

Then, delete those file names by iterating over that list:

while IFS= read -r file; do echo "$file"; done < <(tar tzf myarchive.tar.gz)

This will still just list the files that would be deleted. Replace echo with rm if you're really sure these are the ones you want to remove. And maybe make a backup to be sure.

In a second pass, remove the directories that are left over:

while IFS= read -r file; do rmdir "$file"; done < <(tar tzf myarchive.tar.gz)

This prevents directories from being deleted if they already existed before the extraction.

Another nice trick by @glennjackman, which preserves the order of files, starting from the deepest ones. Again, remove echo when done.

tar tvf myarchive.tar | tac | xargs -d'\n' echo rm

This could then be followed by the normal rmdir cleanup.
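Putting the two passes together on a throwaway archive (names are made up; GNU xargs, tac and coreutils assumed, error noise discarded):

```shell
# Build a throwaway archive containing a directory with one file.
mkdir -p pkg/sub && echo data > pkg/sub/file.txt
tar -cf myarchive.tar pkg
rm -rf pkg && tar -xf myarchive.tar    # simulate unpacking the archive

# Pass 1: remove the files; pass 2: remove the now-empty directories.
# tac reverses tar's listing so the deepest entries come first.
tar tf myarchive.tar | tac | xargs -d'\n' rm -f 2>/dev/null
tar tf myarchive.tar | tac | xargs -d'\n' rmdir 2>/dev/null
```

After the second pass both pkg/sub and pkg are gone, while anything the archive did not create is left alone.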

Here's a possibility that will take the extracted files and move them to a subdirectory, cleaning up your main folder.
    #!/usr/bin/perl -w

    use strict;
    use Getopt::Long;

    my $clean_folder = "clean";
    my $DRY_RUN;
    die "Usage: $0 [--dry] [--clean=dir-name]\n"
        if ( !GetOptions("dry!"    => \$DRY_RUN,
                         "clean=s" => \$clean_folder) );

    # Protect the 'clean_folder' string from shell substitution
    $clean_folder =~ s/'/'\\''/g;

    # Process the "tar tv" listing and output a shell script.
    print "#!/bin/sh\n" if ( !$DRY_RUN );
    while (<>) {
        chomp;

        # Strip out the permissions string and the directory entry
        # from the 'tar' listing
        my $perms  = substr($_, 0, 10);
        my $dirent = substr($_, 48);

        # Drop entries that are in subdirectories
        next if ( $dirent =~ m:/.: );

        # If we're in "dry run" mode, just list the permissions and the
        # directory entries.
        if ( $DRY_RUN ) {
            print "$perms|$dirent\n";
            next;
        }

        # Emit the shell code to clean up the folder
        $dirent =~ s/'/'\\''/g;
        print "mv -i '$dirent' '$clean_folder'/.\n";
    }

Save this to a file (e.g. fix-tar.pl) and then execute it like this:

 $ tar tvf myarchive.tar | perl fix-tar.pl --dry

This will confirm that your tar list is like mine. You should get output like:

  -rw-rw-r--|batch
  -rw-rw-r--|book-report.png
  -rwx------|CaseReports.png
  -rw-rw-r--|caseTree.png
  -rw-rw-r--|tree.png
  drwxrwxr-x|sample/

If that looks good, then run it again like this:

$ mkdir cleanup
$ tar tvf myarchive.tar | perl fix-tar.pl --clean=cleanup > fixup.sh

The script will be the shell commands that will move the top-level files and directories into a "clean" folder (in this instance, the folder called cleanup). Have a peek through this script to confirm that it's all kosher. If it is, you can now clean up your mess with:

 $ sh fixup.sh

I prefer this kind of cleanup because it doesn't destroy anything that isn't already destroyed by being overwritten by that initial tar xv.

Note: if that initial dry run output doesn't look right, you should be able to fiddle with the numbers in the two substr function calls until they look proper. The $perms variable is used only for the dry run so really only the $dirent substring needs to be proper.

One other thing: you may need to use the tar option --numeric-owner if the user names and/or group names in the tar listing make the names start in an unpredictable column.



That kind of (antisocial) archive is called a tar bomb because of what it does. Once one of these "explodes" on you, the solutions in the other answers are way better than what I would have suggested.

The best "solution", however, is to prevent the problem in the first place.

The easiest (laziest) way to do that is to always unpack a tar archive into an empty directory. If it includes a top level directory, then you just move that to the desired destination. If not, then just rename your working directory (the one that was empty) and move that to the desired location.
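A minimal sketch of that habit (file names are made up):

```shell
# A throwaway archive with files at its top level, i.e. a "tar bomb".
mkdir -p bomb-src && touch bomb-src/a.txt bomb-src/b.txt
tar -cf bomb.tar -C bomb-src a.txt b.txt

# Always extract into a fresh, empty directory first.
mkdir unpack-tmp
tar -xf bomb.tar -C unpack-tmp

# Inspect the result, then move or rename it as desired.
ls unpack-tmp
```

Whatever the archive contains, the mess is confined to unpack-tmp and is trivial to move or throw away.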

If you just want to get it right the first time, you can run tar -tvf archive-file.tar | less and it will list the contents of the archive so you can see how it is structured and then do what is necessary to extract it to the desired location to start with.

The t option also comes in handy if you want to inspect the contents of an archive just to see if it has something you're looking for in it. If it does, you can, optionally, just extract the file(s) you want.

[Nov 06, 2016] Backup and restore using tar

tar -cjpf /backup /bin /etc /home /opt /root /sbin /usr /var /boot

When I include the / directory it also tars the /lib, /sys, /proc and /dev filesystems too (and more, but these seem to be the problem directories).

Although I have never tried to restore the /sys, /proc and /dev directories, I have not seen anyone mention that you can't restore /lib; but when I tried, the server crashed and would not even start the kernel (not even in single-user mode).

Can anyone let me know why this happened and provide a more comprehensive list of directories than the 4 mentioned as to what should and shouldn't be backed up and restored? Or point me to a useful site that might explain why you should or shouldn't backup each one?

There's no point in backing up things like /proc, because that's the dynamic handling of processes and memory working sets (virtual memory).

However, although directories like /lib are problematic to restore on a running system, you would definitely need them in a disaster recovery situation. You would restore /lib to hard disk in single-user or CD-boot mode.

So you need to back up all non-process, non-memory files for the backup to be sufficient to recover from. It doesn't mean, however, that you should attempt to restore them on a running (multi-user) system.

Full Hard-Drive Backup with Linux Tar

[Nov 06, 2016] GNU tar 1.29 6.1 Choosing and Naming Archive Files

The `-C' option allows you to avoid using subshells:

$ tar -C sourcedir -cf - . | tar -C targetdir -xpf -
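For comparison, this is the subshell form that -C replaces (same effect, more typing; directory names are just examples):

```shell
# Demo tree to copy.
mkdir -p sourcedir targetdir && echo data > sourcedir/f.txt

# Equivalent copy using subshells instead of -C:
(cd sourcedir && tar cf - .) | (cd targetdir && tar xpf -)
```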

[Nov 06, 2016] How to restore a backup from a tgz file in linux

Antonio Alimba Jun 9 '14 at 13:01

How can I restore from a backup.tgz file generated on another Linux server onto my own server? I tried the following command:
tar xvpfz backup.tgz -C /

The above command worked, but it replaced the existing system files, which stopped my Linux server from working properly.

How can I restore without running into trouble?

You can use the --skip-old-files option to tell tar not to overwrite existing files.
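A minimal demonstration (GNU tar 1.28 or later; all names are made up) showing that an existing file survives the restore:

```shell
# An archive containing conf.txt with "new" content:
echo new > conf.txt && tar -czf backup-demo.tgz conf.txt && rm conf.txt

# A destination that already has its own conf.txt:
mkdir -p stage && echo old > stage/conf.txt

# --skip-old-files silently leaves the existing file alone:
tar -xzf backup-demo.tgz --skip-old-files -C stage
cat stage/conf.txt    # still contains "old"
```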

You could still run into problems with the backup files if the software versions differ between the two servers. Some data file structure changes might have happened, and things might stop working.

A more refined backup process should be developed.

Use of tar for system backup

# save everything except /mnt and /proc.

time tar cpPzf $TARBALL --directory=/ --one-file-system --xattrs \
    --exclude=/mnt --exclude=/proc


Warning: the argument of --exclude is actually treated as a shell globbing pattern, not a literal path.

10 quick tar command examples to create-extract archives in Linux

Extract tar.bz2/bzip archives

Files with the bz2 extension are compressed with the bzip2 algorithm, and the tar command can deal with them as well. Use the j option instead of the z option.

$ tar -xvjf archivefile.tar.bz2

2. Extract files to a specific directory or path

To extract the files to a specific directory, specify the path using the "-C" option. Note that it's a capital C.

$ tar -xvzf abc.tar.gz -C /opt/folder/

However, first make sure that the destination directory exists, since tar is not going to create the directory for you and will fail if it does not exist.
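If you are scripting this, mkdir -p makes the step safe to repeat; a quick sketch with made-up names:

```shell
# Demo archive:
echo hello > x.txt && tar -czf abc-demo.tar.gz x.txt && rm x.txt

# Ensure the target directory exists, then extract into it:
mkdir -p opt-demo/folder && tar -xzf abc-demo.tar.gz -C opt-demo/folder/
```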

3. Extract a single file

To extract a single file out of an archive, just add the file name after the command, like this:

$ tar -xz -f abc.tar.gz "./new/abc.txt"

More than one file can be specified in the above command, like this:

$ tar -xv -f abc.tar.gz "./new/cde.txt" "./new/abc.txt"

4. Extract multiple files using wildcards

Wildcards can be used to extract a bunch of files matching the given pattern, for example all files with the ".txt" extension.

$ tar -xv -f abc.tar.gz --wildcards "*.txt"

tardiff - an archive patching utility

tardiff.tar.gz v2.1.4 (41,334 bytes)

"tardiff" is a Perl script used to quickly make a tarball of changes between versions of an archive, or between pre- and post-build of an application. There are many, many other possible uses.

More complete documentation is now available here.

Some documentation for applying patches of various sorts is now available here.

linux - How to compare two tarball's content

Stack Overflow

tarsum is almost what you need. Take its output, run it through sort to get the ordering identical on each, and then compare the two with diff. That should get you a basic implementation going, and it would be easily enough to pull those steps into the main program by modifying the Python code to do the whole job.
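Without tarsum, a rough first approximation using only standard tools is to diff the sorted member lists of the two archives. Note that this compares names and paths only, not file contents (archive names below are made up):

```shell
# Two demo archives with identical contents:
mkdir -p cmp-dir && touch cmp-dir/a cmp-dir/b
tar -cf one.tar cmp-dir
tar -cf two.tar cmp-dir

# Sort each listing so member order doesn't matter, then diff:
tar tf one.tar | sort > one.list
tar tf two.tar | sort > two.list
diff one.list two.list && echo "same member lists"
```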

[Sep 04, 2014] Blunders with expansion of tar files whose structure you do not understand

If you try to expand a tar file in some production directory, you can accidentally overwrite and change the ownership of existing directories, and then spend a lot of time restoring the status quo. It is safer to expand such tar files in /tmp first, and only after seeing the results decide whether to copy some directories or re-expand the tar file, this time in the production directory.

[Sep 03, 2014] Doing operation in a wrong directory among several similar directories

Sometimes directories are very similar, for example numbered directories created by some application such as task0001, task0002, ... task0256. In this case you can easily perform an operation on the wrong directory, for example, send tech support a tar file of a directory that contains a production run instead of test data.

TAR tricks & tips by Viktor Balogh

Viktor Balogh's HP-UX blog

This is how to tar a bunch of files and send it over network to another machine over SSH, in one turn:

# cd /etc; tar cf - passwd | ssh hp01a01.w1 "cd /root;tar xf - passwd"

Note that with tar you should always use relative paths; otherwise the files on the target system will be extracted with the full path and the original files will be overwritten. GNU tar also offers some options that allow the user to modify/transform the paths when files are extracted. You can find GNU tar on HP-UX under the name gtar; you can download it from the HP-UX porting center:

# which gtar

If you have a 'tar' archive that was made with absolute paths, use 'pax' to extract it to a different directory:

# pax -r -s '|/tmp/|/opt/|' -f test.tar
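With GNU tar, the same kind of path rewrite can be done with --transform, which takes a sed-style substitution; a small sketch with demo names:

```shell
# Demo archive whose members live under tmp/:
mkdir -p xform-src/tmp && echo v > xform-src/tmp/file
tar -cf test-demo.tar -C xform-src tmp/file

# Rewrite tmp/ to opt/ while extracting:
mkdir -p out
tar -xf test-demo.tar -C out --transform='s|^tmp/|opt/|'
ls out/opt/file
```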

If you unpack the archive with non-root user privileges, all UIDs and GIDs will be replaced with the UID and GID of that user. Keep that in mind; if you make backups/restores, practically always do them with UID 0.

The use of tar with find isn't apt to work if there are lots of files. Instead use pax(1):

# find . -atime +7 | pax -w | gzip > backup.tgz

[Sep 11, 2012] Tips and Tricks: Splitting tar archives on the fly by Alexander Todorov

Splitting big files into pieces is a common task. Another common task is to create a tar archive, and split it into smaller chunks that can be burned onto CD/DVD. The straightforward approach is to create the archive and then use 'split.' To do this, you will need more free space on your disk. In fact, you'll need space twice the size of the created archive. To avoid this limitation, split the archive as it is being created.

To create a tar archive that splits itself on the fly use the following set of commands:

First create the archive:

tar -czf /dev/stdout $(DIRECTORY_OR_FILE_TO_COMPRESS) | split -d -b $(CHUNK_SIZE_IN_BYTES) - $(FILE_NAME_PREFIX)

To extract the contents:

cat $(FILE_NAME_PREFIX)* >> /dev/stdout | tar -xzf /dev/stdin

The above shown set of commands works on the fly. You don't need additional free space for temporary files.
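A concrete run of the same recipe, with made-up names and deliberately small chunks:

```shell
# Some data to archive (300 KB of incompressible bytes):
mkdir -p mydir
dd if=/dev/urandom of=mydir/blob bs=1024 count=300 2>/dev/null

# Create and split in one pass: 100 KB chunks named mydir.tgz.part00, ...
tar -czf - mydir | split -d -b 100K - mydir.tgz.part

# Reassemble and extract, again without temporary files:
mkdir -p restored
cat mydir.tgz.part* | tar -xzf - -C restored
```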

A few notes about this exercise:

The information provided in this article is for your information only. The origin of this information may be internal or external to Red Hat. While Red Hat attempts to verify the validity of this information before it is posted, Red Hat makes no express or implied claims to its validity.


Isn't it easier to just omit the "f"?
cat $(FILE_NAME_PREFIX)* | tar xz

Alexander Todorov:

You are right. Using /dev/stdin and /dev/stdout is just meant to be more explicit.

  1. Klaus Lichtenwalder says:
    December 14th, 2007 at 2:49 pm

    Just a few nitbits… If you want to use stdin/stdout with tar, it's simply a -
    e.g.: tar cf - . | (cd /elsewhere; tar xf -)

    cat always appends its arguments to stdout, so
    cat $(prefix)* | command
    is sufficient. I don't know and (honestly) don't care if gnu-tar sends its output to stdout if no f argument given, every other unix uses the default tape device (which is /dev/rmt) if no f argument given (I have to work with Solaris and AIX too…).

[Sep 17, 2011] Linux Tape Backup With mt And tar Command Howto

To backup to multiple tape use the following command (backup /home file system):

# tar -clpMzvf /dev/st0 /home

To compare tape backup, enter:
# tar -dlpMzvf /dev/st0 /home

To restore tape in case of data loss or hard disk failure:
# tar -xlpMzvf /dev/st0 /home


GNU tar

If you are looking for a mature tar implementation that is actively maintained you should have a look
at star.

Changes: This release adds support for xz compression (with the --xz option) and reassigns the short option -J as a shortcut for --xz. The option -I is now a shortcut for --use-compress-program,... and the --no-recursive option works with --incremental

Changes: This release adds new options: --lzop, --no-auto-compress, and --no-null. It has compressed format recognition and VCS support (--exclude-vcs). It fixes the --null option and... fixes record size autodetection

Changes: This release has new options: -a (selects a compression algorithm basing on the suffix of the archive file name), --lzma (selects the LZMA compression algorithm), and --hard-dereference,... which dereferences hard links during archive creation and stores the files they refer to (instead of creating the usual hard link members)

Google Answers UNIX Question! tar size constraint.

Posted: 12 Jun 2002 23:47 PDT
Expires: 19 Jun 2002 23:47 PDT
Question ID: 25116

What is the size constraint to "tar" in a UNIX or Linux environment?

Subject: Re: UNIX Question! tar size constraint.
Answered By: philip_lynx-ga on 13 Jun 2002 01:05 PDT

Hi pwharff,

The quick answer is: 2^63-1 for the archive, 68'719'476'735 (8^12-1)
bytes for each file, if your environment permits that.

As I understand your question, you want to know if you can produce tar
files that are bigger than 2 GBytes (and how big you can really make
them). The answer to this question depends on a few simple parameters:

1) Does your operating system support large files?
2) What version of tar are you using?
3) What is the underlying file system?

You can answer question 1) for yourself by verifying that your kernel
supports 64bit file descriptors. For Linux this is the case for
several years now. A quick look in /usr/include/sys/features.h will
tell you, if there is any line containing 'FILE_OFFSET_BITS'. If there
is, your OS very very probably has support for large files.

For Solaris, just check whether 'man largefile' works, or try 'getconf
-a|grep LARGEFILE'. If it works, then you have support for large files
in the operating system. Again, support for large files has been there
for several years.

For other operating systems, try "man -k large file", and see what you
get -- I'll gladly provide help if you need to ask for clarification
of this answer. Something like "cd /usr/include; grep
'FILE_OFFSET_BITS' * */*" should tell you quickly if there is standard
large file support.

2) What version of tar are you using? This is important. Obviously,
older tar programs won't be able to handle files or archives that are
larger than 2^31-1 bytes (2.1 Gigabytes). Try running 'tar --version'.
If the first line indicates you are using gnu tar, then any version
newer than 1.12.64 will in principle be able to provide you with large
files. Try to run this command: "strings `which tar`|grep 64", and you
should see some lines saying lseek64, creat64, fopen64. If yes, your
tar contains support for large files.

If your tar program does not contain support for large files (most
really do, but maybe you are working on a machine older than 1998?),
you can download the newest gnu tar from
and compile it for yourself.

The size of files you put into a tar archive (not the archive itself)
is limited to 12 octal digits; the max. size of a single file is thus
ca. 68 GBytes.

3) Given that both your operating system (and C library), and tar
application support large files, the only really limiting factor is
the file system that you try to create the file in. The theoretical
limit for the tar archive size is 2^63-1 (9'223'372 Terabytes), but
you will reach more practical limits (disk or tape size) much quicker.
Also take into consideration what the file system is. DOS FAT 12
filesystems don't allow files as big as the Linux EXT2, or Sun UFS
file systems.

If you need more precise data (for a specific file system type, or for
the OS, etc.) please do not hesitate to ask for clarification.

I hope my answer is helpful to you,


ftp tar file size limit

Linux Forums

Just Joined!

Join Date: Sep 2006

Posts: 2

ftp tar file size limit?

I am trying to back up my Linux box to my Windows box's hard drive. To do this I am using the Knoppix distro to boot my Linux box. Then I am tarring and ftping every file and sending it to my Windows box through ftp. (I wanted to tar the files first, so I can preserve permissions.) On my Windows XP box I am running FileZilla's FTP server, and I am transferring to an external 320 GB NTFS-formatted hard drive attached to it through USB. I don't have enough space left on my Linux box to tar everything and then transfer, so I am using the following commands:

ftp 21
put |"tar -cvlO *.*" stuff.tar

It always stops transferring just before 2 GB (1,972,460 KB), and the file should be 20 GB or so. What am I doing wrong? Is there some file size limit that I don't know of for ftp or tar? The NTFS file system should allow bigger files from what I have read. I couldn't find any limit for FileZilla. Is this the right place to ask?

Thanks Marsolin

Linux Newbie

Join Date: Aug 2006

Posts: 222

I believe NTFS has a 2GB file limitation unless you are running a storage driver with 44-bit LBA support.



Just Joined!

Join Date: Sep 2006

Posts: 2

Everywhere I have read, the NTFS limit is in the tens of terabytes range. I have some files that are bigger than that now.


tar file size limit
Generally, tar can't handle files larger than 2GB. I suggest using an alternative to tar, 'star'. A more comprehensive answer is available here:

By the looks of it, gnu tar versions newer than 1.12.64 can handle large files but I can't confirm this.



Alex Stan

Join Date: Aug 2006

Location: Hamilton, Ontario

I have a similar problem with big files:

I have a 2.2 GB file on a Linux computer, and I mounted Shared Documents (smbfs) from another (Windows) computer. When I try to copy the file, it stops at 2 GB. I even tried moving the file into Apache so I could download it, but Apache won't let me.

I can't archive it either.

Is there any way to move that file?

I like linux!


If you are using smbclient, then follow the kbase article.

Subodh Bhagat

Backing up Files with Tar

One last thing about creating archives with tar: tar was designed to back up everything in the specified directory. This means that every single file and subdirectory that exists beneath the specified directory will be backed up. It is possible to specify which files you don't want backed up using the X switch.

Let's say I want to back up everything in the www directory except for the apache2 and zope subdirectories. In order to use the X switch, I have to create a file containing the names of the files I wish to exclude. I've found that if you try to create this file using a text editor, it doesn't always work. However, if you create the file using echo, it does. So I'll make a file called exclude:

echo apache2 > exclude
echo zope >> exclude

Here, I used the echo command to redirect (>) the word apache2 to a new file called exclude. I then asked it to append (>>) the word zope to that same file. If I had forgotten to use two >'s, I would have overwritten the word apache2 with the word zope.

Now that I have a file to use with the X switch, I can make that backup:

tar cvfX backup.tar exclude www

This is the first backup I've demonstrated where the order of the switches is important. I need to tell tar that the f switch belongs with the word backup.tar and the X switch belongs with the word exclude. So if I decide to place the f switch before the X switch, I need to have the word backup.tar before the word exclude.

This command will also work as the right switch is still associated with the right word:

tar cvXf exclude backup.tar www

But this command would not work the way I want it to:

tar cvfX exclude backup.tar www
tar: can't open backup.tar : No such file or directory

Here you'll note that the X switch told tar to look for a file called backup.tar to tell it which files to exclude, which isn't what I meant to tell tar.

Let's return to the command that did work. To test that it didn't back up the file called apache2, I used grep to sort through tar's listing:

tar tf backup.tar | grep apache2

Since I just received my prompt back, I know my exclude file worked. It is interesting to note that since apache2 was really a subdirectory of www, all of the files in the apache2 subdirectory were also excluded from the backup. I then tested to see if the zope subdirectory was also excluded in the backup:

tar tf backup.tar | grep zope
<output snipped>

This time I got some information back, as there were other subdirectories that started with the term "zope," but the subdirectory that was just called zope was excluded from the backup.

Now that we know how to make backups, let's see how we can restore data from a backup. Remember from last week the difference between a relative and an absolute pathname, as this has an impact when you are restoring data. Relative pathnames are considered a good thing in a backup. Fortunately, the tar utility that comes with your FreeBSD system strips the leading slash, so it will always use a relative pathname -- unless you specifically override this default by using the P switch.

It's always a good idea to do a listing of the data in an archive before you try to restore it, especially if you receive a tar archive from someone else. You want to make sure that the listed files do not begin with "/", as that indicates an absolute pathname. I'll check the first few lines in my backup:

tar tf backup.tar | head

None of these files begin with a "/", so I'll be able to restore this backup anywhere I would like. I'll practice a restore by making a directory I'll call testing, and then I'll restore the entire backup to that directory:

mkdir testing
cd testing
tar xvf ~test/backup.tar 

You'll note that I cd'ed into the directory to contain the restored files, then told tar to restore or extract the entire backup.tar file using the x switch. Once the restore was complete, I did a listing of the testing directory:


I then did a listing of that new www directory and saw that I had successfully restored the entire www directory structure, including all of its subdirectories and files.

It's also possible to just restore a specific file from the archive. Let's say I only need to restore one file from the www/chimera directory. First, I'll need to know the name of the file, so I'll get a listing from tar and use grep to search for the files in the chimera subdirectory:

tar tf backup.tar | grep chimera

I'd like to just restore the file www/chimera/Makefile, and I'd like to restore it to the home directory of the user named genisis. First, I'll cd to the directory to which I want that file restored, and then I'll tell tar just to restore that one file:

cd ~genisis
tar xvf ~test/backup.tar www/chimera/Makefile

You'll note some interesting things if you try this at home. When I did a listing of genisis' home directory, I didn't see a file called Makefile, but I did see a directory called www. This directory contained a subdirectory called chimera, which contained a file called Makefile. Remember, when you make an archive, you are including a directory structure, and when you restore from an archive, you recreate that directory structure.
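If you would rather have the single file land directly in the current directory, GNU tar (and modern BSD tar) offers --strip-components, which drops the leading path elements on extraction. A sketch with demo names:

```shell
# Demo archive with a nested Makefile:
mkdir -p www-demo/chimera && echo 'all:' > www-demo/chimera/Makefile
tar -cf backup-demo2.tar www-demo

# Drop the two leading directories on extraction:
mkdir -p flat
tar -xf backup-demo2.tar -C flat --strip-components=2 \
    www-demo/chimera/Makefile
ls flat    # just Makefile, no www-demo/chimera wrapper
```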

You'll also note that the original ownership, permissions, and file creation time were also restored with that file:

ls -l ~genisis/www/chimera/Makefile
-rw-r--r--  1 test  wheel  406 May 11 09:52 www/chimera/Makefile

That should get you started with using the tar utility. In next week's article, I'll continue with some of the interesting options that can be used with tar, and then I'll introduce the cpio archiver.

Backup using tar command in linux

The tar program is an archiving program designed to store and extract files from an archive file known as a tarball. A tarball may be made on a tape drive; however, it is also common to write a tarball to a normal file.

If you want to know more options about tar click here

Making backups with tar

A full backup can easily be made with tar:

# tar --create --file /dev/ftape /usr/src
tar: Removing leading / from absolute path names in the archive

The example above uses the GNU version of tar and its long option names. The traditional version of tar only understands single character options. The GNU version can also handle backups that don't fit on one tape or floppy, and also very long paths; not all traditional versions can do these things. (Linux only uses GNU tar.)

If your backup doesn't fit on one tape, you need to use the --multi-volume (-M) option:

# tar -cMf /dev/fd0H1440 /usr/src
tar: Removing leading / from absolute path names in the archive Prepare volume #2 for /dev/fd0H1440 and hit return:

Note that you should format the floppies before you begin the backup, or else use another window or virtual terminal and do it when tar asks for a new floppy.

After you've made a backup, you should check that it is OK, using the --compare (-d) option:

# tar --compare --verbose -f /dev/ftape

Failing to check a backup means that you will not notice that your backups aren't working until after you've lost the original data.

An incremental backup can be done with tar using the --newer (-N) option:

# tar --create --newer '8 Sep 1995' --file /dev/ftape /usr/src --verbose
tar: Removing leading / from absolute path names in the archive

Unfortunately, tar can't notice when a file's inode information has changed, for example, that its permission bits have been changed, or when its name has been changed. This can be worked around using find and comparing current filesystem state with lists of files that have been previously backed up. Scripts and programs for doing this can be found on Linux ftp sites.
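A sketch of that find-based approach, archiving only files modified within the last 7 days (GNU find and GNU tar assumed; the paths are made up):

```shell
# Demo tree with one recently modified file:
mkdir -p srcdir && echo 'int x;' > srcdir/recent.c

# NUL-separated list from find, consumed by tar via -T -:
find srcdir -type f -mtime -7 -print0 |
    tar --null -T - -czf incremental.tar.gz
tar tzf incremental.tar.gz
```

The --null/-print0 pairing keeps file names with spaces or newlines intact; --null must appear before -T to take effect.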

12.4.2. Restoring files with tar
The --extract (-x) option for tar extracts files:

# tar --extract --same-permissions --verbose --file /dev/fd0H1440

You can also extract only specific files or directories (which includes all their files and subdirectories) by naming them on the command line:

# tar xpvf /dev/fd0H1440

Use the --list (-t) option, if you just want to see what files are on a backup volume:

# tar --list --file /dev/fd0H1440

Note that tar always reads the backup volume sequentially, so for large volumes it is rather slow. It is not possible, however, to use random access database techniques when using a tape drive or some other sequential medium.

tar doesn't handle deleted files properly.

How-to Using Tar (Taring)

By SuperHornet

Ok, well, here is a short listing on how to use the tar command to back up your data.
Tar is solely an archiving app. Tar by itself won't compress files.

But you say "then what is a .tar.gz"

It's a tar file that has been compressed with a different compression utility. The .gz (gzip) extension indicates the compression app used to compress it.

Here is tar in its simplest form

tar -cvf filename.tar /path/to/files

You should see the filename.tar file in whatever directory you ran tar from.

You say "But I want to make the tarball compressed"

Well then -z is the option you want to include in your syntax

tar -zvcf filename.tar.gz /path/to/files

#notice I had to add the .gz extension.
-Z (no, not -z) will run it through the old compress app.

Now, when I make a tarball I like to keep the full paths the files are in.
For this, use the -P (absolute paths) option:

tar -zPvcf filename.tar.gz /path/to/file 

When I extract it I will see a new directory called /path
and under that I will see the "to" directory, and the "file" is under "to"

Now you say "I want to back up ALL my files in my home directory EXCEPT the temp directory I use". No problem.

tar -zPvcf myhomebackup.tar.gz --exclude /home/erik/temp /home/erik

The --exclude option gives you this; just slip it in between the tar filename and the path you're going to back up. This will exclude the whole temp directory.

You say "Ok, this tar thing is pretty cool, but I want to back up only single files from all around the drive."

No problem, this requires a bit more work, but hey this is UNIX, get used to it.

Make a file called locations (call it anything you like). In locations, place the full path to each file you want to back up on a new line. Please be aware that you have to have read rights to the files you are going to back up.
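A here-document is a handy way to create such a file in one step (the paths below are just examples):

```shell
# One full path per line, no globbing needed:
cat > locations <<'EOF'
/etc/hostname
/etc/hosts
EOF
```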


Now with the -T option I can tell it to use the locations file.

tar -zPvcf backup.tar.gz -T locations

Now if you want to backup the whole drive. Then you will have to exclude lots of files like /var/log/* and /usr/local/named/*

Using the -X option you can create an exclude file just like the locations file.

tar -zPvcf fullbackup.tar.gz -X /path/to/excludefile -T /path/to/locationsfile

Now a month has gone by and you need to update your myhomebackup.tar.gz with new or changed files.

This requires an extra step (quit your bitching, I already told you why).
You have to uncompress it first, but not untar it.

gunzip /path/to/myhomebackup.tar.gz

This will leave you with myhomebackup.tar, missing the .gz.
Now we can update your tarball with -u and then we are going to compress it again.

tar -Puvf myhomebackup.tar /home/erik && gzip myhomebackup.tar

It will add the .gz for you.

Tar is a pretty old app and has lots of features.
I suggest reading the man pages to get a list of all the options.

I have included a little perl script that I made so I can run it as a cron job every night and get a full backup each time.
It wouldn't be that hard to update the tarball but I just like full backups.
Feel free to use it.

If you want to extract the tarball that is compressed

tar -zxvf filename.tar.gz

-x extract

If it is not compressed then

tar -xvf filename.tar

#Created by Erik Mathis 7/02

#Change these paths to fit your needs.
my $filename="/home/sysbkup/backup";
my $exclude="/home/erik/exclude";
my $data="/home/erik/locations";
my $tar="\.tar";
my $gz="\.gz";

my $file = $filename . $tar . $gz;

system ("tar -Pzcvf $file -X $exclude -T $data");

Re Tar question

Sort of answered my own question. I downloaded and install star:

an enhanced version of tar that includes a name-modification option:

-s replstr
Modify file or archive member names named by a pattern
according to the substitution expression replstr. The
format of replstr is:

-s /old/new/[gp]


Re Tar question -- star is recommended until GNU Tar 1.14 is

On Thu, 2004-08-05 at 22:58, Erich Schroeder wrote:
> Sort of answered my own question. I downloaded and install star:
> an enhanced version of tar

It's not really an "enhanced version of tar" but a more _POSIX_
compliant version.   That's why it has been a part of Fedora Core (FC)
since version 0.8* and _recommended_over_ GNU Tar 1.13.

Understand that cpio, tar and, their new replacement, pax, just write
what is known as "ustar" format.  The latest IEEE POSIX 2001 and X/Open
Single Unix Specification (SUS) 3 from the "Austin Group" defines a lot
of new functionality that really makes up for lack of capability in the
older 1988 and subsequent releases until the late '90s drafts.

This includes overcoming POSIX 1988+ path/naming limitations, as well as
newer POSIX 2001 capabilities like storing POSIX EA/ACLs.

In the meanwhile, the GNU maintainers decided to release their own
extensions that are not compliant.  It was a necessary evil, but now
that the POSIX/SUS standard has been updated, it's time for GNU to come
around.  The current GNU Tar 1.14 alpha adds these capabilities.

star actually had EA/ACLs support on Solaris** _before_ the POSIX
standardization, so adopting it for POSIX 2001 / SUS 3 ustar meta-data
format was easy.

Unfortunately POSIX 2001 / SUS 3 still does _not_ address the issue of
compression.  I hate the idea of block compressing the entire archive,
which renders it largely unrecoverable after a single byte error (at
least with LZ77/gzip or LZO/lzop -- BWT/bzip2 may be better at recovery
though).  That's my "beef" with the whole ustar format in general.
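The fragility is easy to demonstrate: damage a few bytes in the middle of a block-compressed tarball and the whole stream fails its integrity check (a sketch with made-up paths; the seek offset is arbitrary but lands inside the compressed data for this input size):

```shell
# Made-up paths; the seek offset (5000) is arbitrary but safely inside
# the compressed stream for this input size.
set -e
demo=/tmp/tar_corrupt_demo
rm -rf "$demo" && mkdir -p "$demo"
seq 1 20000 > "$demo/payload"
tar -C "$demo" -czf "$demo/a.tar.gz" payload

# damage four bytes in the middle of the gzip stream
printf 'XXXX' | dd of="$demo/a.tar.gz" bs=1 seek=5000 count=4 conv=notrunc 2>/dev/null

# the integrity test now fails: everything after the bad bytes is suspect
if gunzip -t "$demo/a.tar.gz" 2>/dev/null; then
    echo "stream survived (unexpected)"
else
    echo "stream corrupt"
fi
```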

I would have really liked a flexible per-file compression meta-data tag
in the standard.  Until then, we have aging cpio replacements like afio.

-- Bryan

*NOTE:  This is the actual "disttag" versioning (i.e., technical
reasons) for pre-Fedora Core "community Linux" releases from Red Hat
that are now recommended for Fedora Legacy support (i.e., FC 0.8 is fka
"RHL" 8), in addition to any relevant trademark (i.e., non-technical) reasons.

**NOTE:   legacy star used Sun's tar approach -- an ACL "attribute file"
preceding the data file, but using the same name.  That way if the tar
program extracting it was "Sun EA/ACL aware," it would read it, but if
not, it would just overwrite the attribute file with the actual file when
extracted.  Quite an ingenious approach.

Engineers scoff at me because I have IT certifications
 IT Pros scoff at me because I am a degreed engineer
    I see and understand both of their viewpoints
  Unfortunately Engineers and IT Pros only see in me
       what they dislike about the other trade
Bryan J. Smith            

Incremental Tar

Martin Maney
Sun Jun 8 21:25:09 CDT 2003

On Sun, Jun 08, 2003 at 05:31:10PM -0500, Patrick R. White wrote:
> So isn't this a good reason to use the dump/restore utilities to begin
> with?

Maybe, but dump/restore is no panacea. Back in 1991, at LISA V, Elizabeth Zwicky of SRI presented a fascinating paper comparing the performance and problems of the then-extant versions of tar, cpio, pax, and afio, as well as dump.

dump did well on most of the tests, but by its design it is capable of really frightening errors if the filesystem is not quiesced (in practice, unmounted or mounted r/o would appear to be necessary) during dump. Also worth noting is that dump is quite filesystem-specific, and I seem to recall hearing that the ext2 version was interestingly broken a while ago. Since I don't employ dump, I can't tell you any more than that, sorry.

The only link I can find to the paper is from here:

There's a postscript file and jpegs of the printed document. I thought there used to be something less cumbersome, but Google isn't finding it for me. It did find a good number of now-dead links, though. :-(

Ah, "zwicky backup torture" is a better search key. Still mostly passing mentions of this seminal work. Here's a more recent survey paper about *nix backup techniques:

Here's another useful compendium that seems to be currently maintained

OTOH, "cpio: Some Linux folks appear to use this" seems... odd.

Ah, google-diving!


A delicate balance is necessary between sticking with the things you know and can rely upon, and exploring things which have the potential to be better. Assuming that either of these strategies is the one true way is silly. -- Graydon Hoare

tar minor POSIX incompliance

>From: Paul Eggert <>

>> From: Joey Hess <>
>> Date: Mon, 25 Mar 2002 14:57:20 -0500
>> According to the test suite documentation, POSIX 10.1.1-12(A) says
>> that Fields mode, uid, gid, size, mtime, chksum, devmajor and
>> devminor are leading zero-filled octal numbers in ASCII and are
>> terminated by one or more space or null characters.

>OK, I'll change the behavior of GNU "tar" in a future release.

I am not sure what the text from Joey Hess should be related to...
... his mail did not reach this group.

From looking at the archives created by GNUtar, I see the following deviations:

-	Checksum field repeats a bug found in ancient TAR implementations.
	This seems to be a rudiment from early tests done by John Gilmore
	in PD tar, where he tried to run "cmp" on PD-tar vs. Sun-tar

	This is a minor deviation and easy to fix.

-	The devmajor/devminor fields are missing if the file is not
	a block/char device - here we see non left zero filled fields.

	A minor deviation that is easy to fix.

-	The Magic Version field contains spaces instead of "00".

	This is just a proof that GNUtar is not POSIX.1-1990 compliant
	and should not be changed before GNUtar has been validated to
	create POSIX.1 compliant archives.
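The magic/version bytes in question sit at offset 257 in the tar header and can be inspected directly; with a recent GNU tar that supports --format, the two layouts can be compared (paths are hypothetical):

```shell
# Hypothetical paths; --format requires a reasonably recent GNU tar.
set -e
demo=/tmp/tar_magic_demo
rm -rf "$demo" && mkdir -p "$demo"
echo x > "$demo/f"

tar --format=ustar  -C "$demo" -cf "$demo/posix.tar" f
tar --format=oldgnu -C "$demo" -cf "$demo/gnu.tar"   f

# magic (offset 257, 6 bytes) and version (offset 263, 2 bytes)
dd if="$demo/posix.tar" bs=1 skip=257 count=8 2>/dev/null | od -c | head -n 1
# POSIX ustar: u s t a r \0 followed by version "00"
dd if="$demo/gnu.tar" bs=1 skip=257 count=8 2>/dev/null | od -c | head -n 1
# old GNU: u s t a r followed by two spaces and \0 -- spaces where POSIX puts "00"
```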


>conformance by running the "tar" command.  A POSIX test suite should
>invoke the "pax" command instead.

While this is the correct answer in theory, you should take into account
that "pax" has not been accepted by a large part of the community.

AFAIK, LSB intends to be UNIX-98 compliant, so it would make sense to support
cpio/pax/tar in a way compliant to the SUSv2 document.

Let me comment on the current Linux status. We have:

-	GNUcpio which is neither POSIX.1-1990 compliant nor able to
	archive files >= 2 GB.

	For a list of problems look into:

-	GNUtar which is not POSIX compliant either but supports files >= 2 GB.

	Problems with archive exchange with POSIX compliant platforms:

	-	does not handle long filenames in a POSIX compliant way.
		This has become better with recent alpha releases, but
		gnutar -tvf archive still does not work at all.
		Archives containing long filenames and created with gtar
		cannot be read by POSIX (only) tar implementations correctly.

	-	Is for some unknown reason unable to list archives created with other
		TAR implementations (e.g. Sun's tar on Solaris or star).
		For an example look into:

-	Pax (the version fixed by Thorsten Kukuk) is POSIX.1-1990 compliant
	but it is not able to handle files >= 2 GB.

as part of commercial Linux distributions. From a standpoint of what people
might like to see, this could be better. A year 2002 POSIX OS should include at
least one program that creates POSIX compliant tar archives _and_ supports
large files.

People who get and compile software themselves may also use "star" which is
POSIX.1-1990 and POSIX.1-2001 compliant and supports files >= 2 GB.
So why is star missing from Linux distributions?

>Also, I should mention that GNU tar does not generate POSIX-format
>ustar archives, nor does it claim to.  Volunteers to fix this
>deficiency would be welcome, but that's a different topic.  It is a
>quality-of-implementation issue, and is not strictly a
>POSIX-conformance issue.

There is "star" which is POSIX compliant. A good idea would be to move
gnutar to /bin/gtar on Linux and put star on /bin/star and /bin/tar.
This way, Linux gets a POSIX compliant TAR and users of gnutar will be guaranteed
100% backward compatibility when calling "gtar".

If you don't want to make the transition too fast, here is an idea for an
intermediate step:

Put star on /bin/star, install the star man page for "star" and "tar" and move
the GNUtar man page to "gtar".
Another topic:

From a discussion at CeBIT, I am now aware of the fact that LSB did
"standardise" on the GNUtar options at:

Let me comment on this too:

It seems to be a bad idea to standardize TAR options that are incompatible
with POSIX standards. So let me first introduce a list of incompatible options
found in GNUtar. The complete list is in:

Gnu tar options that (in the single char variant) are incompatible:

BsS	-F, --info-script=FILE		run script at end of each tape (implies -M)
s	-L, --tape-length=NUM		change tape after writing NUM x 1024 bytes
s	-M, --multi-volume		create/list/extract multi-volume archive
s	-O, --to-stdout			extract files to standard output
sS (+)	-P, --absolute-names		don't strip leading `/'s from file names
s	-S, --sparse			handle sparse files efficiently
s	-T, -I, --files-from=NAME	get names to extract or create from file NAME
s	-U, --unlink-first		remove each file prior to extracting over it
s	-V, --label=NAME		create archive with volume name NAME
s	-d, --diff, --compare		find differences between archive and file system
sP	-l, --one-file-system		stay in local file system when creating archive
sP	-o, --old-archive, --portability write a V7 format archive

B	Incompatible with BSD tar
s	Incompatible with star
S	Incompatible with Sun's/SVr4 tar
P	Incompatible with POSIX

+)	This option is the only option where star deviates from other tar
	implementations, but as there is no other nice way to have an option to
	specify that the last record should be partial, and the star option -/
	is easy to remember (as is -P for Partial record), I see no need
	to change star.


Please note that all these incompatibilities are "against" other TAR
implementations that are much older than GNUtar. As an example, let me use the
-M (do not cross mount points) option in star which has been available since 1985.

It looks inappropriate to me to include single char options from GNUtar that are not
found in other tar implementations into something like LSB.

To avoid having LSB systems break POSIX.1-1990 and SUSv2, I would recommend
changing the LSB so that the following
single char options will disappear (the order is the order from the web page):

-A	This option has low importance and there is no need to have a single
	char option for it.

-d	(*) Used by star with different semantics; the short option should not
	    be in the LSB standard.

-F	(*) Used with different semantics by BSD tar for a long time;
	    the short option should not be in the LSB standard.

-G	The short option should not be in the LSB standard.

-g	The short option should not be in the LSB standard.

-K	The short option should not be in the LSB standard.

-l	This option violates the POSIX/SUSv2 semantics, it needs to be removed
	from the LSB standard.

-L	(*) The short option should not be in the LSB standard.

-M	(*) The short option should not be in the LSB standard.

-N	The short option should not be in the LSB standard.

-o	This option violates the POSIX/SUSv2 semantics, it needs to be removed
	from the LSB standard.

-O	(*) The short option should not be in the LSB standard.

-P	(*) The short option should not be in the LSB standard.

-R	The short option should not be in the LSB standard.

-s	The short option should not be in the LSB standard.

-S	(*) The short option should not be in the LSB standard.

-T	(*) The short option should not be in the LSB standard.

-V	(*) The short option should not be in the LSB standard.

-W	The short option should not be in the LSB standard.

*) Used by one or more other TAR implementations with different semantics
so defining it in LSB creates problems.

Jörg Schilling, D-13353 Berlin		(If you don't have iso-8859-1 chars, I am J"org Schilling)


Recommended Links


Full system backup with tar - ArchWiki

hard drive - How would I use tar for full backup and restore with system on SSD and home on HDD - Ask Ubuntu

How to properly backup your system using TAR


Solaris 9 tar manpage - Solaris tar understands ACLs, but GNU tar doesn't

AIX tar

GNU tar - Table of Contents

(gnu)tar - GNU version of tar archiving utility

Linux and Solaris ACLs - Backup

The Star tape archiver by Jörg Schilling, available at, since version 1.4a07 supports backing up and restoring of POSIX Access Control Lists. For best results, it is recommended to use a recent star-1.5 version. Star is compatible with SUSv2 tar (UNIX-98 tar), understands the GNU tar archive extensions, and can generate pax archives.



This manual page documents the GNU version of tar, an archiving program designed to store and extract files from an archive file known as a tarball. A tarball may be made on a tape drive, however, it is also common to write a tarball to a normal file. The first argument to tar must be one of the options Acdrtux, followed by any optional functions. The final arguments to tar are the names of the files or directories which should be archived. The use of a directory name always implies that the subdirectories below should be included in the archive.


tar -xvf foo.tar
verbosely extract foo.tar
tar -xzf foo.tar.gz
extract gzipped foo.tar.gz
tar -cjf foo.tar.bz2 bar/
create bzipped tar archive of the directory bar called foo.tar.bz2
tar -xjf foo.tar.bz2 -C bar/
extract bzipped foo.tar.bz2 after changing directory to bar
tar -xzf foo.tar.gz blah.txt
extract the file blah.txt from foo.tar.gz

Function Letters

One of the following options must be used:

-A, --catenate, --concatenate
append tar files to an archive
-c, --create
create a new archive
-d, --diff, --compare
find differences between archive and file system
-r, --append
append files to the end of an archive
-t, --list
list the contents of an archive
-u, --update
only append files that are newer than the copy in the archive
-x, --extract, --get
extract files from an archive
--delete
delete from the archive (not for use on mag tapes!)
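The function letters above can be exercised in one short round trip (throwaway paths under /tmp, made up for the demonstration):

```shell
# Throwaway paths under /tmp; each function letter in turn.
set -e
demo=/tmp/tar_funcs_demo
rm -rf "$demo" && mkdir -p "$demo/d"
echo 1 > "$demo/d/a"

tar -C "$demo" -cf "$demo/x.tar" d      # c: create a new archive
echo 2 > "$demo/d/b"
tar -C "$demo" -uf "$demo/x.tar" d      # u: append only files newer than the archived copies
tar -tf "$demo/x.tar"                   # t: list the members
rm -rf "$demo/d"
tar -C "$demo" -xf "$demo/x.tar"        # x: extract everything back
```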

Common Options

All Options

--atime-preserve
don't change access times on dumped files
-b, --blocking-factor N
block size of Nx512 bytes (default N=20)
-B, --read-full-blocks
reblock as we read (for reading 4.2BSD pipes)
--backup BACKUP-TYPE
backup files instead of deleting them using BACKUP-TYPE simple or numbered
--block-compress
block the output of compression program for tapes
-C, --directory DIR
change to directory DIR
--check-links
warn if number of hard links to the file on the filesystem mismatches the number of links recorded in the archive
--checkpoint
print directory names while reading the archive
-f, --file [HOSTNAME:]F
use archive file or device F (default "-", meaning stdin/stdout)
-F, --info-script F --new-volume-script F
run script at end of each tape (implies --multi-volume)
--force-local
archive file is local even if it has a colon
--format FORMAT
selects output archive format
v7 - Unix V7
oldgnu - GNU tar <=1.12
gnu - GNU tar 1.13
ustar - POSIX.1-1988
posix - POSIX.1-2001
-g, --listed-incremental F
create/list/extract new GNU-format incremental backup
-G, --incremental
create/list/extract old GNU-format incremental backup
-h, --dereference
don't dump symlinks; dump the files they point to
--help
like this manpage, but not as cool
-i, --ignore-zeros
ignore blocks of zeros in archive (normally mean EOF)
--ignore-case
ignore case when excluding files
--ignore-failed-read
don't exit with non-zero status on unreadable files
--index-file FILE
send verbose output to FILE instead of stdout
-j, --bzip2
filter archive through bzip2, use to decompress .bz2 files
-k, --keep-old-files
keep existing files; don't overwrite them from archive
-K, --starting-file F
begin at file F in the archive
--keep-newer-files
do not overwrite files which are newer than the archive
-l, --one-file-system
stay in local file system when creating an archive
-L, --tape-length N
change tapes after writing N*1024 bytes
-m, --touch, --modification-time
don't extract file modified time
-M, --multi-volume
create/list/extract multi-volume archive
--mode PERMISSIONS
apply PERMISSIONS while adding files (see chmod(1))
-N, --after-date DATE, --newer DATE
only store files newer than DATE
--newer-mtime DATE
like --newer, but with a DATE
--no-anchored
match any subsequence of the name's components with --exclude
--no-ignore-case
use case-sensitive matching with --exclude
--no-recursion
don't recurse into directories
--no-same-permissions
apply user's umask when extracting files instead of recorded permissions
--no-wildcards
don't use wildcards with --exclude
--no-wildcards-match-slash
wildcards do not match slashes (/) with --exclude
--null
--files-from reads null-terminated names, disable --directory
--numeric-owner
always use numbers for user/group names
-o, --old-archive, --portability
like --format=v7; -o exhibits this behavior when creating an archive (deprecated behavior)
-o, --no-same-owner
do not attempt to restore ownership when extracting; -o exhibits this behavior when extracting an archive
-O, --to-stdout
extract files to standard output
--occurrence NUM
process only NUM occurrences of each named file; used with --delete, --diff, --extract, or --list
--overwrite
overwrite existing files and directory metadata when extracting
--overwrite-dir
overwrite directory metadata when extracting
--owner USER
change owner of extracted files to USER
-p, --same-permissions, --preserve-permissions
extract all protection information
-P, --absolute-names
don't strip leading '/'s from file names
--pax-option KEYWORD-LIST
used only with POSIX.1-2001 archives to modify the way tar handles extended header keywords
--posix
like --format=posix
--preserve
like --preserve-permissions --same-order
--acls
this option causes tar to store each file's ACLs in the archive.
--selinux
this option causes tar to store each file's SELinux security context information in the archive.
--xattrs
this option causes tar to store each file's extended attributes in the archive. This option also enables --acls and --selinux if they haven't been set already, due to the fact that the data for those are stored in special xattrs.
--no-acls
this option causes tar not to store each file's ACLs in the archive and not to extract any ACL information in an archive.
--no-selinux
this option causes tar not to store each file's SELinux security context information in the archive and not to extract any SELinux information in an archive.
--no-xattrs
this option causes tar not to store each file's extended attributes in the archive and not to extract any extended attributes in an archive. This option also enables --no-acls and --no-selinux if they haven't been set already.
-R, --record-number
show record number within archive with each message
--record-size SIZE
use SIZE bytes per record when accessing archives
--recursion
recurse into directories
--recursive-unlink
remove existing directories before extracting directories of the same name
--remove-files
remove files after adding them to the archive
--rmt-command CMD
use CMD instead of the default /usr/sbin/rmt
--ssh-command CMD
use remote CMD instead of ssh(1)
-s, --same-order, --preserve-order
list of names to extract is sorted to match archive
-S, --sparse
handle sparse files efficiently
--same-owner
create extracted files with the same ownership
--show-defaults
display the default options used by tar
--show-omitted-dirs
print directories tar skips while operating on an archive
--strip-components NUMBER, --strip-path NUMBER
strip NUMBER of leading components from file names before extraction (tar-1.14 uses --strip-path, tar-1.14.90+ uses --strip-components)

--suffix SUFFIX
use SUFFIX instead of default '~' when backing up files
-T, --files-from F
get names to extract or create from file F
--totals
print total bytes written with --create
-U, --unlink-first
remove existing files before extracting files of the same name
--use-compress-program PROG
access the archive through PROG which is generally a compression program
--utc
display file modification dates in UTC
-v, --verbose
verbosely list files processed
-V, --label NAME
create archive with volume name NAME
--version
print tar program version number
--volno-file F
keep track of which volume of a multi-volume archive tar is working on in file F; used with --multi-volume
-w, --interactive, --confirmation
ask for confirmation for every action
-W, --verify
attempt to verify the archive after writing it
--wildcards
use wildcards with --exclude
--wildcards-match-slash
wildcards match slashes (/) with --exclude
--exclude PATTERN
exclude files based upon PATTERN
-X, --exclude-from FILE
exclude files listed in FILE
-Z, --compress, --uncompress
filter the archive through compress
-z, --gzip, --gunzip, --ungzip
filter the archive through gzip
--use-compress-program PROG
filter the archive through PROG (which must accept -d)
-[0-7][lmh]
specify drive and density
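Of the options above, -g/--listed-incremental is probably the least obvious; a minimal sketch of a level-0 plus level-1 backup, assuming GNU tar and using made-up paths:

```shell
# Assumes GNU tar; all paths are made up for the demonstration.
set -e
demo=/tmp/tar_incr_demo
rm -rf "$demo" && mkdir -p "$demo/data"
echo v1 > "$demo/data/a"

# level 0: full backup; the snapshot file records what was dumped and when
tar -C "$demo" -g "$demo/snap" -cf "$demo/full.tar" data

echo v2 > "$demo/data/b"

# level 1: against the same snapshot file, only new/changed files are dumped
tar -C "$demo" -g "$demo/snap" -cf "$demo/incr.tar" data

tar -tf "$demo/incr.tar"    # data/ and data/b, but not the unchanged data/a
```

To restore, extract the full archive first and then each incremental in order; when extracting incrementals, GNU tar is usually given -g /dev/null so it applies the directory changes (including deletions) recorded in the archive.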




The Last but not Least Technology is dominated by two types of people: those who understand what they do not manage and those who manage what they do not understand ~Archibald Putt. Ph.D

Copyright © 1996-2018 by Dr. Nikolai Bezroukov. was initially created as a service to the (now defunct) UN Sustainable Development Networking Programme (SDNP) in the author free time and without any remuneration. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License. Original materials copyright belong to respective owners. Quotes are made for educational purposes only in compliance with the fair use doctrine.

FAIR USE NOTICE This site contains copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available to advance understanding of computer science, IT technology, economic, scientific, and social issues. We believe this constitutes a 'fair use' of any such copyrighted material as provided by section 107 of the US Copyright Law according to which such material can be distributed without profit exclusively for research and educational purposes.

This is a Spartan WHYFF (We Help You For Free) site written by people for whom English is not a native language. Grammar and spelling errors should be expected. The site contains some broken links as it develops like a living tree...

You can use PayPal to make a contribution, supporting development of this site and speed up access. In case is down you can use the at


The statements, views and opinions presented on this web page are those of the author (or referenced source) and are not endorsed by, nor do they necessarily reflect, the opinions of the author present and former employers, SDNP or any other organization the author may be associated with. We do not warrant the correctness of the information provided or its fitness for any purpose.

The site uses AdSense so you need to be aware of Google privacy policy. If you do not want to be tracked by Google please disable Javascript for this site. This site is perfectly usable without Javascript.

Last modified: October 08, 2018