Softpanorama
May the source be with you, but remember the KISS principle ;-)


tar -- Unix Tape Archiver




Introduction

Tar is a very old zero-compression archiver (it dates back to Version 6 of AT&T UNIX, circa 1975). Yet it is not easy to replace it with any other archiver that has a zero-compression option (for example, zip), because over time tar acquired some unique capabilities.

An archive created by tar is usually called a tarball. Historically a tarball was a magnetic tape, but now it is usually a disk file. The default device, /dev/rmt0, is seldom used today; the most common practice is to archive into a file, which is often additionally compressed with gzip.

The tar command takes a list of files or directories and can include name substitution characters. The basic form is

tar keystring options tarball filenames...

The keystring is a string of characters starting with one function letter (c, r, t, u, or x) and zero or more function modifiers (letters or digits), depending on the function letter used. Unlike options in most Unix utilities, the keystring is traditionally specified without a leading '-':

tar keystring options files_to_include 
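For example, the old keystring style and the conventional option style are equivalent. A scratch sketch (the file and archive names are made up):

```shell
# Scratch demo: keystring style vs. GNU option style
mkdir -p /tmp/keystring-demo && cd /tmp/keystring-demo
echo "hello" > a.txt

tar cf demo.tar a.txt      # keystring: function letter 'c' plus modifier 'f', no dash
tar -c -f demo2.tar a.txt  # the same operation written as conventional options

tar tf demo.tar            # lists: a.txt
```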

You can perform on a tarball several operations that are standard for archivers; in this sense tar is just another member of the family.

We will discuss the GNU tar implementation. There is an alternative implementation called star that many prefer, but GNU tar is the standard in commercial Linux distributions and is available for all other flavors of Unix, so we prefer it. GNU tar also has several interesting additional features.

Tar is often used along with gzip and bzip2. Not all tar implementations are created equal: Solaris tar understands ACLs, while whether your GNU tar build does depends on the version and platform.

The current version of tar as of August 2012 is 1.26 (dated 2011-03-13). The version shipped differs between Linux distributions.

While it is just a zero-compression archiver, tar has several capabilities that regular archivers usually lack, which makes it very convenient for tasks like backups and replication of a filesystem from one server to another.

File size limits differ between OSes. Older Unixes often had a 2GB file limit, and older tar programs likewise cannot handle files or archives larger than 2^31-1 bytes (2.1 gigabytes). Try running 'tar --version'. If the first line indicates you are using GNU tar, then any version newer than 1.12.64 can work with large files. You can also try the command:

strings `which tar` | grep 64

You should see some lines saying lseek64, creat64, fopen64. If so, your tar contains support for large files. The GNU tar docs also mention that the official POSIX tar spec limits member files to 8GB, but that GNU tar will generate non-POSIX (and therefore possibly non-portable) archives with sizes up to something like 2^88 bytes. Formally, tarballs that you want to use on any other POSIX computer are still limited to 8GB files.

Since the majority of tarballs are gzipped, the maximum file size may be limited by gzip as well. Newer versions of gzip (1.3.5 and above) support large files; before that the limit was 2GB. To check the gzip version use

gzip --version

Unlike many other utilities, tar does not assume that its first argument is the archive to operate on; the archive must be named explicitly with the f option. This creates problems for novices. For example, unlike most Unix and DOS utilities, listing a tar archive requires two options: t (list) and f (the file to be listed).

tar tf myfiles.tar
tar tvf myfiles.tar

See also Google Answers UNIX Question! tar size constraint.

The limit on the length of file names is around 256 characters, but can be as low as 100 on older OSes.

Note:

In old versions of tar, because of limits on header block space, user identification numbers (UIDs) and group identification numbers (GIDs) larger than 65,535 will be corrupted when restored.

Tar is one of the few backup programs that can be used in a pipe.

Standard operations on tarballs

The functions supported by tar are the same as for any archiver: creating an archive, listing its contents, extracting files, updating or appending members, and comparing an archive against the file system.

The most popular options include c (create), t (list), x (extract), v (verbose), f (specify the archive file), and z (compress with gzip).

Creation of the tar archives

You need to use the option c (create), which implies that writing begins at the beginning of the tarball instead of at the end. The tar command for creating an archive takes a string of options as well as a list of the files to be included, and names the device or file to be written to:

tar cvf myarchive.tar *

The cvf argument string specifies that you are writing files (c for create), providing feedback to the user (v for verbose), and specifying the device or file rather than using the default (f for file). You can compress the archive on the fly by adding the z option, which pipes the output through gzip:

tar cvzf myarchive.tgz  *

The convention is to use the extension .tar for plain tar archives and .tgz (or .tar.gz) for tarred and gzipped archives. Note that compressed tarballs are monolithic bricks: you can only store and extract files, while with a regular (uncompressed) tarball you can perform several operations. In the past the standard compress utility was used instead of gzip; such archives have the suffix .tar.Z.

Using lists of files which should be excluded from archive

The tar command also enables you to exclude a list of files from the tarball. The option -X accomplishes that, and it can be combined with the option -T (include files from a list; see below).

For example

ls *.zip > exclude.lst
tar cvf myarchive.tar -X exclude.lst *

In this case zip files will not be included in the archive. It is a good idea to list in your exclude file both the exclude file itself and the tar file that you are creating. Notice that this has been done in the following example:

tar cvf myarchive.tar -X exclude.lst *

Using list of files for creation of tarball

Similarly, the include file can be used to specify which files should be included.

Combining tar with the find utility, you can archive files based on many criteria, including such things as how old the files are, how big, and how recently used. The following sequence of commands locates files that are newer than a particular file and creates an include file of files to be backed up.

find . -newer lastproject -print > include.lst
tar cvf myfiles.tar -T include.lst

That is especially useful if you take an archive to a new place, modify some files during the day, and need to merge the changes back in the evening.

In the next example, both an include and an exclude file are used. Note that with GNU tar the exclude list takes precedence: a file that appears in both lists will be excluded.

tar cvf myarchive.tar -X exclude.lst -T include.lst

Notice how we use options that require parameters: the first such option appears last in the initial option cluster (cvf), and each subsequent one is specified separately with its parameter.

GNU tar has some features that enable it to mimic the behavior of find and tar in a single command.
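One such feature is the --newer-mtime option, which selects only files modified after a given date, much like find -newer. A sketch, assuming GNU tar and GNU touch (file names made up):

```shell
mkdir -p /tmp/newer-demo && cd /tmp/newer-demo
touch -d '2001-01-01' old.txt   # GNU touch: give the file an old mtime
echo "fresh" > new.txt          # modified right now

# Archive only files whose mtime is after the cutoff date (GNU tar option)
tar cf recent.tar --newer-mtime='2010-01-01' old.txt new.txt
tar tf recent.tar               # only new.txt is stored
```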

Listing of the archives

To list the contents of a tar file without extracting, use the t option as shown below. Including the v option as well results in a long listing.

You also need to specify the option -f to name the tarball. The most common problem novices experience is forgetting that the t option must be accompanied by the f option specifying the file.

TIP: As most people forget to specify the option f when listing a tarball, it is better to create an alias, for example

alias tarls='tar -tf' 

and put it in your .profile file

This is one of tar's idiosyncrasies and the source of a lot of grief for system administrators, who type tar -t myfiles.tar and get nothing back while tar tries to read standard input.

tar tf myfiles.tar
tar tvf myfiles.tar

Viewing individual files from the tarball

To extract a file from an archive to standard output, you can use -O or --to-stdout, for example

tar -xOf myfiles.tar hosts | more
However, --to-command may be more convenient for use with multiple files.
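A sketch of --to-command (a GNU tar option; the file names are made up): each member's contents are piped to the command instead of being written to disk, with the member name available to the command in $TAR_FILENAME.

```shell
mkdir -p /tmp/tocmd-demo && cd /tmp/tocmd-demo
printf 'one\ntwo\n' > a.txt
printf 'three\n'    > b.txt
tar cf files.tar a.txt b.txt

# Count the lines of every member without extracting anything to disk
tar -xf files.tar --to-command='wc -l'
# prints the line count of a.txt, then of b.txt
```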

Extraction of the tar archives

Archives created with tar record file ownership, file permissions, and timestamps. The p (preserve) option restores file permissions to their original state. This is usually good, since you will ordinarily want to preserve permissions as well as dates, so that executables will still execute and you can determine how old files are. In some situations, you might not want the original owners restored, since they may be people at some other organization altogether. The tar command sets ownership according to the numeric UID of the original owner. If someone in your local passwd file or network information service has the same UID, that person becomes the owner; otherwise the owner is displayed numerically. Obviously, ownership can be altered later.

tar xvpf myarchive.tar

To extract a compressed archive, type tar xvzf file.tar.gz from the directory where you saved the file.

I would like to remind you again that you can also extract an individual file to standard output (via the option -O) and redirect it, for example

tar -xvOf /root/etc_baseline110628_0900.tar  hosts > /etc/hosts110628

Comparing between the content of tarball and the directory

One of the rarely mentioned capabilities of tar is its ability to serve as a diff between a directory and a tarball.

The command is (assuming the root of the tree was the current directory when the tarball was created):

cd /etc && tar -df /tmp/etc20141020.tar 2>/dev/null
Unfortunately, if some files are missing from the directory, this comparison does not behave well: in some cases tar lists only the missing files, not the differing ones. This looks like a bug in the current version.
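Conversely, tar -d ignores files that exist in the directory but have no corresponding archive member, so those have to be listed separately. A sketch using bash process substitution (scratch paths):

```shell
mkdir -p /tmp/dcmp-demo && cd /tmp/dcmp-demo
echo a > a.txt
tar cf snap.tar a.txt
echo b > b.txt       # created after the snapshot was taken

# List names present in the directory but absent from the archive:
# comm -23 prints lines only in the first (directory) listing
comm -23 <(find . -type f ! -name snap.tar | sort) \
         <(tar tf snap.tar | sed 's|^|./|' | sort)
# prints ./b.txt
```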

From tar manual

4.2.6 Comparing Archive Members with the File System

The `--compare' (`-d'), or `--diff' operation compares specified archive members against files with the same names, and then reports differences in file size, mode, owner, modification date and contents. You should only specify archive member names, not file names. If you do not name any members, then tar will compare the entire archive. If a file is represented in the archive but does not exist in the file system, tar reports a difference.

You have to specify the record size of the archive when modifying an archive with a non-default record size.

tar ignores files in the file system that do not have corresponding members in the archive.

The following example compares the archive members `rock', `blues' and `funk' in the archive `bluesrock.tar' with files of the same name in the file system. (Note that there is no file, `funk'; tar will report an error message.)

$ tar --compare --file=bluesrock.tar rock blues funk
rock
blues
tar: funk not found in archive

The spirit behind the `--compare' (`--diff', `-d') option is to check whether the archive represents the current state of files on disk, more than validating the integrity of the archive media. For this latter goal, see Verifying Data as It is Stored.


Transfer of tarballs between servers

Tar archives can be transferred with remote copy tools such as scp, ftp, kermit, or over ssh; these utilities handle binary data correctly. To mail a tarball, you first need to encode it as printable text. The uuencode command turns the contents of files into printable characters using a fixed-width format that allows them to be mailed and easily decoded afterward. The resulting file will be larger than the original: uuencode must map a larger character set of printable and nonprintable characters onto a smaller one of printable characters, so it uses extra bits, and the file ends up about a third larger than the original.
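On systems where uuencode is absent (it ships in the sharutils package), base64 from coreutils plays the same role for modern mail. A round-trip sketch with made-up file names:

```shell
mkdir -p /tmp/mail-demo && cd /tmp/mail-demo
echo "payload" > data.txt
tar czf data.tgz data.txt

# Encode the binary tarball into printable text suitable for mailing
base64 data.tgz > data.tgz.b64

# Receiving side: decode the text back into a tarball and unpack it
base64 -d data.tgz.b64 > restored.tgz
tar tzf restored.tgz    # lists: data.txt
```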

Tar Usage in Pipes, "Back-to-back tar"

The mv command cannot be used to move directories across file systems. A file system can be thought of as a hard drive or a hard drive partition. The mv command works fine when you want to move a directory between different locations on the same file system (hard drive), but it does not work well across file systems. Depending on your version of mv, an error message may be generated when you try to do this.

For example, consider this directory:

ls -F /tmp/ch2 
ch2-01.doc ch2.doc@

If you use the mv command to move this directory into the directory /home/mybook on a different file system, an error message similar to the following is generated:

mv: cannot move 'ch2' across filesystems: Not a regular file

Some UNIX versions implement a workaround inside mv  that executes the following commands:

rm -rf destination
cp -rPp source destination
rm -rf source

Here source and destination are directories.

The main problem with this strategy is that while symbolic links are copied, hard links in the source directory are not always copied correctly. Sometimes copying hard links is simply impossible, since the target of the move operation can be in a different filesystem.

In addition to this, there are other minor problems with using cp.

The workaround for these problems is to use the tar (as in tape archive) command to copy directories. This is usually done with a pipe, and the construction became a Unix idiom for copying a large tree of files, possibly a whole filesystem:

(cd mydata; tar cvf - *) | tar xpBf -

What this command does is move to a subdirectory and read files, which it then pipes to an extract at the current working directory. The parentheses group the cd and tar commands so that you can be working in two directories at the same time. The two - characters in this command represent standard output and standard input, informing the respective tar commands where to write and read data. The - designator thereby allows tar commands to be chained in this way.

You can also use the cd command with the target directory instead of the source directory. For example: 
tar cvzf - joeuser | (cd /Archive && tar xpzf -) 

You can also move files from one server to another. See Cross network copy

Archives created with tar preserve the file ownership, file permissions, and access and creation dates of the files. Once the files are extracted from a tar file they look the same in content and description as they did when archived. The p (preserve) option will restore file permissions to the original state. This is usually a good idea since you'll ordinarily want to preserve permissions as well as dates so that executables will execute and you can determine how old they are.

In some situations, you might not like that original owners are restored, since the original owners may be people at some other organization altogether. The tar command will set up ownership according to the numeric UID of the original owner. If someone in your local /etc/passwd file or network information service has the same UID, that person will become the owner; otherwise the owner will display numerically. Obviously, ownership can be altered later. But in this case you may want to unpack the archive as a regular user instead of root.  If you unpack this archive with other user privileges (non-root) all uid and gid will be replaced with the uid and gid from this user.

Keep in mind that backups and restores should practically always be done as UID 0 (root).
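When restoring as root, GNU tar's --numeric-owner option sidesteps the name-to-UID remapping described above by using only the numeric ids stored in the archive. A sketch with made-up paths (run as an ordinary user it simply extracts; the ownership effect only shows when run as root):

```shell
mkdir -p /tmp/owner-demo && cd /tmp/owner-demo
echo "x" > file.txt
tar cf a.tar file.txt

# --numeric-owner: ignore the user/group *names* recorded in the
# archive and keep the numeric uid/gid, so a matching name in the
# local passwd file cannot hijack ownership (effective as root)
mkdir -p restore
tar -C restore --numeric-owner -xf a.tar
ls restore    # file.txt
```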

Cross network copy

No tar tutorial is complete without an example of a cross-network copy, often called a tar-to-tar file transfer. One tar command creates an archive while the other extracts from it, without ever creating a *.tar file on disk. The only problem with this construction is that it looks a bit awkward. It depends on the presence of ssh or rsh.

To copy a directory hierarchy using this technique, first position yourself in the source directory:

cd fromdir

Next, tar the contents of the directory using the create ("c") option, and pipe the output to a tar extract ("x") command. The tar extract should be enclosed in parentheses and contains two parts: a cd to the target directory, and a tar command extracting from standard input.

For example:

tar cBf - * | (cd todir; tar xvpBf -)

The hyphens in the tar commands tell tar to write to standard output and read from standard input instead of a file. The option B forces multiple reads, allowing the command to work across a network as needed. The "p" is the preserve option, which is generally the default when the superuser runs this command.

Using back-to-back tar commands across the network, you can move a tree of directories from one server to another with a single command:

cd /home/joeuser
tar cBf - * | ssh new_server "cd /home/joeuser; tar xvBf -"

Notice how we group the remote commands to clearly separate what we are running on the remote host from what we are doing locally. You might also use tar in conjunction with dd to read files from, or write files to, a tape device on a remote system. In the following command, we copy the files from the current directory and write them to a tape device on a remote host.

tar cvfb - 20 * | ssh boson dd of=/dev/rmt0 obs=20

Back-to-back tar commands have been used for many years to copy or back up files.

Gotchas

If you unpack an archive with non-root privileges, all UIDs and GIDs will be replaced with the UID and GID of that user. Keep that in mind: if you make backups and restores, practically always do them as UID 0 (root).

If directories contained in the tar archive already exist, their permissions will be changed to those recorded in the archive, and existing files will be overwritten. If this is an important system directory like /etc, the net result can be a major SNAFU.

If you are not sure how a tar archive was created, it is safer to expand it first in /tmp and inspect the results before restoring it into the original tree (especially if it is a system directory).
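A minimal sketch of that safety routine (the archive here is a made-up stand-in for a mystery tarball):

```shell
mkdir -p /tmp/inspect-demo && cd /tmp/inspect-demo
echo "x" > config && tar cf unknown.tar config   # stand-in archive

# 1) Look at the paths and permissions before extracting anything
tar tvf unknown.tar | head -20

# 2) Expand into a scratch directory, never straight into the live tree
mkdir -p sandbox
tar -C sandbox -xf unknown.tar
ls sandbox    # review the layout before copying it anywhere
```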




Old News ;-)

10 quick tar command examples to create-extract archives in Linux

1. Extract tar.bz2/bzip archives

Files with the extension bz2 are compressed with the bzip2 algorithm, and the tar command can deal with them as well: use the j option instead of the z option.

$ tar -xvjf archivefile.tar.bz2

2. Extract files to a specific directory or path

To extract the files to a specific directory, specify the path using the "-C" option. Note that it's a capital C.

$ tar -xvzf abc.tar.gz -C /opt/folder/

However, first make sure that the destination directory exists, since tar is not going to create it for you and will fail if it does not exist.
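A sketch (paths made up): create the target first, then extract into it.

```shell
mkdir -p /tmp/cdemo && cd /tmp/cdemo
echo "x" > f.txt && tar czf abc.tar.gz f.txt

# mkdir -p is idempotent, so it is safe to run unconditionally
mkdir -p /tmp/cdemo/dest
tar -xzf abc.tar.gz -C /tmp/cdemo/dest
ls /tmp/cdemo/dest    # f.txt
```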

3. Extract a single file

To extract a single file out of an archive just add the file name after the command like this

$ tar -xz -f abc.tar.gz "./new/abc.txt"

More than one file can be specified in the above command, like this:

$ tar -xv -f abc.tar.gz "./new/cde.txt" "./new/abc.txt"

4. Extract multiple files using wildcards

Wildcards can be used to extract a bunch of files matching a given pattern, for example all files with the ".txt" extension.

$ tar -xv -f abc.tar.gz --wildcards "*.txt"

tardiff - an archive patching utility

tardiff.tar.gz v2.1.4 (41,334 bytes)

"tardiff" is a Perl script used to quickly make a tarball of changes between versions of an archive, or between pre- and post-build of an application. There are many, many other possible uses.


linux - How to compare two tarball's content

Stack Overflow

tarsum is almost what you need. Take its output, run it through sort to get the ordering identical for each archive, and then compare the two with diff. That gets you a basic implementation, and it would be easy enough to pull those steps into the main program by modifying the Python code to do the whole job.
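A name-level version of that idea can be had with plain tar and diff (bash process substitution; the archive names are made up). Content-level comparison still needs per-member checksums, which is what tarsum computes:

```shell
mkdir -p /tmp/cmp-demo && cd /tmp/cmp-demo
echo a > a.txt; echo b > b.txt
tar czf one.tar.gz a.txt b.txt
tar czf two.tar.gz a.txt          # second archive lacks b.txt

# Compare the sorted member lists of the two tarballs
diff <(tar tzf one.tar.gz | sort) <(tar tzf two.tar.gz | sort) \
  || echo "archives differ"
```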

[Sep 04, 2014] Blunders with expansion of tar files, structure of which you do not understand

If you expand a tar file in a production directory, you can accidentally overwrite files and change the ownership of directories, and then spend a lot of time restoring the status quo. It is safer to expand such tar files in /tmp first, and only after seeing the results decide whether to copy some directories over or re-expand the tar file in the production directory.

[Sep 03, 2014]  Doing operation in a wrong directory among several similar directories

Sometimes directories are very similar, for example numbered directories created by some application, such as task0001, task0002, ... task0256. In this case you can easily perform an operation on the wrong directory, for example sending tech support a tar file of a directory that contains a production run instead of test data.

TAR tricks & tips by Viktor Balogh

Viktor Balogh's HP-UX blog

This is how to tar a bunch of files and send it over network to another machine over SSH, in one turn:

# cd /etc; tar cf - passwd | ssh hp01a01.w1 "cd /root;tar xf - passwd"

Note that with tar you should always use relative paths; otherwise the files on the target system will be extracted with their full paths and the original files will be overwritten. GNU tar also offers some options that allow the user to modify/transform the paths when files are extracted. On HP-UX, GNU tar is available under the name gtar; you can download it from the HP-UX porting center:

# which gtar
/usr/local/bin/gtar
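Two of those GNU tar path-rewriting options, sketched on scratch files (the options are GNU tar's --strip-components and --transform; on HP-UX use gtar):

```shell
mkdir -p /tmp/xform-demo/tmp/data && cd /tmp/xform-demo
echo "x" > tmp/data/f.txt
tar cf test.tar tmp/data/f.txt

# Drop the leading "tmp/" path component on extraction
mkdir -p out1
tar -C out1 --strip-components=1 -xf test.tar         # yields out1/data/f.txt

# Rewrite member paths with a sed-style expression instead
mkdir -p out2
tar -C out2 --transform='s|^tmp/|opt/|' -xf test.tar  # yields out2/opt/data/f.txt
```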

If you have a ‘tar’ archive that was made with absolute paths, use ‘pax’ to extract it to a different directory:

# pax -r -s '|/tmp/|/opt/|' -f test.tar


The use of tar with find isn’t apt to work if there are lots of files. Instead use pax(1):

# find . -atime +7 | pax -w | gzip > backup.tgz

[Sep 11, 2012] Tips and Tricks: Splitting tar archives on the fly by Alexander Todorov

Splitting big files into pieces is a common task. Another common task is to create a tar archive, and split it into smaller chunks that can be burned onto CD/DVD. The straightforward approach is to create the archive and then use ‘split.’ To do this, you will need more free space on your disk. In fact, you’ll need space twice the size of the created archive. To avoid this limitation, split the archive as it is being created.

To create a tar archive that splits itself on the fly use the following set of commands:

First create the archive:

tar -czf /dev/stdout $(DIRECTORY_OR_FILE_TO_COMPRESS) | split -d -b $(CHUNK_SIZE_IN_BYTES) - $(FILE_NAME_PREFIX)

To extract the contents:

cat $(FILE_NAME_PREFIX)* >> /dev/stdout | tar -xzf /dev/stdin

The above shown set of commands works on the fly. You don’t need additional free space for temporary files.

A few notes about this exercise:

The information provided in this article is for your information only. The origin of this information may be internal or external to Red Hat. While Red Hat attempts to verify the validity of this information before it is posted, Red Hat makes no express or implied claims to its validity.

vsego:

Isn’t it easier to just omit the “f”?
tar cz $(DIRECTORY_OR_FILE_TO_COMPRESS) | split -d -b $(CHUNK_SIZE_IN_BYTES) - $(FILE_NAME_PREFIX)
cat $(FILE_NAME_PREFIX)* | tar xz

Alexander Todorov:

You are right. Using /dev/stdin and /dev/stdout just makes it clearer.
 

  1. Klaus Lichtenwalder says:
    December 14th, 2007 at 2:49 pm

    Just a few nitbits… If you want to use stdin/stdout with tar, it’s simply a -
    e.g.: tar cf - . | (cd /elsewhere; tar xf -)

    cat always appends its arguments to stdout, so
    cat $(prefix)* | command
    is sufficient. I don’t know and (honestly) don’t care if gnu-tar sends its output to stdout if no f argument given, every other unix uses the default tape device (which is /dev/rmt) if no f argument given (I have to work with Solaris and AIX too…).

[Sep 17, 2011] Linux Tape Backup With mt And tar Command Howto

To backup to multiple tape use the following command (backup /home file system):

# tar -clpMzvf /dev/st0 /home

To compare tape backup, enter:
# tar -dlpMzvf /dev/st0 /home

To restore tape in case of data loss or hard disk failure:
# tar -xlpMzvf /dev/st0 /home

Where,

    c : create a new archive
    l : stay on the local file system (do not cross mount points)
    M : create or read a multi-volume archive
    p : preserve file permissions
    z : filter the archive through gzip
    v : verbose output
    f /dev/st0 : use the tape device /dev/st0
    d : compare the archive against the file system
    x : extract files from the archive

GNU tar

If you are looking for a mature tar implementation that is actively maintained, you should also have a look at star. (freshmeat.net)

Changes:  This release adds support for xz compression (with the --xz option) and reassigns the short option -J as a shortcut for --xz. The option -I is now a shortcut for --use-compress-program,... and the --no-recursive option works with --incremental

Changes:  This release adds new options: --lzop, --no-auto-compress, and --no-null. It has compressed format recognition and VCS support (--exclude-vcs). It fixes the --null option and... fixes record size autodetection

Changes:  This release has new options: -a (selects a compression algorithm basing on the suffix of the archive file name), --lzma (selects the LZMA compression algorithm), and --hard-dereference,... which dereferences hard links during archive creation and stores the files they refer to (instead of creating the usual hard link members)

Google Answers UNIX Question! tar size constraint.

Posted: 12 Jun 2002 23:47 PDT
Expires: 19 Jun 2002 23:47 PDT
Question ID: 25116

What is the size constraint to "tar" in a UNIX or Linux environment?

Subject: Re: UNIX Question! tar size constraint.
Answered By: philip_lynx-ga on 13 Jun 2002 01:05 PDT
Rated: 

Hi pwharff,

The quick answer is: 2^63-1 for the archive, 68'719'476'735 (8^12-1)
bytes for each file, if your environment permits that.

As I understand your question, you want to know if you can produce tar
files that are bigger than 2 GBytes (and how big you can really make
them). The answer to this question depends on a few simple parameters:

1) Does your operating system support large files?
2) What version of tar are you using?
3) What is the underlying file system?

You can answer question 1) for yourself by verifying that your kernel
supports 64bit file descriptors. For Linux this is the case for
several years now. A quick look in /usr/include/sys/features.h will
tell you, if there is any line containing 'FILE_OFFSET_BITS'. If there
is, your OS very very probably has support for large files.

For Solaris, just check whether 'man largefile' works, or try 'getconf
-a|grep LARGEFILE'. If it works, then you have support for large files
in the operating system. Again, support for large files has been there
for several years.

For other operating systems, try 'man -k large file', and see what you
get -- I'll gladly provide help if you need to ask for clarification
to this answer. Something like "cd /usr/include; grep
'FILE_OFFSET_BITS' * */*" should tell you quickly if there is standard
large file support.

2) What version of tar are you using? This is important. Obviously,
older tar programs won't be able to handle files or archives that are
larger than 2^31-1 bytes (2.1 Gigabytes). Try running 'tar --version'.
If the first line indicates you are using gnu tar, then any version
newer than 1.12.64 will in principle be able to provide you with large
files. Try to run this command: "strings `which tar`|grep 64", and you
should see some lines saying lseek64, creat64, fopen64. If yes, your
tar contains support for large files.

If your tar program does not contain support for large files (most
really do, but maybe you are working on a machine older than 1998?),
you can download the newest gnu tar from ftp://ftp.gnu.org/pub/gnu/tar
and compile it for yourself.

The size of files you put into a tar archive (not the archive itself)
is limited to 11 octal digits, the max. size of a single file is thus
ca. 68 GBytes.

3) Given that both your operating system (and C library), and tar
application support large files, the only really limiting factor is
the file system that you try to create the file in. The theoretical
limit for the tar archive size is 2^63-1 (9'223'372 Terabytes), but
you will reach more practical limits (disk or tape size) much quicker.
Also take into consideration what the file system is. DOS FAT 12
filesystems don't allow files as big as the Linux EXT2, or Sun UFS
file systems.

If you need more precise data (for a specific file system type, or for
the OS, etc.) please do not hesitate to ask for clarification.

I hope my answer is helpful to you,

--philip

ftp tar file size limit

Linux Forums
rockytopvols:

ftp tar file size limit? 


I am trying to back up my Linux box to my Windows box's hard drive. To do this I am using the Knoppix distro to boot my Linux box, then tarring every file and sending it to my Windows box through FTP. (I wanted to tar the files first, so I can preserve permissions.) On my Windows XP box I am running FileZilla's FTP server, and I am transferring to an external 320GB NTFS-formatted hard drive attached to it through USB. I don't have enough space left on my Linux box to tar everything and then transfer, so I am using the following commands:

ftp 192.168.1.101 21
binary
put |"tar -cvlO *.*" stuff.tar

It always stops transferring just before 2GB (1,972,460KB), and the file should be 20GB or so. What am I doing wrong? Is there some file size limit that I don't know of for FTP or tar? The NTFS file system should allow bigger files from what I have read. I couldn't find any limit for FileZilla. Is this the right place to ask?

Thanks

Marsolin:

I believe NTFS has a 2GB file limitation unless you are running a storage driver with 44-bit LBA support.


rockytopvols:

Everywhere I have read, the NTFS limit is in the tens of terabytes range. I have some files that are bigger than that now.

nlsteffens

tar file size limit
Generally, tar can't handle files larger than 2GB. I suggest using an alternative to tar, 'star'. A more comprehensive answer is available here:

http://answers.google.com/answers/threadview?id=25116

By the looks of it, gnu tar versions newer than 1.12.64 can handle large files but I can't confirm this.

Regards,

Nick

Alex Stan:

I have a similar problem with big files:

I have a 2.2 gig file on a Linux computer, and I mounted Shared Documents (smbfs) from another (Windows) computer. When I try to copy the file it stops at 2GB. I even tried moving the file into Apache, so I could download it, but Apache won't let me.

I can't archive it either.

Is there any way to move that file?


sbhagat

If you are using smbclient, then follow the kbase article:

redhat.com | Knowledgebase

Regards,
Subodh Bhagat

Backing up Files with Tar

ONLamp.com

One last thing about creating archives with tar: tar was designed to back up everything in the specified directory. This means that every single file and subdirectory that exists beneath the specified directory will be backed up. It is possible to specify which files you don't want backed up using the X switch.

Let's say I want to back up everything in the www directory except for the apache2 and zope subdirectories. In order to use the X switch, I have to create a file containing the names of the files I wish to exclude. I've found that if you try to create this file using a text editor, it doesn't always work. However, if you create the file using echo, it does. So I'll make a file called exclude:

echo apache2 > exclude
echo zope >> exclude

Here, I used the echo command to redirect (>) the word apache2 to a new file called exclude. I then asked it to append (>>) the word zope to that same file. If I had forgotten to use two >'s, I would have overwritten the word apache2 with the word zope.

Now that I have a file to use with the X switch, I can make that backup:

tar cvfX backup.tar exclude www

This is the first backup I've demonstrated where the order of the switches is important. I need to tell tar that the f switch belongs with the word backup.tar and the X switch belongs with the word exclude. So if I decide to place the f switch before the X switch, I need to have the word backup.tar before the word exclude.

This command will also work as the right switch is still associated with the right word:

tar cvXf exclude backup.tar www

But this command would not work the way I want it to:

tar cvfX exclude backup.tar www
tar: can't open backup.tar : No such file or directory

Here you'll note that the X switch told tar to look for a file called backup.tar to tell it which files to exclude, which isn't what I meant to tell tar.

Let's return to the command that did work. To test that it didn't back up the file called apache2, I used grep to sort through tar's listing:

tar tf backup.tar | grep apache2

Since I just received my prompt back, I know my exclude file worked. It is interesting to note that since apache2 was really a subdirectory of www, all of the files in the apache2 subdirectory were also excluded from the backup. I then tested to see if the zope subdirectory was also excluded in the backup:

tar tf backup.tar | grep zope
www/zope-zpt/
www/zope-zpt/Makefile
www/zope-zpt/distinfo
www/zope-zpt/pkg-comment
<output snipped>

This time I got some information back, as there were other subdirectories that started with the term "zope," but the subdirectory that was just called zope was excluded from the backup.

Now that we know how to make backups, let's see how we can restore data from a backup. Remember from last week the difference between a relative and an absolute pathname, as this has an impact when you are restoring data. Relative pathnames are considered a good thing in a backup. Fortunately, the tar utility that comes with your FreeBSD system strips the leading slash, so it will always use a relative pathname -- unless you specifically override this default by using the P switch.

It's always a good idea to do a listing of the data in an archive before you try to restore it, especially if you receive a tar archive from someone else. You want to make sure that the listed files do not begin with "/", as that indicates an absolute pathname. I'll check the first few lines in my backup:

tar tf backup.tar | head
www/
www/mod_trigger/
www/mod_trigger/Makefile
www/mod_trigger/distinfo
www/mod_trigger/pkg-comment
www/mod_trigger/pkg-descr
www/mod_trigger/pkg-plist
www/Mosaic/
www/Mosaic/files/
www/Mosaic/files/patch-ai

None of these files begin with a "/", so I'll be able to restore this backup anywhere I would like. I'll practice a restore by making a directory I'll call testing, and then I'll restore the entire backup to that directory:

mkdir testing
cd testing
tar xvf ~test/backup.tar 

You'll note that I cd'ed into the directory to contain the restored files, then told tar to restore or extract the entire backup.tar file using the x switch. Once the restore was complete, I did a listing of the testing directory:

ls
www

I then did a listing of that new www directory and saw that I had successfully restored the entire www directory structure, including all of its subdirectories and files.

It's also possible to just restore a specific file from the archive. Let's say I only need to restore one file from the www/chimera directory. First, I'll need to know the name of the file, so I'll get a listing from tar and use grep to search for the files in the chimera subdirectory:

tar tf backup.tar | grep chimera
www/chimera/
www/chimera/files/
www/chimera/files/patch-aa
www/chimera/scripts/
www/chimera/scripts/configure
www/chimera/pkg-comment
www/chimera/Makefile
<snip>

I'd like to just restore the file www/chimera/Makefile, and I'd like to restore it to the home directory of the user named genisis. First, I'll cd to the directory to which I want that file restored, and then I'll tell tar just to restore that one file:

cd ~genisis
tar xvf ~test/backup.tar www/chimera/Makefile

You'll note some interesting things if you try this at home. When I did a listing of genisis' home directory, I didn't see a file called Makefile, but I did see a directory called www. This directory contained a subdirectory called chimera, which contained a file called Makefile. Remember, when you make an archive, you are including a directory structure, and when you restore from an archive, you recreate that directory structure.

You'll also note that the original ownership, permissions, and file creation time were also restored with that file:

ls -l ~genisis/www/chimera/Makefile
-rw-r--r--  1 test  wheel  406 May 11 09:52 www/chimera/Makefile
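If you would rather not have the www/chimera/ tree recreated under the restore directory, GNU tar offers --strip-components to drop leading path elements on extraction. This is an aside not from the article (and assumes GNU tar; other tar implementations may lack the option); the archive layout mirrors the article's example.

```shell
# Sketch (GNU tar): extract one member while stripping its two leading
# directories, so Makefile lands directly in the target directory.
mkdir -p src/www/chimera && echo "all:" > src/www/chimera/Makefile
tar cf backup_demo.tar -C src www

mkdir -p flat
tar xf backup_demo.tar -C flat --strip-components=2 www/chimera/Makefile
ls flat        # Makefile, with no www/ tree around it
```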

That should get you started with using the tar utility. In next week's article, I'll continue with some of the interesting options that can be used with tar, and then I'll introduce the cpio archiver.

Backup using the tar command in Linux

The tar program is an archiving program designed to store and extract files from an archive file known as a tarball. A tarball may be made on a tape drive; however, it is also common to write a tarball to a normal file.

For more tar options, see the reference section below.

Making backups with tar

A full backup can easily be made with tar:

# tar --create --file /dev/ftape /usr/src
tar: Removing leading / from absolute path names in the archive

The example above uses the GNU version of tar and its long option names. The traditional version of tar only understands single character options. The GNU version can also handle backups that don't fit on one tape or floppy, and also very long paths; not all traditional versions can do these things. (Linux only uses GNU tar.)

If your backup doesn't fit on one tape, you need to use the --multi-volume (-M) option:

# tar -cMf /dev/fd0H1440 /usr/src
tar: Removing leading / from absolute path names in the archive
Prepare volume #2 for /dev/fd0H1440 and hit return:

Note that you should format the floppies before you begin the backup, or else use another window or virtual terminal and do it when tar asks for a new floppy.

After you've made a backup, you should check that it is OK, using the --compare (-d) option:

# tar --compare --verbose -f /dev/ftape
usr/src/
usr/src/linux
usr/src/linux-1.2.10-includes/

Failing to check a backup means that you will not notice that your backups aren't working until after you've lost the original data.

An incremental backup can be done with tar using the --newer option:

# tar --create --newer '8 Sep 1995' --file /dev/ftape /usr/src --verbose
tar: Removing leading / from absolute path names in the archive
usr/src/
usr/src/linux-1.2.10-includes/
usr/src/linux-1.2.10-includes/include/
usr/src/linux-1.2.10-includes/include/linux/
usr/src/linux-1.2.10-includes/include/linux/modules/
usr/src/linux-1.2.10-includes/include/asm-generic/
usr/src/linux-1.2.10-includes/include/asm-i386/
usr/src/linux-1.2.10-includes/include/asm-mips/
usr/src/linux-1.2.10-includes/include/asm-alpha/
usr/src/linux-1.2.10-includes/include/asm-m68k/
usr/src/linux-1.2.10-includes/include/asm-sparc/
usr/src/patch-1.2.11.gz

Unfortunately, tar can't notice when a file's inode information has changed, for example, that its permission bits have been changed, or when its name has been changed. This can be worked around using find and comparing current filesystem state with lists of files that have been previously backed up. Scripts and programs for doing this can be found on Linux ftp sites.
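A minimal sketch of that find-based workaround uses a timestamp file; every name here is illustrative. A metadata-only change such as chmod leaves mtime alone but bumps ctime, which is exactly what find's -cnewer test catches and a date-based tar --newer run would miss.

```shell
# Sketch: record the time of the last backup in a stamp file, then use
# find -cnewer to catch files whose inode data changed since then.
mkdir -p srcdir && echo data > srcdir/f1
touch last-backup.stamp              # mark when the last backup ran
sleep 1
chmod 600 srcdir/f1                  # metadata-only change: ctime bumps
find srcdir -type f -cnewer last-backup.stamp   # lists srcdir/f1
```

The resulting file list can be fed to tar with -T to archive just the changed files.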

12.4.2. Restoring files with tar
The --extract (-x) option for tar extracts files:

# tar --extract --same-permissions --verbose --file /dev/fd0H1440
usr/src/
usr/src/linux
usr/src/linux-1.2.10-includes/
usr/src/linux-1.2.10-includes/include/
usr/src/linux-1.2.10-includes/include/linux/
usr/src/linux-1.2.10-includes/include/linux/hdreg.h
usr/src/linux-1.2.10-includes/include/linux/kernel.h

You can also extract only specific files or directories (including all their files and subdirectories) by naming them on the command line:

# tar xpvf /dev/fd0H1440 usr/src/linux-1.2.10-includes/include/linux/hdreg.h
usr/src/linux-1.2.10-includes/include/linux/hdreg.h


Use the --list (-t) option, if you just want to see what files are on a backup volume:

# tar --list --file /dev/fd0H1440
usr/src/
usr/src/linux
usr/src/linux-1.2.10-includes/
usr/src/linux-1.2.10-includes/include/
usr/src/linux-1.2.10-includes/include/linux/
usr/src/linux-1.2.10-includes/include/linux/hdreg.h
usr/src/linux-1.2.10-includes/include/linux/kernel.h

Note that tar always reads the backup volume sequentially, so for large volumes it is rather slow. It is not possible, however, to use random access database techniques when using a tape drive or some other sequential medium.
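Since every listing re-reads the whole volume, one common mitigation is to save the table of contents once, at backup time, and search that index file later. A sketch, with an ordinary file standing in for a tape device like /dev/ftape (all names illustrative):

```shell
# Sketch: capture the archive's table of contents at backup time so
# later lookups hit the index instead of re-reading the slow medium.
mkdir -p usr_src && echo x > usr_src/kernel.h
tar cf backup_idx.tar usr_src
tar tf backup_idx.tar > backup_idx.toc   # the one sequential pass
grep kernel.h backup_idx.toc             # fast lookup against the index
```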

How-to Using Tar (Taring)

By SuperHornet from http://www.fluidgravity.com/

OK, here is a short guide on how to use the tar command to back up your data.
Tar is solely an archiving app; tar by itself won't compress files.

But you say "then what is a .tar.gz"

It's a tar file that has been compressed with a separate compression utility; the .gz means gzip was the app used to compress it.

Here is tar in its simplest form

tar -cvf filename.tar /path/to/files

-c means create
-f means filename (-f should always come last, since its argument, the archive name, follows it)
-v (verbose) displays all the files tar puts in the archive, plus any errors you might have incurred
You should see the filename.tar file in whatever directory you ran tar from.

You say "But I want to make the tarball compressed"

Well then -z is the option you want to include in your syntax

tar -zvcf filename.tar.gz /path/to/files


Notice I had to add the .gz extension myself.
-Z (capital, not -z) will instead run the archive through the old compress app.
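The relationship between tar and gzip can be made explicit with a pipeline; -z simply does this plumbing for you. A sketch with illustrative filenames:

```shell
# Sketch: .tar.gz is literally tar's output fed through gzip.
mkdir -p docs && echo hi > docs/note.txt
tar cf - docs | gzip > docs.tar.gz    # same result as: tar -zcvf docs.tar.gz docs
gunzip -c docs.tar.gz | tar tf -      # list it without writing a .tar to disk
```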

Now, when I make a tarball, I like to keep the full paths the files came from.
For this, use -P (absolute paths):

tar -zPvcf filename.tar.gz /path/to/file 


When I extract it, I will see a new directory called /path; under that, the "to" directory; and the "file" under "to".

Now you say, "I want to back up ALL my files in my home directory EXCEPT the temp directory I use." No problem.

tar -zPvcf myhomebackup.tar.gz --exclude /home/erik/temp /home/erik


The --exclude option gives you this; just slip it in between the tar filename and the path you're going to back up. This will exclude the whole temp directory.

You say, "OK, this tar thing is pretty cool, but I want to back up only single files from all around the drive."

No problem, this requires a bit more work, but hey this is UNIX, get used to it.

Make a file called locations (call it anything you like). In locations, place the full path to each file you want to back up, one per line. Be aware that you must have read rights to the files you are going to back up.

/etc/mail/sendmail.cf
/usr/local/apache/conf/httpd.conf
/home/erik/scripts


Now with the -T option I can tell it to use the locations file.

tar -zPvcf backup.tar.gz -T locations

Now, if you want to back up the whole drive, you will have to exclude lots of files, like /var/log/* and /usr/local/named/*.

Using the -X option you can create an exclude file just like the locations file.

tar -zPvcf fullbackup.tar.gz -X /path/to/excludefile -T /path/to/locationsfile

Now a month has gone by and you need to update your myhomebackup.tar.gz with new or changed files.

This requires an extra step (quit your bitching, I already told you why).
You have to uncompress it first, but not untar it.

gunzip /path/to/myhomebackup.tar.gz


This will leave you with myhomebackup.tar, minus the .gz.
Now we can update the tarball with -u, and then compress it again:

tar -Puvf myhomebackup.tar /home/erik
gzip myhomebackup.tar


gzip will add the .gz back for you.

Tar is a pretty old app and has lots of features.
I suggest reading the man pages to get a list of all the options.


I have included a little Perl script that I made so I can run it as a cron job every night and get a full backup each time.
It wouldn't be that hard to update the tarball instead, but I just like full backups.
Feel free to use it.

If you want to extract the tarball that is compressed

tar -zxvf filename.tar.gz


-x extract

If it is not compressed then

tar -xvf filename.tar


#!/usr/bin/perl
#sysbkup.pl
#Created by Erik Mathis hornet@fluidgravity.com 7/02

#Change these paths to fit your needs.
my $filename="/home/sysbkup/backup";
my $exclude="/home/erik/exclude";
my $data="/home/erik/locations";
my $tar="\.tar";
my $gz="\.gz";


$file=$filename.$tar.$gz;

system ("tar -Pzcvf $file -X $exclude -T $data");
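To run a script like this every night from cron, an entry along these lines would work; the script path and log file are assumptions, so adjust them to wherever you saved sysbkup.pl.

```shell
# Illustrative crontab entry (edit with `crontab -e`): run the backup
# script at 02:30 every night and append its output to a log.
30 2 * * * /usr/bin/perl /home/erik/sysbkup.pl >> /var/log/sysbkup.log 2>&1
```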

Re Tar question

Sort of answered my own question. I downloaded and installed star:

http://www.fokus.gmd.de/research/cc/glone/employees/joerg.schilling/private/star.html

an enhanced version of tar that includes a name-modification option:

-s replstr
Modify file or archive member names named by a pattern
according to the substitution expression replstr. The
format of replstr is:

-s /old/new/[gp]


eks

Re Tar question -- star is recommended until GNU Tar 1.14 is

On Thu, 2004-08-05 at 22:58, Erich Schroeder wrote:
> Sort of answered my own question. I downloaded and install star:
> http://www.fokus.gmd.de/research/cc/glone/employees/joerg.schilling/private/star.html
> an enhanced version of tar

It's not really an "enhanced version of tar" but a more _POSIX_
compliant version.   That's why it has been a part of Fedora Core (FC)
since version 0.8* and _recommended_over_ GNU Tar 1.13.

Understand that cpio, tar and, their new replacement, pax, just write
what is known as "ustar" format.  The latest IEEE POSIX 2001 and X/Open
Single Unix Specification (SUS) 3 from the "Austin Group" defines a lot
of new functionality that really makes up for lack of capability in the
older 1988 and subsequent releases until the late '90s drafts.

This includes overcoming POSIX 1988+ path/naming limitations, as well as
newer POSIX 2001 capabilities like storing POSIX EA/ACLs.

In the meanwhile, the GNU maintainers decided to release their own
extensions that are not compliant.  It was a necessary evil, but now
that the POSIX/SUS standard has been updated, it's time for GNU to come
around.  The current GNU Tar 1.14 alpha adds these capabilities.

star actually had EA/ACLs support on Solaris** _before_ the POSIX
standardization, so adopting it for POSIX 2001 / SUS 3 ustar meta-data
format was easy.

Unfortunately POSIX 2001 / SUS 3 still does _not_ address the issue of
compression.  I hate the idea of block compressing the entire archive,
which renders it largely unrecoverable after a single byte error (at
least with LZ77/gzip or LZO/lzop -- BWT/bzip2 may be better at recovery
though).  That's my "beef" with the whole ustar format in general.

I would have really liked a flexible per-file compression meta-data tag
in the standard.  Until then, we have aging cpio replacements like afio.

-- Bryan

*NOTE:  This is the actual "disttag" versioning (i.e., technical
reasons) for pre-Fedora Core "community Linux" releases from Red Hat
that are now recommended for Fedora Legacy support (i.e., FC 0.8 is fka
"RHL" 8), in addition to any relevant trademark (i.e., non-technical)
considerations.

**NOTE:   legacy star used Sun's tar approach -- an ACL "attribute file" preceding the data file, but using the same name.  That way, if the tar program extracting it was "Sun EA/ACL aware," it would read it; if not, it would just overwrite the attribute file with the actual file when extracting.  Quite an ingenious approach.


--
Engineers scoff at me because I have IT certifications
 IT Pros scoff at me because I am a degreed engineer
    I see and understand both of their viewpoints
  Unfortunately Engineers and IT Pros only see in me
       what they dislike about the other trade
------------------------------------------------------
Bryan J. Smith                      b.j.smith@ieee.org

[LUNI] Incremental Tar

Martin Maney maney at pobox.com
Sun Jun 8 21:25:09 CDT 2003

On Sun, Jun 08, 2003 at 05:31:10PM -0500, Patrick R. White wrote:
> So isn't this a good reason to use the dump/restore utilities to begin
> with?

Maybe, but dump/restore is no panacea. Back in 1991, at LISA V, Elizabeth Zwicky of SRI presented a fascinating paper comparing the performance and problems of the then-extant versions of tar, cpio, pax, afio, as well as dump.

dump did well on most of the tests, but by its design it is capable of really frightening errors if the filesystem is not quiesced (in practice, unmounted or mounted r/o would appear to be necessary) during the dump. Also worth noting is that dump is quite filesystem-specific, and I seem to recall hearing that the ext2 version was interestingly broken a while ago. Since I don't employ dump, I can't tell you any more than that, sorry.

The only link I can find to the paper is from here:

http://www.phys.washington.edu/~belonis/

There's a postscript file and jpegs of the printed document. I thought there used to be something less cumbersome, but Google isn't finding it for me. It did find a good number of now-dead links, though. :-(

Ah, "zwicky backup torture" is a better search key. Still mostly passing mentions of this seminal work. Here's a more recent survey paper about *nix backup techniques:

http://citeseer.nj.nec.com/chervenak98protecting.html

Here's another useful compendium that seems to be currently maintained

http://www.cybertiggyr.com/gene/htdocs/unix_tape/unix_tape.html

OTOH, "cpio: Some Linux folks appear to use this" seems... odd.

Ah, google-diving!

--

A delicate balance is necessary between sticking with the things you know and can rely upon, and exploring things which have the potential to be better. Assuming that either of these strategies is the one true way is silly. -- Graydon Hoare

Re tar minor POSIX incompliance

>From: Paul Eggert <eggert@twinsun.com>

>> From: Joey Hess <joey@kitenet.net>
>> Date: Mon, 25 Mar 2002 14:57:20 -0500
>>
>> According to the test suite documentation, POSIX 10.1.1-12(A) says
>> that Fields mode, uid, gid, size, mtime, chksum, devmajor and
>> devminor are leading zero-filled octal numbers in ASCII and are
>> terminated by one or more space or null characters.

>OK, I'll change the behavior of GNU "tar" in a future release.

I am not sure what the text from Joey Hess should be related to...
... his mail did not reach this group.

From looking at the archives created by GNUtar, I see the following deviations:

-	Checksum field repeats a bug found in ancient TAR implementations.
	This seems to be a rudiment from early tests done by John Gilmore
	in PD tar where he did try to run "cmp" on PD-tar vs. Sun-tar
	archives.

	This is a minor deviation and easy to fix.

-	The devmajor/devminor fields are missing if the file is not
	a block/char device - here we see non left zero filled fields.

	A minor deviation that is easy to fix.

-	The Magic Version field contains spaces instead of "00".

	This is just a proof that GNUtar is not POSIX.1-1990 compliant
	and should not be changed before GNUtar has been validated to
	create POSIX.1 compliant archives.

...

>conformance by running the "tar" command.  A POSIX test suite should
>invoke the "pax" command instead.

While this is the correct answer from theory, you should take into account
that "pax" has not been accepted by a major number of people in the community.

AFAIK, LSB intends to be UNIX-98 compliant, so it would make sense to support
cpio/pax/tar in a way compliant to the SUSv2 document.

Let me comment on the current Linux status. We have:

-	GNUcpio which is neither POSIX.1-1990 compliant nor does it allow
	to archive files >= 2 GB.

	For a list of problems look into:

		ftp://ftp.fokus.gmd.de/pub/unix/star/README.otherbugs

-	GNUtar, which is not POSIX compliant either, but supports files >= 2 GB.

	Problems with archive exchange with POSIX compliant platforms:

	-	does not handle long filenames in a POSIX compliant way.
		This has become better with recent alpha releases, but
		gnutar -tvf archive still does not work at all.
		Archives containing long filenames and created with gtar
		cannot be read by POSIX (only) tar implementations correctly.

	-	Is, for unknown reasons, unable to list archives created with other
		TAR implementations (e.g. Sun's tar on Solaris or star).
		For an example look into:

			ftp://ftp.fokus.gmd.de/pub/unix/star/testscripts/README.gtarfail

-	Pax (the version fixed by Thorsten Kukuk) is POSIX.1-1990 compliant
	but it is not able to handle files >= 2 GB.

as part of commercial Linux distributions. From a standpoint of what people
might like to see, this could be better. A year 2002 POSIX OS should include at
least one program that creates POSIX compliant tar archives _and_ supports
large files.

People who get and compile software themselves may also use "star", which is
POSIX.1-1990 and POSIX.1-2001 compliant and supports files >= 2 GB.
So why is star missing from Linux distributions?


>Also, I should mention that GNU tar does not generate POSIX-format
>ustar archives, nor does it claim to.  Volunteers to fix this
>deficiency would be welcome, but that's a different topic.  It is a
>quality-of-implementation issue, and is not strictly a
>POSIX-conformance issue.

There is "star" which is POSIX compliant. A good idea would be to move
gnutar to /bin/gtar on Linux and put star on /bin/star and /bin/tar.
This way, Linux gets a POSIX compliant TAR, and users of gnutar are guaranteed
100% backward compatibility when calling "gtar".

ftp://ftp.fokus.gmd.de/pub/unix/star/aplha/

If you don't like to do the transition too fast, here is an idea for an
intermediate step:

Put star on /bin/star, install the star man page for "star" and "tar" and move
the GNUtar man page to "gtar".
/*--------------------------------------------------------------------------*/
Another topic:

From a discussion at CeBIT, I am now aware of the fact that LSB did
"standardise" on the GNUtar options at:

	http://www.linuxbase.org/spec/gLSB/gLSB/tar.html

Let me comment on this too:

It seems to be a bad idea to standardize TAR options that are incompatible
with POSIX standards. So let me first introduce a list of incompatible options
found in GNUtar. The complete list is in:

	ftp://ftp.fokus.gmd.de/pub/unix/star/aplha/STARvsGNUTAR

/*--------------------------------------------------------------------------*/
Gnu tar options that (in the single char variant) are incompatible:

BsS	-F, --info-script=FILE		run script at end of each tape (implies -M)
s	-L, --tape-length=NUM		change tape after writing NUM x 1024 bytes
s	-M, --multi-volume		create/list/extract multi-volume archive
s	-O, --to-stdout			extract files to standard output
sS (+)	-P, --absolute-names		don't strip leading `/'s from file names
s	-S, --sparse			handle sparse files efficiently
s	-T, -I, --files-from=NAME	get names to extract or create from file NAME
s	-U, --unlink-first		remove each file prior to extracting over it
s	-V, --label=NAME		create archive with volume name NAME
s	-d, --diff, --compare		find differences between archive and file system
sP	-l, --one-file-system		stay in local file system when creating archive
sP	-o, --old-archive, --portability write a V7 format archive

B	Incompatible with BSD tar
s	Incompatible with star
S	Incompatible with Sun's/SVr4 tar
P	Incompatible with POSIX

+)	This option is the only one where star deviates from other tar
	implementations, but as there is no other nice way to specify that
	the last record should be partial, and the star options -/ and -P
	(for Partial record) are easy to remember, I see no need to change star.

/*--------------------------------------------------------------------------*/

Please note that all these incompatibilities are "against" other TAR
implementations that are much older than GNUtar. As an example, take the
-M (do not cross mount points) option in star, which has been available since 1985.

It looks inappropriate to me to include single-char options from GNUtar that are
not found in other tar implementations in something like LSB.

To keep LSB systems from breaking POSIX.1-1990 and SUSv2, I would recommend
changing http://www.linuxbase.org/spec/gLSB/gLSB/tar.html so that the following
single-char options disappear (the order is the order from the web page):

-A	This option has low importance and there is no need to have a single
	char option for it.

-d	(*) Used by star with different semantics; the short option should not
	    be in the LSB standard.

-F	(*) Used with different semantics by BSD tar for a long time;
	    the short option should not be in the LSB standard.

-G	The short option should not be in the LSB standard.

-g	The short option should not be in the LSB standard.

-K	The short option should not be in the LSB standard.

-l	This option violates the POSIX/SUSv2 semantics, it needs to be removed
	from the LSB standard.

-L	(*) The short option should not be in the LSB standard.

-M	(*) The short option should not be in the LSB standard.

-N	The short option should not be in the LSB standard.

-o	This option violates the POSIX/SUSv2 semantics, it needs to be removed
	from the LSB standard.

-O	(*) The short option should not be in the LSB standard.

-P	(*) The short option should not be in the LSB standard.

-R	The short option should not be in the LSB standard.

-s	The short option should not be in the LSB standard.

-S	(*) The short option should not be in the LSB standard.

-T	(*) The short option should not be in the LSB standard.

-V	(*) The short option should not be in the LSB standard.

-W	The short option should not be in the LSB standard.


*) Used by one or more other TAR implementations with different semantics
so defining it in LSB creates problems.

Jörg

 EMail:joerg@schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
       js@cs.tu-berlin.de		(uni)  If you don't have iso-8859-1
       schilling@fokus.gmd.de		(work) chars I am J"org Schilling
 URL:   http://www.fokus.gmd.de/usr/schilling     ftp://ftp.fokus.gmd.de/pub/unix


--
To UNSUBSCRIBE, email to lsb-test-request@lists.linuxbase.org
with subject of "unsubscribe". Trouble? Email listmaster@lists.linuxbase.org

Recommended Links

Softpanorama hot topic of the month

Softpanorama Recommended


Reference

Solaris 9 tar manpage - Solaris tar understands ACLs, but GNU tar doesn't

AIX tar

GNU tar - Table of Contents

(gnu)tar - GNU version of tar archiving utility

Linux and Solaris ACLs - Backup

The Star tape archiver by Jörg Schilling, available at ftp://ftp.berlios.de/pub/star/, since version 1.4a07 supports backing up and restoring of POSIX Access Control Lists. For best results, it is recommended to use a recent star-1.5 version. Star is compatible with SUSv2 tar (UNIX-98 tar), understands the GNU tar archive extensions, and can generate pax archives.

download

GNU TAR

This manual page documents the GNU version of tar, an archiving program designed to store and extract files from an archive file known as a tarball. A tarball may be made on a tape drive, however, it is also common to write a tarball to a normal file. The first argument to tar must be one of the options Acdrtux, followed by any optional functions. The final arguments to tar are the names of the files or directories which should be archived. The use of a directory name always implies that the subdirectories below should be included in the archive.

Examples

tar -xvf foo.tar
verbosely extract foo.tar
tar -xzf foo.tar.gz
extract gzipped foo.tar.gz
tar -cjf foo.tar.bz2 bar/
create bzipped tar archive of the directory bar called foo.tar.bz2
tar -xjf foo.tar.bz2 -C bar/
extract bzipped foo.tar.bz2 after changing directory to bar
tar -xzf foo.tar.gz blah.txt
extract the file blah.txt from foo.tar.gz

Function Letters

One of the following options must be used:

-A, --catenate, --concatenate
append tar files to an archive
-c, --create
create a new archive
-d, --diff, --compare
find differences between archive and file system
-r, --append
append files to the end of an archive
-t, --list
list the contents of an archive
-u, --update
only append files that are newer than the copy in the archive
-x, --extract, --get
extract files from an archive
--delete
delete from the archive (not for use on mag tapes!)

Common Options

All Options

--atime-preserve
don't change access times on dumped files
-b, --blocking-factor N
block size of Nx512 bytes (default N=20)
-B, --read-full-blocks
reblock as we read (for reading 4.2BSD pipes)
--backup BACKUP-TYPE
backup files instead of deleting them using BACKUP-TYPE simple or numbered
--block-compress
block the output of compression program for tapes
-C, --directory DIR
change to directory DIR
--check-links
warn if number of hard links to the file on the filesystem mismatch the number of links recorded in the archive
--checkpoint
print directory names while reading the archive
-f, --file [HOSTNAME:]F
use archive file or device F (default "-", meaning stdin/stdout)
-F, --info-script F --new-volume-script F
run script at end of each tape (implies --multi-volume)
--force-local
archive file is local even if has a colon
--format FORMAT
selects output archive format
v7 - Unix V7
oldgnu - GNU tar <=1.12
gnu - GNU tar 1.13
ustar - POSIX.1-1988
posix - POSIX.1-2001
-g, --listed-incremental F
create/list/extract new GNU-format incremental backup
-G, --incremental
create/list/extract old GNU-format incremental backup
-h, --dereference
don't dump symlinks; dump the files they point to
--help
like this manpage, but not as cool
-i, --ignore-zeros
ignore blocks of zeros in archive (normally mean EOF)
--ignore-case
ignore case when excluding files
--ignore-failed-read
don't exit with non-zero status on unreadable files
--index-file FILE
send verbose output to FILE instead of stdout
-j, --bzip2
filter archive through bzip2, use to decompress .bz2 files
-k, --keep-old-files
keep existing files; don't overwrite them from archive
-K, --starting-file F
begin at file F in the archive
--keep-newer-files
do not overwrite files which are newer than the archive
-l, --one-file-system
stay in local file system when creating an archive
-L, --tape-length N
change tapes after writing N*1024 bytes
-m, --touch, --modification-time
don't extract file modified time
-M, --multi-volume
create/list/extract multi-volume archive
--mode PERMISSIONS
apply PERMISSIONS while adding files (see chmod(1))
-N, --after-date DATE, --newer DATE
only store files newer than DATE
--newer-mtime DATE
like --newer, but compares the data modification time (mtime) only
--no-anchored
match any subsequence of the name's components with --exclude
--no-ignore-case
use case-sensitive matching with --exclude
--no-recursion
don't recurse into directories
--no-same-permissions
apply user's umask when extracting files instead of recorded permissions
--no-wildcards
don't use wildcards with --exclude
--no-wildcards-match-slash
wildcards do not match slashes (/) with --exclude
--null
--files-from reads null-terminated names, disable --directory
--numeric-owner
always use numbers for user/group names
-o, --old-archive, --portability
like --format=v7; -o exhibits this behavior when creating an archive (deprecated behavior)
-o, --no-same-owner
do not attempt to restore ownership when extracting; -o exhibits this behavior when extracting an archive
-O, --to-stdout
extract files to standard output
--occurrence NUM
process only NUM occurrences of each named file; used with --delete, --diff, --extract, or --list
--overwrite
overwrite existing files and directory metadata when extracting
--overwrite-dir
overwrite directory metadata when extracting
--owner USER
change owner of extracted files to USER
-p, --same-permissions, --preserve-permissions
extract all protection information
-P, --absolute-names
don't strip leading '/'s from file names
--pax-option KEYWORD-LIST
used only with POSIX.1-2001 archives to modify the way tar handles extended header keywords
--posix
like --format=posix
--preserve
like --preserve-permissions --same-order
--acls
this option causes tar to store each file's ACLs in the archive.
--selinux
this option causes tar to store each file's SELinux security context information in the archive.
--xattrs
this option causes tar to store each file's extended attributes in the archive. This option also enables --acls and --selinux if they haven't been set already, due to the fact that the data for those are stored in special xattrs.
--no-acls
This option causes tar not to store each file's ACLs in the archive and not to extract any ACL information in an archive.
--no-selinux
this option causes tar not to store each file's SELinux security context information in the archive and not to extract any SELinux information in an archive.
--no-xattrs
this option causes tar not to store each file's extended attributes in the archive and not to extract any extended attributes in an archive. This option also enables --no-acls and --no-selinux if they haven't been set already.
-R, --record-number
show record number within archive with each message
--record-size SIZE
use SIZE bytes per record when accessing archives
--recursion
recurse into directories
--recursive-unlink
remove existing directories before extracting directories of the same name
--remove-files
remove files after adding them to the archive
--rmt-command CMD
use CMD instead of the default /usr/sbin/rmt
--ssh-command CMD
use remote CMD instead of ssh(1)
-s, --same-order, --preserve-order
list of names to extract is sorted to match archive
-S, --sparse
handle sparse files efficiently
--same-owner
create extracted files with the same ownership
--show-defaults
display the default options used by tar
--show-omitted-dirs
print directories tar skips while operating on an archive
--strip-components NUMBER, --strip-path NUMBER
strip NUMBER leading components from file names before extraction
(tar-1.14 uses --strip-path, tar-1.14.90+ uses --strip-components)
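As a quick illustration of --strip-components (all file and directory names below are invented for the demo):

```shell
# Create an archive whose members start with "pkg-1.0/",
# then drop that leading component while extracting.
mkdir -p pkg-1.0/src
echo code > pkg-1.0/src/main.c
tar -czf pkg.tar.gz pkg-1.0

mkdir -p unpacked
tar -xzf pkg.tar.gz --strip-components=1 -C unpacked
ls unpacked    # shows src, not pkg-1.0
```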

--suffix SUFFIX
use SUFFIX instead of default '~' when backing up files
-T, --files-from F
get names to extract or create from file F
--totals
print total bytes written with --create
-U, --unlink-first
remove existing files before extracting files of the same name
--use-compress-program PROG
access the archive through PROG, which is generally a compression program (it must accept -d)
--utc
display file modification dates in UTC
-v, --verbose
verbosely list files processed
-V, --label NAME
create archive with volume name NAME
--version
print tar program version number
--volno-file F
keep track of which volume of a multi-volume archive tar is working on in file F; used with --multi-volume
-w, --interactive, --confirmation
ask for confirmation for every action
-W, --verify
attempt to verify the archive after writing it
--wildcards
use wildcards with --exclude
--wildcards-match-slash
wildcards match slashes (/) with --exclude
--exclude PATTERN
exclude files based upon PATTERN
-X, --exclude-from FILE
exclude files listed in FILE
-Z, --compress, --uncompress
filter the archive through compress
-z, --gzip, --gunzip, --ungzip
filter the archive through gzip
-[0-7][lmh]
specify drive and density


Old News ;-)

Use of tar for system backup


# save everything except /mnt and /proc.

time tar cpPzf "$TARBALL" --directory=/ --one-file-system --xattrs \
    --exclude=mnt --exclude=proc .

Warning: --exclude arguments are actually treated as shell glob patterns, not literal paths, so quote any wildcards in them.
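A small self-contained check of this behavior, with the glob quoted so the shell does not expand it (the file names are invented for the demo):

```shell
# Build a throwaway tree and archive it, excluding *.log files.
mkdir -p demo/logs
echo data  > demo/keep.txt
echo noise > demo/logs/app.log

# The pattern is quoted so tar, not the shell, performs the matching.
tar -czf demo.tar.gz --exclude='*.log' demo
tar -tzf demo.tar.gz    # keep.txt is listed, app.log is not
```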

[Jul 20, 2017] Server Backup Procedures

Jul 20, 2017 | www.tldp.org
Backing up with ``tar'':

If you decide to use ``tar'' as your backup solution, you should probably take the time to get to know the various command-line options that are available; type " man tar " for a comprehensive list. You will also need to know how to access the appropriate backup media; although all devices are treated like files in the Unix world, if you are writing to a character device such as a tape, the name of the "file" is the device name itself (eg. `` /dev/nst0 '' for a SCSI-based tape drive).

The following command will perform a backup of your entire Linux system onto the `` /archive/ '' file system, with the exception of the `` /proc/ '' pseudo-filesystem, any mounted file systems in `` /mnt/ '', the `` /archive/ '' file system (no sense backing up our backup sets!), as well as Squid's rather large cache files (which are, in my opinion, a waste of backup media and unnecessary to back up):


tar -zcvpf /archive/full-backup-`date '+%d-%B-%Y'`.tar.gz \
    --directory / --exclude=mnt --exclude=proc --exclude=var/spool/squid .


Don't be intimidated by the length of the command above! As we break it down into its components, you will see the beauty of this powerful utility.

The above command specifies the options `` z '' (compress; the backup data will be compressed with ``gzip''), `` c '' (create; an archive file is being created), `` v '' (verbose; display a list of files as they get backed up), `` p '' (preserve permissions; file protection information will be "remembered" so they can be restored). The `` f '' (file) option states that the very next argument will be the name of the archive file (or device) being written. Notice how a filename which contains the current date is derived, simply by enclosing the ``date'' command between two back-quote characters. A common naming convention is to add a `` tar '' suffix for non-compressed archives, and a `` tar.gz '' suffix for compressed ones.

The `` --directory '' option tells tar to first switch to the following directory path (the `` / '' directory in this example) prior to starting the backup. The `` --exclude '' options tell tar not to bother backing up the specified directories or files. Finally, the `` . '' character tells tar that it should back up everything in the current directory.

Note: It is important to realize that the options to tar are cAsE-sEnSiTiVe! In addition, most of the options can be specified as either single mnemonic characters (eg. ``f''), or by their easier-to-memorize full option names (eg. ``file''). The mnemonic representations are identified by prefixing them with a ``-'' character, while the full names are prefixed with two such characters. Again, see the "man" pages for information on using tar.

Another example, this time writing only the specified file systems (as opposed to writing them all with exceptions as demonstrated in the example above) onto a SCSI tape drive follows:


tar -cvpf /dev/nst0 --label="Backup set created on `date '+%d-%B-%Y'`." \
    --directory / --exclude=var/spool/squid etc home usr/local var/spool


In the above command, notice that the `` z '' (compress) option is not used. I strongly recommend against writing compressed data to tape, because if data on a portion of the tape becomes corrupted, you will lose your entire backup set! However, archive files stored without compression have a very high recoverability for non-affected files, even if portions of the tape archive are corrupted.

Because the tape drive is a character device, it is not possible to specify an actual file name. Therefore, the file name used as an argument to tar is simply the name of the device, `` /dev/nst0 '', the first tape device on the SCSI bus.

Note: The `` /dev/nst0 '' device does not rewind after the backup set is written; therefore it is possible to write multiple sets on one tape. (You may also refer to the device as `` /dev/st0 '', in which case the tape is automatically rewound after the backup set is written.)

Since we aren't able to specify a filename for the backup set, the `` --label '' option can be used to write some information about the backup set into the archive file itself.

Finally, only the files contained in the `` /etc/ '', `` /home/ '', `` /usr/local '', and `` /var/spool/ '' (with the exception of Squid's cache data files) are written to the tape.

When working with tapes, you can use the following commands to rewind, and then eject your tape:


mt -f /dev/nst0 rewind



mt -f /dev/nst0 offline


Tip: You will notice that leading `` / '' (slash) characters are stripped by tar when an archive file is created. This is tar's default mode of operation, and it is intended to protect you from overwriting critical files with older versions of those files, should you mistakenly recover the wrong file(s) in a restore operation. If you really dislike this behavior (remember, it's a feature !) you can specify the `` --absolute-names '' option to tar, which will preserve the leading slashes. However, I don't recommend doing so, as it is dangerous!
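The stripping described in the tip is easy to verify on throwaway data (the /tmp path below is just for the demo):

```shell
# Archive a file by its absolute name, then list the archive:
# tar removes the leading '/' from the stored member name.
mkdir -p /tmp/demo-abs/etc
echo hi > /tmp/demo-abs/etc/hosts
tar -cf abs.tar /tmp/demo-abs/etc/hosts 2>/dev/null
tar -tf abs.tar    # prints tmp/demo-abs/etc/hosts, without the leading slash
```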

[Feb 20, 2017] Stupid tar Tricks

Aug 26, 2010 | www.linuxjournal.com

One of the most common programs on Linux systems for packaging files is the venerable tar. tar is short for tape archive, and originally, it would archive your files to a tape device. Now, you're more likely to use a file to make your archive. To use a tarfile, use the command-line option -f . To create a new tarfile, use the command-line option -c. To extract files from a tarfile, use the option -x. You also can compress the resulting tarfile via two methods. To use bzip2, use the -j option, or for gzip, use the -z option.

Instead of using a tarfile, you can output your tarfile to stdout or input your tarfile from stdin by using a hyphen (-). With these options, you can tar up a directory and all of its subdirectories by using:

tar cf archive.tar dir

Then, extract it in another directory with:

tar xf archive.tar

When creating a tarfile, you can assign a volume name with the option -V . You can move an entire directory structure with tar by executing:

tar cf - dir1 | (cd dir2; tar xf -)

You can go even farther and move an entire directory structure over the network by executing:

tar cf - dir1 | ssh remote_host "( cd /path/to/dir2; tar xf - )"

GNU tar includes an option that lets you skip the cd part, -C /path/to/dest. You also can interact with tarfiles over the network by including a host part to the tarfile name. For example:

tar cvf username@remotehost:/path/to/dest/archive.tar dir1

This is done by using rsh as the communication mechanism. If you want to use something else, like ssh, use the command-line option --rsh-command CMD. Sometimes, you also may need to give the path to the rmt executable on the remote host. On some hosts, it won't be in the default location /usr/sbin/rmt. So, all together, this would look like:

tar -c -v --rsh-command ssh --rmt-command /sbin/rmt \
    -f username@host:/path/to/dest/archive.tar dir1

Although tar originally used to write its archive to a tape drive, it can be used to write to any device. For example, if you want to get a dump of your current filesystem to a secondary hard drive, use:

tar -cvzf /dev/hdd /

Of course, you need to run the above command as root. If you are writing your tarfile to a device that is too small, you can tell tar to do a multivolume archive with the -M option. For those of you who are old enough to remember floppy disks, you can back up your home directory to a series of floppy disks by executing:

tar -cvMf /dev/fd0 $HOME

If you are doing backups, you may want to preserve the file permissions. You can do this with the -p option. If you have symlinked files on your filesystem, you can dereference the symlinks with the -h option. This tells tar actually to dump the file that the symlink points to, not just the symlink.
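A minimal check of -h on dummy files (names invented for the demo):

```shell
# With -h, tar stores the file a symlink points to, not the symlink itself.
echo real > target.txt
ln -s target.txt link.txt
tar -chf deref.tar link.txt

mkdir -p out
tar -xf deref.tar -C out
cat out/link.txt    # prints "real"; out/link.txt is a regular file, not a link
```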

Along the same lines, if you have several filesystems mounted, you can tell tar to stick to only one filesystem with the option -l. Hopefully, this gives you lots of ideas for ways to archive your files.

[Feb 04, 2017] How do I fix mess created by accidentally untarred files in the current dir, aka tar bomb

In such cases the UID of the files is often different from the UID of "legitimate" files in the polluted directories, and you can use this fact for quick elimination of the tar bomb. But the idea of using the list of files from the tar bomb to eliminate the offending files also works, if you observe some precautions: some of the directories that were created can have the same names as pre-existing directories. Never run rm in -exec or via xargs without testing first.
Notable quotes:
"... You don't want to just rm -r everything that tar tf tells you, since it might include directories that were not empty before unpacking! ..."
"... Another nice trick by @glennjackman, which preserves the order of files, starting from the deepest ones. Again, remove echo when done. ..."
"... One other thing: you may need to use the tar option --numeric-owner if the user names and/or group names in the tar listing make the names start in an unpredictable column. ..."
"... That kind of (antisocial) archive is called a tar bomb because of what it does. Once one of these "explodes" on you, the solutions in the other answers are way better than what I would have suggested. ..."
"... The easiest (laziest) way to do that is to always unpack a tar archive into an empty directory. ..."
"... The t option also comes in handy if you want to inspect the contents of an archive just to see if it has something you're looking for in it. If it does, you can, optionally, just extract the file(s) you want. ..."
Feb 04, 2017 | superuser.com

linux - Undo tar file extraction mess - Super User

first try to issue

tar tf archive
tar will list the contents line by line.

This can be piped to xargs directly, but beware : do the deletion very carefully. You don't want to just rm -r everything that tar tf tells you, since it might include directories that were not empty before unpacking!

You could do

tar tf archive.tar | xargs -d'\n' rm -v
tar tf archive.tar | sort -r | xargs -d'\n' rmdir -v

to first remove all files that were in the archive, and then the directories that are left empty.

sort -r (glennjackman suggested tac instead of sort -r in the comments to the accepted answer, which also works since tar 's output is regular enough) is needed to delete the deepest directories first; otherwise a case where dir1 contains a single empty directory dir2 will leave dir1 after the rmdir pass, since it was not empty before dir2 was removed.

This will generate a lot of

rm: cannot remove `dir/': Is a directory


and

rmdir: failed to remove `dir/': Directory not empty
rmdir: failed to remove `file': Not a directory

Shut this up with 2>/dev/null if it annoys you, but I'd prefer to keep as much information on the process as possible.

And don't do it until you are sure that you match the right files. And perhaps try rm -i to confirm everything. And have backups, eat your breakfast, brush your teeth, etc.
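The two-pass cleanup above can be rehearsed safely on a throwaway tree (all the names below are invented for the demo):

```shell
# Build a small "tar bomb" (no top-level directory inside the archive).
mkdir -p src/sub
echo bomb > src/sub/file.txt
tar -cf bomb.tar -C src sub

# Unpack it into a directory that already holds an unrelated file.
mkdir -p work
echo precious > work/precious.txt
tar -xf bomb.tar -C work

# Pass 1 removes the files, pass 2 the now-empty directories;
# errors from rm/rmdir on the "wrong" entries are expected and harmless.
cd work
tar -tf ../bomb.tar | xargs -d'\n' rm -v 2>/dev/null || true
tar -tf ../bomb.tar | sort -r | xargs -d'\n' rmdir -v 2>/dev/null || true
cd ..
ls work    # only precious.txt is left
```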

===

List the contents of the tar file like so:

tar tzf myarchive.tar.gz

Then, delete those file names by iterating over that list:

while IFS= read -r file; do echo "$file"; done < <(tar tzf myarchive.tar.gz)

This will still just list the files that would be deleted. Replace echo with rm if you're really sure these are the ones you want to remove. And maybe make a backup to be sure.

In a second pass, remove the directories that are left over:

while IFS= read -r file; do rmdir "$file"; done < <(tar tzf myarchive.tar.gz)

Since rmdir refuses to remove non-empty directories, this prevents directories that already existed (and still contain files) from being deleted.

Another nice trick by @glennjackman, which preserves the order of files, starting from the deepest ones. Again, remove echo when done.

tar tvf myarchive.tar | tac | xargs -d'\n' echo rm

This could then be followed by the normal rmdir cleanup.


Here's a possibility that will take the extracted files and move them to a subdirectory, cleaning up your main folder.
    #!/usr/bin/perl -w

    use strict;
    use Getopt::Long;

    my $clean_folder = "clean";
    my $DRY_RUN;
    die "Usage: $0 [--dry] [--clean=dir-name]\n"
        if ( !GetOptions("dry!"    => \$DRY_RUN,
                         "clean=s" => \$clean_folder) );

    # Protect the 'clean_folder' string from shell substitution
    $clean_folder =~ s/'/'\\''/g;

    # Process the "tar tv" listing and output a shell script.
    print "#!/bin/sh\n" if ( !$DRY_RUN );
    while (<>)
    {
        chomp;

        # Strip out permissions string and the directory entry from the 'tar' list
        my $perms  = substr($_, 0, 10);
        my $dirent = substr($_, 48);

        # Drop entries that are in subdirectories
        next if ( $dirent =~ m:/.: );

        # If we're in "dry run" mode, just list the permissions and the directory
        # entries.
        #
        if ( $DRY_RUN )
        {
            print "$perms|$dirent\n";
            next;
        }

        # Emit the shell code to clean up the folder
        $dirent =~ s/'/'\\''/g;
        print "mv -i '$dirent' '$clean_folder'/.\n";
    }

Save this to the file fix-tar.pl and then execute it like this:

 $ tar tvf myarchive.tar | perl fix-tar.pl --dry

This will confirm that your tar list is like mine. You should get output like:

-rw-rw-r--|batch
-rw-rw-r--|book-report.png
-rwx------|CaseReports.png
-rw-rw-r--|caseTree.png
-rw-rw-r--|tree.png
drwxrwxr-x|sample/

If that looks good, then run it again like this:

$ mkdir cleanup
$ tar tvf myarchive.tar | perl fix-tar.pl --clean=cleanup > fixup.sh

The fixup.sh script will be the shell commands that will move the top-level files and directories into a "clean" folder (in this instance, the folder called cleanup). Have a peek through this script to confirm that it's all kosher. If it is, you can now clean up your mess with:

 $ sh fixup.sh

I prefer this kind of cleanup because it doesn't destroy anything that isn't already destroyed by being overwritten by that initial tar xv.

Note: if that initial dry run output doesn't look right, you should be able to fiddle with the numbers in the two substr function calls until they look proper. The $perms variable is used only for the dry run so really only the $dirent substring needs to be proper.

One other thing: you may need to use the tar option --numeric-owner if the user names and/or group names in the tar listing make the names start in an unpredictable column.


===

That kind of (antisocial) archive is called a tar bomb because of what it does. Once one of these "explodes" on you, the solutions in the other answers are way better than what I would have suggested.

The best "solution", however, is to prevent the problem in the first place.

The easiest (laziest) way to do that is to always unpack a tar archive into an empty directory. If it includes a top level directory, then you just move that to the desired destination. If not, then just rename your working directory (the one that was empty) and move that to the desired location.

If you just want to get it right the first time, you can run tar -tvf archive-file.tar | less and it will list the contents of the archive so you can see how it is structured and then do what is necessary to extract it to the desired location to start with.

The t option also comes in handy if you want to inspect the contents of an archive just to see if it has something you're looking for in it. If it does, you can, optionally, just extract the file(s) you want.

[Nov 06, 2016] GNU tar 1.29 6.1 Choosing and Naming Archive Files

www.gnu.org

The `-C' option allows you to avoid using subshells:

$ tar -C sourcedir -cf - . | tar -C targetdir -xpf -
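A self-contained check of that pipeline (the directory names are invented for the demo):

```shell
# Copy the contents of one directory into another via a tar pipe,
# using -C on both sides instead of cd in a subshell.
mkdir -p sourcedir targetdir
echo hello > sourcedir/file.txt
tar -C sourcedir -cf - . | tar -C targetdir -xpf -
cat targetdir/file.txt    # prints "hello"
```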

[Nov 06, 2016] How to restore a backup from a tgz file in linux

serverfault.com

Antonio Alimba Jun 9 '14 at 13:01

How can I restore a backup.tgz file generated on another Linux server onto my own server? I tried the following command:
tar xvpfz backup.tgz -C /

The above command worked, but it replaced existing system files, which stopped my Linux server from working properly.

How can I restore without running into trouble?

You can use the --skip-old-files option to tell tar not to overwrite existing files.

You could still run into problems with the backup files if the software versions differ between the two servers. Some data file structure changes might have happened, and things might stop working.

A more refined backup process should be developed.
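A small rehearsal of --skip-old-files on dummy data (all names invented for the demo; the option requires a reasonably recent GNU tar, 1.28 or later):

```shell
# Create a tiny backup containing conf.txt with new contents.
mkdir -p srcdir restore
echo new > srcdir/conf.txt
tar -czf backup.tgz -C srcdir conf.txt

# A pre-existing file in the restore target must survive extraction.
echo old > restore/conf.txt
tar -xzf backup.tgz --skip-old-files -C restore
cat restore/conf.txt    # still prints "old"
```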

[Nov 06, 2016] Backup and restore using tar

www.unix.com
Q:

tar -cjpf /backup /bin /etc /home /opt /root /sbin /usr /var /boot

When I include the / directory it also archives the /lib, /sys, /proc and /dev filesystems too (and more, but these seem to be the problem directories).

Although I have never tried to restore the /sys, /proc and /dev directories, I have not seen anyone mention that you can't restore /lib; but when I tried it, the server crashed and would not even start the kernel (not even in single-user mode).

Can anyone let me know why this happened and provide a more comprehensive list, beyond the four directories mentioned, of what should and shouldn't be backed up and restored? Or point me to a useful site that might explain why you should or shouldn't back up each one?

A:
There's no point in backing up things like /proc, because it is the dynamic representation of processes and memory working sets (virtual memory).

However, directories like /lib, although problematic to restore on a running system, are ones you would definitely need in a disaster recovery situation. You would restore /lib to hard disk in single-user or CD-boot mode.

So you need to back up all non-process, non-memory files for the backup to be sufficient for recovery. That doesn't mean, however, that you should attempt to restore them on a running (multi-user) system.

Full Hard-Drive Backup with Linux Tar

6.4 Excluding Some Files

To avoid operating on files whose names match a particular pattern, use the `--exclude' or `--exclude-from' options.

`--exclude=pattern'
Causes tar to ignore files that match the pattern.

The `--exclude=pattern' option prevents any file or member whose name matches the shell wildcard (pattern) from being operated on. For example, to create an archive with all the contents of the directory `src' except for files whose names end in `.o', use the command `tar -cf src.tar --exclude='*.o' src'.

You may give multiple `--exclude' options.

`--exclude-from=file'
`-X file'
Causes tar to ignore files that match the patterns listed in file.

Use the `--exclude-from' option to read a list of patterns, one per line, from file; tar will ignore files matching those patterns. Thus if tar is called as `tar -c -X foo .' and the file `foo' contains a single line `*.o', no files whose names end in `.o' will be added to the archive.

Notice that lines from file are read verbatim. One of the frequent errors is leaving some extra whitespace after a file name, which is difficult to catch using text editors.

However, empty lines are OK.
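A quick check of `--exclude-from' (`-X'); remember there must be no trailing whitespace after a pattern (the file names are invented for the demo):

```shell
# One pattern per line in the exclude file.
mkdir -p proj
touch proj/main.c proj/main.o proj/README
printf '%s\n' '*.o' > exclude.lst

tar -cf proj.tar -X exclude.lst proj
tar -tf proj.tar    # main.o is missing from the listing
```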

When archiving directories that are under some version control system (VCS), it is often convenient to read exclusion patterns from this VCS' ignore files (e.g. `.cvsignore', `.gitignore', etc.) The following options provide such possibility:

 
`--exclude-vcs-ignores'
Before archiving a directory, see if it contains any of the following files: `cvsignore', `.gitignore', `.bzrignore', or `.hgignore'. If so, read ignore patterns from these files.

The patterns are treated much as the corresponding VCS would treat them, i.e.:

`.cvsignore'
Contains shell-style globbing patterns that apply only to the directory where this file resides. No comments are allowed in the file. Empty lines are ignored.
`.gitignore'
Contains shell-style globbing patterns. Applies to the directory where `.gitignore' is located and all its subdirectories.

Any line beginning with a `#' is a comment. Backslash escapes the comment character.

`.bzrignore'
Contains shell globbing patterns and regular expressions (if prefixed with `RE:'). Patterns affect the directory and all its subdirectories.

Any line beginning with a `#' is a comment.

`.hgignore'
Contains POSIX regular expressions. The line `syntax: glob' switches to shell globbing patterns. The line `syntax: regexp' switches back. Comments begin with a `#'. Patterns affect the directory and all its subdirectories.
`--exclude-ignore=file'
Before dumping a directory, tar checks if it contains file. If so, exclusion patterns are read from this file. The patterns affect only the directory itself.
`--exclude-ignore-recursive=file'
Same as `--exclude-ignore', except that the patterns read affect both the directory where file resides and all its subdirectories.
 
`--exclude-vcs'
Exclude files and directories used by following version control systems: `CVS', `RCS', `SCCS', `SVN', `Arch', `Bazaar', `Mercurial', and `Darcs'.

As of version 1.29, the following files are excluded:

  • `CVS/', and everything under it
  • `RCS/', and everything under it
  • `SCCS/', and everything under it
  • `.git/', and everything under it
  • `.gitignore'
  • `.gitmodules'
  • `.gitattributes'
  • `.cvsignore'
  • `.svn/', and everything under it
  • `.arch-ids/', and everything under it
  • `{arch}/', and everything under it
  • `=RELEASE-ID'
  • `=meta-update'
  • `=update'
  • `.bzr'
  • `.bzrignore'
  • `.bzrtags'
  • `.hg'
  • `.hgignore'
  • `.hgtags'
  • `_darcs'
`--exclude-backups'
Exclude backup and lock files. This option causes exclusion of files that match the following shell globbing patterns:
.#*
*~
#*#

When creating an archive, the `--exclude-caches' option family causes tar to exclude all directories that contain a cache directory tag. A cache directory tag is a short file with the well-known name `CACHEDIR.TAG' and having a standard header specified in http://www.brynosaurus.com/cachedir/spec.html. Various applications write cache directory tags into directories they use to hold regenerable, non-precious data, so that such data can be more easily excluded from backups.

There are three `exclude-caches' options, each providing a different exclusion semantics:

`--exclude-caches'
Do not archive the contents of the directory, but archive the directory itself and the `CACHEDIR.TAG' file.
`--exclude-caches-under'
Do not archive the contents of the directory, nor the `CACHEDIR.TAG' file, archive only the directory itself.
`--exclude-caches-all'
Omit directories containing `CACHEDIR.TAG' file entirely.
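A sketch of `--exclude-caches' in action; the tag file must begin with the signature line from the specification cited above (the directory names are invented for the demo):

```shell
# Mark app/cache as a cache directory, then archive with --exclude-caches:
# the directory and CACHEDIR.TAG are kept, the cached contents are not.
mkdir -p app/cache
echo data > app/keep.txt
echo junk > app/cache/blob
printf 'Signature: 8a477f597d28d172789f06886806bc55\n' > app/cache/CACHEDIR.TAG

tar -cf app.tar --exclude-caches app
tar -tf app.tar    # lists keep.txt and CACHEDIR.TAG, but not blob
```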

Another option family, `--exclude-tag', provides a generalization of this concept. It takes a single argument, a file name to look for. Any directory that contains this file will be excluded from the dump. Similarly to `exclude-caches', there are three options in this option family:

`--exclude-tag=file'
Do not dump the contents of the directory, but dump the directory itself and the file.
`--exclude-tag-under=file'
Do not dump the contents of the directory, nor the file, archive only the directory itself.
`--exclude-tag-all=file'
Omit directories containing file file entirely.

Multiple `--exclude-tag*' options can be given.

For example, given this directory:

 
$ find dir
dir
dir/blues
dir/jazz
dir/folk
dir/folk/tagfile
dir/folk/sanjuan
dir/folk/trote

The `--exclude-tag' will produce the following:

 
$ tar -cf archive.tar --exclude-tag=tagfile -v dir
dir/
dir/blues
dir/jazz
dir/folk/
tar: dir/folk/: contains a cache directory tag tagfile;
  contents not dumped
dir/folk/tagfile

Both the `dir/folk' directory and its tagfile are preserved in the archive, however the rest of files in this directory are not.

Now, using the `--exclude-tag-under' option will exclude `tagfile' from the dump, while still preserving the directory itself, as shown in this example:

 
$ tar -cf archive.tar --exclude-tag-under=tagfile -v dir
dir/
dir/blues
dir/jazz
dir/folk/
./tar: dir/folk/: contains a cache directory tag tagfile;
  contents not dumped

Finally, using `--exclude-tag-all' omits the `dir/folk' directory entirely:

 
$ tar -cf archive.tar --exclude-tag-all=tagfile -v dir
dir/
dir/blues
dir/jazz
./tar: dir/folk/: contains a cache directory tag tagfile;
  directory not dumped


Problems with Using the exclude Options

Some users find `exclude' options confusing. Here are some common pitfalls:

  • The main operating mode of tar does not act on a file name explicitly listed on the command line, if one of its file name components is excluded. In the example above, if you create an archive and exclude files that end with `*.o', but explicitly name the file `dir.o/foo' after all the options have been listed, `dir.o/foo' will be excluded from the archive.
  • You can sometimes confuse the meanings of `--exclude' and `--exclude-from'. Be careful: use `--exclude' when files to be excluded are given as a pattern on the command line. Use `--exclude-from' to introduce the name of a file which contains a list of patterns, one per line; each of these patterns can exclude zero, one, or many files.
  • When you use `--exclude=pattern', be sure to quote the pattern parameter, so GNU tar sees wildcard characters like `*'. If you do not do this, the shell might expand the `*' itself using files at hand, so tar might receive a list of files instead of one pattern, or none at all, making the command somewhat illegal. This might not correspond to what you want.

    For example, write:

     
    $ tar -c -f archive.tar --exclude '*.o' directory
    

    rather than:

     
    # Wrong!
    $ tar -c -f archive.tar --exclude *.o directory
    
  • You must use shell syntax, or globbing, rather than regexp syntax, when using exclude options in tar. If you try to use regexp syntax to describe files to be excluded, your command might fail.
  • In earlier versions of tar, what is now the `--exclude-from' option was called `--exclude' instead. Now, `--exclude' applies to patterns listed on the command line and `--exclude-from' applies to patterns listed in a file.

Exclude File

tar has the ability to ignore specified files and directories listed in a special file. The location of that file is specified with the option -X. The syntax is one pattern per line, and tar treats the patterns as shell wildcards (globs). For example:

# Not old backups
/opt/backup/arch-full*

# Not temporary files
/tmp/

# Not the cache for pacman
/var/cache/pacman/pkg/

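Such an exclude file can be exercised on a toy directory tree. The sketch below assumes GNU tar and uses made-up paths that mirror the sample above; note that a trailing slash or stray whitespace in a pattern can prevent it from matching, so the patterns here are written without them:

```shell
# Toy tree standing in for a real filesystem (all paths are invented).
cd "$(mktemp -d)"
mkdir -p tree/opt/backup tree/tmp tree/etc
touch tree/opt/backup/arch-full-old.tar tree/etc/fstab tree/tmp/scratch

# One pattern per line; shell-style wildcards, no trailing whitespace.
printf '%s\n' 'tree/opt/backup/arch-full*' 'tree/tmp' > exclude.list

# -X (long form: --exclude-from) points tar at the pattern file.
tar -c -f backup.tar -X exclude.list tree

tar -tf backup.tar   # lists tree/etc/fstab; the excluded paths are absent
```

Excluding a directory (here `tree/tmp`) also excludes everything beneath it.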
See BackupYourSystem-TAR (Community Help Wiki) for a fuller example.


FAIR USE NOTICE This site contains copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available in our efforts to advance understanding of environmental, political, human rights, economic, democracy, scientific, and social justice issues, etc. We believe this constitutes a 'fair use' of any such copyrighted material as provided for in section 107 of the US Copyright Law. In accordance with Title 17 U.S.C. Section 107, the material on this site is distributed without profit exclusively for research and educational purposes. If you wish to use copyrighted material from this site for purposes of your own that go beyond 'fair use', you must obtain permission from the copyright owner.

Copyright © 1996-2016 by Dr. Nikolai Bezroukov. www.softpanorama.org was created as a service to the UN Sustainable Development Networking Programme (SDNP) in the author's free time. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License.


Last modified: February 21, 2017