
How to have a safer rm command and avoid some blunders


Introduction

The command rm is as important as it is primitive. Like a blade, in shaky hands it can cause catastrophic damage to server filesystems.

Many Unix sysadmin horror stories are related to unintended consequences, such as side effects of classic Unix commands. One of the most prominent of such commands is rm, which has several idiosyncrasies that are either easily forgotten or unknown to novice sysadmins. For example, the behavior of rm -r .* can easily be forgotten from one encounter to another, especially if a sysadmin works with both Unix and Windows servers. The consequences, if it is executed as root, are tragic...

While most, probably 99.9%, of operations with rm are successful, it is the remaining 0.1% that create a lot of damage and are the subject of sysadmin folklore.

The command rm has relatively few options. Two of them, -i and -I, are especially important; both are discussed below.

Linux uses the GNU implementation, which as of April 2018 has the following options (note the relatively new option -I):

  -f, --force           ignore nonexistent files and arguments, never prompt
  -i                    prompt before every removal
  -I                    prompt once before removing more than three files, or
                        when removing recursively
  --interactive[=WHEN]  prompt according to WHEN: never, once (-I), or
                        always (-i); without WHEN, prompt always
  --one-file-system     when removing a hierarchy recursively, skip any
                        directory that is on a different file system from the
                        corresponding command-line argument
  --no-preserve-root    do not treat '/' specially
  --preserve-root       do not remove '/' (default)
  -r, -R, --recursive   remove directories and their contents recursively
  -d, --dir             remove empty directories
  -v, --verbose         explain what is being done

Wiping out useful data by running rm with wrong parameters or options is probably the most common blunder committed by both novice and seasoned sysadmins. It happens rarely, but the results are often devastating. This danger can't be completely avoided, but in addition to having an up-to-date backup, there are several steps you can take to reduce the chances of major damage to the data:

  1. Do a backup before any dangerous operation. Always back up /etc on the server you plan to work on at the beginning of your working day. This can, and probably should, be done from dot scripts (such as .bash_profile) on login, especially if you plan to delete files from system directories such as /etc, /boot, or /root, or if the operation you intend to perform is complex and requires traversing the filesystem with find's -exec or -delete options. That's Rule No. 1.
  2. Use a wrapper such as saferm or safe-rm. It really helps to prevent many typical cases of accidental deletion of important system and user files.
  3. If you plan to delete multiple files/directories using rm -r, find -exec (see also Typical Errors In Using Find), or wildcards, first run the ls, tree, or find command with the argument you intend to use in the rm command and visually verify that the list is correct. Wrappers for the rm command do this automatically.
  4. It is much safer to delete individual files using a file manager like WinSCP or Midnight Commander than to do it directly by typing on the command line. A file manager gives you very important visual feedback, and you are less likely to make a mistake. You also do not need to type the name of the file or directory to be deleted, which helps to avoid a whole class of errors.
  5. When you are programming a simple script for file deletion, hard-code part of the path in the generated rm command (or any other destructive command). In other words, writing rm $MYDIR is a very bad idea; it should be something like rm /home/joeuser/$MYDIR. This protects you in cases where the variable is for some reason left undefined and the command would otherwise mistakenly operate on the root directory or the current directory. Similarly, "chmod -R 755 /Backup/$mypath" is much safer than "chmod -R 755 $mypath". (See the sketch after this list for a way to make the shell itself catch unset variables.)
     
  6. Move files to a Trash folder instead of deleting them, if you have enough space for the files you are removing. You can delete them from the Trash folder later.
     
  7. Use option -I instead of -i when deleting multiple files. First of all, the -i option (which is aliased by default in Red Hat and other distributions) is really annoying. Moreover, after a while the answer "yes" becomes automatic, ingrained in your brain, and that's a huge danger. Many users eventually disable it, either by using option -f or by specifying the full path to rm, thus avoiding the alias. As option -I is less intrusive, it can be recommended as a better replacement. Unfortunately it does not solve the problem of reckless actions, but probably nothing can solve that. It prompts you only once before removing more than three files, or when removing recursively. See rm(1):

    If the -I or --interactive=once option is given, and there are more than three files or the -r, -R, or --recursive are given, then rm prompts the user for whether to proceed with the entire operation. If the response is not affirmative, the entire command is aborted.

  8. To delete subdirectories and files whose names start with a dot, you can use "shopt -s dotglob", which makes * match dotfiles (but never . or ..).
  9. Do not improvise complex, potentially destructive commands on the command line without necessity. Command history is your friend. If the command is complex or destructive, it makes sense to write it in an editor first instead of improvising it directly on the command line. While typing on the command line you can hit the Enter key or insert a blank where you do not intend to. Using an editor helps to avoid those blunders.
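A minimal sketch of two of these habits combined (the paths and the variable name are illustrative, not from any real script): preview the expansion first, and let the shell itself catch an unset variable:

    #!/bin/bash
    set -u                      # abort on any use of an unset variable

    MYDIR=old-logs              # hypothetical subdirectory to purge

    # Dry run: see what the wildcard actually expands to before deleting
    ls -ld /home/joeuser/"$MYDIR"/*.log

    # ${MYDIR:?} aborts with an error if the variable is empty or unset,
    # so the command can never collapse into "rm /home/joeuser//*.log"
    rm /home/joeuser/"${MYDIR:?MYDIR is not set}"/*.log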


Classic blunders with the rm command

Without monthly "safety minutes" you tend to forget about the dangers that rm poses, and can accidentally step on the same rake again and again ;-).

There are several classic blunders committed using the rm command:

  1. Running an rm -R command with a wildcard without testing the expansion first using ls. You should always use an ls -Rl command to test complex rm -R commands (-R, --recursive means process subdirectories recursively).
  2. Mistyping a complex path or file name in an rm command. It is always better to copy and paste the directory and file names used in an rm command, as this helps to avoid various typos. If you are doing a recursive deletion, it is useful to have a "dry run" using find or ls to see the files first.

    Here are two examples of typos:

    rm -rf /tmp/foo/bar/ *

    instead of

    rm -rf /tmp/foo/bar/*

    ===

    Let's assume that you are in the directory /backup, which has a subdirectory etc that you want to delete. As the path /etc is ingrained in a sysadmin's mind, it is very easy to automatically/subconsciously type

    rm -r /etc

    instead of

    rm -r etc

    And realize what you have done a second or so later. That's why, for commands that include the names of system directories, it is safer to type them in an editor, inspect them, and only then execute the command on the command line. If you are using a terminal emulator on a Windows desktop, then Notepad++ or a similar editor is fine.

    This is also why it is prudent to alias rm to the saferm script of your choice, which should prevent such mishaps (aliases are not used in non-interactive sessions).

    This is actually an interesting type of "induced" error: /etc is typed so often that it is ingrained in a sysadmin's head and can be typed subconsciously instead of etc.

  3. Using option -r with the .* wildcard without understanding its consequences. The pattern .* matches ".." (the parent directory), so rm -r .* climbs up and recursively deletes the parent directory and everything below it.
  4. You can accidentally hit Enter by mistake before finishing typing the line containing the rm command. If this happens after the first character and that character is /, or after several characters that form the name of a system directory like /boot (in older versions of Linux), you will find yourself in trouble. Sometimes this happens with a new keyboard, when you have not yet fully adapted to the feel of the new keys.
  5. You can accidentally insert a blank after *, thus splitting the argument into two: "*" and the rest, with corresponding consequences. The argument * should probably be allowed if and only if it is the single argument on the command line, unless -f is specified.

Those cases can be viewed as shortcomings of the rm implementation. For example, * should probably be allowed only as a single argument, not as one of several; but the shell expands arguments before passing them to the command (there is no simple access to the original command line), so this is tricky to check. rm should also automatically block deletion of system directories like /etc, and of a list of "protected" directories specified in a config file, unless the -f flag is given. Unix has no "system" attribute for files, so it is difficult to distinguish system files from the rest; the sticky bit or ownership by sys instead of root can be used as a substitute, and files owned by root:root probably deserve special treatment as system files, even when one is working as root.

Writing a wrapper like "saferm"

Unix lacks a system attribute for files, although the sticky bit or the immutable attribute can be used instead. It is wise to use a wrapper for rm. There are several more or less usable approaches to writing such a wrapper.

In view of the danger of rm -r unleashed as root, it is wise to use a wrapper for rm and alias rm to this wrapper in root's .bash_profile (or whatever profile file you are using for root). Aliases are not used in non-interactive sessions, so this will not affect any scripts; and in command-line sessions rm is usually typed without a path (which creates another set of dangers).
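A minimal sketch of such an alias (the wrapper path /usr/local/bin/saferm is an assumption; adjust it to wherever your wrapper actually lives):

    # In root's ~/.bash_profile: aliases are expanded only in interactive
    # shells, so scripts that call rm are not affected by this.
    if [ -x /usr/local/bin/saferm ]; then
        alias rm='/usr/local/bin/saferm'
    fi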

There are several more or less usable features that you might wish to experiment with when writing such a wrapper.

Of course, each environment is unique, and such a wrapper should take into account the idiosyncrasies of that environment.

If you know Perl, you can use Safe-rm as the initial implementation and enhance it. It is licensed under the GPL, and I believe it is packaged in Ubuntu. From its README:

How to use
-----------

Once you have installed safe-rm on your system (see INSTALL), you will need to
fill the system-wide or user-specific blacklists with the paths that you'd like
to protect against accidental deletion.

The system-wide blacklist lives in /etc/safe-rm.conf and you should probably add
paths like these:

  /
  /etc
  /usr
  /usr/lib
  /var

The user-specific blacklist lives in ~/.config/safe-rm and could include things like:

  /home/username/documents
  /home/username/documents/*
  /home/username/.mozilla


Other approaches
-----------------

If you want more protection than what safe-rm can offer, here are a few suggestions.

You could of course request confirmation every time you delete a file by putting this in
your /etc/bash.bashrc:

  alias rm='rm -i'

But this won't protect you from getting used to always saying yes, or from accidently
using 'rm -rf'.

Or you could make use of the Linux filesystem "immutable" attribute by marking (as root)
each file you want to protect:

  chattr +i file

Of course this is only usable on filesystems which support this feature.
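For illustration, a session might look like the following (the target file is just an example; remember to clear the attribute when you genuinely need to change or remove the file):

    # As root, mark the file immutable:
    chattr +i /etc/hosts

    # Any deletion attempt now fails, even for root:
    rm /etc/hosts
    # rm: cannot remove '/etc/hosts': Operation not permitted

    # Clear the attribute when you really need to modify the file:
    chattr -i /etc/hosts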

Here are two projects which allow you to recover recently deleted files by trapping
all unlink(), rename() and open() system calls through the LD_PRELOAD facility:

  delsafe
  http://homepage.esoterica.pt/~nx0yew/delsafe/

  libtrashcan
  http://hpux.connect.org.uk/hppd/hpux/Development/Libraries/libtrash-0.2/readme.html

There are also projects which implement the FreeDesktop.org trashcan spec. For example:

  trash-cli
  http://code.google.com/p/trash-cli/

Finally, this project is a fork of GNU coreutils and adds features similar to safe-rm
to the rm command directly:

  http://wiki.github.com/d5h/rmfd/

Back up files before executing an rm command, if you are deleting a directory and it is not too large

Backups before execution of an rm command are important. For example, making a backup of the /etc directory on a modern server takes a couple of seconds, but it can save a lot of nerves in situations that otherwise can be devastating: in the example above you would erase all files and subdirectories in the /etc directory. Modern flavors of Unix usually prevent erasing / but not /etc. Linux, which uses the GNU version of rm, refuses by default only to erase / itself (the --preserve-root behavior); this protection does not extend to system directories such as /etc.

You can also move a directory to a /Trash folder instead of deleting it, if it is relatively small. Files in this folder that are older than, say, 7 days can then be deleted by a cron script whenever the total size of the folder exceeds a certain threshold, as in the sketch below.
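A minimal sketch of such a cleanup job (the /Trash location, the 7-day age limit, and the 10 GB threshold are all assumptions to adjust):

    #!/bin/bash
    # purge-trash.sh -- run daily from cron, e.g.:
    #   0 3 * * * /usr/local/sbin/purge-trash.sh
    TRASH=/Trash                     # assumed location of the trash folder
    LIMIT_KB=$((10 * 1024 * 1024))   # assumed threshold: 10 GB, in KB

    used_kb=$(du -sk "$TRASH" | awk '{print $1}')
    if [ "$used_kb" -gt "$LIMIT_KB" ]; then
        # delete files not modified in the last 7 days, then prune
        # directories that became empty as a result
        find "$TRASH" -type f -mtime +7 -delete
        find "$TRASH" -mindepth 1 -type d -empty -delete
    fi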

More on blunders when deleting a directory from a backup

I once automatically typed /etc instead of etc while trying to delete a directory to free space in a backup area on a production server (/etc is probably engraved in a sysadmin's head, as it is typed so often, and can be substituted for etc subconsciously). I realized that it was a mistake and cancelled the command, but it was a fast server and one third of /etc was gone. The rest of the day was spoiled... Actually, not completely: I learned quite a bit about the behavior of AIX in this situation and about the structure of the AIX /etc directory that day, so each such disaster is actually a great learning experience, almost like a one-day training course ;-). But it's much less nerve-wracking to get this knowledge from a course... Another interesting thing is that having a backup was not enough in this case: the enterprise backup software stopped working on the damaged server. The same was true for telnet and ssh. And this was a remote server in a datacenter across the country. I restored the directory on another, non-production server (overwriting the /etc directory of this second box with the help of operations; tell me about cascading errors and Murphy's law :-). Then netcat helped to transfer the tar file.

If you are working as root and performing dangerous operations, never type the path in a command; copy it from the screen. If you can copy a command from history instead of typing it, do so! Always check whether the directory you are trying to delete is a symbolic link; such symbolic links are often used in home directories to simplify navigation...

Such blunders are really persistent, because often-used directories are typed in a "semiconscious", "autopilot" mode, and you realize the blunder only after you hit Enter. For example, many years after the first blunder, I similarly typed rm -r /Soft instead of rm -r Soft in a backup directory. I was in a hurry, so I did not even bother to load my profile and worked from a "bare root" account which did not have the saferm feature I was talking about earlier. Unfortunately for me, /Soft was a huge directory, so a backup was not practical, as I needed to free space. It contained all kinds of software, and the server was very fast, so in the couple of seconds that elapsed before I realized the blunder, approximately half a gigabyte of files and directories was wiped out. Luckily I had the previous day's backup.

In such cases network services with authentication stop working, and the only way to transfer files is a CD/DVD, a USB drive, or netcat. That's why it is useful to have netcat on servers: netcat is the last-resort file transfer program for when services with authentication like ftp or scp stop working. It is especially useful to have it if the datacenter is remote.
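A minimal sketch of such a last-resort transfer (the host name and port are placeholders; netcat variants differ, e.g. traditional netcat wants "nc -l -p 9999", and there is no authentication or encryption, so use it only on a trusted network):

    # On the receiving (damaged) server: listen and write the stream to a file
    nc -l 9999 > etc-backup.tar

    # On the sending (healthy) server: stream a tar archive of /etc
    tar -cf - /etc | nc damaged-server 9999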

The saying goes that experience keeps the most expensive school, but fools are unable to learn in any other ;-). Please read classic sysadmin horror stories.

A simple extra space often produces horrible consequences:

cd /usr/lib
ls /tmp/foo/bar

I typed
rm -rf /tmp/foo/bar/ *

instead of

rm -rf /tmp/foo/bar/*

The system doesn't run very well without all of its libraries...

You can block such behavior by requiring that, if the -r option is given, rm receives one and only one argument. That should be part of the functionality of your "saferm" script, as in the sketch below.
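A sketch of such a check (the policy is the one suggested above; the wrapper itself is hypothetical and deliberately minimal):

    #!/bin/bash
    # saferm sketch: refuse recursive deletion of more than one target.
    recursive=""
    targets=()
    for arg in "$@"; do
        case "$arg" in
            -r|-R|-rf|-fr|--recursive) recursive="yes" ;;
            -*) ;;                    # other options are not checked here
            *)  targets+=("$arg") ;;
        esac
    done
    if [ -n "$recursive" ] && [ "${#targets[@]}" -gt 1 ]; then
        echo "saferm: refusing recursive delete of ${#targets[@]} targets" >&2
        echo "saferm: (a stray space after a path expands '*' separately)" >&2
        exit 1
    fi
    exec /bin/rm "$@"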

Important class of subtle Unix errors: dot-star errors

Another popular class of recursive rm errors is so-called dot-star errors, which often happen when rm is used with find. Novice sysadmins usually do not realize that '.*' also matches '..', often with disastrous consequences. If you are in any doubt about how a wildcard will expand, use the echo command to see the result.
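For example (the dot files shown are hypothetical; note that bash 5.2 and later skip . and .. when expanding globs, but older shells and other systems do not):

    $ echo .*
    . .. .bash_profile .bashrc
    $ echo rm -rf .*
    rm -rf . .. .bash_profile .bashrc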

Using "convenience" symlinks to other (typically deeply nested) directories inside a directories and forgetting about them

The rm command does not follow symlinks. Neither does find by default; but if you run find with the -L (or legacy -follow) option and an -exec rm action, symlinks to directories will be traversed. So if the tree you are cleaning contains a symlink to some important system or user directory, all files in that directory can be deleted.

That is the danger of "convenience symlinks", which are used to simplify access to deeply nested directories from home directories. Using aliases or shell functions might be a better approach. If you do use such symlinks, create them only in a level-two directory made specially for this purpose, like /Fav, and protect this directory in your saferm script.

Disastrous typos in the name of the directory or in a wildcard -- an unintended, automatically entered space

There are some exotic typos that can lead you into trouble, especially with the -r option. One of them is an unintended space:

 rm /home/joeuser/old *

instead of

rm /home/joeuser/old*

Or, similarly:

 rm * .bak

instead of

rm *.bak
In all cases when you are deleting multiple files, it makes sense to get a listing of them first via the ls command, and then recall the line and replace the name of the command. This approach is safer than typing the names yourself, especially if you are working on a remote server.
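For example (the file names are illustrative):

    $ ls *.bak          # preview what the wildcard expands to
    notes.bak  report.bak
    $ rm *.bak          # recall the previous line and replace ls with rm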

Such mistakes are even more damaging if you use the -r option. For example:

rm -rf /tmp/foo/bar/ *

instead of

rm -rf /tmp/foo/bar/*

NOTE: it is prudent to block execution of rm commands with multiple arguments when option -r is used in your saferm script (see the sketch above).

Root deletion protection

To remove a file you must have write permission on the directory where it is stored. Sun introduced "rm -rf /" protection in Solaris 10, first released in 2005. Upon executing the command, the system now reports that the removal of / is not allowed.

Shortly after, the same functionality was introduced into the FreeBSD version of the rm utility.

GNU rm refuses to execute rm -rf / if the --preserve-root option is given; this has been the default since version 6.4 of GNU Core Utilities, released in 2006.

No such protection exists for critical system directories like /etc, but you can imitate it by putting a file named "-i" into such directories, or by using a wrapper for interactive usage of the command. This trick is based on the fact that the file -i will be the first item in the expanded list of arguments to rm and will trigger the confirmation prompts. You can also consider any directory owned by a system account to be a system directory, and refuse to delete it in your safe-rm script.
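A sketch of the decoy-file trick (the directory is just an example; this guards only against a careless wildcard like "rm *" run inside that directory, not against "rm -rf" aimed at the directory itself):

    # Create a file literally named "-i" in the directory to protect;
    # "--" tells touch to stop option parsing.
    cd /etc
    touch -- -i

    # A later careless "rm *" now expands roughly to "rm -i file1 file2 ...",
    # so rm switches to interactive mode and prompts for every file.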

There are also other approaches, such as moving files to a /Trash directory instead of deleting them.

Using the echo command to see the expansion of your wildcards

To understand what will actually be executed after shell expansion, preface your rm command with echo. For example, if there is a filename starting with a dash, you will receive a very confusing message from rm:

	$ rm *
	rm: unknown option -- -
	usage: rm [-f|-i] [-dPRrvW] file ...
To find out what caused it, prefix the command with echo:
echo rm *
One common mishap is running, as root, a complex command like find ... -exec rm {} \; without any testing, or even without rereading the command several times before hitting Enter. This is covered in more detail in Typical Errors In Using Find.
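A minimal sketch of the safer pattern (the path and file mask are placeholders): preview with -print first, then delete with exactly the same selection criteria:

    # Dry run: print exactly what would be removed
    find /var/tmp/build -type f -name '*.o' -print

    # Only after inspecting the list, re-run with -delete
    # (or with "-exec rm -- {} +" on systems whose find lacks -delete)
    find /var/tmp/build -type f -name '*.o' -delete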

Old News ;-)

[Nov 08, 2019] How to prevent and recover from accidental file deletion in Linux Enable Sysadmin

trashy - Trashy · GitLab might make sense in simple cases. But often massive file deletions are about attempts to get free space.

Nov 08, 2019 | www.redhat.com

Back up

You knew this would come first. Data recovery is a time-intensive process and rarely produces 100% correct results. If you don't have a backup plan in place, start one now.

Better yet, implement two. First, provide users with local backups with a tool like rsnapshot. This utility creates snapshots of each user's data in a ~/.snapshots directory, making it trivial for them to recover their own data quickly.

There are a great many other open source backup applications that permit your users to manage their own backup schedules.

Second, while these local backups are convenient, also set up a remote backup plan for your organization. Tools like AMANDA or BackupPC are solid choices for this task. You can run them as a daemon so that backups happen automatically.

Backup planning and preparation pay for themselves in both time, and peace of mind. There's nothing like not needing emergency response procedures in the first place.

Ban rm

On modern operating systems, there is a Trash or Bin folder where users drag the files they don't want out of sight without deleting them just yet. Traditionally, the Linux terminal has no such holding area, so many terminal power users have the bad habit of permanently deleting data they believe they no longer need. Since there is no "undelete" command, this habit can be quite problematic should a power user (or administrator) accidentally delete a directory full of important data.

Many users say they favor the absolute deletion of files, claiming that they prefer their computers to do exactly what they tell them to do. Few of those users, though, forego their rm command for the more complete shred, which really removes their data. In other words, most terminal users invoke the rm command because it removes data, but take comfort in knowing that file recovery tools exist as a hacker's un-rm. Still, using those tools takes up their administrator's precious time. Don't let your users -- or yourself -- fall prey to this breach of logic.

If you really want to remove data, then rm is not sufficient. Use the shred -u command instead, which overwrites, and then thoroughly deletes the specified data.

However, if you don't want to actually remove data, don't use rm. This command is not feature-complete, in that it has no undo feature, but has the capacity to be undone. Instead, use trashy or trash-cli to "delete" files into a trash bin while using your terminal, like so:

    $ trash ~/example.txt
    $ trash --list
    example.txt

One advantage of these commands is that the trash bin they use is the same as your desktop's trash bin. With them, you can recover your trashed files by opening either your desktop Trash folder, or through the terminal.

If you've already developed a bad rm habit and find the trash command difficult to remember, create an alias for yourself:

    $ echo "alias rm='trash'"

Even better, create this alias for everyone. Your time as a system administrator is too valuable to spend hours struggling with file recovery tools just because someone mis-typed an rm command.

Respond efficiently

Unfortunately, it can't be helped. At some point, you'll have to recover lost files, or worse. Let's take a look at emergency response best practices to make the job easier. Before you even start, understanding what caused the data to be lost in the first place can save you a lot of time:

No matter how the problem began, start your rescue mission with a few best practices:

Once you have a sense of what went wrong, it's time to choose the right tool to fix the problem. Two such tools are Scalpel and TestDisk, both of which operate just as well on a disk image as on a physical drive.

Practice (or, go break stuff)

At some point in your career, you'll have to recover data. The smart practices discussed above can minimize how often this happens, but there's no avoiding this problem. Don't wait until disaster strikes to get familiar with data recovery tools. After you set up your local and remote backups, implement command-line trash bins, and limit the rm command, it's time to practice your data recovery techniques.

Download and practice using Scalpel, TestDisk, or whatever other tools you feel might be useful. Be sure to practice data recovery safely, though. Find an old computer, install Linux onto it, and then generate, destroy, and recover. If nothing else, doing so teaches you to respect data structures, filesystems, and a good backup plan. And when the time comes and you have to put those skills to real use, you'll appreciate knowing what to do.

[Nov 08, 2019] My first sysadmin mistake by Jim Hall

Wiping out the /etc directory is something sysadmins occasionally do by accident. It often happens when another directory is named etc, for example /Backup/etc. In such cases you automatically put a slash in front of etc because it is ingrained in your mind; you do it subconsciously, not realizing what you are doing, and then face the consequences. If you do not use saferm, the results are pretty devastating. In most cases the server does not die, but new logins become impossible; existing SSH sessions survive. That's why it is important to back up /etc at the first login to the server: on modern servers it takes a couple of seconds.

If the subdirectories are intact, you can still copy the content from another server. But the content of the sysconfig subdirectory in Linux is unique to the server, and you need a backup to restore it.
Notable quotes:
Notable quotes:

"... As root. I thought I was deleting some stale cache files for one of our programs. Instead, I wiped out all files in the /etc directory by mistake. Ouch. ..."

"... I put together a simple strategy: Don't reboot the server. Use an identical system as a template, and re-create the ..."

Nov 08, 2019 | opensource.com

... rm command in the wrong directory. As root. I thought I was deleting some stale cache files for one of our programs. Instead, I wiped out all files in the /etc directory by mistake. Ouch.

My clue that I'd done something wrong was an error message that rm couldn't delete certain subdirectories. But the cache directory should contain only files! I immediately stopped the rm command and looked at what I'd done. And then I panicked. All at once, a million thoughts ran through my head. Did I just destroy an important server? What was going to happen to the system? Would I get fired?

Fortunately, I'd run rm * and not rm -rf * so I'd deleted only files. The subdirectories were still there. But that didn't make me feel any better.

Immediately, I went to my supervisor and told her what I'd done. She saw that I felt really dumb about my mistake, but I owned it. Despite the urgency, she took a few minutes to do some coaching with me. "You're not the first person to do this," she said. "What would someone else do in your situation?" That helped me calm down and focus. I started to think less about the stupid thing I had just done, and more about what I was going to do next.

I put together a simple strategy: Don't reboot the server. Use an identical system as a template, and re-create the /etc directory.

Once I had my plan of action, the rest was easy. It was just a matter of running the right commands to copy the /etc files from another server and edit the configuration so it matched the system. Thanks to my practice of documenting everything, I used my existing documentation to make any final adjustments. I avoided having to completely restore the server, which would have meant a huge disruption.

To be sure, I learned from that mistake. For the rest of my years as a systems administrator, I always confirmed what directory I was in before running any command.

I also learned the value of building a "mistake strategy." When things go wrong, it's natural to panic and think about all the bad things that might happen next. That's human nature. But creating a "mistake strategy" helps me stop worrying about what just went wrong and focus on making things better. I may still think about it, but knowing my next steps allows me to "get over it."

[Nov 08, 2019] How to use Sanoid to recover from data disasters Opensource.com

Nov 08, 2019 | opensource.com

... filesystem-level snapshot replication to move data from one machine to another, fast. For enormous blobs like virtual machine images, we're talking several orders of magnitude faster than rsync.

If that isn't cool enough already, you don't even necessarily need to restore from backup if you lost the production hardware; you can just boot up the VM directly on the local hotspare hardware, or the remote disaster recovery hardware, as appropriate. So even in case of catastrophic hardware failure, you're still looking at that 59m RPO, <1m RTO.

https://www.youtube.com/embed/5hEixXutaPo

Backups -- and recoveries -- don't get much easier than this.

The syntax is dead simple:

    root@box1:~# syncoid pool/images/vmname root@box2:poolname/images/vmname

Or if you have lots of VMs, like I usually do... recursion!

    root@box1:~# syncoid -r pool/images/vmname root@box2:poolname/images/vmname

This makes it not only possible, but easy to replicate multiple-terabyte VM images hourly over a local network, and daily over a VPN. We're not talking enterprise 100mbps symmetrical fiber, either. Most of my clients have 5mbps or less available for upload, which doesn't keep them from automated, nightly over-the-air backups, usually to a machine sitting quietly in an owner's house.

Preventing your own Humpty Level Events

Sanoid is open source software, and so are all its dependencies. You can run Sanoid and Syncoid themselves on pretty much anything with ZFS. I developed it and use it on Linux myself, but people are using it (and I support it) on OpenIndiana, FreeBSD, and FreeNAS too.

You can find the GPLv3 licensed code on the website (which actually just redirects to Sanoid's GitHub project page), and there's also a Chef Cookbook and an Arch AUR repo available from third parties.


[Oct 25, 2019] Get inode number of a file on linux - Fibrevillage

Oct 25, 2019 | www.fibrevillage.com

Get inode number of a file on linux

An inode is a data structure in UNIX operating systems that contains important information pertaining to files within a file system. When a file system is created in UNIX, a set amount of inodes is created, as well. Usually, about 1 percent of the total file system disk space is allocated to the inode table.

How do we find a file's inode?

ls -i Command: display inode

    $ ls -i /etc/bashrc
    131094 /etc/bashrc

131094 is the inode of /etc/bashrc.
Stat Command: display Inode

    $ stat /etc/bashrc
      File: `/etc/bashrc'
      Size: 1386          Blocks: 8          IO Block: 4096   regular file
    Device: fd00h/64768d    Inode: 131094      Links: 1
    Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
    Access: 2013-12-10 10:01:29.509908811 -0800
    Modify: 2013-06-06 11:31:51.792356252 -0700
    Change: 2013-06-06 11:31:51.792356252 -0700
find command: display inode

    $ find ./ -iname sysfs_fc_tools.tar -printf '%p %i\n'
    ./sysfs_fc_tools.tar 28311964

Notes:

    %p stands for file path
    %i stands for inode number

tree command: display inode under a directory

    # tree -a -L 1 --inodes /etc
    /etc
    ├── [ 132896]  a2ps
    ├── [ 132898]  a2ps.cfg
    ├── [ 132897]  a2ps-site.cfg
    ├── [ 133315]  acpi
    ├── [ 131864]  adjtime
    ├── [ 132340]  akonadi
    ...

usecase of using inode

    find / -inum XXXXXX -print

to find the full path for each file pointing to inode XXXXXX.
Though you can use the example to do an rm action, I simply discourage doing so, for security concerns with the find command; also, on another file system the same inode number refers to a very different file.

filesystem repair

If you get bad luck on your filesystem, most of the time running fsck will fix it. It helps if you have the inode info of the filesystem in hand. This is another big topic; I'll have another article for it.

[Oct 25, 2019] Howto Delete files by inode number by Erik

Feb 10, 2011 | erikimh.com

Ever mistakenly pipe output to a file with special characters that you couldn't remove?

    -rw-r--r-- 1 eriks eriks 4 2011-02-10 22:37 --fooface

Good luck. Anytime you pass any sort of command to this file, it's going to interpret it as a flag. You can't fool rm, echo, sed, or anything else into actually deeming this a file at this point. You do, however, have an inode for every file.

Traditional methods fail:

    [eriks@jaded: ~]$ rm -f --fooface
    rm: unrecognized option '--fooface'
    Try `rm ./--fooface' to remove the file `--fooface'.
    Try `rm --help' for more information.
    [eriks@jaded: ~]$ rm -f '--fooface'
    rm: unrecognized option '--fooface'
    Try `rm ./--fooface' to remove the file `--fooface'.
    Try `rm --help' for more information.

So now what, do you live forever with this annoyance of a file sitting inside your filesystem, never to be removed or touched again? Nah.

We can remove a file simply by its inode number, but first we must find out the file's inode number:

    $ ls -il | grep foo

Output:

    [eriks@jaded: ~]$ ls -il | grep foo
    508160 drwxr-xr-x 3 eriks eriks 4096 2010-10-27 18:13 foo3
    500724 -rw-r--r-- 1 eriks eriks 4 2011-02-10 22:37 --fooface
    589907 drwxr-xr-x 2 eriks eriks 4096 2010-11-22 18:52 tempfoo
    589905 drwxr-xr-x 2 eriks eriks 4096 2010-11-22 18:48 tmpfoo

The number you see prior to the file permission set is actually the inode # of the file itself.

Hint: 500724 is the inode number we want removed.

Now use the find command to delete the file by inode:

    # find . -inum 500724 -exec rm -i {} \;

There she is.

    [eriks@jaded: ~]$ find . -inum 500724 -exec rm -i {} \;
    rm: remove regular file `./--fooface'? y

[Oct 25, 2019] unix - Remove a file on Linux using the inode number - Super User

Oct 25, 2019 | superuser.com

Some other methods include:

escaping the special chars:

    [~]$ rm \"la\*

use the find command and only search the current directory. The find command can search for inode numbers, and has a handy -delete switch:

    [~]$ ls -i
    7404301 "la*

    [~]$ find . -maxdepth 1 -type f -inum 7404301
    ./"la*

    [~]$ find . -maxdepth 1 -type f -inum 7404301 -delete
    [~]$ ls -i
    [~]$

Maybe I'm missing something, but...

    rm '"la*'

Anyways, filenames don't have inodes, files do. Trying to remove a file without removing all filenames that point to it will damage your filesystem.

[Oct 25, 2019] Linux - Unix Find Inode Of a File Command

Jun 21, 2012 | www.cyberciti.biz

... ... ...

stat Command: Display Inode

You can also use the stat command as follows:

    $ stat fileName-Here
    $ stat /etc/passwd

Sample outputs:

      File: `/etc/passwd'
      Size: 1644            Blocks: 8          IO Block: 4096   regular file
    Device: fe01h/65025d    Inode: 25766495    Links: 1
    Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
    Access: 2012-05-05 16:29:42.000000000 +0530
    Modify: 2012-05-05 16:29:20.000000000 +0530
    Change: 2012-05-05 16:29:21.000000000 +0530

Posted by: Vivek Gite

The author is the creator of nixCraft and a seasoned sysadmin, DevOps engineer, and a trainer for the Linux operating system/Unix shell scripting.

[Sep 04, 2019] Basic Trap for File Cleanup

Sep 04, 2019 | www.putorius.net

Basic Trap for File Cleanup

Using a trap to clean up is simple enough. Here is an example of using trap to clean up a temporary file on exit of the script.

    #!/bin/bash
    trap "rm -f /tmp/output.txt" EXIT
    yum -y update > /tmp/output.txt
    if grep -qi "kernel" /tmp/output.txt; then
         mail -s "KERNEL UPDATED" [email protected] < /tmp/output.txt
    fi

NOTE: It is important that the trap statement be placed at the beginning of the script to function properly. Any commands above the trap can exit and not be caught in the trap.

Now if the script exits for any reason, it will still run the rm command to delete the file. Here is an example of me sending SIGINT (CTRL+C) while the script was running.

    # ./test.sh
    ^Cremoved '/tmp/output.txt'

NOTE: I added verbose (-v) output to the rm command so it prints "removed". The ^C signifies where I hit CTRL+C to send SIGINT.

This is a much cleaner and safer way to ensure the cleanup occurs when the script exits. Using EXIT (0) instead of a single defined signal (i.e. SIGINT - 2) ensures the cleanup happens on any exit, even successful completion of the script.

[Aug 26, 2019] linux - Avoiding accidental 'rm' disasters - Super User

Aug 26, 2019 | superuser.com

Avoiding accidental 'rm' disasters

Mr_Spock, May 26, 2013 at 11:30

Today, using sudo -s, I wanted to rm -R ./lib/, but I actually rm -R /lib/.

I had to reinstall my OS (Mint 15) and re-download and re-configure all my packages. Not fun.

How can I avoid similar mistakes in the future?

Vittorio Romeo, May 26, 2013 at 11:55

First of all, stop executing everything as root. You never really need to do this. Only run individual commands with sudo if you need to. If a normal command doesn't work without sudo, just call sudo !! to execute it again.

If you're paranoid about rm, mv and other operations while running as root, you can add the following aliases to your shell's configuration file:

    [ $UID = 0 ] && \
      alias rm='rm -i' && \
      alias mv='mv -i' && \
      alias cp='cp -i'

These will all prompt you for confirmation (-i) before removing a file or overwriting an existing file, respectively, but only if you're root (the user with ID 0).

Don't get too used to that though. If you ever find yourself working on a system that doesn't prompt you for everything, you might end up deleting stuff without noticing it. The best way to avoid mistakes is to never run as root and think about what exactly you're doing when you use sudo.

[Aug 26, 2019] bash - How to prevent rm from reporting that a file was not found

Aug 26, 2019 | stackoverflow.com

How to prevent rm from reporting that a file was not found?

pizza, Apr 20, 2012 at 21:29

I am using rm within a BASH script to delete many files. Sometimes the files are not present, so it reports many errors. I do not need this message. I have searched the man page for a command to make rm quiet, but the only option I found is -f, which from the description, "ignore nonexistent files, never prompt", seems to be the right choice, but the name does not seem to fit, so I am concerned it might have unintended consequences.

Keith Thompson, Dec 19, 2018 at 13:05

The main use of -f is to force the removal of files that would not be removed using rm by itself (as a special case, it "removes" non-existent files, thus suppressing the error message).

You can also just redirect the error message using

    $ rm file.txt 2> /dev/null

(or your operating system's equivalent). You can check the value of $? immediately after calling rm to see if a file was actually removed or not.

vimdude, May 28, 2014 at 18:10

Yes, -f is the most suitable option for this.

tripleee, Jan 11 at 4:50

-f is the correct flag, but for the test operator, not rm

    [ -f "$THEFILE" ] && rm "$THEFILE"

this ensures that the file exists and is a regular file (not a directory, device node etc...)

mahemoff, Jan 11 at 4:41

\rm -f file will never report not found.

Idelic, Apr 20, 2012 at 16:51

As far as rm -f doing "anything else", it does force (-f is shorthand for --force) silent removal in situations where rm would otherwise ask you for confirmation. For example, when trying to remove a file not writable by you from a directory that is writable by you.

Keith Thompson, May 28, 2014 at 18:09

I had the same issue for cshell. The only solution I had was to create a dummy file that matched the pattern before "rm" in my script.

[Aug 26, 2019] shell - rm -rf return codes

Aug 26, 2019 | superuser.com

rm -rf return codes

SheetJS, Aug 15, 2013 at 2:50

Can anyone let me know the possible return codes for the command rm -rf other than zero, i.e., the possible return codes for failure cases? I want to know a more detailed reason for the failure of the command, beyond just knowing that it failed (returned other than 0).

Adrian Frühwirth, Aug 14, 2013 at 7:00

To see the return code, you can use echo $? in bash.

To see the actual meaning, some platforms (like Debian Linux) have the perror binary available, which can be used as follows:

    $ rm -rf something/; perror $?
    rm: cannot remove `something/': Permission denied
    OS error code   1:  Operation not permitted

rm -rf automatically suppresses most errors. The most likely error you will see is 1 (Operation not permitted), which will happen if you don't have permissions to remove the file. -f intentionally suppresses most errors.

Adrian Frühwirth, Aug 14, 2013 at 7:21

grabbed coreutils from git....

looking at exit we see...

    openfly@linux-host:~/coreutils/src $ cat rm.c | grep -i exit
      if (status != EXIT_SUCCESS)
      exit (status);
      /* Since this program exits immediately after calling 'rm', rm need not
      atexit (close_stdin);
              usage (EXIT_FAILURE);
            exit (EXIT_SUCCESS);
              usage (EXIT_FAILURE);
            error (EXIT_FAILURE, errno, _("failed to get attributes of %s"),
            exit (EXIT_SUCCESS);
      exit (status == RM_ERROR ? EXIT_FAILURE : EXIT_SUCCESS);

Now looking at the status variable....

    openfly@linux-host:~/coreutils/src $ cat rm.c | grep -i status
    usage (int status)
      if (status != EXIT_SUCCESS)
      exit (status);
      enum RM_status status = rm (file, &x);
      assert (VALID_STATUS (status));
      exit (status == RM_ERROR ? EXIT_FAILURE : EXIT_SUCCESS);

looks like there isn't much going on there with the exit status.

I see EXIT_FAILURE and EXIT_SUCCESS and not anything else.

so basically 0 and 1 / -1

To see specific exit() syscalls and how they occur in a process flow try this

    openfly@linux-host:~/ $ strace rm -rf $whatever

fairly simple.

ref:

http://www.unix.com/man-page/Linux/EXIT_FAILURE/exit/

[Feb 21, 2019] https://github.com/MikeDacre/careful_rm

Feb 21, 2019 | github.com

rm is a powerful *nix tool that simply drops a file from the drive index. It doesn't delete it or put it in a Trash can, it just de-indexes it, which makes the file hard to recover unless you want to put in the work, and pretty easy to recover if you are willing to spend a few hours trying (use shred to actually secure-erase files).

careful_rm.py is inspired by the -I interactive mode of rm and by safe-rm. safe-rm adds a recycle bin mode to rm, and the -I interactive mode adds a prompt if you delete more than a handful of files or recursively delete a directory. ZSH also has an option to warn you if you recursively rm a directory.

These are all great, but I found them unsatisfying. What I want is for rm to be quick and not bother me for single file deletions (so rm -i is out), but to let me know when I am deleting a lot of files, and to actually print a list of files that are about to be deleted. I also want it to have the option to trash/recycle my files instead of just straight deleting them... like safe-rm, but not so intrusive (safe-rm defaults to recycle, and doesn't warn).

careful_rm.py is fundamentally a simple rm wrapper that accepts all of the same commands as rm, but with a few additional options and features. In the source code CUTOFF is set to 3, so deleting more files than that will prompt the user. Also, deleting a directory will prompt the user separately with a count of all files and subdirectories within the folders to be deleted.

Furthermore, careful_rm.py implements a fully integrated trash mode that can be toggled on with -c. It can also be forced on by adding a file at ~/.rm_recycle, or toggled on only for $HOME (the best idea), by ~/.rm_recycle_home. The mode can be disabled on the fly by passing --direct, which forces off recycle mode.

The recycle mode tries to find the best location to recycle to on MacOS or Linux; on MacOS it also tries to use Apple Script to trash files, which means the original location is preserved (note Applescript can be slow, you can disable it by adding a ~/.no_apple_rm file, but Put Back won't work). The best locations for trashes go in this order:

  1. $HOME/.Trash on Mac or $HOME/.local/share/Trash on Linux
  2. <mountpoint>/.Trashes on Mac or <mountpoint>/.Trash-$UID on Linux
  3. /tmp/$USER_trash

Always the best trash can to avoid Volume hopping is favored, as moving across file systems is slow. If the trash does not exist, the user is prompted to create it; they then also have the option to fall back to the root trash (/tmp/$USER_trash) or just rm the files.

/tmp/$USER_trash is almost always used for deleting system/root files, but note that you most likely do not want to save those files, and straight rm is generally better.

[Feb 21, 2019] https://github.com/lagerspetz/linux-stuff/blob/master/scripts/saferm.sh by Eemil Lagerspetz

Shell script that tries to implement the trash can idea.

Feb 21, 2019 | github.com

    #!/bin/bash
    ##
    ## saferm.sh
    ## Safely remove files, moving them to GNOME/KDE trash instead of deleting.
    ## Made by Eemil Lagerspetz
    ## Login <vermind@drache>
    ##
    ## Started on  Mon Aug 11 22:00:58 2008 Eemil Lagerspetz
    ## Last update Sat Aug 16 23:49:18 2008 Eemil Lagerspetz
    ##

    version="1.16";

... ... ...

[Feb 21, 2019] The rm='rm -i' alias is a horror

Feb 21, 2019 | superuser.com

The rm='rm -i' alias is a horror, because after a while of using it you will expect rm to prompt you by default before removing files. Of course, one day you'll run it with an account that hasn't got that alias set, and before you understand what's going on, it is too late.

... ... ...

If you want safe aliases, but don't want to risk getting used to the commands working differently on your system than on others, you can disable rm like this:

    alias rm='echo "rm is disabled, use remove or trash or /bin/rm instead."'

Then you can create your own safe alias, e.g.

    alias remove='/bin/rm -irv'

or use trash instead.

[Feb 21, 2019] Ubuntu Manpage trash - Command line trash utility.

Feb 21, 2019 | manpages.ubuntu.com

xenial (1) trash.1.gz

Provided by: trash-cli_0.12.9.14-2_all

NAME
       trash - Command line trash utility.

SYNOPSIS
       trash [arguments] ...

DESCRIPTION
       Trash-cli package provides a command line interface trashcan utility compliant with the
       FreeDesktop.org Trash Specification. It remembers the name, original path, deletion date,
       and permissions of each trashed file.

ARGUMENTS
       Names of files or directory to move in the trashcan.

EXAMPLES
       $ cd /home/andrea/
       $ touch foo bar
       $ trash foo bar

BUGS
       Report bugs to http://code.google.com/p/trash-cli/issues

AUTHORS
       Trash was written by Andrea Francia <[email protected]> and Einar Orn
       Olason <[email protected]>. This manual page was written by Steve Stalcup <[email protected]>.
       Changes made by Massimo Cavalleri <[email protected]>.

SEE ALSO
       trash-list(1), trash-restore(1), trash-empty(1), and the FreeDesktop.org Trash
       Specification at http://www.ramendik.ru/docs/trashspec.html.

       Both are released under the GNU General Public License, version 2 or later.

[Jan 10, 2019] saferm Safely remove files, moving them to GNOME/KDE trash instead of deleting by Eemil Lagerspetz

/n/n
/n
Jan 10, 2019 | github.com
/n /n
#!/bin/bash
##
## saferm.sh
## Safely remove files, moving them to GNOME/KDE trash instead of deleting.
## Made by Eemil Lagerspetz
## Login   <vermind@drache>
##
## Started on  Mon Aug 11 22:00:58 2008 Eemil Lagerspetz
## Last update Sat Aug 16 23:49:18 2008 Eemil Lagerspetz
##

version="1.16";

## flags (change these to change default behaviour)
recursive="" # do not recurse into directories by default
verbose="true" # set verbose by default for inexperienced users.
force="" # disallow deleting special files by default
unsafe="" # do not behave like regular rm by default

## possible flags (recursive, verbose, force, unsafe)
# don't touch this unless you want to create/destroy flags
flaglist="r v f u q"

# Colours
blue='\e[1;34m'
red='\e[1;31m'
norm='\e[0m'

## trashbin definitions
# this is the same for newer KDE and GNOME:
trash_desktops="$HOME/.local/share/Trash/files"
# if neither is running:
trash_fallback="$HOME/Trash"

# use .local/share/Trash?
use_desktop=$( ps -U $USER | grep -E "gnome-settings|startkde|mate-session|mate-settings|mate-panel|gnome-shell|lxsession|unity" )

# mounted filesystems, for avoiding cross-device move on safe delete
filesystems=$( mount | awk '{print $3; }' )

if [ -n "$use_desktop" ]; then
    trash="${trash_desktops}"
    infodir="${trash}/../info";
    for k in "${trash}" "${infodir}"; do
        if [ ! -d "${k}" ]; then mkdir -p "${k}"; fi
    done
else
    trash="${trash_fallback}"
fi

usagemessage() {
    echo -e "This is ${blue}saferm.sh$norm $version. LXDE and Gnome3 detection.
    Will ask to unsafe-delete instead of cross-fs move. Allows unsafe (regular rm) delete (ignores trashinfo).
    Creates trash and trashinfo directories if they do not exist. Handles symbolic link deletion.
    Does not complain about different user any more.\n";
    echo -e "Usage: ${blue}/path/to/saferm.sh$norm [${blue}OPTIONS$norm] [$blue--$norm] ${blue}files and dirs to safely remove$norm"
    echo -e "${blue}OPTIONS$norm:"
    echo -e "$blue-r$norm      allows recursively removing directories."
    echo -e "$blue-f$norm      Allow deleting special files (devices, ...)."
    echo -e "$blue-u$norm      Unsafe mode, bypass trash and delete files permanently."
    echo -e "$blue-v$norm      Verbose, prints more messages. Default in this version."
    echo -e "$blue-q$norm      Quiet mode. Opposite of verbose."
    echo "";
}

detect() {
    if [ ! -e "$1" ]; then fs=""; return; fi
    path=$(readlink -f "$1")
    for det in $filesystems; do
        match=$( echo "$path" | grep -oE "^$det" )
        if [ -n "$match" ]; then
            if [ ${#det} -gt ${#fs} ]; then
                fs="$det"
            fi
        fi
    done
}

trashinfo() {
    # gnome: generate the .trashinfo file for the trashed item
    bname=$( basename -- "$1" )
    fname="${trash}/../info/${bname}.trashinfo"
    cat > "${fname}" <<EOF
[Trash Info]
Path=$PWD/${1}
DeletionDate=$( date +%Y-%m-%dT%H:%M:%S )
EOF
}

setflags() {
    for k in $flaglist; do
        reduced=$( echo "$1" | sed "s/$k//" )
        if [ "$reduced" != "$1" ]; then
            flags_set="$flags_set $k"
        fi
    done
    for k in $flags_set; do
        if [ "$k" == "v" ]; then
            verbose="true"
        elif [ "$k" == "r" ]; then
            recursive="true"
        elif [ "$k" == "f" ]; then
            force="true"
        elif [ "$k" == "u" ]; then
            unsafe="true"
        elif [ "$k" == "q" ]; then
            unset verbose
        fi
    done
}

performdelete() {
    # "delete" = move to trash
    if [ -n "$unsafe" ]
    then
        if [ -n "$verbose" ]; then echo -e "Deleting $red$1$norm"; fi
        # UNSAFE: permanently remove files.
        rm -rf -- "$1"
    else
        if [ -n "$verbose" ]; then echo -e "Moving $blue$k$norm to $red${trash}$norm"; fi
        mv -b -- "$1" "${trash}" # moves and backs up old files
    fi
}

askfs() {
    detect "$1"
    if [ "${fs}" != "${tfs}" ]; then
        unset answer;
        until [ "$answer" == "y" -o "$answer" == "n" ]; do
            echo -e "$blue$1$norm is on $blue${fs}$norm. Unsafe delete (y/n)?"
            read -n 1 answer;
        done
        if [ "$answer" == "y" ]; then
            unsafe="yes"
        fi
    fi
}

complain() {
    msg=""
    if [ ! -e "$1" -a ! -L "$1" ]; then # does not exist
        msg="File does not exist:"
    elif [ ! -w "$1" -a ! -L "$1" ]; then # not writable
        msg="File is not writable:"
    elif [ ! -f "$1" -a ! -d "$1" -a -z "$force" ]; then # Special or sth else.
        msg="Is not a regular file or directory (and -f not specified):"
    elif [ -f "$1" ]; then # is a file
        act="true" # operate on files by default
    elif [ -d "$1" -a -n "$recursive" ]; then # is a directory and recursive is enabled
        act="true"
    elif [ -d "$1" -a -z "${recursive}" ]; then
        msg="Is a directory (and -r not specified):"
    else
        # not file or dir. This branch should not be reached.
        msg="No such file or directory:"
    fi
}

asknobackup() {
    unset answer
    until [ "$answer" == "y" -o "$answer" == "n" ]; do
        echo -e "$blue$k$norm could not be moved to trash. Unsafe delete (y/n)?"
        read -n 1 answer
    done
    if [ "$answer" == "y" ]
    then
        unsafe="yes"
        performdelete "${k}"
        ret=$?
        # Reset temporary unsafe flag
        unset unsafe
        unset answer
    else
        unset answer
    fi
}

deletefiles() {
    for k in "$@"; do
        fdesc="$blue$k$norm";
        complain "${k}"
        if [ -n "$msg" ]
        then
            echo -e "$msg $fdesc."
        else
            # actual action:
            if [ -z "$unsafe" ]; then
                askfs "${k}"
            fi
            performdelete "${k}"
            ret=$?
            # Reset temporary unsafe flag
            if [ "$answer" == "y" ]; then unset unsafe; unset answer; fi
            #echo "MV exit status: $ret"
            if [ ! "$ret" -eq 0 ]
            then
                asknobackup "${k}"
            fi
            if [ -n "$use_desktop" ]; then
                # generate trashinfo for desktop environments
                trashinfo "${k}"
            fi
        fi
    done
}

# Make trash if it doesn't exist
if [ ! -d "${trash}" ]; then
    mkdir "${trash}";
fi

# find out which flags were given
afteropts=""; # boolean for end-of-options reached
for k in "$@"; do
    # if starts with dash and before end of options marker (--)
    if [ "${k:0:1}" == "-" -a -z "$afteropts" ]; then
        if [ "${k:1:2}" == "-" ]; then # if end of options marker
            afteropts="true"
        else # option(s)
            setflags "$k" # set flags
        fi
    else # not starting with dash, or after end-of-opts
        files[++i]="$k"
    fi
done

if [ -z "${files[1]}" ]; then # no parameters?
    usagemessage # tell them how to use this
    exit 0;
fi

# Which fs is trash on?
detect "${trash}"
tfs="$fs"

# do the work
deletefiles "${files[@]}"
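Typical invocations, matching the usage message the script prints when run with no arguments:

$ ./saferm.sh notes.txt        # moves notes.txt to the trash directory
$ ./saferm.sh -r old-project/  # -r is required to remove directories
$ ./saferm.sh -u scratch.tmp   # unsafe mode: permanent delete, like plain rm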

[Oct 22, 2018] linux - If I rm -rf a symlink will the data the link points to get erased, too?

Notable quotes:

"... Put it another way: those symlink files will be deleted. The files they "point"/"link" to will not be touched. ..."
Oct 22, 2018 | unix.stackexchange.com

user4951, Jan 25, 2013 at 2:40

This is the contents of the /home3 directory on my system:

./   backup/    hearsttr@  lost+found/  randomvi@  sexsmovi@
../  freemark@  investgr@  nudenude@    romanced@  wallpape@

I want to clean this up, but I am worried because of the symlinks, which point to another drive.


If I say rm -rf /home3 will it delete the other drive?


John Sui

rm -rf /home3 will delete all files and directories within home3, and home3 itself, including symlink files, but it will not "follow" (dereference) those symlinks.

Put it another way: those symlink files will be deleted. The files they "point"/"link" to will not be touched.
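You can verify this harmlessly in a scratch directory:

$ mkdir -p /tmp/demo/real && touch /tmp/demo/real/data.txt
$ ln -s /tmp/demo/real /tmp/demo/link
$ rm -rf /tmp/demo/link    # no trailing slash: removes only the symlink
$ ls /tmp/demo/real        # data.txt is still there
data.txt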


[Oct 22, 2018] Does rm -rf follow symbolic links?

Jan 25, 2012 | superuser.com
I have a directory like this:

$ ls -l
total 899166
drwxr-xr-x 12 me scicomp       324 Jan 24 13:47 data
-rw-r--r--  1 me scicomp     84188 Jan 24 13:47 lod-thin-1.000000-0.010000-0.030000.rda
drwxr-xr-x  2 me scicomp       808 Jan 24 13:47 log
lrwxrwxrwx  1 me scicomp        17 Jan 25 09:41 msg -> /home/me/msg

And I want to remove it using rm -r.

However, I'm scared rm -r will follow the symlink and delete everything in that directory (which would be very bad).

I can't find anything about this in the man pages. What would be the exact behavior of running rm -rf from a directory above this one?


LordDoskias, Jan 25, 2012 at 16:43

How hard is it to create a dummy dir with a symlink pointing to a dummy file and execute the scenario? Then you will know for sure how it works! – LordDoskias

hakre, Feb 4, 2015 at 13:09

X-Ref: If I rm -rf a symlink will the data the link points to get erased, too? ; Deleting a folder that contains symlinks – hakre, Feb 4 '15 at 13:09

Susam Pal, Jan 25, 2012 at 16:47

Example 1: Deleting a directory containing a soft link to another directory.

susam@nifty:~/so$ mkdir foo bar
susam@nifty:~/so$ touch bar/a.txt
susam@nifty:~/so$ ln -s /home/susam/so/bar/ foo/baz
susam@nifty:~/so$ tree
.
├── bar
│   └── a.txt
└── foo
    └── baz -> /home/susam/so/bar/

3 directories, 1 file
susam@nifty:~/so$ rm -r foo
susam@nifty:~/so$ tree
.
└── bar
    └── a.txt

1 directory, 1 file
susam@nifty:~/so$

So, we see that the target of the soft-link survives.

Example 2: Deleting a soft link to a directory.

susam@nifty:~/so$ ln -s /home/susam/so/bar baz
susam@nifty:~/so$ tree
.
├── bar
│   └── a.txt
└── baz -> /home/susam/so/bar

2 directories, 1 file
susam@nifty:~/so$ rm -r baz
susam@nifty:~/so$ tree
.
└── bar
    └── a.txt

1 directory, 1 file
susam@nifty:~/so$

Only the soft link is deleted. The target of the soft link survives.

Example 3: Attempting to delete the target of a soft link.

susam@nifty:~/so$ ln -s /home/susam/so/bar baz
susam@nifty:~/so$ tree
.
├── bar
│   └── a.txt
└── baz -> /home/susam/so/bar

2 directories, 1 file
susam@nifty:~/so$ rm -r baz/
rm: cannot remove 'baz/': Not a directory
susam@nifty:~/so$ tree
.
├── bar
└── baz -> /home/susam/so/bar

2 directories, 0 files

The file in the target of the symbolic link does not survive.


The above experiments were done on a Debian GNU/Linux 9.0 (stretch) system.


Wyrmwood, Oct 30, 2014 at 20:36

rm -rf baz/* will remove the contents – Wyrmwood, Oct 30 '14 at 20:36

Buttle Butkus, Jan 12, 2016 at 0:35

Yes, if you do rm -rf [symlink], then the contents of the original directory will be obliterated! Be very careful. – Buttle Butkus, Jan 12 '16 at 0:35

frnknstn, Sep 11, 2017 at 10:22

Your example 3 is incorrect! On each system I have tried, the file a.txt will be removed in that scenario. – frnknstn, Sep 11 '17 at 10:22

Susam Pal, Sep 11, 2017 at 15:20

@frnknstn You are right. I see the same behaviour you mention on my latest Debian system. I don't remember on which version of Debian I performed the earlier experiments. In my earlier experiments on an older version of Debian, either a.txt must have survived in the third example or I must have made an error in my experiment. I have updated the answer with the current behaviour I observe on Debian 9, and this behaviour is consistent with what you mention. – Susam Pal, Sep 11 '17 at 15:20

Ken Simon, Jan 25, 2012 at 16:43

Your /home/me/msg directory will be safe if you rm -rf the directory from which you ran ls. Only the symlink itself will be removed, not the directory it points to.

The only thing I would be cautious of would be calling something like "rm -rf msg/" (with the trailing slash). Do not do that, because it will remove the directory that msg points to, rather than the msg symlink itself.


> ,Jan 25, 2012 at 16:54

"The only thing I would be cautious of would be calling something like "rm -rf msg/" (with the trailing slash). Do not do that, because it will remove the directory that msg points to, rather than the msg symlink itself." – I don't find this to be true. See the third example in my response below. – Susam Pal, Jan 25 '12 at 16:54

Andrew Crabb, Nov 26, 2013 at 21:52

I get the same result as @Susam ('rm -r symlink/' does not delete the target of symlink), which I am pleased about, as it would be a very easy mistake to make. – Andrew Crabb, Nov 26 '13 at 21:52


rm should remove files and directories. If the file is a symbolic link, the link is removed, not the target; rm does not interpret (follow) symbolic links. Consider, for example, the behavior when deleting a "broken" link: rm exits with 0, not with a non-zero status, i.e. it does not treat a broken link as a failure.

[Jun 13, 2010] Unix Blog -- rm command - Argument List too long

Some days back I had problems deleting a large number of files in a folder.

While using the rm command, it would say:

$rm -f /home/sriram/tmp/*.txt
-bash: /bin/rm: Argument list too long

This is not a limitation of the rm command, but a kernel limitation on the size of the parameters of the command. Since I was performing shell globbing (selecting all the files with extension .txt), this meant that the size of the command line arguments became bigger with the number of the files.
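You can check this limit directly; getconf reports it in bytes (commonly 2097152, i.e. 2 MB, on modern Linux kernels, though the exact value varies by system):

$ getconf ARG_MAX
2097152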

One solution is to either run the rm command inside a loop, deleting each individual result, or to use find with xargs to pass the file names to rm. I prefer the find solution, so I changed the rm line inside the script to:

find /home/$u/tmp/ -name '*.txt' -print0 | xargs -0 rm

This does the trick and solves the problem. One final touch was to avoid warnings when there were no actual files to delete, like:

rm: too few arguments

Try `rm --help' for more information.

For this I added the -f parameter to rm (-f, --force: ignore nonexistent files, never prompt). Since this was running in a shell script from cron, the prompt was not needed anyway. The final line I used to replace the rm one was:

find /home/$u/tmp/ -name '*.txt' -print0 | xargs -0 rm -f

You can also do this in the directory itself (note that ls | xargs rm mishandles file names containing whitespace):

ls | xargs rm

Alternatively you could have used a one line find:

find /home/$u/tmp/ -name '*.txt' -exec rm {} \; -print
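With GNU find you can skip xargs entirely; the -delete action (a GNU extension, also present in BSD find, but not in older POSIX-only finds) removes each match as it is found and sidesteps the argument-list limit:

find /home/$u/tmp/ -name '*.txt' -delete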

TidBITS A Mac User's Guide to the Unix Command Line, Part 1

Warning! The command line is not without certain risks. Unlike when you work in the Finder, some tasks you carry out are absolute and cannot be undone. The command I am about to present, rm, is very powerful. It removes files permanently and completely instead of just putting them in the Trash. You can't recover files after deleting them with rm, so use it with great care, and always use the -i option, as explained below, so Terminal asks you to confirm deleting each file.

Your prompt should look something like this, showing that you are still inside the Test directory you created earlier:

   [Walden:~/Test] kirk%

Type the following:

   % rm -i testfile

The rm command removes files and directories, in this case the file testfile. The -i option tells Terminal to run the rm command in interactive mode, asking you to make sure you want to delete the file. Terminal asks:

   remove testfile?

Type y for yes, then press Return or Enter and the file is removed. If you wanted to leave it there, you could just type n for no, or press Return.

We should check to make sure the file is gone:

   % ls

After typing ls, you should just see a prompt. Terminal doesn't tell you that the directory is empty, but it shows what's in the directory: nothing.

Now, move up into your Home folder. Type:

   % cd ..

This is the same cd command that we used earlier to change directories. Here, the command tells the Terminal to go up in the directory hierarchy to the next directory (the .. is a shortcut for the parent directory); in this case, that is your Home directory.

Type ls again to see what's in this directory:

   % ls

You should see something like this:

   Desktop    Library  Music     Public  Test
   Documents  Movies   Pictures  Sites

The Test directory is still there, but using rm, it's easy to delete it by typing:

   % rm -d -i Test

The -d option tells rm to remove directories. When Terminal displays:

   remove Test?

Type y, then press Return or Enter. (If you didn't remove testfile, as explained above, the rm command won't delete the directory because it won't, by default, delete directories that are not empty.)

Make one final check to see if the directory has been deleted.

   % ls
 
   Desktop    Library  Music     Public
   Documents  Movies   Pictures  Sites

The Answer Gang 62 about Unix command rm

I have a question about the rm command. Would you please tell me how to remove all files except certain ones, like anything ending in .c?

[Mike] The easiest way (meaning it will work on any Unix systems anywhere), is to move those files to a temporary directory, then delete "everything", then move those files back.


mkdir /tmp/tdir
mv *.c /tmp/tdir
rm *
mv /tmp/tdir/* .
rmdir /tmp/tdir

[Ben] The above would work, but seems rather clunky, as well as needing a lot of typing.

[Mike] Yes, it's not something you'd want to do frequently. However, if you don't know a lot about Unix commands, and are hesitant to write a shell script which deletes a lot of files, it's a good trick to remember.

[Ben] It's true that it is completely portable; the only questionable part of my suggestion immediately below might be the "-1" in the "ls", but all the versions of "ls" with which I'm familiar support the "single column display" function. It would be very easy to adapt.

My preference would be to use something like

rm $(ls -1|grep -v "\.c$")

because the argument given to "grep" can be a regular expression. Given that, you can say things like "delete all files except those that end in 'htm' or 'html'", "delete all except '*.c', '*.h', and '*.asm'", as well as a broad range of other things. If you want to eliminate the error messages given by the directories (rm can't delete them without other switches), as well as making "rm" ask you for confirmation on each file, you could use a "fancier" version -

rm -i $(ls -AF1|grep -v "/$"|grep -v "\.c$")

Note that in the second argument - the only one that should be changed - the "\" in front of the ".c" is essential: it makes the "." a literal period rather than a single-character match. As an example, let's try the above with different options.

In a directory that contains


testc
test-c
testcx
test.cdx
test.c

".c" means "'c' preceded by any character" - NO files would be deleted.

"\.c" means "'c' preceded by a period" - deletes the first 3 files.

"\.c$" means "'c' preceded by a period and followed by the end of the line" - all the files except the last one would be gone.

Here's a script that would do it all in one shot, including showing a list of files to be deleted:

See attached misc/tag/rmx.bash.txt

[Dan] Which works pretty well up to some limit, at which things break down and exit due to $skip being too long.

For a less interactive script which can remove inordinate numbers of files, something containing:

ls -AF1 | grep -v /$ | grep -v $1 | xargs rm

allows "xargs" to collect as many files as it can on a command line, and invoke "rm" repeatedly.

It would be prudent to try the thing out in a directory containing only expendable files with names similar to the intended victims/saved.

[Ben] Possibly a good idea for some systems. I've just tried it on a directory with 1,000 files in it (created just for the purpose) and deleted 990 of them in one shot, then recreated them and deleted only 9 of them. Everything worked fine, but testing is indeed a prudent thing to do.

[Dan] Or with some typists. I've more than once had to resort to backups due to a slip of the fingers (the brain?) with an "rm" expression.

[Ben] <*snort*> Never happened to me. No sir. Uh-uh. <Anxious glance to make sure the weekly backup disk is where it should be>

I just put in that "to be deleted" display for, umm, practice. Yeah.

<LOL> Good point, Dan.

Got that sinking feeling that often follows an overzealous rm? Our system doctor has a prescription.

by Mark Komarinski

There was recently a bit of traffic on the Usenet newsgroups about the need for (or lack of) an undelete command for Linux. If you were to type rm * tmp instead of rm *tmp and such a command were available, you could quickly recover your files.

The main problem with this idea from a filesystem standpoint involves the differences between the way DOS handles its filesystems and the way Linux handles its filesystems.

Let's look at how DOS handles its filesystems. When DOS writes a file to a hard drive (or a floppy drive) it begins by finding the first block that is marked "free" in the File Allocation Table (FAT). Data is written to that block, the next free block is searched for and written to, and so on until the file has been completely written. The problem with this approach is that the file can be in blocks that are scattered all over the drive. This scattering is known as fragmentation and can seriously degrade your filesystem's performance, because now the hard drive has to look all over the place for file fragments. When files are deleted, the space is marked "free" in the FAT and the blocks can be used by another file.

The good thing about this is that, if you delete a file that is out near the end of your drive, the data in those blocks may not be overwritten for months. In this case, it is likely that you will be able to get your data back for a reasonable amount of time afterwards.

Linux (actually, the second extended filesystem that is almost universally used under Linux) is slightly smarter in its approach to fragmentation. It uses several techniques to reduce fragmentation, involving segmenting the filesystem into independently-managed groups, temporarily reserving large chunks of contiguous space for files, and starting the search for new blocks to be added to a file from the current end of the file, rather than from the start of the filesystem. This greatly decreases fragmentation and makes file access much faster. The only case in which significant fragmentation occurs is when large files are written to an almost-full filesystem, because the filesystem is probably left with lots of free spaces too small to tuck files into nicely.

Because of this policy for finding empty blocks for files, when a file is deleted, the (probably large) contiguous space it occupied becomes a likely place for new files to be written. Also, because Linux is a multi-user, multitasking operating system, there is often more file-creating activity going on than under DOS, which means that those empty spaces where files used to be are more likely to be used for new files. "Undeleteability" has been traded off for a very fast filesystem that normally never needs to be defragmented.

The easiest answer to the problem is to put something in the filesystem that says a file was just deleted, but there are four problems with this approach:

  1. You would need to write a new filesystem or modify a current one (i.e. hack the kernel).
  2. How long should a file be marked "deleted"?
  3. What happens when a hard drive is filled with files that are "deleted"?
  4. What kind of performance loss and fragmentation will occur when files have to be written around "deleted" space?

Each of these questions can be answered and worked around. If you want to do it, go right ahead and try--the ext2 filesystem has space reserved to help you. But I have some solutions that require zero lines of C source code.

I have two similar solutions, and your job as a system administrator is to determine which method is best for you. The first method is a user-by-user no-root-needed approach, and the other is a system-wide approach implemented by root for all (or almost all) users.

The user-by-user approach can be done by anyone with shell access and it doesn't require root privileges, only a few changes to your .profile and .login or .bashrc files and a bit of drive space. The idea is that you alias the rm command to move the files to another directory. Then, when you log in the next time, those files that were moved are purged from the filesystem using the real /bin/rm command. Because the files are not actually deleted by the user, they are accessible until the next login. If you're using the bash shell, add this to your .bashrc file:

alias waste='/bin/rm'
rm () { mv -- "$@" ~/.rm ; }   # a function, not an alias: bash aliases cannot take arguments
and in your
.profile:
if [ -d ~/.rm ];
 then
   /bin/rm -r ~/.rm
   mkdir ~/.rm
   chmod og-r ~/.rm
 else
   mkdir ~/.rm
   chmod og-r ~/.rm
 fi

Advantages:

Disadvantages:

System-Wide

The second method is similar to the user-by-user method, but everything is done in /etc/profile and cron entries. The /etc/profile entries do almost the same job as above, and the cron entry removes all the old files every night. The other big change is that deleted files are stored in /tmp before they are removed, so this will not create a problem for users with quotas on their home directories.

The cron daemon (or crond) is a program set up to execute commands at a specified time. These are usually frequently-repeated tasks, such as doing nightly backups or dialing into a SLIP server to get mail every half-hour. Adding an entry requires a bit of work. This is because the user has a crontab file associated with him which lists tasks that the crond program has to perform. To get a list of what crond already knows about, use the crontab -l command, for "list the current cron tasks". To set new cron tasks, you have to use the crontab <file command for "read in cron assignments from this file". As you can see, the best way to add a new cron task is to take the list from crontab -l, edit it to suit your needs, and use crontab <file to submit the modified list. It will look something like this:

~# crontab -l > cron.fil
~# vi cron.fil
To add the necessary cron entry, just type the commands above as root and go to the end of the cron.fil file. Add the following lines:
# Automatically remove files from the
# /tmp/.rm directory that haven't been
# accessed in the last week.
0 0 * * * find /tmp/.rm -atime +7 -exec /bin/rm {} \;
Then type:
~# crontab cron.fil

Of course, you can change -atime +7 to -atime +1 if you want to delete files every day; it depends on how much space you have and how much room you want to give your users.

Now, in your /etc/profile (as root):

if [ -n "$BASH" == "" ] ;
then # we must be running bash
   alias waste='/bin/rm'
   alias rm='mv $1 /tmp/.rm/"$LOGIN"'
   undelete () {
     if [ -e /tmp/.rm/"$LOGIN"/$1 ] ; then
       cp /tmp/.rm/"$LOGIN"/$1 .
     else
       echo "$1 not available"
     fi
   }   if [ -n -e /tmp/.rm/"$LOGIN" ] ;
   then
     mkdir /tmp/.rm/"$LOGIN"
     chmod og-rwx /tmp/.rm/"$LOGIN"
   fi
fi

Once you restart cron and your users log in, your new `undelete' is ready to go for all users running bash. You can construct a similar mechanism for users using csh, tcsh, ksh, zsh, pdksh, or whatever other shells you use. Alternately, if all your users have /usr/bin in their paths ahead of /bin, you can make a shell script called /usr/bin/rm which does essentially the same thing as the alias above, and create an undelete shell script as well. The advantage of doing this is that it is easier to do complete error checking, which is not done here.
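A minimal sketch of such a wrapper (assumptions: it is installed as /usr/local/bin/rm somewhere ahead of /bin in PATH, it reuses the /tmp/.rm/$LOGIN layout from the profile code above, and it does only rudimentary error checking):

#!/bin/sh
# /usr/local/bin/rm -- move files to a per-user holding area instead of deleting
# assumes $LOGIN is exported; on many systems $LOGNAME is the portable choice
dir=/tmp/.rm/"$LOGIN"
[ $# -eq 0 ] && { echo "usage: rm file..." >&2; exit 1; }
mkdir -p "$dir" && chmod og-rwx "$dir"
exec mv -- "$@" "$dir"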

Advantages:

Disadvantages:

These solutions will work for simple use. More demanding users may want a more complete solution, and there are many ways to implement these. If you implement a very elegant solution, consider packaging it for general use, and send me an e-mail message about it so that I can tell everyone about it here.

Unix is a Four Letter Word --- Strange names

There may come a time that you will discover that you have somehow created a file with a strange name that cannot be removed through conventional means. This section contains some unconventional approaches that may aid in removing such files.

Files that begin with a dash can be removed by typing

 rm ./-filename
A couple other ways that may work are
 rm -- -filename
and
 rm - -filename
Now let's suppose that we have an even nastier filename. One that I ran across this summer was a file with no filename. The solution I used to remove it was to type
 rm -i *
This executes the rm command in interactive mode. I then answered "yes" to the query to remove the nameless file and "no" to all the other queries about the rest of the files.

Another method I could have used would be to obtain the inode number of the nameless file with

ls -i

and then type
 find . -inum number -ok rm '{}' \;
where number is the inode number.

The -ok flag causes a confirmation prompt to be displayed. If you would rather live on the edge and not be bothered with the prompting, you can use -exec in place of -ok.

Suppose you didn't want to remove the file with the funny name, but wanted to rename it so that you could access it more readily. This can be accomplished by following the previous procedure with the following modification to the find command:

 find . -inum number -ok mv '{}' new_filename \;

[Oct 01 2004] Meddling in the Affairs of Wizards

Most people who have spent any time on any version of Unix know that "rm -rf /" is about the worst mistake you can make on any given machine. (For novices, "/" is the root directory, and -r means recursive, so rm keeps deleting files until the entire file system is gone, or at least until something like libc is gone after which the system becomes, as we often joke, a warm brick.)

Well a couple of years ago one Friday afternoon a bunch of us were exchanging horror stories on this subject, when Bryan asked "why don't we fix rm?" So I did.

The code changes were, no surprise, trivial. The hardest part of the whole thing was that one reviewer wanted /usr/xpg4/bin/rm to be changed as well, and that required a visit to our standards guru. He thought the change made sense, but might technically violate the spec, which only allowed rm to treat "." and ".." as special cases for which it could immediately exit with an error. So I submitted a defect report to the appropriate standards committee, thinking it would be a slam dunk.

Well, some of these standards committee members either like making convoluted arguments or just don't see the world the same way I do, as more than one person suggested that the spec was just fine and that "/" was not worthy of special consideration. We tried all sorts of common sense arguments, to no avail. In the end, we had to beat them at their own game, by pointing out that if one attempts to remove "/" recursively, one will ultimately attempt to remove ".." and ".", and that all we are doing is allowing rm to pre-determine this heuristically. Amazingly, they bought that!

Anyway, in the end, we got the spec modified, and Solaris 10 has (since build 36) a version of /usr/bin/rm (/bin is a sym-link to /usr/bin on Solaris) and /usr/xpg4/bin/rm which behaves thus:

[28] /bin/rm -rf /
rm of / is not allowed
[29] 

Recommended Links

Sites

rm (Unix) - Wikipedia, the free encyclopedia

Saferm implementations

Reference

UNIX man pages rm ()

Remove (unlink) the FILE(s).

-d, --directory
       unlink FILE, even if it is a non-empty directory (super-user only; this works
       only if your system supports `unlink' for non-empty directories)

-f, --force
       ignore nonexistent files, never prompt

-i, --interactive
       prompt before any removal

--no-preserve-root
       do not treat `/' specially (the default)

--preserve-root
       fail to operate recursively on `/'

-r, -R, --recursive
       remove directories and their contents recursively

-v, --verbose
       explain what is being done

--help
       display this help and exit

--version
       output version information and exit

By default, rm does not remove directories. Use the --recursive (-r or
-R) option to remove each listed directory, too, along with all of its
contents.

To remove a file whose name starts with a `-', for example `-foo', use
one of these commands:

rm -- -foo

rm ./-foo

Note that if you use rm to remove a file, it is usually possible to
recover the contents of that file. If you want more assurance that the
contents are truly unrecoverable, consider using shred.
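For example, shred from GNU coreutils overwrites a file in place before unlinking it (-u removes the file after overwriting; -n sets the number of overwrite passes):

shred -u -n 3 secret.txt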


