
How to have a safer rm command and avoid some blunders


Introduction

The command rm is very important, but it is also very primitive. Like a blade, it can cause catastrophic damage to server filesystems in shaky hands.

Many Unix sysadmin horror stories are related to unintended consequences and side effects of classic Unix commands. One of the most prominent of such commands is rm, which has several idiosyncrasies that are either easily forgotten or unknown to novice sysadmins. For example, the behavior of rm -r .* can easily be forgotten from one encounter to another, especially if the sysadmin works with both Unix and Windows servers. The consequences, if it is executed as root, are tragic...

While most, probably 99.9%, of operations with rm are successful, it is the remaining 0.1% that create a lot of damage and are the subject of sysadmin folklore.

The command rm has relatively few options. Two of them are especially important: -r, which enables recursive removal, and -I, which prompts once before a mass deletion.

Linux uses the GNU implementation, which as of April 2018 has the options listed in the Reference section below (note the relatively new option -I).

Wiping out useful data by using rm with wrong parameters or options is probably the most common blunder committed by both novice and seasoned sysadmins. It happens rarely, but the results are often devastating. This danger can't be completely avoided, but in addition to having an up-to-date backup, there are several steps you can take to reduce the chances of major damage to the data:

  1. Do a backup before any dangerous operation. Always back up /etc on the server you plan to work on at the beginning of your working day. This can, and probably should, be done from dot scripts (such as .bash_profile) on login, especially if you plan to delete files from system directories such as /etc, /boot, or /root, or if the operation you intend to do is complex and requires traversing the filesystem using find with the -exec or -delete option. That's Rule No. 1 (see the sketch after this list).
  2. Use a wrapper such as saferm or safe-rm. It really helps to prevent many typical cases of accidental deletion of important system and user files.
  3. If you plan to delete multiple files/directories using rm -r, find -exec (see also Typical Errors In Using Find), or wildcards, first run ls, tree, or find with the argument you intend to use in the rm command and visually verify the correctness of the list. Wrappers for the rm command do it automatically.
  4. It is much safer to delete individual files using a file manager like WinSCP or Midnight Commander than to do this directly by typing on the command line. In this case you have very important visual feedback, and you are less likely to make a mistake. Also, you do not need to type the name of the file or directory to be deleted, which helps to avoid a whole class of errors.
  5. When you are programming a simple script for file deletion, try to hard-code part of the path into the generated rm command (or any other destructive command). In other words, writing rm $MYDIR is a very bad idea; it should be something like rm /home/joeuser/$MYDIR. This protects you from cases when your variable is for some reason left undefined and your command mistakenly operates on the root directory or the current directory. Similarly, chmod -R 755 /Backup/$mypath is much safer than chmod -R 755 $mypath (illustrated in the sketch after this list).
     
  6. Move files to a Trash folder instead of deleting them, if you have enough space for the files that you are deleting. You can delete them from the Trash folder later.
     
  7. Use option -I instead of -i when deleting multiple files. First of all, the -i option (which is the default alias in Red Hat and other distributions) is really annoying. Moreover, after a while the answer "yes" becomes automatic, ingrained in your brain, and that's a huge danger. Many users eventually simply disable it, either by using option -f or by specifying a path, thus avoiding the alias. As option -I is less intrusive, it can be recommended as a better replacement. Unfortunately it does not solve the problem of reckless actions, but probably nothing can solve that. It prompts you only once before removing more than three files, or when removing recursively. See rm(1):

    If the -I or --interactive=once option is given, and there are more than three files or the -r, -R, or --recursive are given, then rm prompts the user for whether to proceed with the entire operation. If the response is not affirmative, the entire command is aborted.

  8. To make globs also match subdirectories and files starting with a dot, you can use "shopt -s dotglob".
  9. Do not improvise complex, potentially destructive commands on the command line without necessity. Command history is your friend. If the command is complex or destructive, it makes sense to write it in an editor first instead of improvising it directly on the command line. Sometimes, while typing on the command line, you can hit the Enter key or insert a blank where you did not intend to. Using the editor helps to avoid those blunders.
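Here is a minimal sketch pulling together items 1, 3, 5 and 8 above (the /backup path, the joeuser account, and the MYDIR variable are hypothetical names, not a recommendation of a particular layout):

#!/bin/bash
# Item 1: snapshot /etc at the start of the working day (e.g. from .bash_profile)
tar czf "/backup/etc-$(date +%Y%m%d).tgz" /etc

# Item 3: dry run first -- inspect the expansion, then reuse the same pattern for rm
ls -ld /home/joeuser/tmp/*.bak
rm /home/joeuser/tmp/*.bak

# Item 5: hard-code part of the path; ${MYDIR:?...} aborts the command
# with an error if the variable is unset or empty
rm -r "/home/joeuser/${MYDIR:?MYDIR is not set}"

# Item 8: make globs match dot files too, only when that is really what you want
shopt -s dotglob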


Classic blunders with the rm command

Without monthly "safety minutes" you usually forget about the dangers that rm poses and can accidentally step on the same rake again and again ;-).

There are several classic blunders committed using the rm command:

  1. Running an rm -R command with a wildcard without testing the expansion first using ls. You should always use an ls -Rl command to test complex rm -R commands (-R, --recursive means process subdirectories recursively).
  2. Mistyping a complex path or file name in rm commands. It is always better to use copy-and-paste for directories and files used in an rm command, as it helps to avoid various typos. If you are doing a recursive deletion, it is useful to have a "dry run" using find or ls to see the files first.

    Here are two examples of such typos:

    rm -rf /tmp/foo/bar/ *

    instead of

    rm -rf /tmp/foo/bar/*

    ===

    Let's assume that you are in directory /backup, which has a subdirectory etc that you want to delete. As the path "/etc" is ingrained in a sysadmin's mind, it is very easy to automatically/subconsciously type

    rm -r /etc

    instead of

    rm -r etc

    And realize what you have done only a second or so later. That's why, for commands which include names of OS system directories, it is safer to type them in an editor, inspect them, and only then execute the command on the command line. If you are using a terminal emulator on a Windows desktop, then Notepad++ or a similar editor is OK.

    This is also why it is prudent to alias rm to the saferm script of your choice, which should prevent such mishaps (aliases are not used in non-interactive sessions).

    This actually is an interesting type of "induced" error: because /etc is typed so often, it is kind of ingrained in a sysadmin's head and can be typed subconsciously instead of etc.

  3. Using option -r with the .* wildcard without understanding its consequences. The pattern .* matches ".." (the parent directory), so rm -r .* climbs into the parent directory and recursively deletes its contents as well.
  4. You can accidentally hit Enter by mistake before finishing typing the line containing the rm command. If this happens after the first character and the first character is /, or after several characters which form the name of a system directory like /boot (in older versions of Linux), you will find yourself in trouble. Sometimes this happens with a new keyboard, when you have not yet fully adapted to the "feel" of its keys.
  5. You can accidentally insert a blank after *, thus separating the argument into two: "*" and the rest, with the corresponding consequences. The argument * probably should be allowed if and only if it is a single argument on the command line, unless -f is specified.
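Prefacing a questionable command with echo makes the last blunder visible before any damage is done, as shown in this hypothetical session (the file names are made up):

$ echo rm -rf /tmp/foo/bar/ *
rm -rf /tmp/foo/bar/ data.txt notes scripts

The stray space turned * into a second argument, and the echoed line shows exactly what would have been deleted from the current directory.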

Those cases can be viewed as shortcomings of the rm implementation. For example, * probably should be allowed only as a single argument, not as one of several arguments; but Unix expands arguments before passing them to the command (expansion is done by the shell, and there is no simple access to the original command line), so this is tricky to check. rm also should automatically block deletion of system directories like /etc, along with the list of "protected" directories specified in its config file, unless the -f flag is specified. Unix has no "system" attribute for files, so it is difficult to distinguish system files from the rest, but files owned by root:root probably deserve special treatment as system files, even if one is working as root.


Writing a wrapper like "saferm"

Unix lacks system attributes for files, although the sticky bit or the immutable attribute can be used instead. It is wise to use wrappers for rm.

In view of the danger of rm -r unleashed as root, it is wise to alias rm to such a wrapper in root's .bash_profile or whatever profile file you are using for root. Aliases are not used in non-interactive sessions, so this will not affect any scripts. And in command-line sessions rm is usually typed without a path (which creates another set of dangers).
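For example, assuming the wrapper is installed as /usr/local/bin/saferm (a hypothetical path), the relevant line in root's .bash_profile would be:

# interactive sessions get the wrapper; scripts (non-interactive shells) still call /bin/rm
alias rm='/usr/local/bin/saferm'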

There are several more or less useful features that you might wish to experiment with when writing such a wrapper. Of course, each environment is unique, and the wrapper should take its idiosyncrasies into account.

If you know Perl, you can use Safe-rm as the initial implementation and enhance it. It is licensed under the GPL and is packaged in Ubuntu, I think. Here is an excerpt from its README:

How to use
-----------

Once you have installed safe-rm on your system (see INSTALL), you will need to
fill the system-wide or user-specific blacklists with the paths that you'd like
to protect against accidental deletion.

The system-wide blacklist lives in /etc/safe-rm.conf and you should probably add
paths like these:

  /
  /etc
  /usr
  /usr/lib
  /var

The user-specific blacklist lives in ~/.config/safe-rm and could include things like:

  /home/username/documents
  /home/username/documents/*
  /home/username/.mozilla


Other approaches
-----------------

If you want more protection than what safe-rm can offer, here are a few suggestions.

You could of course request confirmation every time you delete a file by putting this in
your /etc/bash.bashrc:

  alias rm='rm -i'

But this won't protect you from getting used to always saying yes, or from accidently
using 'rm -rf'.

Or you could make use of the Linux filesystem "immutable" attribute by marking (as root)
each file you want to protect:

  chattr +i file

Of course this is only usable on filesystems which support this feature.

Here are two projects which allow you to recover recently deleted files by trapping
all unlink(), rename() and open() system calls through the LD_PRELOAD facility:

  delsafe
  http://homepage.esoterica.pt/~nx0yew/delsafe/

  libtrashcan
  http://hpux.connect.org.uk/hppd/hpux/Development/Libraries/libtrash-0.2/readme.html

There are also projects which implement the FreeDesktop.org trashcan spec. For example:

  trash-cli
  http://code.google.com/p/trash-cli/

Finally, this project is a fork of GNU coreutils and adds features similar to safe-rm
to the rm command directly:

  http://wiki.github.com/d5h/rmfd/

Backup files before executing an rm command, if you are deleting a directory and it is not that large

Backups before execution of an rm command are important. For example, making a backup of the /etc directory on a modern server takes a couple of seconds, but it can save quite a lot of nerves in situations that otherwise can be devastating. In the blunder described above, for instance, you would erase all files and subdirectories in the /etc directory. Modern flavors of Unix usually prevent erasing /, but not /etc. Linux, which uses the GNU version of rm, refuses to erase / by default (see the section on root deletion protection below).

You can also move the directory to a /Trash folder instead of deleting it, if it is relatively small. Files in that folder that are older than, say, 7 days can then be deleted by a cron script, or whenever the total size of the directory exceeds a certain threshold. A sketch of such a cron entry is shown below.
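Here is a sketch of such a cron entry, assuming the trash directory is /Trash (a similar entry using /tmp/.rm appears in the undelete article quoted later on this page):

# purge files in /Trash that have not been accessed for 7 days, every night at midnight
0 0 * * * find /Trash -atime +7 -exec /bin/rm -rf {} \;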

More on blunders with deleting a directory from a backup

I once automatically typed /etc instead of etc while trying to delete a directory to free space in a backup area on a production server (/etc is probably engraved in a sysadmin's head, as it is typed so often, and can be substituted for etc subconsciously). I realized that it was a mistake and cancelled the command, but it was a fast server and one third of /etc was gone. The rest of the day was spoiled... Actually not completely: I learned quite a bit about the behavior of AIX in this situation and the structure of the AIX /etc directory that day, so each such disaster is actually a great learning experience, almost like a one-day training course ;-). But it's much less nerve-wracking to get this knowledge from a course... Another interesting thing is that having a backup was not enough in this case -- the enterprise backup software stopped working on the damaged server. The same was true for telnet and ssh. And this was a remote server in a datacenter across the country. I restored the directory on another, non-production server (overwriting the /etc directory on this second box with the help of operations; talk about cascading errors and Murphy's law :-). Then netcat helped to transfer the tar file.

If you are working as root and performing dangerous operations, never type the path in the command; copy it from the screen. If you can copy the command from history instead of typing it, do it! Always check whether the directory you are trying to delete is a symbolic link; such symbolic links are often used in home directories to simplify navigation...

Such blunders are really persistent, as often-used directories are typed in a "semiconscious" fashion, in "autopilot" mode, and you realize the blunder only after you hit Enter. For example, many years after the first blunder, I similarly typed rm -r /Soft instead of rm -r Soft in a backup directory. And I was in a hurry, so I did not even bother to load my profile and worked on a "bare root" account which did not have the "saferm" feature I was talking about earlier. Unfortunately for me, /Soft was a huge directory, so a backup was not practical, as I needed to free space. It was full of all kinds of software, and the server was very fast, so in the couple of seconds that elapsed before I realized the blunder, approximately half a gigabyte of files and directories was wiped out. Luckily I had the previous day's backup.

In such cases network services with authentication stop working, and the only way to transfer files is using a CD/DVD, a USB drive, or netcat. That's why it is useful to have netcat on servers: it is the last-resort file transfer program for when services with authentication like ftp or scp stop working. It is especially useful if the datacenter is remote.


The saying goes that experience keeps the most expensive school, but fools are unable to learn in any other ;-). Please read the classic sysadmin horror stories.

A simple extra space often produces horrible consequences:

cd /usr/lib
ls /tmp/foo/bar

I typed
rm -rf /tmp/foo/bar/ *

instead of

rm -rf /tmp/foo/bar/*

The system doesn't run very well without all of its libraries...

You can block such behavior by requiring that, if the -r option is given, there is one and only one argument to rm. That check should be part of the functionality of your "saferm" script; a sketch follows.
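Here is a minimal sketch of such a check for a saferm-style wrapper (option parsing is deliberately simplified; treat it as an illustration, not a complete implementation):

#!/bin/bash
# refuse recursive deletion of more than one operand
recursive=""
operands=()
for a in "$@"; do
    case "$a" in
        -*r*|-*R*) recursive="yes" ;;   # any option cluster containing r or R
        -*) ;;                          # other options are ignored in this sketch
        *)  operands+=("$a") ;;
    esac
done
if [ -n "$recursive" ] && [ "${#operands[@]}" -gt 1 ]; then
    echo "saferm: refusing to remove ${#operands[@]} operands recursively" >&2
    exit 1
fi
exec /bin/rm "$@"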

Important class of subtle Unix errors: dot-star errors

Another popular class of recursive rm errors is so-called dot-star errors, which often happen when rm is used with find. Novice sysadmins usually do not realize that '.*' also matches '..', often with disastrous consequences. If you are in any doubt as to how a wildcard will expand, use the echo command to see the result.
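For example, in a hypothetical home directory:

$ echo .*
. .. .bash_profile .bashrc

The expansion includes ".." (and "."), so rm -r .* would climb into the parent directory and try to delete it recursively.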

Using "convenience" symlinks to other (typically deeply nested) directories inside a directories and forgetting about them

The rm command does not follow symlinks. But if you traverse the tree with find -L (or -follow) and the -exec option, symlinks will be followed. So if the directory you are deleting contains a symlink to some system or other important directory, all files in the target will be deleted.

That's the danger of "convenience symlinks", which are used to simplify access to deeply nested directories from home directories. Using aliases or shell functions might be a better approach. If you do use such symlinks, create them only in a level-two directory dedicated to this purpose, like /Fav, and protect this directory in your saferm script; a sketch of such a symlink check follows.
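A wrapper can at least warn about this case with a simple test. A sketch, assuming the operand is in $1 (the trailing slash is stripped because the -L test is false for a symlink written as "dir/"):

# warn when the operand is a symbolic link
target="${1%/}"
if [ -L "$target" ]; then
    echo "warning: $target is a symlink to $(readlink -f "$target")" >&2
fi

(readlink -f is the GNU coreutils form; on other systems the resolution step may differ.)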

Disastrous typos in the name of the directory or pattern -- an unintended, automatically entered space

There are some exotic typos that can lead you into trouble, especially with the -r option. One of them is an unintended space:

 rm /home/joeuser/old *

instead of

rm /home/joeuser/old*

Or, similarly:

 rm * .bak

instead of

rm *.bak

In all cases when you are deleting multiple files, it makes sense to get a listing of them first via the ls command, and then replace ls with rm in that command. This approach is safer than typing the file list yourself, especially if you are working with a remote server.

Such mistakes are even more damaging if you use the -r option. For example:

rm -rf /tmp/foo/bar/ *

instead of

rm -rf /tmp/foo/bar/*

NOTE: it is prudent for your saferm script to block execution of rm commands with multiple arguments if the -r option is used.

Root deletion protection

To remove a file you must have write permission on the directory where it is stored. Sun introduced "rm -rf /" protection in Solaris 10, first released in 2005. Upon executing the command, the system now reports that removal of / is not allowed.

Shortly afterwards, the same functionality was introduced into the FreeBSD version of the rm utility.

GNU rm refuses to execute rm -rf / if the --preserve-root option is given, which has been the default since version 6.4 of GNU Core Utilities, released in 2006.
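On a recent Linux system the refusal looks like this (the exact wording varies between coreutils versions):

$ rm -rf /
rm: it is dangerous to operate recursively on '/'
rm: use --no-preserve-root to override this failsafe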

No such protection exists for critical system directories like /etc, but you can imitate it by putting a file named "-i" into such directories, or by using a wrapper for interactive usage of the command. This trick is based on the fact that the file -i will be the first entry in the expanded list of arguments to rm and will trigger the confirmation prompts. You can also consider any directory owned by system accounts to be a system directory and refuse to delete them in your safe-rm script.
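The trick itself is one command per protected directory (shown here for a hypothetical /important directory; note that it only helps against glob-based deletion such as rm *, and not against rm -f):

# create a file literally named "-i"; an unquoted glob expands it first,
# so rm sees it as the -i option and switches to interactive mode
cd /important && touch ./-i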

There are also other approaches, such as moving files to a /Trash directory instead of deleting them.

Using echo command to see expansion of your wildcards, if any

To understand what will actually be executed after shell expansion, preface your rm command with echo. For example, if there is a filename starting with a dash, you will receive a very confusing message from rm:

	$ rm *
	rm: unknown option -- -
	usage: rm [-f|-i] [-dPRrvW] file ...
To find out what caused it, prefix the command with echo:
echo rm *
One common mishap is running as root a complex command like find ... -exec rm {} \; without any testing, or without even rereading the command several times before hitting Enter. This is covered in more detail in Typical Errors In Using Find.
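The standard precaution is to run find with -print first and substitute the destructive action only after inspecting the list. A hypothetical example:

# step 1: see what find actually matches
find /var/tmp -name '*.core' -mtime +30 -print
# step 2: only after the list looks right, delete
find /var/tmp -name '*.core' -mtime +30 -exec rm {} \;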


Old News ;-)

[Feb 21, 2019] https://github.com/MikeDacre/careful_rm

Feb 21, 2019 | github.com

rm is a powerful *nix tool that simply drops a file from the drive index. It doesn't delete it or put it in a Trash can, it just de-indexes it which makes the file hard to recover unless you want to put in the work, and pretty easy to recover if you are willing to spend a few hours trying (use shred to actually secure erase files).

careful_rm.py is inspired by the -I interactive mode of rm and by safe-rm . safe-rm adds a recycle bin mode to rm, and the -I interactive mode adds a prompt if you delete more than a handful of files or recursively delete a directory. ZSH also has an option to warn you if you recursively rm a directory.

These are all great, but I found them unsatisfying. What I want is for rm to be quick and not bother me for single file deletions (so rm -i is out), but to let me know when I am deleting a lot of files, and to actually print a list of files that are about to be deleted . I also want it to have the option to trash/recycle my files instead of just straight deleting them.... like safe-rm , but not so intrusive (safe-rm defaults to recycle, and doesn't warn).

careful_rm.py is fundamentally a simple rm wrapper that accepts all of the same commands as rm, but with a few additional options and features. In the source code CUTOFF is set to 3, so deleting more files than that will prompt the user. Also, deleting a directory will prompt the user separately with a count of all files and subdirectories within the folders to be deleted.

Furthermore, careful_rm.py implements a fully integrated trash mode that can be toggled on with -c . It can also be forced on by adding a file at ~/.rm_recycle , or toggled on only for $HOME (the best idea), by ~/.rm_recycle_home . The mode can be disabled on the fly by passing --direct , which forces off recycle mode.

The recycle mode tries to find the best location to recycle to on MacOS or Linux, on MacOS it also tries to use Apple Script to trash files, which means the original location is preserved (note Applescript can be slow, you can disable it by adding a ~/.no_apple_rm file, but Put Back won't work). The best location for trashes goes in this order:

  1. $HOME/.Trash on Mac or $HOME/.local/share/Trash on Linux
  2. <mountpoint>/.Trashes on Mac or <mountpoint>/.Trash-$UID on Linux
  3. /tmp/$USER_trash

Always the best trash can to avoid Volume hopping is favored, as moving across file systems is slow. If the trash does not exist, the user is prompted to create it, they then also have the option to fall back to the root trash ( /tmp/$USER_trash ) or just rm the files.

/tmp/$USER_trash is almost always used for deleting system/root files, but note that you most likely do not want to save those files, and straight rm is generally better.

[Feb 21, 2019] https://github.com/lagerspetz/linux-stuff/blob/master/scripts/saferm.sh by Eemil Lagerspetz

Shell script that tries to implement the trash can idea
Feb 21, 2019 | github.com
#!/bin/bash
##
## saferm.sh
## Safely remove files, moving them to GNOME/KDE trash instead of deleting.
## Made by Eemil Lagerspetz
## Login <vermind@drache>
##
## Started on Mon Aug 11 22:00:58 2008 Eemil Lagerspetz
## Last update Sat Aug 16 23:49:18 2008 Eemil Lagerspetz
##
version="1.16";

... ... ...

[Feb 21, 2019] The rm='rm -i' alias is an horror

Feb 21, 2019 | superuser.com

The rm='rm -i' alias is an horror because after a while using it, you will expect rm to prompt you by default before removing files. Of course, one day you'll run it with an account that hasn't that alias set and before you understand what's going on, it is too late.

... ... ...

If you want safe aliases, but don't want to risk getting used to the commands working differently on your system than on others, you can disable rm like this:
alias rm='echo "rm is disabled, use remove or trash or /bin/rm instead."'

Then you can create your own safe alias, e.g.

alias remove='/bin/rm -irv'

or use trash instead.

[Feb 21, 2019] Ubuntu Manpage trash - Command line trash utility.

Feb 21, 2019 | manpages.ubuntu.com

xenial ( 1 ) trash.1.gz

Provided by: trash-cli_0.12.9.14-2_all

NAME

       trash - Command line trash utility.
SYNOPSIS 
       trash [arguments] ...
DESCRIPTION 
       Trash-cli  package  provides  a command line interface trashcan utility compliant with the
       FreeDesktop.org Trash Specification.  It remembers the name, original path, deletion date,
       and permissions of each trashed file.

ARGUMENTS 
       Names of files or directory to move in the trashcan.
EXAMPLES
       $ cd /home/andrea/
       $ touch foo bar
       $ trash foo bar
BUGS 
       Report bugs to http://code.google.com/p/trash-cli/issues
AUTHORS
       Trash  was  written  by Andrea Francia <andreafrancia@users.sourceforge.net> and Einar Orn
       Olason <eoo@hi.is>.  This manual page was written by  Steve  Stalcup  <vorian@ubuntu.com>.
       Changes made by Massimo Cavalleri <submax@tiscalinet.it>.

SEE ALSO 
       trash-list(1),   trash-restore(1),   trash-empty(1),   and   the   FreeDesktop.org   Trash
       Specification at http://www.ramendik.ru/docs/trashspec.html.

       Both are released under the GNU General Public License, version 2 or later.

[Jan 10, 2019] saferm Safely remove files, moving them to GNOME/KDE trash instead of deleting by Eemil Lagerspetz

Jan 10, 2019 | github.com
#!/bin/bash
##
## saferm.sh
## Safely remove files, moving them to GNOME/KDE trash instead of deleting.
## Made by Eemil Lagerspetz
## Login   <vermind@drache>
## 
## Started on  Mon Aug 11 22:00:58 2008 Eemil Lagerspetz
## Last update Sat Aug 16 23:49:18 2008 Eemil Lagerspetz
##

version="1.16";

## flags (change these to change default behaviour)
recursive="" # do not recurse into directories by default
verbose="true" # set verbose by default for inexperienced users.
force="" #disallow deleting special files by default
unsafe="" # do not behave like regular rm by default

## possible flags (recursive, verbose, force, unsafe)
# don't touch this unless you want to create/destroy flags
flaglist="r v f u q"

# Colours
blue='\e[1;34m'
red='\e[1;31m'
norm='\e[0m'

## trashbin definitions
# this is the same for newer KDE and GNOME:
trash_desktops="$HOME/.local/share/Trash/files"
# if neither is running:
trash_fallback="$HOME/Trash"

# use .local/share/Trash?
use_desktop=$( ps -U $USER | grep -E "gnome-settings|startkde|mate-session|mate-settings|mate-panel|gnome-shell|lxsession|unity" )

# mounted filesystems, for avoiding cross-device move on safe delete
filesystems=$( mount | awk '{print $3; }' )

if [ -n "$use_desktop" ]; then
    trash="${trash_desktops}"
    infodir="${trash}/../info";
    for k in "${trash}" "${infodir}"; do
        if [ ! -d "${k}" ]; then mkdir -p "${k}"; fi
    done
else
    trash="${trash_fallback}"
fi

usagemessage() {
        echo -e "This is ${blue}saferm.sh$norm $version. LXDE and Gnome3 detection.
    Will ask to unsafe-delete instead of cross-fs move. Allows unsafe (regular rm) delete (ignores trashinfo).
    Creates trash and trashinfo directories if they do not exist. Handles symbolic link deletion.
    Does not complain about different user any more.\n";
        echo -e "Usage: ${blue}/path/to/saferm.sh$norm [${blue}OPTIONS$norm] [$blue--$norm] ${blue}files and dirs to safely remove$norm"
        echo -e "${blue}OPTIONS$norm:"
        echo -e "$blue-r$norm      allows recursively removing directories."
        echo -e "$blue-f$norm      Allow deleting special files (devices, ...)."
  echo -e "$blue-u$norm      Unsafe mode, bypass trash and delete files permanently."
        echo -e "$blue-v$norm      Verbose, prints more messages. Default in this version."
  echo -e "$blue-q$norm      Quiet mode. Opposite of verbose."
        echo "";
}

detect() {
    if [ ! -e "$1" ]; then fs=""; return; fi
    path=$(readlink -f "$1")
    for det in $filesystems; do
        match=$( echo "$path" | grep -oE "^$det" )
        if [ -n "$match" ]; then
            if [ ${#det} -gt ${#fs} ]; then
                fs="$det"
            fi
        fi
    done
}


trashinfo() {
#gnome: generate trashinfo:
        bname=$( basename -- "$1" )
    fname="${trash}/../info/${bname}.trashinfo"
    cat <<EOF > "${fname}"
[Trash Info]
Path=$PWD/${1}
DeletionDate=$( date +%Y-%m-%dT%H:%M:%S )
EOF
}

setflags() {
    for k in $flaglist; do
        reduced=$( echo "$1" | sed "s/$k//" )
        if [ "$reduced" != "$1" ]; then
            flags_set="$flags_set $k"
        fi
    done
  for k in $flags_set; do
        if [ "$k" == "v" ]; then
            verbose="true"
        elif [ "$k" == "r" ]; then 
            recursive="true"
        elif [ "$k" == "f" ]; then 
            force="true"
        elif [ "$k" == "u" ]; then 
            unsafe="true"
        elif [ "$k" == "q" ]; then 
    unset verbose
        fi
  done
}

performdelete() {
                        # "delete" = move to trash
                        if [ -n "$unsafe" ]
                        then
                          if [ -n "$verbose" ];then echo -e "Deleting $red$1$norm"; fi
                    #UNSAFE: permanently remove files.
                    rm -rf -- "$1"
                        else
                          if [ -n "$verbose" ];then echo -e "Moving $blue$k$norm to $red${trash}$norm"; fi
                    mv -b -- "$1" "${trash}" # moves and backs up old files
                        fi
}

askfs() {
  detect "$1"
  if [ "${fs}" != "${tfs}" ]; then
    unset answer;
    until [ "$answer" == "y" -o "$answer" == "n" ]; do
      echo -e "$blue$1$norm is on $blue${fs}$norm. Unsafe delete (y/n)?"
      read -n 1 answer;
    done
    if [ "$answer" == "y" ]; then
      unsafe="yes"
    fi
  fi
}

complain() {
  msg=""
  if [ ! -e "$1" -a ! -L "$1" ]; then # does not exist
    msg="File does not exist:"
        elif [ ! -w "$1" -a ! -L "$1" ]; then # not writable
    msg="File is not writable:"
        elif [ ! -f "$1" -a ! -d "$1" -a -z "$force" ]; then # Special or sth else.
        msg="Is not a regular file or directory (and -f not specified):"
        elif [ -f "$1" ]; then # is a file
    act="true" # operate on files by default
        elif [ -d "$1" -a -n "$recursive" ]; then # is a directory and recursive is enabled
    act="true"
        elif [ -d "$1" -a -z "${recursive}" ]; then
                msg="Is a directory (and -r not specified):"
        else
                # not file or dir. This branch should not be reached.
                msg="No such file or directory:"
        fi
}

asknobackup() {
  unset answer
        until [ "$answer" == "y" -o "$answer" == "n" ]; do
          echo -e "$blue$k$norm could not be moved to trash. Unsafe delete (y/n)?"
          read -n 1 answer
        done
        if [ "$answer" == "y" ]
        then
          unsafe="yes"
          performdelete "${k}"
          ret=$?
                # Reset temporary unsafe flag
          unset unsafe
          unset answer
        else
          unset answer
        fi
}

deletefiles() {
  for k in "$@"; do
          fdesc="$blue$k$norm";
          complain "${k}"
          if [ -n "$msg" ]
          then
                  echo -e "$msg $fdesc."
    else
        #actual action:
        if [ -z "$unsafe" ]; then
          askfs "${k}"
        fi
                  performdelete "${k}"
                  ret=$?
                  # Reset temporary unsafe flag
                  if [ "$answer" == "y" ]; then unset unsafe; unset answer; fi
      #echo "MV exit status: $ret"
      if [ ! "$ret" -eq 0 ]
      then 
        asknobackup "${k}"
      fi
      if [ -n "$use_desktop" ]; then
          # generate trashinfo for desktop environments
        trashinfo "${k}"
      fi
    fi
        done
}

# Make trash if it doesn't exist
if [ ! -d "${trash}" ]; then
    mkdir "${trash}";
fi

# find out which flags were given
afteropts=""; # boolean for end-of-options reached
for k in "$@"; do
        # if starts with dash and before end of options marker (--)
        if [ "${k:0:1}" == "-" -a -z "$afteropts" ]; then
                if [ "${k:1:2}" == "-" ]; then # if end of options marker
                        afteropts="true"
                else # option(s)
                    setflags "$k" # set flags
                fi
        else # not starting with dash, or after end-of-opts
                files[++i]="$k"
        fi
done

if [ -z "${files[1]}" ]; then # no parameters?
        usagemessage # tell them how to use this
        exit 0;
fi

# Which fs is trash on?
detect "${trash}"
tfs="$fs"

# do the work
deletefiles "${files[@]}"



[Oct 22, 2018] linux - If I rm -rf a symlink will the data the link points to get erased, to

Notable quotes:
"... Put it in another words, those symlink-files will be deleted. The files they "point"/"link" to will not be touch. ..."
Oct 22, 2018 | unix.stackexchange.com

user4951 ,Jan 25, 2013 at 2:40

This is the contents of the /home3 directory on my system:
./   backup/    hearsttr@  lost+found/  randomvi@  sexsmovi@
../  freemark@  investgr@  nudenude@    romanced@  wallpape@

I want to clean this up but I am worried because of the symlinks, which point to another drive.

If I say rm -rf /home3 will it delete the other drive?

John Sui

rm -rf /home3 will delete all files and directory within home3 and home3 itself, which include symlink files, but will not "follow"(de-reference) those symlink.

Put it in another words, those symlink-files will be deleted. The files they "point"/"link" to will not be touch.

[Oct 22, 2018] Does rm -rf follow symbolic links?

Jan 25, 2012 | superuser.com
I have a directory like this:
$ ls -l
total 899166
drwxr-xr-x 12 me scicomp       324 Jan 24 13:47 data
-rw-r--r--  1 me scicomp     84188 Jan 24 13:47 lod-thin-1.000000-0.010000-0.030000.rda
drwxr-xr-x  2 me scicomp       808 Jan 24 13:47 log
lrwxrwxrwx  1 me scicomp        17 Jan 25 09:41 msg -> /home/me/msg

And I want to remove it using rm -r .

However I'm scared rm -r will follow the symlink and delete everything in that directory (which is very bad).

I can't find anything about this in the man pages. What would be the exact behavior of running rm -rf from a directory above this one?

LordDoskias Jan 25 '12 at 16:43, Jan 25, 2012 at 16:43

How hard it is to create a dummy dir with a symlink pointing to a dummy file and execute the scenario? Then you will know for sure how it works! –

hakre ,Feb 4, 2015 at 13:09

X-Ref: If I rm -rf a symlink will the data the link points to get erased, too? ; Deleting a folder that contains symlinks – hakre Feb 4 '15 at 13:09

Susam Pal ,Jan 25, 2012 at 16:47

Example 1: Deleting a directory containing a soft link to another directory.
susam@nifty:~/so$ mkdir foo bar
susam@nifty:~/so$ touch bar/a.txt
susam@nifty:~/so$ ln -s /home/susam/so/bar/ foo/baz
susam@nifty:~/so$ tree
.
├── bar
│   └── a.txt
└── foo
    └── baz -> /home/susam/so/bar/

3 directories, 1 file
susam@nifty:~/so$ rm -r foo
susam@nifty:~/so$ tree
.
└── bar
    └── a.txt

1 directory, 1 file
susam@nifty:~/so$

So, we see that the target of the soft-link survives.

Example 2: Deleting a soft link to a directory

susam@nifty:~/so$ ln -s /home/susam/so/bar baz
susam@nifty:~/so$ tree
.
├── bar
│   └── a.txt
└── baz -> /home/susam/so/bar

2 directories, 1 file
susam@nifty:~/so$ rm -r baz
susam@nifty:~/so$ tree
.
└── bar
    └── a.txt

1 directory, 1 file
susam@nifty:~/so$

Only, the soft link is deleted. The target of the soft-link survives.

Example 3: Attempting to delete the target of a soft-link

susam@nifty:~/so$ ln -s /home/susam/so/bar baz
susam@nifty:~/so$ tree
.
├── bar
│   └── a.txt
└── baz -> /home/susam/so/bar

2 directories, 1 file
susam@nifty:~/so$ rm -r baz/
rm: cannot remove 'baz/': Not a directory
susam@nifty:~/so$ tree
.
├── bar
└── baz -> /home/susam/so/bar

2 directories, 0 files

The file in the target of the symbolic link does not survive.

The above experiments were done on a Debian GNU/Linux 9.0 (stretch) system.

Wyrmwood ,Oct 30, 2014 at 20:36

rm -rf baz/* will remove the contents – Wyrmwood Oct 30 '14 at 20:36

Buttle Butkus ,Jan 12, 2016 at 0:35

Yes, if you do rm -rf [symlink], then the contents of the original directory will be obliterated! Be very careful. – Buttle Butkus Jan 12 '16 at 0:35

frnknstn ,Sep 11, 2017 at 10:22

Your example 3 is incorrect! On each system I have tried, the file a.txt will be removed in that scenario. – frnknstn Sep 11 '17 at 10:22

Susam Pal ,Sep 11, 2017 at 15:20

@frnknstn You are right. I see the same behaviour you mention on my latest Debian system. I don't remember on which version of Debian I performed the earlier experiments. In my earlier experiments on an older version of Debian, either a.txt must have survived in the third example or I must have made an error in my experiment. I have updated the answer with the current behaviour I observe on Debian 9 and this behaviour is consistent with what you mention. – Susam Pal Sep 11 '17 at 15:20

Ken Simon ,Jan 25, 2012 at 16:43

Your /home/me/msg directory will be safe if you rm -rf the directory from which you ran ls. Only the symlink itself will be removed, not the directory it points to.

The only thing I would be cautious of, would be if you called something like "rm -rf msg/" (with the trailing slash.) Do not do that because it will remove the directory that msg points to, rather than the msg symlink itself.

> ,Jan 25, 2012 at 16:54

"The only thing I would be cautious of, would be if you called something like "rm -rf msg/" (with the trailing slash.) Do not do that because it will remove the directory that msg points to, rather than the msg symlink itself." - I don't find this to be true. See the third example in my response below. – Susam Pal Jan 25 '12 at 16:54

Andrew Crabb ,Nov 26, 2013 at 21:52

I get the same result as @Susam ('rm -r symlink/' does not delete the target of symlink), which I am pleased about as it would be a very easy mistake to make. – Andrew Crabb Nov 26 '13 at 21:52

,

rm should remove files and directories. If the file is symbolic link, link is removed, not the target. It will not interpret a symbolic link. For example what should be the behavior when deleting 'broken links'- rm exits with 0 not with non-zero to indicate failure

[Jun 13, 2010] Unix Blog -- rm command - Argument List too long

Some days back I had problems deleting a large number of files in a folder.

while using rm command it would say

$rm -f /home/sriram/tmp/*.txt
-bash: /bin/rm: Argument list too long

This is not a limitation of the rm command, but a kernel limitation on the size of the parameters of the command. Since I was performing shell globbing (selecting all the files with extension .txt), this meant that the size of the command line arguments became bigger with the number of the files.

One solution is to either run the rm command inside a loop and delete each individual result, or to use find with xargs to pass the file list to rm. I prefer the find solution, so I changed the rm line inside the script to:

find /home/$u/tmp/ -name '*.txt' -print0 | xargs -0 rm

this does the trick and solves the problem. One final touch was to not receive warnings if there were no actual files to delete, like:

rm: too few arguments

Try `rm --help' for more information.

For this I have added the -f parameter to rm (-f, --force = ignore nonexistent files, never prompt). Since this was running in a shell script from cron, the prompt was not needed anyway, so no problem here. The final line I used to replace the rm one was:

find /home/$u/tmp/ -name '*.txt' -print0 | xargs -0 rm -f

You can also do this in the directory:
ls | xargs rm

Alternatively you could have used a one line find:

find /home/$u/tmp/ -name '*.txt' -exec rm {} \; -print

TidBITS A Mac User's Guide to the Unix Command Line, Part 1

Warning! The command line is not without certain risks. Unlike when you work in the Finder, some tasks you carry out are absolute and cannot be undone. The command I am about to present, rm, is very powerful. It removes files permanently and completely instead of just putting them in the Trash. You can't recover files after deleting them with rm, so use it with great care, and always use the -i option, as explained below, so Terminal asks you to confirm deleting each file.

Your prompt should look something like this, showing that you are still inside the Test directory you created earlier:

   [Walden:~/Test] kirk%

Type the following:

   % rm -i testfile

The rm command removes files and directories, in this case the file testfile. The -i option tells Terminal to run the rm command in interactive mode, asking you to make sure you want to delete the file. Terminal asks:

   remove testfile?

Type y for yes, then press Return or Enter and the file is removed. If you wanted to leave it there, you could just type n for no, or press Return.

We should check to make sure the file is gone:

   % ls

After typing ls, you should just see a prompt. Terminal doesn't tell you that the directory is empty, but it shows what's in the directory: nothing.

Now, move up into your Home folder. Type:

   % cd ..

This is the same cd command that we used earlier to change directories. Here, the command tells the Terminal to go up in the directory hierarchy to the next directory (the .. is a shortcut for the parent directory); in this case, that is your Home directory.

Type ls again to see what's in this directory:

   % ls

You should see something like this:

   Desktop    Library  Music     Public  Test
   Documents  Movies   Pictures  Sites

The Test directory is still there, but using rm, it's easy to delete it by typing:

   % rm -d -i Test

The -d option tells rm to remove directories. When Terminal displays:

   remove Test?

Type y, then press Return or Enter. (If you didn't remove testfile, as explained above, the rm command won't delete the directory because it won't, by default, delete directories that are not empty.)

Make one final check to see if the directory has been deleted.

   % ls
 
   Desktop    Library  Music     Public
   Documents  Movies   Pictures  Sites

The Answer Gang 62 about Unix command rm

I have a question about the rm command. Would you please tell me how to remove all files except certain ones, like anything ending with .c?

[Mike] The easiest way (meaning it will work on any Unix systems anywhere), is to move those files to a temporary directory, then delete "everything", then move those files back.


mkdir /tmp/tdir
mv *.c /tmp/tdir
rm *
mv /tmp/tdir/* .
rmdir /tmp/tdir

[Ben] The above would work, but seems rather clunky, as well as needing a lot of typing.

[Mike] Yes, it's not something you'd want to do frequently. However, if you don't know a lot about Unix commands, and are hesitant to write a shell script which deletes a lot of files, it's a good trick to remember.

[Ben] It's true that it is completely portable; the only questionable part of my suggestion immediately below might be the "-1" in the "ls", but all the versions of "ls" with which I'm familiar support the "single column display" function. It would be very easy to adapt.

My preference would be to use something like

rm $(ls -1|grep -v "\.c$")

because the argument given to "grep" can be a regular expression. Given that, you can say things like "delete all files except those that end in 'htm' or 'html'", "delete all except '*.c', '*.h', and '*.asm'", as well as a broad range of other things. If you want to eliminate the error messages given by the directories (rm can't delete them without other switches), as well as making "rm" ask you for confirmation on each file, you could use a "fancier" version -

rm -i $(ls -AF1|grep -v "/$"|grep -v "\.c$")

Note that in the second argument - the only one that should be changed - the "\" in front of the ".c" is essential: it makes the "." a literal period rather than a single-character match. As an example, lets try the above with different options.

In a directory that contains


testc
test-c
testcx
test.cdx
test.c

".c" means "'c' preceded by any character" - NO files would be deleted.

"\.c" means "'c' preceded by a period" - deletes the first 3 files.

"\.c$" means "'c' preceded by a period and followed by the end of the line" - all the files except the last one would be gone.

Here's a script that would do it all in one shot, including showing a list of files to be deleted:

See attached misc/tag/rmx.bash.txt

[Dan] Which works pretty well up to some limit, at which things break down and exit due to $skip being too long.

For a less interactive script which can remove inordinate numbers of files, something containing:

ls -AF1 | grep -v /$ | grep -v $1 | xargs rm

allows "xargs" to collect as many files as it can on a command line, and invoke "rm" repeatedly.

It would be prudent to try the thing out in a directory containing only expendable files with names similar to the intended victims/saved.

[Ben] Possibly a good idea for some systems. I've just tried it on a directory with 1,000 files in it (created just for the purpose) and deleted 990 of them in one shot, then recreated them and deleted only 9 of them. Everything worked fine, but testing is indeed a prudent thing to do.

[Dan] Or with some typists. I've more than once had to resort to backups due to a slip of the fingers (the brain?) with an "rm" expression.

[Ben] <*snort*> Never happened to me. No sir. Uh-uh. <Anxious glance to make sure the weekly backup disk is where it should be>

I just put in that "to be deleted" display for, umm, practice. Yeah.

<LOL> Good point, Dan.

Got that sinking feeling that often follows an overzealous rm? Our system doctor has a prescription.

by Mark Komarinski

There was recently a bit of traffic on the Usenet newsgroups about the need for (or lack of) an undelete command for Linux. If you were to type rm * tmp instead of rm *tmp and such a command were available, you could quickly recover your files.

The main problem with this idea from a filesystem standpoint involves the differences between the way DOS handles its filesystems and the way Linux handles its filesystems.

Let's look at how DOS handles its filesystems. When DOS writes a file to a hard drive (or a floppy drive) it begins by finding the first block that is marked "free" in the File Allocation Table (FAT). Data is written to that block, the next free block is searched for and written to, and so on until the file has been completely written. The problem with this approach is that the file can be in blocks that are scattered all over the drive. This scattering is known as fragmentation and can seriously degrade your filesystem's performance, because now the hard drive has to look all over the place for file fragments. When files are deleted, the space is marked "free" in the FAT and the blocks can be used by another file.

The good thing about this is that, if you delete a file that is out near the end of your drive, the data in those blocks may not be overwritten for months. In this case, it is likely that you will be able to get your data back for a reasonable amount of time afterwards.

Linux (actually, the second extended filesystem that is almost universally used under Linux) is slightly smarter in its approach to fragmentation. It uses several techniques to reduce fragmentation, involving segmenting the filesystem into independently-managed groups, temporarily reserving large chunks of contiguous space for files, and starting the search for new blocks to be added to a file from the current end of the file, rather than from the start of the filesystem. This greatly decreases fragmentation and makes file access much faster. The only case in which significant fragmentation occurs is when large files are written to an almost-full filesystem, because the filesystem is probably left with lots of free spaces too small to tuck files into nicely.

Because of this policy for finding empty blocks for files, when a file is deleted, the (probably large) contiguous space it occupied becomes a likely place for new files to be written. Also, because Linux is a multi-user, multitasking operating system, there is often more file-creating activity going on than under DOS, which means that those empty spaces where files used to be are more likely to be used for new files. "Undeleteability" has been traded off for a very fast filesystem that normally never needs to be defragmented.

The easiest answer to the problem is to put something in the filesystem that says a file was just deleted, but there are four problems with this approach:

  1. You would need to write a new filesystem or modify a current one (i.e. hack the kernel).
  2. How long should a file be marked "deleted"?
  3. What happens when a hard drive is filled with files that are "deleted"?
  4. What kind of performance loss and fragmentation will occur when files have to be written around "deleted" space?

Each of these questions can be answered and worked around. If you want to do it, go right ahead and try--the ext2 filesystem has space reserved to help you. But I have some solutions that require zero lines of C source code.

I have two similar solutions, and your job as a system administrator is to determine which method is best for you. The first method is a user-by-user no-root-needed approach, and the other is a system-wide approach implemented by root for all (or almost all) users.

The user-by-user approach can be done by anyone with shell access and it doesn't require root privileges, only a few changes to your .profile and .login or .bashrc files and a bit of drive space. The idea is that you alias the rm command to move the files to another directory. Then, when you log in the next time, those files that were moved are purged from the filesystem using the real /bin/rm command. Because the files are not actually deleted by the user, they are accessible until the next login. If you're using the bash shell, add this to your .bashrc file:

alias waste='/bin/rm'
# an alias cannot take arguments, so a shell function is used instead
rm () { mv "$@" ~/.rm ; }

and in your .profile:
if [ -x ~/.rm ];
 then
   /bin/rm -r ~/.rm
   mkdir ~/.rm
   chmod og-r ~/.rm
 else
   mkdir ~/.rm
   chmod og-r ~/.rm
 fi

Advantages:

Disadvantages:

System-Wide

The second method is similar to the user-by-user method, but everything is done in /etc/profile and cron entries. The /etc/profile entries do almost the same job as above, and the cron entry removes all the old files every night. The other big change is that deleted files are stored in /tmp before they are removed, so this will not create a problem for users with quotas on their home directories.

The cron daemon (or crond) is a program set up to execute commands at a specified time. These are usually frequently-repeated tasks, such as doing nightly backups or dialing into a SLIP server to get mail every half-hour. Adding an entry requires a bit of work. This is because the user has a crontab file associated with him which lists tasks that the crond program has to perform. To get a list of what crond already knows about, use the crontab -l command, for "list the current cron tasks". To set new cron tasks, you have to use the crontab <file command for "read in cron assignments from this file". As you can see, the best way to add a new cron task is to take the list from crontab -l, edit it to suit your needs, and use crontab <file to submit the modified list. It will look something like this:

~# crontab -l > cron.fil
~# vi cron.fil
To add the necessary cron entry, just type the commands above as root and go to the end of the cron.fil file. Add the following lines:
# Automatically remove files from the
# /tmp/.rm directory that haven't been
# accessed in the last week.
0 0 * * * find /tmp/.rm -atime +7 -exec /bin/rm {} \;
Then type:
~# crontab cron.fil

Of course, you can change -atime +7 to -atime +1 if you want to delete files every day; it depends on how much space you have and how much room you want to give your users.

Now, in your /etc/profile (as root):

if [ -n "$BASH" ] ;
then # we must be running bash
   alias waste='/bin/rm'
   # an alias cannot take arguments, so a shell function is used instead
   rm () { mv "$@" /tmp/.rm/"$LOGIN" ; }
   undelete () {
     if [ -e /tmp/.rm/"$LOGIN"/$1 ] ; then
       cp /tmp/.rm/"$LOGIN"/$1 .
     else
       echo "$1 not available"
     fi
   }
   if [ ! -e /tmp/.rm/"$LOGIN" ] ;
   then
     mkdir /tmp/.rm/"$LOGIN"
     chmod og-rwx /tmp/.rm/"$LOGIN"
   fi
fi

Once you restart cron and your users log in, your new `undelete' is ready to go for all users running bash. You can construct a similar mechanism for users using csh, tcsh, ksh, zsh, pdksh, or whatever other shells you use. Alternately, if all your users have /usr/bin in their paths ahead of /bin, you can make a shell script called /usr/bin/rm which does essentially the same thing as the alias above, and create an undelete shell script as well. The advantage of doing this is that it is easier to do complete error checking, which is not done here.

Advantages:

Disadvantages:

These solutions will work for simple use. More demanding users may want a more complete solution, and there are many ways to implement these. If you implement a very elegant solution, consider packaging it for general use, and send me an e-mail message about it so that I can tell everyone about it here.

Unix is a Four Letter Word -- Strange names

There may come a time that you will discover that you have somehow created a file with a strange name that cannot be removed through conventional means. This section contains some unconventional approaches that may aid in removing such files.

Files that begin with a dash can be removed by typing

 rm ./-filename
A couple other ways that may work are
 rm -- -filename
and
 rm - -filename
Now let's suppose that we have an even nastier filename. One that I ran across this summer was a file with no filename. The solution I used to remove it was to type
 rm -i *
This executes the rm command in interactive mode. I then answered "yes" to the query to remove the nameless file and "no" to all the other queries about the rest of the files.

Another method I could have used would be to obtain the inode number of the nameless file with

ls -i

and then type
 find . -inum number -ok rm '{}' \;
where number is the inode number.

The -ok flag causes a confirmation prompt to be displayed. If you would rather live on the edge and not be bothered with the prompting, you can use -exec in place of -ok.

Suppose you didn't want to remove the file with the funny name, but wanted to rename it so that you could access it more readily. This can be accomplished by following the previous procedure with the following modification to the find command:

 find . -inum number -ok mv '{}' new_filename \;

[Oct 01 2004] Meddling in the Affairs of Wizards

Most people who have spent any time on any version of Unix know that "rm -rf /" is about the worst mistake you can make on any given machine. (For novices, "/" is the root directory, and -r means recursive, so rm keeps deleting files until the entire file system is gone, or at least until something like libc is gone after which the system becomes, as we often joke, a warm brick.)

Well a couple of years ago one Friday afternoon a bunch of us were exchanging horror stories on this subject, when Bryan asked "why don't we fix rm?" So I did.

The code changes were, no surprise, trivial. The hardest part of the whole thing was that one reviewer wanted /usr/xpg4/bin/rm to be changed as well, and that required a visit to our standards guru. He thought the change made sense, but might technically violate the spec, which only allowed rm to treat "." and ".." as special cases for which it could immediately exit with an error. So I submitted a defect report to the appropriate standards committee, thinking it would be a slam dunk.

Well, some of these standards committee members either like making convoluted arguments or just don't see the world the same way I do, as more than one person suggested that the spec was just fine and that "/" was not worthy of special consideration. We tried all sorts of common sense arguments, to no avail. In the end, we had to beat them at their own game, by pointing out that if one attempts to remove "/" recursively, one will ultimately attempt to remove ".." and ".", and that all we are doing is allowing rm to pre-determine this heuristically. Amazingly, they bought that!

Anyway, in the end, we got the spec modified, and Solaris 10 has (since build 36) a version of /usr/bin/rm (/bin is a sym-link to /usr/bin on Solaris) and /usr/xpg4/bin/rm which behaves thus:

[28] /bin/rm -rf /
rm of / is not allowed
[29] 
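GNU coreutils eventually adopted the same safeguard: starting with coreutils 6.4, --preserve-root became the default behavior of rm, so a modern Linux system refuses in a similar way (the exact message wording varies between versions):

 # rm -rf /
 rm: it is dangerous to operate recursively on '/'
 rm: use --no-preserve-root to override this failsafe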

Recommended Links


Sites

rm (Unix) - Wikipedia, the free encyclopedia

Saferm implementations

Reference

UNIX man pages: rm(1)

Remove (unlink) the FILE(s).

-d, --directory
       unlink FILE, even if it is a non-empty directory (super-user only; this works only if your system supports `unlink' for nonempty directories)

-f, --force
       ignore nonexistent files, never prompt

-i, --interactive
       prompt before every removal

-I     prompt once before removing more than three files, or when removing recursively; less intrusive than -i, while still giving protection against most mistakes

--no-preserve-root
       do not treat `/' specially

--preserve-root
       do not operate recursively on `/' (the default since coreutils 6.4)

-r, -R, --recursive
       remove directories and their contents recursively

-v, --verbose
       explain what is being done

--help display this help and exit

--version
       output version information and exit

By default, rm does not remove directories. Use the --recursive (-r or
-R) option to remove each listed directory, too, along with all of its
contents.

To remove a file whose name starts with a `-', for example `-foo', use
one of these commands:

rm -- -foo

rm ./-foo

Note that if you use rm to remove a file, it is usually possible to
recover the contents of that file. If you want more assurance that the
contents are truly unrecoverable, consider using shred.
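For example, a hypothetical file secret.dat could be overwritten and then removed like this (-n sets the number of overwrite passes, -z adds a final pass of zeros to hide the shredding, and -u unlinks the file afterwards):

 shred -n 3 -z -u secret.dat

Keep in mind that shred relies on the filesystem overwriting data in place, so it is not reliable on journaling, copy-on-write, or log-structured filesystems.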


