
Data Deduplication


Data deduplication (AKA single instance, common factoring, or capacity optimized storage) technologies strive to reduce the amount of duplicate data being backed up and then stored. The technologies identify and eliminate common files or data in and across backup streams. Data deduplication can provide a significant level of compression if the correct approach is taken. For example, executables on two Linux systems at the same patch level are usually 60% or more identical even if different applications are installed.

Deduplication vendors often claim that their products offer 20:1 or even greater data reduction ratios. That is plain vanilla hype: in reality much depends on the nature of the data and whether the backup is incremental or not. Because full backups contain mostly unchanged data, once the first full backup has been analysed and stored, all subsequent full backups can see a very high rate of deduplication. But what if the business doesn't retain 64 backup copies? What if the backups have a higher change rate? Space savings numbers from a vendor's marketing materials don't represent a real-life environment or what should be expected for space savings on real backup data sets.
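As a rough back-of-the-envelope illustration (my own simplification, not a vendor formula), assume each full backup has size S and a fraction c of the data changes between fulls. Retaining N fulls then needs roughly S*(1 + (N-1)*c) of deduplicated storage, so the achievable ratio is about N / (1 + (N-1)*c):

def dedup_ratio(n_fulls, change_rate):
    """Approximate dedup ratio for n retained full backups with a given change rate between fulls."""
    stored = 1 + (n_fulls - 1) * change_rate   # first full, plus only the changed fraction of each later full
    return n_fulls / stored

# 64 retained fulls with a 2% change rate: roughly 28:1
print("%.1f:1" % dedup_ratio(64, 0.02))
# A more modest 12 fulls with 10% churn: roughly 5.7:1
print("%.1f:1" % dedup_ratio(12, 0.10))

The same retention that yields a 20-plus ratio collapses to single digits once retention is shorter or the change rate is higher, which is exactly why vendor numbers need to be checked against your own retention policy and data churn.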

There are two main types of deduplication with respect to backups: target-based and source-based. Source-based deduplication removes duplicates on the client before the data is sent over the network, while target-based deduplication removes them on the backup device after the data arrives.

Removing copies (duplicates) of data and replacing them with pointers to the first (unique) copy of the data can result in less storage being consumed, shorter backup windows, and less network bandwidth being used.

These results are the fundamental reason that data deduplication technology is all the rage at the moment. Who doesn't like saving money, time, network bandwidth, etc.? But as with everything, the devil is in the details. This article presents the basic concepts and issues in data deduplication.

Deduplication is not really a new technology; it is an outgrowth of compression. Compression searches a single file for repeated binary patterns and replaces duplicates with pointers to the original or unique piece of data. Data deduplication extends this concept to include deduplication…

A quick illustration of deduplication versus compression: if you have two files that are identical, compression operates on each file independently, while data deduplication recognizes that the files are duplicates and stores only the first one. In addition, it can also search the first file for duplicate data, further reducing the size of the stored data (à la compression).

A very simple example of data deduplication is derived from an EMC video.

In this example there are three files. The first file, document1.docx, is a simple Microsoft Word file that is 6MB in size. The second file, document2.docx, is just a copy of the first file with a different file name. Finally, the last file, document_new.docx, is derived from document1.docx with some small changes to the data and is also 6MB in size.

Let's assume that a data deduplication process divides each file into six pieces (this is a very small number and is for illustrative purposes only). The first file has pieces A, B, C, D, E, and F. The second file, since it is a copy of the first, has exactly the same pieces. The third file has one piece changed, labeled G, and is also 6MB in size. Without data deduplication, a backup of the files would have to back up 18MB of data (6MB times 3). But with data deduplication only the first file and the new piece G in the third file are backed up, for a total of 7MB of data.

One additional feature that data deduplication offers is that after the backup, the pieces A, B, C, D, E, F, and G are typically stored in a list (sometimes called an index). When new files are backed up, their pieces are compared to the ones that have already been backed up. This is what makes data deduplication effective over time.
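To make the mechanism concrete, here is a minimal sketch in Python of fixed-size, hash-indexed chunking. The 1MB chunk size, the SHA-256 hash, and the in-memory dictionary index are illustrative assumptions, not a description of the EMC product or any other vendor's implementation:

import hashlib

CHUNK_SIZE = 1024 * 1024  # 1MB pieces; real products use smaller or variable-size chunks

def dedup_store(paths):
    """Store each unique chunk once, keyed by its SHA-256 hash."""
    index = {}    # hash -> chunk bytes (stands in for the backup target's chunk store)
    recipes = {}  # file path -> ordered list of chunk hashes needed to rebuild the file
    logical = 0   # bytes that would be stored without deduplication
    for path in paths:
        hashes = []
        with open(path, "rb") as f:
            while True:
                chunk = f.read(CHUNK_SIZE)
                if not chunk:
                    break
                logical += len(chunk)
                h = hashlib.sha256(chunk).hexdigest()
                index.setdefault(h, chunk)   # only the first occurrence of a chunk is kept
                hashes.append(h)
        recipes[path] = hashes
    stored = sum(len(c) for c in index.values())
    print("logical %d bytes, stored %d bytes" % (logical, stored))
    return index, recipes

Run against the three files above, the index would hold seven unique pieces (A through G), roughly 7MB stored for 18MB of logical data; a later backup of the same files would add nothing new, because every piece already has a matching hash in the index.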

One of the first questions asked after "what is data deduplication?" is "what level of deduplication can I expect?" The specific answer depends upon the details of the situation and the dedup implementation, but EMC quotes a range of 20:1 to 50:1 over a period of time.

Devilish Details

Data deduplication is not a "standard" in any sense, so all of the implementations are proprietary. Deduplication companies differentiate themselves in several areas, one of the most important being how duplicate data is identified, which is typically done by hashing the pieces of data.

One of the problems with using these hash algorithms is hash collisions. A hash collision is something of a "false positive": the hash for a piece of data may actually correspond to a different piece of data (i.e. the hash is not unique). Consequently, a piece of data may not be backed up because it has the same hash as one already stored in the index, even though the data itself is different. Obviously this can lead to data corruption. So what data dedup companies do is use several hash algorithms, or combinations of them, to make sure a piece of data truly is a duplicate. In addition, some dedup vendors use metadata to help identify and prevent collisions.
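One common safeguard, sketched below under my own assumptions (an in-memory index keyed by SHA-1 digests; vendors implement this very differently), is to verify an apparent hash match against the stored bytes before discarding the new piece as a duplicate:

import hashlib

def store_chunk(chunk, index):
    """Add chunk to the index unless an identical chunk is already stored.

    index maps SHA-1 digests to lists of chunks, so a hash collision
    (two different chunks with the same digest) cannot silently drop data.
    """
    h = hashlib.sha1(chunk).digest()
    bucket = index.setdefault(h, [])
    for stored in bucket:
        if stored == chunk:      # byte-for-byte verification of the apparent duplicate
            return False         # genuine duplicate, nothing new is stored
    bucket.append(chunk)         # hash matched but the data differed, or the bucket was empty
    return True

The byte-for-byte check costs an extra read of the stored copy, which is why some implementations instead accept the astronomically small collision risk quantified below.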

Getting an idea of the likelihood of a hash collision requires a little bit of math. This article does a pretty good job explaining the odds of a hash collision. The basic conclusion is that the odds are 1 in 2^160. This is a huge number. Alternatively, if you have 95 EB (exabytes, each 1,000 petabytes), then you have a 0.00000000000001110223024625156540423631668090820313% chance of getting a false positive in the hash comparison and throwing away a piece of data you should have kept. Even at 95 EB it is not likely you will encounter this chance over an extended period of time. But never say never (after all, someone once predicted we'd only need 640KB of memory).
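For the curious, the standard birthday-bound approximation gives the same flavor of result. The sketch below assumes a 160-bit hash and an 8KB average piece size purely for illustration:

from math import expm1

HASH_BITS = 160  # e.g. SHA-1; many products use SHA-256 or combine several hashes

def collision_probability(n_chunks, bits=HASH_BITS):
    """Birthday approximation: P ~ 1 - exp(-n^2 / 2^(bits+1))."""
    return -expm1(-(n_chunks ** 2) / 2 ** (bits + 1))

chunks = 95 * 10**18 // 8192          # 95 EB stored as 8KB pieces, about 1.2e16 of them
print(collision_probability(chunks))  # on the order of 1e-17, vanishingly small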

Implementation

Choosing one solution over another is a bit of an art and requires careful consideration of your environment and processes. The previously mentioned video has a couple of rules of thumb based on the fundamental difference between source-based and target-based deduplication. Since source-based deduplication removes the duplicates before the data leaves the client, it is a good fit when network bandwidth is limited, for example when backing up remote offices over a WAN.

On the other hand, target-based deduplication works well for SANs, LANs, and possibly databases. The reason is that moving the data around the network is not very expensive, and you may already have your backup packages chosen and in production.

Finally, the video also claims that source-based dedup can achieve a deduplication ratio of 50:1 and that target-based dedup can achieve 20:1. Both levels are very impressive. There are a number of articles that discuss how to estimate the deduplication ratios you can achieve; a ratio of 20:1 seems definitely possible.

There are many commercial deduplication products; any list in this article is incomplete and is not meant as a slight toward a particular company. Vendors mentioned elsewhere on this page include EMC, NetApp (A-SIS), Symantec, CommVault, and COPAN.

These are a few of the solutions that are available. There are some smaller companies that offer deduplication products as well.

Deduplication and Open-Source

There are not very many (any?) deduplication projects in the open-source world. You can, however, use a target-based deduplication device, because it allows you to keep your existing backup software, which could be open source. It is suggested you talk to the vendor to make sure the device has been tested with Linux.

The only deduplication project that could be found is called LessFS. It is a FUSE-based file system that has built-in deduplication. It is still early in the development process, but it has demonstrated deduplication capabilities and has incorporated encryption (ah, the beauty of FUSE).



Old News ;-)

Reducing the Cost of Storage Everyday – Deduplication Solutions Make it Happen! - Data Storage

NetApp Launches Advanced Deduplication Tool - Data Storage from eWeek

Most deduplication software is currently dedicated to backup and archival storage.

NetApp's A-SIS (Advanced Single Instance Storage) software is now available in its NearStore R200 and FAS storage systems for the first time.

Deduplication, which replicates only the unique segments of data that need to be stored, can cut the amount of data by 50 to 90 percent, not only saving storage space but also increasing bandwidth, lowering power and cooling requirements due to "resting" or inactive servers, and saving companies money on the bottom line.

NetApp's A-SIS deduplication reduces the amount of storage enterprises need to purchase and manage, and the reduction in the quantity of physical storage translates into savings in power and cooling costs and data center real estate costs.

"Deduplication, at its core, is another form of data virtualization, in which one physical copy represents many logical copies," said Tony Asaro, senior analyst at Enterprise Strategy Group in Milford, Mass.

"Deduplication creates a domino effect of efficiency, reducing capital, administrative and facility costs. We believe that data deduplication is one of the most important and valuable technologies in storage."

NetApp A-SIS deduplication technology has been in customer use for approximately two years, exclusively in conjunction with Symantec NetBackup. As of today, the same deduplication software can be deployed with a wide range of data types.

A-SIS deduplication can be enabled on NetApp FAS and NearStore R200 storage systems with one command. It runs seamlessly in the background, with virtually no read/write performance overhead, and is entirely application transparent, a spokesperson for Sunnyvale, Calif.-based NetApp said.

CommVault, a leading provider of Unified Data Management solutions, has completed testing of A-SIS deduplication with CommVault backup software. Based on the results, users can achieve up to a 20:1 space savings over traditional models, with the possibility of experiencing even greater compression ratios over time, a company spokesperson said.

"The space savings numbers that A-SIS deduplication offers users speak for themselves," said David West, vice president of Marketing and Business Development at CommVault.

"With A-SIS deduplication, users will be able to increase the number of data protection and archive copies they can store on NetApp FAS or NearStore storage systems, and gain even greater storage efficiencies."

Intuitive Surgical, a global leader in the rapidly emerging field of robotic-assisted minimally invasive surgery, is a customer that uses NetApp deduplication.

"NetApp has decreased our day-to-day storage management needs for home directories and databases, significantly helping to reduce the TCO," said Steve Lucchesi, vice president, Information Systems at Intuitive Surgical.

"NetApp deduplication provides us with yet another method for conserving space on our storage systems while retaining high performance."


Check out eWEEK.com for the latest news, reviews and analysis on enterprise and small business storage hardware and software.

Data Deduplication for Backup, Recovery, and Archiving

Data deduplication (AKA single instance, common factoring or capacity optimized storage) technologies strive to reduce the amount of duplicate data being backed up and then stored. The technologies identify and eliminate common data in and across backup streams. By eliminating the common objects, the resulting storage requirement will be reduced. COPAN believes that data deduplication can be a valuable technology and can provide significant value to customers if the correct approach is taken.

Recommended links

Data deduplication - Wikipedia, the free encyclopedia

Open Source LessFS a Serious Competitor in the Data Deduplication ...

Vendors with content in the Data Deduplication Space

Extreme Binning- Scalable, Parallel Deduplication for Chunk-based ...

Symantec Adds Deduplication to Backup Software ...

Symantec to Deliver Deduplication Everywhere to Mid-Sized ...

DATA DEDUPLICATION- Improving Backup Storage and Bandwidth Consumption

Seven things to mull when choosing data deduplication tools ...

New Simpana 8 Deduplication Features Should Give Enterprises Pause ...

