
duperemove

Section: Misc. Reference Manual Pages (8)
Updated: September 2016

NAME

duperemove - Find duplicate extents and print them to stdout 

SYNOPSIS

duperemove [options] files... 

DESCRIPTION

duperemove is a simple tool for finding duplicated extents and submitting them for deduplication. When given a list of files it will hash their contents on a block by block basis and compare those hashes to each other, finding and categorizing extents that match each other. When given the -d option, duperemove will submit those extents for deduplication using the Linux kernel extent-same ioctl.

duperemove can store the hashes it computes in a hashfile. If given an existing hashfile, duperemove will only compute hashes for those files which have changed since the last run. Thus you can run duperemove repeatedly on your data as it changes, without having to re-checksum unchanged data. For more on hashfiles see the --hashfile option below as well as the Examples section.

duperemove can also take input from the fdupes program, see the --fdupes option below.

 

GENERAL

Duperemove has two major modes of operation, one of which is a subset of the other.

 

Readonly / Non-deduplicating Mode

When run without -d (the default) duperemove will print out one or more tables of matching extents it has determined would be ideal candidates for deduplication. As a result, readonly mode is useful for seeing what duperemove might do when run with -d. The output could also be used by some other software to submit the extents for deduplication at a later time.

It is important to note that this mode will not print out all instances of matching extents, just those it would consider for deduplication.

Generally, duperemove does not concern itself with the underlying representation of the extents it processes. Some of them could be compressed, undergoing I/O, or even have already been deduplicated. In dedupe mode, the kernel handles those details and therefore we try not to replicate that work.
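
For example, a read-only run over a directory (the path is illustrative) prints the candidate extent tables without submitting anything for dedupe:

duperemove -hr /foo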

 

Deduping Mode

This functions similarly to readonly mode with the exception that the duplicated extents found in our "read, hash, and compare" step will actually be submitted for deduplication. An estimate of the total data deduplicated will be printed after the operation is complete. This estimate is calculated by comparing the total amount of shared bytes in each file before and after the dedupe.

 

OPTIONS

files can refer to a list of regular files and directories or be a hyphen (-) to read them from standard input. If a directory is specified, all regular files within it will also be scanned. Duperemove can also be told to recursively scan directories with the '-r' switch.
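
For example (the paths and hashfile name are illustrative), a file list can be generated by another tool and piped in via the hyphen:

find /data -type f | duperemove -d --hashfile=data.hash -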

-r
Enable recursive dir traversal.

-d
De-dupe the results - only works on btrfs and xfs (experimental).

-A
Opens files readonly when deduping. Primarily for use by privileged users on readonly snapshots.

-h
Print numbers in human-readable format.

-q
Quiet mode. Duperemove will only print errors and a short summary of any dedupe.

--hashfile=hashfile
Use a file for storage of hashes instead of memory. This option drastically reduces the memory footprint of duperemove and is recommended when your data set is more than a few files large. Hashfiles are also reusable, allowing you to further reduce the amount of hashing done on subsequent dedupe runs.

If hashfile does not exist it will be created. If it exists, duperemove will check the file paths stored inside of it for changes. Files which have changed will be rescanned and their updated hashes will be written to the hashfile. Deleted files will be removed from the hashfile.

New files are only added to the hashfile if they are discoverable via the files argument. For that reason you probably want to provide the same files list and -r arguments on each run of duperemove. The file discovery algorithm is efficient and will only visit each file once, even if it is already in the hashfile.

Adding a new path to a hashfile is as simple as adding it to the files argument.

When deduping from a hashfile, duperemove will avoid deduping files which have not changed since the last dedupe.

-L
Print all files in the hashfile and exit. Requires the --hashfile option. Will print additional information about each file when run with -v.

-R [file]
Remove file from the db and exit. Can be specified multiple times. Duperemove will read the list from standard input if a hyphen (-) is provided. Requires the --hashfile option.

Note: If you are piping filenames from another duperemove instance it is advisable to do so into a temporary file first, as running duperemove simultaneously on the same hashfile may corrupt that hashfile.
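
For example, to manually drop a file from an existing hashfile (the path and hashfile name are illustrative):

duperemove --hashfile=foo.hash -R foo/old-backup.img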

--fdupes
Run in fdupes mode. With this option you can pipe the output of fdupes to duperemove to dedupe any duplicate files found. When receiving a file list in this manner, duperemove will skip the hashing phase.

-v
Be verbose.

--skip-zeroes
Read data blocks and skip any zeroed blocks. This can speed up duperemove, but may prevent deduplication of zeroed files.

-b size
Use the specified block size. Raising the block size will consume less memory but may miss some duplicate blocks. Conversely, lowering the blocksize consumes more memory and may find more duplicate blocks. The default blocksize of 128K was chosen with these parameters in mind.
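
For example, to scan with a larger block size (the value and paths are illustrative; the 256K notation is assumed to follow the same form as the 128K default):

duperemove -dr -b 256K --hashfile=foo.hash foo/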

--io-threads=N
Use N threads for I/O. This is used by the file hashing and dedupe stages. Default is automatically detected based on number of host cpus.

--cpu-threads=N
Use N threads for CPU bound tasks. This is used by the duplicate extent finding stage. Default is automatically detected based on number of host cpus.

Note: Hyperthreading can adversely affect performance of the extent finding stage. If duperemove detects an Intel CPU with hyperthreading it will use half the number of cores reported by the system for cpu bound tasks.
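
For example, to cap both stages at four threads instead of relying on auto-detection (the thread counts and paths are illustrative):

duperemove -dr --io-threads=4 --cpu-threads=4 --hashfile=foo.hash foo/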

--dedupe-options=options
Comma separated list of options which alter how we dedupe. Prepend 'no' to an option in order to turn it off. An example invocation follows this list of options.
[no]same
Defaults to off. Allow dedupe of extents within the same file.
[no]fiemap
Defaults to on. Duperemove uses the fiemap ioctl during the dedupe stage to optimize out already deduped extents as well as to provide an estimate of the space saved after dedupe operations are complete.

Unfortunately, some versions of Btrfs exhibit extremely poor performance in fiemap as the number of references on a file extent goes up. If you are experiencing the dedupe phase slowing down or 'locking up' this option may give you a significant amount of performance back.

Note: This does not turn off all usage of fiemap. To disable fiemap during the file scan stage, you will also want to use the --lookup-extents=no option.

[no]block
Defaults to off. Dedupe by block - don't optimize our data into extents before dedupe. Generally this is undesirable as it will greatly increase the total number of dedupe requests. There is also a larger potential for file fragmentation.
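
For example, to work around slow fiemap behavior during the dedupe stage (the paths and hashfile name are illustrative):

duperemove -dr --dedupe-options=nofiemap --hashfile=foo.hash foo/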

--help
Prints help text.

--lookup-extents=[yes|no]
Defaults to no. Allows duperemove to skip checksumming some blocks by checking their extent state.

-x
Don't cross filesystem boundaries. This is the default behavior since duperemove v0.11; the option is kept for backwards compatibility.

--read-hashes=hashfile
This option is primarily for testing. See the --hashfile option if you want to use hashfiles.

Read hashes from a hashfile. A file list is not required with this option. Dedupe can be done if duperemove is run from the same base directory as is stored in the hash file (basically duperemove has to be able to find the files).

--write-hashes=hashfile
This option is primarily for testing. See the --hashfile option if you want to use hashfiles.

Write hashes to a hashfile. These can be read in at a later date and deduped from.

--debug
Print debug messages; forces -v if selected.

--hash-threads=N
Deprecated, see --io-threads above.

--hash=alg
You can choose between murmur3 and xxhash. The default is murmur3 as it is very fast and can generate 128 bit digests for a very small chance of collision. Xxhash may be faster but generates only 64 bit digests. Both hashes are fast enough that the default should work well for the overwhelming majority of users.
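
For example, to select xxhash instead of the default murmur3 (the paths and hashfile name are illustrative):

duperemove -dr --hash=xxhash --hashfile=foo.hash foo/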

 

EXAMPLES

 

Simple Usage

Dedupe the files in directory /foo, recurse into all subdirectories. You only want to use this for small data sets.
duperemove -dr /foo

Use duperemove with fdupes to dedupe identical files below directory foo.

fdupes -r /foo | duperemove --fdupes

 

Using Hashfiles

Duperemove can optionally store the hashes it calculates in a hashfile. Hashfiles have two primary advantages - memory usage and re-usability. When using a hashfile, duperemove will stream computed hashes to it, instead of main memory.

If Duperemove is run with an existing hashfile, it will only scan those files which have changed since the last time the hashfile was updated. The files argument controls which directories duperemove will scan for newly added files. In the simplest usage, you rerun duperemove with the same parameters and it will only scan changed or newly added files - see the first example below.

Dedupe the files in directory foo, storing hashes in foo.hash. We can run this command multiple times and duperemove will only checksum and dedupe changed or newly added files.

duperemove -dr --hashfile=foo.hash foo/

Don't scan for new files, only update changed or deleted files, then dedupe.

duperemove -dr --hashfile=foo.hash

Add directory bar to our hashfile and discover any files that were recently added to foo.

duperemove -dr --hashfile=foo.hash foo/ bar/

List the files tracked by foo.hash.

duperemove -L --hashfile=foo.hash

 

FAQ

 

Is there an upper limit to the amount of data duperemove can process?

Duperemove v0.11 is fast at reading and cataloging data. Dedupe runs will be memory limited unless the '--hashfile' option is used. '--hashfile' allows duperemove to temporarily store duplicated hashes to disk, thus removing the large memory overhead and allowing for a far larger amount of data to be scanned and deduped. Realistically though you will be limited by the speed of your disks and cpu. In those situations where resources are limited you may have success by breaking up the input data set into smaller pieces.

When using a hashfile, duperemove will only store duplicate hashes in memory. During normal operation, the hash tree will make up the largest portion of duperemove's memory usage. As of Duperemove v0.11 hash entries are 88 bytes in size. If you know the number of duplicate blocks in your data set you can get a rough approximation of memory usage by multiplying that number by the hash entry size.
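
As an illustrative calculation, a data set with 1,000,000 duplicate blocks would need roughly 1,000,000 * 88 = 88,000,000 bytes, or about 84MB, for the hash tree.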

Actual performance numbers are dependent on hardware - up-to-date testing information is kept on the duperemove wiki (see below for the link).

 

How large of a hashfile will duperemove create?

Hashfiles are essentially sqlite3 database files with several tables, the largest of which are the files and hashes tables. Each hashes table entry is under 90 bytes though that may grow as features are added. The size of a files table entry depends on the file path but a good estimate is around 270 bytes per file.

If you know the total number of blocks and files in your data set then you can calculate the hashfile size as:

Hashfile Size = Num Hashes X 90 + Num Files X 270

Using a real world example of 1TB (8388608 128K blocks) of data over 1000 files:

8388608 * 90 + 270 * 1000 = 755244720 or about 720MB for 1TB spread over 1000 files.

 

Is it safe to interrupt the program (Ctrl-C)?

Yes, Duperemove uses a transactional database engine and organizes db changes to take advantage of those features. The result is that you should be able to ctrl-c the program at any point and re-run without experiencing corruption of your hashfile.

 

How can I find out my space savings after a dedupe?

Duperemove will print out an estimate of the saved space after a dedupe operation for you.

You can get a more accurate picture by running 'btrfs fi df' before and after each duperemove run.

Be careful about using the 'df' tool on btrfs - it is common for space reporting to be 'behind' while delayed updates get processed, so an immediate df after deduping might not show any savings.

 

Why is the total deduped data report an estimate?

At the moment duperemove can detect that some underlying extents are shared with other files, but it can not resolve which files those extents are shared with.

Imagine duperemove is examining a series of files and it notes a shared data region in one of them. That data could be shared with a file outside of the series. Since duperemove can't resolve that information, it will account the shared data against our dedupe operation while in reality the kernel might deduplicate it further for us.

 

Why are my files showing dedupe but my disk space is not shrinking?

This is a little complicated, but it comes down to a feature in Btrfs called _bookending_, which is explained in detail at: http://en.wikipedia.org/wiki/Btrfs#Extents.

Essentially though, the underlying representation of an extent in Btrfs can not be split (with small exception). So sometimes we can end up in a situation where a file extent gets partially deduped (and the extents marked as shared) but the underlying extent item is not freed or truncated.

 

Is duperemove safe for my data?

Yes. To be specific, duperemove does not deduplicate the data itself. It simply finds candidates for dedupe and submits them to the Linux kernel extent-same ioctl. In order to ensure data integrity, the kernel locks out other access to the file and does a byte-by-byte compare before proceeding with the dedupe.

 

What is the cost of deduplication?

Deduplication will lead to increased fragmentation. The blocksize chosen can have an effect on this. Larger blocksizes will fragment less but may not save you as much space. Conversely, smaller blocksizes may save more space at the cost of increased fragmentation.

 

NOTES

Deduplication is currently only supported by the btrfs and xfs filesystems.

The Duperemove project page can be found at https://github.com/markfasheh/duperemove

There is also a wiki at https://github.com/markfasheh/duperemove/wiki

 

SEE ALSO

hashstats(8), filesystems(5), btrfs(8), xfs(8), fdupes(1)


 
