LARGE volume difference using ZSTD instead of LZO compress, also space-cache v2

This topic is closed.

    A very unscientific test:

    Installed a new 16TB drive into my server, created a whole-drive btrfs file system on it, and mounted it with ZSTD compression and the space_cache=v2 option for the first time.
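    For anyone wanting to try the same setup, a minimal sketch of the commands involved (device name, label, and mount point are placeholders, not from the post):

    ```shell
    # Hypothetical device and mount point; adjust for your system.
    # Create a whole-drive btrfs file system on the new disk:
    mkfs.btrfs -L bigdisk /dev/sdb

    # Mount with ZSTD compression and the v2 space cache (free space tree):
    mount -o compress=zstd,space_cache=v2 /dev/sdb /mnt/bigdisk

    # Or persist it in /etc/fstab (UUID is a placeholder):
    # UUID=xxxx-xxxx  /mnt/bigdisk  btrfs  compress=zstd,space_cache=v2  0  0
    ```

    On kernel 5.1 and newer you can also pick a compression level, e.g. compress=zstd:3.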

    ZSTD is supposed to give higher compression ratios at comparable or better speeds, but last I checked you can't boot from a ZSTD-compressed drive (I found that out the hard way), so I haven't been using it.

    https://btrfs.wiki.kernel.org/index.php/Compression

    The new version of space_cache is supposed to fix this (quoting the wiki):

    "On very large filesystems (many terabytes) and certain workloads, the performance of the v1 space cache may degrade drastically. The v2 implementation, which adds a new B-tree called the free space tree, addresses this issue."
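    If the file system already exists, the free space tree can be enabled with a one-time mount; the flag is sticky, so later mounts don't need to repeat it. Device and mount point here are placeholders:

    ```shell
    # One-time conversion of an existing btrfs file system to the
    # v2 space cache (free space tree); hypothetical device/mount point.
    mount -o space_cache=v2 /dev/sdb /mnt/bigdisk

    # The kernel log should confirm the free space tree is in use:
    dmesg | grep -i "free space tree"
    ```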
    Then I copied 22 subvolumes of data totaling 6.9TB from a 10TB drive to the new drive. The 10TB drive is also a whole-drive btrfs file system, but it uses LZO compression and space_cache v1.
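    The post doesn't say how the copy was done; one way to move subvolumes between btrfs file systems while keeping their boundaries is btrfs send/receive (paths below are hypothetical). Received data goes through the normal write path on the destination, so it gets recompressed with the destination's compression setting:

    ```shell
    # send requires a read-only snapshot of the subvolume:
    btrfs subvolume snapshot -r /mnt/olddisk/data /mnt/olddisk/data.ro

    # Stream it to the new file system; it is written (and compressed)
    # according to the destination's mount options:
    btrfs send /mnt/olddisk/data.ro | btrfs receive /mnt/bigdisk/
    ```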

    The final result: the new drive reports 6.8TB used vs. 6.9TB on the old drive, a savings of about 100GB.
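    For a more precise comparison than the used-space totals, the compsize tool (packaged as btrfs-compsize on some distros) reports actual on-disk compression ratios per algorithm; the mount points below are placeholders:

    ```shell
    # Per-algorithm compression ratio and on-disk vs. uncompressed size:
    compsize /mnt/bigdisk

    # btrfs's own view of allocated vs. used space:
    btrfs filesystem df /mnt/bigdisk
    ```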

    Like I said, not a scientific result, but interesting nonetheless. I could change the compression on the 10TB drive and re-compress the entire thing to see whether the results are consistent, but that would take a lot of time.
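    For reference, re-compressing an existing file system in place is done with a recursive defragment that specifies the new algorithm (mount point is hypothetical). Worth knowing before trying it: defragment rewrites extents, which breaks reflink/snapshot sharing and can temporarily increase space usage:

    ```shell
    # Rewrite all files on the old drive with ZSTD compression:
    btrfs filesystem defragment -r -czstd /mnt/olddisk
    ```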


    #2
    Interesting!

    I wonder how fast a balance or a scrub would be on those subvolumes?
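    Timing those is easy enough to try; a sketch, with a hypothetical mount point:

    ```shell
    # -B runs the scrub in the foreground and prints a summary,
    # including elapsed time, when it finishes:
    btrfs scrub start -B /mnt/bigdisk

    # A filtered balance touches only data block groups that are at most
    # 50% full, which is much faster than a full balance:
    btrfs balance start -dusage=50 /mnt/bigdisk
    ```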
    "A nation that is afraid to let its people judge the truth and falsehood in an open market is a nation that is afraid of its people."
    – John F. Kennedy, February 26, 1962.
