    Btrfs defragment command

    For me, defragment wouldn't run on the <ROOT_FS>, i.e., "/mnt".

    Defragment should never be run against snapshots. Like scrub, defrag does NOT repair filesystem problems; it improves performance by reducing the number of extents a file occupies. Errors are fixed with the Btrfs check command, but that is run against the unmounted filesystem from a LiveUSB stick.

    sudo -i

    btrfs filesystem defragment -v -r -f /

    -v = verbose; list files as they are processed.
    -r = recursive; defragment the files in each directory recursively, one after the other.
    -f = flush after each file. "This will limit the amount of dirty data to current file, otherwise the amount accumulates from several files and will increase system load. This can also lead to ENOSPC if there’s too much dirty data to write and it’s not possible to make the reservations for the new data (ie. how the COW design works)."
    "The data affected by the defragmentation process will be newly written and will consume new space, the links to the original extents will not be kept. "

    On / defragment reported 82 errors, i.e., extents it could not clean up for some reason. Run the command again and the number may change.

    On /home defragment reported 0 errors.
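
    If you want to repeat it on /home (assuming @home is mounted there), the same command is simply pointed at that mount point:

    Code:
    btrfs filesystem defragment -v -r -f /home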

    Checking my device stats I find that:

    Code:
    :~$ sudo btrfs dev stats /
    [/dev/sda1].write_io_errs   0
    [/dev/sda1].read_io_errs    0
    [/dev/sda1].flush_io_errs   0
    [/dev/sda1].corruption_errs 0
    [/dev/sda1].generation_errs 0
    So my Btrfs system is not giving me any problems.
    "A nation that is afraid to let its people judge the truth and falsehood in an open market is a nation that is afraid of its people.”
    – John F. Kennedy, February 26, 1962.

    #2
    Great information -- thanks GG!

    Here's a non-hypothetical question -- I think I know the answer but I would like to hear the thoughts of others. After running my dual-drive BTRFS filesystem for 5 years, 24/7/365, I recently decided to build a new computer with all-new components including a new pair of WD1000 drives. So when the new system was running and stable, I backed up (i.e. copied) the ~700 GB of data off via USB cable to an external drive formatted ext4. When that was finished (i.e. the next day) I copied the data onto the new BTRFS filesystem on the new hardware.

    I would assume that, regardless of the degree of fragmentation that may have existed on the old BTRFS filesystem, the process of writing the data onto the empty ext4 filesystem would have effectively defragmented it, such that it is now totally defragmented on the new BTRFS filesystem. Does that sound logical?
    Last edited by dibl; Jul 16, 2018, 05:47 AM.



      #3

      Yes!

      As I understand it, the cp command, or drag & drop using Dolphin, or btrfs send with the “-f” switch to write the subvolumes out as stream files on an ext4 filesystem ... all of those methods leave the old Btrfs extents behind. An ext4 filesystem does not use CoW, so none of the Btrfs extent fragmentation carries over, and copying the data back onto Btrfs writes it out fresh.
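
      Purely as an illustration of that send round trip through a stream file on an ext4 drive, a sketch might look like the following; the mount points, snapshot name and date are made up, and the snapshot has to be read-only for send to accept it:

      Code:
      # write the read-only snapshot out as a stream file on the ext4-formatted backup drive
      btrfs send -f /media/ext4backup/@home_20180716.snap /mnt/snapshots/@home_20180716
      # later, replay that stream file onto a Btrfs filesystem
      btrfs receive -f /media/ext4backup/@home_20180716.snap /mnt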

      In those cases where, for whatever reason, Btrfs becomes corrupted, there are three commands I plan to do posts on that might rescue a corrupted system: restore, rescue and check. Of these, the method of last resort is check (btrfs check), and by the time one thinks they need that, they should already have done the copying you did, followed by restore and then rescue.
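
      As a rough sketch of that order of escalation (the device name is only an example; everything runs from a LiveUSB with the filesystem unmounted):

      Code:
      # copy whatever can still be read off the damaged filesystem onto another drive
      btrfs restore /dev/sda1 /media/rescue_target
      # try the rescue subcommands, e.g. recovering a good superblock copy
      btrfs rescue super-recover /dev/sda1
      # read-only check; --repair is the true last resort
      btrfs check /dev/sda1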

      I’ve never had a single problem with Btrfs (that my own stupidity or carelessness didn’t cause) but one never knows when hardware failure will strike.

      Doing a scrub (a CRC checksum verification of all files) once a week or so is good practice, followed by an @ and @home snapshot. Some folks create other subvolumes, say @data on a separate HD that is mounted to /home/data in /etc/fstab, and snapshot it in addition to @ and @home. IMO, it is important to send @dataYYYYMMDD to an external Btrfs HD just the way @ and @home snapshots should be. Taking a snapshot of @home will NOT include /home/data, for the same reason a snapshot of @ does not include a snapshot of @home even though /home is a subdirectory of /.
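
      A weekly routine along those lines might look roughly like the sketch below; the /mnt top-level mount, the snapshots directory, the date suffix and the /backup mount are just illustrations, not a prescription:

      Code:
      btrfs scrub start /mnt
      btrfs scrub status /mnt
      # read-only snapshots so they can be sent with btrfs send
      btrfs subvolume snapshot -r /mnt/@     /mnt/snapshots/@_20180716
      btrfs subvolume snapshot -r /mnt/@home /mnt/snapshots/@home_20180716
      # archive them to an external Btrfs HD mounted at /backup
      btrfs send /mnt/snapshots/@_20180716     | btrfs receive /backup
      btrfs send /mnt/snapshots/@home_20180716 | btrfs receive /backup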

      When planning a defragment operation, first delete all your snapshots. Defragmenting rewrites blocks, and because of CoW the snapshots keep referencing the old extents, so they become fully populated. One should never keep more than a dozen snapshots; more will slow down Btrfs. My sda1 is 698GB, of which 110GB is data. If I had 10 snapshot pairs and defragged, I could easily end up using 90% or more of my pool space, severely slowing down Btrfs. I could, however, add a couple of 64 or 128GB sticks to the pool to give me some breathing room while I straightened things out.
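
      A sketch of that clean-up, again with made-up snapshot names and the pool's top level mounted at /mnt:

      Code:
      # see which snapshots/subvolumes exist
      btrfs subvolume list /mnt
      # delete the old snapshot pairs
      btrfs subvolume delete /mnt/snapshots/@_20180709
      btrfs subvolume delete /mnt/snapshots/@home_20180709
      # then run the defragment
      btrfs filesystem defragment -v -r -f /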

      Because of kernel regressions I understand that RAID5/6 have a write-hole problem again and shouldn’t be used. And, if one plans to use a dynamic virtual hard drive, making its directory nocow before creating the VHD is still good practice.
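
      Setting nocow on a directory only affects files created afterwards, so the sketch below sets the attribute on the still-empty directory before the VHD is created; the path is just the usual libvirt images location:

      Code:
      mkdir -p /var/lib/libvirt/images
      chattr +C /var/lib/libvirt/images    # new files inherit the NOCOW attribute
      lsattr -d /var/lib/libvirt/images    # the 'C' flag should now be listed
      # create the qcow2 VHD inside that directory afterwards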
      Last edited by GreyGeek; Jul 16, 2018, 06:49 AM.
      "A nation that is afraid to let its people judge the truth and falsehood in an open market is a nation that is afraid of its people.”
      – John F. Kennedy, February 26, 1962.



        #4
        Thanks GG.

        I occasionally run BTRFS fsck on my system, but it rarely finds anything amiss, and on the couple of occasions when there was a minor issue, it was easily fixed. As you can see from my recent project, I'm not interested in having my last 25 years of work sitting on a machine until it dies of old age. My old hardware is still running flawlessly, but I have the luxury of preemptively replacing it before it starts spitting errors.

        Originally posted by GreyGeek
        And, if one plans to use a dynamic virtual hard drive making its directory nocow ...
        Yep, there's plenty of space on an ext4 partition on my SSD for my Windows 10 VM, and among the data on the BTRFS filesystem is a weekly backup of the entire VM.



          #5
          IF I had a need or a desire to run WinX, or I were a developer using a database like PostgreSQL as a back end to work against, I would dual boot with Windows, and I'd put PostgreSQL on a server running EXT4 and connect to it remotely, like I did at work when I installed PostgreSQL on a SuSE 6.3 server running in my office.

          Or, even better, I'd run WinX on one laptop and Neon on another. People have two or three monitors on their desktop, so nothing is stopping them from putting two laptops on there as well. If I wanted to run more than one distro I'd use Oshunluver's method of multi-booting.

          A couple of days ago I experimented with KVM/QEMU. I found that the Kubuntu 18.04 ISO ran reasonably well while I was running it in memory, before I installed it to the qcow2 VHD. After I installed it, the performance was terrible, even with nocow set on the /var/lib/libvirt/images directory.

          Moral: Do not use Btrfs IF you have to run other OSes in VMs or need to run a database system.
          "A nation that is afraid to let its people judge the truth and falsehood in an open market is a nation that is afraid of its people.”
          – John F. Kennedy, February 26, 1962.



            #6
            In case it wasn't already noted, there's also an "autodefrag" Btrfs mount option now. I've been using it without issue.
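
            For anyone wanting to try it, it just goes into the Btrfs mount options in /etc/fstab; the UUID and subvolume below are placeholders:

            Code:
            # /etc/fstab
            UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /  btrfs  defaults,subvol=@,autodefrag  0  0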




              #7
              The defrag option makes CoW copies of the old extents, writing them as new extents gathered into one contiguous region of blocks, which restores speed. However, any snapshots still reference the blocks of the old extents. This forces the snapshots to become fully populated, making them equal in size to their sources, which eats up pool space. If the usage plus the total size of the populated snapshots exceeds 90% of the pool, performance will suffer.
              To recover that pool space, one can delete all the snapshots before doing a defrag, follow the defrag with a scrub and a balance, then create a new snapshot pair and archive it as a backup.
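
              A sketch of that follow-up, again assuming the pool's top level is mounted at /mnt:

              Code:
              btrfs scrub start /mnt
              btrfs scrub status /mnt
              # repack partially-used chunks to reclaim space
              btrfs balance start -dusage=50 -musage=50 /mnt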

              I usually keep 3 or 4 pairs of @ and @home snapshots. I create the new pair and send them to my archive drives, then I delete the oldest pair on both my system and my archives.

              While noatime is a good mount parameter for SSDs, I’ve come to the opinion that it would be good for spinning drives as well.
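
              If you want to try it on an already-mounted spinning drive, it can be applied on the fly before being made permanent in /etc/fstab; the mount point is just an example:

              Code:
              mount -o remount,noatime /home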
              "A nation that is afraid to let its people judge the truth and falsehood in an open market is a nation that is afraid of its people.”
              – John F. Kennedy, February 26, 1962.
