Getting back lost filesystem space


    Some BTRFS users have reported that they get an "out of disk space" message even though they have plenty of space left. The solution is to convert unused allocation back into free space using BTRFS's balance command. The trick is knowing which usage values to use in the command.

    You can start by using
    sudo btrfs balance start -dusage=0 /
    sudo btrfs balance start -musage=0 /


    then use

    sudo btrfs balance start -dusage=20 /
    sudo btrfs balance start -musage=20 /


    If you can't do either command sequence then plug in a 4GB USB stick and add it to the filesystem mounted at / (devices are added to the filesystem as a whole, not to the @ subvolume):

    sudo btrfs device add -f /dev/sdX /
    and do a
    sudo btrfs balance start -dusage=1 /
    then remove the device

    sudo btrfs device remove /dev/sdX /


    Now comes the trick. Use
    sudo btrfs fi usage /
    The closer the "used" space is to the "allocated" space the closer you are to being out of usable space.

    Divide the "used" space by the "allocated" space and multiply by 100. Take that value, say 87, and redo the balance commands using it for both -dusage and -musage.

    sudo btrfs balance start -dusage=87 /
    sudo btrfs balance start -musage=87 /
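    If it helps, that division can be done right in the shell. A minimal sketch, using made-up GiB values as placeholders for the real "Used:" and "Device allocated:" numbers you read from btrfs fi usage:

```shell
# Compute the integer used/allocated percentage for the balance commands.
# The GiB values below are illustrative placeholders; substitute the "Used:"
# and "Device allocated:" numbers from `sudo btrfs fi usage /`.
used=195.76
allocated=230.38
pct=$(awk -v u="$used" -v a="$allocated" 'BEGIN { printf "%d", u / a * 100 }')
echo "$pct"
# then: sudo btrfs balance start -dusage="$pct" /
#       sudo btrfs balance start -musage="$pct" /
```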

    Systemd will take care of the fstrim later.

    I didn't think this up. I'm not that smart. I read about it here:

    https://ohthehugemanatee.org/blog/20...ency-response/
    Last edited by GreyGeek; Oct 31, 2020, 02:39 PM. Reason: fix typo
    "A nation that is afraid to let its people judge the truth and falsehood in an open market is a nation that is afraid of its people.”
    – John F. Kennedy, February 26, 1962.

    #2
    Thanks for posting this, GreyGeek !
    How current would you say this still is? The BTRFS FAQ says you should not have to do balancing yourself... and there were some other posts saying it was not necessary anymore... But recently I've run into disk space issues and my filesystem usage looks like this:
    Code:
    Overall: 
       Device size:                 232.38GiB
       Device allocated:            230.38GiB
       Device unallocated:            2.00GiB
       Device missing:              232.38GiB
       Used:                        195.76GiB
       Free (estimated):             34.48GiB      (min: 34.48GiB)
       Data ratio:                       1.00
       Metadata ratio:                   1.00
       Global reserve:              502.72MiB      (used: 0.00B)
    
    Data,single: Size:225.37GiB, Used:192.89GiB (85.59%)
    
    Metadata,single: Size:5.01GiB, Used:2.86GiB (57.18%)
    
    System,single: Size:4.00MiB, Used:48.00KiB (1.17%)
    Before I deleted some older snapshots Device allocated was equal to Device size and some programs crashed, presumably because I ran out of space.
    As you can see, I still have plenty of free space, though...

    Would you still recommend running balance? Do you have any idea how long it will take and how much it will impact performance? (i.e. should I do it overnight, or can I run it in the background while I am working?)

    Thanks!
    Last edited by Snowhog; Mar 21, 2022, 10:25 AM.



      #3
      I hope you have a good backup of your data. You may need it.

      You are definitely out of disk space. Because of CoW it may not be possible to free up much by deleting unless you do it a file at a time, if that is even possible.

      As things currently are, I doubt you would have enough room (2GiB unallocated) to run balance, or even to reliably run your system, because log entries may consume the rest as errors increase. IF you have a 64GB or a 128GB USB stick that you can format as BTRFS, you can use the "btrfs device add" command to add it to your filesystem, as shown in the #1 post, to give you enough room.

      With the extra room I'd first use "btrfs send ..." to copy at least one snapshot to backup storage. Then use "btrfs subvolume delete -C ..." to delete the snapshots. With only @ and @home left it's time to run the balance command.

      In your situation I would run the command starting at 5, repeating both the d & m in increments of 10, until you get to 95. This could take an hour or more on an SSD and all day on a spinner.
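      The increment schedule described above (5, 15, 25, ... up to 95, for both d and m) can be sketched as a shell loop. A hedged sketch: it only prints the commands so you can review them first; drop the leading echo to actually run them on a filesystem mounted at /.

```shell
# Print the incremental balance commands (usage filters 5, 15, ..., 95).
# Remove the leading "echo" to actually execute them (needs root).
for pct in $(seq 5 10 95); do
    echo sudo btrfs balance start -dusage="$pct" /
    echo sudo btrfs balance start -musage="$pct" /
done
```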

      If the balance command crashes then it is probable that your filesystem is hosed and your quickest and best bet would be to reformat it and reinstall your distro.

      IF the balance commands complete without errors then I would run the "btrfs check" command.
      https://www.kubuntuforums.net/forum/...ns-and-giggles

      Regarding the fstrim command, systemd has an fstrim.timer which runs it weekly and automatically. Fstrim only applies to SSDs, not spinners. (What shows up in the /etc/fstab lines that mount @ and @home to / and /home is the related "discard" mount option, not fstrim itself.)







        #4
        Hey GreyGeek , thanks a lot for your reply!

        It did actually work out pretty well in the end! But I also had a pretty good backup history on my external HD, so I was not too worried.

        I was able to free another GB by deleting old snapshots, then I started running
        Code:
        btrfs balance start -dusage=n /
        and
        Code:
        btrfs balance start -musage=n /
        (where n is the percentage), starting with very small values as you suggested ('1', actually). This immediately freed up a few GBs and ran pretty quickly (less than a minute - I have an SSD)!
        Up to about 50, these ran pretty quickly (a few minutes; and -musage even much faster); I ran -dusage up to 70 and -musage up to 99 and will probably run higher -dusage over night.

        Now I am at this point:
        Code:
        Overall:
           Device size:                 232.38GiB
           Device allocated:            182.03GiB
           Device unallocated:           50.35GiB
           Device missing:              232.38GiB
           Used:                        175.67GiB
           Free (estimated):             55.79GiB      (min: 55.79GiB)
           Data ratio:                       1.00
           Metadata ratio:                   1.00
           Global reserve:              376.00MiB      (used: 0.00B)
        
        Data,single: Size:179.00GiB, Used:173.56GiB (96.96%)
        
        Metadata,single: Size:3.00GiB, Used:2.11GiB (70.23%)
        
        System,single: Size:32.00MiB, Used:48.00KiB (0.15%)
        I am really surprised what a difference a little bit of re-balancing did!

        I actually feel a bit misled, tbh, since the BTRFS FAQ says you don't need to run regular balance operations:
        https://btrfs.wiki.kernel.org/index....e_regularly.3F
        Clearly you do need to run (re)balance!

        Do you run balance on a cron job regularly or just whenever you are close to running out of space?

        Thanks again!



          #5
          My system shows:
          Code:
          $ btrfs filesystem usage /
          ....
          Overall:
             Device size:                 441.04GiB
             Device allocated:            170.04GiB
             Device unallocated:          271.00GiB
             Device missing:              441.04GiB
             Used:                        137.04GiB
             Free (estimated):            302.37GiB      (min: 302.37GiB)
             Data ratio:                       1.00
             Metadata ratio:                   1.00
             Global reserve:              219.45MiB      (used: 0.00B)
          
          Data,single: Size:167.01GiB, Used:135.64GiB (81.22%)
          
          Metadata,single: Size:3.00GiB, Used:1.40GiB (46.64%)
          
          System,single: Size:32.00MiB, Used:48.00KiB (0.15%)
          As you can see, I am a long way from running out of space. The fstrim.timer systemd service is set to run once a week, so I've only run balance when I've been experimenting around. Once I added a bank of USB sticks to my system to see how it would run with them added. I ran balance to distribute my system among my SSD and the USBs. When I completed my experimenting I deleted the devices and then ran balance again.

          If you notice your system slowing down then checking the percentage of Data Used will tell you if you need to run balance and/or check.
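          For scripting that check, the Data "Used" percentage can be pulled out of the btrfs fi usage output with sed. A hedged sketch, parsing a captured sample line (on a live system, pipe the real output in instead):

```shell
# Extract the Data usage percentage from a `btrfs fi usage` "Data" line.
# A captured sample stands in here; on a real system use:
#   sudo btrfs fi usage / | grep '^Data'
sample='Data,single: Size:167.01GiB, Used:135.64GiB (81.22%)'
pct=$(printf '%s\n' "$sample" | sed -n 's/.*(\([0-9.]*\)%).*/\1/p')
echo "$pct"
```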

          IIRC, oshunluver says he has fstrim run about once a month. If you run it too often on an SSD it will burn through your "Lifetime Writes" faster than you'd want.
          My main drive is a Samsung 860 EVO 500GB. I've had it for 3 years. It has a "Power On Time" of 428 days and 15 hours.
          My NVMe is a Samsung SSD 980 1TB. I've had it for 3 months. It has a "Power On Time" of 11 hours.
          The main drive has Lifetime Writes of 4.76TB; 300TB is promised. The NVMe has Lifetime Writes of 1.67TB; 600TB is promised.

          HD Sentinel says they both have more than 1,000 power-on days of useful life left.
          Last edited by Snowhog; Mar 25, 2022, 01:15 PM.



            #6
            Good point about the SSD writes/lifetime!

            I have a Samsung Evo 250GB, but it is already 7.5 years old and well used... like the whole laptop. I should probably upgrade my hardware soon-ish...

            Weekly fstrim is default in 20.04, right? At least that's what's in my /usr/lib/systemd/system/fstrim.timer, and I never changed that.
            Does fstrim interact with btrfs balance, though?



              #7
              Originally posted by Chopstick
              Good point about the SSD writes/lifetime!

              I have a Samsung Evo 250GB, but it is already 7.5 years old and well used... like the whole laptop. I should probably upgrade my hardware soon-ish...

              Weekly fstrim is default in 20.04, right? At least that's what's in my /usr/lib/systemd/system/fstrim.timer, and I never changed that.
              Does fstrim interact with btrfs balance, though?
              Ya, fstrim is run once a week on 20.04 and on my Neon. The man page states
              Running fstrim frequently, or even using mount -o discard, might negatively affect the lifetime of poor-quality SSD devices. For most desktop and server systems a sufficient trimming frequency is once a week. Note that not all devices support a queued trim, so each trim command incurs a performance penalty on whatever else might be trying to use the disk at the time.
              My fstab contains
              UUID=9bc19383-57db-4a32-938d-8466a133d94d / btrfs subvol=/@,defaults,noatime,space_cache,autodefrag,discard,compress=lzo 0 0
              The discard mount option performs TRIM continuously, and can be removed from fstab on SSDs since the fstrim.timer already trims weekly. If you are using it and you remove it, be sure to run

              sudo update-initramfs -u -k all
              (taken from link below)

              to update your initramfs.

              Here is some interesting info on speeding up BTRFS on SSDs: https://wiki.debian.org/SSDOptimization
              Last edited by GreyGeek; Mar 25, 2022, 06:27 PM.

