Upgrading btrfs @home mount with dedicated drive

    [SOLVED] Upgrading btrfs @home mount with dedicated drive

    Hi,

    I currently have a single disk containing @ (root) and @home subvolumes. I'm using timeshift to take backups of the root partition and snapper (btrfs-assistant) to take backups of /home. Everything is working great except my disk is getting maxed out and so I want to migrate /home to a dedicated 1TB drive.

    Currently the @ (root) and @home are mounted as such:

    UUID=f6e56409-da8e-43b1-b238-d041419bbce4 / btrfs subvol=@,compress=zstd:1,ssd,noatime,space_cache=v2,discard=async 0 0
    UUID=f6e56409-da8e-43b1-b238-d041419bbce4 /home btrfs subvol=@home,compress=zstd:1,ssd,noatime,space_cache=v2,discard=async 0 0



    These are the available filesystems:

    $ sudo btrfs filesystem show
    Label: 'fedora_localhost-live' uuid: f6e56409-da8e-43b1-b238-d041419bbce4
    Total devices 1 FS bytes used 159.65GiB
    devid 1 size 199.30GiB used 192.07GiB path /dev/nvme1n1p3

    Label: 'Home' uuid: 54678ba4-0944-45a9-8027-497aee5cfba4
    Total devices 1 FS bytes used 637.52GiB
    devid 1 size 953.87GiB used 641.02GiB path /dev/nvme0n1p1


    These are the subvolumes on the root filesystem:

    btrfs subvolume list /

    ID 257 gen 120222 top level 5 path @home
    ID 494 gen 120207 top level 5 path @
    ID 511 gen 120152 top level 257 path @home/.snapshots
    ID 638 gen 119007 top level 5 path timeshift-btrfs/snapshots/2022-09-01_20-06-55/@
    ID 674 gen 119007 top level 5 path timeshift-btrfs/snapshots/2022-09-03_10-00-01/@
    ID 694 gen 119007 top level 5 path timeshift-btrfs/snapshots/2022-09-04_14-00-02/@
    ID 723 gen 119007 top level 5 path timeshift-btrfs/snapshots/2022-09-05_12-00-01/@
    ID 875 gen 119007 top level 5 path timeshift-btrfs/snapshots/2022-09-09_21-00-02/@
    ID 876 gen 119007 top level 5 path timeshift-btrfs/snapshots/2022-09-09_22-14-31/@
    ID 877 gen 119007 top level 5 path timeshift-btrfs/snapshots/2022-09-09_23-21-20/@
    ID 972 gen 119007 top level 5 path timeshift-btrfs/snapshots/2022-09-13_14-00-02/@
    ID 1012 gen 119007 top level 5 path timeshift-btrfs/snapshots/2022-09-14_18-00-01/@
    ID 1254 gen 119007 top level 5 path timeshift-btrfs/snapshots/2022-09-23_21-00-01/@
    ID 1257 gen 66675 top level 494 path var/lib/docker/btrfs/subvolumes/ab0d5fbba151ed84f96f00aa448b463675913c2a5224d42eccf9093f229b4037
    ID 1258 gen 66673 top level 494 path var/lib/docker/btrfs/subvolumes/f52451a1a0cee52d1de74348b840cbd2c25da13efe116b15b7d45de2d4d024a9-init
    ID 1259 gen 66673 top level 494 path var/lib/docker/btrfs/subvolumes/f52451a1a0cee52d1de74348b840cbd2c25da13efe116b15b7d45de2d4d024a9
    ID 1260 gen 66676 top level 494 path var/lib/docker/btrfs/subvolumes/6ba6a864c29a7d1a01ac3201587ec3d6471f717242c968fa71a3c3061e7557f4-init
    ID 1261 gen 66676 top level 494 path var/lib/docker/btrfs/subvolumes/6ba6a864c29a7d1a01ac3201587ec3d6471f717242c968fa71a3c3061e7557f4
    ID 1363 gen 119007 top level 5 path timeshift-btrfs/snapshots/2022-09-27_20-00-01/@
    ID 1537 gen 119007 top level 5 path timeshift-btrfs/snapshots/2022-10-05_08-55-04/@
    ID 1769 gen 119007 top level 5 path timeshift-btrfs/snapshots/2022-10-13_13-42-30/@
    ID 1822 gen 119007 top level 5 path timeshift-btrfs/snapshots/2022-10-16_13-00-01/@
    .....
    etc


    I formatted my new drive with gnome-disks as btrfs, mounted it to /mnt/new_home, and copied all the data from /home to it using tar -cf - /home | tar -C /mnt/new_home -xvf -. All files and permissions look to be okay.

    At this point I'm not sure how to proceed. I tried to update the fstab by commenting out the original /home entry and replacing it with the one below, but it goes straight to system maintenance mode until I put the original entry back in place. I'm guessing I may need to do some work in a live-boot environment, but I'm not sure.


    # New Home:
    UUID=54678ba4-0944-45a9-8027-497aee5cfba4 /home btrfs subvol=home,compress=zstd:1,ssd,noatime,space_cache=v2,discard=async 0 0


    Does anyone know what I need to do to get the new home drive mounted up right so I can back it up with Snapper, and also how I can mount the original home to /mnt/old_home until everything's working and I can delete it?

    I'm learning btrfs atm so any help is really appreciated!

    Cheers
    Sunny

    #2
    You did it wrong, or you'll have to change the way you mount home. Based on your description, you copied the FILES in /home instead of the /home subvolume, so you no longer have the ability to snapshot and back up your home using btrfs functions, or to use timeshift or snapper. You must use a subvolume to use the snapshot features of BTRFS.

    Here's what you should have done:
    1. Turn off timeshift/snapper.
    2. Create a BTRFS file system on /dev/nvme0n1p1 (which you did, I believe).
    3. Mount /dev/nvme1n1p3 (the original filesystem) somewhere
    4. Mount /dev/nvme0n1p1 somewhere
    5. Make a read-only snapshot of @home
    6. Use the BTRFS SEND|RECEIVE function to copy @home from /dev/nvme1n1p3 to /dev/nvme0n1p1
    7. Snapshot the received (read-only) copy on /dev/nvme0n1p1 to a read-write @home.
    8. Edit /etc/fstab so that /home mount points to /dev/nvme0n1p1/@home
    9. Log out and back in.
    10. Assuming all went correctly, mount /dev/nvme1n1p3 and delete the old @home subvolume.
    11. Clean up (delete) all the old @home timeshift and snapper snapshots
    12. Re-configure timeshift/snapper to use your new /home location.

    Command example using /part3 and /part1 as mount points:
    Code:
    sudo -i
    mkdir /part3 /part1
    mount /dev/nvme0n1p1 /part1
    mount /dev/nvme1n1p3 /part3
    btrfs su sn -r /part3/@home /part3/@home-ro
    btrfs send /part3/@home-ro | btrfs receive /part1/
    btrfs su sn /part1/@home-ro /part1/@home
    btrfs su de -c /part3/@home-ro /part1/@home-ro
    Then edit /etc/fstab and change this:
    Code:
    UUID=f6e56409-da8e-43b1-b238-d041419bbce4 /home  btrfs subvol=@home,compress=zstd:1,ssd,noatime,space_cache=v2,discard=async 0 0
    to this:
    Code:
    UUID=54678ba4-0944-45a9-8027-497aee5cfba4 /home  btrfs subvol=@home,compress=zstd:1,ssd,noatime,space_cache=v2,discard=async 0 0
    Then do step 9 above and continue. That should do it.
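    To address the second part of the question (keeping the old home reachable as /mnt/old_home until the new drive is proven), something along these lines should work. This is a sketch using the UUID from the fstab above; mounting read-only is a hedge against accidental changes:
    ```shell
    # Temporarily mount the OLD @home read-only for comparison (hypothetical example)
    mkdir -p /mnt/old_home
    mount -o subvol=@home,ro UUID=f6e56409-da8e-43b1-b238-d041419bbce4 /mnt/old_home
    # ...verify, then when you're done:
    umount /mnt/old_home
    ```
    Once you're satisfied the new /home is complete, unmount it and proceed with deleting the old subvolume.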

    I can't help you with timeshift or snapper because I don't use either. They have a tendency, as you've discovered, to fill your drive space without warning.



      #3
      Yeah, I think you probably nailed it; I missed steps 3-8! Thanks, I'll have another go and let you know how I get on.

      Super Star, thanks!



        #4
        As a snapper user, I'll add that often, after moving or renaming snapper-managed subvolumes, I've had to recreate the .snapshots subvolume in them for snapper to start working on them.

        When snapper hasn't been working, or has been hitting the space limit, nothing has told me about it. I've learned to check things out (snapper-gui is a quick way) after moving or renaming subvolumes, or after release upgrades.

        On my desktop I'm trying the Arch recommendation for snapper, which is to use a subvolume at the top level (say, @home_snapshots) and symlink /home/.snapshots to it. Snapper has not objected. For the symlink to work, the btrfs top level has to be mounted in /etc/fstab.
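        A rough sketch of that layout, assuming the new Home filesystem's UUID and /mnt/top and @home_snapshots as names (both are just conventions, not requirements):
        ```shell
        # Mount the btrfs top level (subvolid=5) - normally done via an fstab entry
        mkdir -p /mnt/top
        mount -o subvolid=5 UUID=54678ba4-0944-45a9-8027-497aee5cfba4 /mnt/top

        # Hold the snapshots in a top-level subvolume instead of nesting them in @home
        btrfs subvolume create /mnt/top/@home_snapshots

        # Point snapper's expected location at it.
        # If /home/.snapshots already exists as a subvolume, delete it with
        # "btrfs subvolume delete" first; rmdir only handles a plain directory.
        ln -s /mnt/top/@home_snapshots /home/.snapshots
        ```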
        Regards, John Little



          #5
          Originally posted by jlittle View Post
          As a snapper user, I'll add that often, after moving or renaming snapper-managed subvolumes, I've had to recreate the .snapshots subvolume in them for snapper to start working on them.

          When snapper hasn't been working, or has been hitting the space limit, nothing has told me about it. I've learned to check things out (snapper-gui is a quick way) after moving or renaming subvolumes, or after release upgrades.

          On my desktop I'm trying the Arch recommendation for snapper, which is to use a subvolume at the top level (say, @home_snapshots) and symlink /home/.snapshots to it. Snapper has not objected. For the symlink to work, the btrfs top level has to be mounted in /etc/fstab.
          Thanks, John. That's super useful. I'm using BTRFS Assistant to do my snapper backups; I'll make sure to set that up too.



            #6
            Just for kicks, I'll give myself a plug. I don't like adding yet another application when something I'm familiar with can do the job, so I wrote this service menu for Dolphin quite a while ago:

            https://store.kde.org/p/1214134

            It provides manual management of BTRFS subvolumes using Dolphin.



              #7
              Originally posted by jlittle View Post
              As a snapper user, I'll add that often, after moving or renaming snapper-managed subvolumes, I've had to recreate the .snapshots subvolume in them for snapper to start working on them.

              When snapper hasn't been working, or has been hitting the space limit, nothing has told me about it. I've learned to check things out (snapper-gui is a quick way) after moving or renaming subvolumes, or after release upgrades.

              On my desktop I'm trying the Arch recommendation for snapper, which is to use a subvolume at the top level (say, @home_snapshots) and symlink /home/.snapshots to it. Snapper has not objected. For the symlink to work, the btrfs top level has to be mounted in /etc/fstab.
              I’ve gotten to the snapper setup stage now. Just wondering, what would be the advantage of setting up the top-level @home_snapshots subvolume with the symlink? Does that stop snapper from filling up the filesystem?



                #8
                Originally posted by oshunluvr View Post
                Just for kicks, I'll give myself a plug. I don't like adding yet another application when something I'm familiar with can do the job, so I wrote this service menu for Dolphin quite a while ago:

                https://store.kde.org/p/1214134

                It provides manual management of BTRFS subvolumes using Dolphin.
                Nice! I’ll take a look

                Btw, your work earlier was excellent. Everything is working beautifully now! Thanks



                  #9
                  Great, glad it worked for you.

                  I'll go ahead and mark the thread solved for you. You can do that yourself next time by editing your initial post in the thread and changing the prefix to [SOLVED].

                  BTW, welcome to the forum.



                    #10
                    Originally posted by sunnyod View Post
                    Just wondering, what would be the advantage of setting up the top-level @home_snapshots subvolume with the symlink?
                    A typical use case is when one wants to revert to an older snapshot of the Linux root. Let's say that's called @, and there's a snapshot in /.snapshots/3294/snapshot. A quick way to do this is:
                    1. If it's not already mounted, mount the btrfs root (aka subvolid=5, subvol=/, FS_TREE) somewhere. Let's say that's /mnt/top.
                    2. Change directory to /mnt/top and rename @ to @old with mv.
                    3. Snapshot /.snapshots/3294/snapshot to @ (to get a non-read-only @).
                    4. Reboot.
                    This leaves behind the /.snapshots in @old, and snapper doesn't run on root after the reboot. I suppose one can move .snapshots from /mnt/top/@old/ to /, but there are limitations on doing that; I think one has to be booted from somewhere else so that nothing is open. So one has to rmdir /.snapshots and recreate it as a subvolume, and then either remember to manually clean up the snapshots in @old/.snapshots later, or snapshot them from @old/.snapshots to /.snapshots. If the revert has gone well and @old is to be discarded, all of the subvolumes nested in it have to be moved or deleted before @old can be deleted.

                    But with the Arch setup, the snapshots stay in @snapshots, and all the symlinks to it are fine; nothing to do. That's the theory; I haven't yet done this in the heat of necessity.
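                    The revert steps above, sketched as commands (3294 is the example snapshot number, /mnt/top the assumed top-level mount, and .snapshots assumed to be nested in @):
                    ```shell
                    # Assumes the btrfs top level is already mounted at /mnt/top
                    cd /mnt/top
                    mv @ @old                  # step 2: set the current root aside
                    # After the rename the old snapshots live under @old;
                    # snapshot the chosen one back to @ to get a writable new root (step 3)
                    btrfs subvolume snapshot @old/.snapshots/3294/snapshot @
                    reboot                     # step 4
                    ```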

                    Originally posted by sunnyod View Post
                    Does that stop snapper from filling up the filesystem?
                    No. Setting SPACE_LIMIT in the snapper config is what's intended to stop that.
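                    For reference, SPACE_LIMIT lives in the per-config file under /etc/snapper/configs/ and is the fraction of the filesystem that snapshots may occupy before the cleanup algorithms start deleting them. Assuming a config named "home", the fragment looks like:
                    ```shell
                    # /etc/snapper/configs/home (fragment)
                    # Cleanup algorithms delete snapshots once they use more than
                    # this fraction of the filesystem's space.
                    SPACE_LIMIT="0.5"
                    ```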
                    Regards, John Little
