How to install Kubuntu 21.10 with BTRFS on root?


    #31
    Short answer: there's no advantage to having snapshots inside snapshots - literally nested subvolumes, which is what you'd create by sending backups (which are subvolumes) into a subvolume. It's "cleaner" IMO to send backups to the root file system and use folders if you want organization.

    You can nest subvolumes if it makes sense to you or has some purpose. There is one disadvantage to doing that - you have to delete nested subvolumes before you can delete the top level subvolume.
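
    For example (a minimal sketch with made-up names, not part of the setup below), deletion has to go innermost-first:
    Code:
    # the nested subvolume has to be deleted before its parent can be
    sudo btrfs subvolume delete /mnt/top/@parent/@nested
    sudo btrfs subvolume delete /mnt/top/@parent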

    How you set it up depends on what you're trying to accomplish. Are you making backups or are you storing snapshots? Are you working with a single drive or multiple? As always with Linux you have choices.

    Here's an example of a basic BTRFS installation:

    Going back to your initial post, I'll assume this:
    • /dev/nvme0n1 is your boot drive with a couple partitions (EFI and SWAP) and your BTRFS root filesystem on partition 3 so /dev/nvme0n1p3 will be mounted under rootfs.
    • /dev/sda and /dev/sdb (no partitions) are joined as a BTRFS RAID1 file system which we will mount under media.
    We need 4 mounts: @, @home, rootfs, and media.

    /etc/fstab looks like:
    Code:
    <UUID1>  /           btrfs  <options>,subvol=@
    <UUID1>  /home       btrfs  <options>,subvol=@home
    <UUID1>  /mnt/rootfs btrfs  <options>
    <UUID2>  /mnt/media  btrfs  <options>
    Note the first three use the same UUID because they are all on the same file system. I know it seems odd to have what appears to be the same file system mounted three times, but that's how subvolumes work.
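
    If you want to see where those UUIDs come from, or mount the top level of the root file system by hand, something like this works (a minimal sketch using the device names assumed above):
    Code:
    # list the filesystem UUIDs to copy into fstab
    sudo blkid /dev/nvme0n1p3 /dev/sda
    # mount the top level of the root file system explicitly (subvolid=5 is always the top level)
    sudo mount -o subvolid=5 /dev/nvme0n1p3 /mnt/rootfs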

    If you enter "ls /mnt/rootfs" in a terminal your output would be:
    Code:
    @  @home
    showing you the two subvolumes there.

    Now create a folder under /mnt/media named backups to store your backups in.
    Code:
    sudo mkdir /mnt/media/backups
    BTRFS snapshots and backups are really the same thing. Basically a backup is a snapshot moved to a different file system.

    With the above setup as I have described, you could take snapshots of @ and @home in /mnt/rootfs. When you wanted to make a backup, you would "send|receive" the snapshot to /mnt/media/backups.

    These two commands take a read-only snapshot of @ and make a backup:
    Code:
    sudo btrfs subvolume snapshot -r /mnt/rootfs/@ /mnt/rootfs/@_snap1
    sudo btrfs send /mnt/rootfs/@_snap1 | sudo btrfs receive /mnt/media/backups
    Note the "-r" (read-only) switch in the snapshot command. Snapshots must be read-only to "send" them. Also note the absence of a target file name for the backup. The received snapshot will always have the same name as the sent snapshot.

    A larger question is how are you going to use your media file system? Again, choices can be made here. You can use it like any regular file system and just make folders like Music, Pictures, etc. which might make sense for a network share if that's how you're using it. My media server is more complex and has several other purposes so I have my media divided into subvolumes which I backup individually AND have them mounted so I can share them via several protocols.



      #32
      Originally posted by oshunluvr View Post
      It's actually pretty simple: mkfs.btrfs -L Media -m raid1 -d raid1 /dev/sda /dev/sdb
      I have had a home server for over a decade. Initially I replaced drives to increase capacity as my needs grew. Now that drives are so large, I replace them as they start to fail or show signs they are likely to. The last capacity upgrade occurred this way (note I have a 4-drive hot-swap capable server):
      Initially, 2x6TB drives and 2x2TB drives (8TB storage and backup) configured as JBOD (not RAID). I needed more capacity and the 2x2TB drives were quite old, so I replaced them with a single 10TB drive. The new configuration was to be 10TB (new drive) storage and 12TB (2x6TB) backup. To accomplish the changeover:
      • I unmounted the backup file system and physically removed the 2TB backup drive
      • This left a 6TB drive unused and the storage filesystem (one each 6TB +2TB drives) still "live."
      • I then inserted the 10TB drive, did "btrfs device add" and added it to "storage" resulting in 18TB storage.
      • Finally "btrfs device delete" removed the 6TB drive and 2TB drive from storage leaving 10TB on the new drive alone.
      • I then physically pulled the last 2TB drive.
      • The final step was to create a new backup filesystem using the 2X6TB drives and mount to resume the backup procedure.
      The important things to note here are filesystem access and time. NOT ONCE during the entire operation above did I have to take the server off-line or power down. The whole operation occurred while still accessing the files, using the media server, and other services. Obviously, if you don't have hot-swap capability, you'd have to power down for physical drive changes.
      Moving TBs of data around does take time, but since BTRFS does it in the background I simply issued the needed command and came back later to do the next step. All the above took days to complete, partly because there was no rush - I could still use the server. I checked back a couple of times a day to issue the next command when it was ready for it.
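
      For reference, a rough sketch of the commands that kind of swap involves, assuming the storage file system is mounted at /mnt/storage (device names here are purely illustrative):
      Code:
      # add the new drive to the mounted filesystem
      sudo btrfs device add /dev/sdd /mnt/storage
      # remove the old drives; btrfs migrates their data off in the background
      sudo btrfs device delete /dev/sdb /dev/sdc /mnt/storage
      # check which devices the filesystem currently spans
      sudo btrfs filesystem show /mnt/storage
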
      I've been working through some test cases and examining the use case you listed above. I'm trying to figure out the recovery process.

      First, this is where I'm coming from. If my boot SSD fails, I have to install a new SSD, then boot and restore from my USB key Clonezilla image, which could be weeks or months old. This takes about half an hour. Not much changes on the boot drive except a mariadb database, but that is backed up daily on the Data drives. If one of my Data RAID1 mirrors fails, I pop a new identical drive in, add it to the mirror, and the re-silvering starts. Obviously you have no mirror to protect you during that process.

      How do you recover from a drive failure when you discover the failure by the system becoming unusable or not booting?




        #33
        Originally posted by jfabernathy View Post
        How do you recover from a drive failure when you discover the failure by the system becoming unusable or not booting?
        I have a second bootable drive in my desktop system or use a LiveUSB bootable drive on my server. The btrfs subvolume backups allow for an eventual full OS and data recovery, but it's not protection from a drive becoming unbootable due to hardware failure.




          #34
          So if I start with the assumption that a boot drive failure will require booting from a Kubuntu ISO, I can do a minimal install with the same manual partitioning I did originally (with the primary partition being btrfs), then use the backup btrfs images that were created with the btrfs send/receive command to restore my brand-new @ and @home subvolumes.
          Is that done by something like rsync -avz, or is there a btrfs command that can take the full images and send them to the @ and @home subvolumes?



            #35
            Originally posted by jfabernathy View Post
            So if I start with the assumption that a boot drive failure will require booting from a Kubuntu ISO, I can do a minimal install with the same manual partitioning I did originally (with the primary partition being btrfs), then use the backup btrfs images that were created with the btrfs send/receive command to restore my brand-new @ and @home subvolumes.
            Is that done by something like rsync -avz, or is there a btrfs command that can take the full images and send them to the @ and @home subvolumes?
            You got it. No rsync or other external programs needed. Install new drive, install Kubuntu from ISO, restore the subvolume backups, reboot.

            The end result of a btrfs send|receive operation IS a full copy of the subvolume. So you would just have to reverse the direction of the operation - from backup to your root FS.

            You would literally re-install from the ISO, then replace the newly installed @ and @home subvolumes with your subvolumes from the backup drive and reboot.

            One other point to touch on: subvolumes MUST be read-only status to send|receive. Obviously, you wouldn't want to boot to a R/O operating system, so prior to rebooting you need to make the @ and @home subvolumes read-write after the send|receive. You can do this two ways - either take a R/W snapshot of the R/O subvolumes and rename appropriately, or change the subvolume attribute with this command:

            Code:
            btrfs property set -ts /path/to/snapshot ro false
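
            As a rough sketch of that restore - assuming the backup drive is mounted at /backup, the top level of the new root file system at /mnt, and a backup named @_backup (names are illustrative):
            Code:
            # copy the backup subvolume onto the root filesystem (it arrives read-only)
            sudo btrfs send /backup/@_backup | sudo btrfs receive /mnt
            # set the freshly installed @ aside and put the restored copy in its place
            sudo mv /mnt/@ /mnt/@fresh
            sudo btrfs property set -ts /mnt/@_backup ro false
            sudo mv /mnt/@_backup /mnt/@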



              #36
              An install from the ISO is fairly quick, but I prefer to understand what's needed and have the necessary backups. On a UEFI system, all that's really needed is the ESP, so I keep a backup of its contents (any sort of copy will do) and make a note of the EFI boot variable entry, such as shown by efibootmgr. Non-matching UUIDs might cause trouble, but as I use labels instead, I just have to set the labels (well known to me) when creating the partitions. If one would rather keep using UUIDs, /etc/fstab and /boot/grub/grub.cfg are where they're used.
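
              As a minimal sketch of that ESP backup (paths here are just examples):
              Code:
              # copy the ESP contents somewhere safe
              sudo cp -a /boot/efi /backup/esp-copy
              # record the EFI boot entries for reference
              sudo efibootmgr -v | sudo tee /backup/efi-boot-entries.txt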

              The need for "image" backups comes from the MBR days and Windows, because Microsoft wasn't keen on making it easy to copy Windows.
              Regards, John Little



                #37
                Originally posted by jlittle View Post
                An install from the ISO is fairly quick, but I prefer to understand what's needed and have the necessary backups. On a UEFI system, all that's really needed is the ESP, so I keep a backup of its contents (any sort of copy will do) and make a note of the EFI boot variable entry, such as shown by efibootmgr. Non-matching UUIDs might cause trouble, but as I use labels instead, I just have to set the labels (well known to me) when creating the partitions. If one would rather keep using UUIDs, /etc/fstab and /boot/grub/grub.cfg are where they're used.

                The need for "image" backups comes from the MBR days and Windows, because Microsoft wasn't keen on making it easy to copy Windows.
                I'm coming to the conclusion that my critical server should only have a boot SSD with the EFI boot directory, a SWAP partition, and <ROOTFS> with just what's needed to run *buntu and my programs. That does not need BTRFS, ZFS, or RAID. Once that SSD is created, keep a bootable USB drive with a Clonezilla-type image on it for quick restore. I can restore my boot image to a clean new SSD faster than you can install from the ISO, with no further restore needed.

                When it comes to my data on this critical server, it's all either old and new media collected over time, or backups of what the users on my network think is worth saving, either via automated backup programs like deja-dup or simply copies to their server directory. So rolling back snapshots would never be useful on that server. There is never any experimenting on that server. One of the advantages of running an LTS version of *buntu is that I've never had an OS security patch or system update not work out. You can't say that about my lab Archlinux system. Those rolling releases need fixing from updates all the time. Playing with Arch keeps my production stuff on LTS *buntu.

                Based on what happened to me yesterday, I think your critical data backup and restore scheme needs to factor in how it will work in a panic situation. Remembering how to replace snapshots under those conditions could be problematic and lead to mistakes. I found this out by having one of my test systems start running amok and deleting files; I forgot my backup server was attached via SMB/CIFS, and the mount point caused the test system to start walking some of those backup directories.

                Luckily all those files are on my cloud server, IDrive.com. So I spent the last 12 hours restoring ~500GB from my online backup. Lots of fun and panic.

                So I think I'm going to reserve btrfs with snapshots for my daily-driver PCs, where snapshot rollback makes sense - I'm more likely to need to roll back a recent install or update there. Critical data for those PCs is still on the server.



                  #38
                  Originally posted by oshunluvr View Post
                  You got it. No rsync or other external programs needed. Install new drive, install Kubuntu from ISO, restore the subvolume backups, reboot.

                  The end result of a btrfs send|receive operation IS a full copy of the subvolume. So you would just have to reverse the direction of the operation - from backup to your root FS.

                  You would literally re-install from the ISO, then replace the newly installed @ and @home subvolumes with your subvolumes from the backup drive and reboot.

                  One other point to touch on: subvolumes MUST be read-only status to send|receive. Obviously, you wouldn't want to boot to a R/O operating system, so prior to rebooting you need to make the @ and @home subvolumes read-write after the send|receive. You can do this two ways - either take a R/W snapshot of the R/O subvolumes and rename appropriately, or change the subvolume attribute with this command:

                  Code:
                  btrfs property set -ts /path/to/snapshot ro false
                  This sounds like it's worth a test for use on any lab PC where you don't want to spend hours reinstalling everything to get back to where you were. In that case you're not talking about failed hardware, but failed software. This scheme is definitely worth testing out.


                    #39
                    Restoring (send|receive) is infinitely easier than using a third-party backup tool and simpler than even rsync.

                    There are other benefits to btrfs as well. Of course, everyone's use-case and priorities vary.



                      #40
                      Originally posted by oshunluvr View Post
                      Restoring (send|receive) is infinitely easier than using a third-party backup tool and simpler than even rsync.

                      There are other benefits to btrfs as well. Of course, everyone's use-case and priorities vary.
                      So let me see if I have the sequence of commands right.

                      I'm going from the Beginner's Guide posted as the first sticky post in the BTRFS category.

                      I have the drive partition that contains @ and @home mounted on /mnt.
                      I create /mnt/snapshots
                      I do my first 2 snapshots with:
                      Code:
                      btrfs su snapshot -r /mnt/@ /mnt/snapshots/@_basic_install
                      btrfs su snapshot -r /mnt/@home /mnt/snapshots/@home_basic_install
                      I then send/receive those to my other hard drive that is formatted btrfs and is mounted at /backup
                      Code:
                      btrfs send /mnt/snapshots/@_basic_install | btrfs receive /backup
                      btrfs send /mnt/snapshots/@home_basic_install | btrfs receive /backup
                      As a test, I installed LibreOffice and Google Chrome to change things up on both / and /home.

                      In my case I rebooted to clean up the temporary mounts of /mnt and /backup.

                      The example in the Beginner's Guide goes thru the typical case of something messed up and you want to go back to a previous snapshot.

                      Code:
                      mv /mnt/@ /mnt/@old
                      mv /mnt/@home /mnt/@homeold
                      Now you are ready to create a new @ and @home subvolume using the btrfs snapshot command:
                      Code:
                      btrfs subvol snapshot /mnt/snapshots/@_basic_install /mnt/@
                      btrfs subvol snapshot /mnt/snapshots/@home_basic_install /mnt/@home
                      This puts the system back to the basic install condition and it's read-write.
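
                      As a quick sanity check at this point (a suggestion, not from the guide), you can confirm the new @ and @home exist and are writable:
                      Code:
                      sudo btrfs subvolume list /mnt
                      sudo btrfs property get -ts /mnt/@ ro    # should print ro=false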

                      If the situation is that there are no snapshots in /mnt/snapshots because of a fresh Kubuntu install on the drive, what's the next step? I'm guessing you want to mount the partition that contains the btrfs @ and @home on /mnt again, and the backup drive with the full images from btrfs send/receive on /backup. Then you want to do the commands:
                      Code:
                      btrfs send /backup/@_basic_install | btrfs receive /mnt
                      btrfs send /backup/@home_basic_install | btrfs receive /mnt
                      I'm really not sure about this part. The full images on /backup are not r/o snapshots. They are full images so they can't be sent outside their file system. This is the part I need to better understand.




                        #41
                        Most of that looks correct. My comments:

                        I would never do this:
                        Code:
                        mv /mnt/@ /mnt/@old
                        mv /mnt/@home /mnt/@homeold
                        Moving the subvolume, then taking a snapshot is redundant. Simply snapshot the snapshot. I would do this instead:
                        Code:
                        sudo btrfs su sn /mnt/@old /mnt/@ 
                        sudo btrfs su sn /mnt/@homeold /mnt/@home
                        and skip the move commands. Same exact results - a read-write snapshot in the correct location - just simpler.

                        This is incorrect:
                        The full images on /backup are not r/o snapshots.
                        Actually they are the same thing.

                        Maybe terminology is part of the issue. In btrfs, "snapshot" is a verb, not a noun. We are working with subvolumes, period. When you take a snapshot of a subvolume, a new subvolume is created. When someone uses "snapshot" as a noun, it means "a subvolume created by using the btrfs snapshot function". A snapshot that is sent becomes a "backup", but in this context that means "a subvolume created with the snapshot command, then sent to another file system."

                        Snapshots are subvolumes. A subvolume must be r/o to be sent, and it arrives r/o. Unless you re-snapshot it to get a r/w copy, or use the btrfs property set command to change it from r/o to r/w, it remains r/o.

                        The functional difference is that a snapshot exists on the same file system as its source. Once you send it to another file system, it becomes a backup - but they are ALL subvolumes. If the new subvolume created as a snapshot remains on the same file system as its source, it only occupies the data space required to store the changes made to the source - it grows dynamically. If you send that subvolume to another file system, it requires the full amount of data space the source was using at the time of the snapshot - it remains a fixed size.
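
                        One way to see that difference in practice (a suggestion, using the subvolume names from earlier in the thread) is to compare shared vs. exclusive usage:
                        Code:
                        # "Exclusive" stays near zero for a fresh local snapshot, but equals the full size for a copy sent to another file system
                        sudo btrfs filesystem du -s /mnt/@ /mnt/snapshots/@_basic_install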

                        This is also incorrect
                        They are full images so they can't be sent outside their file system.
                        Any btrfs subvolume can be sent from one btrfs file system to any other btrfs file system as long as it's r/o. This is exactly what the send|receive commands are for - moving a subvolume from one file system to another. In the case of restoring from a backup (a snapshot that exists on a different file system) to begin using it "live" again, you simply send|receive it, then change it to r/w.

                        However, you cannot "snapshot" a subvolume from a different file system. The "snapshot" command only functions within a single file system, whereas the send|receive function is file system-to-file system.



                          #42
                          Another comment: To roll back to a previous snapshot, the best method IMO is to rename your current subvolume, then snapshot the previous snapshot to the desired name. This leaves your saved snapshots intact and makes it easier to keep track of them.

                          Example:
                          /mnt contains subvolumes @ and @home and folder "snaps"

                          Code:
                          ls /mnt
                          @
                          @home
                          snaps
                          Every day I take a snapshot of @ and @home into the snaps folder and add the day of the week to the snapshot name so Tuesday's snapshots are @_tue and @home_tue.
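
                          A sketch of those daily snapshots (Tuesday's, for example; the -r flag is optional for local rollbacks but lets you send them later):
                          Code:
                          sudo btrfs su sn -r /mnt/@ /mnt/snaps/@_tue
                          sudo btrfs su sn -r /mnt/@home /mnt/snaps/@home_tue
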
                          One Wednesday, a failed update breaks my desktop. So I boot to console mode, log in and navigate to /mnt, then do this:
                          Code:
                          sudo mv @ @_bad
                          sudo mv @home @home_bad
                          sudo btrfs su sn snaps/@_tue @
                          sudo btrfs su sn snaps/@home_tue @home
                          then reboot.

                          BTRFS will let you rename the subvolume you are using while you are using it because the files remain in the same location.

                          This has some easy to understand examples: https://btrfs.wiki.kernel.org/index.php/UseCases

                          Read the parts about rollbacks and backups.


                            #43
                            Originally posted by oshunluvr View Post
                            Another comment: To roll back to a previous snapshot, the best method IMO is to rename your current subvolume, then snapshot the previous snapshot to the desired name. This leaves your saved snapshots intact and makes it easier to keep track of them.

                            Example:
                            /mnt contains subvolumes @ and @home and folder "snaps"

                            Code:
                            ls /mnt
                            @
                            @home
                            snaps
                            BTRFS will let you rename the subvolume you are using while you are using it because the files remain in the same location.
                            Not quite there yet.

                            On my last try, I did what you said not to do and it would not boot at all. So I reinstalled Kubuntu again. But I still had all those snapshots that I had created and used send/receive to send to an external drive.

                            So as soon as I had a new system, I mounted the btrfs root partition on /mnt and created the /mnt/snapshots directory. Then I mounted the external drive at /backup.

                            I reversed the send/receive to put the /backup snapshots in /mnt/snapshots. No issues in doing this, and the 8GB @20220118 snapshot took several minutes to copy, as expected. Then I moved /mnt/@ and /mnt/@home out of the way and did:
                            Code:
                            btrfs su sn /mnt/snapshots/@20220118 /mnt/@
                            btrfs su sn /mnt/snapshots/@home20220118 /mnt/@home
                            All the commands worked without complaint, but when the system booted, after grub, I saw an error about not finding some device, which listed a UUID. Later I saw "not waiting for hibernate/suspend device", then it dropped into busybox. I'm guessing that since I did a restore of the snapshot from the external drive, it replaced /etc/fstab, which had all the old UUIDs in its mount statements, but the new install had created new ones. Just guessing, but it seems reasonable.



                              #44
                              Yeah, when you reformat a file system, the UUIDs will change. You can manage that a couple of ways:
                              • Set GRUB to boot to labels instead of UUIDs (I've never done that. See https://ubuntuforums.org/showthread....51#post9585951 ) AND use labels in fstab.
                              • When you format the new file system, use the previous UUID (using "-U <UUID>" during mkfs will allow manual assignment of a specific UUID), or change the UUID after creating the file system (sudo btrfstune -U <UUID> /dev/sdaXX)
                              • After restoring the subvolumes but BEFORE rebooting, edit fstab to show the new UUIDs. Then at reboot, manually edit grub ("e" to edit when you see the boot menu), change the UUID, then after reboot run "update-grub".
                              Granted, none of these are quick and easy, but this is catastrophe recovery, not a daily grind. Note that the first option could be wiped out if a future update reset GRUB back to using UUIDs.

                              One could store the list of needed UUIDs in small files in the backup folder (or simply read them from the @/etc/fstab file) for easy restoration along with a binary copy of the boot record and partition table backup (sgdisk can do this) - probably only useful if the drive replacement was identical.
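
                              A rough sketch of saving that info, assuming the backup folder from earlier in the thread (/mnt/media/backups) and the boot drive /dev/nvme0n1 - adjust to taste:
                              Code:
                              # record the current filesystem UUIDs and labels
                              sudo blkid | sudo tee /mnt/media/backups/uuids.txt
                              # save a binary copy of the GPT partition table (restore later with sgdisk --load-backup)
                              sudo sgdisk --backup=/mnt/media/backups/nvme0n1-gpt.bin /dev/nvme0n1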

                              I'm curious how UEFI or systemd-boot (instead of grub) would handle this sort of recovery. I don't use either on my main machines and frankly don't want to. I have 5 or 6 installs on my desktop PC, and having to manage EFI isn't on my list of things to do.

                              Fair to point out here that any backup/recovery operation short of a full drive copy requires these sorts of steps. Managing partitions and booting is not dependent on the file system.



                                #45
                                So I'm going to set up some more tests to see which works and is the easiest. As a side note, I wanted to avoid all the time it takes to reinstall Kubuntu from a boot USB. So after I had it built last time, I used a bootable USB-to-SATA SSD drive with Clonezilla on it and enough space to hold a lot of restore images. It took me 8 minutes to create a new complete backup image of my sda drive. To test it, I cleared the drive, then booted the Clonezilla drive and did a full image restore. It took under 7 minutes.

                                So I'm thinking that if I create subvolumes besides @ and @home - maybe @var or anything else that changes from the boot image - then I would only have to do the following on a drive failure (see the sketch after this list):
                                1. Boot the Clonezilla drive and do a restore.
                                2. Mount the backup drive and send/receive the latest backup to the snapshot directory.
                                3. Do a btrfs su sn of the restored subvolume to a read-write subvolume where it originally was.
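
                                A sketch of steps 2 and 3, assuming the restored root file system is mounted at /mnt, the backup drive at /backup, and a backup named @home_latest (names are illustrative):
                                Code:
                                # step 2: pull the latest backup onto the root filesystem (it arrives read-only)
                                sudo btrfs send /backup/@home_latest | sudo btrfs receive /mnt/snapshots
                                # step 3: set the Clonezilla-restored @home aside and snapshot the backup into place read-write
                                sudo mv /mnt/@home /mnt/@home_fresh
                                sudo btrfs su sn /mnt/snapshots/@home_latest /mnt/@home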

                                On a production server, you'd only need a Clonezilla image of the boot drive after major updates. One every few months or more. Snapshots could run daily.

                                Can't do any tests today, as all my house electricity is off while the workers are wiring up a whole-house generator.
