How to install Kubuntu 21.10 with BTRFS on root?

This topic is closed.
  • jfabernathy
    replied
    Originally posted by oshunluvr View Post
    Sounds like a plan!

    Have you researched using incremental send for the backups? It saves a lot of time.

    https://btrfs.wiki.kernel.org/index....emental_Backup
    For my media server, I probably will not, because once the software is set up it won't change for a year at least - just the data, and that is on a mirror. Plus, I forgot to mention that all the boot drive and media drive files are stored in the cloud and synced every night.

    Now for my daily driver, where I'm always experimenting, incremental snapshots make sense, and I'll be exploring that next.
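    A hedged sketch of what that incremental workflow looks like (the snapshot names, dates, and mount points below are assumptions for illustration, not anyone's actual setup; the `run` wrapper only echoes, so the script is inert without root):

```shell
#!/bin/sh
# Incremental btrfs backup: send only the delta against a parent snapshot
# that already exists on both the source and the backup file system.
run() { echo "+ $*"; }   # dry-run wrapper; swap the body for "$@" to execute

PARENT=/mnt/snapshots/@20220118   # yesterday's r/o snapshot, already on /backup
TODAY=/mnt/snapshots/@20220119    # today's new r/o snapshot

run btrfs subvolume snapshot -r /mnt/@ "$TODAY"
# -p names the parent so only changed blocks cross the pipe
run "btrfs send -p $PARENT $TODAY | btrfs receive /backup"
```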



  • oshunluvr
    replied
    Sounds like a plan!

    Have you researched using incremental send for the backups? It saves a lot of time.

    https://btrfs.wiki.kernel.org/index....emental_Backup



  • jfabernathy
    replied
    I've done enough experimenting that I've decided on just doing my own snapshots manually and sending them to an external drive. I played with snapper a lot over the last several days instead of going out and playing in the snow. In one test case, after I rolled back a snapshot, for some reason it didn't have the old /etc/fstab. Everything else was rolled back except that. There's no explaining it, but doing it manually works just like the Beginner's Guide shows.

    So moving forward, my new media server will have a BTRFS boot SSD and a couple of 4TB HDs in a BTRFS RAID 1 mirror. I'll take manual snapshots regularly and send/receive them to a snapshot backup folder on the mirror where the media is.

    Then I'll have a Clonezilla image of the boot drive done before any major changes.

    I think they call that belt and suspenders.
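    That routine could be sketched as a small script (paths like /mnt and /media/backup are assumptions; echo-only so it's safe to read and run without root):

```shell
#!/bin/sh
# Manual daily snapshot + send/receive to the media mirror's backup folder.
set -eu
STAMP=$(date +%Y%m%d)             # e.g. 20220120, matching the dated-name idea

echo btrfs subvolume snapshot -r /mnt/@ "/mnt/snapshots/@$STAMP"
echo btrfs subvolume snapshot -r /mnt/@home "/mnt/snapshots/@home$STAMP"
echo "btrfs send /mnt/snapshots/@$STAMP | btrfs receive /media/backup"
echo "btrfs send /mnt/snapshots/@home$STAMP | btrfs receive /media/backup"
```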



  • jfabernathy
    replied
    Well, I have something that works, but I'm going to refine it in the next few days using some of the tricks previously suggested. This worked:

    I took a snapshot of a working system, then sent/received that snapshot to an external drive.

    I reinstalled Kubuntu and after first boot I saved the working fstab to my backup drive and printed it out.

    I then mounted the rootfs on /mnt and created the directory, /mnt/snapshots.

    Mounted the external drive to /backup and reversed the send/receive of the external snapshot back to /mnt/snapshots.

    I renamed /mnt/@ and /mnt/@home to /mnt/@old and /mnt/@homeold.

    Then I did
    Code:
    btrfs su sn /mnt/snapshots/@yymmdd /mnt/@
    btrfs su sn /mnt/snapshots/@homeyymmdd /mnt/@home
    Before rebooting, I edited /mnt/@/etc/fstab and put in the new UUIDs, replacing the ones saved from the original install.

    Then I rebooted, edited the grub entries during boot, replaced the old UUIDs with the new ones, and booted.

    As soon as I logged in, I did
    Code:
    sudo update-grub
    I'm working on a cleaner way, but so far this at least works for replacing a failed drive.
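    The fstab UUID swap in that procedure can be scripted with sed. A sketch working on a sample file (both UUIDs below are made up; in real use NEW_UUID would come from `blkid -s UUID -o value` on the new partition, and the target would be /mnt/@/etc/fstab):

```shell
#!/bin/sh
# Swap the old root-fs UUID for the new drive's UUID in an fstab copy.
set -eu
OLD_UUID="11111111-2222-3333-4444-555555555555"   # from the saved fstab
NEW_UUID="aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"   # from blkid on the new drive

# Sample stand-in for /mnt/@/etc/fstab
cat > /tmp/fstab.sample <<EOF
UUID=$OLD_UUID /     btrfs defaults,subvol=@     0 1
UUID=$OLD_UUID /home btrfs defaults,subvol=@home 0 2
EOF

sed -i "s/$OLD_UUID/$NEW_UUID/g" /tmp/fstab.sample
cat /tmp/fstab.sample
```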






  • jlittle
    replied
    IIUC the ESP partition for UEFI is not and cannot be btrfs. For a restore to an empty volume that partition should be recreated and copied separately.

    btrfs subvolumes may be mounted inside other subvolumes, but are not part of them. (This just like other, non-btrfs mounts; for example, if one mounts /dev/sdb1 in /mnt/foo where / is on /dev/sda1, one doesn't expect /dev/sdb1 to be "in" /dev/sda1.) So the contents of /boot/efi are not part of @. A send/receive of @ does not include any other subvolumes.

    IMO it's confusing that for most commands we have no way to identify and address subvolumes other than by path. Well, it confused me, anyway.

    I would worry about using /mnt to mount the btrfs root-fs. In principle it wouldn't stop the normal use of /mnt, but those ancient Unix directories are encumbered with historical misuse. I started using /mnt/top, but laziness made me change to /top, then to /t. (I might switch to /f or /j to save ~15 mm of finger movement.)
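    For reference, mounting the true btrfs top level (subvolid 5) at a short path like /t looks like this (the device name /dev/sda2 is an assumption; echo-only sketch since mounting needs root):

```shell
#!/bin/sh
# Mount the btrfs top level so @, @home, and snapshot folders sit side by side.
TOP=/t
echo mkdir -p "$TOP"
echo mount -o subvolid=5 /dev/sda2 "$TOP"
echo btrfs subvolume list "$TOP"
```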



  • jfabernathy
    replied
    Originally posted by jfabernathy View Post
    ...

    Maybe all of this restore stuff should be done from the Kubuntu install ISO terminal?
    For my second test, I did the same procedure as before, but didn't do any restore while running normally. I booted the Kubuntu Live ISO and opened a terminal.
    I mounted both the rootfs partition on /mnt and the external backup drive on /backup.

    I then reversed the send/receive from /backup to the /mnt/snapshots directory.
    Next I moved /mnt/@ to /mnt/@old, and /mnt/@home to /mnt/@homeold.
    Finally, I did:
    Code:
    btrfs su sn /mnt/snapshots/@_basic_install /mnt/@
    btrfs su sn /mnt/snapshots/@home_basic_install /mnt/@home
    Unmounted everything and rebooted.

    This worked without any other changes. The file system was read-write as expected.

    Next test is using one of the suggested methods to handle UUIDs or labels with a new HD.
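    Collected into one place, the live-ISO restore above looks roughly like this (device and snapshot names follow the earlier posts and will differ per system; the `run` wrapper just echoes so nothing is touched):

```shell
#!/bin/sh
# Restore @ and @home from an external backup while booted from the live ISO.
run() { echo "+ $*"; }   # dry-run; swap the body for "$@" to execute as root

run mount /dev/sdc3 /mnt      # btrfs root file system
run mount /dev/sda1 /backup   # external backup drive
run "btrfs send /backup/@_basic_install | btrfs receive /mnt/snapshots/"
run "btrfs send /backup/@home_basic_install | btrfs receive /mnt/snapshots/"
run mv /mnt/@ /mnt/@old
run mv /mnt/@home /mnt/@homeold
run btrfs subvolume snapshot /mnt/snapshots/@_basic_install /mnt/@
run btrfs subvolume snapshot /mnt/snapshots/@home_basic_install /mnt/@home
run umount /mnt /backup
```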



  • jfabernathy
    replied
    Well, I had a partial success. I went through the steps below and they all seemed to work. But when I booted, I was at the grub command line with nothing to edit. I booted the Kubuntu ISO, went to "Try Kubuntu", and opened a terminal. When I looked at /dev/sdc1, which was my EFI partition, there was no boot directory or any of what should have been in it after it was mounted on the / file system. I also mounted /dev/sdc3, which was my partition containing the rootfs. There was no @ or @home; there were @old, @homeold, and snapshots. I was also expecting to find @ and @home.
    So I took a chance and did
    Code:
    btrfs su sn /mnt/snapshots/@_basic_install /mnt/@
    btrfs su sn /mnt/snapshots/@home_basic_install /mnt/@home
    Then I did
    Code:
     grub-install /dev/sdc1
    and rebooted. I was back to where I expected to be, prior to installing the extra packages.

    Here's the sequence of commands I did.

    Code:
    mount /dev/sdc3 /mnt
    mkdir /mnt/snapshots
    mount /dev/sda1 /backup
    btrfs su snapshot -r /mnt/@ /mnt/snapshots/@_basic_install
    btrfs su snapshot -r /mnt/@home /mnt/snapshots/@home_basic_install
    btrfs send /mnt/snapshots/@_basic_install | btrfs receive /backup
    btrfs send /mnt/snapshots/@home_basic_install | btrfs receive /backup
    Installed some extra stuff.

    To simulate a new drive with the same UUIDs, delete the old snapshots:

    Code:
    btrfs su de /mnt/snapshots/@_basic_install
    btrfs su de /mnt/snapshots/@home_basic_install
    Restore the images from the external backup to the /mnt/snapshots directory:

    Code:
    btrfs send /backup/@_basic_install | btrfs receive /mnt/snapshots/
    btrfs send /backup/@home_basic_install | btrfs receive /mnt/snapshots/
    Make them read-write and put them at /mnt/@ and /mnt/@home:
    Code:
    btrfs su sn /mnt/snapshots/@_basic_install /mnt/@
    btrfs su sn /mnt/snapshots/@home_basic_install /mnt/@home
    Not sure why grub was not there. It must have something to do with the EFI partition being mounted on /boot/efi, which is part of the @ subvolume.

    Maybe all of this restore stuff should be done from the Kubuntu install ISO terminal?
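    When only the grub prompt appears, one common fix (not necessarily what went wrong here) is to chroot in from the live ISO and reinstall GRUB. A hedged, echo-only sketch using the device names from the post above:

```shell
#!/bin/sh
# Reinstall GRUB by chrooting into the restored @ subvolume from the live ISO.
run() { echo "+ $*"; }   # dry-run wrapper; swap for "$@" to execute as root

run mount -o subvol=@ /dev/sdc3 /target
run mount /dev/sdc1 /target/boot/efi   # the ESP
for d in dev proc sys; do
    run mount --bind "/$d" "/target/$d"
done
run chroot /target grub-install
run chroot /target update-grub
```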



  • jfabernathy
    replied
    So I'm going to set up some more tests to see which works and is the easiest. As a side note, I wanted to avoid all the time it takes to reinstall Kubuntu from a boot USB. So after I had it built last time, I used a bootable USB-to-SATA SSD drive with Clonezilla on it and enough space to hold a lot of restore images. It took me 8 minutes to create a new complete backup image of my sda drive. To test it, I cleared the drive, booted the Clonezilla drive, and did a full image restore. It took under 7 minutes.

    So I'm thinking if I create subvolumes besides @ and @home, maybe @var or anything else that changes from the boot image, then I'd only have to do the following on a drive failure:
    1. Boot the Clonezilla drive and do a restore.
    2. Mount the backup drive and send/receive the latest backup to the snapshot directory.
    3. Do a btrfs su sn of the restored subvolume to a read-write subvolume where it originally was.

    On a production server, you'd only need a Clonezilla image of the boot drive after major updates - one every few months or so. Snapshots could run daily.

    Can't do any test today as all my house electricity is off while the workers are wiring up a whole house generator.

    Leave a comment:


  • oshunluvr
    replied
    Yeah, when you reformat a file system, the UUIDs will change. You can manage that a couple of ways:
    • Set GRUB to boot from labels instead of UUIDs (I've never done that. See https://ubuntuforums.org/showthread....51#post9585951 ) AND use labels in fstab.
    • When you format the new file system, use the previous UUID (using "-U <UUID>" during mkfs allows manual assignment of a specific UUID), or change the UUID after creating the file system (sudo btrfstune -U <UUID> /dev/sdaXX).
    • After restoring the subvolumes but BEFORE rebooting, edit fstab to show the new UUIDs. Then at reboot, manually edit grub ("e" to edit when you see the boot menu), change the UUID, then after reboot run "update-grub".
    Granted, none of these are quick and easy, but this is catastrophe recovery, not a daily grind. Note that the first option could be wiped out if a future update reset GRUB back to using UUIDs.

    One could store the list of needed UUIDs in small files in the backup folder (or simply read them from the @/etc/fstab file) for easy restoration, along with a binary copy of the boot record and a partition table backup (sgdisk can do this) - probably only useful if the replacement drive was identical.
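    A sketch of stashing that recovery metadata next to the backups (file names here are made up; echo-only since blkid and sgdisk need root and real devices):

```shell
#!/bin/sh
# Save UUIDs and the partition table so an identical replacement drive
# can be rebuilt without guessing.
BACKUP=/backup
echo "blkid -o export /dev/sda1 > $BACKUP/uuids.txt"
echo "sgdisk --backup=$BACKUP/sda-table.bin /dev/sda"
# Later, restore the table onto the identical new drive:
echo "sgdisk --load-backup=$BACKUP/sda-table.bin /dev/sdb"
```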

    I'm curious how UEFI or systemd boot (instead of grub) would handle this sort of recovery. I don't use either on my main machines and frankly don't want to. I have 5 or 6 installs on my desktop PC and having to manage EFI isn't on my list of things to do.

    Fair to point out here that any backup/recovery operation short of a full drive copy requires these sort of steps. Managing partitions and booting is not dependent on the file system.



  • jfabernathy
    replied
    Originally posted by oshunluvr View Post
    Another comment: To roll back to a previous snapshot, the best method IMO is to rename your current subvolume, then snapshot the previous snapshot to the desired name. This leaves your saved snapshots intact and makes it easier to keep track of.

    Example:
    /mnt contains subvolumes @ and @home and folder "snaps"

    Code:
    ls /mnt
    @
    @home
    snaps
    BTRFS will let you rename the subvolume you are using while you are using it because the files remain in the same location.
    Not quite there yet.

    On my last try, I did what you said not to do, and it would not boot at all. So I reinstalled Kubuntu again. But I still had all those snapshots I had created and sent to an external drive with send/receive.

    So as soon as I had a new system, I mounted the btrfs root partition on /mnt and created the /mnt/snapshots directory. Then I mounted the external drive to /backup.

    I reversed the send/receive to put the snapshots from /backup into /mnt/snapshots. No issues doing this, and the 8GB @20220118 snapshot took several minutes to copy, as expected. Then I moved /mnt/@ and /mnt/@home out of the way and did:
    Code:
    btrfs su sn /mnt/snapshots/@20220118 /mnt/@
    btrfs su sn /mnt/snapshots/@home20220118 /mnt/@home
    All the commands worked without complaint, but when the system booted, after grub I saw an error about not finding some device, listing a UUID. Later I saw "not waiting for hibernate/suspend device", and then it dropped into busybox. I'm guessing that since I restored the snapshot from the external drive, it brought back the old /etc/fstab, with the old UUIDs in its mount statements, while the new install had created new ones. Just guessing, but it seems reasonable.



  • oshunluvr
    replied
    Another comment: To roll back to a previous snapshot, the best method IMO is to rename your current subvolume, then snapshot the previous snapshot to the desired name. This leaves your saved snapshots intact and makes it easier to keep track of.

    Example:
    /mnt contains subvolumes @ and @home and folder "snaps"

    Code:
    ls /mnt
    @
    @home
    snaps
    Every day I take a snapshot of @ and @home into the snaps folder and add the day of the week to the snapshot name, so Tuesday's snapshots are @_tue and @home_tue.
    On Wednesday, a failed update breaks my desktop. So I boot to console mode, log in, navigate to /mnt, then do this:
    Code:
    sudo mv @ @_bad
    sudo mv @home @home_bad
    sudo btrfs su sn snaps/@_tue @
    sudo btrfs su sn snaps/@home_tue @home
    then reboot.

    BTRFS will let you rename the subvolume you are using while you are using it because the files remain in the same location.

    This has some easy to understand examples: https://btrfs.wiki.kernel.org/index.php/UseCases

    Read the parts about rollbacks and backups.
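    The day-of-week rotation described above could be scripted along these lines (paths per the example; echo-only sketch, and the delete step assumes last week's same-day snapshots already exist):

```shell
#!/bin/sh
# Rotate daily snapshots named by day of week, e.g. @_tue and @home_tue.
DOW=$(date +%a | tr '[:upper:]' '[:lower:]')   # e.g. "tue"

echo btrfs subvolume delete "/mnt/snaps/@_$DOW"        # drop last week's copy
echo btrfs subvolume delete "/mnt/snaps/@home_$DOW"
echo btrfs subvolume snapshot -r /mnt/@ "/mnt/snaps/@_$DOW"
echo btrfs subvolume snapshot -r /mnt/@home "/mnt/snaps/@home_$DOW"
```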
    Last edited by Snowhog; Jan 18, 2022, 02:16 PM. Reason: Spelling correction



  • oshunluvr
    replied
    Most of that looks correct. My comments:

    I would never do this:
    Code:
    mv /mnt/@ /mnt/@old
    mv /mnt/@home /mnt/@homeold
    Moving the subvolume, then taking a snapshot is redundant. Simply snapshot the snapshot. I would do this instead:
    Code:
    sudo btrfs su sn /mnt/@old /mnt/@ 
    sudo btrfs su sn /mnt/@homeold /mnt/@home
    and skip the move commands. Same exact results - a read-write snapshot in the correct location - just simpler.

    This is incorrect:
    The full images on /backup are not r/o snapshots.
    Actually they are the same thing.

    Maybe terminology is part of the issue. In btrfs usage, "snapshot" is a verb, not a noun. We are working with subvolumes, period. When you take a snapshot of a subvolume, a new subvolume is created. When someone uses "snapshot" as a noun, it means "a subvolume created using the btrfs snapshot function". A snapshot that is sent becomes a "backup", but in this context that means "a subvolume created on another file system using the snapshot command, then sent to this file system."

    Snapshots are subvolumes. A subvolume must be r/o to be sent, and it arrives r/o. Unless you re-snapshot it to get a r/w copy, or use the btrfs property set command to change it from r/o to r/w, it remains r/o.

    The functional difference is that a snapshot exists on the same file system as its source. Once you send it to another file system, it becomes a backup - but they are ALL subvolumes. If the new subvolume created as a snapshot remains on the same file system as the source, it only occupies the data space required to store the changes made to the source - it grows dynamically. If you send that subvolume to another file system, it requires the full amount of data space the source was using at the time of the snapshot - it remains a fixed size.

    This is also incorrect
    They are full images so they can't be sent outside their file system.
    Any btrfs subvolume can be sent from one btrfs file system to any other as long as it's r/o. This is exactly what the send|receive commands are for - moving a subvolume from one file system to another. In the case of restoring from a backup (a snapshot that exists on a different file system) to begin using it "live" again, you simply send|receive it, then change it to r/w.

    However, you cannot "snapshot" a subvolume from a different file system. The "snapshot" command only functions within a single file system, whereas the send|receive function is file system-to-file system.



  • jfabernathy
    replied
    Originally posted by oshunluvr View Post
    Restoring (send|receive) is infinitely easier than using a third-party backup tool and simpler than even rsync.

    There are other benefits to btrfs as well. Of course, everyone's use-case and priorities vary.
    So let me see if I have the sequence of commands right.

    I'm going from the Beginner's Guide posted as the first sticky post in the BTRFS category.

    I have the drive partition that contains @ and @home mounted on /mnt.
    I create /mnt/snapshots
    I do my first 2 snapshots with:
    Code:
    btrfs su snapshot -r /mnt/@ /mnt/snapshots/@_basic_install
    btrfs su snapshot -r /mnt/@home /mnt/snapshots/@home_basic_install
    I then send/receive those to my other hard drive that is formatted btrfs and is mounted at /backup
    Code:
    btrfs send /mnt/snapshots/@_basic_install | btrfs receive /backup
    btrfs send /mnt/snapshots/@home_basic_install | btrfs receive /backup
    As a test I install libreoffice and Google-Chrome to change things up on both / and /home.

    In my case I rebooted to clean up the temporary mounts of /mnt and /backup.

    The example in the Beginner's Guide goes through the typical case of something messed up where you want to go back to a previous snapshot.

    Code:
    mv /mnt/@ /mnt/@old
    mv /mnt/@home /mnt/@homeold
    Now you are ready to create a new @ and @home subvolume using the btrfs snapshot command:
    Code:
    btrfs subvol snapshot /mnt/snapshots/@_basic_install /mnt/@
    btrfs subvol snapshot /mnt/snapshots/@home_basic_install /mnt/@home
    This puts the system back to the basic install condition and it's read-write.

    If the situation is that there are no snapshots in /mnt/snapshots because of a fresh Kubuntu install on the drive, what's the next step? I'm guessing you want to mount the partition that contained the btrfs @ and @home on /mnt again, and the backup drive with the full images from btrfs send/receive on /backup. Then you want to do the commands:
    Code:
    btrfs send /backup/@_basic_install | btrfs receive /mnt/@
    btrfs send /backup/@home_basic_install | btrfs receive /mnt/@
    I'm really not sure about this part. The full images on /backup are not r/o snapshots. They are full images so they can't be sent outside their file system. This is the part I need to better understand.




  • oshunluvr
    replied
    Restoring (send|receive) is infinitely easier than using a third-party backup tool and simpler than even rsync.

    There are other benefits to btrfs as well. Of course, everyone's use-case and priorities vary.



  • jfabernathy
    replied
    Originally posted by oshunluvr View Post
    You got it. No rsync or other external programs needed. Install new drive, install Kubuntu from ISO, restore the subvolume backups, reboot.

    The end result of a btrfs send|receive operation IS a full copy of the subvolume. So you would just have to reverse the direction of the operation - from backup to your root FS.

    You would literally re-install from the ISO, then replace the newly installed @ and @home subvolumes with your subvolumes from the backup drive and reboot.

    One other point to touch on: subvolumes MUST be read-only status to send|receive. Obviously, you wouldn't want to boot to a R/O operating system, so prior to rebooting you need to make the @ and @home subvolumes read-write after the send|receive. You can do this two ways - either take a R/W snapshot of the R/O subvolumes and rename appropriately, or change the subvolume attribute with this command:

    Code:
    btrfs property set -ts /path/to/snapshot ro false
    This sounds like it's worth a test for use on any lab PC where you don't want to spend hours reinstalling everything to get back to where you were. In that case you're not talking about failed hardware, but failed software. This scheme is definitely worth testing out.
    Last edited by Snowhog; Jan 18, 2022, 08:20 AM. Reason: Correct spelling error

