How to install Kubuntu 21.10 with BTRFS on root?

This topic is closed.

  • jfabernathy
    replied
    Originally posted by jlittle View Post
    An install from the ISO is fairly quick, but I prefer to understand what's needed, and have the necessary backups. On a UEFI system all that's really needed is the ESP, so I do a backup of its contents (any sort of copy will do) and maybe make a note of an EFI boot variable entry such as shown by efibootmgr. The UUIDs not matching might cause trouble, but as I use labels instead I just have to set the labels (well known by me) when creating the partitions. But if one is reluctant to give up UUIDs, it's /etc/fstab and /boot/grub/grub.cfg where they're used.

    The need for "image" backups comes from the MBR days and Windows, because Microsoft wasn't keen on making it easy to copy Windows.
    I'm about to come to the conclusion that my critical server should only have a boot SSD with the EFI boot directory, a swap partition, and <ROOTFS> with just what's needed to run *buntu and my programs. That does not need any BTRFS or ZFS or RAID. Once that SSD is created, keep a bootable backup USB drive with a Clonezilla-type image on it for quick restore. I can restore my boot image to a clean new SSD faster than you can install the ISO, with no further restore needed.

    When it comes to my data on this critical server, it's all either old and new media collected over time, or backups of what the users on my network think is worth saving, either via automated backup programs like deja-dup or simply copies to their server directory. So rolling back snapshots would never be useful on that server. There is never any experimenting on that server. One of the advantages of running an LTS version of *buntu is I've never had an OS security patch or system update not work out. You can't say that about my lab Archlinux system; those rolling releases need fixing from updates all the time. Playing with Arch keeps my production stuff on LTS *buntu.

    Based on what happened to me yesterday, I think your critical data backup and restore scheme needs to factor in how it will work in a panic situation. Remembering how to replace snapshots under those conditions could be problematic and lead to mistakes. I found this out when one of my test systems started running amok and deleting files, but I forgot my backup server was attached via SMB/CIFS, and the mount point caused the test system to start walking some of those backup directories.

    Luckily all those files are on my cloud server, IDrive.com. So I spent the last 12 hours restoring ~500GB from my online backup. Lots of fun and panic.

    So I think I'm going to reserve btrfs with snapshots for my daily driver PCs, where snapshot rollback makes sense; I'm more likely to need to roll back a recent install or update there. Critical data is still on the server for those PCs.

    Leave a comment:


  • jlittle
    replied
    An install from the ISO is fairly quick, but I prefer to understand what's needed, and have the necessary backups. On a UEFI system all that's really needed is the ESP, so I do a backup of its contents (any sort of copy will do) and maybe make a note of an EFI boot variable entry such as shown by efibootmgr. The UUIDs not matching might cause trouble, but as I use labels instead I just have to set the labels (well known by me) when creating the partitions. But if one is reluctant to give up UUIDs, it's /etc/fstab and /boot/grub/grub.cfg where they're used.

    The need for "image" backups comes from the MBR days and Windows, because Microsoft wasn't keen on making it easy to copy Windows.
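
    Concretely, that preparation amounts to something like this (a sketch; the device names and the label "root" are my assumptions):
    Code:
    sudo cp -a /boot/efi ~/esp-backup            # any copy of the ESP contents will do
    efibootmgr -v > ~/efi-entries.txt            # note the boot variable entries
    sudo btrfs filesystem label /dev/sda2 root   # set the well-known label on the recreated partition
    In /etc/fstab, a LABEL=root entry then takes the place of the UUID= one.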

    Leave a comment:


  • oshunluvr
    replied
    Originally posted by jfabernathy View Post
    So if I start with the assumption that a boot drive failure will require booting from a Kubuntu ISO, I can do a minimal install with the same manual partitioning I did originally, with the primary partition being btrfs. Then I can use the backup btrfs drive images that were created with the btrfs send/receive command to restore my brand new @ and @home subvolumes.
    Is that done by something like rsync -avz, or is there a btrfs command that can take the full images and send them to the @ and @home subvolumes?
    You got it. No rsync or other external programs needed. Install new drive, install Kubuntu from ISO, restore the subvolume backups, reboot.

    The end result of a btrfs send|receive operation IS a full copy of the subvolume. So you would just have to reverse the direction of the operation - from backup to your root FS.

    You would literally re-install from the ISO, then replace the newly installed @ and @home subvolumes with your subvolumes from the backup drive and reboot.

    One other point to touch on: subvolumes MUST be read-only to send|receive them. Obviously, you wouldn't want to boot to a R/O operating system, so prior to rebooting you need to make the @ and @home subvolumes read-write after the send|receive. You can do this two ways - either take a R/W snapshot of the R/O subvolumes and rename appropriately, or change the subvolume attribute with this command:

    Code:
    btrfs property set -ts /path/to/snapshot ro false
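
    Putting it all together, the restore might look roughly like this (a sketch; the mount points /mnt/backup and /mnt/rootfs and the backup name @_snap1 are just illustrative):
    Code:
    sudo btrfs send /mnt/backup/@_snap1 | sudo btrfs receive /mnt/rootfs
    sudo mv /mnt/rootfs/@ /mnt/rootfs/@fresh                    # set the freshly installed @ aside
    sudo btrfs property set -ts /mnt/rootfs/@_snap1 ro false    # make the restored copy writable
    sudo mv /mnt/rootfs/@_snap1 /mnt/rootfs/@                   # rename it into place
    Once you've booted successfully, you can delete @fresh.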

    Leave a comment:


  • jfabernathy
    replied
    So if I start with the assumption that a boot drive failure will require booting from a Kubuntu ISO, I can do a minimal install with the same manual partitioning I did originally, with the primary partition being btrfs. Then I can use the backup btrfs drive images that were created with the btrfs send/receive command to restore my brand new @ and @home subvolumes.
    Is that done by something like rsync -avz, or is there a btrfs command that can take the full images and send them to the @ and @home subvolumes?

    Leave a comment:


  • oshunluvr
    replied
    Originally posted by jfabernathy View Post
    How do you recover from a drive failure when you discover the failure by the system becoming unusable or not booting?
    I have a second bootable drive in my desktop system or use a LiveUSB bootable drive on my server. The btrfs subvolume backups allow for an eventual full OS and data recovery, but it's not protection from a drive becoming unbootable due to hardware failure.

    Leave a comment:


  • jfabernathy
    replied
    Originally posted by oshunluvr View Post
    It's actually pretty simple:
    Code:
    mkfs.btrfs -L Media -m raid1 -d raid1 /dev/sda /dev/sdb
    I have had a home server for over a decade. Initially I replaced drives to increase capacity as my needs grew. Now that drives are so large, I replace them as they start to fail or show signs they are likely to. The last capacity upgrade occurred this way (note I have a 4-drive hot-swap capable server):
    Initially, 2x6TB drives and 2x2TB drives (8TB storage and backup) configured as JBOD (not RAID). I needed more capacity and the 2x2TB drives were quite old, so I replaced them with a single 10TB drive. The new configuration was to be 10TB (new drive) storage and 12TB (2x6TB) backup. To accomplish the changeover:
    • I unmounted the backup file system and physically removed the 2TB backup drive
    • This left a 6TB drive unused and the storage filesystem (one 6TB and one 2TB drive) still "live."
    • I then inserted the 10TB drive, did "btrfs device add" and added it to "storage" resulting in 18TB storage.
    • Finally "btrfs device delete" removed the 6TB drive and 2TB drive from storage leaving 10TB on the new drive alone.
    • I then physically pulled the last 2TB drive.
    • The final step was to create a new backup filesystem using the 2x6TB drives and mount it to resume the backup procedure.
    The important things to note here are filesystem access and time. NOT ONCE during the entire operation above did I have to take the server off-line or power down. The whole operation occurred while still accessing the files, using the media server, and other services. Obviously, if you don't have hot-swap capability, you'd have to power down for physical drive changes.
    Moving TBs of data around does take time, but since BTRFS does it in the background I simply issued the needed command and came back later to do the next step. All the above took days to complete, but partly because there was no rush: I could still use the server. I checked back a couple of times a day to issue the next command when it was ready for it.
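    If I follow the sequence, the two key steps boil down to something like this (my sketch; assuming the storage filesystem was mounted at /mnt/storage and the new drive showed up as /dev/sdc):
    Code:
    sudo btrfs device add /dev/sdc /mnt/storage               # grow storage onto the new drive
    sudo btrfs device delete /dev/sda /dev/sdb /mnt/storage   # migrate the data off, then release the old drives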
    I've been working through some test cases and examining the use case you listed above. I'm trying to figure out the recovery process.

    First, this is where I'm coming from. If my boot SSD fails, I have to install a new SSD, then boot and restore from my USB key Clonezilla image, which could be weeks or months old. This takes about half an hour. Not much changes on the boot drive except a mariadb database, but that is backed up daily on the data drives. If one of my data RAID1 mirrors fails, I pop in a new identical drive, add it to the mirror, and the resilvering starts. Obviously you have no mirror to protect you during that process.

    How do you recover from a drive failure when you discover the failure by the system becoming unusable or not booting?


    Leave a comment:


  • oshunluvr
    replied
    Short answer is: there's no advantage to having snapshots inside snapshots - literally nested subvolumes, which is what you describe by sending backups (which are subvolumes) into a subvolume. It's "cleaner" IMO to send backups to the root file system and use folders if you want organization.

    You can nest subvolumes if it makes sense to you or has some purpose. There is one disadvantage to doing that - you have to delete nested subvolumes before you can delete the top level subvolume.
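
    For example, if a backup had been received into a @backups subvolume (names here are just illustrative), removal takes two steps:
    Code:
    sudo btrfs subvolume delete /mnt/media/@backups/@_snap1   # the nested subvolume first
    sudo btrfs subvolume delete /mnt/media/@backups           # then the top-level one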

    How you set it up depends on what you're trying to accomplish. Are you making backups or are you storing snapshots? Are you working with a single drive or multiple? As always with Linux you have choices.

    Here's an example of a basic BTRFS installation:

    Going back to your initial post, I'll assume this:
    • /dev/nvme0n1 is your boot drive with a couple partitions (EFI and SWAP) and your BTRFS root filesystem on partition 3 so /dev/nvme0n1p3 will be mounted under rootfs.
    • /dev/sda and /dev/sdb (no partitions) are joined as a BTRFS RAID1 file system which we will mount under media.
    We need 4 mounts: @, @home, rootfs, and media.

    /etc/fstab looks like:
    Code:
    <UUID1>  /           btrfs  <options>,subvol=@
    <UUID1>  /home       btrfs  <options>,subvol=@home
    <UUID1>  /mnt/rootfs btrfs  <options>
    <UUID2>  /mnt/media  btrfs  <options>
    Note the first three use the same UUID because they are all on the same file system. I know it seems odd to have what appears to be the same file system mounted three times, but that's how subvolumes work.
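
    The UUIDs themselves come from blkid (device names here follow the assumptions above):
    Code:
    sudo blkid -s UUID /dev/nvme0n1p3 /dev/sda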

    If you enter "ls /mnt/rootfs" in a terminal your output would be:
    Code:
    @  @home
    showing you the two subvolumes there.

    Now create a folder under /mnt/media named backups to store your backups in.
    Code:
    sudo mkdir /mnt/media/backups
    BTRFS snapshots and backups are really the same thing. Basically a backup is a snapshot moved to a different file system.

    With the above setup as I have described, you could take snapshots of @ and @home in /mnt/rootfs. When you wanted to make a backup, you would send|receive the snapshot to /mnt/media/backups.

    These two commands take a read-only snapshot of @ and make a backup:
    Code:
    sudo btrfs subvolume snapshot -r /mnt/rootfs/@ /mnt/rootfs/@_snap1
    sudo btrfs send /mnt/rootfs/@_snap1 | sudo btrfs receive /mnt/media/backups
    Note the "-r" (read-only) switch in the snapshot command. Snapshots must be read-only to "send" them. Also note the absence of a target file name for the backup. The received snapshot will always have the same name as the sent snapshot.

    A larger question is how you are going to use your media file system. Again, choices can be made here. You can use it like any regular file system and just make folders like Music, Pictures, etc., which might make sense for a network share if that's how you're using it. My media server is more complex and has several other purposes, so I have my media divided into subvolumes which I back up individually AND have mounted so I can share them via several protocols.

    Leave a comment:


  • jfabernathy
    replied
    Originally posted by oshunluvr View Post
    Well, mounting the root file system at /mnt/backup is even simpler. Assuming the partition is /dev/sda1,
    Code:
    sudo mount /dev/sda1 /mnt/backup
    should do it.
    I guess the confusing part is that I can mount the BTRFS partition of the backup drive on /mnt/backup like a normal drive, or I can create a subvol=@backup and mount that with all the compress, noatime, etc. options on /mnt/backup. Either way the btrfs send/receive works and the results are the same.

    So are there any advantages to one way or the other?

    Leave a comment:


  • oshunluvr
    replied
    Well, mounting the root file system at /mnt/backup is even simpler. Assuming the partition is /dev/sda1,
    Code:
    sudo mount /dev/sda1 /mnt/backup
    should do it.

    Leave a comment:


  • jfabernathy
    replied
    What I did that worked did not involve creating any BTRFS subvolumes; I just mounted the BTRFS-formatted partition on the /mnt/backup directory.
    What I originally thought I should do was mount the BTRFS partition, create a subdirectory, and mount the subvolume under that. Something is missing in that step.

    I'll completely start over and capture what I'm doing and if it fails, I'll post.

    Leave a comment:


  • oshunluvr
    replied
    Originally posted by jfabernathy View Post
    I'm not having any success with mounting the second hard drive's subvolume.
    Care to post your attempts and error messages?

    It shouldn't be any different than mounting any other file system.

    Code:
    sudo mount /dev/sda1 -o subvol=@whatever /mnt/point
    Maybe you're leaving out the device?

    BTW, there aren't any rules pertaining to naming of subvolumes. *buntus default to @ and @home so that it's clear they are more than just regular folders, but that's an adopted convention, not a rule. You can even create subvolumes "in place" or wherever you want.

    For example, let's say you want to have individual subvolumes for your Documents, Pictures, and Music instead of folders. You can make subvolumes somewhere, then mount them at the folder location, OR you can just make the subvolume right in your home folder like so:

    Code:
    cd ~                                    # your home folder, e.g. /home/stuart
    rmdir Documents                         # only works if the folder is empty
    sudo btrfs subvolume create Documents
    sudo chown 1000:1000 Documents

    Now your Documents folder is gone and has been replaced by a subvolume named Documents. Note that I had to "chown" the subvolume to my user to gain easy access to it.

    Since @home is already a subvolume, you now have Documents nested within it. This reads out like this in the subvolume list (from sudo btrfs subvolume list /):

    Code:
    ID 2351 gen 3005916 top level 5 path @home
    ID 3296 gen 3005916 top level 2351 path @home/stuart/Documents
    5 is my root fs, @home is subvolume 2351 in 5, and Documents is 3296 in 2351.

    Cool or confusing?

    Leave a comment:


  • jfabernathy
    replied
    I'm following the Beginner's Guide referenced in the first answer to my original question.

    I want to see if this makes sense to others. So I have Kubuntu 21.10 installed on an SSD using btrfs.

    I mounted /dev/sdc3, which holds my btrfs @ subvol, on the /mnt/subvol directory.
    Then I created the /mnt/subvol/snapshots directory.

    Next I created the first snapshots:
    Code:
    btrfs su snapshot -r /mnt/@ /mnt/subvol/snapshots/@_basic_install
    btrfs su snapshot -r /mnt/@home /mnt/subvol/snapshots/@home_basic_install
    To have a place to store backups I partitioned /dev/sda, created one partition, then formatted it with mkfs.btrfs /dev/sda1.

    Then I simply mounted /dev/sda1 on /mnt/backup. I was surprised; I thought I would create subvolumes first and use the subvolume mount commands, but that didn't work.

    The backups were created with:
    Code:
    btrfs send /mnt/subvol/snapshots/@_basic_install | btrfs receive /mnt/backup
    btrfs send /mnt/subvol/snapshots/@home_basic_install | btrfs receive /mnt/backup
    What I ended up with in /mnt/backup were two directories, @_basic_install and @home_basic_install.

    Inside those directories were complete copies of / and /home from my original install.
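
    To confirm those really are subvolumes rather than plain directories, listing them should show it (assuming the backup drive is still mounted at /mnt/backup):
    Code:
    sudo btrfs subvolume list /mnt/backup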


    Leave a comment:


  • GreyGeek
    replied
    My use case differs from oshunluvr's. At one point I tested all the backup alternatives for BTRFS. Snapper and Timeshift were far and above the best, with Timeshift's GUI taking the lead. It can be set to back up more than system files. However, Timeshift mirrors a /mnt/snapshots configuration under "/run", which it creates and keeps permanently mounted. I don't like that setup for a variety of reasons, but I won't get into that; between the two I'd prefer snapper.

    However, like oshunluvr, I went the "write my own script" route, which I keep under root and run with a sudo command from the CLI. It is customized to my use case.
    My use case? I modified my initial subvolumes, @ and @home, by moving @home's contents into @/home and then commenting out the line in fstab that mounts @home. (One cannot move one subvolume into another, so just the contents of /home/jerry were moved over, creating @/home/jerry/... Notice that @/home/jerry is not the same as @home/jerry.)

    Why, you ask? Because the installation of many packages can put new files, or erase old ones, in BOTH @ and @home. When I was using both I'd create my snapshots so that they would have the same yyyymmddhhmm extension, thus identifying them as a pair. That way, I wouldn't replace @ with an @ snapshot which wasn't paired with an @home snapshot.

    So, I have only one subvolume, @, and I need to make only one snapshot each evening and then use the incremental send command to send that snapshot to my /backup SSD. The snapshot takes a fraction of a second, and with the "btrfs send -p" incremental command, making a copy of @ on /backup usually takes less than a minute, depending on what I've added and removed from my system. If I use the regular send command to send a snapshot to /backup it can take up to 15-20 minutes. HOWEVER, with btrfs one can continue working even while the snapshot is being sent.
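
    The nightly pair of commands looks roughly like this (the dates are examples; it assumes yesterday's snapshot still exists both under /mnt/snapshots and on /backup):
    Code:
    sudo btrfs subvolume snapshot -r /mnt/@ /mnt/snapshots/@202201141900
    sudo btrfs send -p /mnt/snapshots/@202201131900 /mnt/snapshots/@202201141900 | sudo btrfs receive /backup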

    When you open a terminal and mount your primary btrfs partition to /mnt, what you are actually doing is making /mnt the "ROOTFS", or root file system. Everything in btrfs is under the ROOTFS. That's why you see /mnt/@ and /mnt/@home, and other subvolumes you may have created, under /mnt. When you are working in /mnt you are working in the live system. No harm, though. BTRFS is very flexible, and all but a couple of its parameters are tuned automatically, so you don't have a ton of settings to adjust to "tune" your system. That's probably why Facebook runs BTRFS as the file system on its hundreds of thousands of servers, which are assembled out of the cheapest components they can buy.
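
    For example (assuming your btrfs partition is /dev/sda2; subvolid=5 is always the top-level ROOTFS):
    Code:
    sudo mount -o subvolid=5 /dev/sda2 /mnt
    ls /mnt    # shows @, @home if you still have it, and any snapshots made here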

    If I want to recover a file or folder from a previous snapshot I either use Dolphin or MC. I browse down into a previous snapshot, say @202201041531, and copy the file I want over to my account, /home/jerry/somefolder/somefile. Or, I can use the cp or mv command. Much faster than trying to do it with either Timeshift or snapper. I've added and removed files and folders from previous snapshots without harm or foul, but realize that using the "-r" parameter makes the snapshot read-only. If you want to make @yyyymmddhhmm the new @ you have to use "btrfs subvol snapshot /mnt/snapshots/@yyyymmddhhmm /mnt/@" without the -r switch, otherwise your next boot will fail.
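
    Note that /mnt/@ still exists at that point, so it has to be moved out of the way first, otherwise the new snapshot is created inside it. Roughly (a sketch, assuming the pool is mounted at /mnt):
    Code:
    sudo mv /mnt/@ /mnt/@old                                            # set the old @ aside
    sudo btrfs subvolume snapshot /mnt/snapshots/@202201041531 /mnt/@   # R/W snapshot becomes the new @
    Once the rollback has proven itself, delete @old.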

    I've used BTRFS since 2016, and I will NEVER use a distro which doesn't allow me to use BTRFS as the rootfs. I do LOTS of experimentation, and often I need to roll back to recover. Recovery is just a couple of minutes away, which is a LOT faster than trying to manually roll back changes which could number in the hundreds.


    Last edited by GreyGeek; Jan 14, 2022, 03:51 PM.

    Leave a comment:


  • jfabernathy
    replied
    I sort of understand the concept of mounting the btrfs partition at /mnt and then creating normal directories that become places where you can mount new subvolumes after they are created by the btrfs su cr command.

    I'm experimenting with having another hard drive in the system for backups, formatting it as btrfs so I can easily use the btrfs send command to create backups and store them on the other hard drive.

    I'm not having any success with mounting the second hard drive's subvolume. I'm not looking at creating a RAID, just a second drive that contains backups built from snapshots and the send command.

    I know I'm missing a key idea somewhere.

    Leave a comment:


  • oshunluvr
    replied
    RAID1 is probably easier IF you have a spare drive around, don't mind the down time, and know how to fix it.

    BTRFS has so much more flexibility to offer than old-school RAID. If that isn't a plus factor for you, then go with what you know.

    I wasn't trying to talk you out of RAID1, just pointing out with BTRFS you have more options than just 1+1=1

    Leave a comment:
