install Kubuntu on BTRFS RAID1/mirror array

This topic is closed.

    Hello all! First post!

    I would like to install Kubuntu 19.10 on a mirror/RAID1 array using BTRFS, however my google-fu is failing me and I can't find a guide on how to do so.

    Could someone please help? The closest I've come is a guide on how to do it in Ubuntu Server 18.04: https://work-work.work/blog/2018/12/...804-btrfs.html
    but I quickly get stuck, particularly at the part where the guide says not to reboot, but instead to hit alt+F2 to continue in CLI - this doesn't work during the Kubuntu install.

    thanks heaps!

    PS: I'm only a year in to using Linux, so I still consider myself a n00b. Was a full-time windooze user prior!

    #2
    This is a link to the BTRFS subforum: https://www.kubuntuforums.net/forumd....php/282-BTRFS

    What I would do is install Kubuntu on a single drive as if it were going to be the only drive. Do not install on the raw disk. Create a partition containing the entire drive first.

    During the install, when you get to the HD format section, choose Manual, select the drive PARTITION to install on, select BTRFS as the fs, and select "/" as the mount point. Do not get fancy and attempt to create /home or some other mount point; that will be handled in another way later. During the install both @ and @home will be created, and so will /etc/fstab, in which @ is bound to / and @home is bound to /home. A swap file will be created, but BTRFS does not use a swap file, although some applications may.
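
    For reference, the /etc/fstab the installer writes ends up looking roughly like this (the UUIDs below are placeholders for your drive's actual UUID; this is a sketch, not a file to copy verbatim):

    ```
    UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /      btrfs  defaults,subvol=@      0  1
    UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /home  btrfs  defaults,subvol=@home  0  2
    ```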

    Then, after you have finished the install and booted into your new system, you can open a Konsole and use BTRFS commands (consult the BTRFS forum) to add the additional drives to your first drive and create a RAID1. That instruction was written by oshunluver, the reigning BTRFS fu master on this forum. You should read that entire post before you make any moves. See his post here:
    https://www.kubuntuforums.net/showth...dd-a-new-drive!
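
    As a very rough sketch of what that conversion looks like (the device name /dev/sdb1 is illustrative; read oshunluver's post before running anything), adding a second drive and converting data and metadata to RAID1 from a root shell is:

    ```
    # btrfs device add /dev/sdb1 /
    # btrfs balance start -dconvert=raid1 -mconvert=raid1 /
    # btrfs filesystem usage /    (should now report Data,RAID1 and Metadata,RAID1)
    ```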

    But, understand this: once you install Kubuntu on an HD with BTRFS as the <ROOT_FS>, then any and all actions, except a couple, can be done on the system while you are using it. That includes adding or removing drives, switching to or from RAID1, resizing your <ROOT_FS> up or down for whatever reason, balancing your BTRFS system, etc.

    Also understand that creating snapshots of your @ and @home subvolumes can be done manually or with an app like TimeShift, snapper, etc. Personally, I don't recommend any of them, regardless of how easy they make it to create snapshots and roll back to a previous snapshot, because I find it just as easy, with more control, to open a Konsole and "sudo -i", from which I mount my UUID partition to /mnt and work from there. All this is explained in that first link above. If you do decide to use TimeShift, remember that if you decide to delete it you MUST delete all the snapshots you've created with it BEFORE you delete it, or it WILL corrupt your BTRFS fs. All of this is explained in the first post under the BTRFS subforum.

    I've been using BTRFS exclusively, and I've used it in all manners, from a singleton (which I returned to a couple of years ago) to a multi-disk RAID1 and back to RAID0. It has never hiccuped once on me. Most of the time I make a snapshot of @ and @home (@YYYYMMDD and @homeYYYYMMDD) before each change (updates, adding apps, changing hardware configs, etc.), and if things work out I delete the oldest snapshot pair in my /mnt/snapshots subdirectory and continue on. If things don't work out (my nvidia install gets broken, the printer stops working, an app won't install properly, etc.), instead of putzing for half a day trying to undo changes one at a time manually, I merely open a console, sudo -i, mount my UUID to /mnt, and roll back to my most recent saved snapshots, as described in the first post of the BTRFS subforum.

    I have two SSDs and a spinner in my laptop, and I now use incremental backups to back up snapshots to the second SSD and the spinner. The first incremental backup can take 30 minutes +- 5, but subsequent incremental backups take 4 min +- 1 minute. BTRFS send & receive pipes the whole system over to an external destination and can take 30 minutes or so. But for 95% of your snapshots to /mnt/snapshots it takes less than 3 minutes, including opening the console, mounting the UUID to /mnt, using history to recover the most recent snapshot command, modifying its date and executing it, using sync twice, umounting /mnt and closing the Konsole. With TimeShift you click a button, if it is not on a time schedule, but never forget to delete all TimeShift-made snapshots before you uninstall that app.
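
    The dated naming scheme is easy to script. A minimal sketch (the /mnt paths match what I describe above; the actual snapshot commands need root, so they are left commented):

    ```shell
    #!/bin/sh
    # Build today's snapshot pair names in the @YYYYMMDD / @homeYYYYMMDD scheme.
    TODAY=$(date +%Y%m%d)
    ROOT_SNAP="@${TODAY}"
    HOME_SNAP="@home${TODAY}"
    echo "Would create: /mnt/snapshots/${ROOT_SNAP} and /mnt/snapshots/${HOME_SNAP}"
    # As root, with the BTRFS top level mounted at /mnt:
    #   btrfs su snapshot -r /mnt/@     "/mnt/snapshots/${ROOT_SNAP}"
    #   btrfs su snapshot -r /mnt/@home "/mnt/snapshots/${HOME_SNAP}"
    ```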

    One last word about snapshots. When first created, snapshots take up almost ZERO bytes of disk space. As your system changes, copies of how things were before the changes are written to any and all snapshots (@date and @homedate). Eventually, an old snapshot can fill up. For example: you have an update that updates 245 files. All of those files, as they existed before the update, will be written to the appropriate snapshots. As snapshots fill up, they increase the amount of space they take up. If you have a dozen snapshots (6 of @ and 6 of @home), the oldest pair can get about as big as your entire system. If your drive is 500GB and your @ and @home together take up 130GB, then only two or three snapshot pairs would consume all of your HD space! Your system would complain about running out of room. My system looks like this:
    Code:
    # mount /dev/disk/by-uuid/ce2b5741-c01e-4b3d-b6ba-401ad7f7fcdf /mnt
    vdir /mnt
    drwxr-xr-x 1 root root 200 Jan  9 23:21 @
    drwxr-xr-x 1 root root  30 Jan  5 12:15 @home
    drwxr-xr-x 1 root root 220 Mar  9 12:12 snapshots
    
    root@Aspire-V3-771:~# vdir /mnt/snapshots
    total 0
    drwxr-xr-x 1 root root 200 Jan  9 23:21 @20200222
    drwxr-xr-x 1 root root 200 Jan  9 23:21 @20200227
    drwxr-xr-x 1 root root 200 Jan  9 23:21 @20200302
    drwxr-xr-x 1 root root 200 Jan  9 23:21 @20200307
    drwxr-xr-x 1 root root 200 Jan  9 23:21 @20200309
    drwxr-xr-x 1 root root  30 Jan  5 12:15 @home20200222
    drwxr-xr-x 1 root root  30 Jan  5 12:15 @home20200227
    drwxr-xr-x 1 root root  30 Jan  5 12:15 @home20200302
    drwxr-xr-x 1 root root  30 Jan  5 12:15 @home20200307
    drwxr-xr-x 1 root root  30 Jan  5 12:15 @home20200309
    My usage command shows:
    Code:
    root@Aspire-V3-771:~# btrfs fi usage /mnt
    Overall:
        Device size:                 465.76GiB
        Device allocated:            115.04GiB
        Device unallocated:          350.72GiB
        Device missing:                  0.00B
        Used:                        109.24GiB
        Free (estimated):            354.93GiB      (min: 354.93GiB)
        Data ratio:                       1.00
        Metadata ratio:                   1.00
        Global reserve:              198.41MiB      (used: 0.00B)
    
    Data,single: Size:112.01GiB, Used:107.80GiB (96.24%)
       /dev/sda1     112.01GiB
    
    Metadata,single: Size:3.00GiB, Used:1.44GiB (48.09%)
       /dev/sda1       3.00GiB
    
    System,single: Size:32.00MiB, Used:16.00KiB (0.05%)
       /dev/sda1      32.00MiB
    
    Unallocated:
       /dev/sda1     350.72GiB
    I'm using 96.24% of the space allocated for single data (because I'm set up as a singleton, not RAID0 or RAID1).
    So, I should delete a couple of my old snapshot pairs. They are backed up on the other storage devices.
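
    Deleting an old pair is two commands as root (the dates here are illustrative, taken from the listing above):

    ```
    # btrfs subvol delete -c /mnt/snapshots/@20200222
    # btrfs subvol delete -c /mnt/snapshots/@home20200222
    ```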

    BTRFS seems overwhelming at first, but it's not. I have a poooorrr memory, so I keep all the commands I use the most on a sheet taped to the wall next to my chair. There are seven sections on it that jog my memory.

    Code:
    Making a backup snapshot:

    Open a Konsole
    sudo -i

    mount /dev/disk/by-uuid/xxxxxxxxxxxxxxx /backup/  (If remote storage is needed)
    mount /dev/disk/by-uuid/xxxxxxxxxxxxxxy /mnt

    btrfs su snapshot -r /mnt/@ /mnt/snapshots/@YYYYMMDD
    btrfs su snapshot -r /mnt/@home /mnt/snapshots/@homeYYYYMMDD
    If backups are all that is wanted jump to the "Exit" steps

    Sending snapshots to a remote storage device
    btrfs send /mnt/snapshots/@YYYYMMDD | btrfs receive /backup/
    btrfs send /mnt/snapshots/@homeYYYYMMDD | btrfs receive /backup/
    sync

    Making an incremental backup to a remote storage device
    btrfs send -p /mnt/snapshots/@_apreviousdate /mnt/snapshots/@_todaysdate | btrfs receive /backup
    sync
    btrfs send -p /mnt/snapshots/@home_apreviousdate /mnt/snapshots/@home_todaysdate | btrfs receive /backup
    sync

    Unmount backup and mnt
    umount /backup  (If backup was mounted)
    umount /mnt

    Rolling back to a specific snapshot:
    Open a Konsole and issue
    sudo -i

    mount /dev/disk/by-uuid/xxxxxxxxxxxxxxxxxx /mnt

    mv /mnt/@ /mnt/@old  (the system will continue to use @ as @old)
    sync
    mv /mnt/@home /mnt/@homeold  (the system will continue to use @home as @homeold)
    sync

    btrfs subvol snapshot /mnt/snapshots/@homeYYYYMMDD /mnt/@home  (notice the lack of the "-r" parameter; this allows rw)
    sync
    btrfs subvol snapshot /mnt/snapshots/@YYYYMMDD /mnt/@
    sync

    umount /backup  (If backup was mounted)
    umount /mnt

    Exit root and Konsole
    exit, exit

    Reboot the computer (If rolling back)
    If reboot is successful then mount /mnt and:
    btrfs subvol delete -c /mnt/@homeold
    btrfs subvol delete -c /mnt/@old
    or, mv them to /mnt/snapshots with new names
    Last edited by GreyGeek; Mar 20, 2020, 08:34 PM.
    "A nation that is afraid to let its people judge the truth and falsehood in an open market is a nation that is afraid of its people.”
    – John F. Kennedy, February 26, 1962.



      #3
      Hello GreyGeek! Thank you soo much for your detailed reply, highly appreciated!

      Tell me, does the method you shared have the same 'features/functionality' as the one in the guide I posted? In that guide, the person went about the set-up in such a way that if a drive went down, the system would still be completely functional, which is something that highly interests me.



        #4
        just read thru a bit more in depth of the first link you shared.. point 7 makes me think that it does not have the same functionality aforementioned



          #5
          Originally posted by ae00711 View Post
          just read thru a bit more in depth of the first link you shared.. point 7 makes me think that it does not have the same functionality aforementioned
          What you want is a RAID10 which requires at least FOUR drives. The following link will help:
          https://btrfs.wiki.kernel.org/index....ltiple_Devices
          Forget the info in the link in your first post. There is no need to create a FAT32 partition on which to place /boot.

          On your first device create two partitions, sda1 and sda2. Make sda1 no bigger than 15-20GB and install Kubuntu on it as you normally would. Then boot into that installation and create a four-disk RAID10 using
          mkfs.btrfs -m raid10 -d raid10 /dev/sda2 /dev/sdb1 /dev/sdc1 /dev/sdd1

          Mount it by mounting any of the four partitions to some directory, say /kserver. That mounts them all.
          The link I gave discusses how to replace a failed drive.
          You can work on the RAID10 (add a drive, remove a drive, replace a failed drive, make snapshots, balance, check, scrub, etc.) without taking /kserver out of service. Just log into your Kubuntu installation on sda1 and go to work. If sda1 fails, then you won't be able to boot into Kubuntu and mount the RAID10. To avoid that, use FIVE devices: install Kubuntu on sda1, storing the boot on sda, and make kserver using sdb1 through sde1 (using their UUIDs). /etc/fstab on sda1 can hold the kserver mount command.
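
          A sketch of that fstab entry (the UUID is a placeholder; any member partition's UUID works, since mounting one member of a multi-device BTRFS mounts the whole array):

          ```
          UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /kserver  btrfs  defaults  0  0
          ```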

          BUT, if you are actually wanting to create a server only, and that server box has at least 32GB of RAM, an i7 CPU, and at least 4 storage devices, you may want to consider using ZFS, which can also be installed during installation as the <ROOT_FS>. I know essentially nothing about ZFS and won't be able to help.

          A caveat: I use

          mount /dev/disk/by-uuid/ce2b5741-c01e-4b3d-b6ba-401ad7f7fcdf /mnt

          to mount my system to do maintenance using the UUID designation for devices. That is what the system puts into /etc/fstab as well. Because the /dev/sdX designation of a device can change without notice, especially if one swaps out a device, it is NOT wise to mount using /dev/sdX designations. You can use blkid to find the UUID path of a device.
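
          For example, to grab a partition's UUID with blkid and mount by it in one go as root (the device node /dev/sda1 is illustrative):

          ```
          # UUID=$(blkid -s UUID -o value /dev/sda1)
          # mount /dev/disk/by-uuid/$UUID /mnt
          ```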
          Code:
          # blkid
          /dev/sda1: UUID="ce2b5741-c01e-4b3d-b6ba-401ad7f7fcdf" UUID_SUB="e4e0902f-6a80-47cd-a53a-571632f78cc5" TYPE="btrfs" PARTUUID="e00dfb49-01"
          /dev/sdc1: UUID="e84e2cdf-d635-41c5-9f6f-1d0235322f48" UUID_SUB="c78731d5-d423-4546-9335-f9751c148174" TYPE="btrfs" PARTUUID="dc864468-01"
          /dev/sdb1: LABEL="sdb1" UUID="17f4fe91-5cbc-46f6-9577-10aa173ac5f6" UUID_SUB="4d5f96d5-c6c6-4183-814b-88118160b615" TYPE="btrfs" PARTUUID="5fa5762c-9d66-4fdf-ba8f-5c699763e636"
          /dev/sr0: UUID="4bcf88e10000000f" LABEL="CDROM" TYPE="udf"
          /dev/loop0: TYPE="squashfs"
          /dev/loop1: TYPE="squashfs"
          /dev/loop2: TYPE="squashfs"
          /dev/loop3: TYPE="squashfs"
          /dev/loop4: TYPE="squashfs"
          /dev/loop5: TYPE="squashfs"
          /dev/loop6: TYPE="squashfs"
          /dev/loop7: TYPE="squashfs"
          /dev/loop8: TYPE="squashfs"
          /dev/loop9: TYPE="squashfs"
          /dev/loop10: TYPE="squashfs"

          Ignore the loop devices; they are created by snapd.

          I generally use the history command and copy & paste my last mount command so I don't have to use blkid or my poooor memory each time I do maintenance.
          Last edited by Snowhog; Mar 21, 2020, 09:13 AM.



            #6
            mmm, no, I don't see how a 4-drive RAID10 would be suitable, especially given that setting up a 2-drive RAID1 with 'fail over protection' is clearly possible. :/



              #7
              I also tried the method you linked, and came undone at the RAID array creation step - it returned with the error that the drive was busy, as in it is mounted (obviously, as it's the OS drive). Not sure if I did something wrong :/

