I'm missing something about your method being easier than RAID 1. I don't completely understand what you are doing with all your drives and incremental backups; studying your daily script may clear it up for me. As to RAID: the last time I got errors on one of my mirror drives, mdadm automatically degraded the mirror. I pulled the drive and put a new one in, ran a few mdadm commands, and the new drive was part of the mirror again. It remained degraded for a few hours while mdadm repaired the mirror. Downtime could be a problem if you didn't have spare drives lying around.
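The "few mdadm commands" for a swap like that are roughly the following (a sketch; /dev/md0 and /dev/sdb1 are stand-in names for the array and the partition on the replacement drive):

```shell
# Mark the failing member faulty and remove it from the mirror
mdadm --manage /dev/md0 --fail /dev/sdb1
mdadm --manage /dev/md0 --remove /dev/sdb1
# After physically swapping in the new drive (and partitioning it
# to match the old one), add it back; the mirror rebuilds in the background
mdadm --manage /dev/md0 --add /dev/sdb1
# Watch rebuild progress
cat /proc/mdstat
```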
Thanks for the explanation. I'll study it closely and see if I can understand your approach. I'll set up some test cases.
How to install Kubuntu 21.10 with BTRFS on root?
This topic is closed.
-
It's actually pretty simple: mkfs.btrfs -L Media -m raid1 -d raid1 /dev/sda /dev/sdb
This says: "Make a BTRFS file system, label it Media, store the metadata and data using RAID1, using the whole drives sda and sdb."
Note these are not partitions, but whole disk file systems - no partition table needed. IMO the only reason to have a partition table and other partitions would be if you wanted these drives bootable or to partition them for some other purpose than media storage.
reference: https://btrfs.wiki.kernel.org/index....ltiple_Devices
However - just my opinion - RAID in general has pitfalls. Repairing a degraded RAID array is neither simple nor quick, especially if you don't have a replacement drive on hand. In fact, it can take days if you have a lot of data.
Since BTRFS has built-in backup capability, I left RAID1 behind and went with automated incremental backups instead. This has some advantages (again, IMO), and there's no performance difference in use.
Using automated backup rather than RAID:
- A failed drive means you only have to change the mount and you're back in business - down only for a few seconds.
- When you get around to replacing the drive, the backup will resume automatically and in the background (no additional downtime).
- Incremental backups happen quickly, in the background.
- You can backup incrementally at any interval you feel safe with. I do it weekly, but no reason you couldn't do it hourly or even more often.
- You can use different sized devices without losing the extra space.
By contrast, with RAID1:
- A failed drive means a very long reboot time, followed by manual intervention to mount the array in degraded mode to remove the dead device.
- The array must then be rebuilt with a replacement drive OR RAID removed from the filesystem before normal use can resume. This can take many hours.
- If your drives are not the same size, the larger drive will only be partially used by RAID1 (you can mitigate this with partitioning).
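The incremental mechanism behind the backup approach is btrfs send/receive with a parent snapshot. A minimal sketch (all paths and subvolume names here are hypothetical):

```shell
# Initial full backup: a read-only snapshot sent to the backup filesystem
btrfs subvolume snapshot -r /mnt/storage/@data /mnt/storage/@data_snap
btrfs send /mnt/storage/@data_snap | btrfs receive /mnt/backup/

# Later, incremental: send only the changes since the previous snapshot
btrfs subvolume snapshot -r /mnt/storage/@data /mnt/storage/@data_snap-new
btrfs send -p /mnt/storage/@data_snap /mnt/storage/@data_snap-new | btrfs receive /mnt/backup/
```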
Initially I had 2x6TB drives and 2x2TB drives (8TB each of storage and backup) configured as JBOD (not RAID). I needed more capacity and the 2x2TB drives were quite old, so I replaced them with a single 10TB drive. The new configuration was to be 10TB (the new drive) of storage and 12TB (2x6TB) of backup. To accomplish the changeover:
- I unmounted the backup file system and physically removed the 2TB backup drive.
- This left a 6TB drive unused and the storage filesystem (one each 6TB +2TB drives) still "live."
- I then inserted the 10TB drive, did "btrfs device add" and added it to "storage" resulting in 18TB storage.
- Finally "btrfs device delete" removed the 6TB drive and 2TB drive from storage leaving 10TB on the new drive alone.
- I then physically pulled the last 2TB drive.
- The final step was to create a new backup filesystem using the 2X6TB drives and mount to resume the backup procedure.
Moving TBs of data around does take time, but since BTRFS does it in the background I simply issued the needed command and came back later to do the next step. All the above took days to complete, but partially because there was no rush to complete it because I could still use the server. I checked back a couple times a day to issue the next command when it was ready for it.
About three years after above, the oldest 6TB drive died, and I have since replaced it with a 16TB drive using the same procedure, leaving 16TB storage and backup.
The point of this is that RAID1 accomplishes a simple backup, but does not ensure 100% reliable access. The above list looks complicated on the surface, but it was actually only 6 command entries in total:
- umount ~~~ | take "backup" off-line
- btrfs device add ~~~ | add the 10TB drive to storage
- btrfs device delete ~~~ | delete the 6 and 2 TB drives from storage
- wipefs ~~~ | erase the file system on the 2 6TB drives
- mkfs.btrfs ~~~ | make a new backup file system
- mount ~~~ | mount the backup
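With hypothetical device names filled in (the sd* letters and mount points are stand-ins, and the profile flags are an assumption based on the JBOD setup described above), those six entries look something like:

```shell
umount /mnt/backup                                  # take "backup" off-line
btrfs device add /dev/sde /mnt/storage              # add the 10TB drive to storage
btrfs device delete /dev/sdc /dev/sdd /mnt/storage  # remove the 6TB and 2TB drives from storage
wipefs -a /dev/sda /dev/sdb                         # erase the old file systems on the 2x6TB drives
mkfs.btrfs -L backup -d single /dev/sda /dev/sdb    # make a new backup file system
mount /mnt/backup                                   # mount the backup (entry declared in fstab)
```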
-
So I have a choice between "roll-your-own", timeshift plus a user-files backup system like deja-dup, and snapper.
To keep the disks from filling up with snapshots, I only want the automatic ones to happen with apt installs and system upgrades. I've watched a video on being able to boot from a live USB of the distro and using the command line to restore a recent snapshot to recover from a bad situation. I tested that and it works, so I want that capability.
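The live-USB rollback tested above boils down to swapping subvolumes at the top level of the filesystem. A sketch of the idea (the device path and snapshot names are stand-ins):

```shell
# From the live session, mount the top level of the installed btrfs
sudo mount /dev/nvme0n1p2 /mnt
# Set the broken root aside and put a writable copy of a snapshot in its place
sudo mv /mnt/@ /mnt/@broken
sudo btrfs subvolume snapshot /mnt/snapshots/@_daily_0 /mnt/@
sudo umount /mnt    # then reboot into the restored root
```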
From reading between the lines, I think I need to add an additional subvolume in my case. I think I want snapshots of @home and @ done once or twice a day. However, the largest amount of data is TV recordings I do with MythTV DVR. Those recordings are many GBs, but get deleted within a few weeks as we watch the TV shows and then delete them. I don't think I want or need snapshots of those. Currently those recordings are stored in the NAS mirror part of my system which is mounted at /mnt/md0. I would need a new BTRFS raid mirror mounted as a subvolume somewhere else.
I'm thinking about having 3 drives. The main drive would be an NVMe M.2 SSD with @ and @home. The other two drives would be 4TB hard drives in a RAID 1 mirror. The mirror is currently used as a NAS for the home network, and that is where I keep the recorded media.
I don't think I need snapshots of the mirror, since the mirror is backed up incrementally to a cloud service. I have lost a drive before but 2 commands and physically replacing the drive was all it took.
I'm not sure how you go about setting up a subvolume in BTRFS that is a RAID mirror. I've done this in my testing with ZFS, and it's trivial there under Ubuntu.
So any advice on this would be appreciated.
-
Originally posted by jfabernathy: What do you use for automatic snapshots? Timeshift, or what?
I use snapper. Its default config would have filled a filesystem quickly in the past, but now it has a space limit; I'd recommend deciding how many daily, weekly, and monthly automatic snapshots you want in light of your backup schedule. Snapper creates what it needs; OpenSUSE installs it by default, and they seem to have tried to make it suitable for end users, with various hand-holding tools. I haven't kept up with those, except that I use snapper-gui a little. I do hourly snapshots (the default is every 2 hours). I can't imagine doing without automatic snapshots now.
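For reference, snapper's retention limits live in its per-config file, e.g. /etc/snapper/configs/root; the numbers below are only an example to tune against your backup schedule:

```
# Enable timeline (periodic) snapshots and set how many to retain
TIMELINE_CREATE="yes"
TIMELINE_LIMIT_HOURLY="24"
TIMELINE_LIMIT_DAILY="7"
TIMELINE_LIMIT_WEEKLY="4"
TIMELINE_LIMIT_MONTHLY="6"
TIMELINE_LIMIT_YEARLY="0"
```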
grub-btrfs is used in Archlinux to make sure that when system updates or installs happen, grub has the before-and-after snapshots available as bootable images, just like old versions of the kernel.
As to what subvolumes to have... I used the Ubuntu default @ and @home for years, but now I split things up depending on the backup strategy I want for that data. I don't see the point of having GBs of browser caches filling up my backups and snapshots, so I direct them to their own subvolumes, and have separate subvolumes for appimages, ISOs, and other big downloads.
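Redirecting a cache to its own subvolume is mostly a one-time directory swap, since a snapshot of a subvolume does not descend into child subvolumes. A sketch (the path is just an example; close any apps using the cache first):

```shell
mv ~/.cache ~/.cache.old          # set the existing cache aside
btrfs subvolume create ~/.cache   # replace it with a subvolume
cp -a ~/.cache.old/. ~/.cache/    # carry the contents over
rm -rf ~/.cache.old
```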
One practice I recommend is renaming @ and @home. One has to adjust /etc/fstab and grub accordingly. With @ and @home renamed, I can do a fresh *buntu install into the same btrfs; I often have half a dozen installs in the one btrfs, all bootable, avoiding a lot of the partition shuffling I used to do before I used btrfs. When a new Kubuntu release is in the offing, I usually test out a fresh install, and keep the old release going for a while after upgrading to the new.
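A sketch of the renaming, assuming the top level of the btrfs is mountable; /dev/sda1 and the new subvolume names are stand-ins:

```shell
# Mount the real (top-level) root of the filesystem
sudo mount -o subvolid=5 /dev/sda1 /mnt
sudo mv /mnt/@ /mnt/@kubuntu2110
sudo mv /mnt/@home /mnt/@kubuntu2110_home
# Point the subvol= entries in /etc/fstab at the new names, e.g.
#   UUID=<UUID> /     btrfs defaults,subvol=@kubuntu2110      0 1
#   UUID=<UUID> /home btrfs defaults,subvol=@kubuntu2110_home 0 2
# then regenerate the grub config before rebooting:
sudo update-grub
```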
-
Originally posted by oshunluvr: I always use these at a minimum:
Code:noatime,space_cache,compress=lzo,autodefrag
-
I always use these at a minimum:
Code:noatime,space_cache,compress=lzo,autodefrag
-
Defaults vary:
defaults
Use the default options: rw, suid, dev, exec, auto, nouser, and async.
Note that the real set of all default mount options depends on kernel and filesystem type. See the beginning of this section for more details.
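A quick way to see what a mount's options actually resolved to (including those implied by `defaults`) is to ask the kernel via findmnt:

```shell
# Print the effective mount options for the root filesystem
findmnt -no OPTIONS /
# typical output for a btrfs root resembles:
# rw,relatime,compress=lzo,space_cache,subvolid=256,subvol=/@
```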
-
Originally posted by jfabernathy: I was surprised to see my /etc/fstab entry for root does not include any options like noatime, space_cache=v2, or compress=lzo. Is it too late to add those to my fstab? I wonder what the defaults are.
Code:UUID=26d5699e-e992-4a60-8d4a-39caa10df08c / btrfs defaults,subvol=@ 0 1
-
I was surprised to see my /etc/fstab entry for root does not include any options like noatime, space_cache=v2, or compress=lzo. Is it too late to add those to my fstab? I wonder what the defaults are.
Code:UUID=26d5699e-e992-4a60-8d4a-39caa10df08c / btrfs defaults,subvol=@ 0 1
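It generally isn't too late: after adding options to the fstab line, most of them can be applied to the live mount without rebooting (a sketch; note that compress=lzo only affects data written after the change):

```shell
# Apply the new options to the running root filesystem
sudo mount -o remount,noatime,compress=lzo /
findmnt -no OPTIONS /   # verify they took effect
```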
-
If you look in your fstab you'll see the mounts include the subvolume name. Something like mine:
Code:
## root
UUID=<UUID> / btrfs noatime,space_cache,compress=lzo,autodefrag,subvol=@ 0 1
## home
UUID=<UUID> /home btrfs noatime,space_cache,compress=lzo,autodefrag,subvol=@home 0 2
To "expose" the subvolumes, you must mount the root file system. The mount is the same, just without a subvol=, like so:
Code:
## root BTRFS filesystem
UUID=<UUID> /mnt/subvol btrfs auto,users,noatime,space_cache,compress=lzo,autodefrag 0 0
@ mounted at /
@home mounted at /home
The root file system is mounted at /mnt/subvol
So now, when I navigate to /mnt/subvol, I see @ and @home there. As I think I posted earlier, my habit is to create a snapshots folder in /mnt/subvol so I can keep the snapshots neatly in their own folder.
I know it seems odd to have a mount in a mount that way, but that's how it's done.
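Without an fstab entry, the same top-level mount can be made by hand; subvolid=5 always names the true root of a btrfs filesystem (the device path here is a stand-in):

```shell
sudo mkdir -p /mnt/subvol
sudo mount -o subvolid=5 /dev/sda1 /mnt/subvol
ls /mnt/subvol    # shows @, @home, and any snapshot folders
```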
-
Originally posted by GreyGeek: The installation "how to" is in the first part of the top post in the BTRFS subforum.
https://www.kubuntuforums.net/forum/...guide-to-btrfs
Choose the manual partition method when the installation gets to the point of selecting the storage medium.
In the manual mode create the necessary partitions, depending on if you use mbr or gpt.
Create at least one partition (e.g., sda1) and select btrfs as its file system. For the mount point select "/". Proceed with the installation.
The results will be two subvolumes: @ and @home, which will be mounted to "/" and "/home" respectively, as shown in the /etc/fstab file.
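Once booted, the resulting layout can be confirmed like this (the output shown is only illustrative):

```shell
sudo btrfs subvolume list /
# ID 256 gen 1234 top level 5 path @
# ID 257 gen 1234 top level 5 path @home
grep btrfs /etc/fstab
```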
I've got a lot more reading to do to understand this on Kubuntu.
Thanks,
-
Originally posted by jfabernathy: What do you use for automatic snapshots? Timeshift, or what?
Code:
#!/bin/bash
#
# IMPORTANT:
# This script requires the root btrfs file system be mounted somewhere before running.
# Alternately, have the mount declared in fstab and the script will mount it.
#
exec 1> >(logger -s -t $(basename $0)) 2>&1   # Log the script activity

## Set the variables. Edit this section as needed
# declare an array variable of all target subvolumes
declare -a SUBVLIST=("@KDEneon" "@KDEneon_home")
# other variables
NOTIFYUSER="stuart"                   # the username to get messages
NOTIFYDISPLAY=":0.0"                  # the above user's X session
source_folder="/subvol/"              # Root file system mount location
backup_folder='/mnt/root_backup/'     # Backup file system mount location
snapshot_folder="/subvol/snapshots/"  # Snapshots folder
addname="_daily_"                     # This is added to the subvolume name of the daily snapshots
lastsnapnum=6                         # Last snapshot number to be saved before moving to backup status

# Verify the root BTRFS file system is mounted and mount it if not, or fail
if [ -d "$source_folder" ]; then
    if ! mountpoint -q "$source_folder"; then
        mount "$source_folder"
        if ! mountpoint -q "$source_folder"; then
            su $NOTIFYUSER -c "export DISPLAY=${NOTIFYDISPLAY}; /usr/bin/notify-send -i ksnapshot -t 0 'Daily snapshots:' 'Daily snapshot operation failed - no mountable subvolume folder.'"
            exit 1
        fi
    fi
else
    su $NOTIFYUSER -c "export DISPLAY=${NOTIFYDISPLAY}; /usr/bin/notify-send -i ksnapshot -t 0 'Daily snapshots:' 'Snapshot operation failed - subvolume folder incorrect.'"
    exit 1
fi

## Begin snapshot process
# loop through the list of subvolumes
for SUBV in "${SUBVLIST[@]}"; do
    SUBVPATH="$source_folder""$SUBV"
    SUBVSNAP="$snapshot_folder""$SUBV"
    # Move last daily snapshot to backup status
    if [[ -d "$SUBVSNAP""$addname""$lastsnapnum" ]]; then
        btrfs su de -c "$SUBVSNAP""_backup-new"
        mv "$SUBVSNAP""$addname""$lastsnapnum" "$SUBVSNAP""_backup-new"
        btrfs pr set -ts "$SUBVSNAP""_backup-new" ro true
    fi
    # Roll the current snapshot names by adding 1 to each trailing number
    for ((i="$lastsnapnum-1";i>=0;i--)); do
        if [[ -d "$SUBVSNAP""$addname"$i ]]; then
            mv "$SUBVSNAP""$addname"$i "$SUBVSNAP""$addname"$(($i+1))
        fi
    done
    # Take new read-only snapshot
    btrfs su sn "$SUBVPATH" "$SUBVSNAP""$addname""0"
    touch "$SUBVSNAP""$addname""0"
done

# Notify the user that the job is done
su $NOTIFYUSER -c "export DISPLAY=${NOTIFYDISPLAY}; /usr/bin/notify-send -i ksnapshot -t 0 'Daily snapshots:' 'Snapshot operation complete.'"

# If it's Sunday, do a backup
if [[ $(date +%u) -ne 7 ]] ; then
    # Script complete
    exit 0
fi

# Verify the backup file system is mounted and mount it if not, or fail
if [ -d "$backup_folder" ]; then
    if ! mountpoint -q "$backup_folder"; then
        mount "$backup_folder"
        if ! mountpoint -q "$backup_folder"; then
            su $NOTIFYUSER -c "export DISPLAY=${NOTIFYDISPLAY}; /usr/bin/notify-send -i ksnapshot -t 0 'Weekly backup:' 'Backup operation failed - no mountable subvolume folder.'"
            exit 1
        fi
    fi
else
    su $NOTIFYUSER -c "export DISPLAY=${NOTIFYDISPLAY}; /usr/bin/notify-send -i ksnapshot -t 0 'Weekly backup:' 'Backup operation failed - subvolume folder incorrect.'"
    exit 1
fi

## Begin backup process
# loop through the list of subvolumes
for SUBV in "${SUBVLIST[@]}"; do
    SUBVSNAPRO="$snapshot_folder""$SUBV""_backup"
    SUBVBACK="$backup_folder""$SUBV""_backup"
    # Send incremental backup
    btrfs send -p "$SUBVSNAPRO" "$SUBVSNAPRO""-new" | btrfs receive "$backup_folder"
    # Remove old snapshot and backup
    btrfs su de -c "$SUBVSNAPRO"
    btrfs su de -c "$SUBVBACK"
    # Move new backups to old backup status
    # if [[ -d "$SUBVBACK" ]]; then
    mv "$SUBVBACK""-new" "$SUBVBACK"
    # fi
    # if [[ -d "$SUBVSNAPRO" ]]; then
    mv "$SUBVSNAPRO""-new" "$SUBVSNAPRO"
    # fi
done

# Notify the user that the job is done
su $NOTIFYUSER -c "export DISPLAY=${NOTIFYDISPLAY}; /usr/bin/notify-send -i ksnapshot -t 0 'Weekly backup:' 'Backup operation complete.'"

# Script complete
exit 0
It's saved my bacon more than once!
-
Originally posted by Snowhog: And, at least with *buntu's, /var et al is "old school" and isn't necessary anymore. At most you need two partitions: root ( / ) and home ( /home ). Maybe swap.
1. "/home" is fairly stable with minimal changes, but it needs to be backed up regularly.
2. "/" needs snapshotting before and after major package updates.
3. MySQL databases in /var/mysql get hammered for both reads and writes, and also need to be backed up daily. Currently that is done to the NAS.
4. I need to store media recorded from the OTA TV tuners. It can go in any directory and does not need to be backed up or snapshotted.
Currently I have all of the above solved the old-school way. I have Ubuntu 20.04.3 booting from a SATA SSD with an EXT4 file system, plus mirrored hard drives using mdadm RAID-1. That is where all my media goes, as well as a directory I use as a NAS for samba/cifs access from everywhere on the local LAN. All the other PCs store their critical files on the NAS.
-
What do you use for automatic snapshots? Timeshift, or what?