Migrating From EXT4 to BTRFS?


    #16
    This morning when I booted up I found that my wifi wouldn't connect. It spun its wheels but no joy. iwconfig listed the wifi interface, and a scan (iwlist) listed all the access points visible to it (about 20-30), including my two: GreyGeek and GreyGeek5, so the interface was up and running. But neither would connect. I re-entered my password several times and it didn't make a difference. My system is set up to run IPv6 as the default, with a one-second fallback to IPv4. I had forgotten how SLOW IPv4 is compared to IPv6.

    The problem? It turns out that yesterday afternoon's update reset the wifi configuration to store the password encrypted. I have KWallet turned off, so the password never reached NetworkManager. I switched back to unencrypted, which is what it was before, because I am the only one on this system and my firewalls are up. My 5GHz GreyGeek5 connection connected immediately.

    I rolled back from 20180522 to 20180525 and rebooted.
    All is well in the garden!
    "A nation that is afraid to let its people judge the truth and falsehood in an open market is a nation that is afraid of its people.”
    – John F. Kennedy, February 26, 1962.



      #17
      That has already been repaired, according to J. Riddell. Update again and you should be back on track.
      If you think Education is expensive, try ignorance.

      The difference between genius and stupidity is genius has limits.



        #18
        Hmm, I just noticed that Timeshift apparently supports BTRFS operations.



          #19
          Originally posted by PhysicistSarah View Post
          Hmm, I just noticed that Timeshift apparently supports BTRFS operations.
          It does, and I tried it out. I would NOT recommend it.

          When the Kubuntu installation is complete, using Btrfs as the root file system, it creates two subvolumes: @ and @home. They are bound to / and /home in fstab. From within the system you cannot access @ and @home directly unless you open a root konsole and mount them manually. Here's how:
          sudo -i
          mount /dev/disk/by-uuid/uuidofinstallationdisk /mnt

          When you list the directory contents of /mnt you'll get:
          vdir /mnt
          /mnt/@
          /mnt/@home

          "/mnt/" becomes the "ROOT_FS" for the installation and @ & @home are its top level members. To make this clear some folks will create the directory /root_fs and use it.

          It is here that I create a snapshots subdirectory:
          mkdir /mnt/snapshots

          vdir /mnt
          /mnt/@
          /mnt/@home
          /mnt/snapshots

          When I make a snapshot I use
          btrfs su snapshot -r /mnt/@ /mnt/snapshots/@YYYYMMDD
          btrfs su snapshot -r /mnt/@home /mnt/snapshots/@homeYYYYMMDD

          vdir /mnt/snapshots
          @YYYYMMDD
          @homeYYYYMMDD


          Timeshift does things differently. It is even different from snapper, which is in the repository. More about snapper later.

          You tell Timeshift which drive your Btrfs installation is on. It lists the possibilities as sda, sdb, etc. That's not good, because referring to a drive by /dev/sdX can cause problems unrelated to Btrfs but made worse by it: when adding and/or removing storage devices, sudden shifts in device naming can take place. For me, my RAID1 setup was composed of sda1 and sdb1. When I plugged in a third HD it became sdb1 and the old sdb1 became sdc1. The RAID1 continued to work fine, but I got lots of warnings about two drives having identical IDs.

          While lots of comments and "instructions" on the web refer to devices as /dev/sdXn, the developers recommend using /dev/disk/by-uuid/uuidofdriveorpartition. Here's mine for my /dev/sda1 partition:

          mount /dev/disk/by-uuid/47a4794f-b168-4c61-9d0c-cc22f4880116 /mnt

          UUIDs can be assigned to bash variables and passed to the command as, for example, $uuidsda1.
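          For example (a minimal sketch; the variable name uuidsda1 follows the text above, and blkid is just one way to fetch the value):
          Code:
          uuidsda1=$(blkid -s UUID -o value /dev/sda1)   # grab the partition's UUID
          mount /dev/disk/by-uuid/$uuidsda1 /mnt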

          The second, and most serious, problem is that Timeshift links a copy of @ and @home from the ROOT_FS into a directory under /, and drags @, @home, and snapshots along with it. So, if you mount /dev/disk/.... to /mnt you will see
          /mnt/@
          /mnt/@home
          /mnt/snapshots/... etc...
          /mnt/timeshift/...
          but if you browse down into the timeshift subdirectory you'll find copies of @, @home, snapshots, etc.
          Backups are kept under a folder named timeshift in the root of the backup device, and that path cannot be changed. The developers also recommend that after restoring from a Timeshift snapshot the user should re-install grub. That's utter nonsense: if you roll back @ you get whatever grub is in that snapshot, and since you will be rolling back @home as well you'll have a matched set. One can always create a subvolume, say /mnt/@data or /mnt/@project or whatever, bind it to /data or /home/user/data in fstab, and snapshot that subvolume independently of @ or @home.
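          A minimal sketch of that, reusing the UUID from above (yours will differ):
          Code:
          sudo -i
          mount /dev/disk/by-uuid/47a4794f-b168-4c61-9d0c-cc22f4880116 /mnt
          btrfs subvolume create /mnt/@data
          umount /mnt
          mkdir /data
          # add to /etc/fstab (same UUID, mounted by subvolume name):
          # UUID=47a4794f-b168-4c61-9d0c-cc22f4880116  /data  btrfs  defaults,subvol=@data  0  0
          mount /data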

          If you delete @ from Timeshift's subdirectory it blows /mnt/@ away too, and you are trapped in a konsole without access to any bin or sbin commands. That will require you to boot a LiveUSB with a persistent Kubuntu 16.04 or 18.04 using Btrfs as the root file system, so you can snapshot an @YYYYMMDD back to @ and reboot. So, before you uninstall Timeshift you MUST use it to delete ALL the snapshots you've made with it, so it can reverse its changes to the directory structure.

          To make matters worse, if you DON'T add /home (@home) to your snapshot list, then you'll need to add the specific config files and data in the hidden directories of your home account, plus anything else you want to back up. Since you are picking and choosing, the only thing Timeshift can do is use cp & rsync to copy them to the Timeshift directory. This changes the snapshot time from nearly instantaneous to however long it takes to copy the stuff over from your home directory.

          Snapper (in the repo) creates snapshots under / and in /home, but does not drag subvolumes along in the ROOT_FS like Timeshift does. Personally, I don't like snapshots that are stored under /. But Snapper can also be used on the CLI. Its default config makes a LOT of snapshots: hourly, daily, monthly, yearly. I counted over 50, and it was eating my disk space alive. I modified it to make only SINGLETONS, with no timeline. I wrote a script (it's on this forum) that I use before any change to create a "PRE" singleton, and after the changes a "POST" singleton.
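          The bare-bones pattern looks roughly like this on the CLI (a sketch; "root" is Snapper's default config name, and setting TIMELINE_CREATE="no" in /etc/snapper/configs/root is what turns off the timeline):
          Code:
          # "PRE" singleton before a change, remembering its number:
          PRE=$(snapper -c root create --type pre --print-number --description "before update")
          # ... make your changes ...
          # matching "POST" singleton after:
          snapper -c root create --type post --pre-number "$PRE" --description "after update"
          snapper -c root list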
          Using snapper commands I could "roll back" from a post to a pre. But it was just as easy to open a Konsole and do what I wrote above.
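          That manual rollback amounts to (a sketch; @YYYYMMDD is whichever snapshot you are restoring):
          Code:
          sudo -i
          mount /dev/disk/by-uuid/47a4794f-b168-4c61-9d0c-cc22f4880116 /mnt
          mv /mnt/@ /mnt/@old                                        # set the broken subvolume aside
          btrfs subvolume snapshot /mnt/snapshots/@YYYYMMDD /mnt/@   # rw copy of the ro snapshot
          # repeat for @home, then reboot; delete @old once satisfied:
          btrfs subvolume delete /mnt/@old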

          Btrbk is another Btrfs backup tool, but I haven't run it. Here is an interesting article about it:
          https://news.ycombinator.com/item?id=14722245
          Last edited by GreyGeek; May 26, 2018, 09:32 PM.
          "A nation that is afraid to let its people judge the truth and falsehood in an open market is a nation that is afraid of its people.”
          – John F. Kennedy, February 26, 1962.



            #20
            I forgot to add one IMPORTANT point about having snapshots under root "/".

            Boot is also under root. If, as happened to me while messing around with oshunluvr's single-Btrfs-volume-for-multiple-distros setup, you mess up /boot and can't even reach grub rescue, you're hosed. Why? Because your snapshots are under /.snapshots and /home/.snapshots (for Snapper), or <ROOT_FS>/timeshift for Timeshift. With Snapper and Timeshift you *might* be able to boot a LiveUSB made with Btrfs as the root file system (even if not persistent), and then use it to do the mv'ing and snapshotting to move an @YYYYMMDD to @ and an @homeYYYYMMDD to @home, but I wouldn't guarantee it. With my Btrfs snapshot layout I would.

            I've been messing around with ZFS yesterday and today and confirmed what I've heard and read about snapshots. A ZFS zpool can, theoretically, contain 2^64 snapshots. One person theorized that he could create a snapshot on every file change, thus treating ZFS like a version-control app. A more experienced user pointed out two problems:
            1) A ZFS pool with several thousand (25-50 was mentioned) snapshots can take one or more hours to boot.
            2) If you take snapshots daily and you have to roll back to Tuesday's snapshot, Wednesday's and Thursday's snapshots are destroyed.
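            Point 2 comes from how zfs rollback works; a sketch (the pool/dataset names are hypothetical):
            Code:
            zfs snapshot rpool/home@tuesday      # ... then @wednesday, @thursday, etc.
            # rolling back past newer snapshots requires -r, which DESTROYS them:
            zfs rollback -r rpool/home@tuesday   # @wednesday and @thursday are gone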


            Btrfs can address a total of 2^64 subvolumes and snapshots, and the file-size limit in Btrfs is 2^64 bytes. However, there are practical limits on how many snapshots, in particular, you want hanging around. I've never had more than about a dozen snapshots in my <ROOT_FS> at one time, IIRC. Some things to consider with Btrfs snapshots: defragmenting a Btrfs pool can lead to large growth of snapshots, because files are read and written back to the drive and no longer share extents with their snapshot copies. Also, deleting a snapshot out of the middle of a series can lead to large growth of the older snapshots, as files from newer snapshots are written into older ones. Deleting snapshots starting with the oldest will reduce snapshot growth.
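            For example, using the dated names shown in the listing below, the oldest pair would go first:
            Code:
            btrfs subvolume delete /mnt/snapshots/@_20180507
            btrfs subvolume delete /mnt/snapshots/@home_20180507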

            While comparing the ZFS commands for sizing datasets and snapshots with sizing snapshots and subvolumes in Btrfs, I found a nice script that computes the size of Btrfs subvolumes and snapshots. Here is what it gave for me:
            Code:
            root@jerry-Aspire-V3-771:/# /home/jerry/btrfs-du /
            Subvolume                                                         Total  Exclusive  ID        
            ─────────────────────────────────────────────────────────────────────────────────────────
            snapshots/@_20180507                                            6.44GiB    1.25GiB  366       
            snapshots/@home_20180507                                       92.32GiB  406.60MiB  367       
            snapshots/@home_20180512                                       92.49GiB  421.71MiB  374       
            snapshots/@_20180512                                            7.70GiB    1.02GiB  375       
            snapshots/@_20180522                                            7.53GiB  614.79MiB  386       
            snapshots/@home_20180522                                       92.63GiB  460.28MiB  387       
            snapshots/@_20180525                                            7.96GiB  353.86MiB  397       
            snapshots/@home_20180525                                       94.92GiB  314.04MiB  398       
            @                                                               7.52GiB  374.48MiB  401       
            @home                                                          94.94GiB  352.03MiB  402       
            ─────────────────────────────────────────────────────────────────────────────────────────
            Total exclusive data                                                            4.13GiB
            THe "ID" listed above is from this command:
            Code:
            root@jerry-Aspire-V3-771:/# btrfs qgroup show -r .
            qgroupid         rfer         excl     max_rfer 
            --------         ----         ----     -------- 
            0/5          16.00KiB     16.00KiB         none 
            0/257           0.00B        0.00B         none 
            0/258           0.00B        0.00B         none 
            0/366         6.45GiB      1.25GiB         none 
            0/367        92.32GiB    406.61MiB         none 
            0/374        92.49GiB    421.71MiB         none 
            0/375         7.71GiB      1.02GiB         none 
            0/386         7.54GiB    614.79MiB         none 
            0/387        92.63GiB    460.28MiB         none 
            0/391        64.86MiB        0.00B         none 
            0/392        16.78GiB        0.00B         none 
            0/397         7.97GiB    353.87MiB         none 
            0/398        94.92GiB    314.04MiB         none 
            0/399           0.00B        0.00B         none 
            0/400           0.00B        0.00B         none 
            0/401         7.53GiB    374.49MiB         none 
            0/402        94.95GiB    352.21MiB         none 
            0/404        16.00EiB        0.00B         none 
            255/392      16.78GiB        0.00B         none 
            255/399         0.00B        0.00B         none 
            255/402      94.95GiB    352.21MiB         none
            The script matches each ID with the name of the subvolume or snapshot.

            The script, copied below, is from the ownyourbits.com article linked in its header.
            Turning on qgroups beforehand is not necessary; the script enables quota itself if needed, and disables it again when done.

            Code:
            #!/bin/bash
             
            #
            # Script that outputs the filesystem usage of snapshots in a location ( root if omitted )
            #
            # Usage:
            #          sudo btrfs-du ( path )
            #
            # Copyleft 2017 by Ignacio Nunez Hernanz <nacho _a_t_ ownyourbits _d_o_t_ com>
            # GPL licensed (see end of file) * Use at your own risk!
            #
            # Contributors: Jeroen Wiert Pluimers (jpluimers)
            #
            # Based on btrfs-size by Kyle Agronick
            #
            # More at https://ownyourbits.com/2017/12/06/check-disk-space-of-your-btrfs-snapshots-with-btrfs-du/
            #
             
            # bytesToHumanIEC based on https://unix.stackexchange.com/questions/44040/a-standard-tool-to-convert-a-byte-count-into-human-kib-mib-etc-like-du-ls1/259254#259254
            # both bytesToHumanIEC, an SI implementation and test cases for both are at https://gist.github.com/jpluimers/0f21bf1d937fe0b9b4044f21944a90ec
             
            bytesToHumanIEC() {
                b=${1:-0}; d=''; s=0; S=(Bytes {K,M,G,T,P,E,Z,Y}iB)
                while ((b > 1024)); do
                    d="$(printf ".%02d" $((b % 1024 * 100 / 1024)))"
                    b=$((b / 1024))
                    let s++
                done
                echo "$b$d${S[$s]}"
            }
             
            LOCATION=${1:-/}
             
            # checks
            [[ ${EUID} -ne 0 ]] && {
              printf "Must be run as root. Try 'sudo $( basename "$0" )'\n"
              exit 1
            }
             
            [[ -d "$LOCATION" ]] || {
              echo "$LOCATION not found"
              exit 1
            }
             
            [[ "$( stat -fc%T "$LOCATION" )" != "btrfs" ]] && {
              echo "$LOCATION is not in a BTRFS system"
              exit 1
            }
             
            # determine if the '--raw' option is supported:
            btrfs_qgroup_show_raw="btrfs qgroup show --raw"
             
            OUT=$( $btrfs_qgroup_show_raw 2>&1 )
             
            grep -q "unrecognized option" <<< "$OUT" && {
              echo "INFO: Legacy btrfs version; trying 'btrfs qgroup show' without '--raw'..."
              # not supported by "Btrfs v3.12+20131125" and likely other legacy versions
              # luckily "Btrfs v3.12+20131125" gives the same output without '--raw'
              # as "btrfs-progs v4.13.3" does with '--raw'
              btrfs_qgroup_show_raw="btrfs qgroup show"
            }
             
            # quota management
            sync
            btrfs qgroup show "$LOCATION" 2>&1 | grep -q "quotas not enabled" && {
              QFLAG=1
              btrfs quota enable "$LOCATION"
            }
             
            # if we just enabled quota, might have to wait for rescan using 'btrfs qgroup show'
             
            OUT=$( $btrfs_qgroup_show_raw "$LOCATION" 2>&1 )
            grep -q -e "rescan is running" -e "data inconsistent" <<< "$OUT" && {
              echo "INFO: Quota is disabled. Waiting for rescan to finish ..."
              while true; do
                sleep 2
                OUT=$( $btrfs_qgroup_show_raw "$LOCATION" 2>&1 )
                grep -q -e "rescan is running" -e "data inconsistent" <<< "$OUT" || break
              done
            }
             
            # data
             
            ## qgroup data
            OUT=$( sed '1,3d;s|^.*/||' <<< "$OUT" )
            ID__=( $( awk '{ print $1 }' <<< "$OUT" ) )
            TOT_=( $( awk '{ print $2 }' <<< "$OUT" ) )
            EXC_=( $( awk '{ print $3 }' <<< "$OUT" ) )
             
            for (( i = 0 ; i < ${#ID__[@]} ; i++ )); do
              TOT[${ID__[$i]}]=$( bytesToHumanIEC ${TOT_[$i]} )
              EXC[${ID__[$i]}]=$( bytesToHumanIEC ${EXC_[$i]} )
            done
             
            ## naming data
            OUT=$( btrfs subvolume list --sort=rootid "$LOCATION" | cut -d ' ' -f 2,9 )
            ID__=( $( awk '{ print $1 }' <<< "$OUT" ) )
            NAM_=( $( awk '{ print $2 }' <<< "$OUT" ) )
             
            for (( i = 0 ; i < ${#ID__[@]} ; i++ )); do
              NAME[${ID__[$i]}]=${NAM_[$i]}
            done
             
            EXCL_TOTAL=0
             
            [[ "$QFLAG" == "1" ]] && btrfs quota disable "$LOCATION"
             
            formatMask="%-60s %10s %10s  %-10s\n"
            separatorLine="─────────────────────────────────────────────────────────────────────────────────────────\n"
             
            # output
            printf "$formatMask" "Subvolume" "Total" "Exclusive" "ID"
            printf "$separatorLine"
             
            ## matching by IDs in btrfs subvolume list
            for (( i = 0 ; i < ${#ID__[@]} ; i++ )); do
              printf "$formatMask" ${NAME[${ID__[$i]}]} "${TOT[${ID__[$i]}]}" "${EXC[${ID__[$i]}]}" ${ID__[$i]}
              EXCL_TOTAL=$(( EXCL_TOTAL + ${EXC_[$i]} ))
            done
             
            EXCL_TOTAL=$( bytesToHumanIEC "$EXCL_TOTAL" )
             
            printf "$separatorLine"
            printf "%s %66s\n" "Total exclusive data" "$EXCL_TOTAL"
             
            # License
            #
            # This script is free software; you can redistribute it and/or modify it
            # under the terms of the GNU General Public License as published by
            # the Free Software Foundation; either version 2 of the License, or
            # (at your option) any later version.
            #
            # This script is distributed in the hope that it will be useful,
            # but WITHOUT ANY WARRANTY; without even the implied warranty of
            # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
            # GNU General Public License for more details.
            #
            # You should have received a copy of the GNU General Public License
            # along with this script; if not, write to the
            # Free Software Foundation, Inc., 59 Temple Place, Suite 330,
            # Boston, MA  02111-1307  USA
            I saved it in my home directory and added the execute bit. I call it in a Konsole using sudo.
            Last edited by GreyGeek; May 27, 2018, 06:15 PM.
            "A nation that is afraid to let its people judge the truth and falsehood in an open market is a nation that is afraid of its people.”
            – John F. Kennedy, February 26, 1962.



              #21
              I began what I thought would be a series of posts on ZFS in this thread.

              It didn't take as long as I thought it would to conclude that, for me, Btrfs is the correct choice. I thought that if ZFS were given a systemd mount generator I might switch to it, but there are a couple of things about ZFS that are fixed in stone, and they preclude me from moving to ZFS.
              1) ZFS snapshots are not easily accessed, at least on Ubuntu-based distros. While I could find the .zfs directory containing "share" and "snapshots", they contained nothing, yet snapshots existed. Kubical showed me how to make them visible and where they should exist, so this complaint is moot: snapshots are easy to find once you know how to adjust the proper settings and where they are located relative to the dataset you are snapshotting.
              2) The real killer was the fact that snapshots in ZFS live inside the pool, just like Snapper creates snapshots under "/". In Btrfs, snapshots go where I want to put them: OUTSIDE the subvolume, on a level with @ and @home, just under <ROOT_FS>.

              The last piece of the Btrfs puzzle was filled in for me when I discovered the "btrfs-du" script, which matches up ID numbers with subvolume and snapshot names and lists the total sizes for each, as shown in my previous post.
              Last edited by GreyGeek; Jun 03, 2018, 02:07 PM.
              "A nation that is afraid to let its people judge the truth and falsehood in an open market is a nation that is afraid of its people.”
              – John F. Kennedy, February 26, 1962.



                #22
                1. Can you tell me what partitions are required when doing a clean install with Btrfs? Just one for / (root) and one for /home, or should / be divided into special partitions like /boot, etc.?

                2. If I decide to convert / from EXT4 to Btrfs, do I have to have extra space available on the partition for the conversion? If yes, how much?
                Yes, I have read the warning:
                https://btrfs.wiki.kernel.org/index....sion_from_Ext3



                  #23
                  Is there a reason why you want to keep / as EXT4? I would advise against it. It would make maintaining both FS types a headache and would probably lead to trouble.

                  A clean install means giving Btrfs to sda1 and assigning "/" as its mount point, making it the Btrfs root file system.

                  When you are at the partitioning stage, the default option for the primary partition is EXT4 mounted at "/". Changing EXT4 to Btrfs and letting "/" remain will create an installation with Btrfs as the root file system and two default subvolumes, @ and @home, which are assigned in fstab to "/" and "/home".
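                  The resulting fstab entries look roughly like this (a sketch; the UUID is the one from my earlier post and will differ on your system):
                  Code:
                  UUID=47a4794f-b168-4c61-9d0c-cc22f4880116  /      btrfs  defaults,subvol=@      0  0
                  UUID=47a4794f-b168-4c61-9d0c-cc22f4880116  /home  btrfs  defaults,subvol=@home  0  0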

                  If you delete all existing partitions and then create just one, sda1, give it to Btrfs as "/", and let the installer put Grub on sda, you will have what I consider to be a standard Btrfs install.

                  You don't have to worry about figuring out a "safe" size for the root partition if you delete the EXT4 root partition and let its space become part of sda1, letting the installer put grub on sda (or even sda1). All unused space then goes into the Btrfs pool, from which @ (which normally contains /boot) and @home can draw as needed, without deciding in advance how much space to allocate to each. Using Btrfs you can also shrink the pool later to make room for swap, if you want it (Btrfs itself does not use a swap file, but some apps might want swap), or for another distro.

                  With the entire system belonging to Btrfs, the task of making @ and @home snapshots is trivial and backs up everything. If you create a hybrid system you'll have trouble coordinating backups of / on EXT4 with /home on Btrfs so that files in /home have the proper counterparts in /.

                  It would be like requiring a Chevy carb to be installed on a diesel engine.
                  Last edited by GreyGeek; Jun 03, 2018, 02:16 PM.
                  "A nation that is afraid to let its people judge the truth and falsehood in an open market is a nation that is afraid of its people.”
                  – John F. Kennedy, February 26, 1962.



                    #24
                    Originally posted by gnomek
                    what partitions are required when using brtfs with a clean install
                    If your computer's firmware is UEFI, and this is your boot drive, you should already have (and need) an EFI System Partition: FAT32 and usually small (on mine, 4 MiB are used out of 500).

                    Also, if you want swap space, which is needed for hibernation, it should not go in a file on a btrfs partition. Many systems do without swap these days, and IME Kubuntu doesn't like to swap at all. But if you do, a swap partition as large as the RAM is in order.
                    Regards, John Little



                      #25
                      Good points. I forgot about EFI. A swap partition about 5% larger than the RAM should work.
                      "A nation that is afraid to let its people judge the truth and falsehood in an open market is a nation that is afraid of its people.”
                      – John F. Kennedy, February 26, 1962.



                        #26
                        Also, unless it's changed, you can't boot to a btrfs file system that doesn't have a partition table, so you must have at least one partition to boot from it.

                        As usual, Arch maintains an excellent wiki on EFI partitioning: https://wiki.archlinux.org/index.php...stem_Partition

                        Please Read Me



                          #27
                          Originally posted by oshunluvr View Post
                          Also, unless it's changed, you can't boot to a btrfs file system that doesn't have a partition table, so you must have at least one partition to boot from it.

                          As usual, Arch maintains an excellent wiki on EFI partitioning: https://wiki.archlinux.org/index.php...stem_Partition
                          But if you don't plan on booting from an HD you can give the whole device, sdX (not sdX1), to Btrfs. I've read that even though that is possible (and I've done it), it is not a good idea.
                          "A nation that is afraid to let its people judge the truth and falsehood in an open market is a nation that is afraid of its people.”
                          – John F. Kennedy, February 26, 1962.



                            #28
                            Let's say I have a 120-128 GiB SSD. I want a separate partition for data and system snapshots: a separate partition for data in case I have to reinstall my system, which will happen every two years because of new LTS releases.
                            At the moment I don't have a UEFI motherboard, but thinking about the future I will leave 550 MiB for an EFI partition.

                            Also, about 1 MiB should be left for grub2:
                            https://bugs.launchpad.net/ubuntu/+s...2/+bug/1059827

                            Does creating a separate home partition make sense with Btrfs, considering that I will have to do a clean system install in the future and want to keep my user settings?

                            Originally posted by GreyGeek View Post
                            When you are at the partitioning stage, the default option for the primary partition is EXT4 mounted at "/". Changing EXT4 to Btrfs and letting "/" remain will create an installation with Btrfs as the root file system and two default subvolumes, @ and @home, which are assigned in fstab to "/" and "/home".
                            If I create a Btrfs root partition and a separate Btrfs home partition during a clean system install, will they also be created as subvolumes @ and @home?

                            Or should I give up thinking in old terms and forget about creating a separate home partition during install? Actually, on this disk I already have old root, home, and data partitions. Should I leave them as they are and only add EFI space, or do it like this:
                            1. EFI
                            2. Root and home in one Btrfs space
                            3. A data-and-backup Btrfs partition

                            What would be the best practice? Of course, I am aware that I also need to keep system backups on another disk (to be safe I always do double backups), but that is another story.



                              #29
                              I'll echo gnomek's question. I am comfortable with the EXT4 method of keeping separate root and home partitions and thus being able to protect my home data during a system update.

                              Using btrfs, there does not appear to be such protection. At least not that I've heard explained.

                              Can someone explain how I can use btrfs and still protect my home data during upgrades?
                              Kubuntu 23.11 64bit under Kernel 6.9.1, Hp Pavilion, 6MB ram. All Bow To The Great Google... cough, hack, gasp.



                                #30
                                Originally posted by gnomek View Post
                                Also, about 1 MiB should be left for grub2:
                                https://bugs.launchpad.net/ubuntu/+s...2/+bug/1059827
                                I detailed how to handle this here: https://www.kubuntuforums.net/showth...light=grub+gpt

                                Originally posted by gnomek View Post
                                Does creating a separate home partition make sense with Btrfs, considering that I will have to do a clean system install in the future and want to keep my user settings?
                                If you are using BTRFS, no, this does not make sense, especially with a small SSD. One of the main advantages of using btrfs is that drive space can be (and is by default) divided using subvolumes instead of partitions. This allows the separate subvolumes to share the free space in the partition. Think of subvolumes as partitions with data boundaries but no free-space boundaries.
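                                A quick sketch of what that looks like in practice (assuming the file system is mounted at /mnt):
                                Code:
                                sudo btrfs subvolume create /mnt/@data   # a new "raft" in the pool
                                sudo btrfs subvolume list /mnt           # @, @home, @data ...
                                sudo btrfs filesystem usage /mnt         # one shared pool of free space for all of them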

                                Originally posted by gnomek View Post
                                If I create a Btrfs root partition and a separate Btrfs home partition during a clean system install, will they also be created as subvolumes @ and @home?
                                No. I haven't tried this, but I suspect you would get two btrfs file systems with zero subvolumes, removing another great benefit of using btrfs in the first place: subvolumes and their snapshots and backups. I'm going to test this and report back.

                                Originally posted by gnomek View Post
                                Or should I give up thinking in old terms and forget about creating a separate home partition during install? Actually, on this disk I already have old root, home, and data partitions. Should I leave them as they are and only add EFI space, or do it like this:
                                1. EFI
                                2. Root and home in one Btrfs space
                                3. A data-and-backup Btrfs partition
                                Well, for sure give up thinking in old terms.

                                Originally posted by gnomek View Post
                                What would be the best practice? Of course, I am aware that I also need to keep system backups on another disk (to be safe I always do double backups), but that is another story.
                                Re: btrfs, snapshots, backups, etc. on a small single-disk system, here's my opinion: backups stored on the same disk as the source are of little value. If the drive fails, what good are your backups? When you factor in btrfs, with which you can take snapshots as often as you like, backups on the same device are of zero value. So why waste otherwise usable but limited space by splitting your drive into pieces? BTRFS can divide your drive using subvolumes instead and keep all the free space available to all your subvolumes.

                                Think of a btrfs filesystem as a swimming pool and your subvolumes as floating rafts. You can keep throwing rafts into the pool and the other rafts just move aside and make room. When a raft grows in size, the others just slide over a bit. If you reduce a raft in size or just totally pull a raft out of the pool, the other rafts are now free to use the space.

                                IMO, best practices for BTRFS and a small SSD:
                                Drive preparation (see the sketch after this list):
                                1. Partition for grub as outlined in the link above.
                                2. Make a partition for EFI (min 100 MB, recommended 500 MB). *
                                3. Make a partition for swap if you're going to want it (BTRFS doesn't do swap files well).
                                4. Use all the remaining space as a single BTRFS partition/file system.
                                * Obviously, partition 2 is only needed if you intend to use EFI.
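                                Here's a sketch of that preparation with sgdisk on a GPT disk (the device name /dev/sdX and the sizes are placeholders; adjust to taste):
                                Code:
                                sgdisk -n1:0:+1M   -t1:EF02 /dev/sdX   # 1. tiny BIOS-boot partition for grub
                                sgdisk -n2:0:+500M -t2:EF00 /dev/sdX   # 2. EFI system partition (if wanted)
                                sgdisk -n3:0:+8G   -t3:8200 /dev/sdX   # 3. swap (if wanted)
                                sgdisk -n4:0:0     -t4:8300 /dev/sdX   # 4. the rest becomes the BTRFS pool
                                mkfs.vfat -F32 /dev/sdX2
                                mkswap /dev/sdX3
                                mkfs.btrfs /dev/sdX4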

                                After installation:
                                1. Develop and use a regular snapshot routine to preserve your system subvolume from damage and to preserve your data from accidental mistakes.
                                2. Develop and use a regular backup routine to backup any critical system or personal data on a BTRFS file system on a different device to preserve your system and data from drive failure.

                                I wrote a script that runs automatically to take daily snapshots and make weekly backups. This seemed sufficient for my needs.
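                                That script isn't posted here, but the core of such a routine is small; a sketch, run as root (the paths /mnt, /mnt/snapshots, and /backup are assumptions):
                                Code:
                                #!/bin/bash
                                # daily read-only snapshots; weekly send to a second Btrfs device
                                TODAY=$(date +%Y%m%d)
                                btrfs subvolume snapshot -r /mnt/@     "/mnt/snapshots/@_$TODAY"
                                btrfs subvolume snapshot -r /mnt/@home "/mnt/snapshots/@home_$TODAY"
                                if [[ $(date +%u) -eq 7 ]]; then   # Sundays: full send to the backup drive
                                  btrfs send "/mnt/snapshots/@_$TODAY"     | btrfs receive /backup
                                  btrfs send "/mnt/snapshots/@home_$TODAY" | btrfs receive /backup
                                fi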

                                Personally, I don't use EFI, nor do I see any advantage in doing so. I don't boot Windows on bare metal (no one should, IMO), and why should I add an extra layer of complication when GRUB works just fine? I'm currently booting 8 distros from a single BTRFS file system using nothing other than grub.

                                Please Read Me

