btrfs adventure with gentoo


    #16
    Originally posted by GreyGeek
    A btrfs subvolume cannot contain another subvolume...
    IMO GG is right, for the usual meaning of "contain". For a btrfs filesystem mounted at /mnt/work, with a non-trivial amount of data all in subvolumes, if you
    • create a subvolume /mnt/work/foo
    • snapshot / to /mnt/work/foo/bar
    • run ls -R /mnt/work/foo/bar

    you won't see much at all, only directories. Another way to see this is if you btrfs send a read-only snapshot of the parent subvolume, it won't send the contents of nested subvolumes.
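    To reproduce that experiment, something like this should do it (paths are illustrative, and the btrfs in question is assumed to be mounted at /mnt/work):

    Code:
    # create a subvolume, snapshot / into it, then list it
    sudo btrfs subvolume create /mnt/work/foo
    sudo btrfs subvolume snapshot / /mnt/work/foo/bar
    ls -R /mnt/work/foo/bar   # nested subvolumes under / show up only as empty directories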

    Originally posted by oshunluvr View Post
    Subvolumes can reside inside other subvolumes.
    "reside" yes, they are addressed that way, but the contents of a nested subvolume may not be in the parent subvolume. (They may be through snapshotting or cp --ref-link or deduplication.)

    So nested subvolumes are a naming and addressing thing; their data are usually not in the parent subvolume. It's a lot like mounting one device within the hierarchy of another.
    Regards, John Little



      #17
      OK, right. So we're arguing semantics.

      Of course, the data of one subvolume is not "in" another subvolume (excepting snapshots), and it's a broad and incorrect assumption that I meant that. That's the exact opposite of how subvolumes work. I assumed we all knew that.

      Are we talking about the actual data or the hierarchy of the metadata? Your post pretty much just repeats what I stated in post #10, doesn't it?

      I assumed "contain" meant "nested" when GG said it. That may have been an incorrect assumption, but:

      Contain: transitive verb: 1. To have within; hold. 2. To be capable of holding.
      Nested: transitive verb: 1. To place in or as if in a nest. 2. To put snugly together or inside one another.

      Sounds like they're very similar in meaning. Funny thing is, "reside" had this definition (one of several):

      Reside: intransitive verb: 4 Computers. To be located or stored: a file that resides on a shared drive.

      My English education isn't good enough to say which of these is the correct term, or whether they differ at all.

      Regardless, I think it is clear and we can all agree that nesting a subvolume does not somehow magically relocate the data into the other subvolume. Again, I assumed we all knew that. It simply changes the method - the directory structure, if you will - of how one locates the data. A subvolume can be mounted, OR it can <insert preferred trans. verb here> within another subvolume and its contents accessed just like any other directory without the need for an additional mount.
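      To illustrate the two ways of getting at a subvolume's contents - a rough sketch only, with a made-up @data subvolume and a placeholder UUID:

      Code:
      # 1) mount the subvolume explicitly somewhere
      sudo mount -o subvol=@data UUID=xxxx-xxxx /mnt/data
      # 2) or, if @data is nested inside an already-mounted subvolume,
      #    just walk into it like any other directory - no extra mount needed
      ls /home/data     # assuming @data had been created nested under the mounted /home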

      Anyway - back to the discussion. As I stated in post #10, a rather large problem is the folder left behind (I'm referring to it as a "bread crumb") by nested subvolumes when you snapshot the parent subvolume; it's problematic if you wish to reconstruct a file system containing nested subvolumes from a set of snapshots. I filed a bug report and one of the devs seems to think it has been or was going to be addressed anyway, as they were aware of the behavior.

      Assuming this is the case and they "fix" this, I can see a setup where /var, /opt, /boot, or whatever could be nested subvolumes and not require managing a plethora of mounts. My media server, for example, has 15 subvolumes containing various parts of the file system. I think it would be easier to nest them - less to mount, less to export, only one folder per subvolume rather than mounts and links, etc. It would be interesting to see if nesting affects NFS. I doubt it, but it would need to be looked at.

      https://bugzilla.kernel.org/show_bug.cgi?id=208789
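      For anyone who wants to see the "bread crumb" for themselves, a minimal sketch (subvolume names are made up):

      Code:
      # @parent has a nested subvolume "inner"; snapshot the parent
      sudo btrfs subvolume snapshot /mnt/@parent /mnt/@parent_snap
      ls /mnt/@parent_snap/inner   # present in the snapshot, but completely empty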



        #18
        What? You mean I was right after all? Now I'm really confused.

        Time to do some experiments. ... after dinner .... BTW, the 150M hop of Starship SN5 was dramatically aborted!
        "A nation that is afraid to let its people judge the truth and falsehood in an open market is a nation that is afraid of its people.”
        – John F. Kennedy, February 26, 1962.

        Comment


          #19
          And now, Elon Musk posted a video of the dramatic flight of the SN5!


          I found this link, which showed that one subvolume can exist inside another, but if the outside subvolume needs to be deleted, one must delete the inside subvolume first. That's the exact problem I had with deleting snapper snapshots, except snapper included a numbered folder (/0, /1, /2, etc.) between subvolumes.

          https://www.howtoforge.com/community...bvolume.63549/
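          In other words, a sketch of what the linked thread describes (names are made up):

          Code:
          # "inner" is a subvolume nested inside the "outer" subvolume
          sudo btrfs subvolume delete /mnt/outer         # refuses while it still holds a subvolume
          sudo btrfs subvolume delete /mnt/outer/inner   # delete the nested one first
          sudo btrfs subvolume delete /mnt/outer         # now it goes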

          I created an @data subvolume at the top level (mounted at /mnt), giving
          /mnt/@
          /mnt/@home
          /mnt/@data
          and created a directory /home/jerry/data
          Then I added a stanza in fstab to mount it to /home/jerry/data and rebooted.
          After that I proceeded to move the Documents folder to ~/data, along with the contents of ~/Downloads and several other folders and files whose locations were immaterial. I added to and removed stuff from ~/data while taking snapshots. As you'd suspect, the @home/jerry/data directory was empty. Snapshots of @data were normal and stuff could be copied into and out of it. However, I didn't add @data to @home, so my experimenting wasn't a real test of that scenario.
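          Roughly, that setup looks like this (the UUID and the mount options are placeholders):

          Code:
          sudo mount UUID=xxxx-xxxx /mnt                 # top level of the btrfs filesystem
          sudo btrfs subvolume create /mnt/@data
          mkdir /home/jerry/data
          # and the fstab stanza, something like:
          # UUID=xxxx-xxxx  /home/jerry/data  btrfs  defaults,subvol=@data  0  0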

          It did make obvious a problem, from my POV: it adds 50% to the time of snapshotting and rolling back, even though one doesn't have to link a @data snapshot to any other snapshot the way I always link a snapshot of @home to a snapshot of @ by giving both the same yyyymmdd suffix.

          That got me thinking about cutting my normal snapshotting and rolling-back time in half by eliminating the @home subvolume and moving the contents of /home/jerry into the @ subvolume, so that /home/jerry in @ would NOT be empty but contain the contents of my home account. That process should avoid the config-file, link, and other problems I had when moving SageMath, for example, to /home/jerry/data.

          So putting all my eggs into the @ subvolume basket is my next experiment. The @home subvolume is automatically created during the install of Kubuntu. I'm wondering if the installer can be modified (easily) to bypass creating @home and just populate /home/myacct normally?


            #20
            That was easy!

            I opened a Konsole and executed "sudo -i".
            At the prompt I mounted sda1 (by its blkid UUID) to /mnt.
            I opened mc and from its left pane I opened /mnt/@
            From mc's right pane I opened /mnt/@home and migrated to /home/jerry
            I highlighted "jerry" and then hit F5, the copy command. I preserved the attributes and symlinks.
            It took perhaps 30 seconds to copy 68GB to @/home/jerry
            I used Kate from Dolphin to open /etc/fstab and put a "#" in front of the stanza that mounted @home to /home.
            Then I rebooted.
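            For the record, a terminal equivalent of that mc copy might look like this (the UUID is a placeholder; --reflink=auto is my addition - on btrfs it lets the copy share data blocks instead of duplicating them):

            Code:
            sudo mount UUID=xxxx-xxxx /mnt
            sudo cp -a --reflink=auto /mnt/@home/jerry /mnt/@/home/   # -a keeps attributes and symlinks
            # then comment out the @home line in /etc/fstab and reboot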

            Here I am, running Kubuntu 20.04 with only a single subvolume, @.
            I've tried my wine programs, sagemath, various games, utilities and other applications. Everything appears to be working normally.

            systemctl list-units gave normal results.
            systemd-analyze and systemd-analyze blame gave the same output as before.

            So, now I need only snapshot one subvolume, @.
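            With only @ left, a dated snapshot (following the naming I used earlier; UUID is a placeholder) is a one-liner:

            Code:
            sudo mount UUID=xxxx-xxxx /mnt
            sudo btrfs subvolume snapshot -r /mnt/@ /mnt/snapshots/@20200806   # -r so it can be used with btrfs send later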
            "A nation that is afraid to let its people judge the truth and falsehood in an open market is a nation that is afraid of its people.”
            – John F. Kennedy, February 26, 1962.

            Comment


              #21
              So I did mine a different way when I migrated to an internal /home:

              1. I logged out
              2. Logged into the terminal via ALT-F2
              3. Changed to the root folder (cd /)
              4. Un-mounted @home
              5. Deleted the /home folder
              6. Took a snapshot of @home: sudo btrfs su sn /subvols/@home home
              7. Logged out
              8. Logged into the GUI (ALT-F1)


              Then @home was still present, but not mounted. The new snapshot resides at /home. So to make it permanent, I deleted the fstab line for @home and deleted the @home subvolume. All done.
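              For reference, those steps as commands (using the /subvols path from the list above):

              Code:
              cd /
              sudo umount /home
              sudo rmdir home                                     # the now-empty /home directory in @
              sudo btrfs subvolume snapshot /subvols/@home home   # the snapshot becomes the new, nested /home
              # to make it permanent later:
              #   remove the @home line from /etc/fstab, then
              sudo btrfs subvolume delete /subvols/@home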



                #22
                Originally posted by oshunluvr View Post
                So I did mine a different way when I migrated to an internal /home:

                1. I logged out
                2. Logged into the terminal via ALT-F2
                3. Changed to the root folder (cd /)
                4. Un-mounted @home
                5. Deleted the /home folder
                6. Took a snapshot of @home: sudo btrfs su sn /subvols/@home home
                7. Logged out
                8. Logged into the GUI (ALT-F1)


                Then @home was still present, but not mounted. The new snapshot resides at /home. So to make it permanent, I deleted the fstab line for @home and deleted the @home subvolume. All done.
                1- no one is now in home
                2- root terminal from the login
                3- moved to /
                4- can do that because no one is in a home acct
                5- removed the empty /home directory in the @ subvolume
                6- created a read-write snapshot of @home as home. Now home is a nested subvol of / (@)
                7- log out of root back to the login screen because home became "mounted" when moved to /
                8- logged into the home account

                Advantage, point, and match - oshunluvr!

                I'm not smart enough to think of the method you used, which doesn't require a reboot.

                Now I am going to delete my previous snapshots and create my first @ only snapshot.

                I decided to do this because when I created @data and set its mount point to /home/jerry/data in fstab, I ran into issues with applications that have components residing in root and expect to see their other components in /home/jerry. My biggest offender was SageMath. Its "only move once" script didn't work properly. And I added to my workload by having another subvolume to snapshot and archive.

                Since I was always making my snapshots in pairs to avoid such mixups, I saw no reason to continue making snapshots of both @ and @home.

                I just went through all of my systemd units and they are fine with the loss of @home. Hardware's running fine.
                It looks like a winner.
                Now, I think I am going to play with TimeShift and see how it works with just one subvolume.
                "A nation that is afraid to let its people judge the truth and falsehood in an open market is a nation that is afraid of its people.”
                – John F. Kennedy, February 26, 1962.

                Comment


                  #23
                  Using the snapshot instead of the file copy is the largest benefit of my method. BUT you should know that making a snapshot of / will NOT include the files in /home. What you'll find is a folder named /home but nothing below it. You still have to make a separate snapshot to save your /home - just, instead of @home, you take a snapshot of /home.

                  To restore the system (roll-back) you have to:

                  1. Restore the @ snapshot
                  2. Delete the empty /home folder
                  3. Restore the /home snapshot


                  It's kind of a PITA. I posted a bug about the left-over folder that causes the extra step. We'll see if they fix it.
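                  One way the roll-back might look from a root shell, assuming the top level is mounted at /mnt and the snapshots are named as below (all names illustrative):

                  Code:
                  sudo mv /mnt/@ /mnt/@old                                                 # set the broken root aside
                  sudo btrfs subvolume snapshot /mnt/snapshots/@20200806 /mnt/@            # 1. restore the @ snapshot
                  sudo rmdir /mnt/@/home                                                   # 2. remove the left-over "bread crumb"
                  sudo btrfs subvolume snapshot /mnt/snapshots/home20200806 /mnt/@/home    # 3. restore the /home snapshot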

                  It boils down to this IMO: Do you want a simpler fstab or a more straight-forward roll-back procedure?

                  Obviously, a very small script would handle the operation just fine, but it's something to be aware of. I think for my desktop I'm leaving it "factory" with @ and @home. However, my server has an fstab that looks like:

                  Code:
                   smith@server:~$ cat /etc/fstab
                   # /etc/fstab: static file system information.
                  #
                  # Use 'blkid' to print the universally unique identifier for a
                  # device; this may be used with UUID= as a more robust way to name devices
                  # that works even if disks are added and removed. See fstab(5).
                  #
                  # <file system> <mount point>   <type>  <options>       <dump>  <pass>
                  # / was on /dev/sda3 during installation
                  UUID=e5e1be14-a10e-4188-b511-a610bea9e3a0 /               btrfs   rw,auto,noatime,nodiratime,space_cache,compress=lzo,autodefrag,subvol=@Ubuntu_Server_1804 0       1
                  UUID=e5e1be14-a10e-4188-b511-a610bea9e3a0 /mnt/install    btrfs   rw,noauto,relatime,space_cache,compress=lzo,autodefrag 0 0
                  # swap was on /dev/sda2 during installation
                  UUID=474a8e78-9450-4942-b3ea-428b0a9ff870 none            swap    sw,pri=1,defaults,noatime  0       0
                  # swap was on /dev/sdc2 during installation
                  UUID=68bf458c-1133-4f4c-8b60-1d4ad15db901 none            swap    sw,pri=1,defaults,noatime  0       0
                  # /tmp in RAM
                  tmpfs  /tmp  tmpfs nodev,nosuid 0 0
                  # Swap partitions
                  #UUID=474a8e78-9450-4942-b3ea-428b0a9ff870 none                   swap    sw,pri=1,defaults,noatime       0       0
                  #UUID=68bf458c-1133-4f4c-8b60-1d4ad15db901 none                   swap    sw,pri=1,defaults,noatime       0       0
                  # Main boot drive 
                  #UUID=e5e1be14-a10e-4188-b511-a610bea9e3a0 /mnt/install           btrfs   rw,space_cache,compress=lzo,autodefrag 0 1
                  # Server root install on sde2
                  #UUID=e5e1be14-a10e-4188-b511-a610bea9e3a0 /                      btrfs   rw,noatime,space_cache,compress=lzo,autodefrag,subvol=@Ubuntu_Server_14_04 0 1
                  # Desktop root install on sde2
                  UUID=e5e1be14-a10e-4188-b511-a610bea9e3a0 /mnt/desktop           btrfs   rw,noauto,space_cache,compress=lzo,autodefrag,subvol=@Kubuntu_15_10 0 0
                  # Desktop home on sde2
                  UUID=e5e1be14-a10e-4188-b511-a610bea9e3a0 /mnt/desktop_home      btrfs   rw,noauto,space_cache,compress=lzo,autodefrag,subvol=@Kubuntu_15_10_home 0 0
                  # Main media pool
                  UUID=8c45e18c-4781-4461-86f8-721d8bc33c0c /mnt/pool              btrfs   rw,space_cache,compress=lzo,autodefrag 0 0
                  # Media backups
                  UUID=d8d4c888-11cf-4ec8-a3ca-b7ad8afb4944 /mnt/backup1           btrfs   rw,noauto,space_cache,compress=lzo,autodefrag 0 0
                  UUID=64479112-5df9-4710-ba1a-273d3f78e943 /mnt/backup2           btrfs   rw,noauto,space_cache,compress=lzo,autodefrag 0 0
                  # Install backups 
                  UUID=3367c782-2e51-42eb-832d-b58e907eb8b3 /mnt/install1          btrfs   rw,noauto,space_cache,compress=lzo,autodefrag 0 0
                  UUID=56b6d47e-8891-4645-86a5-f1b16e1accf0 /mnt/install2          btrfs   rw,noauto,space_cache,compress=lzo,autodefrag 0 0
                  # Exported media mounts
                  UUID=8c45e18c-4781-4461-86f8-721d8bc33c0c /exports/Alt_Movies    btrfs   x-systemd.automount,x-systemd.device-timeout=10,rw,space_cache,compress=lzo,uid=65534,gid=560,autodefrag,subvol=@Alt_Movies 0 0
                  UUID=8c45e18c-4781-4461-86f8-721d8bc33c0c /exports/Incoming      btrfs   x-systemd.automount,x-systemd.device-timeout=10,rw,space_cache,compress=lzo,uid=65534,gid=560,autodefrag,subvol=@Incoming 0 0
                  UUID=8c45e18c-4781-4461-86f8-721d8bc33c0c /exports/Movies        btrfs   x-systemd.automount,x-systemd.device-timeout=10,rw,space_cache,compress=lzo,uid=65534,gid=560,autodefrag,subvol=@Movies 0 0
                  UUID=8c45e18c-4781-4461-86f8-721d8bc33c0c /exports/TV_Shows      btrfs   x-systemd.automount,x-systemd.device-timeout=10,rw,space_cache,compress=lzo,uid=65534,gid=560,autodefrag,subvol=@TV_Shows 0 0
                  UUID=8c45e18c-4781-4461-86f8-721d8bc33c0c /exports/Audio         btrfs   x-systemd.automount,x-systemd.device-timeout=10,rw,space_cache,compress=lzo,uid=65534,gid=560,autodefrag,subvol=@Audio 0 0
                  UUID=8c45e18c-4781-4461-86f8-721d8bc33c0c /exports/Backups       btrfs   x-systemd.automount,x-systemd.device-timeout=10,rw,space_cache,compress=lzo,uid=65534,gid=560,autodefrag,subvol=@Backups 0 0
                  UUID=8c45e18c-4781-4461-86f8-721d8bc33c0c /exports/Documents     btrfs   x-systemd.automount,x-systemd.device-timeout=10,rw,space_cache,compress=lzo,uid=65534,gid=560,autodefrag,subvol=@Documents 0 0
                  UUID=8c45e18c-4781-4461-86f8-721d8bc33c0c /exports/Downloads     btrfs   x-systemd.automount,x-systemd.device-timeout=10,rw,space_cache,compress=lzo,uid=65534,gid=560,autodefrag,subvol=@Downloads 0 0
                  UUID=8c45e18c-4781-4461-86f8-721d8bc33c0c /exports/Music         btrfs   x-systemd.automount,x-systemd.device-timeout=10,rw,space_cache,compress=lzo,uid=65534,gid=560,autodefrag,subvol=@Music 0 0
                  UUID=8c45e18c-4781-4461-86f8-721d8bc33c0c /exports/Pictures      btrfs   x-systemd.automount,x-systemd.device-timeout=10,rw,space_cache,compress=lzo,uid=65534,gid=560,autodefrag,subvol=@Pictures 0 0
                  UUID=8c45e18c-4781-4461-86f8-721d8bc33c0c /exports/Projects      btrfs   x-systemd.automount,x-systemd.device-timeout=10,rw,space_cache,compress=lzo,uid=65534,gid=560,autodefrag,subvol=@Projects 0 0
                  UUID=8c45e18c-4781-4461-86f8-721d8bc33c0c /exports/Videos        btrfs   x-systemd.automount,x-systemd.device-timeout=10,rw,space_cache,compress=lzo,uid=65534,gid=560,autodefrag,subvol=@Videos 0 0
                  UUID=8c45e18c-4781-4461-86f8-721d8bc33c0c /exports/Home_Movies   btrfs   x-systemd.automount,x-systemd.device-timeout=10,rw,space_cache,compress=lzo,uid=65534,gid=560,autodefrag,subvol=@Home_Movies 0 0
                  UUID=8c45e18c-4781-4461-86f8-721d8bc33c0c /exports/Music_Videos  btrfs   x-systemd.automount,x-systemd.device-timeout=10,rw,space_cache,compress=lzo,uid=65534,gid=560,autodefrag,subvol=@Music_Videos 0 0
                  UUID=8c45e18c-4781-4461-86f8-721d8bc33c0c /var/www               btrfs   x-systemd.automount,x-systemd.device-timeout=10,rw,space_cache,compress=lzo,autodefrag,subvol=@www 0 0
                  # Exported private media
                  UUID=8c45e18c-4781-4461-86f8-721d8bc33c0c /private/stuart  btrfs   x-systemd.automount,x-systemd.device-timeout=10,rw,space_cache,compress=lzo,uid=1000,gid=1000,autodefrag,subvol=@stuart 0 0
                  UUID=8c45e18c-4781-4461-86f8-721d8bc33c0c /private/lisa    btrfs   x-systemd.automount,x-systemd.device-timeout=10,rw,space_cache,compress=lzo,uid=1001,gid=1001,autodefrag,subvol=@lisa 0 0
                  UUID=8c45e18c-4781-4461-86f8-721d8bc33c0c /private/lily    btrfs   x-systemd.automount,x-systemd.device-timeout=10,rw,space_cache,compress=lzo,uid=1002,gid=1002,autodefrag,subvol=@lily 0 0
                  UUID=8c45e18c-4781-4461-86f8-721d8bc33c0c /private/sarah   btrfs   x-systemd.automount,x-systemd.device-timeout=10,rw,space_cache,compress=lzo,uid=1003,gid=1003,autodefrag,subvol=@sarah 0 0
                  UUID=8c45e18c-4781-4461-86f8-721d8bc33c0c /private/guest   btrfs   x-systemd.automount,x-systemd.device-timeout=10,rw,space_cache,compress=lzo,uid=1010,gid=1010,autodefrag,subvol=@guest 0 0
                  # Plex mount
                  UUID=8c45e18c-4781-4461-86f8-721d8bc33c0c /var/lib/plexmediaserver  btrfs   x-systemd.automount,x-systemd.device-timeout=10,rw,space_cache,compress=lzo,autodefrag,subvol=@plexdata 0 0
                  Think of all the time that takes!



                    #24
                    WOW! 30 UUIDs to snapshot!

                    That's almost as big as ZFS
                    You use a script, of course?
                    Or, TimeShift?
                    "A nation that is afraid to let its people judge the truth and falsehood in an open market is a nation that is afraid of its people.”
                    – John F. Kennedy, February 26, 1962.

                    Comment


                      #25
                      A script I wrote. It's a 12TB drive and I totally automated weekly snapshots and backups with a cron script. The backup drives are 2x6TB drives. I did it manually once a month or so for a while - after adding some data - but got bored doing that, so I decided to automate it. Usually the stored data doesn't change much, and the script uses incremental backups, so it actually doesn't run that long very often.
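                      Not my actual script, but the general shape of an incremental btrfs backup, for anyone curious (subvolume and mount names are made up):

                      Code:
                      # take a new read-only snapshot, then send only the difference from the previous one
                      sudo btrfs subvolume snapshot -r /mnt/pool/@Movies /mnt/pool/@Movies_new
                      sudo btrfs send -p /mnt/pool/@Movies_old /mnt/pool/@Movies_new | sudo btrfs receive /mnt/backup1/
                      # the parent (@Movies_old) must already exist on the backup drive;
                      # on the next run, @Movies_new becomes the new parent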

                      I haven't really seriously considered redoing the way it's mounted or the file structure because it works fine. All of this has been an intellectual exercise. However, when I transition to a new server version I'll seriously consider it.



                        #26
                        Having reduced my subvolumes to one, namely @, I decided to try TimeShift to see how it played with having no @home.
                        It seemed to work great. I played around awhile, making a few snapshots and deleting some. Easy peasy.

                        I decided to remove TimeShift. Remembering that uninstalling TimeShift without removing ALL of its snapshots can lead to the loss of one's filesystem, I made sure to remove all the snapshots showing in the TimeShift GUI application. Then I purged TimeShift.


                        To check, I ran Dolphin and noticed the following:
                        [screenshot: timeshift_file_layout.jpg]

                        Before I installed TimeShift I had the following file structure:
                        /mnt/@
                        /mnt/@home (not mounted at boot)
                        /mnt/snapshots/@20200801
                        /mnt/snapshots/@home20200801
                        /mnt/snapshots/@solo20200806

                        After I installed TimeShift and took a snapshot, TimeShift stored that snapshot under
                        /run/timeshift/backup
                        but look at what it did, as shown in the image (taken after the purge)! It had the entire /mnt hierarchy also under /run/timeshift/backup, and under /run/timeshift/backup/timeshift-btrfs was the first snapshot I made using TimeShift - the one the GUI had indicated was deleted when it disappeared from the snapshot list.

                        I thought that, except for the snapshot I made, these were links to the /mnt locations. They weren't.
                        Before I can delete /run/timeshift I have to delete the subvolumes under it.
                        Specifically, I can delete everything under /run/timeshift/backup except @, and did that. BUT, if I delete /run/timeshift/backup/@ it will delete my system, cutting off the branch I am sitting on - but that's my only choice.

                        So, I used the btrfs delete command to delete /run/timeshift/backup/@ and immediately I had no access to my system. No bash commands worked. Btrfs was gone. The keyboard and mouse were all that worked. Anticipating that, I had plugged in my UbuntuDDE USB stick, and I hit the power button. When that desktop appeared I sudo'd to root, mounted my backup disk to /backup and my primary SSD to /mnt. I used btrfs send/receive to copy my /backup/snapshots/@20200806 snapshot to /mnt/@, and then rebooted.
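                        Roughly what that looked like from the live stick (the UUIDs are placeholders, and turning the received read-only snapshot into a writable @ is my guess at the detail):

                        Code:
                        sudo -i
                        mkdir -p /backup
                        mount UUID=yyyy-yyyy /backup                       # backup disk holding the snapshots
                        mount UUID=xxxx-xxxx /mnt                          # primary SSD, top level
                        btrfs send /backup/snapshots/@20200806 | btrfs receive /mnt/
                        btrfs subvolume snapshot /mnt/@20200806 /mnt/@     # writable copy where @ used to be
                        btrfs subvolume delete /mnt/@20200806              # drop the received read-only copy
                        reboot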

                        My system came back up beautifully, just as it was before I installed TimeShift.

                        I know that snapper will work beautifully with only an @ subvolume, so there is no reason to try it, and lots of reasons not to. Working with /mnt/@ manually in a root terminal is, IMO, the only way to fly for simple installations like mine.
                        "A nation that is afraid to let its people judge the truth and falsehood in an open market is a nation that is afraid of its people.”
                        – John F. Kennedy, February 26, 1962.

                        Comment
