Anyone try Timeshift yet?

    #16
    Thanks a lot for the extensive explanation, GreyGeek!! That was super-helpful!
    I think my conceptual error was that I did not realize that @ was not the <root_fs>.

    So from this point of view I can see why it is advantageous to mount the <root_fs> somewhere else when doing backups.

    Now, I could just permanently mount the <root_fs> to, say, /mnt/btrfs_pool/ as the btrbk examples seem to suggest, or I could wrap btrbk into a script that just mounts immediately before the snapshot/backup operation, and then unmounts. Any thoughts on this?

    Also, another consideration: with the new mount point for <root_fs> I can create either a folder or a subvolume to hold my snapshots. Kubicle suggests a subvolume, and I can see that this could be convenient since you can mount it directly, but snapshots are also subvolumes, so they would have to be mounted separately anyway, wouldn't they? So you could also just use a folder in <root_fs>? This is actually what Timeshift seems to do.

    So at the moment I have this:
    Code:
    $ sudo btrfs subvolume list /mnt/btrfs_pool/
    ID 256 gen 14941 top level 5 path @
    ID 257 gen 14942 top level 5 path @home
    ID 263 gen 13955 top level 5 path @swap
    ID 270 gen 14800 top level 5 path timeshift-btrfs/snapshots/2021-02-12_17-36-38/@
    ID 271 gen 13930 top level 5 path timeshift-btrfs/snapshots/2021-02-12_17-36-38/@home
    ...
    ID 660 gen 14795 top level 5 path timeshift-btrfs/snapshots/2021-03-15_19-57-21/@
    ID 661 gen 14793 top level 5 path timeshift-btrfs/snapshots/2021-03-15_19-57-21/@home
    ID 662 gen 14942 top level 5 path @snapshots
    ID 663 gen 14942 top level 662 path @snapshots/@.20210315
    ID 664 gen 14942 top level 662 path @snapshots/@home.20210315



      #17
      Originally posted by Chopstick View Post
      Thanks a lot for the extensive explanation, GreyGeek!! That was super-helpful!
      I think my conceptual error was that I did not realize that @ was not the <root_fs>.

      So from this point of view I can see why it is advantageous to mount the <root_fs> somewhere else when doing backups.

      Now, I could just permanently mount the <root_fs> to, say, /mnt/btrfs_pool/ as the btrbk examples seem to suggest, or I could wrap btrbk into a script that just mounts immediately before the snapshot/backup operation, and then unmounts. Any thoughts on this?
      Except for name changes (btr_pool instead of btrfs_pool), the schema in that Snapper config that Kubicle linked to is about the same as what you are proposing. I would NOT leave the <ROOT_FS> permanently mounted to /mnt or any other place. Mounting, doing the snapshot operations, and then unmounting is better, IMO.
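
      A minimal wrapper along those lines might look something like this (just a sketch; the UUID, mount point, and snapshot names are placeholders you'd adjust to your own layout):
      Code:
      #!/bin/bash
      # mount <ROOT_FS> (subvolid=5), take read-only snapshots, then unmount again
      set -e
      POOL=/mnt/btrfs_pool                       # temporary mount point (placeholder)
      UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  # UUID of the btrfs file system (placeholder)
      STAMP=$(date +%Y%m%d)

      mount -o subvolid=5 UUID=$UUID "$POOL"
      mkdir -p "$POOL/snapshots"
      btrfs subvolume snapshot -r "$POOL/@"     "$POOL/snapshots/@.$STAMP"
      btrfs subvolume snapshot -r "$POOL/@home" "$POOL/snapshots/@home.$STAMP"
      umount "$POOL"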

      That snapper config documentation on GitHub that Kubicle linked to also includes this comment:
      When taking a snapshot of @ (mounted at the root /), other subvolumes are not included in the snapshot. Even if a subvolume is nested below @, a snapshot of @ will not include it.
      That's true whether you use snapper or not.

      I have not used btrbk but from what I've read it seems like a good utility.

      Originally posted by Chopstick View Post
      Also, another consideration: with the new mount point for <root_fs> I can create either a folder or a subvolume to hold my snapshots. Kubicle suggests a subvolume, and I can see that this could be convenient since you can mount it directly, but snapshots are also subvolumes, so they would have to be mounted separately anyway, wouldn't they?
      Yes, as explained below.

      Originally posted by Chopstick View Post
      So you could also just use a folder in <root_fs>? This is actually what Timeshift seems to do.

      So at the moment I have this:
      Code:
      $ sudo btrfs subvolume list /mnt/btrfs_pool/
      ID 256 gen 14941 top level 5 path @
      ID 257 gen 14942 top level 5 path @home
      ID 263 gen 13955 top level 5 path @swap
      ID 270 gen 14800 top level 5 path timeshift-btrfs/snapshots/2021-02-12_17-36-38/@
      ID 271 gen 13930 top level 5 path timeshift-btrfs/snapshots/2021-02-12_17-36-38/@home
      ...
      ID 660 gen 14795 top level 5 path timeshift-btrfs/snapshots/2021-03-15_19-57-21/@
      ID 661 gen 14793 top level 5 path timeshift-btrfs/snapshots/2021-03-15_19-57-21/@home
      ID 662 gen 14942 top level 5 path @snapshots
      ID 663 gen 14942 top level 662 path @snapshots/@.20210315
      ID 664 gen 14942 top level 662 path @snapshots/@home.20210315
      If you mount your <ROOT_FS> (called subvolid=5 in the snapper docs) and then browse through @ you will find that it contains /home. If you browse into /home you will find that it is empty. This is because a subvolume (/home, which is @home) cannot be stored directly under another subvolume (/, which is @).

      However, as you saw in my postings, I mounted <ROOT_FS> to /mnt and then created a folder, snapshots, under /mnt, giving /mnt/snapshots. In that snapshots folder I store all my snapshots, which are subvolumes. So, yes, you must have a folder between two subvolumes.

      I don't know what Timeshift is doing with @swap, since BTRFS does not honor swap files or partitions. For @snapshots to be useful it would have to be mounted someplace where folders can be appended, so IDs 663 and 664 look like nonsense to me.

      Below, snapshots is a subdirectory of /mnt, which is <ROOT_FS> or subvolid=5.
      Code:
      :~# mount /dev/disk/by-uuid/ce2b5741-c01e-4b3d-b6ba-401ad7f7fcdf /mnt
      :~# vdir /mnt
      total 0
      drwxr-xr-x 1 root root 208 Feb 20 01:15 @
      drwxr-xr-x 1 root root 130 Mar 14 21:59 snapshots
      So, it's subvolume/snapshots/subvolume (i.e., <ROOT_FS>/snapshots/, holding multiple snapshots of @).
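
      Since every snapshot in that folder is itself a subvolume, you can either browse it in place under the mounted <ROOT_FS> or mount just that one snapshot somewhere else by its path (the snapshot name here is only an example):
      Code:
      # browse a snapshot in place
      :~# ls /mnt/snapshots/@20210315/etc

      # or mount that one snapshot on its own mount point
      :~# mkdir -p /mnt2
      :~# mount -o subvol=snapshots/@20210315 /dev/disk/by-uuid/ce2b5741-c01e-4b3d-b6ba-401ad7f7fcdf /mnt2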

      You don't see @home under /mnt because I used mc to copy everything under /home/jerry to /mnt/@/home:
      Code:
      :~# vdir /mnt/@
      total 28
      drwxr-xr-x 1 root root    0 Jan  9  2020 backup
      lrwxrwxrwx 1 root root    7 Jan  5  2020 bin -> usr/bin
      drwxr-xr-x 1 root root  600 Mar 12 21:32 boot
      drwxr-xr-x 1 root root    0 Jan  5  2020 cdrom
      drwxr-xr-x 1 root root  140 Jan  3  2020 dev
      drwxr-xr-x 1 root root 5324 Mar 15 17:58 etc
      drwxr-xr-x 1 root root   10 Aug  6  2020 home
      ....
      -rwxr-xr-x 1 root root 1421 Oct 19 21:46 make_snapshot.sh
      ....
      drwxrwxrwt 1 root root 1806 Mar 15 20:31 tmp
      drwxr-xr-x 1 root root  126 Dec 31 13:08 usr
      drwxr-xr-x 1 root root  120 Feb 20 01:15 var
      and then in /etc/fstab I commented out the stanza that mounted @home.
      Code:
      #UUID=ce2b5741-c01e-4b3d-b6ba-401ad7f7fcdf /home   btrfs   defaults,subvol=@home 0  2
      and finally I deleted @home.

      That's why my backup script only backs up @. Everything is in it. Why did I do that? Because for several years I was constantly making snapshots of @ and @home using the same timestamp suffix to keep them a matched pair. One doesn't want to mix up the @somedate snapshot with @homeotherdate snapshot for obvious reasons. If I had some data that was independent of either @ or @home I'd create @data as I explained in a previous post and keep the data in there.

      I'm going to browse the source code for btrbk and see what's in it.
      "A nation that is afraid to let its people judge the truth and falsehood in an open market is a nation that is afraid of its people.”
      – John F. Kennedy, February 26, 1962.



        #18
        Originally posted by Chopstick View Post
        I think my conceptual error was that I did not realize that @ was not the <root_fs>.
        That took me a while to grasp, too. The root not being the root means two concepts, both called "root", have to be kept separate; it's counter-intuitive and confusing.
        Ubuntu's installer putting the root in a subvolume called "@" is totally useful, but it adds to the confusion, "@" being a single character just like "/". I think the installer should ask what subvolume name to use. I always rename @ and @home to something else, to allow more Ubuntu installs. Doing this requires these steps (a rough command sketch follows the list):
        1. Mounting the file system root somewhere so that one can see the @ and @home names in a directory, then using sudo mv in that directory.
        2. Adjusting /etc/fstab so that the new subvolume names are used in the "subvol=" options. Using labels in /etc/fstab greatly improves its readability.
        3. Telling grub about it. I manually maintain my grub config, and IMO this practice is much simpler, easier and more reliable than using the standard update-grub scripts. I boot Kubuntu (on @r, with the btrfs called "main") with this grub stanza:
          Code:
          menuentry 'Kubuntu Groovy' {
           search --set=root --label "main"
           linux /@r/boot/vmlinuz root=LABEL=main ro rootflags=subvol=@r
           initrd /@r/boot/initrd.img
          }
          However, running sudo update-grub should fix it up, too.
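
        A rough sketch of steps 1 and 2 (the new names @r and @rhome are just examples, and "main" is the label from my setup):
        Code:
        # step 1: mount the file system root and rename the subvolumes
        sudo mount -o subvolid=5 LABEL=main /mnt
        sudo mv /mnt/@ /mnt/@r
        sudo mv /mnt/@home /mnt/@rhome
        # step 2: edit the renamed root's fstab so subvol=@ becomes subvol=@r and subvol=@home becomes subvol=@rhome
        sudo nano /mnt/@r/etc/fstab
        sudo umount /mnt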

        Every six months or so, when a new release is out, I keep my options open by making snapshots and making both the old and the new release bootable (following the steps above) before running do-release-upgrade on one of them. I usually do a clean install as well, which goes into @ and @home. Then I can boot to before the upgrade, to after the upgrade, and to a fresh install. I've still got 20.04 bootable, but it's not been updated, so I should delete it.
        Regards, John Little



          #19
          Watching the video, the Users > Exclude all files and Filters > Exclude pattern settings are kind of confusing. I have mine set up for Schedule > Daily > Keep 5. It has yet to run on its own in the 3 days since I installed it; I did do a couple of manual runs myself.



            #20
            Thanks a lot for the help guys! I think I got my system down now.

            I am using btrbk to automate cycling/scheduling (running once a day) to back up my system to an external HDD.
            This is done inside a script which mounts/unmounts the main btrfs file system to /mnt/btrfs_pool before/after the backup; snapshots are stored in folders under the <root_fs> of the system drive as well as the external backup HDD.
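
            A minimal btrbk.conf for this kind of setup looks roughly like this (the paths and retention values here are illustrative, not an exact copy of what I run):
            Code:
            snapshot_preserve_min   2d
            snapshot_preserve       7d
            target_preserve_min     no
            target_preserve         14d 8w

            volume /mnt/btrfs_pool
              snapshot_dir btrbk_snaps
              subvolume @
                target send-receive /mnt/backup/btrbk
              subvolume @home
                target send-receive /mnt/backup/btrbk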

            At the moment I still have to mount my backup drive manually, though, since it has an encrypted LVM - I suppose I could set up my laptop as a trusted machine, though... but I also just tend to have it mounted all the time, so not a big deal...

            I'd be happy to share my script (and btrbk config) in case anyone is interested.

            Now, to circle back to Timeshift:
            1) I suppose I could disable backup of the home folder in Timeshift (which is also the default, I believe), since my home is backed up to the external drive and I also have snapshots available in a subdirectory of the btrfs <root_fs>.
            2) In this context I noticed something funny: Timeshift apparently has the <root_fs> permanently mounted under '/run/timeshift/backup/', since I can see all the content of the <root_fs> there, including my new btrbk_snaps subfolder! Not sure what to think of this...
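
            (You can confirm what it keeps mounted there with findmnt; if it really is the whole pool, the options column should show subvolid=5 / subvol=/.)
            Code:
            $ findmnt /run/timeshift/backup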

            rdonnelly, which video are you referring to?
            For me, Timeshift is definitely making its scheduled backups.



              #21
              Originally posted by Chopstick View Post
              2) In this context I noticed something funny: Timeshift apparently has the <root_fs> permanently mounted under '/run/timeshift/backup/', since I can see all the content of the <root_fs> there, including my new btrbk_snaps subfolder! Not sure what to think of this...
              I set up all my systems with the root fs mounted at /subvol, which also contains a snapshots folder. I have 4 to 8 separate installs at any given time, and keeping snapshots in their own folder keeps the root mount cleaner looking.
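
              The fstab entry for a permanent top-level mount like that is a single line, something like this (the UUID is a placeholder):
              Code:
              UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /subvol  btrfs  defaults,subvolid=5  0  0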

              I do automated snapshots and backups but I wrote my own script rather than relying on a third party tool.




                #22
                Chopstick, I am referring to the video in this link. He leaves users and filters at their default settings.

                https://teejeetech.com/timeshift/





                    #24
                    Originally posted by Chopstick View Post
                    ...
                    Now, to circle back ...
                    That sounds familiar!



                    Originally posted by Chopstick View Post
                    ...
                    2) In this context I noticed something funny: Timeshift apparently has the <root_fs> permanently mounted under '/run/timeshift/backup/', since I can see all the content of the <root_fs> there, including my new btrbk_snaps subfolder! Not sure what to think of this...
                    THAT is exactly what I was referring to. If I recall correctly, if you mount your HD under /mnt you will see
                    /mnt/@
                    /mnt/@home
                    and any other directories or subvolumes you've created under <ROOT_FS>. IF you use btrfs to delete /run/timeshift/backup/@ it will take out /mnt/@ as well and hang your system. At least that's what happened when I uninstalled TimeShift first and then attempted to delete its snapshots. Searching the web, I was informed that to avoid deleting your <ROOT_FS>/@ subvolume you should use TimeShift to delete ALL your snapshots first. That's why I pointed that out in a previous post on this thread, and why I won't use TimeShift for making BTRFS snapshots.
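
                    If memory serves, letting TimeShift do the deleting looks like this from the command line (the snapshot name is only an example; check --list first):
                    Code:
                    $ sudo timeshift --list
                    $ sudo timeshift --delete --snapshot '2021-03-15_19-57-21'
                    $ sudo timeshift --delete-all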

                    Btrbk looks good, though. IF I hadn't already written my own script I would probably adopt it, or Snapper. Snapper initially only snapshots @, which is all I have, so its default config is OK by me now. With @ and @home I didn't like it so much because of the possibility of mismatching an @home snapshot with a previous or later @ snapshot.

                    Allow me to emphasize again that just because one can make hundreds of snapshots doesn't mean taking that many is a good idea. While researching this I got in contact with one of the BTRFS developers and his recommendation was no more than 10 snapshots per subvolume. For @ and @home that means 20 in total.

                    The reason is this: although a new snapshot is nearly empty, containing only metadata, as your system changes over time the existing snapshots will accumulate differences (CoW) and gradually expand. Eventually the older snapshots could get as large as the parent subvolume, especially if you are keeping snapshots for months, a year, or "forever".
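
                    One way to keep an eye on that growth is "btrfs filesystem du": its Exclusive column shows roughly how much data only that snapshot still pins, i.e. what deleting it would free (the path is wherever you keep your snapshots):
                    Code:
                    :~# btrfs filesystem du -s /mnt/snapshots/*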

                    My system SSD is 500GB. "Btrfs fi usage /" shows
                    Code:
                    :~$ btrfs fi usage /
                    WARNING: cannot read detailed chunk info, per-device usage will not be shown, run as root
                    Overall:
                        Device size:                 465.76GiB
                        Device allocated:            134.03GiB
                        Device unallocated:          331.73GiB
                        Device missing:              465.76GiB
                        Used:                        110.63GiB
                        Free (estimated):            353.47GiB      (min: 353.47GiB)
                        Data ratio:                       1.00
                        Metadata ratio:                   1.00
                        Global reserve:              204.16MiB      (used: 0.00B)

                    Data,single: Size:131.00GiB, Used:109.26GiB (83.40%)
                    Metadata,single: Size:3.00GiB, Used:1.37GiB (45.77%)
                    System,single: Size:32.00MiB, Used:16.00KiB (0.05%)
                    My "Used:" shows 110.63GiB, which is 83.40% of my "Device Allocated:". It would only take FOUR snapshots, if they filled up toa total of 440GiB, to eat 95% of my SSD's 465.76GiB available space. That would bring my system to an absolute crawl. The crawl would probably happen before I fill 4 snapshots because extents are not freed up until fstrim runs, which is once a week on my system.

                    That's why my script always deletes the oldest of my snapshots every time I create a new one. I only keep five available.
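
                    The rotation itself can be as simple as sorting the snapshot names and deleting everything older than the newest five, something along these lines (this assumes the names end in a timestamp so they sort chronologically; the path is illustrative):
                    Code:
                    # delete all but the five newest @ snapshots
                    ls -d /mnt/snapshots/@* | sort | head -n -5 | while read -r snap; do
                        btrfs subvolume delete "$snap"
                    done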
                    "A nation that is afraid to let its people judge the truth and falsehood in an open market is a nation that is afraid of its people.”
                    – John F. Kennedy, February 26, 1962.



                      #25
                      Does Timeshift create a crontab? I do not have one in /var/spool/cron/crontabs/ for timeshift.



                        #26
                        Found the timeshift crontab in cron.d. I will have to contact the developer and see why my backups don't run.
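
                        For anyone else looking for it, this lists any Timeshift entries under /etc/cron.d (the exact file names can vary by version):
                        Code:
                        $ grep -ril timeshift /etc/cron.d/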



                          #27
                          Out of the blue, it just started backing up. Not sure why, but it has been doing daily backups for a week now.

