Anyone try Timeshift yet?

This topic is closed.

  • rdonnelly
    replied
Out of the blue, it just started backing up. Not sure why, but it has been doing daily backups for a week now.



  • rdonnelly
    replied
Found the timeshift crontab in cron.d. I will have to contact the developer and see why my backups don't run.



  • rdonnelly
    replied
Does Timeshift create a crontab? I do not have one in /var/spool/cron/crontabs/ for timeshift.
    Last edited by rdonnelly; Mar 17, 2021, 09:10 AM.



  • GreyGeek
    replied
    Originally posted by Chopstick View Post
    ...
    Now, to circle back ...
That sounds familiar!



    Originally posted by Chopstick View Post
    .
    ...
    2) In this context I noticed something funny: Timeshift apparently has the <root_fs> permanently mounted under '/run/timeshift/backup/', since I can see all the content of the <root_fs> there, including my new btrbk_snaps subfolder! Not sure what to think of this...
    THAT is exactly what I was referring to. If I recall correctly, if you mount your HD under /mnt you will see
    /mnt/@
    /mnt/@home
and any other directories or subvolumes you've created under <ROOT_FS>. IF you use btrfs to delete /run/timeshift/backup/@ it will take out /mnt/@ as well and hang your system. At least that's what happened when I uninstalled TimeShift first and then attempted to delete its snapshots. Searching the web, I was informed that to avoid deleting your <ROOT_FS>/@ subvolume you should use TimeShift to delete ALL your snapshots first. That's why I pointed that out in a previous post on this thread, and why I won't use TimeShift for making BTRFS snapshots.

Btrbk looks good, though. If I hadn't already written my own script I would probably adopt it or Snapper. Snapper initially only snapshots @, which is all I have, so its default config is OK by me now. With @ and @home I didn't like it so much because of the possibility of mismatching a @home subvolume with a previous or later @ subvolume.

    Allow me to emphasize again that just because one can make hundreds of snapshots doesn't mean taking that many is a good idea. While researching this I got in contact with one of the BTRFS developers and his recommendation was no more than 10 snapshots per subvolume. For @ and @home that means 20 in total.

The reason is this: although a new snapshot is nearly empty, containing only metadata, as your system changes over time the existing snapshots will collect changes (CoW) and gradually expand. Eventually the older snapshots could get as large as the parent subvolume, especially if you are keeping snapshots for months, a year, or "forever".

    My system SSD is 500GB. "Btrfs fi usage /" shows
    Code:
:~$ btrfs fi usage /
WARNING: cannot read detailed chunk info, per-device usage will not be shown, run as root
Overall:
    Device size:                 465.76GiB
    Device allocated:            134.03GiB
    Device unallocated:          331.73GiB
    Device missing:              465.76GiB
    Used:                        110.63GiB
    Free (estimated):            353.47GiB      (min: 353.47GiB)
    Data ratio:                       1.00
    Metadata ratio:                   1.00
    Global reserve:              204.16MiB      (used: 0.00B)

Data,single: Size:131.00GiB, Used:109.26GiB (83.40%)
Metadata,single: Size:3.00GiB, Used:1.37GiB (45.77%)
System,single: Size:32.00MiB, Used:16.00KiB (0.05%)
My "Used:" shows 110.63GiB, roughly 83% of my "Device allocated:". It would only take FOUR snapshots, if they filled up to a total of 440GiB, to eat 95% of my SSD's 465.76GiB of available space. That would bring my system to an absolute crawl. The crawl would probably happen before 4 snapshots filled up, because extents are not freed until fstrim runs, which is once a week on my system.

    That's why my script always deletes the oldest of my snapshots every time I create a new one. I only keep five available.
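GreyGeek's script itself is not posted in this thread, but the "keep only five" rotation he describes can be sketched roughly as below. The snapshot directory, the @YYYYMMDDHHMMSS naming, and the helper name `prune_snapshots` are all assumptions for illustration, not his actual script.

```shell
# Hypothetical sketch of a keep-only-N snapshot rotation.
# prune_snapshots only PRINTS the snapshots that exceed the retention
# count (oldest first, by sortable name), so the destructive step is
# left to the caller.
prune_snapshots() {
    snap_dir=$1
    keep=$2
    # head -n -N (drop the last N lines) is GNU coreutils.
    ls -d "$snap_dir"/@* 2>/dev/null | sort | head -n -"$keep"
}

# Intended use (requires root and <ROOT_FS> mounted at /mnt):
#   btrfs subvolume snapshot -r /mnt/@ "/mnt/snapshots/@$(date +%Y%m%d%H%M%S)"
#   prune_snapshots /mnt/snapshots 5 | while read -r old; do
#       btrfs subvolume delete -C "$old"
#   done
```

Separating "decide what to delete" from "delete it" makes the rotation easy to dry-run before pointing it at real subvolumes.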



  • rdonnelly
    replied
    Chopstick, I am referring to the video in this link. He leaves users and filters at their default settings.

    https://teejeetech.com/timeshift/





  • oshunluvr
    replied
    Originally posted by Chopstick View Post
    2) In this context I noticed something funny: Timeshift apparently has the <root_fs> permanently mounted under '/run/timeshift/backup/', since I can see all the content of the <root_fs> there, including my new btrbk_snaps subfolder! Not sure what to think of this...
I set up all my systems with the root fs mounted at /subvol, which also contains a folder /snapshots. I have 4 to 8 separate installs at any given time, and keeping snapshots in their own folder keeps the root mount cleaner looking.

    I do automated snapshots and backups but I wrote my own script rather than relying on a third party tool.



  • Chopstick
    replied
    Thanks a lot for the help guys! I think I got my system down now.

    I am using btrbk to automate cycling/scheduling (running once a day) to back up my system to an external HDD.
    This is done inside a script which mounts/unmounts the main btrfs file system to /mnt/btrfs_pool before/after the backup; snapshots are stored in folders under the <root_fs> of the system drive as well as the external backup HDD.

    At the moment I still have to mount my backup drive manually, though, since it has an encrypted LVM - I suppose I could set up my laptop as a trusted machine, though... but I also just tend to have it mounted all the time, so not a big deal...

    I'd be happy to share my script (and btrbk config) in case anyone is interested.
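For readers following along, a btrbk configuration for a layout like the one described here might look roughly as follows. The mount point, snapshot directory, target path, and retention values are illustrative assumptions, not Chopstick's actual config; see btrbk.conf(5) for the real option semantics.

```text
# /etc/btrbk/btrbk.conf -- hypothetical sketch
timestamp_format        long
snapshot_preserve_min   2d
snapshot_preserve       7d
target_preserve         14d

volume /mnt/btrfs_pool
  snapshot_dir btrbk_snaps
  subvolume @
    target send-receive /mnt/backup_hdd/btrbk
  subvolume @home
    target send-receive /mnt/backup_hdd/btrbk
```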

    Now, to circle back to Timeshift:
1) I suppose I could disable backup of the home folder in Timeshift (which is also the default, I believe), since my home is backed up to the external drive and I also have snapshots available in a subdirectory of the btrfs root_fs.
    2) In this context I noticed something funny: Timeshift apparently has the <root_fs> permanently mounted under '/run/timeshift/backup/', since I can see all the content of the <root_fs> there, including my new btrbk_snaps subfolder! Not sure what to think of this...

rdonnelly, which video are you referring to?
    For me, Timeshift is definitely making its scheduled backups.



  • rdonnelly
    replied
Watching the video, it's kind of confusing: Users > Exclude all files, Filters > exclude pattern. I have mine set up for Schedule > Daily > Keep 5. It has yet to run on its own in the 3 days since I installed it; I did do a couple of manual runs on my own.



  • jlittle
    replied
    Originally posted by Chopstick View Post
    I think my conceptual error was that I did not realize that @ was not the <root_fs>.
It took me a while to grasp that, too. Root not being the root means the two concepts, both called root, have to be kept separate; counter-intuitive and confusing.
Ubuntu's installer putting the root in a subvolume called "@" is totally useful but adds to the confusion, "@" being a single character just like "/". I think the installer should ask what subvolume name to use. I always rename @ and @home to something else, to allow more Ubuntu installs. Doing this requires these steps:
    1. Mounting the file system root somewhere so that one can see the @ and @home names in a directory, then using sudo mv in that directory.
    2. Adjusting /etc/fstab so that the new subvolume names are used in the "subvol=" options. Using labels in /etc/fstab greatly improves its readability.
    3. Telling grub about it. I manually maintain my grub config, and IMO this practice is much simpler, easier and more reliable than using the standard update-grub scripts. I boot Kubuntu (on @r, with the btrfs called "main") with this grub stanza:
      Code:
      menuentry 'Kubuntu Groovy' {
       search --set=root --label "main"
       linux /@r/boot/vmlinuz root=LABEL=main ro rootflags=subvol=@r
       initrd /@r/boot/initrd.img
      }
      However, running sudo update-grub should fix it up, too.
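For step 2, fstab stanzas using labels and renamed subvolumes might look like this; the label "main" and the names @r/@rhome are illustrative, not anyone's actual fstab.

```text
# illustrative /etc/fstab stanzas, assuming the filesystem is
# labelled "main" and @/@home were renamed to @r/@rhome
LABEL=main  /      btrfs  defaults,subvol=@r      0  1
LABEL=main  /home  btrfs  defaults,subvol=@rhome  0  2
```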

    Every six months or so when a new release is out, I preserve options by making snapshots and making both old and new bootable by following the steps above, before running do-release-upgrade on one of them. I usually do a clean install as well, which goes into @ and @home. Then I can boot to before and after the upgrade, and to a fresh install. I've still got 20.04 bootable, but it's not been updated so I should delete it.



  • GreyGeek
    replied
    Originally posted by Chopstick View Post
    Thanks a lot for the extensive explanation, GreyGeek!! That was super-helpful!
    I think my conceptual error was that I did not realize that @ was not the <root_fs>.

    So from this point-of-view I can see why it is advantageous to mount the <root_fs> to somewhere else when doing backups.

    Now, I could just permanently mount the <root_fs> to, say, /mnt/btrfs_pool/ as the btrbk examples seem to suggest, or I could wrap btrbk into a script that just mounts immediately before the snapshot/backup operation, and then unmounts. Any thoughts on this?
Except for name changes (btr_pool instead of btrfs_pool), the schema in that Snapper config that Kubicle linked to is about the same as what you are proposing. I would NOT leave the <ROOT_FS> permanently mounted to /mnt or any other place. Mounting, doing snapshot operations, and then unmounting is better, IMO.
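The mount, snapshot, unmount cycle recommended here can be sketched as a single function. The UUID is the example one quoted elsewhere in this thread; the paths, naming, and the dry-run parameter are assumptions for illustration.

```shell
# Sketch of a mount -> snapshot -> unmount cycle, so <ROOT_FS> is never
# left permanently mounted. Pass "echo" as $1 to print the commands
# instead of executing them (no root or btrfs needed for a dry run).
snapshot_cycle() {
    run=${1:-}   # e.g. "echo" for a dry run; empty to really execute
    uuid=ce2b5741-c01e-4b3d-b6ba-401ad7f7fcdf   # example UUID from this thread
    stamp=$(date +%Y%m%d%H%M%S)
    $run mount "/dev/disk/by-uuid/$uuid" /mnt &&
    $run btrfs subvolume snapshot -r /mnt/@ "/mnt/snapshots/@$stamp" &&
    $run umount /mnt
}
```

Run `snapshot_cycle echo` to preview the sequence, or `snapshot_cycle` as root for real use.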

    That snapper config documentation on GitHub that Kubicle linked to also includes this comment:
    When taking a snapshot of @ (mounted at the root /), other subvolumes are not included in the snapshot. Even if a subvolume is nested below @, a snapshot of @ will not include it.
    That's true regardless if you use snapper or not.

    I have not used btrbk but from what I've read it seems like a good utility.

    Originally posted by Chopstick View Post
Also another consideration: with the new mount point for <root_fs> I can create either a folder or a subvolume to hold my snapshots - Kubicle suggests a subvolume, and I can see that this could be convenient, since you can mount it directly, but then snapshots are also subvolumes, so they would have to be mounted separately anyway, wouldn't they?
    Yes, as explained below

    Originally posted by Chopstick View Post
    So you could also just use a folder in <root_fs>? This is actually what Timeshift seems to do.

    So at the moment I have this:
    Code:
    $ sudo btrfs subvolume list /mnt/btrfs_pool/
    ID 256 gen 14941 top level 5 path @
    ID 257 gen 14942 top level 5 path @home
    ID 263 gen 13955 top level 5 path @swap
    ID 270 gen 14800 top level 5 path timeshift-btrfs/snapshots/2021-02-12_17-36-38/@
    ID 271 gen 13930 top level 5 path timeshift-btrfs/snapshots/2021-02-12_17-36-38/@home
    ...
    ID 660 gen 14795 top level 5 path timeshift-btrfs/snapshots/2021-03-15_19-57-21/@
    ID 661 gen 14793 top level 5 path timeshift-btrfs/snapshots/2021-03-15_19-57-21/@home
    ID 662 gen 14942 top level 5 path @snapshots
    ID 663 gen 14942 top level 662 path @snapshots/@.20210315
    ID 664 gen 14942 top level 662 path @snapshots/@home.20210315
If you mount your <ROOT_FS> (called subvolid=5 in the snapper docs) and then browse through @ you will find that it contains /home. If you browse into /home you will find that it is empty. This is because a subvolume (/home, which is @home) cannot be stored directly under another subvolume (/, which is @).

However, as you saw in my postings, I mounted <ROOT_FS> to /mnt and then created a folder, snapshots, under /mnt to give /mnt/snapshots. In snapshots I store all my snapshots, which are subvolumes. So, yes, you must have a folder between two subvolumes.

I don't know what TimeShift is doing with @swap since BTRFS does not honor swap files or partitions. For @snapshots to be useful it must be mounted someplace where folders can be appended, so IDs 663 and 664 look like nonsense to me.

    Below, snapshots is a subdirectory of /mnt, which is <ROOT_FS> or subvolid=5.
    Code:
:~# mount /dev/disk/by-uuid/ce2b5741-c01e-4b3d-b6ba-401ad7f7fcdf /mnt
:~# vdir /mnt
total 0
drwxr-xr-x 1 root root 208 Feb 20 01:15 @
drwxr-xr-x 1 root root 130 Mar 14 21:59 snapshots
    So, it's subvolume/snapshots/subvolume (i.e., <ROOT_FS>/snapshots/, holding multiple snapshots of @).

    You don't see @home under /mnt because I used mc to copy everything under /home/jerry to /mnt/@/home:
    Code:
:~# vdir /mnt/@
total 28
drwxr-xr-x 1 root root    0 Jan  9  2020 backup
lrwxrwxrwx 1 root root    7 Jan  5  2020 bin -> usr/bin
drwxr-xr-x 1 root root  600 Mar 12 21:32 boot
drwxr-xr-x 1 root root    0 Jan  5  2020 cdrom
drwxr-xr-x 1 root root  140 Jan  3  2020 dev
drwxr-xr-x 1 root root 5324 Mar 15 17:58 etc
drwxr-xr-x 1 root root   10 Aug  6  2020 home
....
-rwxr-xr-x 1 root root 1421 Oct 19 21:46 make_snapshot.sh
....
drwxrwxrwt 1 root root 1806 Mar 15 20:31 tmp
drwxr-xr-x 1 root root  126 Dec 31 13:08 usr
drwxr-xr-x 1 root root  120 Feb 20 01:15 var
    and then in /etc/fstab I commented out the stanza that mounted @home.
    Code:
#UUID=ce2b5741-c01e-4b3d-b6ba-401ad7f7fcdf /home   btrfs   defaults,subvol=@home 0  2
    and finally I deleted @home.

That's why my backup script only backs up @: everything is in it. Why did I do that? Because for several years I was constantly making snapshots of @ and @home using the same timestamp suffix to keep them a matched pair. One doesn't want to mix up an @somedate snapshot with an @homeotherdate snapshot, for obvious reasons. If I had some data that was independent of either @ or @home I'd create @data, as I explained in a previous post, and keep the data in there.
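The matched-pair bookkeeping described here boils down to generating one timestamp and reusing it for both snapshot names. A minimal sketch (the paths and the helper name `pair_names` are assumptions, not GreyGeek's script):

```shell
# Produce matched @/@home snapshot paths sharing one timestamp suffix,
# so the pair can never drift apart. An explicit stamp may be passed
# in; otherwise the current time is used.
pair_names() {
    stamp=${1:-$(date +%Y%m%d%H%M%S)}
    echo "/mnt/snapshots/@$stamp /mnt/snapshots/@home$stamp"
}

# Intended use (requires root and <ROOT_FS> mounted at /mnt):
#   set -- $(pair_names)
#   btrfs subvolume snapshot -r /mnt/@     "$1"
#   btrfs subvolume snapshot -r /mnt/@home "$2"
```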

I'm going to browse the source code for btrbk and see what's in it.



  • Chopstick
    replied
    Thanks a lot for the extensive explanation, GreyGeek!! That was super-helpful!
    I think my conceptual error was that I did not realize that @ was not the <root_fs>.

    So from this point-of-view I can see why it is advantageous to mount the <root_fs> to somewhere else when doing backups.

    Now, I could just permanently mount the <root_fs> to, say, /mnt/btrfs_pool/ as the btrbk examples seem to suggest, or I could wrap btrbk into a script that just mounts immediately before the snapshot/backup operation, and then unmounts. Any thoughts on this?

Also another consideration: with the new mount point for <root_fs> I can create either a folder or a subvolume to hold my snapshots - Kubicle suggests a subvolume, and I can see that this could be convenient, since you can mount it directly, but then snapshots are also subvolumes, so they would have to be mounted separately anyway, wouldn't they? So you could also just use a folder in <root_fs>? This is actually what Timeshift seems to do.

    So at the moment I have this:
    Code:
    $ sudo btrfs subvolume list /mnt/btrfs_pool/
    ID 256 gen 14941 top level 5 path @
    ID 257 gen 14942 top level 5 path @home
    ID 263 gen 13955 top level 5 path @swap
    ID 270 gen 14800 top level 5 path timeshift-btrfs/snapshots/2021-02-12_17-36-38/@
    ID 271 gen 13930 top level 5 path timeshift-btrfs/snapshots/2021-02-12_17-36-38/@home
    ...
    ID 660 gen 14795 top level 5 path timeshift-btrfs/snapshots/2021-03-15_19-57-21/@
    ID 661 gen 14793 top level 5 path timeshift-btrfs/snapshots/2021-03-15_19-57-21/@home
    ID 662 gen 14942 top level 5 path @snapshots
    ID 663 gen 14942 top level 662 path @snapshots/@.20210315
    ID 664 gen 14942 top level 662 path @snapshots/@home.20210315



  • GreyGeek
    replied
    Originally posted by Chopstick View Post
    This was actually the question I had to your script, GreyGeek: why do you mount to /mnt/@ before you make the snapshot? Why does making a snapshot to a subfolder and then send-receiving it to an external HD not work to restore your system? Would that work for @home?
    My mount command is
    mount /dev/disk/by-uuid/ce2b5741-c01e-4b3d-b6ba-401ad7f7fcdf /mnt
    so I am not specifically mounting @. /mnt becomes the <ROOT_FS> (root filesystem) when I mount the disk to /mnt. One can mount @ or @home specifically to a mount point, as shown in my (and your) /etc/fstab file:
    Code:
UUID=ce2b5741-c01e-4b3d-b6ba-401ad7f7fcdf /               btrfs   defaults,subvol=@ 0       1
# /home was on /dev/sda1 during installation
#UUID=ce2b5741-c01e-4b3d-b6ba-401ad7f7fcdf /home           btrfs   defaults,subvol=@home 0       2
    Since I copied the contents of @home into /mnt/@/home I no longer need to mount @home, so I commented it out and my fstab only mounts @ as "/". Notice that in the past when I did mount @home it was being mounted as /home in fstab.

During installation the HD is formatted with BTRFS and the HD becomes the root filesystem, <ROOT_FS>. It is the TOP of the BTRFS binary tree, the parent subvolume. The first two child subvolumes created during the install are @ and @home. The names @ and @home are a convention arbitrarily chosen by Canonical; they could be named anything. You could, for example, make a rw snapshot of @ and call it Chopstick:
    btrfs su snapshot /mnt/@ /mnt/Chopstick

(notice that since "-r" was not used, Chopstick is read-write)
    then open /etc/fstab and change subvol=@ to subvol=Chopstick and save it. Then reboot.
When you open a Konsole and do "vdir /" you will still see the usual hierarchical structure, with "/" at the top. Mount your HD to /mnt and you will see /mnt/Chopstick and /mnt/@home. You will also see /mnt/@, but you can delete @:
    btrfs subvol delete -C /mnt/@
    and your system will continue to work fine, even after rebooting again, because @ is no longer being used.

You could add another child leaf to the parent leaf, <ROOT_FS>, by creating, say, @data:
btrfs subvol create /mnt/@data
(note: the -c/-C commit options belong to "btrfs subvol delete", not to create)
Now <ROOT_FS> has three child leaves under it.
    Create a place to mount it to. Under your home account add a subdirectory, say, Data.
    Then, in /etc/fstab, bind @data to /home/chopstick/Data

    Code:
UUID=yourhduuid /home/chopstick/Data    btrfs   defaults,subvol=@data 0       3
    which is AFTER the stanza in fstab that mounts @home to /home.
    Now you have a directory under your home account in which you can store data which is, for all practical purposes, separate from either / or /home/chopstick. When you create a snapshot of it:
    btrfs subvol snapshot -r /mnt/@data /backup/snapshots/@data2021031513455
    that snapshot does not include anything in / or /home/chopstick, just what is in /home/chopstick/Data.

    Originally posted by Chopstick View Post
    I am a bit confused why everybody seems to mount the subvolumes first, even though they are already mounted (since the system is running).
Notice that in fstab what is being mounted is @ and @home, not the <ROOT_FS>. Ergo, from within / or /home you have no access to <ROOT_FS>, because you cannot "cd .." any higher while at /. By mounting the HD to /mnt, and not a subvolume leaf, you gain access to the <ROOT_FS> as /mnt. You are free to work on it as you please. If you use mv to move @ to @old:
    mv /mnt/@ /mnt/@old
    the system continues to run normally. If you delete @ your system will hang and you will have to use a LiveUSB to recover using a previous snapshot.

You can make a rw snapshot of some previously created read-only snapshot, say @2021031245, name it @, and make it read-write by not using "-r":
btrfs su snapshot /mnt/snapshots/@2021031245 /mnt/@ (notice the missing "-r")
Then you can umount /mnt and reboot. You have successfully rolled @ back to @2021031245.


    Originally posted by Chopstick View Post
    Also, is "@" just a convention to name subvolumes? When I created snapshots of my home, I just put them into /Snapshots and called them home_* ... no "@" - is there a problem with that? (I will probably use the @ in the future, just for visual clarity, but I am wondering where this is coming from.)
    Yes, the use of "@" is just a Canonical naming convention. The only caveat is:
    The btrfs-tools command ''set-default'' will break Ubuntu's layout

    Since Ubuntu is set up to always keep the top of the btrfs tree as the default mounting subvolume it will break when using the btrfs-tools command set-default, since this command is specifically designed to change the default mounting subvolume.

    The mount options for / and /home described above relies on the fact that the corresponding subvolumes @ and @home can be located below the default mounting subvolume, and if set-default is used, this is no longer the case.

    If you have accidentally used set-default and want to revert, you can do the following

    sudo mount /dev/sdX# /mnt
    sudo btrfs subvolume set-default 5 /mnt

    since the id 5 is a permanent alias for the top of the btrfs tree.
    The "default mounting subvolume", i.e., the "top of the tree", is the <ROOT_FS>, i.e., the BTRFS formatted HD.

BTW, never use /dev/sdX# to mount an HD. Use the unique UUID designation, which can be obtained by running "blkid" in a Konsole. Why? Because if you mount or remove a drive the system can switch the /dev/sdX designation of a drive without notification. The UUID was created to avoid that. When I was trying out BTRFS's raid on two HDs they were /dev/sda and /dev/sdb. I then removed my CDROM and put in an HDCADDY, into which I plugged a 3rd HD. While perusing the file structure I noticed that my raid was now composed of /dev/sda and /dev/sdc, and the 3rd drive was now /dev/sdb. After I switched the raid to two SINGLETONs I noticed that the /dev/sdX designations remained the same, and that is how they are to this day, three years later.
    Last edited by GreyGeek; Mar 15, 2021, 01:19 PM.



  • Chopstick
    replied
Originally posted by GreyGeek View Post
HOWEVER, snapper still has one major, glaring fault, IMO. It stores the root snapshots under / and the home snapshots under /home. These snapshots are accessible ONLY if you can boot into your system, since they are inside @ or @home. IF @ doesn't load then you are out of luck and had better have a copy of @ on another HD, so you can go through the process of mounting your HD from a LiveUSB and then overwriting @ and @home with your backup versions of @ and @home.
    This was actually the question I had to your script, GreyGeek: why do you mount to /mnt/@ before you make the snapshot? Why does making a snapshot to a subfolder and then send-receiving it to an external HD not work to restore your system? Would that work for @home?
I am a bit confused why everybody seems to mount the subvolumes first, even though they are already mounted (since the system is running).

    Also, is "@" just a convention to name subvolumes? When I created snapshots of my home, I just put them into /Snapshots and called them home_* ... no "@" - is there a problem with that? (I will probably use the @ in the future, just for visual clarity, but I am wondering where this is coming from.)
    Last edited by Chopstick; Mar 15, 2021, 10:47 AM.



  • Chopstick
    replied
    Thanks for the info, guys! I think I understand the process much better now!

@GreyGeek your script was very helpful to me in figuring out how this works! The answer to my question above is essentially that, within a volume, BTRFS will not duplicate files between snapshots unless they are modified, and it will keep files around as long as any snapshot references them. So, if we do incremental backups on an external HD we can delete the first or any number of snapshots, and that will not compromise the integrity of the other snapshots. I tested this myself.
    I have another question for your script that comes in the next post.

    @jlittle I agree, doing this yourself a couple of times is important, to understand the process - I did that, and now I am ready to move on to a more convenient solution. Initially I was actually under the impression that Time Machine was a full backup solution and not just system restore, so I assumed Timeshift could do that, too.

    What do you guys think of btrbk:
    https://work-work.work/blog/2018/12/...kup-btrfs.html
    E.g. as opposed to snapper? It seems quite simple.

As for Timeshift itself: I guess it is still useful, but my initial thought of just mirroring/send-receiving the snapshots Timeshift makes won't work, since those do not seem to be read-only, which appears to be necessary for send-receive.
    So, since I need to make a separate set of snapshots, it may not make sense to keep my @home in it as well, especially since I also have timeshift-autosnap-apt installed and I feel like having @home included is slowing down apt...

    Do you guys use timeshift-autosnap-apt and grub-btrfs at all?
    Last edited by Chopstick; Mar 15, 2021, 10:48 AM.

