    problem using incremental backup to external hd

    I was doing my weekly backup of @ and @home to an external HD when I ran into what may be a problem: the @ backup went fine, but the incremental backup of @home to the external hard disk ran for a few seconds, then gave me the following error message before returning to the prompt:

    ERROR: failed to clone extents to steve/.mozilla/firefox/l3ewhpa8.default-release/places.sqlite: Invalid argument


    I checked the backup on the external HD, and it seems to have everything copied correctly. I also checked the file indicated in the error message, and that seems to be there too. There isn't much difference between the two snapshots I merged, which I think would explain the short time it took to send the backup copy (using the last two snapshots and 'send -p'). But the error message has me confused. Any ideas on what happened? Will there be any problems if I keep doing incremental backups and get this error message again? I do not use snapper, by the way, just the BTRFS Subvolume Manager for the snapshots and deletions, and writing out the commands by hand (using GreyGeek's method in his manual, which has always worked fine) for the incremental backups. I'm using Kubuntu 21.10 and the latest Plasma (5.23.4).
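    The commands I write out are roughly of this shape; the mount points and snapshot names below are simplified placeholders, not my real ones:
    Code:
    $ sudo btrfs subvolume snapshot -r /mnt/top/@home /mnt/top/snapshots/@home-new
    $ sudo btrfs send -p /mnt/top/snapshots/@home-old /mnt/top/snapshots/@home-new \
          | sudo btrfs receive /mnt/backup/snapshots
    Here -p names the older snapshot that already exists on the backup disk, so only the differences get sent.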

    #2
    See https://unix.stackexchange.com/quest...emental-backup and https://lore.kernel.org/linux-btrfs/...gmail.com/T/#u

    Using Kubuntu Linux since March 23, 2007
    "It is a capital mistake to theorize before one has data." - Sherlock Holmes



      #3
      In the last 5 years I've had zero problems with BTRFS and have rarely consulted the wiki or the help, so I no longer remember what's in them.

      That kernel link had an interesting comment:
      Yes. You should have created clone (read-write snapshot) of received subvolume and use it as your root. Received_uuid is automatically cleared in writable snapshot.
      I may have learned that but I don't remember it. Normally, we create read-only snapshots because they are the only kind that send & receive can handle. Calling a r+w snapshot a clone makes sense. I have combined @home into @ so that I have only one snapshot to make and send. I have created a r+w snapshot for other purposes but didn't realize it was a "clone".
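      In other words, making the "clone" is just the ordinary snapshot command without -r; roughly, with hypothetical paths:
      Code:
      $ # a plain snapshot (no -r) of a received subvolume is read-write,
      $ # and btrfs clears received_uuid on the writable copy
      $ sudo btrfs subvolume snapshot /mnt/top/@home-received /mnt/top/@home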

      Last edited by GreyGeek; Dec 02, 2021, 07:26 PM.
      "A nation that is afraid to let its people judge the truth and falsehood in an open market is a nation that is afraid of its people.”
      – John F. Kennedy, February 26, 1962.



        #4
        Reading through snowhog's second link suggests you've struck an active bug in btrfs, one that "pops up with regrettable frequency" to quote one of the messages on that list. It seems to affect firefox sqlite files a lot. Concerning for this user of btrfs and firefox.

        The good news is, at least for the users in the list, that no data have been lost or corrupted.

        The list messages suggest that no btrfs repair process or scrub helps. Somebody tried a lot of that and got nowhere, IIUC. Re-establishing incremental backups by doing a full, non-incremental send/receive did not fix the problem for one user.

        The last message from a month ago suggests creating a new subvolume using "btrfs subvolume create @files_new; cp -av --reflink @files/ @files_new/; btrfs subvolume delete @files; mv @files_new @files". This appears to be run from the btrfs root directory in the fs that has @files; I think you'd have to boot to a system that does not use your @home, like a live USB, then mount the root of the btrfs with @home from there, then cd into that mount. Then
        Code:
        $ sudo btrfs subvolume create @home_new
        $ sudo cp -a --reflink @home/ @home_new/
        $ sudo btrfs subvolume delete @home
        $ sudo mv @home_new @home
        Because the copy uses --reflink, it's not actually copying the files' contents, just creating new references to the same extents, so it should be quick and won't take up double the space.
        This will of course invalidate incremental send/receive until you've done a non-incremental.
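        If you want to reassure yourself that the reflink copy isn't eating space, you could compare data usage before and after the cp step (a quick sketch, run from the mounted btrfs root):
        Code:
        $ sudo btrfs filesystem df .    # note the "Data ... used" figure
        $ sudo cp -a --reflink @home/ @home_new/
        $ sudo btrfs filesystem df .    # "used" should be nearly unchanged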

        If the data in your @home are very important, were it me I'd want to have a non-btrfs backup somewhere.
        Last edited by jlittle; Dec 03, 2021, 04:12 PM. Reason: typo
        Regards, John Little



          #5
          I don't understand this very well, as I'm basically a rank amateur when it comes to btrfs. What I need to know is if this will affect my files in any way. I will make another, non-btrfs copy on another external HD, as jlittle suggested, but it seems like a lot of trouble to do the other steps mentioned. Can I continue to do incremental backups without worrying about the Firefox clone issue? That's what I really need to know.



            #6
            Originally posted by oldgeek
            Can I continue to do incremental backups without worrying about the Firefox clone issue? That's what I really need to know.
            Does it stop when it gets the error? I'd worry that the backup is not complete. You could run some checks on the data; I've just checked my last /home snapshot and one of its backups (I ran find . -type f -print0 | sudo xargs -0 cat | wc -c).

            Regards, John Little



              #7
              Yes, it does stop. How do I check if the backup is complete, other than opening a few files in both to see if they look all right? I did that on a few and found no errors, but it was hardly comprehensive. I do not understand the command you use, or how to implement/change it to fit my case. But I certainly appreciate the help you and the others are giving me.



                #8
                Originally posted by oldgeek
                I do not understand the command you use, or how to implement/change it to fit my case.
                It just reads all the files (plain, ordinary files) and counts the bytes. (Very unsophisticated, and doesn't pick up some types of corruption.)

                If you want to run it, here's an example from when I did it:
                Code:
                $ sudo -i
                [sudo] password for john:
                # cd /mnt/k-main/@h-21-11-27
                # find . -type f -print0 | xargs -0 cat | wc -c
                15399071859
                # cd /mnt/backup/k-main/@h-21-11-27
                # find . -type f -print0 | xargs -0 cat | wc -c
                15399071859
                # exit
                logout
                $
                You'd need to adjust the two locations and snapshot names to match yours. A more thorough check would be something like
                Code:
                $ sudo -i
                [sudo] password for john:
                # cd /mnt/k-main/@h-21-11-27
                # find . -type f | sort > /tmp/files
                # (while IFS= read -r line; do cat "$line"; done < /tmp/files) | md5sum
                d165a784b396c2a6c43acaf3a460bdab  -
                # cd /mnt/backup/k-main/@h-21-11-27
                # (while IFS= read -r line; do cat "$line"; done < /tmp/files) | md5sum
                d165a784b396c2a6c43acaf3a460bdab  -
                Regards, John Little



                  #9
                  OK, your explanation was clear, and I was able to do the first check twice using a snapshot on my internal HD and its backup on the external HD. Result? The backup snapshot was bigger! The first value I got (internal HD) was 198729775019, while the backup on the external HD read 200962089490. What do I make of this? What are the implications for being able to do incremental backups of @home? I have never had this issue before, and now only on @home. Does my computer have Covid?



                    #10
                    If I were you I wouldn't be happy with the state of your @home. If you aren't willing to try the creation of a new subvolume using the method in my post #4, I suggest making a full backup, wiping the fs, creating a new filesystem, and restoring the lot from backup, as sketched below. I had to do that about 3 years ago, though the cause of the problems was a motherboard issue.
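                    In outline, that path might look like the following; the device name and snapshot names are made up, and a full (non-incremental) send of a read-only snapshot is assumed as the restore source:
                    Code:
                    $ # DESTROYS the filesystem on /dev/sdX -- be certain of the device name
                    $ sudo mkfs.btrfs -f /dev/sdX
                    $ sudo mount /dev/sdX /mnt/top
                    $ sudo btrfs send /mnt/backup/snapshots/@home-snap | sudo btrfs receive /mnt/top
                    $ # a writable snapshot of the received subvolume becomes the new @home
                    $ sudo btrfs subvolume snapshot /mnt/top/@home-snap /mnt/top/@home
                    The same receive-then-snapshot step goes for @ and anything else you restore.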
                    Regards, John Little



                      #11
                      Creating a new @home subvolume seems the quicker of the two solutions, and the code you provided seems clear enough. What I don't get is the first part. I have to boot using a live USB, but I'm not sure what to do before I start using the code you mentioned. Do I mount @home on the USB drive? Will it fit? Will the live Kubuntu be able to access my hard disk? I am not satisfied with the way things are, but I don't want to mess things up further.
                      The second method seems clearer to me, but much longer. I already have a full backup of @home from last week, when I had other btrfs errors, so could I just use that after wiping out @home from my present setup? Wouldn't I still get the same clone error after trying an incremental backup with a new snapshot in a couple of days? I hope I've explained myself clearly enough.



                        #12
                        Originally posted by oldgeek
                        What I don't get is the first part. I have to boot using a live USB, but I'm not sure what to do before I start using the code you mentioned.
                        Say the device with the @home is /dev/sda:
                        Code:
                        $ sudo mkdir /mnt/top
                        $ sudo mount /dev/sda /mnt/top
                        $ cd /mnt/top
                        $ ls
                        You should see @ and @home at least. (It's also a good place to put subvolumes for other purposes. For example, I don't want my browser cache taking up space in my backups, so I have /mnt/top/@cache and tell firefox to use it.)
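                        (One way to use such a @cache subvolume, if you're curious, is to mount it over the cache directory from /etc/fstab; the UUID, user name, and options in this line are placeholders:)
                        Code:
                        UUID=your-fs-uuid  /home/you/.cache  btrfs  subvol=@cache,noatime  0  0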
                        Will it fit?
                        Mounts are just about how the OS finds files; nothing is actually stored at the mount point.
                        Wouldn't I still get the same clone error after trying an incremental backup with a new snapshot in a couple of days?
                        The post on lore.kernel.org said that procedure resolved the problem.
                        Regards, John Little



                          #13
                          Well, I was about to try the reflink method, but I couldn't boot into Kubuntu using a USB stick--it just freezes at Grub. Is this an EFI problem? I use an EFI bootloader on my HD and wonder if I need to put an EFI bootloader on the USB as well (I created one stick with Etcher and another with the Kubuntu startup disk creator; neither worked). This is more complicated than I imagined, but I have plenty of time and would appreciate any help on how I can get the live USB to boot so I can do the other stuff to recreate my file system.



                            #14
                            Originally posted by oldgeek
                            ... I couldn't boot into Kubuntu using a USB stick--it just freezes at Grub...
                            A normal Kubuntu installer written to a USB doesn't use grub, IIUC. My reaction is that if you get to grub you haven't pressed the right key at the right time.

                            It's not actually necessary to boot from a USB. You can boot from the btrfs so long as it's not the OS that's using @home. I "iso boot" from my SSD all the time, skipping the tedious business with USB sticks. (It's quicker, too.) I could do a custom.cfg that would be easy for you to edit and drop into /boot/grub to boot from an iso file, but I'd like time to test it, and I can't get to it until this evening (locally is morning now).
                            Regards, John Little



                              #15
                              Originally posted by jlittle
                              It's not actually necessary to boot from a USB. You can boot from the btrfs so long as it's not the OS that's using @home. I "iso boot" from my SSD all the time, skipping the tedious business with USB sticks. (It's quicker, too.) I could do a custom.cfg that would be easy for you to edit and drop into /boot/grub to boot from an iso file, but I'd like time to test it, and I can't get to it until this evening (locally is morning now).
                              This works here on KDEneon User Edition:

                              Code:
                               menuentry 'KDEneon ISO' {
                                   set isofile="/@grub/neon-useredition-amd64.iso"
                                   loopback loop (hd0,3)$isofile
                                   linux (loop)/casper/vmlinuz boot=casper iso-scan/filename=$isofile noprompt noeject
                                   initrd (loop)/casper/initrd.lz
                               }
                               In the above, "set isofile=" has to point to a location that GRUB has access to. I find it most reliable to simply put the desired ISO into the grub or boot folder. In my case, GRUB is installed to its own subvolume. In most cases you would put "/boot/grub/your.iso" or "/boot/your.iso".
                               Also, the device/partition identification - (hd0,3) - must be correct. Again, in my case and thus in the above example, hd0,3 is my BTRFS root filesystem. In it reside my many subvolumes, including @grub. So my ISOs reside in Drive 0, partition 3, folder: @grub.
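                               If you're unsure of the right (hdX,Y), GRUB's own command line will show you (press 'c' at the GRUB menu for a prompt; the output below is illustrative):
                               Code:
                               grub> ls
                               (hd0) (hd0,gpt1) (hd0,gpt2) (hd0,gpt3)
                               grub> ls (hd0,gpt3)/
                               @/ @home/ @grub/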

                               The type of ISO is critical here also. "casper" ISO types (like *buntu products) use the above stanza. Syslinux ISOs require chainloading, I believe.

                              Please Read Me
