    Unable to Clone Extents?

    I remade my backup script in Java to get finer-grained control over what was happening with the filesystem, and I am still getting the same issues with incremental backups:

    Code:
    sarah@LesserArk:/media/sarah$ sudo btrfs send -p "/media/Wandering_Echo/Snapshots/@home___2019-08-10T15:58:53.373754" "/media/Wandering_Echo/Snapshots/@home___2019-08-13T00:02:13.841344" | sudo btrfs receive "/media/sarah/NihilisticAutomaton/Snapshots"
    At subvol /media/Wandering_Echo/Snapshots/@home___2019-08-13T00:02:13.841344
    At snapshot @home___2019-08-13T00:02:13.841344
    ERROR: failed to clone extents to sarah/.cache/plasmashell/bookmarkrunnerfirefoxfavdbfile.sqlite: Invalid argument
    Not sure what I'm doing wrong, but this time I executed the command that the program produced by hand in the terminal, and I still get the same error.

    Not sure why a freshly created snapshot can't clone its extents, but that's what's happening, apparently.

    Am I doing something wrong in the script? The essential process is:
    Code:
    sudo btrfs subvolume snapshot -r SOURCE NAME
    sudo btrfs send NAME | sudo btrfs receive EXTERNALDISK
    *Later*

    Code:
    sudo btrfs subvolume snapshot -r SOURCE NAME2
    sudo btrfs send -p NAME NAME2 | sudo btrfs receive EXTERNALDISK
    The incremental backup always fails with some cryptic error message like "unable to clone extents". I can post the Java code that does this, but all it really does is pass the command string to the terminal and run it that way.

    Running the raw bash string in terminal doesn't work either!

    I'm not sure what to do here. Is it a bug in BTRFS? How can I get answers? Is there a mailing list where I can ask?
    Last edited by PhysicistSarah; Aug 13, 2019, 11:58 AM.

    #2
    sudo btrfs send -p NAME2 NAME
    Your names are backwards here. The "-p" option means "parent subvolume": in your example, "NAME" is the parent of "NAME2", not the other way around. However, if I am deciphering your rather long subvolume names correctly, your entry above looks correct. You may be encountering this bug: https://www.mail-archive.com/linux-b.../msg25533.html
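    For reference, the older, already-transferred snapshot goes after "-p" and the newer one is the subvolume being sent. A minimal sketch with made-up paths (not your actual ones):
    Code:
    # /snapshots/snap1 was already sent in full; snap2 is the newer snapshot
    # -p names the parent that already exists on the receiving side
    sudo btrfs send -p /snapshots/snap1 /snapshots/snap2 | sudo btrfs receive /backup/Snapshots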

    Please Read Me

    Comment


      #3
      My bad on the names part

      If I'm encountering this bug (I didn't encounter it in earlier versions a year or more ago when I was using my backup script; back then incremental backups worked all the time), what options do I have to get it fixed, or to file a bug report?

      I read the mail archive but still don't quite understand if I'm doing something wrong (I don't think so), or if incremental backups are somehow broken on my machine.

      Also, my backup script is a work in progress. It's built well, but I still have a few quality-of-life changes to make, notably the hard-to-read date format.

      It'll be fixed sometime soon.

      Comment


        #4
        The above looked OK to me (except for the crazy long filenames). I can't speak to the bug as I've never encountered it. I wonder if the sheer number of incremental backups you may be doing has an impact. I tend to keep only one past subvolume and the most current, both on the backup side and the source side.

        Maybe it would be an interesting experiment to start a fresh backup from a populated subvolume and begin a manual send, incremental send, incremental send... process to see if or when you encounter the bug. Maybe all you need is a clean slate.

        It also occurs to me to do a balance/defrag on all your volumes and check metadata and system space, to be sure you're not just full in one or both of those areas.
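        If it helps, a quick check along those lines might look like this (the mount point is just an example; substitute your own):
        Code:
        # show how much of the Data / Metadata / System allocations is actually used
        sudo btrfs filesystem df /media/Wandering_Echo
        # optional: recursively defragment; note this can unshare extents between snapshots
        sudo btrfs filesystem defragment -r /media/Wandering_Echo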

        Please Read Me

        Comment


          #5
          My storage space appears to be ok. Here's the view from baobab:
          [Screenshot: Screenshot_20190814_130112.jpg, baobab disk usage view]
          I'll try a manual incremental backup and see if it encounters the error later. I don't have any incremental backups since I've been doing a fresh backup and deleting the old one every time I need to back up the drives. I use a third drive which holds an older full backup as a fail-safe.

          I don't have any RAID arrays right now, so I don't know about a balance. I just defragged all of my drives that rotate (my primary is an SSD) the last time I cleared the backups.

          We'll see I guess.

          Comment


            #6
            Available space reporting does not address metadata or system space on BTRFS volumes:

            Code:
            stuart@skull:~$  df /mnt/btrfs/
            Filesystem      Size  Used Avail Use% Mounted on
            /dev/sda6       183G  3.9G  179G   3% /mnt/btrfs
            Code:
            stuart@skull:~$ bt fi df /mnt/btrfs/
            Data, single: total=4.01GiB, used=3.73GiB
            System, single: total=4.00MiB, used=16.00KiB
            Metadata, single: total=1.01GiB, used=140.72MiB
            GlobalReserve, single: total=16.00MiB, used=0.00B
            Do "fi df" (btrfs filesystem df) to get a more accurate idea of your available space. If your extent tree has outgrown the reserved space, you can get "drive full" errors when there's still data space available.

            Running "balance" on a single-device system relocates blocks to optimize and can recover allocated but unused space, both metadata and data.
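            Something like this would be a minimal sketch of that (the usage filters and mount point here are just examples, not from your setup):
            Code:
            # rewrite only data/metadata chunks that are less than half full
            sudo btrfs balance start -dusage=50 -musage=50 /mnt/btrfs
            # then re-check the allocation
            sudo btrfs filesystem df /mnt/btrfs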

            Please Read Me

            Comment


              #7
              Running df gives:
              Code:
              /dev/sdd1           37289      7690      29599  21% /boot/efi
              /dev/sdd2       488346624 343478308  139360508  72% /home
              /dev/sda        976762584 701336976  271269200  73% /media/sarah/ConvergentRefuge
              /dev/sdc       1953514584  21141756 1925612356   2% /media/sarah/NihilisticAutomaton
              Running btrfs filesystem df on my mount point for @ and @home yields:
              Code:
              sarah@LesserArk:~$ btrfs filesystem df /media/Wandering_Echo/
              Data, single: total=419.69GiB, used=324.80GiB
              System, DUP: total=8.00MiB, used=64.00KiB
              Metadata, DUP: total=4.00GiB, used=1.17GiB
              GlobalReserve, single: total=435.23MiB, used=0.00B
              I'll perform the balance operation soon on all my drives. I didn't know that I should have been doing this. Do btrfs users typically do this every once in a while?

              EDIT 0:

              Just made 2 snapshots 100% by hand, with a few minor changes on the filesystem in between (moved a few things around, maybe 2 hours apart), and the incremental backup fails:
              Code:
              sarah@LesserArk:/media/Wandering_Echo$ sudo btrfs send -p /media/sarah/Snapshots/SYSTEM_TEST /media/sarah/Snapshots/SYSTEM_TEST2 | sudo btrfs receive -vv /media/sarah/NihilisticAutomaton/Temp/
              At subvol /media/sarah/Snapshots/SYSTEM_TEST2
              At snapshot SYSTEM_TEST2
              receiving snapshot SYSTEM_TEST2 uuid=2f148ce2-dc0b-db43-a5a4-511803ee2503, ctransid=366950 parent_uuid=2f148ce2-dc0b-db43-a5a4-511803ee2503, parent_ctransid=366816
              utimes tmp
              utimes etc/cups
              utimes var/tmp
              utimes var/lib/xkb
              utimes var/log/wtmp
              utimes var/lib/systemd/timesync/clock
              utimes var/lib/systemd/timers/stamp-anacron.timer
              utimes var/spool/anacron/cron.daily
              utimes var/spool/anacron/cron.weekly
              utimes var/spool/anacron/cron.monthly
              utimes root/.cache/mesa_shader_cache
              utimes root/.cache/mesa_shader_cache/index
              utimes media/sarah/Snapshots
              utimes var/lib/systemd/timers/stamp-phpsessionclean.timer
              utimes var/lib/fail2ban/fail2ban.sqlite3
              utimes root/.cache/mesa_shader_cache/58
              ERROR: utimes root/.cache/mesa_shader_cache/58 failed: No such file or directory
              Last edited by PhysicistSarah; Aug 14, 2019, 05:49 PM.

              Comment


                #8
                Those errors are really weird. Try doing "sync" before snapshots. How can a file not be there after a snapshot?
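                Something along these lines, reusing your placeholder names (just a sketch):
                Code:
                # flush pending writes before taking the read-only snapshot
                sync
                sudo btrfs subvolume snapshot -r SOURCE SNAPSHOT_NAME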

                No real need to do balance regularly AFAIK, just something to know about if you run into metadata problems. Yours look OK to me.

                Older thread, but seems like the same issue: https://github.com/digint/btrbk/issues/91

                Please Read Me

                Comment


                  #9
                  BTW how many snapshots total do you have on your file systems in question?

                  Please Read Me

                  Comment


                    #10
                    Like 2?

                    I purged all snapshots and backups and then tried a new backup, works great.

                    Then I move a few files around, shut down and reboot the computer, try an incremental backup, and it fails with some weird error like that.

                    Comment


                      #11
                      Again, weird.

                      Maybe stop that?

                      LOL

                      Please Read Me

                      Comment


                        #12
                        Stop taking snapshots, making backups, moving files around (on the disk, not the RO snapshots), or shutting the computer down?

                        Comment
