
    [SOLVED] slow backup

    When I did the first backup on the external drive, it took about 16 minutes for @ (on the same backup disk). Yesterday it took about 1 hour for @. Only one core of my two-core CPU was engaged at 100%; the second was almost idle. My system drive is an SSD, the backup drive is an HDD, and both were connected to the motherboard by SATA 2.

    I have more programs installed than before, but not so many as to make such a huge difference.

    When I checked the size of the backup folder on the backup disk (after the backup finished), both @ and @home together were about 13.5 GB, so @ itself was more or less 10 GB.

    Are there any steps that should be done to the btrfs filesystem before a backup that could affect its speed?

    In fstab I mount / with
    Code:
    btrfs   subvol=@,defaults,noatime,space_cache,autodefrag 0 0
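    For reference, a complete fstab entry with those options would look something like this (the UUID is just a placeholder, not my actual device):
    Code:
    UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /  btrfs  subvol=@,defaults,noatime,space_cache,autodefrag  0  0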
    In my normal day-to-day work I have not noticed any slowdown in the system.
    Last edited by gnomek; Mar 31, 2019, 02:23 AM.

    #2
    BTRFS does its backups in the background for a reason - so other processes can continue as normal. The point being that the backup time will vary depending on what else is going on with the system. A downside might be that you want to shut your system down and you can't until the backup is finished.

    Now, if you really want to speed up the backup process, do incremental backups instead of a full backup. It requires a few more steps, but the time to complete the send|receive operation will be dramatically shorter. There are tutorials out there and on here, but basically you keep one extra snapshot and back up only the difference.

    I use a rolling number to keep track of mine and keep more than one backup, but you can also just rename the backups each time.

    Example steps, assuming your root subvolume is mounted at /subvol and your backup file system is mounted at /backups:

    Set up your initial backup:
    Make a read-only snapshot of @ and send as a full backup.
    sudo btrfs su sn -r /subvol/@ /subvol/@backup

    sudo btrfs send /subvol/@backup | sudo btrfs receive /backups/
    Do not delete /subvol/@backup.

    Going forward, repeat these commands weekly:

    Make another ro snapshot of @ as @backupnew.
    sudo btrfs su sn -r /subvol/@ /subvol/@backupnew

    Incrementally send the difference between @backup and @backupnew.
    sudo btrfs send -p /subvol/@backup /subvol/@backupnew | sudo btrfs receive /backups/

    Now delete both copies of @backup.
    sudo btrfs su de -c /subvol/@backup /backups/@backup

    Rename both @backupnew subvolumes to @backup.
    sudo mv /subvol/@backupnew /subvol/@backup
    sudo mv /backups/@backupnew /backups/@backup

    The -p switch means "use as parent", which is why you have to keep the previous backup. The command sends only the difference between the new snapshot and the "parent" snapshot.

    This would be very easy to write a script for. If you do this weekly, the extra space required won't be any larger than a week's worth of activity and you'll never be more than one week from your last backup. You could do it daily, or even many times a day, if you want to reduce the backup size or have more current backups, or monthly if your need to back up is less.

    Since @ and @home exist separately, you would need to do this separately for both, but you wouldn't have to use the same schedule. For example, you could back up your system monthly but your home daily. Obviously, the longer between backups, the longer the send|receive takes and the more drive space is used.

    When scripting, you could write it to do both backups at once or do just one by passing a command-line argument. Finally, if you don't like using the terminal often, write the script and then a desktop entry to run it, or even automate it. A rough sketch of such a script is below.
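
    Just as a sketch - assuming the same layout as above (@ and @home both reachable under /subvol, backups received into /backups, and the initial full backups already made) - something like this would do both subvolumes in one go:
    Code:
    #!/bin/bash
    # Incremental backup sketch using the layout assumed above:
    # source subvolumes reachable under /subvol, backups received into /backups.
    # Assumes the initial full backups (@backup, @homebackup) already exist on both sides.
    set -e

    SRC=/subvol
    DST=/backups

    for SUB in @ @home; do
        # New read-only snapshot of the live subvolume
        btrfs subvolume snapshot -r "$SRC/$SUB" "$SRC/${SUB}backupnew"

        # Send only the difference against the previous backup snapshot
        btrfs send -p "$SRC/${SUB}backup" "$SRC/${SUB}backupnew" | btrfs receive "$DST/"

        # Drop both copies of the old parent, then rename new -> current
        btrfs subvolume delete -c "$SRC/${SUB}backup" "$DST/${SUB}backup"
        mv "$SRC/${SUB}backupnew" "$SRC/${SUB}backup"
        mv "$DST/${SUB}backupnew" "$DST/${SUB}backup"
    done
    Run it as root, and a cron entry or systemd timer can then automate it on whatever schedule suits you.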
    Last edited by oshunluvr; Mar 31, 2019, 10:04 AM.



      #3
      Thank you. Looks like proper planning and scheduling of backups is the solution.



        #4
        Originally posted by gnomek View Post
        ... first backup on external drive ... system drive is an SSD, backup drive is an HDD, both were connected to the motherboard by SATA 2.
        I am confused by that: an external drive connected with SATA 2? Possible, if one has an eSATA drive enclosure from about 10 years ago, but not common these days.

        My @ is about 14 GB and it took 2:20 last time I backed it up with btrfs send/receive, from SSD to HDD, non-incremental.

        How is the HDD formatted? It would be unusual to have btrfs on an external drive. What commands did you use for the backup?

        Pegging a CPU like that suggests encryption or compression, or a Linux I/O scheduler problem; was the system responsive while the backup was running?
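
        If you want to rule out the obvious suspects, a couple of quick checks (the backup mount path and the sdX device name are placeholders for your own):
        Code:
        # Mount options in effect for / and for the backup mount point
        findmnt -no OPTIONS /
        findmnt -no OPTIONS /path/to/backup/mount

        # Active I/O scheduler for the backup HDD
        cat /sys/block/sdX/queue/scheduler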
        Regards, John Little



          #5
          It is a normal drive. I said external because normally it is not connected. It is a backup drive with one NTFS partition and one btrfs partition. It is a Seagate USB drive, but it has a connector that can be removed, and then SATA 2 is available. So I can use it over USB or as a normal drive connected directly to the motherboard by SATA 2.

          I did it like this:
          Code:
          sudo -i
          
          btrfs su snapshot -r /mnt/01/@ /mnt/snapshots/@30.03.2019
          
          btrfs send /mnt/snapshots/@30.03.2019 | btrfs receive /run/media/user/backup/full/
          Nothing unusual here. One thing I noticed is that I didn't put sudo before btrfs receive. Maybe this is important?

          I don't mount this backup drive in fstab; I clicked on it in Dolphin and it was mounted.

          I was thinking that maybe scrub or defrag or some other commands should be run before the backup to somehow prepare the file system for a faster backup.
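
          For reference, these are the commands I had in mind (I haven't run them yet, just wondering whether they would help):
          Code:
          # check data and metadata checksums on the root filesystem
          sudo btrfs scrub start /

          # recursively defragment the root subvolume
          sudo btrfs filesystem defragment -r /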



            #6
            Originally posted by gnomek View Post
            ... It is a Seagate USB drive, but it has a connector that can be removed, and then SATA 2 is available. So I can use it over USB or as a normal drive connected directly to the motherboard by SATA 2. ... One thing I noticed is that I didn't put sudo before btrfs receive. Maybe this is important? ... I was thinking that maybe scrub or defrag or some other commands should be run before the backup to somehow prepare the file system for a faster backup.
            If you do "sudo -i" first, then sudo is not needed. That's rather the point of "sudo -i".

            While scrub or defrag won't hurt anything, I doubt they'll make any difference. Really, the only way I know of to make a backup quickly is to use the incremental feature. If you're using a USB connection, it's likely going to be slow. Also, using BTRFS send|receive over USB connections is generally not recommended because even a slight interruption will trash the backup and you may not realize it's bad until it's too late. Having said that, once you have a full backup on the USB drive, the incremental backups are much smaller and quicker, so you're less likely to have problems.
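
            One quick sanity check after a send|receive: the received snapshot should show as read-only and have a "Received UUID" set when you look at it with, for example:

            sudo btrfs subvolume show /run/media/user/backup/full/@30.03.2019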



              #7
              So using your commands above, here's the next backup done incrementally:

              sudo -i

              btrfs su snapshot -r /mnt/01/@ /mnt/snapshots/@01.04.2019

              btrfs send -p /mnt/snapshots/@30.03.2019 /mnt/snapshots/@01.04.2019 | btrfs receive /run/media/user/backup/full/

              btrfs su delete -c /run/media/user/backup/full/@30.03.2019
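
              And if you also want to reclaim the space on the source drive, the old local snapshot can go too once the new one exists - same idea as the "delete both copies" step in #2:

              btrfs su delete -c /mnt/snapshots/@30.03.2019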



                #8
                Thank you. It is perfectly clear to me now.
