Would like some assistance

This topic is closed.

  • GreyGeek
    replied
    Both incremental backups are suffering from the same problem:
    btrfs send -p /mnt/snapshots/ /mnt/snapshots@202203101833 | btrfs receive /backup
    The parent snapshot name is missing. It should be:
    btrfs send -p /mnt/snapshots/@yyyymmddhhmm /mnt/snapshots/@202203101833 | btrfs receive /backup
    which is supposed to be supplied by $PREVROOTSNAP and $PREVHOMESNAP.
    They, in turn, should be populated by the ls command and curly brackets manipulation:
    ROOTLIST=($(ls /mnt/snapshots/|grep '^@2'))
    PREVROOTLIST=${ROOTLIST[-2]}
    I generally put spaces around my pipes, and try using just one ():
    ROOTLIST=$(ls /mnt/snapshots | grep '^@2')
    Here I messed around with listing some files in my PWD;

    $ ls | grep '^w'
    wacom_tablet_installation.odt
    webcam_video_audio_capture_cli.txt
    widgets.png
    winehq.key
    working
    wpa_tui.sh


    $ list=($(ls | grep '^w'))
    $ echo ${list[-2]}
    working
    $ echo ${list[-1]}
    wpa_tui.sh
    $ echo ${list[0]}
    wacom_tablet_installation.odt
    $ echo ${list[1]}
    webcam_video_audio_capture_cli.txt

    $ echo ${list[2]}
    widgets.png


    Strange indexing, isn't it?
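For what it's worth, the indexing is consistent: since bash 4.3, array subscripts can count backward from the end, so -1 is the last element and -2 the one before it. A small sketch with made-up names instead of real files:

```shell
#!/bin/bash
# Negative subscripts (bash 4.3+) count from the end of the array.
list=(alpha beta gamma delta)
echo "${list[0]}"     # alpha  (first)
echo "${list[-1]}"    # delta  (last)
echo "${list[-2]}"    # gamma  (second to last)
# mapfile sidesteps the word-splitting risk of list=($(ls ...)):
mapfile -t files < <(printf '%s\n' 'name with spaces' other)
echo "${#files[@]}"   # 2
```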




  • Snowhog
    replied
    Okay, I'm at my wit's end. I've been troubleshooting this for nearly a full day (24 hours). I just don't see where I've gone wrong.

    Here's my testing script.
    #!/bin/bash
    STARTCLOCK=$(date +%s)
    # snapshot.sh
    # Created: 2022-03-08
    #
    # Change Log:
    # Syntax to use:
    # YYYY-MM-DD
    # What was done
    #

    # To be run as root from /
    #
    # Before executing this script:
    # Plug in external HDD and ignore Disk & Devices pop-up.
    # External drive will be powered and identified by system but not yet mounted.
    #

    # Variables

    NOW=$(date +%Y%m%d%H%M)
    ROOTNOW=@$(date +%Y%m%d%H%M)
    HOMENOW=@home$(date +%Y%m%d%H%M)
    MKSNAP1="btrfs su snapshot -r /mnt/@ /mnt/snapshots/$ROOTNOW"
    MKSNAP2="btrfs su snapshot -r /mnt/@home /mnt/snapshots/$HOMENOW"

    # Start Snapshot Process

    echo "Mounting drives"

    # mount internal HDD to /mnt

    eval "mount /dev/disk/by-label/LAPTOP /mnt"
    echo "Internal HDD mounted to /mnt"
    echo ""

    # mount external HDD to /backup

    eval "mount /dev/disk/by-label/USBHDD /backup"
    echo "External USB HDD mounted to /backup"
    echo ""

    # Display Timestamp

    echo "Timestamp: $NOW"
    echo ""

    # Create todays Full snapshots of @ and @home

    echo "Making today's snapshot of @"
    eval $MKSNAP1
    eval 'sync;sync'
    echo "$ROOTNOW successfully created"
    echo ""

    echo "Making today's snapshot of @home"
    eval $MKSNAP2
    eval 'sync;sync'
    echo "$HOMENOW successfully created"
    echo ""

    # Find previous @ snapshot as parent

    ROOTLIST=($(ls /mnt/snapshots/|grep '^@2'))
    PREVROOTLIST=${ROOTLIST[-2]}
    NOW=${ROOTLIST[-1]}
    echo "Attempting incremental backup of @home"
    if [[ -s "/mnt/snapshots/"$PREVROOTSNAP ]];
    then
    MKINC='btrfs send -p /mnt/snapshots/'$PREVROOTSNAP
    MKINC=$MKINC' /mnt/snapshots'$NOW
    MKINC=$MKINC' | btrfs receive /backup'
    echo $MKINC
    eval $MKINC
    eval 'sync;sync;sync'
    DELROOTSNAP='btrfs subvol delete -C /mnt/snapshots'${ROOTLIST[0]}
    eval $DELROOTSNAP
    eval 'sync;sync;sync'
    ROOTLIST=''
    ROOTLIST=($(ls /backup/| grep '^@2'))
    DELROOTSNAP='btrfs subvol delete -C /backup/'${ROOTLIST[0]}
    eval $DELROOTSNAP
    eval 'sync;sync;sync'
    echo "Snapshots of @ completed, oldest @ snapshots deleted."
    else
    echo 'Incremental @ backup failed using '$PREVROOTLIST' and '$NOW
    fi

    # Find previous @home snapshot as parent

    HOMELIST=($(ls /mnt/snapshots/|grep '^@home2'))
    PREVHOMESNAP=${HOMELIST[-2]}
    NOW=${HOMELIST[-1]}
    echo "Attempting incremental backup of @home"
    if [[ -s "/mnt/snapshots/"$PREVHOMESNAP ]];
    then
    MKINC='btrfs send -p /mnt/snapshots/'$PREVHOMESNAP
    MKINC=$MKINC' /mnt/snapshots'$NOW
    MKINC=$MKINC' | btrfs receive /backup'
    echo $MKINC
    eval $MKINC
    eval 'sync;sync;sync'
    DELHOMESNAP='btrfs subvol delete -C /mnt/snapshots'${HOMELIST[0]}
    eval $DELHOMESNAP
    eval 'sync;sync;sync'
    HOMELIST=''
    HOMELIST=($(ls /backup/| grep '^@home2'))
    DELHOMESNAP='btrfs subvol delete -C /backup/'${HOMELIST[0]}
    eval $DELHOMESNAP
    eval 'sync;sync;sync'
    echo "Snapshots of @home completed, oldest @home snapshots deleted."
    else
    echo 'Incremental @home backup failed using '$PREVHOMESNAP' and '$NOW
    fi

    # Calculate and display how much time creating and sending snapshots took

    STOPCLOCK=$(date +%s)
    SECS=$((STOPCLOCK - STARTCLOCK))
    DURATION=$(date --date "0 $(($(date +%s) - STARTCLOCK)) sec" +%Hh:%Mm:%Ss)
    echo ""
    echo "Total process took: $DURATION"
    echo ""

    # Cleanup

    ## Commented during testing so I don't have to keep unplugging/plugging my USB HDD
    #eval 'sync;sync;sync'
    #eval 'umount /backup'
    #eval 'umount /mnt'
    #eval 'sync;sync;sync'
    #echo "Drives unmounted. Powering off external USB HDD"
    #eval 'udisksctl power-off -b /dev/disk/by-label/USBHDD'
    echo ""
    echo "Finished"
    I'm running the script with nothing yet in /mnt/snapshots or /back

    The @ and @home snapshots get created in /mnt/snapshots quickly, but the creation/sending of the @ and @home incremental snapshots to /backup fails.

    This is the output running the script:
    root@barley-cat:/# /snapshotTESTC.sh
    Mounting drives
    Internal HDD mounted to /mnt

    External USB HDD mounted to /backup

    Timestamp: 202203101833

    Making today's snapshot of @
    Create a readonly snapshot of '/mnt/@' in '/mnt/snapshots/@202203101833'
    @202203101833 successfully created

    Making today's snapshot of @home
    Create a readonly snapshot of '/mnt/@home' in '/mnt/snapshots/@home202203101833'
    @home202203101833 successfully created

    /snapshotTESTC.sh: line 76: ROOTLIST: bad array subscript
    Attempting incremental backup of @home
    btrfs send -p /mnt/snapshots/ /mnt/snapshots@202203101833 | btrfs receive /backup
    ERROR: failed to get flags for subvolume /mnt/snapshots: Invalid argument
    ERROR: empty stream is not considered valid
    ERROR: Could not statfs: No such file or directory
    WARNING: not deleting default subvolume id 5 '//backup'
    Snapshots of @ completed, oldest @ snapshots deleted.
    /snapshotTESTC.sh: line 103: HOMELIST: bad array subscript
    Attempting incremental backup of @home
    btrfs send -p /mnt/snapshots/ /mnt/snapshots@home202203101833 | btrfs receive /backup
    ERROR: failed to get flags for subvolume /mnt/snapshots: Invalid argument
    ERROR: empty stream is not considered valid
    ERROR: Could not statfs: No such file or directory
    WARNING: not deleting default subvolume id 5 '//backup'
    Snapshots of @home completed, oldest @home snapshots deleted.

    Total process took: 00h:00m:02s


    Finished
    root@barley-cat:/#
    What /mnt/snapshots has afterwards:
    Code:
    root@barley-cat:/# vdir /mnt/snapshots 
    total 0 
    drwxr-xr-x 1 root root 354 Mar 10 18:21 @202203101833 
    drwxr-xr-x 1 root root  36 Mar  1 20:42 @home202203101833
    /backup contains nothing.
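Not the whole fix, but one likely culprit for the "bad array subscript" lines: on a first run /mnt/snapshots holds only one @ snapshot, so ${ROOTLIST[-2]} has no element to land on. A guarded sketch, using a throwaway directory as a stand-in for /mnt/snapshots:

```shell
#!/bin/bash
# Hypothetical guard: check the element count before taking [-2].
# A temp directory with one fake snapshot stands in for /mnt/snapshots.
demo=$(mktemp -d)
mkdir "$demo/@202203101833"            # only one snapshot so far
shopt -s nullglob
ROOTLIST=("$demo"/@2*)                 # glob instead of parsing ls
if (( ${#ROOTLIST[@]} >= 2 )); then
  echo "parent: ${ROOTLIST[-2]##*/}"
else
  echo "no parent yet: full send required"
fi
rm -rf "$demo"
```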



  • jlittle
    replied
    Thank you for your considered reply.
    Originally posted by GreyGeek View Post
    The problem has always been that snapshots eventually fill up if left alone, and when their combined total size, plus the operating system size, goes over 90% the performance begins to suffer, and so do maintenance tools like balance, scrub, check and others.
    For snapper, the problem is now mitigated by the SPACE_LIMIT and FREE_LIMIT in the configuration.



  • GreyGeek
    replied
    Originally posted by jlittle View Post

    I suspect, and hope, that's old advice. On my main desktop for the system and my home subvolumes I have about 45 snapshots each, roughly half from my backup scheme, which does incrementals to several different drives (each needing a different previous), and half from snapper. I've got 25 or so other subvolumes, mostly other installs and for volatile data I don't want in the backups, such as browser caches and large downloads. Sometimes I've got a lot more subvolumes, when I've got an install that I start snapper in; I often do that with an LTS release.

    I just make sure the fs doesn't fill up, and performance is good. It's still showing ~ 50 GiB unallocated out of 216 GiB, and 62 GiB free. I think keeping the volatile data out of the main subvolumes is working. Lots of snapshots doesn't in principle take up a lot of space, just metadata. For example, take a subvolume with data that changes or grows gradually, with a snapshot each month for a year; a snapshot each week over the same period mostly requires some more metadata, so won't take up much more space.

    With mainstream distros now making btrfs the default file system, and OpenSUSE installing snapper by default, I expect there's many users that have subvolumes with a lot more than 8 snapshots.
    I haven't found any other reputable source which advocates for more snapshots per volume than the BTRFS developers "low double and single digit" number of snapshots per subvolume. I read one person stating that BTRFS supports 2^64 subvolumes. It doesn't. It supports 2^64 bytes per subvolume size. Many seem to pull their number of supported snapshots, like 100, out of dark smelly places.

    The problem has always been that snapshots eventually fill up if left alone, and when their combined total size, plus the operating system size, goes over 90% the performance begins to suffer, and so do maintenance tools like balance, scrub, check and others.

    When I first started using BTRFS over five years ago I went snapshot wild, creating 50-100, along with pre and post package updates. I never got to the 90% threshold but I began to wonder, as I looked at the older snapshots, "what am I keeping them for?" Why would I ever revert to a snapshot that is a month old, much less a year old, or older? I thought of all the data I'd lose reverting to an old snapshot -- emails, calendar data, app installs, DE customizations, code I'd written, etc... Had I upgraded my distro version why would I want to revert to the previous or older version in an old snapshot?

    That's when I started copying data I wanted to keep long term, like my wife's genealogy and family tree data, my old coding (SAVVY, Turbo Pascal 3.02A, Pascal), my math and physics documents, etc., to multiple external disk drives as themselves, without using any intervening technology like BTRFS. That data never changes. It is like a photo of my family hanging on the wall. But, I have that data on my current system as well, for occasional reference and trips down memory lane. The TP302A files date from June of 1992, and my Lotus AmiPro spreadsheets (.SAM) that I used to invoice my clients date from before that.

    I've had clients that generated data and documentation that laws required they keep for 15 years. That data was printed out on paper and stored in indexed cardboard boxes which were stacked six high in warehouses. When some lawyer wanted a certain document a gofer was sent over to retrieve it and make a photocopy of it, returning the original to the box. If you ran a spreadsheet written with Lotus Notes having a paper copy was/is the only reliable way to save that data. Think where they'd be if they had stored their spreadsheet files on Zip drives. Fifteen years later the drives don't work and Lotus Notes died long ago. That spreadsheet data is lost for good. I keep my .SAM spreadsheets on the hope that someone will make a converter for them.

    For my use case BTRFS is perfect: quick and easy to use. I limit my snapshots to around 5 for my only subvolume, @. I always do a snapshot before I run an update and full-upgrade. If things don't go well I can roll back quickly. No harm, no foul. If after I check my WINE apps, virtual machines, my Jupyter Notebooks, and my steam games and everything runs well I will create a new snapshot. I always delete the oldest snapshot when I create a new one, maintaining the 5 snapshot count. I always plug in my 500Gb USB SSD drive when running make_snapshots.sh to keep a "take with me copy" of my Neon installation. I have a 1Tb NVMe SSD also labeled "BACKUP" and when my USB SSD, which is labeled "BACKUP" as well, is not plugged in the mount command in my script mounts the NVMe.

    PS - My use case is testing distros in VM's, creating Jupyter Notebooks to test Covid and other data for validity, emailing friends and relatives, shopping online, keeping up with news and weather, visiting KubuntuForums.net and other websites, and playing games locally, mainly Minecraft. Simple uses. One nice feature of BTRFS is that I can navigate through a snapshot and pull out files and folders as if they were on an EXT4, or C:/ drive, or what ever.
    Last edited by GreyGeek; Mar 10, 2022, 10:07 AM.



  • jlittle
    replied
    Originally posted by GreyGeek View Post
    The BTRFS devs recommend 8 or less per subvolume. This is why I tell people that it is not wise to create 40 or 50 or more snapshots. People who use BTRFS and indiscriminately create snapshots will soon find that their system will slow to a crawl as their allocated space approaches their device size.
    I suspect, and hope, that's old advice. On my main desktop for the system and my home subvolumes I have about 45 snapshots each, roughly half from my backup scheme, which does incrementals to several different drives (each needing a different previous), and half from snapper. I've got 25 or so other subvolumes, mostly other installs and for volatile data I don't want in the backups, such as browser caches and large downloads. Sometimes I've got a lot more subvolumes, when I've got an install that I start snapper in; I often do that with an LTS release.

    I just make sure the fs doesn't fill up, and performance is good. It's still showing ~ 50 GiB unallocated out of 216 GiB, and 62 GiB free. I think keeping the volatile data out of the main subvolumes is working. Lots of snapshots doesn't in principle take up a lot of space, just metadata. For example, take a subvolume with data that changes or grows gradually, with a snapshot each month for a year; a snapshot each week over the same period mostly requires some more metadata, so won't take up much more space.

    With mainstream distros now making btrfs the default file system, and OpenSUSE installing snapper by default, I expect there's many users that have subvolumes with a lot more than 8 snapshots.



  • GreyGeek
    replied
    Originally posted by Snowhog View Post
    GreyGeek Question: Your script creates, but keeps, the full snapshot on your internal drive; the drive your working OS is installed on; compares the current full snapshot to the previous full snapshot, then creates and sends the incremental difference between the two to your backup drive. Do I have that right?

    Right now, my script creates, then sends a full snapshot to my external backup drive. But I'm close to having it ready for incremental backups, which would then mirror the actions your script does. I'm still keeping my separate @ and @home, and I've worked out how to modify your incremental snapshot section so it can create separate incremental @ and @home snapshots from their parents residing in a single location: /mnt/snapshots.

    I'm not ready yet to show my script; I have to test it first and make sure there aren't any 'gotchas' I missed.
    Not quite. My main drive, sda3, has @, my total system, and in addition I keep 5 snapshots under <rootfs>. The snapshots in /mnt/snapshots/ are not "full" snapshots. They are ro COW snapshots that will fill up over time IF I let them remain in /mnt/snapshots.

    Notice below that I have 5 ro snapshots under /mnt/snapshots and that my "usage" shows that 304 GiB of the SSD is still unallocated by BTRFS. My @ and the 5 snapshots take only 134.07 GiB of space. IF those 5 snapshots were "full" copies of my system I would have run out of SSD space by the time I attempted to create the 4th or 5th snapshot.

    Code:
    vdir /mnt/snapshots/ 
    total 0
    drwxr-xr-x 1 root root 334 Feb 10 16:28 @202203051947
    drwxr-xr-x 1 root root 334 Mar  6 23:01 @202203062302
    drwxr-xr-x 1 root root 334 Mar  6 23:04 @202203071533
    drwxr-xr-x 1 root root 334 Mar  6 23:04 @202203072053
    drwxr-xr-x 1 root root 334 Mar  8 10:56 @202203082247
    
    sudo btrfs filesystem usage /
    ...
    Overall:
       Device size:                 441.04GiB
       Device allocated:            137.04GiB
       Device unallocated:          304.00GiB
       Device missing:                  0.00B
       Used:                        134.07GiB
       Free (estimated):            306.26GiB      (min: 306.26GiB)
       Data ratio:                       1.00
       Metadata ratio:                   1.00
       Global reserve:              214.38MiB      (used: 0.00B)
    
    Data,single: Size:135.01GiB, Used:132.74GiB (98.32%)
      /dev/sda3     135.01GiB
    
    Metadata,single: Size:2.00GiB, Used:1.33GiB (66.41%)
      /dev/sda3       2.00GiB
    
    System,single: Size:32.00MiB, Used:48.00KiB (0.15%)
      /dev/sda3      32.00MiB
    
    Unallocated:
      /dev/sda3     304.00GiB
    However, when I start sending ro snapshots to other subvolumes, i.e., not under <rootfs>, my first snapshot cannot be incremental because the receive command has no parent to compare it to. After I take 15-20 minutes to send the first snapshot, subsequent snapshots can be sent incrementally in only a few seconds IF the destination drive has a full copy of the parent present on it.

    The BTRFS devs recommend 8 or less per subvolume. This is why I tell people that it is not wise to create 40 or 50 or more snapshots. People who use BTRFS and indiscriminately create snapshots will soon find that their system will slow to a crawl as their allocated space approaches their device size.

    The line from the USAGE command that displays the data:
    Data,single: Size:135.01GiB, Used:132.74GiB (98.32%)
    shows 98.32% of the 135.01GiB being used. That high percentage rate is normal since BTRFS uses almost all of the space it allocates. In 5 years I've never seen that percentage below 97% and rarely that low.




    Last edited by Snowhog; Feb 11, 2024, 02:27 PM.



  • Snowhog
    replied
    GreyGeek Question: Your script creates, but keeps, the full snapshot on your internal drive; the drive your working OS is installed on; compares the current full snapshot to the previous full snapshot, then creates and sends the incremental difference between the two to your backup drive. Do I have that right?

    Right now, my script creates, then sends a full snapshot to my external backup drive. But I'm close to having it ready for incremental backups, which would then mirror the actions your script does. I'm still keeping my separate @ and @home, and I've worked out how to modify your incremental snapshot section so it can create separate incremental @ and @home snapshots from their parents residing in a single location: /mnt/snapshots.

    I'm not ready yet to show my script; I have to test it first and make sure there aren't any 'gotchas' I missed.
    Last edited by Snowhog; Mar 09, 2022, 11:59 AM.



  • GreyGeek
    replied
    Originally posted by Snowhog View Post
    That's what I don't grok. Every time you run your script it creates a new full snapshot. At least, that's what I understand reading it. Is that not what happens?
    Yes. Using
    btrfs su snapshot -r /mnt/@ /mnt/snapshots/@yyyymmddhhmm
    creates a read-only snapshot in the /mnt/snapshots directory. It has to be read-only because only read-only snapshots can be sent, with the send command, to a subvolume not under <rootfs>.
    IOW, you cannot use
    mv /mnt/snapshots/@yyyymmddhhmm /backup
    because /backup is not under /mnt, the <rootfs>. /backup is its own <rootfs>, which is not the same <rootfs> that @ is under.

    While a snapshot is sitting under <rootfs> it is modified by COW changes to the system made by the user. Eventually, if enough changes are made, the snapshot "fills up" with changes and takes up as much disk space as the system itself. So, when I make an ro snapshot and leave it on my system it isn't populated very much until the COW changes begin to accumulate. Eventually, a single snapshot could grow to fill the entire disk if I didn't delete it or move it to another subvolume. This is why I can have my entire system plus 6, 8 or 10 snapshots or more on my 500Gb main drive but if I try to send a 6th snapshot to my 500Gb backup drive, which contains only 5 snapshots, I will get an "out of space" error. Unlike my snapshots on my system, the snapshots on my /backup drive are fully populated versions of my file system at the time I used the send command.

    How? When I do a send, either full or incremental, of a freshly made snapshot, the send command fully populates the snapshot with copies of the system files. In your case to copy all the files took about 16 minutes. Same for me as well. But, when you do an incremental snapshot, the btrfs receive command expects to see the parent snapshot sitting in the location being sent to, i.e., /backup. Using its copy of the parent on /backup as a "bucket", it renames that copy with the name of the child and it begins pouring into that child files received from the send command until the send command is finished. That usually takes less than 15 seconds or so, depending on how much has changed since the parent was sent over. The child on /backup is then an exact copy of the child on /mnt/snapshots.
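As a side note on assembling that pipeline in a script: keeping the directory and the snapshot name joined by an explicit '/' avoids the easy-to-miss concatenation slip seen earlier in the thread. A dry-run sketch with made-up names (the command is only printed, nothing here touches btrfs):

```shell
#!/bin/bash
# Dry run only: the command is printed, not executed. SNAPDIR,
# PARENT and CHILD are illustrative values, not anyone's real setup.
SNAPDIR=/mnt/snapshots
PARENT=@202203091833
CHILD=@202203101833
CMD="btrfs send -p $SNAPDIR/$PARENT $SNAPDIR/$CHILD | btrfs receive /backup"
echo "$CMD"
```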
    If you do not use the -r switch then the resulting snapshot CANNOT be sent using the btrfs send & receive command. But, you can add or remove files from an rw snapshot without hurting it using the cp command or dragging and dropping using Dolphin. This means that if you inadvertently delete a file or directory you can drag a copy of it out of one of your snapshots. I've used that feature several times because of shaky fingers typing.

    If your @ snapshot gets corrupted and you can't boot into it then the first step to recover is to boot using a LiveUSB that has BTRFS installed on it. Once booted up you mount your system's disk ( /dev/.../whatever) to /mnt, and then use the mv command to move @ out of the way.
    mv /mnt/@ /mnt/@old
    Then you use
    btrfs su snapshot /mnt/snapshots/@yyyymmddhhmm /mnt/@
    without using the -r switch. The snapshot was ro, but when snapshotting it again without the -r switch it is converted to a rw snapshot with the new name @. Then you reboot without the LiveUSB.
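Collected into one place, the rollback steps described above, printed rather than executed (they need a LiveUSB and the real devices; the LAPTOP label and the timestamp are placeholders):

```shell
#!/bin/bash
# Printed checklist only, not executed: a condensed form of the
# rollback procedure. Label and timestamp are placeholders.
cat <<'EOF'
mount /dev/disk/by-label/LAPTOP /mnt
mv /mnt/@ /mnt/@old
btrfs su snapshot /mnt/snapshots/@yyyymmddhhmm /mnt/@   # no -r: @ is rw again
reboot
EOF
```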

    I do a lot of experimenting and have done this restore procedure so many times I've lost count. It's never failed once.

    Last edited by GreyGeek; Mar 08, 2022, 09:54 PM.



  • Snowhog
    replied
    Originally posted by GreyGeek View Post
    For my use case separating the two actions merely doubles the work. I make only one snapshot, @yyyymmddhhmm, and immediately after I make it I incrementally send it to the external backup drive.
    That's what I don't grok. Every time you run your script it creates a new full snapshot. At least, that's what I understand reading it. Is that not what happens?



  • GreyGeek
    replied
    Originally posted by Snowhog View Post
    GreyGeek I'll let you know in a bit. I cleaned up everything; no snapshots in /mnt/snapshots or /backup. I'm rebooting to have a totally fresh environment to begin from.

    Finished. It took 16m10s from the time I launched the script until it finished.
    That is about the same time my full backup takes to complete. That's why I do mainly incremental backups. That, and to avoid stressing my NVMe SSD.


    Originally posted by Snowhog View Post
    My laptop is an older HP Pavilion g7 with an Intel i3 64-bit processor and 8 GiB of RAM. The USB HDD is a spinner (SeaGate 500B FreeAgent Go), so it isn't speedy.

    Still, I'm happy.
    That's all that matters.
    Originally posted by Snowhog View Post
    Your script makes a new full snapshot and then attempts an incremental snapshot. Would you not want to separate those two actions; have separate scripts? When one is implementing a backup scheme, one generally establishes a Full Backup and Incremental Backup schedule, say Full Backups on Mondays, and Incremental backups on Tuesdays through Sundays. Something akin to that.
    For my use case separating the two actions merely doubles the work. I make only one snapshot, @yyyymmddhhmm, and immediately after I make it I incrementally send it to the external backup drive. Perfect example of KISS.

    My internal NVMe is not removable so I back up to it at least weekly, or on special occasions like massive updates, using manual procedures. As long as I can boot to a root terminal I can recover @ by sending my latest @yyyymmddhhmm to /mnt/, replacing @
    mv /mnt/@ /mnt/@old
    btrfs send /backup/@yyyymmddhhmm | btrfs receive /mnt/
    and then renaming the received copy to @.


    Originally posted by Snowhog View Post
    Still trying to understand and learn.
    Me too, along with trying not to forget what I've learned in the past, which is turning out to be harder than one can imagine.
    Last edited by GreyGeek; Mar 08, 2022, 02:57 PM.



  • Snowhog
    replied
    Originally posted by GreyGeek View Post
    I would recommend one change to the power off command: use /dev/disk/by-uuid/<theuuidofthathdd> instead of /dev/sdb1 because, as I've experienced, /dev/sdXn assignments can change without warning but the uuid assignments are guaranteed to be unique and never change.
    I'll look into that. udisksctl would have to support that form of identification. I don't know that it doesn't, but just say'n.

    Added:
    Yup. Works. I also tried using by-label. That also works.
    Last edited by Snowhog; Mar 08, 2022, 02:44 PM.



  • GreyGeek
    replied
    Originally posted by Snowhog View Post
    I'm not happy with my external USB HDD staying powered after the script finishes. Yes, it gets unmounted, but...

    So, I added this after the last eval 'sleep 1':

    This nicely powers off the drive. Sweet.
    Sweet indeed!

    I plug my external USB HDD into an active (powered) USB3 hub which has buttons to turn the power to each port on or off. When the cursor comes back I push that button, which powers down the drive, and then I unplug the drive. If I was using a passive hub I'd use your technique.

    I would recommend one change to the power off command: use /dev/disk/by-uuid/<theuuidofthathdd> instead of /dev/sdb1 because, as I've experienced, /dev/sdXn assignments can change without warning but the uuid assignments are guaranteed to be unique and never change.



  • Snowhog
    replied
    Originally posted by GreyGeek View Post
    BTW, comments do not show up on the "New Posts" or "Today's Posts" listings.
    Yes, I'm aware of that. It's either 'by design' or a quirk within the vBulletin core programming. It's a mild complaint over on forum.vbulletin.com.



  • Snowhog
    replied
    I'm not happy with my external USB HDD staying powered after the script finishes. Yes, it gets unmounted, but...

    So, I added this after the last eval 'sleep 1':
    # Begin 2022-03-08 modification
    echo "Powering off external USB HDD"
    eval 'udisksctl power-off -b /dev/sdb1'
    # End 2022-03-08 modification
    This nicely powers off the drive. Sweet.



  • Snowhog
    replied
    GreyGeek I'll let you know in a bit. I cleaned up everything; no snapshots in /mnt/snapshots or /backup. I'm rebooting to have a totally fresh environment to begin from.

    Finished. It took 16m10s from the time I launched the script until it finished.

    My laptop is an older HP Pavilion g7 with an Intel i3 64-bit processor and 8 GiB of RAM. The USB HDD is a spinner (SeaGate 500B FreeAgent Go), so it isn't speedy.

    Still, I'm happy.

    Your script makes a new full snapshot and then attempts an incremental snapshot. Would you not want to separate those two actions; have separate scripts? When one is implementing a backup scheme, one generally establishes a Full Backup and Incremental Backup schedule, say Full Backups on Mondays, and Incremental backups on Tuesdays through Sundays. Something akin to that.

    Still trying to understand and learn.
    Last edited by Snowhog; Mar 08, 2022, 01:24 PM.

