SSD performance problems -- config problem or old drive?

This topic is closed.

    #31
The only things that have been "sketchy" about BTRFS were RAID5/6 and full-disk encryption, both of which have been resolved AFAIK, but I don't use either, so it's worth checking on those if you need them.

    I have been using BTRFS full time since tools version 0.19 without any problems.

    Please Read Me



      #32
      Thanks -- did some digging, and it sounds like that's about right (except RAID5/6 still sounds like a bad idea with btrfs: https://www.phoronix.com/scan.php?pa...ng-RAID5-RAID6 ).

Jim Salter is still pretty unexcited about btrfs compared to zfs, but it sounds like that's not at all a unanimous opinion, and the issues would be unlikely to affect me.



        #33
You were probably reading "info" posted by ZFS fanboys relying on their experiences, or what they read, in 2012. Btrfs has been stable for me for 5+ years. I've never had any sort of failure mode while using it, and all the problems I had were of my own making while experimenting around. Because one can restore from a snapshot within a couple of minutes, playing around with your system doesn't carry the penalties that activity used to carry. The Raid56 feature is the only part of Btrfs still ranked unstable.

        I will NEVER use a distro that does not allow setting BTRFS as the root filesystem.

        For potential problems a "Gotcha" page is here.

When people first read about Btrfs and how snapshots are created almost instantly and at first are essentially empty, they go hog wild and use tools like TimeShift or Snapper, etc., and create dozens of snapshots per day. A few days ago I was testing out the latest release of Snapper that's in the repository and edited the config down considerably. Even then, in the course of 10 hours, Snapper created 17 snapshots. When I did a single "sudo apt update && sudo apt full-upgrade", Snapper made three "Pre & Post" snapshots. After I was done experimenting with Snapper I deleted all of its snapshots and then uninstalled it. IMO, Snapper and TimeShift are more trouble than they are worth, since doing snapshots manually, and restoring if needed, are drop-dead easy. My only caveat, besides not using Raid56, is:

        The FAQ says:
        Having many subvolumes can be very slow

        The cost of several operations, including currently balance, device delete and fs resize (shrinking), is proportional to the number of subvolumes, including snapshots, and (slightly super-linearly) the number of extents in the subvolumes.

        This is "obvious" for "pure" subvolumes, as each is an independent file tree and has independent extents anyhow (except for ref-linked ones). But in the case of snapshots metadata and extents are (usually) largely ref-linked with the ancestor subvolume, so the full scan of the snapshot need not happen, but currently this happens.

        This means that subvolumes with more than a dozen snapshots can greatly slow down balance and device delete. The multiple tree walks involve both high CPU and IOPS usage. This means that schemes that snapshot a volume periodically should set a low upper limit on the number of those snapshots that are retained.
This "slow down" can occur if snapshots get very old and eventually fill up to the point where they are almost as big as the @ and @home subvolumes combined. Say @ plus @home takes 100GB. Your HD is 500GB. If you create 5 snapshot pairs and then ignore them, they'll eventually consume 100GB each. When your HD free space drops below 10% your system will slow down. Below 5% it may appear to stop.

Good snapshot management may use about 7 rolling @ snapshots, one per day, deleting the oldest. Keeping a weekly snapshot would generate another 52 snapshots over the course of a year. The older snapshots could become as large as @, and one or a few may eat up all the drive free space. So, only 4 weekly rolling snapshots is more prudent. A couple of monthly snapshots. One yearly snapshot kept on your system is probably one too many. A better scheme would be to keep the 7 daily snapshots and move the weekly, monthly and yearly snapshots to external subvolumes just after they are created, and then delete them from your local subvolume. 7 daily + 4 weekly + 1 yearly = 12 snapshots for @. For SSDs, systemd runs fstrim about once a week, or whatever frequency you set it for. That recaptures extents.

Repeat those numbers for the @home subvolume, and you have a total of about 12 × 2 = 24 snapshots.
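The rolling scheme above is easy to script by hand. A minimal sketch, assuming a hypothetical top-level mount at /mnt/top and date-stamped names like @_YYYYMMDD (so a plain sort is chronological); BTRFS defaults to a dry run that only prints the delete commands it would issue:

```shell
#!/bin/sh
# Keep only the newest KEEP snapshots matching DIR/@_*; older ones get deleted.
# BTRFS="echo btrfs" makes this a dry run; set BTRFS=btrfs and run as root for real.
BTRFS=${BTRFS:-echo btrfs}

prune_snapshots() {
    dir=$1; keep=$2
    # @_YYYYMMDD names sort chronologically, so the oldest come first.
    ls -d "$dir"/@_* 2>/dev/null | sort | head -n -"$keep" | while read -r snap; do
        $BTRFS subvolume delete "$snap"
    done
}

# Taking today's read-only snapshot would look like:
# $BTRFS subvolume snapshot -r /mnt/top/@ "/mnt/top/@_$(date +%Y%m%d)"
# prune_snapshots /mnt/top 7
```

With ten date-stamped snapshots and keep=7, the dry run prints three delete commands for the oldest three.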

        I didn't see the sense in keeping @home separate from @ and creating @yyyymmdd & @homeyyyymmdd every night, etc. So I copied the contents of @home into @/home and then edited out the stanza in /etc/fstab that mounted @home to /home. Now I only have to snapshot @ and use one send & receive command using the "incremental" flag (-p) to move the snapshot to an external or remote subvolume.
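That incremental send & receive can be wrapped in a small function. A sketch only: the snapshot paths are hypothetical, both snapshots must be read-only, the parent must already exist on the destination, and the BTRFS override exists purely so the command can be dry-run without root:

```shell
#!/bin/sh
# Incremental send: -p names the previous snapshot as the parent, so only the
# difference travels over the pipe. BTRFS="echo btrfs" turns this into a dry run.
BTRFS=${BTRFS:-sudo btrfs}

send_incremental() {
    parent=$1; new=$2; dest=$3
    $BTRFS send -p "$parent" "$new" | $BTRFS receive "$dest"
}

# Example with hypothetical paths (not run here):
# send_incremental /mnt/top/@_20210321 /mnt/top/@_20210322 /mnt/backup
```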
        Last edited by GreyGeek; Mar 22, 2021, 07:33 AM.
        "A nation that is afraid to let its people judge the truth and falsehood in an open market is a nation that is afraid of its people.”
        – John F. Kennedy, February 26, 1962.



          #34
          Thanks!

I did run into an issue with btrfs on an old laptop with a small SSD (and I assume this is user error somehow, because I barely know anything about btrfs) where the drive filled up for no apparent reason, and it turned out that apt was snapshotting every time it updated. Once I figured out what was happening it wasn't a big deal, but it was an unpleasant surprise. :-)



            #35
            The cost of several operations, including currently balance, device delete and fs resize
I don't normally do any of these. Maybe my next desktop will have a pair of storage devices, for which a regular balance would make sense, but it sounds like it would usually be a scheduled maintenance thing, so it could be scheduled for quiet times.
            Or am I missing something?
            Regards, John Little



              #36
              I've only been "forced" to balance twice when I ran out of metadata space. It was an old problem that I think has been fixed or at least lessened to a great extent. A decent write-up here: https://askubuntu.com/questions/4640...ll-but-its-not

              Other times I've used balance were related to moving data disk-to-disk or adding/removing devices from an array.




                #37
                BTW, "apt" doesn't auto-snapshot by itself. It was related to something else you had installed. This is why I wrote my own snap/backup utility script.




                  #38
If you use locate to find things, and Snapper, then you'll want to add "/.snapshots" and "/home/.snapshots" to PRUNEPATHS in /etc/updatedb.conf
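A sketch of that edit, assuming mlocate's config format where the variable is PRUNEPATHS (a space-separated, quoted list). CONF defaults to a demo file here, so you can see the effect before touching the real /etc/updatedb.conf as root:

```shell
#!/bin/sh
# Append the Snapper snapshot dirs to PRUNEPATHS so updatedb/locate skip them.
# Run against the real file with: CONF=/etc/updatedb.conf (as root).
CONF=${CONF:-updatedb.conf.demo}
[ -f "$CONF" ] || echo 'PRUNEPATHS="/tmp /var/spool"' > "$CONF"   # demo stand-in

sed -i 's|^PRUNEPATHS="\(.*\)"|PRUNEPATHS="\1 /.snapshots /home/.snapshots"|' "$CONF"
cat "$CONF"
# -> PRUNEPATHS="/tmp /var/spool /.snapshots /home/.snapshots"
```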



                    #39
Originally posted by oshunluvr
                    BTW, "apt" doesn't auto-snapshot by itself. It was related to something else you had installed. This is why I wrote my own snap/backup utility script.
I think what happened is described in e.g. this link, which suggests that k/ubuntu installs the package apt-btrfs-snapshot by default: "This package creates a btrfs snapshot each time packages are installed or removed."



                      #40
Originally posted by chconnor
I think what happened is described in e.g. this link, which suggests that k/ubuntu installs the package apt-btrfs-snapshot by default: "This package creates a btrfs snapshot each time packages are installed or removed."
                      Mmmm... news to me!
                      Current versions of Ubuntu and Kubuntu install a package named apt-btrfs-snapshot by default. This package creates a btrfs snapshot each time packages are installed or removed. These snapshots can take up a lot of space if not cleaned up regularly.
                      The article was published Dec 8, 2020, so I suspect that "current versions" of Kubuntu means 20.10
                      I installed Kubuntu 20.04 back when the Alpha was released and apt-btrfs-snapshot wasn't installed during the install back then.
                      Snapper includes the ability to take "Pre" and "Post" snapshots surrounding the use of the apt command.

                      Curious, I installed apt-btrfs-snapshot and then used muon to add a couple of packages. Here is what I got:
                      Code:
:~$ sudo apt-btrfs-snapshot list
Available snapshots:
@apt-snapshot-2021-03-22_13:37:32

                      Interesting! Now I am off to find out where that snapshot is located.

                      EDIT:
BONUS! After initially being stored in /tmp, the snapshot ended up being stored exactly where I would store it: under <ROOT_FS>, alongside @ and my other snapshots, as a ro snapshot.
                      apt-btrfs-snapshot is a keeper!
                      Last edited by GreyGeek; Mar 22, 2021, 03:32 PM.



                        #41
So I haven't seen any speedup of the system since freeing up space on / and /home (now about 78% full on both). I notice that gam_server (and possibly, by extension, mount.ntfs) sometimes takes lots of CPU, and that's associated with the lagging. gam_server, I guess, is part of gamin; any thoughts on its current utility? Most of what I find via googling is 15-year-old posts. But I just connected an external USB3 drive with a ton of files on it, and the system got super laggy for a long time, mainly experienced through Dolphin being super slow and saving of files, etc., being slow. I.e., file I/O is lagging hard. It makes me wonder if it's some combo of mediocre NTFS support with an overactive system service trying to index or watch files?



                          #42
                          Do you have akonadi running?
                          On some systems it is the prime cause of desktop lagginess.
                          I disabled it on my system and use "locate" when I need to find stuff.

                          Also, you may want to run Ksysguard or htop (in a Konsole) to see what is eating your CPU cycles.
                          Last edited by GreyGeek; Apr 03, 2021, 11:02 AM.



                            #43
Thinking about it a little, I wonder if something is causing extra disk I/O. I know some systems seem to like hitting the swap far too early, and that can cause drive-related tasks to slow down a LOT, so maybe check swap usage in KSysGuard when you see the lags, as well as CPU usage. Swap usage can also show up as CPU spikes, as the processor can be working overtime reading/writing things to swap, which is far, far slower than RAM. The swappiness can be adjusted if it is an issue.
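Checking and adjusting swappiness is quick. A read-only sketch; the value 10 below is just a commonly used lower setting, not a recommendation from this thread:

```shell
#!/bin/sh
# Show the current swappiness (60 is the usual default):
cat /proc/sys/vm/swappiness

# Lower it for the current boot (needs root):
# sudo sysctl vm.swappiness=10

# Make it persistent across reboots (the file name is arbitrary):
# echo 'vm.swappiness=10' | sudo tee /etc/sysctl.d/99-swappiness.conf
```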



                              #44
Thanks -- I've indeed had issues with swap in general (when I had less memory), and these days I do have a swap partition set up, but its usage is basically zero all the time unless I'm slamming the system. Doesn't seem related to these issues. (Swappiness is at the default 60 now...)

                              I've had Akonadi and Nepomuk disabled forever...

I do have a custom conky setup going... when this issue presents, it's mount.ntfs and gam_server that are causing it, at least when it was happening recently. It seems as if the "check for file modification" gamin server had to run through every file on the USB3 drive or something.

I heard once (quite possibly from one of y'all on this forum) that the Linux kernel architecture had one arguable weakness where file I/O (or maybe just USB file I/O?) could cause overall system lag. Something about how that data had to get shuffled around would cause things to get laggy. So I got to wondering if all the NTFS partitions I have mounted, combined with mediocre NTFS support in general, combined with a 6TB external drive connected over USB, might just be more than the OS can efficiently handle... but if I could disable gam_server without messing things up, that would be a tempting thing to try.



                                #45
                                ... weakness where file I/O (or maybe just USB file I/O?) could cause overall system lag
                                That seems to be a reference to i/o starvation, which Linux on some hardware was prone to. My old Kubuntu system, in use from 2006 to 2016, was prone to it; if I did a large copy to or from a USB stick everything became very laggy, unusably so.
                                I'd be surprised if a 20.04 system was affected by it, but computers often surprise me. Changing the scheduler away from CFQ by echoing something to a system file could help; I'll try to dig out a reference to it.
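Until that reference turns up, here's a read-only way to see which scheduler each block device is using; the active one shows in [brackets], device names vary per system, and the change command is commented out because it needs root and the right device name:

```shell
#!/bin/sh
# List each block device's available I/O schedulers; the active one is bracketed.
for q in /sys/block/*/queue/scheduler; do
    [ -r "$q" ] || continue
    dev=${q#/sys/block/}; dev=${dev%/queue/scheduler}
    printf '%s: %s\n' "$dev" "$(cat "$q")"
done

# To switch a device, e.g. to mq-deadline ("sda" is a placeholder):
# echo mq-deadline | sudo tee /sys/block/sda/queue/scheduler
```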


