SSD and file system tuning in current Kubuntu

This topic is closed.
    #16
    I also remember the old article that the OP refers to, and I was dreading installing an SSD. After some research I found out that trim and the like are already working. I also remember reading about SSD failures; as claydoh mentioned, SSDs are more robust these days. I bought a cheap ($34) NVMe drive for that very reason, but it has a 3-year warranty.
    No matter what happens, the speed is astonishing! My old HDD booted Kubuntu in around 40 - 50 seconds. The new SSD is almost instantaneous... maybe 3 seconds!
    Boot Info Script

    Comment


      #17
      Linux I/O schedulers are a dark art; two systems with the same hardware can behave differently. Before SSDs the default was cfq, which was often really bad for SSDs. (It was really, really bad on the old system I had from 2006-2016: some copying loads caused I/O starvation in the OS, which hung until the copying finished, and if swap got involved (many browser tabs, not much RAM) it crashed Linux altogether.)

      So some early adopters of SSDs needed to change it. But Ubuntu put in a rule to change it for SSDs, and AFAICT since then worrying about the scheduler for SSDs is not necessary, with only marginal improvements possible.
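      For anyone who wants to see what their system actually picked, the active scheduler can be read straight from sysfs. A minimal sketch (which devices appear depends on your hardware):

      ```shell
      # Print the active I/O scheduler for every block device present.
      # The bracketed entry is the one in use; NVMe drives typically show [none].
      for sched in /sys/block/*/queue/scheduler; do
          [ -e "$sched" ] || continue    # skip if the glob matched nothing
          printf '%s: ' "$sched"
          cat "$sched"
      done
      ```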
      Regards, John Little

      Comment


        #18
        1. Don't use discard as a mount option. *Buntus by default use a cron job to run trim to reduce SSD wear.
        2. *Buntus set the scheduler for you when an SSD is installed (maybe no longer, as the schedulers themselves have been improved); however, the scheduler used for NVMe is "none". sda is not your NVMe drive, so the scheduler you looked at is irrelevant. Tests show the best available schedulers (BFQ and mq-deadline) perform worse than no scheduler at all on the NVMe interface.
        3. How much added performance are you expecting? FYI, in order: SATA HD, SATA SSD, NVMe SSD;
        Code:
        stuart@office:~$ for d in sda sdd nvme0n1 ; do sudo hdparm -t /dev/$d ; done

        /dev/sda:
        Timing buffered disk reads: 432 MB in 3.00 seconds = 143.90 MB/sec

        /dev/sdd:
        Timing buffered disk reads: 1616 MB in 3.00 seconds = 538.02 MB/sec

        /dev/nvme0n1:
        Timing buffered disk reads: 8414 MB in 3.00 seconds = 2804.32 MB/sec
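        On the trim point above, a quick way to confirm the periodic trim job actually exists (a sketch: newer *buntus drive it from a systemd timer, very old releases used a cron script):

        ```shell
        # Look for the periodic TRIM job, falling back through the known locations.
        systemctl list-timers fstrim.timer --no-pager 2>/dev/null \
            || ls -l /etc/cron.weekly/fstrim 2>/dev/null \
            || echo "no periodic trim job found - run 'sudo fstrim -av' by hand"
        ```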


        Please Read Me

        Comment


          #19
          Originally posted by oshunluvr
          1. Don't use discard as a mount option. *Buntus by default use a cron job to run trim to reduce SSD wear.
          2. *Buntus set the scheduler for you when an SSD is installed (maybe no longer, as the schedulers themselves have been improved); however, the scheduler used for NVMe is "none". sda is not your NVMe drive, so the scheduler you looked at is irrelevant. Tests show the best available schedulers (BFQ and mq-deadline) perform worse than no scheduler at all on the NVMe interface.
          3. How much added performance are you expecting? FYI, in order: SATA HD, SATA SSD, NVMe SSD;
          Code:
          achilleus@Linux:~$ for d in sda sdd nvme1n1 ; do sudo hdparm -t /dev/$d ; done
          [sudo] password for achilleus: 
          
          /dev/sda:
          Timing buffered disk reads: 364 MB in  3.01 seconds = 120.87 MB/sec
          /dev/sdd: No such file or directory
          
          /dev/nvme1n1:
          HDIO_DRIVE_CMD(identify) failed: Inappropriate ioctl for device
          Timing buffered disk reads: 5438 MB in  3.00 seconds = 1812.23 MB/sec

          Comment


            #20
            Never seen that error. It sounds like an incompatibility with hdparm.

            As far as your speed goes, that may not be bad - it depends on your hardware. What computer are you using, and what model NVMe drive? Mine is a Samsung 970 Pro 512G - the fastest drive on the market - on an ASRock Z270M Extreme4 motherboard.


            Also, if you've already changed a bunch of settings, there's no telling what effect that might have. You might try running that again from a liveUSB so you have a clean slate.

            Please Read Me

            Comment


              #21
              I have an MSI GT75 Titan 8RG with two M.2 SAMSUNG MZVLW256 SSDs (NVMe PCIe Gen3 x4), which came pre-configured in an Intel RST RAID0 array. Unfortunately no Linux distro was able to detect it, so I had to dismantle the array, because I wanted Kubuntu installed alongside Windows 10.

              This topic has a comparison of those two configurations under Windows 10 when it comes to drive speed.

              Comment


                #22
                You can still do RAID0 easily, just not with RST
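                For the record, a hedged sketch of what "RAID0 without RST" looks like with Linux's mdadm. The device names are placeholders for your actual drives, and --create wipes the member disks, so don't run this on data you care about:

                ```shell
                # Create a two-member RAID0 array from the NVMe drives, then format it.
                # /dev/nvme0n1 and /dev/nvme1n1 are assumed names - check with lsblk first.
                sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 \
                    /dev/nvme0n1 /dev/nvme1n1
                sudo mkfs.ext4 /dev/md0

                # Persist the array definition so it assembles at boot.
                sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
                sudo update-initramfs -u
                ```

                As noted below, this won't be visible from Windows, so it only suits a Linux-only drive set.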

                Please Read Me

                Comment


                  #23
                  Probably yes (AHCI - I haven't tried that), but I was looking for a dual-boot with Windows 10, and I have many doubts whether it would work without major complications.

                  Comment


                    #24
                    No, you wouldn't be able to use Linux software RAID with Windows - I wasn't thinking about dual-booting. Anyway, good luck with it. I would assume some kernel down the road will support RST, but it could be a long wait. Windows and its weaknesses aren't usually a concern of Linux developers.

                    Looks like you're near max speed anyway:
                    NVMe Samsung MZVLW256 Solid State Drive
                    Average Novabench Disk Score: 150
                    Average sequential read speed: 1341 MB/s
                    Average sequential write speed: 793 MB/s

                    RAID0 would help, but honestly unless you're doing a lot of drive access all day I doubt you'll notice much difference once you get beyond SATA speeds. That is to say it will be so fast you won't care that it's not RAID.

                    IMO the best dual-boot setup is to have two drives each dedicated to a single OS.

                    Please Read Me

                    Comment


                      #25
                      Originally posted by oshunluvr
                      IMO the best dual-boot setup is to have two drives each dedicated to a single OS.
                      And that's how my system is configured right now.

                      Comment
