
SSD and file system tuning in current Kubuntu


  • Caesar
    replied
    Originally posted by oshunluvr View Post
    IMO the best dual-boot setup is to have two drives each dedicated to a single OS.
And that's how my system is configured right now.



  • oshunluvr
    replied
No, you wouldn't be able to use Linux software RAID with Windows - I wasn't thinking about dual-booting. Anyway, good luck with it. I would assume some kernel down the road will support RST, but it could be a long wait. Windows and its weaknesses aren't usually a concern of Linux developers.

Looks like you're near max speed anyway:

NVMe Samsung MZVLW256 Solid State Drive
Average Novabench Disk Score: 150
Average sequential read speed: 1341 MB/s
Average sequential write speed: 793 MB/s
RAID0 would help, but honestly, unless you're doing a lot of drive access all day, I doubt you'll notice much difference once you get beyond SATA speeds. That is to say, it will be so fast you won't care that it's not RAID.

    IMO the best dual-boot setup is to have two drives each dedicated to a single OS.



  • Caesar
    replied
Probably yes (in AHCI mode - I haven't tried that), but I was looking for a dual-boot with Windows 10, and I have serious doubts it would work without major complications.



  • oshunluvr
    replied
You can still do RAID0 easily, just not with RST.
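For reference, the usual route is mdadm - just a sketch, and the device names below are assumptions for your two M.2 drives (creating the array also wipes both of them):

Code:
# create a two-drive software RAID0 array (destroys existing data!)
sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
# put a filesystem on it
sudo mkfs.ext4 /dev/md0
# record the array so it assembles at boot
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u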



  • Caesar
    replied
I have an MSI GT75 Titan 8RG with two M.2 SAMSUNG MZVLW256 SSDs (NVMe PCIe Gen3 x4), which came pre-configured in an Intel RST RAID0 array. Unfortunately, no Linux distro was able to detect it, so I had to dismantle the array, because I wanted Kubuntu installed alongside Windows 10.

In this topic there is a comparison of those two configurations under Windows 10 when it comes to drive speed.



  • oshunluvr
    replied
    Never seen that error. Sounds like an incompatibility with hdparm.

As far as your speed, that may not be bad, as it depends on your hardware. What computer are you using, and what model NVMe drive? Mine is a Samsung 970 Pro 512G - the fastest drive on the market - on an ASRock Z270M Extreme4 motherboard.


    Also, if you've already changed a bunch of settings, there's no telling what effect that might have. You might try running that again from a liveUSB so you have a clean slate.
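For what it's worth, hdparm speaks ATA, so some of its ioctls simply don't apply to NVMe devices - that may be all the error is. The NVMe-native tool is nvme-cli; a sketch, assuming the package is available and nvme0 is your controller:

Code:
sudo apt install nvme-cli
# NVMe-native identify, roughly what hdparm's identify would give you
sudo nvme id-ctrl /dev/nvme0
# list NVMe devices with model and firmware info
sudo nvme list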



  • Caesar
    replied
    Originally posted by oshunluvr View Post
1. Don't use discard as a mount option. *Buntus by default run a scheduled fstrim job to reduce SSD wear.
2. *Buntus set the scheduler for you when an SSD is installed (maybe no longer, as the schedulers themselves have been improved); however, the scheduler used for NVMe is "none". sda is not your NVMe drive, so the scheduler you looked at is irrelevant. Tests show the best available schedulers (BFQ and mq-deadline) perform worse than no scheduler at all on the NVMe interface.
3. How much added performance are you expecting? FYI, in order: SATA HD, SATA SSD, NVMe SSD:
    Code:
    achilleus@Linux:~$ for d in sda sdd nvme1n1 ; do sudo hdparm -t /dev/$d ; done
[sudo] password for achilleus: 
    
    /dev/sda:
    Timing buffered disk reads: 364 MB in  3.01 seconds = 120.87 MB/sec
    /dev/sdd: No such file or directory
    
    /dev/nvme1n1:
    HDIO_DRIVE_CMD(identify) failed: Inappropriate ioctl for device
    Timing buffered disk reads: 5438 MB in  3.00 seconds = 1812.23 MB/sec
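Side note: hdparm -t only measures buffered sequential reads, so it says little about random I/O. Something like fio gives a fuller picture - just a sketch, assuming the fio package is installed, and the test-file path is arbitrary:

Code:
# 30-second 4k random-read test with direct I/O (bypasses the page cache)
fio --name=randread --filename=$HOME/fiotest --size=1G --rw=randread --bs=4k \
    --direct=1 --runtime=30 --time_based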



  • oshunluvr
    replied
1. Don't use discard as a mount option. *Buntus by default run a scheduled fstrim job to reduce SSD wear.
2. *Buntus set the scheduler for you when an SSD is installed (maybe no longer, as the schedulers themselves have been improved); however, the scheduler used for NVMe is "none". sda is not your NVMe drive, so the scheduler you looked at is irrelevant. Tests show the best available schedulers (BFQ and mq-deadline) perform worse than no scheduler at all on the NVMe interface.
3. How much added performance are you expecting? FYI, in order: SATA HD, SATA SSD, NVMe SSD:

Code:
stuart@office:~$ for d in sda sdd nvme0n1 ; do sudo hdparm -t /dev/$d ; done

/dev/sda:
Timing buffered disk reads: 432 MB in 3.00 seconds = 143.90 MB/sec

/dev/sdd:
Timing buffered disk reads: 1616 MB in 3.00 seconds = 538.02 MB/sec

/dev/nvme0n1:
Timing buffered disk reads: 8414 MB in 3.00 seconds = 2804.32 MB/sec



  • jlittle
    replied
Linux I/O schedulers are a dark art. Two systems with the same hardware can behave differently. The default before SSDs used to be cfq, and this was often really bad for SSDs. (It was really, really bad on the old system I had from 2006 to 2016: some copying loads caused I/O starvation in the OS, so it hung until the copying finished, and if swap got involved (many browser tabs, not much RAM) it crashed Linux altogether.)

So some early adopters of SSDs needed to change it. But Ubuntu put in a rule to change it for SSDs, and AFAICT since then worrying about the scheduler for SSDs is unnecessary, with only marginal improvements possible.
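For anyone who still wants to experiment, the scheduler can be swapped at runtime without a reboot - a sketch, where sda is just an example device and the change does not persist:

Code:
# show the available schedulers; the active one is in brackets
cat /sys/block/sda/queue/scheduler
# switch to mq-deadline for this boot only
echo mq-deadline | sudo tee /sys/block/sda/queue/scheduler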



  • verndog
    replied
I also remember an old article like the one the OP refers to, and it had me dreading installing an SSD. After some research I found out that trim and the like already work out of the box. I also remember reading about SSD failures, but as claydoh mentioned, SSDs are more robust these days. I bought a cheap ($34) NVMe drive for that very reason, but it has a 3-year warranty.
No matter what happens, the speed is astonishing! My old HDD booted Kubuntu in around 40-50 seconds. The new SSD is almost instantaneous... maybe 3 secs!
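If you want hard numbers instead of a stopwatch, systemd will report them (assuming a systemd-based release, which all current *buntus are):

Code:
# total boot time, split into firmware/loader/kernel/userspace
systemd-analyze
# the slowest units during boot
systemd-analyze blame | head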



  • claydoh
    replied
    And some recent scheduler benchmarks, if you want to try others:

    https://www.phoronix.com/scan.php?pa...io-sched&num=1



  • claydoh
    replied
    But Ubuntu already takes care of this on a per-device basis:

Here is my NVMe drive:
    Code:
    $ cat /sys/block/nvme0n1/queue/scheduler
     [none]
    and my spinning drive:

    Code:
    $ cat /sys/block/sda/queue/scheduler 
    noop deadline [cfq]
Note this is on an 18.04-based system.
    Last edited by claydoh; Jun 08, 2019, 12:29 PM.



  • Caesar
    replied
According to the Arch Linux wiki (IMO the best Linux wiki):

The process of changing the I/O scheduler, depending on whether the disk is rotating or not, can be automated and persisted across reboots. For example, the udev rules below set the scheduler to none for NVMe, mq-deadline for SSD/eMMC, and bfq for rotational drives:

    Code:
    /etc/udev/rules.d/60-ioschedulers.rules
    # set scheduler for NVMe
    ACTION=="add|change", KERNEL=="nvme[0-9]*", ATTR{queue/scheduler}="none"
    # set scheduler for SSD and eMMC
    ACTION=="add|change", KERNEL=="sd[a-z]|mmcblk[0-9]*", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="mq-deadline"
    # set scheduler for rotating disks
    ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="1", ATTR{queue/scheduler}="bfq"
In my Kubuntu I have the following scheduler:

    Code:
    achilleus@Linux:~$ cat /sys/block/sda/queue/scheduler
    [mq-deadline] none
AFAIK the scheduler name shown in square brackets is the active one. Is it worth changing?

Also, I followed the advice below, as I have 32 GB of RAM and mostly only 3 GB is in use.

To reduce writing to the SSD, we will be using tmpfs (i.e. a RAM disk) to store the system cache and temp directory. You can use the same for logs in /var/log; however, I like to keep my logs between reboots, so I will use the regular disk for that. Add the following lines to /etc/fstab:

Code:
tmpfs   /tmp       tmpfs   defaults,noatime,mode=1777   0  0
tmpfs   /var/spool tmpfs   defaults,noatime,mode=1777   0  0
tmpfs   /var/tmp   tmpfs   defaults,noatime,mode=1777   0  0
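One possible variant: tmpfs defaults to half of RAM per mount, so you can cap it explicitly (the size=4G here is just an example value):

Code:
tmpfs   /tmp   tmpfs   defaults,noatime,mode=1777,size=4G   0  0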
TRIM is enabled out of the box on my system [non-zero values of DISC-GRAN (discard granularity) and DISC-MAX (discard max bytes) indicate TRIM support]:

    Code:
    achilleus@Linux:~$ lsblk --discard
    NAME        DISC-ALN DISC-GRAN DISC-MAX DISC-ZERO
    sda                0        0B       0B         0
    ├─sda1             0        0B       0B         0
    └─sda2             0        0B       0B         0
    nvme0n1            0      512B       2T         0
    ├─nvme0n1p1        0      512B       2T         0
    ├─nvme0n1p2        0      512B       2T         0
    ├─nvme0n1p3        0      512B       2T         0
    └─nvme0n1p4        0      512B       2T         0
    nvme1n1            0      512B       2T         0
    └─nvme1n1p1        0      512B       2T         0
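On 18.04-based systems the periodic TRIM is handled by a systemd timer; you can check it, or run a pass by hand (a sketch):

Code:
# confirm the weekly TRIM timer is active
systemctl status fstrim.timer
# trim the root filesystem once, verbosely
sudo fstrim -v /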
I'm keeping journaling enabled on ext4.
    Last edited by Caesar; Jun 08, 2019, 12:18 PM.



  • claydoh
    replied
    Originally posted by jglen490 View Post
I think with the present state of things, there is very little to worry about WRT SSD drives. Micromanaging will probably gain something equally microscopic. I have no intention of worrying about my SSD. Of course, mine's a cheap Kingston and may not last very long, but if it looks promising to have an SSD with stock settings in Kubuntu, I'm not going to worry about it. If this one does fail soon, I have its twin as a spare sitting in my desk drawer.
Agreed. I am still waiting for my first SSD to die. Even the 5-year-old one I no longer have is still chugging along.



  • jglen490
    replied
I think with the present state of things, there is very little to worry about WRT SSD drives. Micromanaging will probably gain something equally microscopic. I have no intention of worrying about my SSD. Of course, mine's a cheap Kingston and may not last very long, but if it looks promising to have an SSD with stock settings in Kubuntu, I'm not going to worry about it. If this one does fail soon, I have its twin as a spare sitting in my desk drawer.

