    OMG snaps

    I have a couple of problems with snap.
    The first one is:

    Code:
    ~$ lsblk
    NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
    loop0    7:0    0  54.4M  1 loop /snap/core18/1066
    loop1    7:1    0  42.8M  1 loop /snap/gtk-common-themes/1313
    loop2    7:2    0  54.4M  1 loop /snap/core18/1055
    loop3    7:3    0  14.1M  1 loop /snap/guake-cl/1
    loop4    7:4    0    74M  1 loop /snap/wine-platform-3-stable/6
    loop5    7:5    0 218.6M  1 loop /snap/wine-platform-runtime/23
    loop6    7:6    0  35.3M  1 loop /snap/gtk-common-themes/1198
    loop7    7:7    0  88.5M  1 loop /snap/core/7270
    loop8    7:8    0 456.4M  1 loop /snap/wine-platform/128
    loop9    7:9    0  88.4M  1 loop /snap/core/7169
    loop10   7:10   0 216.4M  1 loop /snap/wine-platform-runtime/6
    etc.
    fdisk is even worse:

    Code:
    ~$ sudo fdisk -l
    
    Disk /dev/loop0: 54.4 MiB, 57069568 bytes, 111464 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    
    and so on.
    Even blkid is not immune:
    Code:
    /dev/loop0: TYPE="squashfs"
    ...
    /dev/loop10: TYPE="squashfs"
    
    [squashfs? I'll squash you in a minute :]
    They clog mtab, show up in df, and in anything else to do with partitions.
    Mildly to wildly annoying.
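    (At least lsblk can be told to hide them: loop devices are major number 7, as the MAJ column above shows, so a minimal workaround should be:)

    Code:
    # exclude devices with major number 7 (the loop devices) from the listing
    lsblk -e7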

    The second (and this is worse):

    Code:
    ~$ systemd-analyze blame | head -n 20
           12.054s NetworkManager-wait-online.service
            3.683s dev-sdb1.device
            2.455s systemd-journal-flush.service
            1.813s dev-loop10.device
            1.811s dev-loop8.device
            1.809s dev-loop9.device
            1.636s dev-loop4.device
            1.613s dev-loop5.device
            1.601s dev-loop7.device
            1.597s dev-loop6.device
            1.594s dev-loop3.device
            1.546s dev-loop2.device
            1.535s dev-loop1.device
            1.528s dev-loop0.device
            1.316s snapd.service
            1.042s udisks2.service
             546ms systemd-udev-trigger.service
             478ms NetworkManager.service
             412ms upower.service
             383ms networkd-dispatcher.service
    Add them up and it's more than 20 seconds of dev-looping its navel at startup.
    Funny? No. Worth it? Not to me. Acceptable? Not really.

    ----------------------------------------------------

    So, two questions:
    The first: is anything being done - to the best of your knowledge - to address these problems?

    The second: what if I wanted to get rid of snaps?
    Basically, I have nothing in there of much use to me except wine.*
    So, assuming I'd want to re-install and re-configure it, just remove snapd? And the /snap directory?
    Or anything else?
    I have searched far and wide, and am still confused.

    * I could swear I did not snap-install wine.
    I may be wrong - I've installed a few distros lately - but I really think I added the repos for WineHQ and installed that. I even have a winehq.key in my /home...
    What may have happened is that, checking out Discover, I noticed it had a Tiberian Sun game. It uses Wine, and had its own snap (I've removed it now).
    So Discover snapped the Wine...

    #2
    If you don't also purge snapd, you will still see the loop nonsense.
    Boot Info Script



      #3
      So, it's
      sudo apt autoremove --purge snapd
      ? Or is it better to
      sudo apt autoremove --purge snapd gnome-software-plugin-snap
      ?
      Also delete /var/cache/snapd/, /snap, and ~/snap?
      And... anything else?



        #4
        Originally posted by Don B. Cilly View Post
        So, it's
        sudo apt autoremove --purge snapd
        ? Or is it better to
        sudo apt autoremove --purge snapd gnome-software-plugin-snap
        ?
        Also delete /var/cache/snapd/, /snap, and ~/snap?
        And... anything else?
        This is what I used to rid myself of the loops.
        Code:
        sudo apt autoremove --purge snapd
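        (The purge takes /var/cache/snapd and /snap with it; it does not touch your home directory, so if you want that gone too, something like:)
        Code:
        # the per-user snap directory is not removed by the purge
        rm -rf ~/snap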
        Boot Info Script



          #5
          You do not need to remove anything to get rid of the loopy loops.

          Of course you can if you do not need it ("snaps"), or you can just turn it off ("snapd").

          install "kde-config-systemd" then you will have a system settings module to work with systemd .

          You just mask the 2 entries "snapd.seeded.service" and "snapd.service"; this will keep all the rest "inactive".
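          (from a terminal, the same masking should be doable with plain systemctl; a sketch:)

          Code:
          # mask the two snapd entry points; the other snapd units then stay inactive
          sudo systemctl mask snapd.service snapd.seeded.service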

          I have all these snap-related packages installed:

          Code:
          vinny@vinny-Bonobo-Extreme:~$ dpkg -l | grep snap
          ii  libsnapd-glib1:amd64                          1.47-0ubuntu0.18.04.0                               amd64        GLib snapd library
          ii  libsnapd-qt1:amd64                            1.47-0ubuntu0.18.04.0                               amd64        Qt snapd library
          ii  libsnappy1v5:amd64                            1.1.7-1                                             amd64        fast compression/decompression library
          ii  plasma-discover-backend-snap                  5.16.3+p18.04+git20190723.0107-0                    amd64        Discover software management suite - Snap backend
          rc  plasma-discover-snap-backend                  5.14.5+p18.04+git20190108.1338-0                    amd64        Discover software manager suite - Snappy support
          ii  snapd                                         2.39.2+18.04                                        amd64        Daemon and tooling that enable snap packages
          But I never see a pesky loop unless I mount one for some reason.
          And if I ever wanted to install a snap, everything is still in place to do so.

          VINNY
          i7 4core HT 8MB L3 2.9GHz
          16GB RAM
          Nvidia GTX 860M 4GB RAM 1152 cuda cores



            #6
            I have masked them and rebooted, and I'm afraid lsblk still shows all ten loops, and the times in systemd-analyze blame have gotten worse. :·/

            Code:
                    2.155s dev-loop8.device
                    2.099s dev-loop9.device
                    2.091s dev-loop10.device
                    2.012s dev-loop0.device
                    2.001s dev-loop1.device
                    1.997s dev-loop2.device
                    1.991s dev-loop4.device
                    1.987s dev-loop5.device
                    1.972s dev-loop7.device
            [attached screenshot: systemd.png]
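            (Apparently the mounts themselves do not come from snapd.service: snapd generates one systemd mount unit per snap revision, and masking the two services leaves those alone. They should show up with something like:)

            Code:
            # list the generated per-snap mount units
            systemctl list-units --type=mount | grep snap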



              #7
              Strange; all I have are the ones that start with "snapd". Are the rest snaps that you have installed? If so, then yes, they will try to load, as they are software you installed that thinks it is needed.

              Here I get:

              Code:
              vinny@vinny-Bonobo-Extreme:~$ dpkg -l | grep snap
              ii  libsnapd-glib1:amd64                          1.47-0ubuntu0.18.04.0                               amd64        GLib snapd library
              ii  libsnapd-qt1:amd64                            1.47-0ubuntu0.18.04.0                               amd64        Qt snapd library
              ii  libsnappy1v5:amd64                            1.1.7-1                                             amd64        fast compression/decompression library
              ii  plasma-discover-backend-snap                  5.16.3+p18.04+git20190723.0107-0                    amd64        Discover software management suite - Snap backend
              rc  plasma-discover-snap-backend                  5.14.5+p18.04+git20190108.1338-0                    amd64        Discover software manager suite - Snappy support
              ii  snapd                                         2.39.2+18.04                                        amd64        Daemon and tooling that enable snap packages
              Code:
              vinny@vinny-Bonobo-Extreme:~$ lsblk
              NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
              sda      8:0    0 465.8G  0 disk 
              ├─sda1   8:1    0 300.6G  0 part 
              ├─sda2   8:2    0     4G  0 part [SWAP]
              ├─sda3   8:3    0  52.4G  0 part 
              ├─sda4   8:4    0     1K  0 part 
              ├─sda5   8:5    0  55.7G  0 part 
              └─sda6   8:6    0  53.1G  0 part /
              sdb      8:16   0 931.5G  0 disk 
              └─sdb1   8:17   0 931.5G  0 part /mnt/btrfs
              sdc      8:32   0 232.9G  0 disk /mnt/test
              Code:
              vinny@vinny-Bonobo-Extreme:~$ blkid
              /dev/sda1: UUID="ff5d66d4-35b6-4c9c-a64e-8dfbe2aa1e31" UUID_SUB="564f9db6-6533-4d2d-b776-9c3e8ea6b1bf" TYPE="btrfs" PTTYPE="dos" PARTUUID="0006c611-01"
              /dev/sda2: UUID="98d20e91-1908-48a9-b713-c4aa0fd8b055" TYPE="swap" PARTUUID="0006c611-02"
              /dev/sda3: UUID="f10fe189-1184-4546-bbb2-58a9f0edbf8d" TYPE="ext4" PTTYPE="dos" PARTUUID="0006c611-03"
              /dev/sda5: UUID="baba2b60-3d3b-4781-8320-c7a25dd3cf52" TYPE="ext4" PTTYPE="dos" PARTUUID="0006c611-05"
              /dev/sda6: UUID="12886cb0-73e1-494a-8ec6-789b17d74e6a" TYPE="ext4" PTTYPE="dos" PARTUUID="0006c611-06"
              /dev/sdb1: LABEL="Extra Drive 1" UUID="030aba53-b498-4ff5-9e41-80699f3e2c02" UUID_SUB="161743d2-b19b-4268-ba35-402895d37c2c" TYPE="btrfs" PTTYPE="dos" PARTLABEL="primary" PARTUUID="e5d309dd-2438-43ff-8129-5380fa9193b0"
              /dev/sdc: UUID="fe385e89-57a5-4632-9823-043a70b67c65" UUID_SUB="acac51a4-abf8-4dc5-b939-c72dce1f0164" TYPE="btrfs"
              Code:
              vinny@vinny-Bonobo-Extreme:~$ findmnt
              TARGET                                SOURCE     FSTYPE     OPTIONS
              /                                     /dev/sda6  ext4       rw,relatime,errors=remount-ro
              ├─/sys                                sysfs      sysfs      rw,nosuid,nodev,noexec,relatime
              │ ├─/sys/kernel/security              securityfs securityfs rw,nosuid,nodev,noexec,relatime
              │ ├─/sys/fs/cgroup                    tmpfs      tmpfs      ro,nosuid,nodev,noexec,mode=755
              │ │ ├─/sys/fs/cgroup/unified          cgroup     cgroup2    rw,nosuid,nodev,noexec,relatime,nsdelegate
              │ │ ├─/sys/fs/cgroup/systemd          cgroup     cgroup     rw,nosuid,nodev,noexec,relatime,xattr,name=systemd
              │ │ ├─/sys/fs/cgroup/devices          cgroup     cgroup     rw,nosuid,nodev,noexec,relatime,devices
              │ │ ├─/sys/fs/cgroup/pids             cgroup     cgroup     rw,nosuid,nodev,noexec,relatime,pids
              │ │ ├─/sys/fs/cgroup/net_cls,net_prio cgroup     cgroup     rw,nosuid,nodev,noexec,relatime,net_cls,net_prio
              │ │ ├─/sys/fs/cgroup/blkio            cgroup     cgroup     rw,nosuid,nodev,noexec,relatime,blkio
              │ │ ├─/sys/fs/cgroup/perf_event       cgroup     cgroup     rw,nosuid,nodev,noexec,relatime,perf_event
              │ │ ├─/sys/fs/cgroup/rdma             cgroup     cgroup     rw,nosuid,nodev,noexec,relatime,rdma
              │ │ ├─/sys/fs/cgroup/hugetlb          cgroup     cgroup     rw,nosuid,nodev,noexec,relatime,hugetlb
              │ │ ├─/sys/fs/cgroup/cpu,cpuacct      cgroup     cgroup     rw,nosuid,nodev,noexec,relatime,cpu,cpuacct
              │ │ ├─/sys/fs/cgroup/freezer          cgroup     cgroup     rw,nosuid,nodev,noexec,relatime,freezer
              │ │ ├─/sys/fs/cgroup/memory           cgroup     cgroup     rw,nosuid,nodev,noexec,relatime,memory
              │ │ └─/sys/fs/cgroup/cpuset           cgroup     cgroup     rw,nosuid,nodev,noexec,relatime,cpuset
              │ ├─/sys/fs/pstore                    pstore     pstore     rw,nosuid,nodev,noexec,relatime
              │ ├─/sys/kernel/debug                 debugfs    debugfs    rw,relatime
              │ ├─/sys/fs/fuse/connections          fusectl    fusectl    rw,relatime
              │ └─/sys/kernel/config                configfs   configfs   rw,relatime
              ├─/proc                               proc       proc       rw,nosuid,nodev,noexec,relatime
              │ └─/proc/sys/fs/binfmt_misc          systemd-1  autofs     rw,relatime,fd=35,pgrp=1,timeout=0,minproto=5,maxproto=5,di
              ├─/dev                                udev       devtmpfs   rw,nosuid,relatime,size=8150380k,nr_inodes=2037595,mode=755
              │ ├─/dev/pts                          devpts     devpts     rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000
              │ ├─/dev/shm                          tmpfs      tmpfs      rw,nosuid,nodev
              │ ├─/dev/mqueue                       mqueue     mqueue     rw,relatime
              │ └─/dev/hugepages                    hugetlbfs  hugetlbfs  rw,relatime,pagesize=2M
              ├─/run                                tmpfs      tmpfs      rw,nosuid,noexec,relatime,size=1637228k,mode=755
              │ ├─/run/lock                         tmpfs      tmpfs      rw,nosuid,nodev,noexec,relatime,size=5120k
              │ └─/run/user/1000                    tmpfs      tmpfs      rw,nosuid,nodev,relatime,size=1637228k,mode=700,uid=1000,gi
              ├─/mnt/btrfs                          /dev/sdb1  btrfs      rw,relatime,compress=lzo,space_cache,subvolid=5,subvol=/
              └─/mnt/test                           /dev/sdc   btrfs      rw,relatime,ssd,space_cache,subvolid=5,subvol=/
              vinny@vinny-Bonobo-Extreme:~$
              Code:
              vinny@vinny-Bonobo-Extreme:~$ systemd-analyze blame | head -n 20
                     24.201s plymouth-start.service
                     17.015s dev-sda6.device
                     13.998s plymouth-read-write.service
                      7.347s udisks2.service
                      6.962s systemd-journal-flush.service
                      5.175s systemd-udevd.service
                      4.585s NetworkManager.service
                      4.092s networkd-dispatcher.service
                      3.940s apparmor.service
                      3.026s mnt-btrfs.mount
                      2.785s accounts-daemon.service
                      2.460s gpu-manager.service
                      2.420s grub-common.service
                      2.318s linuxlogo.service
                      2.077s postfix@-.service
                      1.866s systemd-modules-load.service
                      1.790s systemd-tmpfiles-setup-dev.service
                      1.299s systemd-tmpfiles-setup.service
                      1.296s thermald.service
                      1.275s wpa_supplicant.service
              Humm, looks like I need to mask the Plymouth ones on this system as well!

              The loop devices are connected to each of the snaps you have installed, and while those are installed the loops will still be active, unless you remove "snapd".

              Or you can unmount them (the loop devices).
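              (a throwaway sketch, if you want them gone until the next boot; the snap mounts all show up as type squashfs:)

              Code:
              # unmount every mounted snap squashfs; they come back at boot while snapd is active
              for m in $(findmnt -rn -t squashfs -o TARGET); do sudo umount "$m"; done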

              VINNY
              i7 4core HT 8MB L3 2.9GHz
              16GB RAM
              Nvidia GTX 860M 4GB RAM 1152 cuda cores



                #8
                I insist. I did not install them! They were sneaked onto my system. It's a conspiracy! Snaps are snapping at my heels! I'm surrounded! Er...

                Tell you what. I'll just purge and nuke.
                I was waiting for more suggestions on the systemd hack, but... they'll just come back anyway.

                About Plymouth... wouldn't eliminating splash from grub do it?
                And if you remove quiet too... I love it.
                Not only does it look like you're booting a nuclear power station (in a cheap movie), but it's also vaguely informative.
                In the sense that, if something hangs at boot, you see it immediately.
                It's probably slightly faster too.
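                (That should just be the kernel command line; a sketch, assuming the stock Ubuntu defaults:)

                Code:
                # in /etc/default/grub, change:
                #   GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
                # to:
                #   GRUB_CMDLINE_LINUX_DEFAULT=""
                # then regenerate the grub config:
                sudo update-grub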



                  #9
                  So I purged.
                  The gnome-software-plugin-snap, I checked, I didn't have.
                  But since I had some gnome/GTK snaps, I wasn't sure.

                  /var/cache/snapd/ and /snap were autoremoved, and ~/snap only had a few things left in it. It's gone.
                  Now, the only thing I was worried about was all the wine entries.
                  It turns out, as I suspected, it was a "fake" wine; all my wine stuff still works just fine - even after a reboot.

                  My lsblk looks nice and clean, my systemd-analyze looks very happy, and it seems nothing is snapping at me anymore.
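                  (In case anyone wants to double-check their own purge, a rough sketch:)

                  Code:
                  dpkg -l | grep -i snap   # ideally empty, or only unrelated libs like libsnappy
                  lsblk                    # should show no loop lines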
                  Last edited by Don B. Cilly; Jul 26, 2019, 03:40 AM.



                    #10
                    Playing around with virtual machines, I decided to give Ubuntu 19.04 a spin.
                    Coming from Kubuntu, it's like jumping off a Ducati 916 S4 (my bike) and onto a Piaggio Zip 50 (my son's).

                    But the thing (related to this topic) is:
                    Code:
                    df -h
                    Filesystem      Size  Used Avail Use% Mounted on
                    udev            2.0G     0  2.0G   0% /dev
                    tmpfs           395M  1.4M  393M   1% /run
                    /dev/sda1       217G  5.1G  201G   3% /
                    tmpfs           2.0G     0  2.0G   0% /dev/shm
                    tmpfs           5.0M  4.0K  5.0M   1% /run/lock
                    tmpfs           2.0G     0  2.0G   0% /sys/fs/cgroup
                    /dev/loop0       90M   90M     0 100% /snap/core/6673
                    /dev/sda2       923M   62M  798M   8% /boot
                    /dev/sda4       265G   72M  252G   1% /home
                    /dev/loop1       54M   54M     0 100% /snap/core18/941
                    /dev/loop2      152M  152M     0 100% /snap/gnome-3-28-1804/31
                    /dev/loop3      4.2M  4.2M     0 100% /snap/gnome-calculator/406
                    /dev/loop4       15M   15M     0 100% /snap/gnome-characters/254
                    /dev/loop5      1.0M  1.0M     0 100% /snap/gnome-logs/61
                    tmpfs           395M   56K  395M   1% /run/user/1000
                    /dev/loop6      3.8M  3.8M     0 100% /snap/gnome-system-monitor/77
                    /dev/loop7       36M   36M     0 100% /snap/gtk-common-themes/1198
                    /dev/loop8       55M   55M     0 100% /snap/core18/1074
                    /dev/loop9       89M   89M     0 100% /snap/core/7396
                    /dev/loop10      15M   15M     0 100% /snap/gnome-characters/296
                    /dev/loop11     3.8M  3.8M     0 100% /snap/gnome-system-monitor/100
                    /dev/loop12      43M   43M     0 100% /snap/gtk-common-themes/1313
                    /dev/loop13     150M  150M     0 100% /snap/gnome-3-28-1804/71
                    On this machine, I only installed one thing: conky.
                    Now, I really hope that Kubuntu implements an opt-in policy for snaps, the sooner the better.
                    Ubuntu has become basically unusable anyway.



                      #11
                      I'd like to revisit this with different assumptions.
                      First off, I have an Ubuntu 20.04, and - it might be my impression, but from what I've seen - it's a lot more usable than 19.04.
                      Now, of course, df -h looks much like the above.
                      But then, df -h on a snapless machine nowadays still looks like:

                      Code:
                      Filesystem      Size  Used Avail Use% Mounted on
                      udev            3.9G     0  3.9G   0% /dev
                      tmpfs           793M  1.4M  792M   1% /run
                      /dev/sdc2        96G   14G   78G  15% /
                      tmpfs           3.9G   64M  3.9G   2% /dev/shm
                      tmpfs           5.0M  4.0K  5.0M   1% /run/lock
                      tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
                      /dev/sdc4        99M  6.7M   92M   7% /boot/efi
                      /dev/sdc6       198G   25G  164G  13% /home
                      tmpfs           793M   20K  793M   1% /run/user/1000


                      So, just as I can make an alias to make it look like this (with df -h -x tmpfs)

                      Code:
                      Filesystem      Size  Used Avail Use% Mounted on
                      udev            3.9G     0  3.9G   0% /dev
                      /dev/sdc2        96G   14G   78G  15% /
                      /dev/sdc4        99M  6.7M   92M   7% /boot/efi
                      /dev/sdc6       198G   25G  164G  13% /home


                      (no, you can't get rid of the udev, but, OK)
                      I can make it look the same on a loopy filesystem with df -h -x tmpfs -x squashfs (-x filters by filesystem type, and the snap mounts are all squashfs).

                      So that takes care of one little piggy.

                      About the impact of snaps on boot:
                      I tried installing some snaps on Ubuntu. They really seem to have no impact on boot times - whatever systemd-analyze blame says.

                      Using systemd-analyze critical-chain
                      (which seems to report actual times more accurately):

                      Code:
                      graphical.target @13.253s
                      └─multi-user.target @13.253s
                        └─kerneloops.service @13.169s +82ms
                          └─network-online.target @13.135s
                            └─NetworkManager-wait-online.service @5.897s +7.237s
                              └─NetworkManager.service @5.273s +540ms
                                └─dbus.service @5.257s
                                  └─basic.target @5.176s
                                    └─sockets.target @5.173s
                                      └─snapd.socket @5.162s +7ms
                                        └─sysinit.target @5.088s
                                          └─swap.target @5.084s
                                            └─dev-disk-by\x2duuid-c988060d\x2dc444\x2d430e\x2db5cd\x2d4ea65aa54446.swap @5.028s +48ms
                                              └─dev-disk-by\x2duuid-c988060d\x2dc444\x2d430e\x2db5cd\x2d4ea65aa54446.device @4.964s
                      On the Ubuntu machine, whether I have 8 or 15 snaps installed, I get the same times.
                      Now, on Kubuntu/neon, I get graphical.target @6.569s - or thereabouts - but I've made other improvements there, such as systemd-networkd instead of NM.

                      Thing is... snaps' impact on boot times? From what I can see, none.
                      Snaps' impact on filesystem checks? Horrible, but they are getting messier and messier by the day anyway, and it's easily fixable.
                      It seems I was deceived by systemd-analyze blame... quite a bit.
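                      (To be fair, blame lists per-unit activation times, and most units start in parallel, so adding them up never made sense; the wall-clock picture should come from plain systemd-analyze time:)

                      Code:
                      ~$ systemd-analyze time
                      # prints something like:
                      #   Startup finished in Xs (kernel) + Ys (userspace) = Zs
                      #   graphical.target reached after Ys in userspace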

                      Now, my assumptions here might be totally wrong - again - but from what I'm guessing, snaps don't seem to be that noxious after all...



                        #12
                        From what I have seen, at least snapd itself has an impact on boot times (much more relevant for me on my laptop, as it has a hard disk).
                        Sorry, I have no numbers for you, but it boots several seconds faster after purging snapd.
                        Debian KDE & LXQt • Kubuntu & Lubuntu • openSUSE KDE • Windows • macOS X
                        Desktop: Lenovo ThinkCentre M75s • Laptop: Apple MacBook Pro 13" • and others

                        get rid of Snap script (20.04 +) • reinstall Snap for release-upgrade script (20.04 +)
                        install traditional Firefox script (22.04 +) • install traditional Thunderbird script (24.04)



                          #13
                          Originally posted by Don B. Cilly View Post
                          ...
                          Snaps' impact on filesystem checks? Horrible, but they are getting messier and messier by the day anyway, and it's easily fixable.
                          Just in case:

                          Code:
                          df -hx squashfs
                          sudo fdisk -l /dev/sd*
                          mount -t nosquashfs,nocgroup

                          Plus https://www.youtube.com/watch?v=gVZOBgTDJWc
                          Kubuntu 20.04



                            #14
                            Thanks. alias dfh='df -hx squashfs -x tmpfs' is good enough for the moment.
                            On Ubuntu 20.04:

                            Code:
                            ~$ df -h
                            Filesystem      Size  Used Avail Use% Mounted on
                            udev            3,9G     0  3,9G   0% /dev
                            tmpfs           793M  1,8M  791M   1% /run
                            /dev/sdc7        41G  8,4G   31G  22% /
                            tmpfs           3,9G   50M  3,9G   2% /dev/shm
                            tmpfs           5,0M  4,0K  5,0M   1% /run/lock
                            tmpfs           3,9G     0  3,9G   0% /sys/fs/cgroup
                            /dev/loop0      1,2M  1,2M     0 100% /snap/ark/6
                            /dev/loop2       55M   55M     0 100% /snap/core18/1754
                            /dev/loop1       94M   94M     0 100% /snap/core/9066
                            /dev/loop3       55M   55M     0 100% /snap/core18/1705
                            /dev/loop4      128K  128K     0 100% /snap/forkstat/206
                            /dev/loop5      241M  241M     0 100% /snap/gnome-3-34-1804/24
                            /dev/loop6      256M  256M     0 100% /snap/gnome-3-34-1804/33
                            /dev/loop7       63M   63M     0 100% /snap/gtk-common-themes/1506
                            /dev/loop9       68M   68M     0 100% /snap/sublime-text/85
                            /dev/loop8      896K  896K     0 100% /snap/konquest/46
                            /dev/loop10      50M   50M     0 100% /snap/snap-store/433
                            /dev/loop11      28M   28M     0 100% /snap/snapd/7264
                            /dev/loop12      50M   50M     0 100% /snap/snap-store/454
                            /dev/loop13     261M  261M     0 100% /snap/kde-frameworks-5-core18/32
                            /dev/loop14     291M  291M     0 100% /snap/kde-frameworks-5-qt-5-14-core18/4
                            /dev/sda1       451M   57M  394M  13% /boot/efi
                            tmpfs           793M   32K  793M   1% /run/user/1000
                            
                            ~$ dfh
                            Filesystem      Size  Used Avail Use% Mounted on
                            udev            3,9G     0  3,9G   0% /dev
                            /dev/sdc7        41G  8,4G   31G  22% /
                            /dev/sda1       451M   57M  394M  13% /boot/efi
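                            (To make the alias stick, assuming bash, something along these lines:)

                            Code:
                            # append the alias to ~/.bashrc and load it into the current shell
                            echo "alias dfh='df -hx squashfs -x tmpfs'" >> ~/.bashrc
                            source ~/.bashrc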



                              #15
                              Anyway. I timed the boot on Ubuntu (with the clock on my wall) - with 14 snap loops - after switching to systemd-networkd from NM. Some 25 seconds.
                              I timed it on Neon. No snaps, no NM. Some 26.
                              On Neon, systemd-analyze says: graphical.target reached after 6.128s in userspace.
                              On Ubuntu, 9.305s.
                              Which probably don't mean a thing... or thereabouts.

                              All that's left to do is remove snapd from Ubuntu and time it. I'll do that sooner or later.

