Moving to K20

This topic is closed.

    #16
    Setting aside PikeyFS er... BTRFS for the moment...
    I tried sharing /home between K20 and neon.
    Separate menu entries for cilly apps are easy.

    Of course, as soon as I opened Firefox in neon, "Zis iz an old fersion of Firefox. It vill create havoc mit all your zistem. Create new profile?".
    No thanks, updated (I hadn't for a week or so, using K20), 'saul good, man.

    Conky... of course the startup script has to be the same. So I scripted the script... ;·)
    Code:
    rel=$(lsb_release -a 2>/dev/null | awk '/ID/ {print $3}')
    case $rel in
        neon)   cd /ms/.conky       ; conky -c ./.conkyrc & conky -c ./conkyrc2 & ;;
        Ubuntu) cd /home/not/.conky ; conky -c ./.conkyrc & conky -c ./conkyrc2 & ;;
    esac
    /ms is my "ἀποικία" (home away from home ;·) - where I keep the stuff I only want on that distro (very little)
    So each release starts its own conky.

    So now, (after deleting the backup home folders), I have

    [Attachment: hduK20.png]
    On K20, and

    [Attachment: hduneon.png]
    on neon.

    Which means I'll be able to recover quite a lot of space from both /roots if I need to :·)

    [EDIT] Still, neon is now "moving ahead" on Plasma (as expected).

    [Attachments: conkneon.png, conkK20.png]
    Last edited by Don B. Cilly; May 12, 2020, 09:55 AM. Reason: Amended code. Works better now :·)



      #17
      Tch. I'm back on neon. Why? I... just don't know.
      I've been booting this, that and the other lately. Sometimes I boot "back" (for actual use) to K20, sometimes to neon.

      In the end, I tended to boot back more and more to neon, and, for the last few days, it's all I do.
      They're almost exactly the same, they share the same /home, they behave... the same.

      [Attachment: _nonso.gif]



        #18
        I dabbled with btrfs for a while, but had horrendous problems, reverted to EXT4. BTRFS is not yet ready for general use IMHO.

        Be very careful with your critical files, keep them on a good backup if you do make the move.

        Cheers, Tony.



          #19
          Originally posted by barfly
          I dabbled with btrfs for a while, but had horrendous problems, reverted to EXT4. BTRFS is not yet ready for general use IMHO.

          Be very careful with your critical files, keep them on a good backup if you do make the move.

          Cheers, Tony.
          Being careful of critical files is a must regardless of which filesystem one uses. I began using BTRFS when the K16.04 Beta was released and I've put it through its paces: trying it on a single drive as a singleton, on two and three drives as RAIDs, and taking them back to a singleton on sda with sdb and sdc being used as archives.

          When using BTRFS it is wise NOT to use /dev/sdx to mount drives because those drive letters can change at boot up without notice. That's why the best practice is to use the UUID designation:

          mount /dev/disk/by-uuid/ce2b5741-c01e-4b3d-b6ba-401ad7f7fcdf /mnt

          blkid can be used to retrieve UUID values:
          Code:
          root@j:~# blkid
/dev/sda1: UUID="ce2b5741-c01e-4b3d-b6ba-401ad7f7fcdf" UUID_SUB="e4e0902f-6a80-47cd-a53a-571632f78cc5" TYPE="btrfs" PARTUUID="e00dfb49-01"
          /dev/sdb1: LABEL="sdb1" UUID="17f4fe91-5cbc-46f6-9577-10aa173ac5f6" UUID_SUB="4d5f96d5-c6c6-4183-814b-88118160b615" TYPE="btrfs" PARTUUID="5fa5762c-9d66-4fdf-ba8f-5c699763e636"
          /dev/sdc1: UUID="e84e2cdf-d635-41c5-9f6f-1d0235322f48" UUID_SUB="c78731d5-d423-4546-9335-f9751c148174" TYPE="btrfs" PARTUUID="dc864468-01"
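          The same idea applies in /etc/fstab. A sketch, not my actual file - the UUID is the sda1 value from the blkid output above, and @/@home are Kubuntu's default subvolumes:

```text
# /etc/fstab sketch - mount by UUID, never by /dev/sdX.
# UUID is the sda1 value from blkid above; @ and @home are Kubuntu's defaults.
UUID=ce2b5741-c01e-4b3d-b6ba-401ad7f7fcdf  /      btrfs  defaults,subvol=@      0  1
UUID=ce2b5741-c01e-4b3d-b6ba-401ad7f7fcdf  /home  btrfs  defaults,subvol=@home  0  2
```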
          I've found that TimeShift can destroy your subvolumes if you delete the package without first deleting all the snapshots you've created with it. Snapper's problem is that its snapshots are stored in directories inside the subvolume itself. In Kubuntu there are two primary subvolumes: @ and @home. I've experimented with creating others, like @data, and mounting them using fstab, but in the end I just remained with the two. The snapper snapshots live inside @ and @home, so if one of them gets corrupted you cannot mount it; you have to replace it with a previous snapshot that does mount, losing any changes made since that snapshot was taken.

          I do all my snapshots, archiving and rollbacks manually with sudo from a Konsole. I often experiment with my installation and, when I'm done, if I don't like the results I roll back to a snapshot pair taken just before the experiment. Oh, BTW: always take snapshots in pairs (@ and @home) at the same time. Mixing up snapshots can result in disconnected packages.
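          Taking the pair together can be a two-liner from a Konsole. A sketch, not a prescription - /mnt and /mnt/snapshots are illustrative mount points, and the echo keeps it a dry run (drop it to actually take the snapshots):

```shell
# Snapshot @ and @home together so they never get out of step.
# /mnt (the mounted top-level volume) and /mnt/snapshots are assumed paths.
stamp=$(date +%Y-%m-%d_%H%M)
for sv in @ @home; do
    echo sudo btrfs subvolume snapshot -r "/mnt/$sv" "/mnt/snapshots/${sv}_${stamp}"
done
```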

          Another "gotcha" is creating too many snapshots. Even though snapshots, when first created, contain only metadata and are essentially empty, as you use your system and make changes the snapshots begin to fill up. Let's say you have a 500GB SSD with 150GB of data and you snapshot @ and @home every hour. In a 12-hour day that is 24 snapshots: 168 in a week, 672 in a month, 8,064 in a year. All the while you are doing updates, installs, uninstalls, adding and removing files and folders, etc. Eventually your oldest snapshots begin to fill up, and a pair of them will hold about 150GB, leaving you 200GB. A short while later the next oldest pair fills up and you have only 50GB left. 10%. Your drive will begin to slow down, which you will interpret as some sort of failure. Then, when another pair attempts to fill up, your filesystem runs out of room. Of course, this could happen much earlier if your metadata runs out of space, even if you have 200GB unallocated.
          Code:
          # btrfs filesystem usage /mnt
          Overall:
              Device size:                 465.76GiB
              Device allocated:            108.04GiB
              Device unallocated:          357.72GiB
              Device missing:                  0.00B
              Used:                         92.19GiB
              Free (estimated):            371.77GiB      (min: 371.77GiB)
              Data ratio:                       1.00
              Metadata ratio:                   1.00
              Global reserve:              173.55MiB      (used: 0.00B)
          
          Data,single: Size:105.01GiB, Used:90.96GiB (86.62%)
             /dev/sda1     105.01GiB
          
Metadata,single: Size:3.00GiB, Used:1.23GiB (41.06%)
             /dev/sda1       3.00GiB
          
          System,single: Size:32.00MiB, Used:16.00KiB (0.05%)
             /dev/sda1      32.00MiB
          
          Unallocated:
             /dev/sda1     357.72GiB
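          Incidentally, the snapshot counts above multiply out exactly (a quick shell check, nothing more):

```shell
# The hourly @/@home pair arithmetic from above: 2 snapshots an hour, 12 hours a day.
per_day=$((2 * 12))           # 24
per_week=$((per_day * 7))     # 168
per_month=$((per_week * 4))   # 672, taking a 4-week month
per_year=$((per_month * 12))  # 8064
echo "$per_day $per_week $per_month $per_year"
```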

          In four years I have yet to experience a failure of any kind using BTRFS. I find it to be extremely stable, fast and reliable.
          "A nation that is afraid to let its people judge the truth and falsehood in an open market is a nation that is afraid of its people.”
          – John F. Kennedy, February 26, 1962.



            #20
            Originally posted by barfly
            I dabbled with btrfs for a while, but had horrendous problems, reverted to EXT4. BTRFS is not yet ready for general use IMHO.
            When was that, I wonder? With openSUSE making btrfs its default file system, I'm hoping that progress to stability will be quick.
            Regards, John Little



              #21
              Originally posted by barfly
              I dabbled with btrfs for a while, but had horrendous problems, reverted to EXT4. BTRFS is not yet ready for general use IMHO.

              Be very careful with your critical files, keep them on a good backup if you do make the move.

              Cheers, Tony.
              Obviously making backups is highly recommended regardless of file system. I can't imagine what you did or encountered that rose to "horrendous problems." Care to share any actual information?

              I've been using BTRFS daily since tools version 0.19 - like 2007 or 2008. Not only have I not had ANY problems, much less horrendous ones, I've not lost any data. Of course, I'm also methodical and don't attempt to enable or use features that aren't ready for use. For example, for 3-4 years RAID5/6 was known to have issues. Even though advice not to use it except for testing was published all over the 'net, people still tried using it and then blamed the file system for data corruption. Quota groups were another feature that for a time was not recommended for general use. I believe both are now considered usable, although I have no real need for either so I haven't tried.

              I recollect losing a full file system only twice since switching to Linux in 1998 - once when using MDADM RAID and once when an unknown occurrence destroyed an EXT4 file system. With no way to recover file system info from EXT, the recovery tools could only save 10,000 files with no filenames or dates. Not much good. I've never had any issue using RAID with BTRFS and never lost a file. I've fixed MANY a mucked install from a bad update or user stupidity in 20 seconds or less because I had snapshots, and recovered many a deleted file from them. I fully automated my snapshots and backups last year so I don't have to think about it until I need it.

              Sorry you had a bad experience but the ancient EXT file system will never compete with BTRFS on built-in features, file security, or ease of use.

              Please Read Me



                #22
                Originally posted by jlittle
                When was that, I wonder? With openSUSE making btrfs its default file system, I'm hoping that progress to stability will be quick.
                You know of some instability? Or did you mean it in another way?

                BTRFS format and file structure has been stable for years - as in set and not going to change. BTRFS-TOOLS is still under development but AFAIK it's mostly to complete planned feature enhancements and improve performance.




                  #23
                  Maybe this post should be in the btrfs forum...
                  Originally posted by oshunluvr
                  You know of some instability? Or did you mean it in another way?
                  Having thought about it for a while, I think it's the perception of instability I want btrfs to leave behind. There's been disdain for btrfs; I've encountered phrases like "not ready for production", "buggy", and even "I wouldn't touch btrfs with a twenty foot barge pole". I think that negativity is nonsense, but it became the received opinion for those in the know.

                  For my use of btrfs, with only one fastish SSD and one larger disc drive, so no multivolume or RAID considerations, btrfs has a lot of advantages, and has performed well, except once when it totally borked itself. On that occasion, I lost no data at all, because initially the problem didn't affect reading and at the first sign of trouble I made a backup of the affected subvolume, and ensured the backups of other subvolumes were current.

                  Originally posted by oshunluvr
                  BTRFS format and file structure has been stable for years - as in set and not going to change. BTRFS-TOOLS is still under development but AFAIK it's mostly to complete planned feature enhancements and improve performance.
                  My experience with that failure was that the tools for fixing a btrfs with problems, and the code in the kernel for handling problems, could have been better. Mounting the borked btrfs drove a kernel thread (thus unkillable) into a loop, write bound, running indefinitely, even when booting gentoo with an old kernel. That shouldn't be possible, at the very least the mount should have timed out. Now, the original problem was most likely caused by a failure to resume from suspension; the system had had a few of those, caused by flaky firmware (the only way to reset the motherboard and bring the computer back to life is to pop the battery for a while) so I didn't blame btrfs for the trouble, but the tools for recovering from the slight corruption destroyed the fs.
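                  For anyone hitting the same wall: the usual read-only last resorts are mounting with the usebackuproot option or copying files out with btrfs restore, neither of which writes to the damaged filesystem. A sketch - the device name and destination are illustrative, and the echo makes it a dry run:

```shell
# Read-only salvage attempts on an unmountable btrfs (device name illustrative).
echo sudo mount -o ro,usebackuproot /dev/sdb1 /mnt     # try an older tree root
echo sudo btrfs restore /dev/sdb1 /home/me/salvage     # copy files out, no writes
```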

                  This is the soapbox, so ...

                  I don't understand why ZFS is considered so cool. I suspect that it has the mana from being used by the big iron, and the guys who really know what they're doing use it. For me, I'm allergic to Oracle Corp, and restrictions on the licence that keep it out of the kernel are a red flag. But, I'm interested, and read what people say about it, and watch the videos. I've concluded that ZFS was designed for hard drives, to get speed and reliability by using lots of them and lots of RAM, and generally the design is over 20 years old and before SSDs. IMO this oldness of design underlies the problems with using DM/SMR (er, device managed shingled magnetic recording, I think) drives with ZFS recently; on resilvering ZFS is getting some kind of quadratic behaviour with SMR drives, in one case taking 9 days to rebuild a 2TB drive (it takes about an hour to write 2 TB via SATA 3).

                  IMO for personal use ZFS makes no sense, bringing in needless complexity and those hardware requirements.

                  I think that if users like us used ZFS like we use btrfs it would be less reliable than btrfs, if only because we wouldn't configure it properly, or give it enough resources, or something like that.
                  Regards, John Little



                    #24
                    Good points John.

                    There have been some past problems with BTRFS, most of which have been hammered out: Quotas, RAID 5/6, and mounting a degraded RAID 1. I rode along from day one - first on a test bed for more than a year, then daily. The whole time I kept a watch on the issues and stayed away from them, which really wasn't hard to do. I had been using MDADM RAID and lost a few arrays over the years (backups were made of course) but I haven't lost a single BTRFS filesystem, RAID or otherwise and I use BTRFS a LOT.

                    I agree with your opinion of ZFS exactly.




                      #25
                      I still don't, or more accurately, am not able to, see a compelling reason for btrfs etc, for 'normal' desktop users.
                      I am probably lacking the eli5 information to make it clear, and the ability to install an OS without needing to do things manually. I know the last part is of course related to the installer.



                        #26
                        I guess that depends on what you view as "normal". Normal users don't need snapshots? Easy backups? Don't need the file security of a Copy-On-Write file system?




                          #27
                          Well, we normal users do have snapshots and pretty easy backups with TimeShift and the like.

                          As to the file security of COWs (and the like)... I'm not sure, but I find the current EXT4 code reassuring enough ;·)

                          Code:
                                   (__) 
                                   (oo) 
                             /------\/ 
                            / |    ||   
                           *  /\---/\ 
                              ~~   ~~



                            #28
                            Hmmm, <insert joke here about who is a normal user and who isn't>





                              #29
                              Originally posted by oshunluvr
                              I guess that depends on what you view as "normal". Normal users don't need snapshots? Easy backups? Don't need the file security of a Copy-On-Write file system?
                              A normal user shouldn't have to deal with any real or theoretical learning curve to install, use, and manage this on an Ubuntu system. An extra option in the installer, similar to the drive encryption option, with some simple defaults for snapshots etc. set up out of the box? Assuming this is not already the case.

                              I am NOT knocking the file system at all, just that it is currently (imo) not readily set up for most average people to use it a bit more easily. Less reliance on the terminal, maybe.
                              Last edited by claydoh; Jul 01, 2020, 04:18 PM.



                                #30
                                Originally posted by oshunluvr
                                Hmmm, <insert joke here about who is a normal user and who isn't>


                                Ahh ya beat me to it lolololol!
