Proposal for an improved software distribution using BTRFS


    Proposal for an improved software distribution using BTRFS

    BTRFS fanboys might love this..

    Lennart Poettering, systemd developer, has a proposal for distributing system images using BTRFS:

    http://0pointer.net/blog/revisiting-...x-systems.html

    It's an interesting read. Key points:
    • aims to replace the need for end users to use packaging tools
    • core OS, root filesystem, desktop environments, and development tools can be distributed separately as different BTRFS snapshots
    • apps can be built to run against a specific snapshot containing a known set of libraries instead of having a package with a huge list of dependencies. Since the versions of each library are known, the upstream developer is able to do much more targeted testing instead of leaving this to distributors.
    • multiple versioned snapshots would be installed at once, allowing the system to select the most appropriate snapshot to execute an application with
    • not as wasteful as it sounds, because BTRFS deduplicates and can send incremental diffs, so identical files aren't stored twice on disk
    • allows end users to get the most up to date versions of a developer's software with minimal fuss
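    For a concrete picture, the on-disk layout might end up as a set of versioned subvolumes, something like the sketch below (all names are invented here, loosely modelled on the versioned naming in the blog post; requires root on a btrfs pool):

    ```shell
    # Top level of a hypothetical btrfs pool holding a composed system.
    mount /dev/sda2 /mnt/pool

    # One read-only, versioned snapshot per distributable unit:
    btrfs subvolume list /mnt/pool
    # ID 257 ... path usr:myos:x86_64:21.0           <- core OS tree
    # ID 258 ... path usr:myos:x86_64:21.1           <- update, shares unchanged extents
    # ID 259 ... path runtime:mydesktop:x86_64:3.16  <- desktop environment runtime
    # ID 260 ... path app:someeditor:x86_64:1.2      <- app built against that runtime
    # ID 261 ... path home:sam:1000                  <- user data, never distributed
    ```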


    I'm still mulling it over, but I think it sounds like there could be some big advantages to doing things this way, mainly in terms of reducing the huge amount of effort that goes into maintaining all of the different Linux distributions - this makes installing a new distro as simple as swapping out one of the snapshots for another!

    It's worth noting that this isn't ready yet, it's just an aspiration. What do you think?
    samhobbs.co.uk

    #2
    It might sound good in principle, but I have a few questions:

    1. Are you going to be able to gain access to the source code of any open source application using this system of installing?
    One of the best things about open source is that anyone can obtain the source code and scrutinise it for any malicious code or to fix bugs.

    2. If this system becomes the default way of installing applications, that means everyone will have to install and use BTRFS. But what if you don't want to use, or have no need for, that file system?
    GNU OS (more commonly known as Linux) has always been about freedom to choose what you want to have installed and the freedom to run apps as you want to use them. With this system it sounds like no more freedom of choice.

    Comment


      #3
      Originally posted by NickStone
      Are you going to be able to gain access to the source code of any open source application using this system of installing?
      Good question, and from what I understand the answer is yes. In the article he says that this method would not completely replace package managers, which would still be used by developers when generating snapshots. All of the packaging tools would still be there, so you could still do apt-get source foo or whatever.

      If this system becomes the default system for installing applications that means everyone will have to install and use BTRFS, but what if you don't want to use or have any need to use that file system?
      ...
      With this system it sounds like no more freedom of choice.
      Well, developers would still produce tarballs of source or use git or whatever during development. I think what Poettering is getting at is that they could also generate BTRFS snapshots built against a known set of libraries if they wanted more control over the environment their software was designed to run in. This doesn't make it more difficult for distribution maintainers to generate packages if they want to, since at present they still have to do all the work of generating .rpm or .deb files from tarballs! Instead, they have the option of using the developer's snapshot directly, without having to re-package everything each time.

      Even if some distros didn't offer .debs, I'm sure there would still be a few distros who made it their thing to keep the old way (like Devuan did over systemd); as you say, Linux is nothing if not diverse!

      I reckon a lot of people who just want to install applications quickly and easily would just stick with whatever the distro default is - if your distro used this method of software distribution then you would have a use for BTRFS - you'd be gaining the stability that comes with the extra testing the upstream devs have been able to do, so your software should be less buggy (in theory! Whether it would work in practice is another thing entirely...).

      Turning the question on its head, what are the advantages of, say, EXT4 over BTRFS to the average user? I love to tinker, but I've not seen the need to change the filesystem yet, EXT4 works fine and I cba to change it. Using BTRFS for backups is the only reason I can think of to switch filesystems, and that's an argument for BTRFS over EXT4, not the other way around! Can you think of any?

      I think the biggest reason people would resist this is that you'd be getting stuff signed by a huge number of different devs (since the installed files would come directly from developers), instead of everything being uploaded to a build farm as a source package and turned into a .deb signed by Canonical or Debian. Poettering doesn't think people would care about this:

      The classic Linux distribution scheme is frequently not what end users want, either. Many users are used to app markets like Android, Windows or iOS/Mac have. Markets are a platform that doesn't package, build or maintain software like distributions do, but simply allows users to quickly find and download the software they need, with the app vendor responsible for keeping the app updated, secured, and all that on the vendor's release cycle. Users tend to be impatient. They want their software quickly, and the fine distinction between trusting a single distribution or a myriad of app developers individually is usually not important for them.
      ...but I think he's wrong. Plenty of Linux users think this is important, and it would likely cause a huge stink (but doesn't every project of his? lol.)
      samhobbs.co.uk

      Comment


        #4
        Not knocking the concept, as it is interesting, to say the least.

        1) All developers only release stable, quality, working, and useful software every time, all the time.
        1b) support and bugs: a distro takes all these disparate bits and molds things into a more solid bit. So now the upstream has to do that (this could be a plus or a minus)
        2) What will the 249958837 respins that mainly change wallpaper and font do
        2a) heck, what will this do to the concept of a distro in general?
        2b) This raises the bar of entry for those looking to be able to contribute to a project (ie how many coders got their feet wet via packaging and other distro-related tasks)
        4) how the heck does the end user deal with all this? Because you know any UI for this will be terrible





        Sorry (NOT!) for being snarky lately, this busted wrist and being freshly un-married have unbalanced me somehow or another

        Comment


          #5
          Originally posted by claydoh
          ...and being freshly un-married
          Fathers and mothers, lock up your daughters!

          Comment


            #6
            I support such a distro.

            But also wonder, a snapshot will include everything, including the user name and password...
            But maybe I don't know enough of the concept.

            Comment


              #7
              1) All developers only release stable, quality, working, and useful software every time, all the time.
              Genuinely unsure if that's sarcasm from a long-time linux user...? The good ones do! There are plenty of hobbyists that don't though (I'm thinking of myself here...there's a steep learning curve)

              1b) support and bugs: a distro takes all these disparate bits and molds things into a more solid bit. So now the upstream has to do that (this could be a plus or a minus)
              Yeah - on the plus side it should make patches more portable and get more fixes in upstream, since you'd expect multiple distros to be running the same sets of libraries, which reduces the number of variables to deal with. On the other hand, the upstream maintainers may have different priorities to distros and be uncooperative, in which case I guess we aren't really any worse off than we are today.

              2) What will the 249958837 respins that mainly change wallpaper and font do
              2a) heck, what will this do to the concept of a distro in general?
              Firstly, LOL. Not fond of these guys then? I guess they could all just release a slightly modded image of one part of the parent distro, I think you're more likely to understand this than me because you've used BTRFS, but I think the snapshots can be mounted on top of each other...?

              2b) This raises the bar of entry for those looking to be able to contribute to a project (ie how many coders got their feet wet via packaging and other distro-related tasks)
              Kind of true. I've found packaging to be really quite difficult as it is. Making a .deb would still be part of the process though, and if you wanted to distribute some small app you could still do that without creating a snapshot.

              4) how the heck does the end user deal with all this? Because you know any UI for this will be terrible
              Can't agree enough - if there was a UI, it would almost certainly be terrible. BUT, will there even be a UI? I would expect it to be handled by systemd during the boot process

              Sorry (NOT!) for being snarky lately, this busted wrist and being freshly un-married...
              Sorry to hear that pal, hopefully the two are unrelated (not a wall-puncher, are you?). I can't say I noticed any difference in snarkiness tbh... what's the difference between ∞ and ∞+1?

              Originally posted by Teunis
              But also wonder, a snapshot will include everything, including the user name and password...
              There's a separate snapshot for home, so the developer could distribute an exact copy of (parts of) his system without leaking any private data, assuming there isn't any in /usr or wherever. You'd have to be careful not to distribute config files with passwords in, or SSL keys and other bits if you were making a snapshot of /etc though.
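              To illustrate that caution: before publishing a tree as a snapshot, you'd want at least a naive scan for secrets. Everything below (directory, file names, patterns) is made up for the demo; a real check would want a proper secret scanner.

              ```shell
              # Naive pre-publication scan for obvious secrets in a tree that is
              # about to become a shared snapshot. Demo data only.
              TREE=demo-etc

              # Build a tiny stand-in for /etc:
              mkdir -p "$TREE/ssl"
              printf '%s\n' '-----BEGIN PRIVATE KEY-----' > "$TREE/ssl/server.key"
              printf 'password=hunter2\n' > "$TREE/app.conf"
              printf 'port=8080\n' > "$TREE/harmless.conf"

              # List files that look like they contain key material or passwords:
              grep -rlE 'BEGIN (RSA |EC )?PRIVATE KEY|password=' "$TREE"
              ```

              Anything that scan flags stays out of the published snapshot.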
              samhobbs.co.uk

              Comment


                #8
                Originally posted by Teunis
                Fathers and mothers, lock up your daughters!

                Lol, who said anything about being single? :cool:

                Comment


                  #9
                  Very interesting. I can't help thinking that Lennart Poettering, being a Red Hat person, does not appreciate how well Debian's APT has worked and still works.

                  <cynic>the one ring (systemd) has not bound the three (Arch, Ubuntu, whatever); let's try again</cynic>
                  Regards, John Little

                  Comment


                    #10
                    Package delivery has not been an area I've spent much time learning about. As a user, one of the things I've learned is I prefer DEB over RPM and binary builds are just a pain in the arse. I think the biggest plus to the concept is the total removal of the decision about which package manager to use if you're a distro or having to learn to package for every variation if you're a developer.

                    As far as the btrfs end of it, I can tell you a half-a-dozen or more reasons why you should be using btrfs and at least a half-a-dozen more ways btrfs could be further implemented that would remove or reduce even the casual user's workload and problems.

                    I think for many users - me included - he's right in that they don't care as much about the who or how and more about the when. Moving away from a fragmented ecosystem of many ways to deliver a program to a single universal delivery system can't help but speed things up in the development cycle and therefore will increase the speed at which the users will get their "fix" of the latest release. Everyone wins. There are many Linux users that complain about the lack of conformity distro-to-distro and package delivery is a big one. Frankly, I care less as long as it works.

                    As a developer, how cool to be able to have 20 distros on a single btrfs filesystem and one command sends a package to them all? Or to not go through the whole package+re-package times number-of-package-systems every update. One command, one package, everyone can install it, done.

                    As far as the "removal" of choice with regards to hard drive formats - is this really an issue? Do the vast majority of computer users care if it's EXT4, BTRFS, JFS, ReiserFS, or whatever? I don't think so. They do care about speed, versatility, reliability, and functionality.

                    A single file system that offers native capabilities like backup and restore, rollback, multi-device support - both RAID (any level) and JBOD - the ability to copy an entire drive's data to another drive (even over a network), removal of the need to partition (mostly, at least), and all of that while the filesystem is still mounted and in use, including adding or removing devices (drives) - BTRFS does this and does it now. The real question is why aren't you all using BTRFS?

                    The problems I have with btrfs are almost all related to lack of installer and bootloader support (I can't imagine why bootloaders aren't built into our motherboards, but that's a whole 'nother gripe). The non-casual or "expert" user won't care much. We'll still go through our machinations to get where we want to be. I, for one, have to manage the fact that many distros either poorly support btrfs installation or don't support it at all. Much in the same way I have to work around GRUB's shortcomings.

                    My drive format history has followed the capabilities of the filesystems and how they fit my needs. While I agree that choice is a huge part of what we love about Linux, endless and needless choice doesn't always have to be embraced when a clear "best choice" exists. There isn't a single thing better about EXT4 over BTRFS that I am aware of. Heck, even the maintainers of EXT4 acknowledge it's a stop-gap format on the way to something better. I realize I've strayed a bit off the OP, but it goes to the concept of making the filesystem something that you benefit from instead of just manage.

                    To the technical points of how it could be done: yes, you could merge a snapshot into a subvolume and, as it's been pointed out, without needless multiple copies of libraries. I'm not a developer, but I assume a method of dependency control would have to be included in the snapshot delivery. More specifically, snapshots are "copies" of a subvolume sort of frozen in time, so the developer need only keep his program files and its dependencies in a separate subvolume and then nothing else from his system would be included in the delivery. The cool part would be version control. The dev gets ready to release, makes a snapshot, sends it out, then starts the next cycle of bug-fixes, enhancements, whatever. When it's time to release, make a new snapshot and send it, done. For as long as the dev wished, he would have all the versions saved as snapshots with no unnecessary redundant copies of any files that didn't change version-to-version. Neat.
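                    In btrfs-progs terms, that release cycle might look like the sketch below (subvolume names are invented; btrfs send -p genuinely ships only the blocks changed since the parent snapshot, which is where the "no redundant copies" property comes from):

                    ```shell
                    # Developer keeps the app and its bundled deps in one subvolume,
                    # then cuts a read-only snapshot per release (needs root on btrfs).
                    cd /mnt/pool

                    btrfs subvolume snapshot -r myapp-dev myapp-1.0   # freeze release 1.0
                    btrfs send myapp-1.0 | gzip > myapp-1.0.btrfs.gz  # full stream for the first release

                    # ...hack on myapp-dev, fix bugs, then cut 1.1:
                    btrfs subvolume snapshot -r myapp-dev myapp-1.1
                    # Incremental stream: only blocks changed since 1.0 get shipped.
                    btrfs send -p myapp-1.0 myapp-1.1 | gzip > myapp-1.1.btrfs.gz
                    ```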

                    Please Read Me

                    Comment


                      #11
                      Thanks for that post, it was a good read

                      Why am I not using BTRFS? It's not the default, and I have so many things to learn that I haven't got round to it yet... the other things always seem to be more urgent. I will get around to it eventually, and when that day comes I'm sure I'll join the BTRFS fanclub and wonder why I waited so long! Lol.
                      samhobbs.co.uk

                      Comment


                        #12
                        Originally posted by oshunluvr
                        ... There isn't a single thing better about EXT4 over BTRFS that I am aware of....
                        Last I heard, BTRFS was much slower than EXT4. Is that no longer so?

                        Regards, John Little

                        Comment


                          #13
                          It's catching up: https://delightlylinux.wordpress.com...btrfs-or-ext4/

                          I have an install on a btrfs drive/partition and I didn't notice a difference in performance at all just using it - no tests or benchmarks, just using it.

                          Plus, with a snapshot of @ (the /) I did a "rm -rf /" on the running system (which killed it, of course), then just booted to an alternate install, renamed the snapshot (@snap) to @, and the system was restored exactly as it was at the point of death.
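                          In commands, that recovery amounts to something like this (illustrative only; assumes you're booted into the alternate install with the top of the pool mounted at /mnt/btrfs):

                          ```shell
                          # Roll the root subvolume back to the snapshot.
                          cd /mnt/btrfs
                          mv @ @dead     # set the wrecked root aside, if anything is left of it
                          mv @snap @     # promote the snapshot to be the new root subvolume
                          # reboot into the restored system (update fstab/bootloader first
                          # if they reference the subvolume by ID rather than by name)
                          ```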

                          VINNY

                          vinny@vinny-Bonobo-Extreme:/mnt/btrfs$ ls
                          @ @data @home kubuntu @snap
                          Last edited by vinnywright; May 07, 2015, 02:32 PM.
                          i7 4core HT 8MB L3 2.9GHz
                          16GB RAM
                          Nvidia GTX 860M 4GB RAM 1152 cuda cores

                          Comment


                            #14
                            Originally posted by NickStone
                            1. Are you going to be able to gain access to the source code of any open source application using this system of installing?
                            One of the best things about open source is that anyone can obtain the source code and scrutinise it for any malicious code or to fix bugs.
                            I agree that the ability to investigate source code is useful -- consider how much OpenSSL has improved of late after lots of new scrutiny.

                            Generally, though, most of us never bother to look at the source code and instead just download, install, and run binaries. Why? What makes these binaries trustworthy? Nothing, really. If we claim that it matters, then we need to construct a build environment exactly like that of the developer or packager, build the packages ourselves, and compare MD5 hashes of our results to those of the originals. Only if they match can we then know, to a certain degree, that the binaries reflect the true intention of the source code. (We are, of course, ignoring that the compiler itself might have trust issues.)
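                            That rebuild-and-compare idea in miniature (the two "builds" here are just copies of one input, since a faithful demo would need a fully pinned toolchain; every file name is invented):

                            ```shell
                            # Miniature version of "rebuild and compare hashes". The two
                            # "builds" are stand-ins: copies of the same source file.
                            printf 'int main(void){return 0;}\n' > demo.c

                            cp demo.c build-vendor.bin   # stand-in for the vendor's shipped binary
                            cp demo.c build-local.bin    # stand-in for our own rebuild

                            # Identical digests <=> byte-for-byte identical artifacts:
                            sha256sum build-vendor.bin build-local.bin
                            cmp -s build-vendor.bin build-local.bin && echo REPRODUCIBLE
                            ```

                            If the digests differ, you've learned nothing about intent, only that the build environment wasn't identical - which is exactly why reproducible builds are hard.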

                            Comment


                              #15
                              Originally posted by oshunluvr
                              I think the biggest plus to the concept is the total removal of the decision about which package manager to use if you're a distro or having to learn to package for every variation if you're a developer.
                              For all practical purposes, packaging is one of the few remaining major differences between distros. I'm a little mystified as to why the various attempts to create commonalities (like PackageKit) have failed.

                              Originally posted by oshunluvr
                              Moving away from a fragmented ecosystem of many ways to deliver a program to a single universal delivery system can't help but speed things up in the development cycle and therefore will increase the speed at which the users will get their "fix" of the latest release.
                              And actual fixes, too. A common delivery system would greatly simplify the delivery of security updates and patches.

                              Originally posted by oshunluvr
                              The real question is why aren't you all using BTRFS? The problems I have with btrfs are almost all related to lack of installer and bootloader support (I can't imagine why bootloaders aren't built into our motherboards, but that's a whole 'nother gripe ).
                              Yup, I've been sold on the merits of Btrfs for some time, too. Just this week I built up my new X250 thusly:
                              * /dev/sda1 - 512 MB FAT32 for EFI system partition
                              * /dev/sda2 - 8 GB swap
                              * /dev/sda3 - 224.5 GB Btrfs for / (contains @ and @home)

                              Of course, I had to do the partitioning at the command line using gdisk because Ubiquity (still!) appears not to know how to do this. (News flash: good old fdisk has now gained GPT support -- check it out.)

                              Oh, and most (all?) UEFI machines do have boot managers in the firmware. It's just that they're often difficult to invoke and they suck. systemd is subsuming gummiboot, a slightly better boot manager. rEFInd is still the best.

                              Originally posted by oshunluvr
                              More specifically, snapshots are "copies" of a subvolume sort of frozen in time, so the developer need only keep his program files and its dependencies in a separate subvolume and then nothing else from his system would be included in the delivery. The cool part would be version control. The dev gets ready to release, makes a snapshot, sends it out, then starts the next cycle of bug-fixes, enhancements, whatever. When it's time to release, make a new snapshot and send it, done. For as long as the dev wished, he would have all the versions saved as snapshots with no unnecessary redundant copies of any files that didn't change version-to-version. Neat.
                              Earlier I pointed out some security advantages to this approach. I should also point out some security problems. A shared library lets one fix propagate to every consumer, while bundled copies (in separate snapshots here) lead to rot. Let me construct an example.

                              libfoo-1.0 is used by BarTool and BazUtil. In today's world, only one copy of libfoo-1.0 exists on the system. J. Random Smartass finds a security bug in libfoo-1.0 and reports it. The developer fixes it and releases libfoo-1.0a. The developers of BarTool and BazUtil need to ensure that their code still functions with the updated libfoo-1.0a. This is all good.

                              Now imagine we have snapshot delivery of BarTool and BazUtil. For reasons that escape me, the developer of BarTool has decided to package libfoo-1.0 in his snapshot. When the updated libfoo-1.0a becomes available, the BarTool developer says "screw it" and decides not to test; furthermore, s/he declares a hard dependency (or static link? not sure which here, tbh) on libfoo-1.0. Meanwhile, the system's shared library gets updated to libfoo-1.0a, which will be used by BazUtil because its developer isn't so lazy. Now, then: the system has two copies of libfoo, a broken one and a fixed one. Can you see why this is a problem?

                              Comment
