Systemd In Ten Years Has Redefined The Linux Landscape


  • Originally posted by oiaohm View Post

    Before calling it BS, spend some time going through the benchmarks on the topic. I am interested to see how the Redox version turns out, and whether it suffers from the problems you can detect in jails, cgroups/namespaces and zones that make them slower than a VM under particular workloads, or whether they find a solution.

    The basic problem is that a lot of people treat process isolation as a solved game, like tic-tac-toe: something you can engineer a complete solution to because you understand the whole problem space. The various benchmark failures make it clear nobody has found the perfect way to isolate applications, which means every solution so far has missed something about the problem space.

    Your ideas would be right if we had successfully solved the problem space of application isolation. If we had, then zones, jails, cgroups/namespaces and the rest would beat a VM under any workload and be only marginally worse than bare metal. The benchmarks don't tell that story; they show those systems at times performing worse than a VM, so we have not solved it yet.

    While a problem is not properly solved, you cannot engineer a perfect solution to it either. That is the same reason you build a prototype and refine it before engineering the final product.

    Of course, someone could get lucky and engineer the correct solution outright, or the Linux kernel's chaotic development model could try enough different things to point to where the correct solution lies.
    It'd surely be interesting to see, but if someone is making this claim the first thing I'm going to do is question the methodology.

    Comment


    • Originally posted by k1e0x View Post
      It'd surely be interesting to see, but if someone is making this claim the first thing I'm going to do is question the methodology.
      Most of the write-ups say to use docker/jails/zones over a virtual machine because a container-based solution is always going to be better; the problem is that no benchmarks are included to back the statement. In reality, plenty of questions about such benchmarks turn up from multiple parties along the lines of "what in hell is this", like the following.


      Yes, this was a person a year ago going "what the hell is this". When cgroups/namespaces, jails or zones go wrong, they go wrong in strange ways, and this one is a real head-scratcher when it happens to you. Note the person is using the same docker image on the host and in a VM hosted on that same host, yet sees 30 percent less network traffic and 10 percent less CPU usage doing the same work inside the VM, soundly beating the host version. The network result is the fun one: packet handling in network namespaces adds processing time for every additional namespace that has to be considered.

      You can find cases where people hit equivalent oddities with FreeBSD jails and, historically, with Solaris zones.

      I have provided one of the benchmark sets.

      And you get a few rare published benchmarks showing the same "what the hell", like the CERN one. As you collect these benchmark sets, they all start telling the same story: containers/zones/jails are not always lightweight. This is a case where the real-world implementation does not align with the basic theory.

      The methodology of the Linux kernel is messy. But it has a strict advantage when what you are dealing with is not a 100 percent solved problem.

      cgroup v1, even though it was a failure, gave us some baseline requirements (see the sketch after this list):
      1) Memory, CPU, IO, network... basically every resource needs to be handled in one unified way to deal with priority inversion.
      2) In real-world usage, multiple separate resource trees turn out to be hard to impossible to manage well.
      3) Unlike the zones and jails of the past, we need to be able to stack (nest) groups.
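
      To make those three lessons concrete, here is a minimal sketch in C of how cgroup v2 answers them: one unified tree, the cpu/memory/io controllers enabled side by side, and nested groups. It assumes a mounted cgroup v2 hierarchy and root privileges, and the "app"/"worker" group names are made up for illustration.

      ```c
      /* Minimal cgroup v2 sketch; "app" and "app/worker" are hypothetical. */
      #include <stdio.h>
      #include <sys/stat.h>

      static void write_str(const char *path, const char *val)
      {
          FILE *f = fopen(path, "w");
          if (!f) { perror(path); return; }
          fputs(val, f);
          fclose(f);
      }

      int main(void)
      {
          /* Lesson 3: groups stack by nesting directories in ONE tree. */
          mkdir("/sys/fs/cgroup/app", 0755);
          mkdir("/sys/fs/cgroup/app/worker", 0755);

          /* Lesson 1: cpu, memory and io are enabled together on the same
           * subtree, so one hierarchy arbitrates every resource at once. */
          write_str("/sys/fs/cgroup/cgroup.subtree_control", "+cpu +memory +io");
          write_str("/sys/fs/cgroup/app/cgroup.subtree_control", "+cpu +memory +io");

          /* Lesson 2: a single tree to manage; all the knobs sit side by side. */
          write_str("/sys/fs/cgroup/app/worker/memory.max", "268435456");
          write_str("/sys/fs/cgroup/app/worker/cpu.max", "50000 100000");
          return 0;
      }
      ```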

      This was all worked out through real-world testing and usage. Development that is not fully engineered up front lets the work be guided by actual usage cases; that is the strict advantage. The messy development lets the people really using the code feed back what functionality is needed and what performance problems they hit, and then run experiments to find solutions. This is very close to the scientific method, just like the engineering model, but with key differences.

      The Linux kernel development model is roughly:
      1) Idea/objective: formulation of a small question about an individual problem.
      2) Hypothesis: you see this in the Linux kernel patch notes from the developer of the patch.
      3) Prediction: you also see this in the patch notes.
      4) Implementation: the patch itself.
      5) Testing of the hypothesis: this area has been weak on Linux, happening in the real world without enough QA; hopefully that changes, but at least it gets the code out for broad testing.
      6) Analysis of the hypothesis: also weak, happening in real-world usage; if the hypothesis is wrong, you see a lot of complaints sooner or later.

      We are starting to see kunit and kselftest used more and more, so the Linux kernel methodology is going to work better through those cycles. It looks messy because asking one small question at a time is simpler than trying to solve one huge problem in one hit.
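
      For reference, this is roughly what one of those small, single-question experiments looks like as a KUnit test (kernel-side C; the suite and case names here are hypothetical):

      ```c
      /* Minimal KUnit sketch: one small hypothesis, one automated answer. */
      #include <kunit/test.h>

      static void example_addition_test(struct kunit *test)
      {
          /* Hypothesis: the code under test adds correctly. */
          KUNIT_EXPECT_EQ(test, 4, 2 + 2);
      }

      static struct kunit_case example_cases[] = {
          KUNIT_CASE(example_addition_test),
          {}
      };

      static struct kunit_suite example_suite = {
          .name = "example",
          .test_cases = example_cases,
      };
      kunit_test_suite(example_suite);
      ```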

      The Linux kernel does not follow the engineering model, and the engineering model has a weakness. The engineering model is the following:

      1) Idea: a large, broad-reaching idea.
      2) Concept: background research.
      3) Planning: break the broad-reaching idea down into smaller objectives, with multiple hypotheses on how it will work.
      4) Design: this is where you run your proofs of concept.
      5) Development: prototypes and experiments.
      6) Launch: the release to end users.

      Basically, Linux kernel development is stages 3-5 of the engineering model; to be precise, it sits about halfway through the development stage, following the release-early, release-often model of software development.

      FreeBSD jails don't have the full PID namespace functionality.

      There is a lot of stuff there that is stalled or not merged. This is because the engineering model can bog down in a chicken-and-egg problem.

      Your development stage may need more resources to build the prototypes and run the experiments. If you don't have ARM hardware, say, and you need to test that what you did works there, you have to launch the thing to get that testing; yet following the engineering model, you are stuck in the development stage.

      When you don't have a properly solved problem and you attempt to apply the engineering model, more often than not the work stalls, as we see on the FreeBSD side. It gets stuck in the design or development stages of the engineering model without the right testing to say whether a solution is right or wrong.

      The Linux kernel development model is messy, I will give you that. But the method in use on the Linux kernel is stall-resistant; that does come at the price of the mess.

      One of the fun things about methodology: if you go after a perfect methodology, the result can be that you never deliver a product. Basically it comes down to working out how much perfection is good enough; chasing too much perfection ends up stalling production.

      Of course, I will say the Linux kernel's weak QA system has not been helpful. In a lot of ways the Linux kernel's messy development collects the information for someone else to later build an engineered solution.

      Comment


      • oiaohm
        One problem: you are overly fixated on jails when it comes to FreeBSD. The problem, an understandable one, is that because the OS families differ you cannot do a 1:1 comparison, so it's easy to ignore or miss some aspects. Like the guy who checked, saw that the BSDs do not have cgroups, and instantly concluded that the BSDs do not have ANY resource-control facilities. Same with PID namespaces etc.

        Where I was going with that talk: look up Capsicum. It's a lightweight virtualization/sandboxing framework, present in FreeBSD since 9.0 and ported to OpenBSD as well. Somebody also tried to port it to Linux, AFAIK. It's not meant to replace anything; it's meant to extend.

        Capsicum is somewhat similar to seccomp-bpf in Linux but also different in a few ways: it implements capability-based security, focusing on access to global namespaces.
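
        A minimal sketch of the Capsicum model in C on FreeBSD: acquire the descriptors you need, enter capability mode with cap_enter(), and from then on global namespaces such as filesystem paths are unreachable; only the descriptors you already hold keep working.

        ```c
        #include <sys/capsicum.h>
        #include <fcntl.h>
        #include <stdio.h>
        #include <unistd.h>

        int main(void)
        {
            /* Acquire resources BEFORE entering capability mode. */
            int fd = open("/etc/motd", O_RDONLY);
            if (fd < 0) { perror("open"); return 1; }

            /* Point of no return: global namespaces are now off limits. */
            if (cap_enter() < 0) { perror("cap_enter"); return 1; }

            /* This open() now fails with ECAPMODE... */
            if (open("/etc/motd", O_RDONLY) < 0)
                perror("open in capability mode");

            /* ...but the descriptor we already hold still works. */
            char buf[128];
            ssize_t n = read(fd, buf, sizeof buf);
            printf("read %zd bytes through the held fd\n", n);
            close(fd);
            return 0;
        }
        ```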

        Comment


        • I think the word is "ravaged" but alright. Of course 10 BILLION fanboys (And counting!) can't be wrong. You're still using another init system? Really?

          Comment


          • Originally posted by aht0 View Post
            Where I was going with that talk: look up Capsicum. It's a lightweight virtualization/sandboxing framework, present in FreeBSD since 9.0 and ported to OpenBSD as well. Somebody also tried to port it to Linux, AFAIK. It's not meant to replace anything; it's meant to extend.

            Capsicum is somewhat similar to seccomp-bpf in Linux but also different in a few ways: it implements capability-based security, focusing on access to global namespaces.
            Not quite. Capsicum is very much like the cgroup BPF work on Linux, which starts here: https://lwn.net/Articles/697462/ And there is work under way to extend that from cgroups to do everything seccomp-bpf does.



            Yes, and it is seriously just the tip of the iceberg: this is BPF being used as a Linux kernel security module.

            With BPF you are not working with a fixed set of capabilities. Capability-based flags have to be designed right at the start; a BPF program is something you load into the kernel later.

            The Linux kernel with BPF is going a very different route: not attempting to engineer a security design, but engineering a framework that lets a security design be loaded in after the fact.
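
            To make the contrast concrete, here is a minimal seccomp-bpf sketch in C: the policy is not a capability flag baked in at design time but a small BPF program assembled in userspace and loaded after the fact (this one denies execve and allows everything else).

            ```c
            #include <linux/filter.h>
            #include <linux/seccomp.h>
            #include <stddef.h>
            #include <stdio.h>
            #include <sys/prctl.h>
            #include <sys/syscall.h>
            #include <unistd.h>

            int main(void)
            {
                /* The "security design" is just this program, built at runtime. */
                struct sock_filter filter[] = {
                    /* Load the syscall number. */
                    BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
                             offsetof(struct seccomp_data, nr)),
                    /* If it is execve, kill the process; otherwise allow. */
                    BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_execve, 0, 1),
                    BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_KILL),
                    BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
                };
                struct sock_fprog prog = {
                    .len = sizeof(filter) / sizeof(filter[0]),
                    .filter = filter,
                };

                /* Required so an unprivileged process may install a filter. */
                prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);
                if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog) < 0) {
                    perror("prctl(SECCOMP)");
                    return 1;
                }
                puts("filter loaded; execve is now denied for this process");
                return 0;
            }
            ```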


            Comment


            • Originally posted by fsfhfc2018 View Post
              I think the word is "ravaged" but alright. Of course 10 BILLION fanboys (And counting!) can't be wrong. You're still using another init system? Really?
              A person like me really does use more than one init system. I have an OpenWrt box using the procd and busybox init solutions, a few setups using OpenRC, and I use systemd. I really do have some understanding of the weaknesses of the other choices.

              Comment


                • I love systemd; its best feature is moving all the nutjobs over to BSD/Devuan.

                Comment


                  • systemd changed a lot of things for the better; most importantly it led to various improvements in the linux kernel and userspace that other projects benefit from as well.

                    better usage of cgroups (which did not really get much attention up to that point), the /run directory for volatile stuff, and lots of so-called kernel-plumbing efforts.

                    it also has shown how conservative lots of people in the linux user/dev community are, to the point of blocking innovations out of spite. i am not a fan of some things this project does, but i still think it was a big net gain for linux in general. i like how the systemd devs have the courage to try new and interesting things, because most projects are just too conservative to even attempt that.

                  Comment


                  • Originally posted by yoshi314 View Post
                    out of spite.
                    Not wanting a corporate monoculture to take over has nothing to do with "spite."

                    Comment


                    • Originally posted by fsfhfc2018 View Post
                      Not wanting a corporate monoculture to take over has nothing to do with "spite."
                      Corporate monoculture took over init and service management a long time ago. Look at distrowatch data from 2001 and 2002 and you will see sysvinit/consolekit/udev... as the default configuration of over 95% of all Linux distributions. Then look at who maintained sysvinit/consolekit/udev... at the time: hello Red Hat, Red Hat and more Red Hat.

                      It could be ignored while they appeared to be a stack of independent projects, but they really were not independent: you would find that consolekit needed udev of a particular version, which in turn depended on something sysvinit set up with selinux... and so on. Yes, in the mess before systemd, even though they appeared to be independent projects, they were tightly bound to each other and did not really operate independently, because the development was basically done in-house at Red Hat.

                      The corporate monoculture was a done deal before 2001. For some reason you only get upset now.

                      That's right: Red Hat decided to merge the parts it already had under its absolute control into one project called systemd. So the corporate monoculture takeover was complete before systemd existed.

                      You are spiteful that a corporation now has so much control that you can actually see it. It is spite because you don't really understand the facts of where we are: you don't want to admit that the corporate monoculture argument was lost before systemd came into existence, well and truly before the year 2000, and nothing since then has changed that fact.

                      Now, if you really want to break this monoculture, somehow a group has to make something more useful than the combination systemd is. Running back to sysvinit and other old, broken solutions will not help.

                      Also making up false claims about systemd does not help.

                      Finally, being spiteful over the fact that we have a corporate monoculture problem does not help anything. It is not good grounds for being anti-systemd, since that offers nothing better or more useful.

                      Failing to see the stuff that needs to happen for future init systems is not helping either.

                      Like recently: we finally got the work that lets you start a process directly in a cgroup, we have pidfd, and more. The kernel-level frameworks needed to build a really good service-management solution are starting to come along.
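
                      For the curious, the "start a process directly in a cgroup" piece is the CLONE_INTO_CGROUP flag for clone3() (Linux 5.7+). A minimal sketch in C; the cgroup path is hypothetical, and since glibc has no clone3() wrapper it goes through syscall():

                      ```c
                      #define _GNU_SOURCE
                      #include <fcntl.h>
                      #include <linux/sched.h>   /* struct clone_args, CLONE_INTO_CGROUP */
                      #include <signal.h>
                      #include <stdio.h>
                      #include <string.h>
                      #include <sys/syscall.h>
                      #include <unistd.h>

                      int main(void)
                      {
                          /* Hypothetical, pre-created cgroup v2 directory. */
                          int cgfd = open("/sys/fs/cgroup/myservice", O_RDONLY | O_DIRECTORY);
                          if (cgfd < 0) { perror("open cgroup"); return 1; }

                          struct clone_args args;
                          memset(&args, 0, sizeof args);
                          args.flags       = CLONE_INTO_CGROUP;
                          args.cgroup      = (unsigned long long)cgfd;
                          args.exit_signal = SIGCHLD;

                          /* clone3 has no glibc wrapper, so call it directly. */
                          long pid = syscall(SYS_clone3, &args, sizeof args);
                          if (pid < 0) { perror("clone3"); return 1; }
                          if (pid == 0) {
                              /* Child: born inside the cgroup, no racy migration step. */
                              execlp("sleep", "sleep", "30", (char *)NULL);
                              _exit(127);
                          }
                          printf("service started as pid %ld inside the cgroup\n", pid);
                          return 0;
                      }
                      ```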

                      Comment
