The Linux Kernel Has Been Forcing Different Behavior For Processes Starting With "X"


  • #91
    Maybe I'm just not getting it, but does that mean that a game like 'Xonotic' is also affected?



    • #92
      Originally posted by mdedetrich View Post
      Thinking of it another way, it's the "let it crash" mentality, which was also popularized in the Erlang programming language (https://medium.com/@vamsimokari/erla...hy-53486d2a6da), which is used in telephone exchanges (Erlang is the reason why the phone service works 24x7).
      I actually thought of this analogy much later in the day (the discussion piqued my interest). The first time I read about the "let it crash" philosophy I thought it was horrible. Then I got the "aha" moment, after which everything made sense.

      Originally posted by mdedetrich View Post
      The same design is also what allows microkernels to restart drivers if they happen to crash (i.e. segfault), at which point they can be gracefully restarted. Although Linux can do this, it's not a given as part of the design and hence it's not universal (I've still had cases somewhat recently where Linux just crashes, usually due to graphics drivers).
      Yeah, it's also not very robust in Linux. I've been in situations where once a module failed and was unloaded, loading it again was not even allowed (which is still better than it loading and behaving even worse than before tho).
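
      To make that concrete, here's a minimal "let it crash" supervisor sketch (my own toy illustration, not how any particular microkernel implements it): a hypothetical driver runs as an ordinary child process, and whenever it dies abnormally (e.g. segfaults) it simply gets restarted:

      #include <stdio.h>
      #include <sys/types.h>
      #include <sys/wait.h>
      #include <unistd.h>

      /* Stand-in for real driver code; a crash here only kills this process. */
      static void run_driver(void) {
          pause();
      }

      int main(void) {
          for (;;) {
              pid_t pid = fork();
              if (pid < 0) { perror("fork"); return 1; }
              if (pid == 0) { run_driver(); _exit(0); }

              int status;
              waitpid(pid, &status, 0);
              if (!WIFSIGNALED(status))
                  break;                      /* clean exit: stop supervising */
              fprintf(stderr, "driver died with signal %d, restarting\n",
                      WTERMSIG(status));
              sleep(1);                       /* naive backoff before restart */
          }
          return 0;
      }

      The point is that the failure domain is the process, so the supervisor can stay trivially simple; real microkernels restart server processes along these lines instead of taking the whole kernel down.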

      Originally posted by mdedetrich View Post
      If you actually care about rock-solid security and stability, microkernels are what has been used, for obvious reasons. There are other techniques as well (e.g. formal verification), which is why seL4 (a microkernel with formal proofs) is alien-level tech. This level of security/reliability is overkill for most consumer and even business segments, but to claim microkernels are pointless or a gimmick is just stupid.
      I agree. They have their uses. As I'm often mentioning, people seem to think the consumer desktop is everything that exists, for some reason. And yeah, for those uses it's pretty pointless and doesn't justify the performance hit. But some critical services do need to add reliability any way they can.



      • #93
        Originally posted by xfcemint View Post
        First of all, the premise that a "performance tradeoff" between monolithic kernels and microkernels even exists is questionable.

        Let's assume that there exists some non-negligible performance penalty for using a microkernel. This performance penalty would probably arise due to an increased number of context switches. Then, the mitigations for side-channel attacks would make microkernels slower.
        I'm not sure you can avoid the context switches and memory copies that microkernels introduce, so I'm not sure how much the performance tradeoff is questionable. There is a counterpoint, though: the whole kernel, or the active server's code, might fit in the icache, so maybe that can offset the hit. But for some kernels the hit was actually measured and documented in papers.
        Regarding mitigations, my doubt is whether extra ones are even necessary compared to the status quo: those context switches may already imply some flushing of caches and such, in which case there would be no need for additional mitigations. Again, just a doubt I have.
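
        For what it's worth, the order of magnitude is easy to ballpark yourself. A rough sketch (my own toy benchmark, not from any of those papers): two processes bounce a single byte over a pair of pipes, forcing two context switches per round trip, which is roughly the overhead a microkernel pays on every synchronous IPC:

        #include <stdio.h>
        #include <time.h>
        #include <unistd.h>

        #define ROUNDS 100000

        int main(void) {
            int ab[2], ba[2];                   /* parent->child, child->parent */
            char b = 'x';
            if (pipe(ab) != 0 || pipe(ba) != 0) { perror("pipe"); return 1; }

            if (fork() == 0) {                  /* child: echo every byte back */
                for (int i = 0; i < ROUNDS; i++) {
                    if (read(ab[0], &b, 1) != 1) break;
                    write(ba[1], &b, 1);
                }
                _exit(0);
            }

            struct timespec t0, t1;
            clock_gettime(CLOCK_MONOTONIC, &t0);
            for (int i = 0; i < ROUNDS; i++) {  /* parent: send, wait for echo */
                write(ab[1], &b, 1);
                read(ba[0], &b, 1);
            }
            clock_gettime(CLOCK_MONOTONIC, &t1);

            double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
            printf("%.0f ns per round trip (two context switches)\n", ns / ROUNDS);
            return 0;
        }

        In my experience this lands somewhere in the microseconds per round trip on commodity hardware, which is the kind of per-call cost the icache argument would have to win back.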

        Originally posted by xfcemint View Post
        If CPUs get proper hardware protection against side-channel attacks, the microkernels would automatically get faster.
        But that applies to all kernels, so it would only reduce the gap (assuming such a gap exists, of course).

        Originally posted by xfcemint View Post
        If CPUs get proper support for microkernels, then the "performance penalty for a microkernel" would become an even more meaningless term.
        What would a CPU with "proper support for microkernels" look like? Something that allows more message passing between userspace processes without kernel interaction? AFAIR Intel did introduce some instructions for that (I was mostly interested in it thinking of Go and more efficient and correct preemption of goroutines, but I guess it would be useful for microkernels).



        • #94
          Originally posted by archkde View Post
          Monolithic kernels that look exactly like the average code I would write and never be able to properly read again do not fix this.
          Any kernel of such quality might not fix this.

          But back to topic, according to the article the problem was fixed three years ago in X11. And it was an X11 problem, which the kernel devs tried to accommodate in a rather clumsy way.

          I think Donenfeld's patch should be accepted, but perhaps not immediately. It seems not super-urgent anyway if the hack did not cause problems for the last three years. So my approach would be to
          1. announce that old X11 versions up to a certain release are deprecated,
          2. give users and distributions a few months to upgrade their X11 version,
          3. then apply the patch and accept the breakage that may happen in some poorly maintained environments.



          • #95
            Originally posted by xfcemint View Post
            Microkernels/seL4
            Nah. A microkernel would be a huge step forward for the consumer desktop. The actual issue is that microkernels are harder to develop, and much harder to design correctly, compounded by a historical sequence of usable monolithic kernels being ready first.

            Then later, you get rationalizations (more precisely, false excuses) about how monolithic kernels are better or faster or whatever. Because no group of people can openly admit that they have used an inferior design for developing an OS.

            seL4 would be overkill for consumer desktop, but some "less stringent" microkernel would be awesome.
            What would it accomplish in a space where some degree of unreliability is clearly accepted by the users? It's a lot of extra work, as you mention, so cost/benefit analysis applies here. I think the consumer desktop has many more problems at the application level than it has at the kernel stability one. The only actual advantage for consumers I can think of is that it would force ABI compatibility for drivers (or rather, make it unnecessary, as they behave much like normal processes), so many of the driver compatibility problems would go away. But other than that, most users seem to be OK with the status quo.

            I don't think the part about people not being able to admit they used an inferior design is necessarily true. Cutting corners is something that is done and admitted every day in software development. That often implies some misdesigns here and there. IMO it's generally the fanboys (who are typically just users) that are in denial. Just as with X11 vs Wayland, no camp will admit the flaws of their display protocol of choice, while devs are often aware of the things they are trading off.



            • #96
              Originally posted by xfcemint View Post
              Microkernels/seL4


              Nah. A microkernel would be a huge step forward for the consumer desktop. The actual issue is that microkernels are harder to develop, and much harder to design correctly, compounded by a historical sequence of usable monolithic kernels being ready first.

              Then later, you get rationalizations (more precisely, false excuses) about how monolithic kernels are better or faster or whatever. Because no group of people can openly admit that they have used an inferior design for developing an OS.

              seL4 would be overkill for consumer desktop, but some "less stringent" microkernel would be awesome.
              I think the people writing and maintaining these systems are too focused on actually working on the system everyone uses rather than debating the religious dogma of kernel design. The OS war you're talking about exists entirely in your head; the real world just wants to make things that work. There are so, so many more important issues in OS design than just which general overarching architecture the system follows. Not to mention other methods for checking OS stability at runtime, which are being implemented in Linux and which, unlike in a classic microkernel, include the kernel itself in the code that checks whether the system is doing okay. And then you have language-based solutions, which are also now being adopted thanks to Rust. An OS isn't just one architecture; it's usually many philosophies carefully curated into a new design.
              And again, Linux and Windows are hybrids; all the problem drivers are in usermode. It shouldn't matter whether the critical drivers are in kernelmode or usermode; what matters is that they're actually designed well and tested properly.



              • #97
                Originally posted by Ironmask View Post

                And again, Linux and Windows are hybrids; all the problem drivers are in usermode. It shouldn't matter whether the critical drivers are in kernelmode or usermode; what matters is that they're actually designed well and tested properly.
                I wouldn't put Linux and Windows in the same "hybrid" bucket, because there is a world of difference between them. Even on Wikipedia the Windows NT kernel (which is what modern Windows uses) is classified as a hybrid kernel; it actually takes inspiration from Mach (see https://en.wikipedia.org/wiki/Archit..._kernel_design). The Linux kernel, on the other hand, is a pretty bog-standard monolithic kernel (with the typical userspace isolation) with some microkernel-like designs sprinkled on top (e.g. DKMS).



                • #98
                  Originally posted by Ironmask View Post
                  And again, Linux and Windows are hybrids; all the problem drivers are in usermode.
                  Hmmmm, besides shader compilers and a bit more of the GPU drivers (granted, the riskier and bigger part), what's in userspace in terms of drivers in Linux? I really don't see why you consider Linux to be hybrid; I don't remember if you answered that before.



                  • #99
                    Originally posted by sinepgib View Post

                    Hmmmm, besides shader compilers and a bit more of the GPU drivers (granted, the riskier and bigger part), what's in userspace in terms of drivers in Linux? I really don't see why you consider Linux to be hybrid; I don't remember if you answered that before.
                    Yeah, I don't get what he's going on about here, because pretty much all of the critical parts of the drivers in Linux are in-tree. As mentioned before there is DKMS, but because there is no stable interface a build only works for the same kernel revision (in other words, there isn't a stable ABI/protocol).
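
                    For reference, DKMS doesn't ship binaries at all; it rebuilds the module from source against every installed kernel, precisely because there is no stable module ABI. A minimal dkms.conf sketch for a hypothetical out-of-tree module (all names made up):

                    # dkms.conf for a hypothetical "mydriver" module; DKMS recompiles it
                    # for each kernel version, since a module built for one revision
                    # won't load on another.
                    PACKAGE_NAME="mydriver"
                    PACKAGE_VERSION="1.0"
                    BUILT_MODULE_NAME[0]="mydriver"
                    DEST_MODULE_LOCATION[0]="/kernel/drivers/misc"
                    AUTOINSTALL="yes"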



                    • Originally posted by xfcemint View Post
                      Standard brainwashing.
                      Ok.

                      Originally posted by xfcemint View Post
                      I'll try to be as short as possible, otherwise I'll be writing a book here.
                      memory copies -> shared memory
                      context switches -> clustered OS calls
                      context switches -> automatic or semi-automatic option to integrate services into kernel address space (that's not the same as a hybrid kernel, not even close to it. Why? The difference is that the user has a CHOICE.)

                      microkernel support = tagged/colored caches, only a small part of the cache is flushed on a context switch (or nothing is flushed when repeatedly switching between a small number of contexts).
                      All of those are features that would be useful regardless of your kernel architecture tho. But yeah, they would level the field a lot.
                      Shared memory without some degree of validation by the kernel could be problematic: one service may cause another one to misbehave due to races and whatnot (one of the things you care about if you're designing something for reliability, such as a microkernel). But now that I think of it, you could just notify the kernel whenever you send a message, so that you can't write to the shared buffer again (by means of the kernel modifying access permissions) until the reader sends an ack.
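
                      A toy sketch of that ack-based idea (my own illustration, assuming the kernel would flip page permissions on the sender's mapping; here one process plays both roles and flips them itself with mprotect()):

                      #include <stdio.h>
                      #include <string.h>
                      #include <sys/mman.h>
                      #include <unistd.h>

                      int main(void) {
                          long pg = sysconf(_SC_PAGESIZE);
                          char *buf = mmap(NULL, pg, PROT_READ | PROT_WRITE,
                                           MAP_SHARED | MAP_ANONYMOUS, -1, 0);
                          if (buf == MAP_FAILED) { perror("mmap"); return 1; }

                          strcpy(buf, "message");                    /* writer fills the buffer   */
                          mprotect(buf, pg, PROT_READ);              /* "send": revoke write perm;
                                                                        further writes would fault */
                          printf("reader sees: %s\n", buf);          /* reader consumes the data   */

                          mprotect(buf, pg, PROT_READ | PROT_WRITE); /* "ack": writable again      */
                          strcpy(buf, "next message");
                          munmap(buf, pg);
                          return 0;
                      }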

                      Originally posted by xfcemint View Post
                      What are ALL the advantages of modularization and compartmentalization? No one can list them. You mentioned a few, but that's far from an exhaustive list.

                      Security, stability, easier maintenance, easier upgradeability, and tons and tons of other stuff are the advantages.

                      You might be right that maybe in the past there was a need to cut corners. Is it finally time for that to stop? Even if no one wants to put in the hard work, can the fanboys of the status quo at least admit that the critique is valid? Who is in denial?

                      The OS world cannot go forward if we are stuck in the past forever, frozen in place by a heap of excuses. The first step forward is to admit it.
                      Remember, the question is not whether those advantages exist (they do!) but whether a consumer will care. We live under capitalism: good enough quickly wins over perfect slowly in terms of dominating the market, so cutting corners will always happen. If you can make something people will buy, with a smaller investment and much sooner, it doesn't matter if the competing product is better; you've already won. All of this is especially true for consumers.
                      For servers, embedded, high availability, etc., a microkernel is worth the investment. For consumers I'm not that sure.

