The Linux Kernel Has Been Forcing Different Behavior For Processes Starting With "X"


  • #81
    Originally posted by mercster View Post
    You're right, I'm not...
    And hopefully, for our sake, you never will be.

    Originally posted by mercster View Post

    but I know I'm not smart enough to question something said on LKML or any other kernel dev forum.
    It was a personal doubt, which was correctly answered by others, not by you. Apparently you're also not smart enough to understand that.

    Originally posted by mercster View Post
    ​​
    even after admitting you don't know C!
    I stated up front that I don't have detailed knowledge of C syntax (hence my doubt). But I'm already aware of your personal lack of reading comprehension skills, so don't worry.


    Originally posted by mercster View Post
    ​​​
    Just so everyone watching this kid have a tantrum sees the original post, and refuses to take his L quietly and go home.
    Shouldn't have gotten triggered after trying to be a smartass about my doubt.



    • #82
      Originally posted by archkde View Post

      Yeah, why have a readable code base when we can also have a 20MLOC spaghetti mess. Performance and energy are not wasted due to microkernels, but due to bad interface design and bad algorithms. Yes, this includes common monolithic kernels.
      No offense dude, but if you lack the expertise to design or write a microkernel you probably lack the expertise to explain to others how to properly design a microkernel.

      Your comments come off as deeply armchaired.



      • #83
        I've noticed the latest "cool kid" move is to call something "spaghetti code" when you haven't even looked at the code. Someone heard something called "spaghetti code" once, and it caught on like wildfire.



        • #84
          Originally posted by ll1025 View Post

          No offense dude, but if you lack the expertise to design or write a microkernel you probably lack the expertise to explain to others how to properly design a microkernel.

          Your comments come off as deeply armchaired.
          Funny how, for this comment, you picked the one post of "your" "dude" that talks about microkernels the least. I just explained facts about where performance and energy are really wasted. Everything takes too many syscalls to do. Complex heuristics are added to the kernel to guess what userspace is about to do next instead of letting userspace tell it. Userspace programs and kernels alike waste their time in O(n^2) algorithms that should be O(n). Monolithic kernels that look exactly like the average code I would write and never be able to properly read again do not fix this.
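
          To make the O(n^2)-versus-O(n) point concrete, here is a minimal hypothetical C sketch (my own illustration, not code from any kernel) of the kind of accidentally quadratic loop I mean:

          #include <ctype.h>
          #include <string.h>

          /* Calling strlen() in the loop condition rescans the whole string on
           * every iteration, so a single linear pass silently becomes O(n^2). */
          static void uppercase_quadratic(char *s)
          {
              for (size_t i = 0; i < strlen(s); i++)      /* strlen() is O(n), run n times */
                  s[i] = (char)toupper((unsigned char)s[i]);
          }

          /* The O(n) version just stops at the terminator (or hoists the length). */
          static void uppercase_linear(char *s)
          {
              for (size_t i = 0; s[i] != '\0'; i++)
                  s[i] = (char)toupper((unsigned char)s[i]);
          }

          The "let userspace tell the kernel" point is the same idea behind explicit hints such as posix_fadvise(2) and madvise(2), instead of the kernel trying to guess access patterns with heuristics.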



          • #85
            Originally posted by ryao View Post
            For something “unmaintained”, it has a large number of release announcements:

            Individual modules are being updated as needed.

            The core server is being updated too:
            Please read my post more carefully.

            "There was about 2 and a bit years where the X.org X11 server did not have maintainer that is 2018 to 2021 time area."

            So the 2022 release is from the current maintainer.

            Yes, individual modules can be updated individually, but there is a reason why the core xorg-server part is a true maintainer killer. It's building and running the test suite in xorg-server that tells you whether all the individual modules still link up and still produce an X11 implementation that is inside the specification.

            The large number of release announcements for individual parts that you point to is exactly how the xorg-server maintainer being absent from proper action for two years was critically missed.

            Just because a maintainer is doing releases does not mean they are spending the time to validate that the patches they are getting are truly correct.

            Notice something: almost every year up until 2018 you see 1.x.y as the version number, with the x increasing by 1. It was when that x increased by one that the past xorg-server maintainer used to perform the full check. The next major version change is in 2021, where you see 21.x; the leading 1 has been dropped, and this is the new maintainer.

            X.org stopping complete unified releases has not removed the need for someone to do a unified certification that everything works. Yes, the X.org site documentation claims the need for unified releases ended in 2012, and that turns out not to be reality.

            This is the same reason sysvinit has been so much of a problem child. There needs to be a lead maintainer or a test system that certifies that all the individual parts do in fact work with each other and files bugs against the different parts when they don't.

            Yes, the person who was listed as the xorg-server maintainer from 2018 to 2021, a long-term maintainer of it, openly admits to being burnt out at the time and not doing the maintainership job properly, including not running the full system-wide parts of the test suite. If the issue that burnt him out is not corrected, the new xorg-server maintainer we have now will most likely end up burnt out as well.

            The reality here is bad. The core X.org documentation fails to treat the test suite for the complete protocol stack as critical. It is a sum-of-parts problem: each individual part may work alone, but assembled there can be failures, and someone has to check for these failures. The validation that everything still works together has been landing on one module maintainer. Yes, one person. This problem is causing burnout, and that burnout results in the maintainer not doing their job correctly.

            And a maintainer not doing their job correctly, as the past xorg-server maintainer proved with his own admission of not doing the maintainership work to check patches, can still result in project releases happening. We need another metric for whether a project is on track and being maintained. This is a recent lesson, only learned in 2021, that this is possible.

            Please note I don't blame the prior X11 server maintainer for what happened. There is a problem here: maintainers can get overworked and burnt out, stop doing their job correctly, and end up being ignored for way too long. This problem is not helped by project documentation not matching what needs to happen to maintain a stable, working product.



            • #86
              Possibly a stupid question, but would mitigations for side-channel attacks change the performance tradeoff between monolithic and microkernels?
              I think it either makes the tradeoff much worse (if mitigations are still necessary, the extra syscalls will amplify the performance hit), which would undercut many of the claims that microkernels are "not that bad anymore" compared to monolithic kernels; or it makes some mitigations unnecessary due to reduced sharing, in which case a microkernel might actually end up performing better than a monolithic kernel with mitigations enabled?
              I would guess that hasn't been measured, but it has probably been analyzed theoretically in some paper; I have no idea how to look for it.
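
              One crude, hypothetical way to at least put a number on the per-syscall part of that question: time a batch of trivial syscalls, then boot once normally and once with the kernel's mitigations=off command-line option and compare. This only measures syscall round-trip cost, not a full micro- vs. monolithic-kernel comparison:

              #include <stdio.h>
              #include <time.h>
              #include <unistd.h>

              int main(void)
              {
                  const long iters = 1000000;
                  struct timespec start, end;

                  clock_gettime(CLOCK_MONOTONIC, &start);
                  for (long i = 0; i < iters; i++)
                      (void)getppid();                  /* cheap real syscall round trip */
                  clock_gettime(CLOCK_MONOTONIC, &end);

                  double ns = (end.tv_sec - start.tv_sec) * 1e9
                            + (end.tv_nsec - start.tv_nsec);
                  printf("%.1f ns per syscall\n", ns / iters);
                  return 0;
              }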



              • #87
                Originally posted by oiaohm View Post
                Yes individual modules can be updated individual but there is a reason why the core xorg-server bit is true maintainer killer
                It was a mistake to split it into a bunch of messy modules in true GNU chaos fashion. However, the Xenocara semi-fork developers and Oracle's Alan Coopersmith have always been active on the code.

                There is code in e.g. coreutils that hasn't been touched for much longer. Certainly that isn't a good candidate for pointless replacement either.



                • #88
                  Originally posted by kpedersen View Post
                  true GNU chaos fashion
                  I will steal this expression and never forget it.



                  • #89
                    Originally posted by M@GOid View Post
                    Xorg is fine they say. No need for Wayland they say...
                    Yet a decade later, Wayland still isn't a complete replacement for X. What an absolute joke, but completely expected.



                    • #90
                      Originally posted by sinepgib View Post

                      Again, ideally. I did write a bug in a custom Linux driver once that corrupted memory and it did not crash the system until much later than when the bug manifested. I was allocating one page less than I thought due to a lame attempt to be clever at math, so my driver ended up writing out of bounds corrupting the poor innocent soul that asked for the next page.
                      That wouldn't have happened in a microkernel because either that page was mapped by my process and it didn't (directly) affect other drivers or kernel tasks or it wasn't and the driver receives a segfault at the very moment of access, deterministically. What policy to follow when that happens would be probably defined on the basis of how critical the process is, of course. Crashing the OS gracefully or restarting.
                      Obviously, even in the first scenario there's no "no harm guarantee". For example, if the driver did corrupt itself now it would send unreliable results to processes asking it for data, so there could be a chain reaction. But that's still a reduction in the ways it can harm the user and specially their data.
                      Yeah, it's shocking how many people commenting on microkernels aren't actually aware of what the main selling point of microkernels is: memory pages are properly segregated, so if something goes wrong in a driver the fallout is isolated, rather than, as you describe, corrupting some memory that goes unnoticed until it gets touched by something else later on. Thinking of it another way, it's the "let it crash" mentality that was also popularized by the Erlang programming language (https://medium.com/@vamsimokari/erla...hy-53486d2a6da), which is used in telephone exchanges (Erlang is a big reason why phone service works 24x7).
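
                      To spell that bug out, here is a tiny hypothetical C sketch (the names and the malloc stand-in are mine, not sinepgib's actual driver code) of allocating one page too few and then writing past the end:

                      #include <stdint.h>
                      #include <stdlib.h>
                      #include <string.h>

                      #define PAGE_SIZE 4096u

                      static void copy_in(const uint8_t *src, size_t bytes)
                      {
                          size_t pages = bytes / PAGE_SIZE;     /* BUG: truncates when bytes is not
                                                                   page aligned; should round up:
                                                                   (bytes + PAGE_SIZE - 1) / PAGE_SIZE */
                          uint8_t *dst = malloc(pages * PAGE_SIZE);
                          if (!dst)
                              return;

                          memcpy(dst, src, bytes);              /* writes past the allocation for the
                                                                   last partial page                  */
                          free(dst);
                      }

                      In a monolithic kernel that stray write lands in whatever happens to sit next to the allocation, and nothing complains until the victim is touched later; with the driver isolated in its own address space the damage is confined to the driver, or it faults immediately.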

                      The same design is also what allows microkernels to restart drivers if they happen to crash (i.e. segfault), at which point they can be gracefully restarted. Although Linux can sometimes do this, it's not a given as part of the design and hence it's not universal (I still have fairly recent cases where Linux just crashes, usually due to graphics drivers).
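
                      As a rough userspace analogy (entirely hypothetical; ./net-driver is a made-up binary), the restart policy is just a supervisor loop around an isolated process:

                      #include <stdio.h>
                      #include <sys/types.h>
                      #include <sys/wait.h>
                      #include <unistd.h>

                      int main(void)
                      {
                          const char *driver = "./net-driver";      /* assumed user-space driver binary */

                          for (;;) {
                              pid_t pid = fork();
                              if (pid == 0) {
                                  execl(driver, driver, (char *)NULL);
                                  _exit(127);                       /* exec failed */
                              }
                              if (pid < 0)
                                  return 1;

                              int status = 0;
                              waitpid(pid, &status, 0);
                              if (WIFSIGNALED(status))
                                  fprintf(stderr, "driver died with signal %d, restarting\n",
                                          WTERMSIG(status));
                              sleep(1);                             /* crude back-off before respawning */
                          }
                      }

                      A microkernel can apply exactly this policy to real drivers because they already run as separate, isolated processes; on Linux only things that live in userspace get that for free.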

                      If you actually care about rock-solid security and stability, microkernels are what has been used, for obvious reasons. There are other techniques as well (e.g. formal verification), which is why seL4 (a microkernel with formal proofs) is alien-level tech. This level of security/reliability is overkill for most consumer and even business segments, but to claim microkernels are pointless or a gimmick is just stupid.
                      Last edited by mdedetrich; 09 November 2022, 06:16 AM.

