The Linux Kernel Has Been Forcing Different Behavior For Processes Starting With "X"
Maybe I'm just not getting it, but does that mean that a game like 'Xonotic' is also affected?
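For context: the hack keys off the process name (the kernel's per-task comm string), so in principle any process whose name begins with the matched letter, whether an X11 server or a game binary like 'Xonotic', would hit it. A purely illustrative sketch of the shape of such a check (not the verbatim kernel code the article discusses):

```c
/* Illustrative only: the general shape of a comm-based special case,
 * not the exact code from the article. current->comm is the kernel's
 * 16-byte name for the running task. */
if (current->comm[0] == 'X') {
	/* apply the legacy workaround intended for the X11 server */
}
```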
Originally posted by mdedetrich View PostThinking of it another way, it's the "let it crash" mentality, which was also popularized by the Erlang programming language (https://medium.com/@vamsimokari/erla...hy-53486d2a6da), which is used in telephone exchanges (Erlang is the reason why the phone service works 24/7).
Originally posted by mdedetrich View PostThe same design is also what allows microkernels to restart drivers if they happen to crash (i.e. segfault), at which point they can be gracefully restarted. Although Linux can do this, it's not a given as part of the design and hence it's not universal (I still see cases, even somewhat recently, where Linux just crashes, usually due to graphics drivers).
Originally posted by mdedetrich View PostIf you actually care about rock-solid security and stability, microkernels are what has been used, for obvious reasons. There are other techniques as well (i.e. formal verification), which is why seL4 (a microkernel with formal proofs) is alien-level tech. This level of security/reliability is overkill for most consumer and even business segments, but to claim microkernels are pointless or a gimmick is just stupid.
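The restart idea from the quote above, reduced to a minimal user-space sketch in C: a supervisor forks a worker (standing in for a driver server) and restarts it whenever it dies abnormally. This is only an analogy for what a microkernel does with its driver processes, under the assumption that the worker's crash cannot take the supervisor down with it:

```c
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

/* stand-in for a driver server; may crash at any time */
static void driver_main(void)
{
	/* ... real work here; a segfault ends only this process ... */
	pause();
	_exit(0);
}

int main(void)
{
	for (;;) {
		pid_t pid = fork();
		if (pid == 0) {
			driver_main();
			_exit(0);
		}
		int status;
		waitpid(pid, &status, 0);
		if (WIFSIGNALED(status))
			fprintf(stderr, "driver crashed (signal %d), restarting\n",
			        WTERMSIG(status));
		else
			break;          /* clean exit: stop supervising */
		sleep(1);               /* simple backoff before restart */
	}
	return 0;
}
```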
Originally posted by xfcemint View PostFirst of all, the premise of existence of a "performance tradeoff" between monolithic kernels and microkernels is questionable.
Let's assume that there exists some non-negligible performance penalty for using a microkernel. This performance penalty would probably arise due to the increased number of context switches. Then, the mitigations for side-channel attacks would make microkernels slower.
Regarding mitigations, my doubt is whether they are necessary compared to the status quo. I'm not sure whether those context switches already imply some flushing of caches and such, in which case there would be no need for extra mitigations. Again, just a doubt I have.
Originally posted by xfcemint View PostIf CPUs get proper hardware protection against side-channel attacks, the microkernels would automatically get faster.
Originally posted by xfcemint View PostIf CPUs get proper support for microkernels, then the "performance penalty for a microkernel" would become an even more meaningless term.
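To make the "context switch penalty" concrete, here is a rough user-space probe (a sketch, not a rigorous benchmark): two processes ping-pong one byte over a pair of pipes, forcing roughly two switches per round trip. Side-channel mitigations and CPU affinity visibly move the resulting number:

```c
/* Rough context-switch cost probe. Numbers are only indicative. */
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
	int ab[2], ba[2];
	char c = 'x';
	const int rounds = 100000;

	if (pipe(ab) || pipe(ba)) { perror("pipe"); return 1; }

	if (fork() == 0) {              /* child: echo everything back */
		for (int i = 0; i < rounds; i++) {
			read(ab[0], &c, 1);
			write(ba[1], &c, 1);
		}
		_exit(0);
	}

	struct timespec t0, t1;
	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (int i = 0; i < rounds; i++) {
		write(ab[1], &c, 1);
		read(ba[0], &c, 1);
	}
	clock_gettime(CLOCK_MONOTONIC, &t1);

	double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
	/* each round trip is roughly two context switches */
	printf("~%.0f ns per switch\n", ns / rounds / 2);
	return 0;
}
```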
Originally posted by archkde View PostMonolithic kernels that look exactly like the average code I would write (and never be able to properly read again) do not fix this.
But back on topic: according to the article, the problem was fixed three years ago in X11. And it was an X11 problem, which the kernel devs tried to accommodate in a rather clumsy way.
I think Donenfeld's patch should be accepted, but perhaps not immediately. It seems not super-urgent anyway if the hack did not cause problems for the last three years. So my approach would be to:
- announce that old X11 versions up to a certain release are deprecated;
- give users and distributions a few months to upgrade their X11 version;
- then apply the patch and accept the breakage that may happen in some poorly maintained environments.
Originally posted by xfcemint View PostMicrokernels/seL4
Nah. A microkernel would be a huge step forward for the consumer desktop. The actual issue is that microkernels are harder to develop, and much harder to design correctly, compounded by a historical sequence of usable monolithic kernels being ready first.
Then later, you get rationalizations (more precisely, false excuses) about how monolithic kernels are better or faster or whatever. Because no group of people can openly admit that they have used an inferior design for developing an OS.
seL4 would be overkill for the consumer desktop, but some "less stringent" microkernel would be awesome.
I don't think the part about people not being able to admit they used an inferior design is necessarily true. Cutting corners is something that is done, and admitted, every day in software development, and that often implies some misdesigns here and there. IMO it's generally the fanbois (who are typically just users) that are in denial. Just as with X11 vs. Wayland, no camp will admit the flaws of their display protocol of choice, while devs are often aware of the things they are trading off.
Originally posted by xfcemint View PostMicrokernels/seL4
Nah. A microkernel would be a huge step forward for the consumer desktop. The actual issue is that microkernels are harder to develop, and much harder to design correctly, compounded by a historical sequence of usable monolithic kernels being ready first.
Then later, you get rationalizations (more precisely, false excuses) about how monolithic kernels are better or faster or whatever. Because no group of people can openly admit that they have used an inferior design for developing an OS.
seL4 would be overkill for the consumer desktop, but some "less stringent" microkernel would be awesome.
And again, Linux and Windows are hybrids; all the problem drivers are in usermode. It shouldn't matter whether the critical drivers are in kernelmode or usermode; what matters is that they're actually designed well and tested properly.
Originally posted by Ironmask View Post
And again, Linux and Windows are hybrids; all the problem drivers are in usermode. It shouldn't matter whether the critical drivers are in kernelmode or usermode; what matters is that they're actually designed well and tested properly.
Originally posted by Ironmask View PostAnd again, Linux and Windows are hybrids, all the problem drivers are in usermode.
Originally posted by sinepgib View Post
Hmmmm, besides shader compilers and a bit more of the GPU drivers (granted, the riskier and bigger part), what's in userspace in terms of drivers in Linux? I really don't see why you consider Linux to be a hybrid; I don't remember if you answered that before.
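Part of an answer: Linux does have real user-space driver mechanisms, such as UIO, VFIO, FUSE, and libusb, even if the bulk of drivers live in the kernel. A minimal UIO sketch (assuming some device is already bound to /dev/uio0):

```c
/* Sketch of a Linux user-space driver via UIO (drivers/uio): register
 * access through mmap(), interrupt delivery through a blocking read(). */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	int fd = open("/dev/uio0", O_RDWR);
	if (fd < 0) { perror("open /dev/uio0"); return 1; }

	/* map the device's first memory region (its registers) */
	volatile uint32_t *regs = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
	                               MAP_SHARED, fd, 0);
	if (regs == MAP_FAILED) { perror("mmap"); return 1; }

	uint32_t irq_count;
	/* a blocking read() returns the interrupt count: UIO's way of
	 * delivering interrupts to user space */
	if (read(fd, &irq_count, sizeof(irq_count)) == sizeof(irq_count))
		printf("interrupt #%u, reg0=0x%x\n", irq_count, regs[0]);

	munmap((void *)regs, 4096);
	close(fd);
	return 0;
}
```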
Originally posted by xfcemint View PostStandard brainwashing.
Originally posted by xfcemint View PostI'll try to be as short as possible, otherwise I'll be writing a book here.
memory copies -> shared memory
context switches -> clustered OS calls (see the io_uring sketch below for an existing Linux analogue)
context switches -> an automatic or semi-automatic option to integrate services into the kernel address space (that's not the same as a hybrid kernel, not even close to it. Why? The difference is that the user has a CHOICE.)
microkernel support = tagged/colored caches, where only a small part of the cache is flushed on a context switch (or nothing is flushed when repeatedly switching between a small number of contexts).
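On the "clustered OS calls" point: Linux's io_uring already amortizes context switches exactly this way, with many operations queued in shared memory and submitted via a single syscall. A minimal sketch using liburing (link with -luring; the output filename is just for the demo):

```c
#include <fcntl.h>
#include <liburing.h>
#include <string.h>

int main(void)
{
	struct io_uring ring;
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;
	char a[] = "first\n", b[] = "second\n";

	int fd = open("uring_demo.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
	if (fd < 0 || io_uring_queue_init(8, &ring, 0) < 0)
		return 1;

	/* queue two writes at explicit offsets in shared memory */
	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_write(sqe, fd, a, strlen(a), 0);
	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_write(sqe, fd, b, strlen(b), strlen(a));

	io_uring_submit(&ring);         /* one syscall submits both ops */

	for (int i = 0; i < 2; i++) {
		io_uring_wait_cqe(&ring, &cqe);
		io_uring_cqe_seen(&ring, cqe);
	}
	io_uring_queue_exit(&ring);
	close(fd);
	return 0;
}
```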
Shared memory without some degree of validation by the kernel could be problematic: one service may cause another one to misbehave due to races and whatnot (one of the things you care about if you're designing something for reliability, such as a microkernel). But now that I think of it, you could just notify the kernel when you send a message, so that you can't write to the shared buffer again (by means of the kernel modifying access permissions) until the reader sends an ack signal.
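A minimal sketch of that send/ack discipline, emulated in user space with a shared buffer and two semaphores standing in for the kernel flipping page permissions (in a real microkernel the kernel itself would enforce the window, e.g. by remapping the page read-only after the send):

```c
#include <semaphore.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

struct channel {
	sem_t ready;    /* posted by sender: message is in buf */
	sem_t ack;      /* posted by receiver: buf may be reused */
	char buf[64];
};

int main(void)
{
	struct channel *ch = mmap(NULL, sizeof(*ch), PROT_READ | PROT_WRITE,
	                          MAP_SHARED | MAP_ANONYMOUS, -1, 0);
	if (ch == MAP_FAILED) return 1;
	sem_init(&ch->ready, 1, 0);
	sem_init(&ch->ack, 1, 1);       /* buffer starts writable */

	if (fork() == 0) {              /* receiver ("service B") */
		sem_wait(&ch->ready);
		printf("got: %s\n", ch->buf);
		sem_post(&ch->ack);     /* sender may write again */
		_exit(0);
	}

	/* sender ("service A"): may only touch buf between ack and ready */
	sem_wait(&ch->ack);
	strcpy(ch->buf, "hello via shared memory");
	sem_post(&ch->ready);

	wait(NULL);
	return 0;
}
```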
Originally posted by xfcemint View PostWhat are ALL the advantages of modularization and compartmentalization? No one can list them. You mentioned a few, but that's far from an exhaustive list.
Security, stability, easier maintenance, easier upgradeability, and tons and tons of other stuff are the advantages.
You might be right that maybe in the past there was a need to cut corners. Is it finally time for that to stop? Even if no one wants to put up the hard work, can the fanboys of the status quo at least admit that the critique is valid? Who is in denial?
The OS world cannot go forward if we are stuck in the past forever, frozen in place by a heap of excuses. The first step forward is to admit it.
For servers, embedded, high availability, etc., a microkernel is worth the investment. For consumers I'm not so sure.