Proposed: Allow Building The Linux Kernel With x86-64 Microarchitecture Feature Levels
Originally posted by NobodyXu:
> Why do you make the claim that large cloud providers cannot benefit from this?
> They compile their own kernel and software for different micro-architectures anyway; with millions of machines on the same micro-arch, that alone will pay off the effort of compiling for it.
> Running openwrt/lede on x86 isn’t that niche.
erm, no: that's an even SMALLER audience than the case I gave. You're not talking about running it on an actual router, but instead on a "retired" PC. Just how many people do you imagine are actually doing that?!
Still, it looks like you've come around since posting that, and now understand that this is Much Ado About Nothing, so that's good.
Originally posted by discordian:
> That's an unrealistic claim, and since this is a change, you should prove that there is an advantage over the already existing choices.
> The kernel is restricted in what it can do, and very little of it benefits from the arch levels over, say, the Core2 target.
> The potential benefits are in a handful of modules, and you already have optimized variants (AVX/SSE3) there.

Then they have no problem with adding a kernel patch and maybe reporting back their findings?

> You are aware that kernel options should be covered by tests?

One of the replies on LKML says that he/she didn't find any noticeable performance improvement with this patch, so maybe you are right here.
I would wait for the v2 patch and for comments on performance there.
Originally posted by NobodyXu:
> I don’t get it.
> Why do you make the claim that large cloud providers cannot benefit from this?

The kernel is restricted in what it can do, and very little of it benefits from the arch levels over, say, the Core2 target.
The potential benefits are in a handful of modules, and you already have optimized variants (AVX/SSE3) there.

Originally posted by NobodyXu:
> They compile their own kernel and software for different micro-architectures anyway; with millions of machines on the same micro-arch, that alone will pay off the effort of compiling for it.
> And it’s not that difficult for them at all, since they probably already have systems in place for running specific images on different micro-archs.

You are aware that kernel options should be covered by tests?
Originally posted by arQon:
> And as YOU say:
> > Large cloud providers almost always compile their own kernel to squeeze out the last bit of performance.
> So that's another group of users (in the more abstract sense) that doesn't benefit. If you're keeping score: that's 0 for 2, on what are by far the two largest groups.

Why do you make the claim that large cloud providers cannot benefit from this?
They compile their own kernel and software for different micro-architectures anyway; with millions of machines on the same micro-arch, that alone will pay off the effort of compiling for it.
And it’s not that difficult for them at all, since they probably already have systems in place for running specific images on different micro-archs.
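For concreteness, here is a build-config sketch (my own illustration, not anything from the patch): a provider can already pin a kernel build to a feature level by passing the flag through KCFLAGS, which kbuild appends to the kernel's own compiler flags. -march=x86-64-v2 needs GCC 11+ or Clang 12+, and whether a given tree is happy with it is something you'd have to test.

```shell
# Hypothetical out-of-tree approach: build a kernel for the x86-64-v2
# feature level without any new Kconfig knob. KCFLAGS is appended by
# kbuild to the kernel's own CFLAGS.
make -j"$(nproc)" KCFLAGS="-march=x86-64-v2" bzImage modules
```

The proposed patch would essentially formalize this as a Kconfig choice instead of an ad-hoc flag.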
Originally posted by arQon:
> > These low-level compile-time options are also used by distributions such as OpenWrt to make their kernel ultra-lightweight while stable, since it is meant to run on routers (or x86-64 machines that are cheap and can only be used as routers).
> lolwut?!
> I mean, okay, even if that's true, you're already down to a niche within a niche - and this is your BEST candidate?
> I'm moderately surprised to learn that there are *any* x86 routers out there, given that this is an area absolutely dominated by ARM, but I guess someone like ASUS might have made one once? Let's be generous and say that a whole 1% of routers have x86 CPUs in them. Of that 1%, what fraction is built on, say... well, again, let's be even more ridiculously generous: Haswell-class CPUs instead of Atoms? How about "none"?
Running openwrt/lede on x86 isn’t that niche.
Originally posted by NobodyXu:
> I don’t think it is a waste of time.
At the user level, his comment is unquestionably correct: not only is there realistically no benefit in nearly all cases to start with, there's implicitly even less benefit in anything that spends 99% of its runtime just waiting on user interaction, which is by itself the majority of programs.
And as YOU say:
> Large cloud providers almost always compile their own kernel to squeeze out the last bit of performance.
So that's another group of users (in the more abstract sense) that doesn't benefit. If you're keeping score: that's 0 for 2, on what are by far the two largest groups.
> These low-level compile-time options are also used by distributions such as OpenWrt to make their kernel ultra-lightweight while stable, since it is meant to run on routers (or x86-64 machines that are cheap and can only be used as routers).
lolwut?!
I mean, okay, even if that's true, you're already down to a niche within a niche - and this is your BEST candidate?
I'm moderately surprised to learn that there are *any* x86 routers out there, given that this is an area absolutely dominated by ARM, but I guess someone like ASUS might have made one once? Let's be generous and say that a whole 1% of routers have x86 CPUs in them. Of that 1%, what fraction is built on, say... well, again, let's be even more ridiculously generous: Haswell-class CPUs instead of Atoms? How about "none"?
So you're advocating for additional work, for literally no benefit at all to 100.000% of users... other than the fact that you think "it's cool". (Or perhaps because "this CPU was really expensive, so I deserve to have everything built specifically for me" - which is totally understandable).
But that's it. That's 100% of the actual benefit of this. "It's cool", and apparently it helps a few people with acute inferiority complexes feel a bit more validated for once (which is fine, Gentoo users: we know you need it. :P)
Rebuilding ALL the programs you use with march=native? THAT has value - sometimes, maybe, given a specific-enough set of circumstances. But this is just for the kernel, where the value of this is literally zero. This is *absolutely* a waste of time.
Arguably, it's such an insignificant AMOUNT of time that it doesn't really matter, and it's pretty much harmless - but "waste" is exactly what it is.
Originally posted by sinepgib:
> What happens when disabled? A branch is inserted?
Originally posted by Developer12:
> This seems like a very large waste of time. Last time this came up there was the observation that
> A) there's a very long tail of people on all the different feature levels, and
> B) you don't actually get much benefit in a lot of programs unless you're mandating the very latest improvements,
> meaning that most feature levels (especially lower ones) are useless distinctions. Just either compile for the very latest, or for everybody.
> But with this? What could possibly benefit a significant amount of kernel code here? Advanced math extensions? Really? The things a kernel does haven't changed much in probably 25 years.
> And an entirely different problem: do you really think distros will want to ship multiple kernels? They do everything in their power to have one large generic image for every architecture. The prospect of shipping a special 16K page size arm kernel for M1 Macs is causing enough drama already.
It is true that most distributions don’t use these options and the gains might not be justified for them, but they are not the only users of the kernel.
Large cloud providers almost always compile their own kernel to squeeze out the last bit of performance.
These low-level compile-time options are also used by distributions such as OpenWrt to make their kernel ultra-lightweight while stable, since it is meant to run on routers (or x86-64 machines that are cheap and can only be used as routers).
And the kernel nowadays already has encryption and compression/decompression algorithms built right in.
The former are used in WireGuard, IPsec, dm-crypt disk encryption, etc., while the latter are used for compressing unused memory in zswap and zram.
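You can see that point on a live system: /proc/crypto reports, per algorithm, which driver the kernel actually registered, and the hand-tuned implementations carry ISA suffixes like -ssse3, -avx2, or -aesni. A small sketch (Linux-only, prints a note elsewhere; the function name is mine, nothing standard):

```shell
# Print crypto algorithms whose registered kernel driver is an ISA-tuned
# variant (driver names such as sha256-ssse3, aes-aesni, poly1305-avx2).
list_tuned_crypto() {
    if [ -r /proc/crypto ]; then
        awk '/^name/   { name = $3 }
             /^driver/ && $3 ~ /(sse|avx|aesni)/ { print name " -> " $3 }' /proc/crypto | sort -u
    else
        echo "no /proc/crypto here (not Linux?)"
    fi
}
list_tuned_crypto
```

If the list is non-empty on your machine, the kernel is already using vectorized variants of exactly the modules this patch is arguing about.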
This seems like a very large waste of time. Last time this came up there was the observation that
A) there's a very long tail of people on all the different feature levels, and
B) you don't actually get much benefit in a lot of programs unless you're mandating the very latest improvements,
meaning that most feature levels (especially lower ones) are useless distinctions. Just either compile for the very latest, or for everybody.
But with this? What could possibly benefit a significant amount of kernel code here? Advanced math extensions? Really? The things a kernel does haven't changed much in probably 25 years.
And an entirely different problem: do you really think distros will want to ship multiple kernels? They do everything in their power to have one large generic image for every architecture. The prospect of shipping a special 16K page size arm kernel for M1 Macs is causing enough drama already.
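For anyone wondering what the levels actually are: each x86-64-vN is just a fixed bundle of CPUID flags from the psABI (v2 is roughly Nehalem-class, v3 roughly Haswell-class, v4 adds AVX-512). A rough sketch of mapping a /proc/cpuinfo flag list to its level — flag spellings as the kernel reports them, helper names made up here, and the groupings my reading of the psABI, so treat it as illustrative:

```shell
# Map a space-separated CPU flag list (as in /proc/cpuinfo) to the highest
# x86-64 feature level it satisfies; "v0" means below the x86-64 baseline.
x86_64_level() {
    flags=" $1 "
    has() { case "$flags" in *" $1 "*) return 0 ;; *) return 1 ;; esac; }
    all() { for f in "$@"; do has "$f" || return 1; done; }
    level=0
    all lm cmov cx8 fpu fxsr mmx syscall sse2                           && level=1
    [ "$level" = 1 ] && all cx16 lahf_lm popcnt sse4_1 sse4_2 ssse3     && level=2
    [ "$level" = 2 ] && all avx avx2 bmi1 bmi2 f16c fma abm movbe xsave && level=3
    [ "$level" = 3 ] && all avx512f avx512bw avx512cd avx512dq avx512vl && level=4
    echo "x86-64-v$level"
}

# On a Linux machine, check the running CPU:
if [ -r /proc/cpuinfo ]; then
    x86_64_level "$(awk -F: '/^flags/ {print $2; exit}' /proc/cpuinfo)"
fi
```

The "very long tail" point above falls out of this directly: any machine missing even one flag in a bundle drops a whole level.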
And us gentoo folk are just wondering what all the fuss is about