Linux 5.0 To Linux 5.9 Kernel Benchmarking Was A Bumpy Ride With New Regressions
-
Originally posted by birdie:
When, out of all the huge multibillion-dollar companies (IBM, Intel, Microsoft et al.) and individuals working on the kernel, Phoronix (which is run by a single individual) finds mission-critical regressions which cut performance by half, it speaks volumes about how Linux is being developed. People are arguing about whether it's ready for the desktop; meanwhile the core part of Linux is being sabotaged all the fucking time. It's quite sad really. In a perfect world we would see performance increases with each release; instead the fastest kernel out of the ten which have been tested is 5.2, which was released over a year ago. That's appalling.
- Likes 2
-
Originally posted by leipero:
I feel a little bit of the sarcasm here. But seriously, those compile times are insane; without localmodconfig you would need as many threads as possible.
Michael Larabel
https://www.michaellarabel.com/
- Likes 8
-
Originally posted by birdie:
When out of all huge multibillion dollar companies (IBM, Intel, Microsoft et al) and individuals working on the kernel, Phoronix (which is run by a single individual) finds mission-critical regressions which cut performance by half....
Originally posted by birdie:
...In a perfect world we would see performance increases with each release...
IOW usability trumps performance (except for HPC where usability == performance)
- Likes 13
-
Originally posted by Michael:
Not really sarcastic... When bisecting multiple kernel regressions, it would be too time consuming not to use the highest-end core-count systems when they are available. This testing already took about a week as is, with all the kernel benchmarks and then the bisecting process on top of that, and it is still rather time consuming and ultimately probably not profitable in the end in terms of the traffic / new subscribers / tips to justify the effort.
As for the other issue, it is what it is, hence why the above suggestion might make sense.
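For reference, bisecting a regression like this is usually automated with git bisect run, where a check script exits 0 for a good revision, 1 for a bad one, and 125 to skip. Below is a minimal sketch of such a check script; it assumes the benchmark is wrapped in a hypothetical run-benchmark.sh that prints throughput in MB/s, that the baseline number comes from the known-good kernel, and that any kernel build/install/boot steps are handled before it runs.
[CODE]
#!/usr/bin/env python3
# Hypothetical check script for `git bisect run` on a performance regression.
# run-benchmark.sh is a placeholder wrapper around the real workload.
import subprocess
import sys

BASELINE_MBPS = 2000.0   # hypothetical throughput of the known-good revision
TOLERANCE = 0.90         # treat more than a 10% slowdown as "bad"

def measure() -> float:
    """Run the benchmark once and return throughput in MB/s."""
    result = subprocess.run(
        ["./run-benchmark.sh"],      # hypothetical benchmark wrapper
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        sys.exit(125)                # 125 tells git bisect to skip this revision
    return float(result.stdout.strip())

if __name__ == "__main__":
    mbps = measure()
    # Exit 0 => good revision, 1 => bad revision, per the git bisect run convention.
    sys.exit(0 if mbps >= BASELINE_MBPS * TOLERANCE else 1)
[/CODE]
It would then be driven by something like git bisect start, git bisect bad v5.9, git bisect good v5.8, git bisect run ./check_perf.py; the threshold and tag names here are only illustrative.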
-
Something doesn't make sense to me.
If you set buffered=1 & direct=0 in fio, I nearly get that number, so buffered I/O has a limit.
If you set buffered=0 & direct=1 in fio, I get 3.56 million IOPS with NVMe M.2 SSDs.
How come with your expensive NVMe drive you get nearly the same ballpark numbers whether direct or not?
Does that drive support multiple namespaces? Did you use namespace=1?
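For anyone wanting to reproduce that comparison, here is a minimal sketch that runs the same random-read job twice, once buffered (direct=0) and once with O_DIRECT (direct=1), and reads the IOPS from fio's JSON output. The device path, block size, queue depth, and runtime are illustrative assumptions, not the settings used in the article.
[CODE]
#!/usr/bin/env python3
# Sketch: compare buffered vs. direct (O_DIRECT) random-read IOPS with fio.
import json
import subprocess

TARGET = "/dev/nvme0n1"   # hypothetical test device; read-only workload

def run_fio(direct: int) -> float:
    cmd = [
        "fio", "--name=randread",
        f"--filename={TARGET}",
        f"--direct={direct}",            # 1 = O_DIRECT, 0 = buffered page-cache I/O
        "--rw=randread", "--bs=4k",
        "--ioengine=libaio", "--iodepth=32", "--numjobs=4",
        "--group_reporting",
        "--runtime=30", "--time_based",
        "--output-format=json",
    ]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    data = json.loads(out)
    # With group_reporting, fio aggregates all jobs into a single entry.
    return data["jobs"][0]["read"]["iops"]

if __name__ == "__main__":
    for direct in (0, 1):
        label = "direct" if direct else "buffered"
        print(f"{label:>8}: {run_fio(direct):,.0f} IOPS")
[/CODE]
Comparing the two runs side by side should show whether buffered I/O is the bottleneck on a given setup.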
-
Originally posted by birdie:
When out of all huge multibillion dollar companies (IBM, Intel, Microsoft et al) and individuals working on the kernel, Phoronix (which is run by a single individual) finds mission-critical regressions which cut performance by half...
- Likes 3
-
Originally posted by Charlie68:
Unfortunately, regressions are part of development. They shouldn't happen, but sometimes they do, and I don't think it's Microsoft's fault or anyone else's; it happens when you're dealing with code as complex as a kernel can be.
Also, I do understand that regressions happen; it's quite usual with software development. What escapes me is when we have regressions that linger on for years.
- Likes 2