
Linux 5.15 Hit By Some Early Performance Regressions But Quickly Reverted


  • mdedetrich
    replied
    Originally posted by caligula View Post

    Cloud providers A, B, and C use the Linux kernel. A knows that a regression X occurs in kernel 5.15. They patch it internally. X goes undetected for companies B and C. Their stack runs slower. A wins market share.
    And cloud companies can't build their own kernel with the proposed patches? Or can't the Linux kernel project make a separate release from a different branch that doesn't taint master?



  • stormcrow
    replied
    Originally posted by pmorph View Post
    Maybe it can work for products built from the ground up on a continuous delivery model. But a lot (a majority?) of businesses still rely on classic release cycles, and within a cycle they try to keep the "lower layer" things as static as possible, to avoid surprises and risks to the project schedule. I think the main issue is that there are always more parts requiring testing than most companies have the resources to cover (the Linux kernel would be just a small piece of that lot).
    Yeah, that's the idea I was coming out against. You can't build a monolithic product on a set schedule on top of software that is, by necessity, changing daily. Software development paradigms established in the 1980s and '90s can't keep up with the security problems of the 2020s and beyond. Software releases need to be pushed out as soon as patches are available, not on fixed schedules, despite the headaches it'll give IT departments.

    I rarely reference ESR, but the cathedral cannot move to follow the bazaar every time the bazaar decides to relocate to a better location, even if the cathedral's foundation is cracking. One can't build a traditional cathedral on top of a bazaar. It's like trying to build Notre Dame on Saharan sand.



  • pmorph
    replied
    Originally posted by stormcrow View Post

    Yeah, but that can lead to Serafean's noted consequence. That can have rather drastic consequences if you're not agile enough to keep on top of the (very) fast-changing landscape we're now in. Exploits are being probed and utilized within minutes of public disclosure. Distros tend to be on the inside track of errata disclosures because of exposure - people know who to contact. Who's going to tell company Y that their internally customized database cluster is now vulnerable to a severity-9.8 RCE CVE chain? ...oops... it just got pwned through a pivoted BEC five minutes ago. Now a TB of customer data is for sale on the dark web, the company has lost access to half its servers, and lawyers are lining up to sue for negligence.
    Maybe it can work for products built from the ground up on a continuous delivery model. But a lot (a majority?) of businesses still rely on classic release cycles, and within a cycle they try to keep the "lower layer" things as static as possible, to avoid surprises and risks to the project schedule. I think the main issue is that there are always more parts requiring testing than most companies have the resources to cover (the Linux kernel would be just a small piece of that lot).



  • M@GOid
    replied
    Originally posted by Vistaus View Post

    Oracle will never suffer as their kernel is unbreakable™
    I would watch a YT video of the meeting where they landed on that name. How they came to it, how bad the other choices were... It must have been something straight out of a Dilbert cartoon.



  • Vistaus
    replied
    Originally posted by M@GOid View Post

    That is a cloud-provider type of situation. I'm talking more about a company that sells products that run on Linux, like Intel or IBM for hardware, or Oracle for software. If their products start to underperform compared to the competition because of a kernel bug, their salespeople will suffer.

    Even in the case of Oracle, they will have to upgrade the kernel of their distro eventually.
    Oracle will never suffer as their kernel is unbreakable™



  • stormcrow
    replied
    Originally posted by pmorph View Post
    I would think most big companies don't work that much with the latest stable kernels? They probably use older kernels (with patches), and run performance tests over the whole product.
    Yeah, but that can lead to Serafean's noted consequence. That can have rather drastic consequences if you're not agile enough to keep on top of the (very) fast-changing landscape we're now in. Exploits are being probed and utilized within minutes of public disclosure. Distros tend to be on the inside track of errata disclosures because of exposure - people know who to contact. Who's going to tell company Y that their internally customized database cluster is now vulnerable to a severity-9.8 RCE CVE chain? ...oops... it just got pwned through a pivoted BEC five minutes ago. Now a TB of customer data is for sale on the dark web, the company has lost access to half its servers, and lawyers are lining up to sue for negligence.



  • M@GOid
    replied
    Originally posted by pmorph View Post
    I would think most big companies don't work that much with the latest stable kernels? They probably use older kernels (with patches), and run performance tests over the whole product.
    That is a cloud-provider type of situation. I'm talking more about a company that sells products that run on Linux, like Intel or IBM for hardware, or Oracle for software. If their products start to underperform compared to the competition because of a kernel bug, their salespeople will suffer.

    Even in the case of Oracle, they will have to upgrade the kernel of their distro eventually.



  • pmorph
    replied
    Originally posted by M@GOid View Post
    Considering the frequency of performance regressions that still slip through "stable" releases, I wonder how much the big companies invest in Linux QA. I mean, if your earnings depend on server performance, it is in your best interest that a patch from another company doesn't mess with your business.
    I would think most big companies don't work that much with the latest stable kernels? They probably use older kernels (with patches), and run performance tests over the whole product.



  • Serafean
    replied
    Originally posted by caligula View Post

    They might have their internal patch sets to maintain the expected level of performance. It's actually beneficial for them if the other companies use the standard mainline kernels without knowing anything about these regressions.
    I know a company that does keep an internal patchset. It also became frozen on a specific version of Linux due to said patchset. They have been discussing trying to catch up to mainline for a few years now.
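    For what it's worth, the usual way to carry such a patchset forward is to keep it as a rebased branch rather than a frozen fork. A minimal sketch with git (the tree path, branch names, and version tags here are illustrative, not from any company mentioned in the thread):

    ```shell
    # Hypothetical sketch: replaying an internal patch series onto a newer
    # kernel release. Assumes the internal work lives on a branch
    # "internal/5.10" that was forked from the v5.10 tag.
    set -eu

    cd linux                      # local clone of the kernel tree (assumed)
    git fetch --tags origin       # pick up the new release tags

    # Replay only the commits unique to the internal branch (v5.10..internal/5.10)
    # onto the new release tag, leaving mainline history untouched:
    git rebase --onto v5.15 v5.10 internal/5.10
    ```

    Conflicts are resolved commit by commit, and they accumulate with the size of the version gap - which is exactly why a multi-year jump to mainline is so much more painful than rebasing every release.
    
    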



  • caligula
    replied
    Originally posted by mdedetrich View Post

    Could you explain why?
    Cloud providers A, B, and C use the Linux kernel. A knows that a regression X occurs in kernel 5.15. They patch it internally. X goes undetected for companies B and C. Their stack runs slower. A wins market share.

