
VMware Is Exploring Reducing Meltdown/PTI Overhead With Deferred Flushes


  • #1

    Phoronix: VMware Is Exploring Reducing Meltdown/PTI Overhead With Deferred Flushes

    VMware engineer Nadav Amit, who previously pursued "Optpolines" and other possible performance optimizations in light of the Spectre / Meltdown vulnerabilities, is now proposing patches that defer PTI flushes to help address the performance overhead caused by Meltdown...

    http://www.phoronix.com/scan.php?pag...er-PTI-Flushes
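The idea behind the patches can be illustrated with a toy model (a sketch only; the real patch set operates on CPU page tables and TLBs inside the kernel, and the event names below are invented for illustration): instead of paying a flush on every page-table change, the kernel notes that a flush is owed and performs one batched flush at the last point where a stale translation could matter.

```python
# Toy model of eager vs. deferred TLB flushing (illustration only; the real
# kernel patches operate on CPU page tables, not Python objects).

class TLB:
    def __init__(self):
        self.flushes = 0        # how many (expensive) flush operations ran
        self.dirty = False      # whether a stale translation may be cached

    def flush(self):
        self.flushes += 1
        self.dirty = False

def eager(events):
    """Flush on every page-table change: the straightforward approach."""
    tlb = TLB()
    for ev in events:
        if ev == "pt_change":
            tlb.dirty = True
            tlb.flush()         # pay the cost immediately, every time
    return tlb.flushes

def deferred(events):
    """Mark the TLB dirty and flush only right before returning to user space."""
    tlb = TLB()
    for ev in events:
        if ev == "pt_change":
            tlb.dirty = True    # defer: just remember a flush is owed
        elif ev == "return_to_user" and tlb.dirty:
            tlb.flush()         # one flush covers a whole batch of changes
    return tlb.flushes

# Five page-table changes, then one return to user space:
events = ["pt_change"] * 5 + ["return_to_user"]
print(eager(events), deferred(events))  # eager pays 5 flushes, deferred pays 1
```

The saving grows with the number of changes batched between user returns, which is why deferring helps most on syscall-heavy workloads.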

  • #2
    Actually, IMNSHO, major virtualization and OS software vendors should just drop vulnerable Intel CPUs from official HCLs at once, ditch all the related mitigations, and warn the user on boot that they are using a vulnerable system at their own risk.

    This would serve two goals: 1) push Intel to either recall the CPUs or solve the issue in some software-independent way (probably impossible) - and if they don't move, users will simply move away from them; 2) de-complexify the kernels - currently, the fragile intrinsics of multiple OS and hypervisor kernels, like memory management and the very core syscall paths, are being over-complexified with all these PTI/L1TF/MDS/whatever mitigations, which carry only partial importance and consist of a whole mixture of approaches and code. This increases the probability of hitting bugs inside the kernels, adds performance overhead even on unaffected hardware, and makes the whole bloat harder to support. Also, all this bloat will have to be cut out eventually anyway, once it becomes obsolete legacy.

    The point is: if certain hardware is SO faulty that we need to change ALL the existing systems a whole bloody lot to fix it, it's the hardware that should be fixed, not the systems.

    [of course just an opinion, as wrong as mere personal opinion can be]
    Last edited by Alex/AT; 08-25-2019, 11:11 AM.



    • #3
      Originally posted by Alex/AT View Post
      Actually, imnsho major virtualization and OS software vendors should just drop vulnerable Intel CPUs from official HCLs at once, ditch all the mitigations related and warn the user on boot that he's using vulnerable system at his own risk. [...]
      Excellent point Alex.

      Cloud operators and corporate datacenter operators are currently weighing the same thing: whether to simply accelerate retirement of the older hardware or to keep patching their existing investment. Some are seeing a reduction of up to 20% of their compute capacity due to the mitigations, and some stopped patching because they were having to draw capacity from other parts of their plant to make up for the losses.

      For some, the expense of testing each mitigation patch was becoming too high due to the sheer number of application types they support.

      Looking at the patch and testing expense, some are simply accelerating their asset replacement cycles to push the defective hardware out of the plant sooner. Cloud operators who roll their own infrastructure will have the most flexibility, as they tend to be able to turn over assets much more quickly.

      Corporate datacenters, which are more focused on asset cycles, will take longer to remediate, as it simply takes them longer to budget for such large capital needs.
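As a back-of-envelope check on what a capacity loss like that means for fleet sizing (the 20% figure is taken from the post above; the calculation itself is elementary):

```python
# Back-of-envelope: if mitigations cost 20% of compute capacity, how much
# extra hardware is needed to serve the same load? (Figures are illustrative.)
overhead = 0.20
remaining = 1.0 - overhead            # each host now delivers 80% of its old throughput
extra_hosts = 1.0 / remaining - 1.0   # fraction of additional hosts required

print(f"{extra_hosts:.0%} more hosts")  # prints "25% more hosts"
```

The asymmetry is worth noting: a 20% throughput loss requires 25% more hardware to compensate, because the remaining per-host capacity is the denominator.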



      • #4
        Originally posted by Alex/AT View Post
        Actually, imnsho major virtualization and OS software vendors should just drop vulnerable Intel CPUs from official HCLs at once, ditch all the mitigations related and warn the user on boot that he's using vulnerable system at his own risk. [...]
        Take a look at device driver code (either kernel or userspace) -- it's chock full of workarounds for hardware errata. VM systems in any OS are always brutally complex affairs, with lots of edge cases. Would it be nice if all hardware (and firmware) were nice and simple and always worked as designed, with no corner cases, no possibilities for future changes exposing unanticipated behavior, and all that? Sure, but hardware designers aren't perfect in the present, much less at foreseeing the future, and it's a lot harder to fix hardware than software.

        This is not a clearly-defined error like the F00F bug, the FDIV bug, or Rowhammer. The processor is doing what it's supposed to; it's just that there can be side effects leading to subtle timing changes in other operations, which turn out to be exploitable to leak information - information that can then be pieced together into other data that might (or might not) be usable for generating further exploits or simply exfiltrating data. Even then, it's not a very efficient means of attack.

        Dropping all currently known vulnerable Intel CPUs would essentially mean dropping all significant Intel CPUs, with no guarantee that future side channel timing attacks will not be found, not to mention other possible classes of attacks we haven't even considered yet. It would mean that essentially every existing server, desktop, laptop, and possibly even tablet or phone, would immediately have to be replaced. With what? AMD? AMD chips -- not to mention other architectures -- aren't invulnerable to side channel attacks either, not to mention that we surely haven't reached the end of the road. Turn off multithreading altogether? That helps for some, but not all, and users of such chips have that option if they want it, but not everyone needs that.

        How would you implement a boot warning like that? A message in the boot log, that nobody ever reads? A gate at boot time that requires the user to confirm at every reboot? Aside from making unattended boot impossible for no good reason, it would become a huge annoyance for everybody, and would steal attention from much more serious exploits (and just imagine somebody selling a "mitigation" for that that was really a Trojan horse!).

        Commercial (and in some cases, private) cloud vendors, with multi-tenant hardware, do have a more serious problem, since they can't very easily restrict what third-party code runs on those instances, and there may be a lot more incentive to exfiltrate data that may take a long time to extract. But if security is that important, you don't want multi-tenant hosts in the first place, and if your security needs are that great, you probably don't want to be running on a public cloud at all.

        But the fact is that there will always be hardware errata, that need software workarounds simply because updating software (or even firmware) is just a lot easier and cheaper than respinning and replacing hardware.
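The "subtle timing changes" described above can be made concrete with a simulated cache (a deliberately simplified model, not a working exploit; all names and latencies below are invented): the attacker never reads the secret itself, only observes which memory access is fast afterwards.

```python
# Toy model of a cache-timing side channel (a simulation, not a real exploit):
# the "victim" touches one secret-dependent cache line; the "attacker" never
# reads the data, only measures which access is fast afterwards.

CACHE_HIT, CACHE_MISS = 1, 100   # made-up access latencies in "cycles"

def victim(secret, cache):
    cache.add(secret)            # secret-dependent memory access warms one line

def probe(line, cache):
    return CACHE_HIT if line in cache else CACHE_MISS

def attacker(cache, candidates):
    # The fast (cached) candidate reveals the secret via timing alone.
    timings = {c: probe(c, cache) for c in candidates}
    return min(timings, key=timings.get)

cache = set()
victim(secret=42, cache=cache)
print(attacker(cache, candidates=range(256)))  # recovers 42 without reading it
```

Real attacks must contend with noise, prefetchers, and cache eviction, which is why they need many measurements - and why they are slow, as the post says.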



        • #5
          Originally posted by rlkrlk View Post
          Take a look at device driver code (either kernel or userspace) -- it's chock full of workarounds for hardware errata.
          The scale is pretty bloody different there. 99% of driver workarounds are consolidated inside the driver code, while this time the 'mitigations' affect the most intrinsic parts of each and every system that intends to 'mitigate' the hardware flaw's effects.

          Originally posted by rlkrlk View Post
          This is not a clearly-defined error like the F00F bug or the FDIV bug or Rowhammer. The processor's doing what it's supposed to
          I never heard that an x86 CPU was supposed to actually load data from a page at a higher privilege level than the current execution privilege level into a register, so this is something new to me.

          Originally posted by rlkrlk View Post
          Dropping all currently known vulnerable Intel CPUs would essentially mean dropping all significant Intel CPUs, with no guarantee that future side channel timing attacks will not be found, not to mention other possible classes of attacks we haven't even considered yet. It would mean that essentially every existing server, desktop, laptop, and possibly even tablet or phone, would immediately have to be replaced
          Why immediately? They would still run; the users/admins would just be informed that they are not fully officially supported and are vulnerable. It would then be up to Intel to come up with a further strategy: either leave their users with the issues, or recall and replace the faulty hardware. It was a 'clever' move to shift the issue's resolution from themselves to OS and hypervisor kernel developers, but it is totally not the right way to go, and in the long term it will bring a lot of havoc. It already does, to the point where it stops being tolerable - which is why people are starting to experiment with workarounds for workarounds, like the 'deferred flushes' mentioned in the article we are commenting on.

          Originally posted by rlkrlk View Post
          How would you implement a boot warning like that? A message in the boot log, that nobody ever reads?
          A hot red warning line throughout the boot process for attended systems, and indeed a boot log message for unattended systems - admins of those do actually read the logs, contrary to the (apparently wrong) belief stated. Not reading logs may be common among end users, but (good) system admins usually do.

          Originally posted by rlkrlk View Post
          But the fact is that there will always be hardware errata, that need software workarounds simply because updating software (or even firmware) is just a lot easier and cheaper than respinning and replacing hardware.
          Software development and debugging - especially for major commercial software, and especially for in-house custom software nobody else offers - tends to be more expensive than hardware most of the time, because human time and effort are involved. If you are providing a mass service, hard-to-trace OS/software bugs have a real cost too, on a huge scale. If you are a VPS provider, every dropped bit of performance actually costs, and if you hit a bug, it's a disaster. So how this 'fact' is a fact pretty much beats me.

          I'm not talking about commodity consumer systems, of course; there it can all remain as is. But for the majority of SPs and related software, especially hypervisors, ditching faulty hardware that requires workaround effort on this scale is the thing to do. Of course it should not happen in a single day, hour, or second - but the approach stated above could speed up the process a whole lot, instead of sweeping the whole issue under the rug. And actually, as edwaleni mentioned, SPs (including the one I'm at) are already slowly moving in this direction, because the performance drops from these excessive mitigations have already become intolerable.
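For what it's worth, the warning being argued about here already has a building block on Linux: recent kernels report per-vulnerability status under /sys/devices/system/cpu/vulnerabilities. A sketch of a checker driven by that interface (the directory is absent on non-Linux or older systems, in which case this reports nothing):

```python
# Sketch of the proposed boot warning, driven by the kernel's own reporting.
# /sys/devices/system/cpu/vulnerabilities/ exists on recent Linux kernels;
# elsewhere this function simply returns an empty list.
from pathlib import Path

VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

def vulnerable_list():
    """Return [(name, status), ...] for entries not reported as 'Not affected'."""
    if not VULN_DIR.is_dir():
        return []
    found = []
    for f in sorted(VULN_DIR.iterdir()):
        try:
            status = f.read_text().strip()
        except OSError:
            continue                      # skip unreadable entries
        if status != "Not affected":
            found.append((f.name, status))
    return found

for name, status in vulnerable_list():
    print(f"WARNING: CPU vulnerable: {name}: {status}")
```

The status strings also show whether a mitigation is active (e.g. "Mitigation: PTI"), so the same data could distinguish "vulnerable and mitigated" from "vulnerable, mitigations off".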



          • #6
            Originally posted by Alex/AT View Post
            Actually, imnsho major virtualization and OS software vendors should just drop vulnerable Intel CPUs from official HCLs at once, ditch all the mitigations related and warn the user on boot that he's using vulnerable system at his own risk.
            If you think this will have any effect on the actual hardware's service life, you are a noob.

            Any server deployment is budgeted and has to last X years from when it is installed. There is no way in hell that anyone will even *think* about changing it before this time is up.



            • #7
              Originally posted by starshipeleven View Post
              Any server deployment is budgeted and has to last X years from when it is installed. There is no way in hell that anyone will even *think* about changing it before this time is up.
              Sunk cost fallacy. While they have been budgeted, they were budgeted with an estimated ROI that is being eroded as we speak: performance is already degraded, and clients are trying to escape to less vulnerable, better-performing solutions. Your mindset hurts businesses in the long run, both finance-wise and reputation-wise, and they know it. There are contingency plans for when such problems occur. Hoping that everything will go well would be foolish, because it won't.
              Last edited by Ikaris; 08-26-2019, 04:10 AM.



              • #8
                Originally posted by Ikaris View Post
                Sunk cost fallacy.
                More like "that's what management decides". Sysops aren't in charge; managers are.

                While they have been budgeted, they were budgeted with an estimated ROI that is being eroded as we speak as performance is already degraded and clients try to escape to less vulnerable and better-performing solutions.
                This matters only for some types of business, namely midrange and bigger cloud providers. They are not using VMware. VMware is used by companies that manage their own internal stuff (and that can get quite big, but still nowhere near a big cloud provider), or by smaller cloud providers.

                There you can usually get away with more moronic IT decisions, as IT is not "the business" but a support function for it.

                Do note that these are the vast majority of the businesses around and, as I said, the main target of VMware.



                • #9
                  Originally posted by Alex/AT View Post
                  The scale is pretty bloody different there. 99% of driver workarounds are consolidated inside the driver code, and this time, the 'mitigations' affect the most intrinsic parts of any and every system that intends to 'mitigate' the hardware flaw effects.
                  The VM system consists of code that's partially CPU-specific and partially generic. The CPU-specific code is, in essence, a device driver. Anything written in assembly is essentially device-specific itself.

                  Originally posted by Alex/AT View Post
                  I never heard x86 CPU was supposed to actually load the data from page of privilege level higher than execution privilege level into the register, so this is something new to me
                  And it doesn't in fact load it in a way that's directly detectable. It turns out that there's a whole host of ways that weren't anticipated that allow indirect inference of the contents via timing measurements. So those bugs are fixed or worked around in the software and firmware, like a lot of other device bugs.

                  Originally posted by Alex/AT View Post
                  Why immediately? They will still run, just the users/admins would be informed they are not fully officially supported and vulnerable. It would be up to Intel to put up any further strategy in this then, either leaving their users with their issues, or recalling and replacing the faulty hardware. It was a 'clever' move to shift the issue resolution from themselves to OS/VM hypervisor kernel developers, but it is totally not a right thing to go with, and in the long term it will bring a lot of havoc. It already does, to the level it stops being tolerable, and that's why people are starting to experiment with workarounds for workarounds, like 'deferred flushes' mentioned in the article we are commenting on.
                  So every time someone finds a new timing attack the response is to be to mark the hardware as unsupported rather than apply a workaround at the kernel/firmware level? Pretty quickly we'd have no supported hardware at all, at the rate new vulnerabilities are being discovered.

                  Just how would one go about "recall and replace" for that many processors, anyway? And don't just say "that's Intel's problem", because in reality it's everyone's problem, including system vendors and users. Your laptop? You have to send it back to your system vendor, who has to swap the motherboard, which under that kind of crushing load would take weeks if not months? And would likely have to repeat the process constantly, leaving you for long periods without a laptop? Or having to replace it and deal with all of the hassles that brings (swap the storage drive(s) if you're lucky)? Not to mention the huge amount of waste material this would generate? Even though there are workarounds which might cost some performance but will avoid the vulnerability? It's not the first time that a bug fix has caused a performance hit, and that developers continue to work on the problem to refine the workaround and reduce the hit.

                  Why is it "not a right thing to go with" to fix a vulnerability wherever it can be most expeditiously fixed? And what havoc will it bring about in the long term? More complex code? Yes, and I agree that it makes maintenance harder. But that's life in software. As a software developer myself, I agree that it's not very pleasant. But until hardware developers become perfect in their craft -- where perfect means anticipating every possible future attack -- we're stuck with reactive software fixes, because software by its very nature is easier to replace than hardware. Hardware developers might become perfect by the heat death of the universe, which isn't a knock against them, because the same is true of software developers.

                  Originally posted by Alex/AT View Post
                  A hot red warning line during all the boot process for attended systems, and indeed boot log message for unattended systems - admins of those do actually read the logs, contrary to the (apparently wrong) belief stated. Not reading logs may be common for end users, but (good) system admins usually do.
                  Sure. And just what are system administrators going to do -- run out and replace all of their systems? A message that says "your hardware is defective, replace it" isn't very useful. Where are they going to get the budget for it, especially since it's going to happen repeatedly? More likely they'll find a different OS that someone with a more open mind about fixes -- what matters is fixing the problem, not assessing blame -- is willing to provide. All the Linux vendors would have to create and carry their own patches, which would be an utter nightmare.

                  Originally posted by Alex/AT View Post
                  Software development and debugging, especially major commercial and especially in-home custom software nobody offers, comes pretty more expensive than hardware most of the time because it's human time and effort involved. If you are providing mass service, hardly traceable OS/software bugs also really do cost there, on a huge scale. If you are VPS provider, every dropped bit of performance actually costs, and if you hit a bug, it's a disaster. So how this fact is fact pretty beats me.
                  Hardware development and debugging also involves human time -- as well as fab and manufacturing. A lot more than software development time and effort. The debugging isn't something each end user has to do; this is done at the OS and firmware levels, which involve many fewer players. Bugs are always going to be with us; sometimes they're disastrous and sometimes not, and sometimes they can be mitigated. And that's what's going on here, mitigating the effect of bugs. And that's precisely what the piece here is about, mitigating the performance impact of these workarounds.

                  Originally posted by Alex/AT View Post
                  I'm not talking about commodity consumer systems of course, here it all can remain as is. But for majority of the SPs and related software, especially hypervisors, ditching the faulty hardware that requires effort on that scale to workaround is the thing to do. Of course it should not be one time, day, hour and second thing - but the way stated could speed up the process a whole lot instead of putting all the issue under the rug. And actually, as it was mentioned by edwaleni, SPs (one I'm at included) are slowly moving in this direction themselves now, because performance drops for these excessive mitigations have already became intolerable.
                  Just how big of a scale are the workarounds we're talking about, anyway? And on what measure: system software developer time, deployment time, performance, what? Rolling out a kernel upgrade may not be that easy, but it's a lot easier than replacing all of the hardware, especially since it will not be a one-time thing since more issues are sure to be discovered down the road.

                  If the performance loss is intolerable, recalling all of that hardware and replacing it with something that probably has less performance isn't going to solve the issue. Why less performance? Well, the easiest way to avoid a lot of these problems would be to eliminate hyperthreading altogether, for example. But that doesn't solve all of the problems anyway.
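As an aside on the "eliminate hyperthreading" option: Linux (4.19 and later) exposes an SMT switch at /sys/devices/system/cpu/smt/control, so querying or disabling hyperthreading doesn't require a trip into firmware setup. A minimal sketch that only reads the current state:

```python
# The SMT knob discussed above is exposed by Linux (4.19+) at
# /sys/devices/system/cpu/smt/control; typical values are "on", "off",
# "forceoff", and "notsupported". Reading it is harmless.
from pathlib import Path

SMT_CONTROL = Path("/sys/devices/system/cpu/smt/control")

def smt_status():
    """Current SMT state string, or None where the interface doesn't exist."""
    try:
        return SMT_CONTROL.read_text().strip()
    except OSError:
        return None

print(smt_status())
```

Writing "off" to the same file (as root) disables SMT at runtime; "forceoff" additionally prevents re-enabling it until reboot.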



                  • #10
                    Originally posted by rlkrlk View Post
                    And it doesn't in fact load it
                    In fact, flawed CPUs do, where they should not do it under any circumstances - and this is exactly the flaw we are discussing.
                    The rest of the post is not even interesting, because this clearly shows the arguments there are misplaced.
                    Last edited by Alex/AT; 08-26-2019, 12:04 PM.

