
RELPOLINES: A New Spectre V2 Approach To Lower Overhead Of Retpolines


  • RELPOLINES: A New Spectre V2 Approach To Lower Overhead Of Retpolines

    Phoronix: RELPOLINES: A New Spectre V2 Approach To Lower Overhead Of Retpolines

    Nadav Amit of VMware has announced his currently-experimental work on "dynamic indirect call promotion", which he has dubbed "RELPOLINES" -- not to be confused with traditional Retpolines ("return trampolines"), one of the software-based mitigation approaches for Spectre Variant Two. Relpolines are designed to have lower overhead than Retpolines...

    http://www.phoronix.com/scan.php?pag...wer-Spectre-V2
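    For those unfamiliar with the idea, here is a rough C sketch of what "indirect call promotion" means conceptually. This is illustrative only -- the function names are made up and this is not code from the actual patches -- but it shows the basic trick: guard the common target with a comparison so the hot path becomes a cheap, predictable direct call instead of a retpoline-protected indirect one.

    ```c
    /* Illustrative sketch of dynamic indirect call promotion.
     * All names here are hypothetical, not from Amit's patches. */

    static int fast_path(int x) { return x + 1; }
    static int slow_path(int x) { return x - 1; }

    typedef int (*handler_t)(int);

    /* Plain indirect call: under retpolines, every invocation goes
     * through a return trampoline, which is comparatively slow. */
    static int call_indirect(handler_t h, int x) {
        return h(x);
    }

    /* Promoted form: compare against the learned "likely target" and
     * use a direct call when it matches; fall back to the (retpoline'd)
     * indirect call otherwise. */
    static int call_promoted(handler_t h, int x) {
        if (h == fast_path)       /* likely-target check */
            return fast_path(x);  /* direct call, no retpoline needed */
        return h(x);              /* rare fallback: indirect call */
    }
    ```

    The "dynamic" part of the real work is learning which target is likely at runtime and patching the check in, rather than hard-coding it as above.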

  • #2
    Who should be notified about character-set problems with the lkml.iu.edu archives? Firefox seems to think the pages are UTF-8 (which they are not), the HTTP headers say nothing about the charset, and the HTML contains '<meta HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=iso-8859-2">', which is probably correct but gets ignored, possibly because it appears so late in the markup. It's a bit annoying, especially since Firefox doesn't let me select the character set myself for some reason.



    • #3
      I wonder if anyone has tried a simple projection of the actual power cost, to society as a whole, of the major barf Intel has left us with?

      Assume something simple:
      2 * 10^9 vulnerable CPUs still operating, all patched. (I think Intel sells something like 100M CPUs per quarter.)
      Assume 50 W TDP on average.
      Assume the average power load of all nodes to be 15 W.
      Assume the incurred overhead to be 1%.
      Assume this can be projected 24/7 (which it can't).
      Assume tasks run longer to compensate (which they usually don't; it's just your time and framerates that get shafted).
      0.15 W * 2 * 10^9 ≈ 300 MW.

      Now that's a large chunk of a decently-sized nuclear power plant.
      $50/MWh * 300 MW = $15,000/h; * 24 * 365 ≈ $131 million USD a year... for a few years to come...

      I know, it's a stupid exercise in futility. But I had fun nonetheless.
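      The arithmetic above can be sanity-checked in a few lines of C, using the same assumed inputs (2e9 CPUs, 15 W average draw, 1% overhead, $50/MWh, 24/7 operation):

      ```c
      #include <math.h>

      /* Back-of-the-envelope check of the numbers in the post above. */

      /* 15 W average draw * 1% overhead = 0.15 W wasted per CPU */
      static double overhead_watts_per_cpu(double avg_watts, double overhead_frac) {
          return avg_watts * overhead_frac;
      }

      /* 0.15 W * 2e9 CPUs = 300,000,000 W = 300 MW */
      static double total_overhead_mw(double per_cpu_w, double num_cpus) {
          return per_cpu_w * num_cpus / 1e6;
      }

      /* 300 MW * $50/MWh * 24 h * 365 days ≈ $131.4 million/year */
      static double yearly_cost_usd(double mw, double usd_per_mwh) {
          return mw * usd_per_mwh * 24.0 * 365.0;
      }
      ```

      So the figures hold up under the stated assumptions; the exact yearly total is $131.4 million.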



      • #4
        Originally posted by milkylainen View Post
        I wonder if anyone has tried a simple projection of the actual power cost, to society as a whole, of the major barf Intel has left us with?

        Assume something simple:
        2 * 10^9 vulnerable CPUs still operating, all patched. (I think Intel sells something like 100M CPUs per quarter.)
        Assume 50 W TDP on average.
        Assume the average power load of all nodes to be 15 W.
        Assume the incurred overhead to be 1%.
        Assume this can be projected 24/7 (which it can't).
        Assume tasks run longer to compensate (which they usually don't; it's just your time and framerates that get shafted).
        0.15 W * 2 * 10^9 ≈ 300 MW.

        Now that's a large chunk of a decently-sized nuclear power plant.
        $50/MWh * 300 MW = $15,000/h; * 24 * 365 ≈ $131 million USD a year... for a few years to come...

        I know, it's a stupid exercise in futility. But I had fun nonetheless.
        To be fair, I think you should probably subtract back out the benefit of these architecture-level optimizations in the first place. As a strawman argument: we could all be using a massive number of 486s (or whatever the last generation was before Spectre-style attacks became relevant) to achieve the same amount of computing power... but at a *much* greater power cost. (Yeah, yeah, different process nodes and other unrelated intervening changes brought down the cost per unit of computation.) I'm not sure how you'd model that, nor have I given it much thought.

        But a random/crazy thought on that topic: maybe Energy Star-type ratings should be a thing for software ;-)



        • #5
          Originally posted by robclark View Post

          To be fair, I think you should probably subtract back out the benefit of these architecture-level optimizations in the first place. As a strawman argument: we could all be using a massive number of 486s (or whatever the last generation was before Spectre-style attacks became relevant) to achieve the same amount of computing power... but at a *much* greater power cost. (Yeah, yeah, different process nodes and other unrelated intervening changes brought down the cost per unit of computation.) I'm not sure how you'd model that, nor have I given it much thought.

          But a random/crazy thought on that topic: maybe Energy Star-type ratings should be a thing for software ;-)
          Regarding the rating: I do think so.
          It would be nice if the world could converge on qualification of software at some point.
          Not for NASA-style mission software, but for everyday-use stuff.
          Especially embedded software in various non-trivial devices.
          Why not? We have hardware ratings and qualifications up the yin-yang.
          But software is sold or bundled with devices with usually no qualifications at all.
          The qualified-software discussion is one I have had with colleagues for years.
