RdRand Performance As Bad As ~3% Original Speed With CrossTalk/SRBDS Mitigation

  • #21
    Originally posted by discordian View Post
    Meh, stick a fork in it and bring on ARM and RISC-V.
    ARM is not immune to those issues either and had a new speculative execution vulnerability disclosed yesterday (whitepaper).

    Comment


    • #22
      Originally posted by Imout0 View Post
      Where is Birdie to defend poor Intel?
      https://www.youtube.com/watch?v=1t3cBTb3xPc :-D

      Comment


      • #23
        Fortunately, this can be turned off :P

        Comment


        • #24
          Originally posted by numacross View Post

          ARM is not immune to those issues either and had a new speculative execution vulnerability disclosed yesterday (whitepaper).
          Sure, but ARM and especially RISC-V have a good chance of getting formal verification of their CPUs, including resistance against side-channel attacks (only as good as the formal model, of course). There are frameworks existing and in use already (not accounting for side-channels, AFAIK). Never gonna happen for x86 (as a whole, not just some units); that arch will consistently be blasted with new exploits till the heat death of our universe.

          Comment


          • #25
            Is this used in SSL / TLS?
            If it is, the impact in web services will be huge.

            Comment


            • #26
              Originally posted by smotad View Post
              Is this used in SSL / TLS?
              If it is, the impact in web services will be huge.
              Yeah, I think the prime number generation or the elliptic curve maths in the Diffie-Hellman exchange would employ the RDRAND instruction.
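
              (A hedged sketch, assuming OpenSSL: in practice a TLS stack usually asks the library CSPRNG for key material rather than issuing RDRAND itself, and whether RDRAND sits behind that call depends on how the RAND engine/provider is configured.)

              /* Sketch only, not taken from any post above: the usual
               * application-level path for DH/ECDH key material. */
              #include <openssl/rand.h>
              #include <stdio.h>

              int main(void)
              {
                  unsigned char key[32];        /* e.g. raw bytes for a private key */

                  if (RAND_bytes(key, sizeof key) != 1) {  /* returns 1 on success */
                      fprintf(stderr, "RNG failure\n");
                      return 1;
                  }
                  /* key[] now holds 32 cryptographically strong random bytes */
                  return 0;
              }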

              Comment


              • #27
                Originally posted by smotad View Post
                Is this used in SSL / TLS?
                If it is, the impact in web services will be huge.
                It can be used for that. There are other ways to generate cryptographically strong (pseudo)random numbers. After all, we managed just fine before RDRAND was introduced.

                The Linux kernel, for example, samples the timing of random events such as key presses, interrupt timings, CPU clock jitter etc. to generate the random pool used for /dev/(u)random. Though I believe that if a hardware random number generator such as RDRAND is available, the kernel can take advantage of it in addition to other sources of randomness.

                It appears the effect will be much larger on SGX enclaves, as there is no other good source of randomness available inside those. And you wouldn't want to feed randomness in from an external source that the code outside the enclave could observe. I don't expect most end users will care about the SGX use case though.
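
                (A minimal sketch of one of those "other ways", assuming Linux with glibc 2.25 or newer; the buffer name is mine: userspace can pull cryptographically strong bytes from the kernel pool via getrandom(2) instead of using RDRAND directly.)

                /* Sketch only: read from the kernel entropy pool
                 * (/dev/urandom semantics).  The kernel mixes interrupt
                 * timings, device noise and, where available, RDRAND/RDSEED
                 * output into this pool. */
                #include <sys/types.h>
                #include <sys/random.h>
                #include <stdio.h>

                int main(void)
                {
                    unsigned char buf[32];

                    /* blocks only until the pool is initialised at boot */
                    if (getrandom(buf, sizeof buf, 0) != (ssize_t)sizeof buf) {
                        perror("getrandom");
                        return 1;
                    }
                    return 0;
                }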

                Comment


                • #28
                  The mitigation for this makes RDRAND slower and means it can only be called from one core at a time, so any benchmark that just calls RDRAND repeatedly on multiple threads is going to get crucified. That's not a realistic scenario though; even for encryption workloads or whatever, RDRAND is going to be <0.01% of instructions.
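
                  (A purely hypothetical sketch of the worst-case micro-benchmark described above; the thread and iteration counts are made up. Every thread does nothing but RDRAND, so the post-mitigation serialisation across cores dominates, whereas a real workload would issue RDRAND only occasionally.)

                  /* Build with e.g.: gcc -O2 -mrdrnd -pthread rdrand_bench.c */
                  #include <immintrin.h>
                  #include <pthread.h>
                  #include <stdio.h>

                  #define ITERS    10000000UL
                  #define NTHREADS 8                 /* arbitrary */

                  static void *hammer(void *arg)
                  {
                      unsigned long long v;
                      (void)arg;
                      for (unsigned long i = 0; i < ITERS; i++)
                          while (!_rdrand64_step(&v))   /* retry: RDRAND can transiently fail */
                              ;
                      return NULL;
                  }

                  int main(void)
                  {
                      pthread_t t[NTHREADS];

                      for (int i = 0; i < NTHREADS; i++)
                          pthread_create(&t[i], NULL, hammer, NULL);
                      for (int i = 0; i < NTHREADS; i++)
                          pthread_join(t[i], NULL);

                      printf("each of %d threads executed %lu RDRANDs\n", NTHREADS, ITERS);
                      return 0;
                  }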

                  Comment


                  • #29
                    Originally posted by Vorpal View Post

                    It can be used for that. There are other ways to generate cryptographically strong (pseudo)random numbers. After all, we managed just fine before RDRAND was introduced.

                    The Linux kernel, for example, samples the timing of random events such as key presses, interrupt timings, CPU clock jitter etc. to generate the random pool used for /dev/(u)random. Though I believe that if a hardware random number generator such as RDRAND is available, the kernel can take advantage of it in addition to other sources of randomness.

                    It appears the effect will be much larger on SGX enclaves, as there is no other good source of randomness available inside those. And you wouldn't want to feed randomness in from an external source that the code outside the enclave could observe. I don't expect most end users will care about the SGX use case though.
                    And RDRAND is probably still faster than collecting entropy from the environment (e.g. keyboard/mouse/network activity).

                    Comment


                    • #30
                      Originally posted by smotad View Post
                      Is this used in SSL / TLS?
                      If it is, the impact in web services will be huge.
                      We can see some of the stuff that used it from what AMD’s defective RDRAND instruction on Zen 2 was said to break. Both systemd and Destiny 2 are said to have used it; why either needed it is unclear. Here is a reference on Destiny 2:

                      AMD’s Robert Hallock has released a beta chipset driver for AMD Ryzen 3000 processors that fixes Destiny 2 incompatibility issues

                      Comment
