Bisected: The Unfortunate Reason Linux 4.20 Is Running Slower

  • #81
    Originally posted by tuxd3v View Post
    Wait and see the next 2 years of vulnerabilities pooping up..
    Not sure if you intended to write that, but I agree with the sentiment.

    Comment


    • #82
      Good work Michael, and thanks for the article.

      I can't wait to get my 16 core Zen 2.

      Comment


      • #83
        Yeah but SMT always sucked. When it first launched in the P4 era, it was basically just used so that the second thread could sort of "fill in" whatever resources the first thread left unused, at the cost of massive latency.

        And the thing is AMD knew SMT sucked and they spent years avoiding it. It's why they invented CMT. And in fact CMT -STILL- has the highest performance potential of any x86 architecture. A modern CMT architecture -scaled up-, even with just 3 integer units per thread, would blow away Zen. It would annihilate it.

        Comment


        • #84
          Originally posted by ssokolow View Post
          Then you'd have a different kind of meltdown. The big thing that's holding back clock speeds is, once you get up to around 4GHz, it gets exponentially harder to increase clock speed without causing waste heat to shoot through the roof.
          It's always exponential though, not just after 4 GHz (well, the correct term is quadratic, not exponential). And it's not just heat, it's power draw as well. Heat comes from power after all.

          Comment


          • #85
            Originally posted by TemplarGR View Post
            The only idiots here are the variant who does not understand that only a FEW variants affect AMD. And of those that do affect AMD, only 1 has been demonstrated to actually work on AMD. The other 2 are only THEORETICAL, but in practical terms they are immune to them too. The only real threat for AMD cpus is the Spectre V1 and that has the lowest performance cost to mitigate.
            You don't get it. All speculative execution is vulnerable currently. All fucking CPUs, even non-x86. Yes, obviously an exploit for Intel hardware is not going to work on AMD because the branch predictors are different. It may not even work on a different Intel CPU. It's no different with software vulnerabilities: you need to know *exactly* how a piece of software works and where its buffer overflow is to be able to exploit it.

            If it's compiled with slightly different settings, you need to reanalyze and re-exploit it.

            It's not my problem that very few are interested in exploiting AMD due to its low market share. Note that many Intel CPUs also have only "theoretical" vulnerabilities, because researchers demonstrate exploits on only a couple of uarchs. Obviously I'm talking about Spectre here (Meltdown is a different thing, so don't bring it up).

            It's a flaw inherent in speculative execution itself. It has nothing to do with "cutting corners". God dammit.

            Comment


            • #86
              Originally posted by Weasel View Post
              It's always exponential though, not just after 4 GHz (well, the correct term is quadratic, not exponential). And it's not just heat, it's power draw as well. Heat comes from power after all.
              I'm aware of the correct term. I specifically said "exponentially" because I intended to use it in the imprecise vernacular sense.

              I was referring to how, around 4GHz, CPU clock speeds ran into a wall and, to keep Moore's Law chugging along without putting ridiculous amounts of work into cooling the chips, they had to start adding cores.

              Comment


              • #87
                Originally posted by Weasel View Post
                You don't get it. All speculative execution is vulnerable currently. All fucking CPUs, even non-x86. Yes, obviously an exploit for Intel hardware is not going to work on AMD because the branch predictors are different. It may not even work on a different Intel CPU. It's no different with software vulnerabilities: you need to know *exactly* how a piece of software works and where its buffer overflow is to be able to exploit it.

                If it's compiled with slightly different settings, you need to reanalyze and re-exploit it.

                It's not my problem that very few are interested in exploiting AMD due to its low market share. Note that many Intel CPUs also have only "theoretical" vulnerabilities, because researchers demonstrate exploits on only a couple of uarchs. Obviously I'm talking about Spectre here (Meltdown is a different thing, so don't bring it up).

                It's a flaw inherent in speculative execution itself. It has nothing to do with "cutting corners". God dammit.
                Bullshit. Go back and look at P4-era SMT and then tell me again it wasn't invented specifically -TO- cut corners....

                Comment


                • #88
                  Originally posted by TemplarGR View Post

                  The only imbecile here is you. Stop insulting other forum posters, you fucking cretin. I am done with the idiots on this forum; I have been insulted too many times. Fuck you.

                  Also, get a grip and stop spreading Intel PR material in an effort to do damage control. It is very apparent. Try the PC gaming Reddit; posters here are typically more educated in IT.
                  If you've been singled out, you deserve it. I only post when the biggest idiocy is showing itself.

                  Still, I run a Ryzen 1700 and an RX 480. I've also built 2 systems for others in the past year, both with Ryzen CPUs and Nvidia GPUs. I'm far from an Intel person at all; you've just lost sight of reality over the severity of this crap. Yeah, it sucks, and so does Intel's performance. But is there a big deal here? Not really; everyone is overblowing it. This stuff only matters if you run other people's code on the same computer as yours, which is a shit idea to begin with.

                  Comment


                  • #89
                    Originally posted by ms178 View Post

                    You are incorrect, your source also states that updated microcode is available for Nehalem and Westmere - see also this newer file from August 2018 from Intel: https://www.intel.com/content/dam/ww...e-guidance.pdf

                    Haswell introduced the INVPCID instruction, which is used to regain some performance for some of the mitigations. Hence processors older than Haswell should lose more performance with the mitigations enabled than newer ones, and I'd like to see the exact numbers. In other words, the performance implications should be even worse for these older systems.
                    I thought you were referring to the lack of mitigations for older hardware, not the degree to which performance was lost. That explains it.

                    Comment


                    • #90
                      Originally posted by dungeon View Post
                      Someone should make an unoptimized -O0 distro, so that people get real secure (un)performance right from the beginning
                      That would not help.

                      Comment
