Bisected: The Unfortunate Reason Linux 4.20 Is Running Slower


  • ryao
    replied
    Originally posted by gmturner View Post
    Pretty sure insider-types have been making ominous statements to the effect that they expect many more performance-harming exploits/mitigations to appear... Scary to think there may well be more bad news of this nature in the pipeline. Also, to anyone saying how great this is for AMD, consider this: if you're a security researcher who uncovered a new class of exploit in, let's say, mid 2017, the obvious platform to code your proof-of-concept exploit for is Intel. This is like saying that Linux is more secure because it has less desktop malware in the wild... basically, your status as a minority platform provides a relative disincentive to attackers. It's pretty clear that Intel's market share will not continue as it has in the past, and not only because of these security issues. IME it's a safe bet that similar exploits will appear for Zen targets sooner or later, and AMD has been wise not to seize this window of opportunity to make sweeping claims about their hardware being free from security flaws.

    Finally, to whoever said Intel was cheating: nobody is saying that so far as I know (at least, not about Spectre/Meltdown). This isn't purported to be about cheating, but about performance-enhancing features of modern desktop/server platforms that leaked information in ways so subtle that nobody thought of it until now. It would be somewhat scammy if Intel/AMD had known about these issues all along but chose to suppress that information rather than develop mitigations. I have not heard any allegations to that effect, and even if such accusations emerged, so long as the exploits in question remained theoretical, I would still take them with a grain of salt.
    Whenever ATI or Nvidia were caught implementing hacks to raise performance, they were accused of cheating. I do not see much difference here. The only difference is that basically everyone cheated here, with Intel cheating more than others.



  • ryao
    replied
    Originally posted by carewolf View Post

    It doesn't. They are effectively immune. It can't be ruled out that there is a security hole that could be exploited to open up all the Spectre v2 issues, but a technique to trick the latest AMD branch predictor that way hasn't been found yet.

    And it isn't AMD saying it; the original researchers and everybody knowledgeable about branch predictors and Spectre are saying it. They are using a more precise modern branch predictor that doesn't pick up as much random noise. This "noise" is what could be manipulated on older branch predictors to make them jump to specific addresses, so one that ignores training data that doesn't apply to it is safe from that threat.
    That is not what AMD itself says:



  • ryao
    replied
    Originally posted by dungeon View Post
    Someone should make an unoptimized -O0 distro, so that people get real security and (un)performance right from the beginning
    That would not help.
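    ryao's "that would not help" can be made concrete: speculation happens in the hardware regardless of compiler optimization level, so disabling optimization doesn't close Spectre v1. What does help is an explicit software construct, such as the branchless index masking behind the Linux kernel's array_index_nospec(). The sketch below is a simplified, illustrative version of that idea, not the kernel's actual code:

    ```c
    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Simplified sketch of the index-masking idea behind the Linux
     * kernel's array_index_nospec(): compute, without a branch, a mask
     * that is all-ones when idx < size and all-zeros otherwise, then
     * AND it into the index so a mispredicted bounds check cannot
     * speculatively read out of bounds. Assumes values well below
     * 2^63 and arithmetic right shift on signed types (gcc/clang). */
    static inline uint64_t mask_nospec(uint64_t idx, uint64_t size)
    {
        /* Sign bit of (idx | (size - 1 - idx)) is 0 iff idx < size. */
        return ~(int64_t)(idx | (size - 1 - idx)) >> 63;
    }

    int main(void)
    {
        uint64_t size = 16;
        assert(mask_nospec(3, size) == UINT64_MAX);  /* in bounds: keep index */
        assert(mask_nospec(15, size) == UINT64_MAX);
        assert(mask_nospec(16, size) == 0);          /* out of bounds: force 0 */
        assert(mask_nospec(1000, size) == 0);
        printf("mask ok\n");
        return 0;
    }
    ```

    A user of the mask would write `array[idx & mask_nospec(idx, size)]` after the bounds check; the masking survives at every optimization level, which is exactly why -O0 buys no security.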



  • ryao
    replied
    Originally posted by ms178 View Post

    You are incorrect; your source also states that updated microcode is available for Nehalem and Westmere - see also this newer file from August 2018 from Intel: https://www.intel.com/content/dam/ww...e-guidance.pdf

    Haswell introduced the INVPCID instruction, which is used to regain some performance for some mitigations. Hence processors older than Haswell should lose more performance with the mitigations enabled than newer ones, and I'd like to see the exact numbers. What that means: the performance implications should be even worse for these older systems.
    I thought you were referring to the lack of mitigations for older hardware, not the degree to which performance was lost. That explains it.
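    For anyone wanting to check their own hardware, the INVPCID capability ms178 mentions is advertised in CPUID leaf 7, sub-leaf 0, EBX bit 10; kernels use INVPCID/PCID to reduce the TLB-flush cost of the Meltdown page-table isolation. A small sketch using the compiler's `<cpuid.h>` helper (modern gcc/clang; the helper name is their API, the rest is illustrative):

    ```c
    #include <stdio.h>
    #if defined(__x86_64__) || defined(__i386__)
    #include <cpuid.h>
    #endif

    /* Report whether this CPU advertises INVPCID
     * (CPUID leaf 7, sub-leaf 0, EBX bit 10). Haswell was the
     * first Intel core to set this bit. */
    static int has_invpcid(void)
    {
    #if defined(__x86_64__) || defined(__i386__)
        unsigned int eax, ebx, ecx, edx;
        if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx))
            return 0;
        return (ebx >> 10) & 1;
    #else
        return 0; /* non-x86: the feature does not exist */
    #endif
    }

    int main(void)
    {
        printf("INVPCID supported: %s\n", has_invpcid() ? "yes" : "no");
        return 0;
    }
    ```

    The output depends on the machine it runs on; on pre-Haswell Intel hardware it should report "no", which is the case where the KPTI mitigation hurts most.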



  • abott
    replied
    Originally posted by TemplarGR View Post

    The only imbecile here is you. Stop insulting other forum posters you fucking cretin. I am done with idiots on this forum, i have been insulted too many times. Fuck you.

    Also, get a grip and stop spreading Intel PR material in an effort to do damage control. It is very apparent. Try the pc gaming reddit, posters here are typically more educated in IT.
    If you've been singled out, you deserve it. I only post when the biggest idiocy is showing itself.

    Still, I run a Ryzen 1700 and RX 480. I've also built 2 systems for others in the past year, both with Ryzen CPUs and Nvidia GPUs. I'm far from an Intel person at all; reality just escapes your mind with the severity of this crap. Yeah, it sucks, and so does Intel's performance. But is there a big deal here? Not really, everyone is overblowing it. This stuff only matters if you run other people's code on the same computer as yours, which is a shit idea to begin with.



  • duby229
    replied
    Originally posted by Weasel View Post
    You don't get it. All speculative execution is vulnerable currently. All fucking CPUs, even non-x86. Yes, obviously an exploit for Intel hardware is not going to work on AMD because the branch predictors are different. It may not even work on a different CPU also from Intel. It's no different with software vulnerabilities: you need to know *exactly* how a piece of software works and how it has that buffer overflow to be able to exploit it.

    If it's compiled with slightly different settings, you need to reanalyze and re-exploit it.

    Not my problem that very few are interested in exploiting AMD due to its low market share. Note how many Intel CPUs also have "theoretical" vulnerabilities because the researchers show exploits only on a couple of uarchs. Obviously I'm talking about Spectre here (Meltdown is a different thing so don't bring it up).

    It's a flaw inherent in speculative execution itself. It has nothing to do with "cutting corners". God dammit.
    Bullshit. Go back and look at the P4 era SMT and then tell me again it wasn't invented specifically -TO- cut corners....



  • ssokolow
    replied
    Originally posted by Weasel View Post
    It's always exponential though, not just after 4 GHz (well, the correct term is quadratic, not exponential). And it's not just heat, it's power draw as well. Heat comes from power after all.
    I'm aware of the correct term. I specifically said "exponentially" because I intended to use it in the imprecise vernacular sense.

    I was referring to how, around 4GHz, CPU clock speeds ran into a wall and, to keep Moore's Law chugging along without putting ridiculous amounts of work into cooling the chips, they had to start adding cores.
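    The "quadratic, not exponential" point in this exchange can be pinned down with the textbook CMOS dynamic-power estimate P = C·V²·f: power is quadratic in voltage, and since voltage generally has to rise with frequency, power grows faster than linearly in clock speed. A minimal sketch with made-up illustrative numbers (the capacitance value and the V-tracks-f assumption are not from any real chip):

    ```c
    #include <assert.h>
    #include <stdio.h>

    /* Textbook CMOS dynamic-power estimate: P = C * V^2 * f.
     * Illustrative units only; C and the V(f) relation are assumptions. */
    static double dyn_power(double cap, double volts, double freq_ghz)
    {
        return cap * volts * volts * freq_ghz;
    }

    int main(void)
    {
        double cap = 1.0; /* arbitrary effective switched capacitance */

        /* Doubling frequency at fixed voltage only doubles power... */
        assert(dyn_power(cap, 1.0, 8.0) == 2.0 * dyn_power(cap, 1.0, 4.0));

        /* ...but doubling voltage alone quadruples it (the quadratic term)... */
        assert(dyn_power(cap, 2.0, 4.0) == 4.0 * dyn_power(cap, 1.0, 4.0));

        /* ...and doubling both, as a higher clock tends to demand, costs 8x. */
        assert(dyn_power(cap, 2.0, 8.0) == 8.0 * dyn_power(cap, 1.0, 4.0));

        printf("P(1.0 V, 4 GHz) = %.1f (arbitrary units)\n",
               dyn_power(cap, 1.0, 4.0));
        return 0;
    }
    ```

    That last case is the "wall" ssokolow describes: pushing well past ~4 GHz demands more voltage, so the heat to dissipate grows much faster than the clock, and adding cores became the cheaper way to keep scaling.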



  • Weasel
    replied
    Originally posted by TemplarGR View Post
    The only idiots here are those who do not understand that only a FEW variants affect AMD. And of those that do affect AMD, only 1 has been demonstrated to actually work on AMD. The other 2 are only THEORETICAL, but in practical terms they are immune to them too. The only real threat for AMD CPUs is Spectre V1, and that has the lowest performance cost to mitigate.
    You don't get it. All speculative execution is vulnerable currently. All fucking CPUs, even non-x86. Yes, obviously an exploit for Intel hardware is not going to work on AMD because the branch predictors are different. It may not even work on a different CPU also from Intel. It's no different with software vulnerabilities: you need to know *exactly* how a piece of software works and how it has that buffer overflow to be able to exploit it.

    If it's compiled with slightly different settings, you need to reanalyze and re-exploit it.

    Not my problem that very few are interested in exploiting AMD due to its low market share. Note how many Intel CPUs also have "theoretical" vulnerabilities because the researchers show exploits only on a couple of uarchs. Obviously I'm talking about Spectre here (Meltdown is a different thing so don't bring it up).

    It's a flaw inherent in speculative execution itself. It has nothing to do with "cutting corners". God dammit.
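    Weasel's point that the flaw lives in speculative execution itself shows in how ordinary the canonical Spectre v1 "bounds check bypass" gadget looks: it is just correctly bounds-checked code. The shape below follows the published Spectre paper's example (array names and sizes are illustrative, not from any real codebase), with an x86 `lfence` speculation barrier shown as one published mitigation:

    ```c
    #include <assert.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Canonical Spectre v1 gadget shape: architecturally correct code,
     * but a mispredicted bounds check can speculatively read array1[x]
     * out of bounds and leak the value via the cache footprint of the
     * dependent array2 access. */
    static unsigned char array1[16];
    static unsigned char array2[256 * 64];

    unsigned char victim(size_t x, size_t array1_size)
    {
        if (x < array1_size) {
    #if defined(__x86_64__)
            /* One published mitigation: a serializing barrier after the
             * bounds check, so the load cannot execute speculatively. */
            __asm__ volatile("lfence" ::: "memory");
    #endif
            return array2[array1[x] * 64];
        }
        return 0;
    }

    int main(void)
    {
        array1[3] = 7;
        array2[7 * 64] = 42;
        assert(victim(3, sizeof array1) == 42);  /* in-bounds path works */
        assert(victim(99, sizeof array1) == 0);  /* out of bounds rejected */
        printf("gadget demo ok\n");
        return 0;
    }
    ```

    Note the demo only exercises the architectural behavior; the leak happens microarchitecturally, which is exactly why a working exploit must be tuned to one specific predictor, as Weasel says.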



  • Weasel
    replied
    Originally posted by ssokolow View Post
    Then you'd have a different kind of meltdown. The big thing that's holding back clock speeds is, once you get up to around 4GHz, it gets exponentially harder to increase clock speed without causing waste heat to shoot through the roof.
    It's always exponential though, not just after 4 GHz (well, the correct term is quadratic, not exponential). And it's not just heat, it's power draw as well. Heat comes from power after all.



  • duby229
    replied
    Yeah, but SMT always sucked. When it first launched in the P4 era, it was basically just used so that whatever resources the first thread left unused, the second thread could sort of "fill in" at the cost of massive latency.

    And the thing is, AMD knew SMT sucked and they spent years avoiding it. It's why they invented CMT. And in fact CMT -STILL- has the highest performance potential compared to any other x86 architecture. A modern CMT architecture -scaled up-, even with just 3 integer units per thread, would blow away Zen. It would annihilate it.

