The Spectre/Meltdown Performance Impact On Linux 4.20, Decimating Benchmarks With New STIBP Overhead


  • TemplarGR
    replied
    Originally posted by ryao View Post

    Nvidia GPUs lack speculative execution, so they would be immune to Spectre-style vulnerabilities. The stream-processor paradigm is fundamentally different from how central processing units work, in a way that makes speculative execution a performance penalty (it wastes die area that could go to more processing elements) rather than a performance win, so that is unlikely to ever change. There are probably other vulnerabilities to be found, but none that would require performance-reducing fixes are likely.

    As far as I know, Nvidia’s performance advantage over AMD is mainly from two things:

    1. Superior shader compiler optimization.
    2. Hardware techniques to minimize execution inefficiencies such as branch divergence and penalties from small shaders (such that the GPU is not made to go idle)

    The first is likely a big part of why Nvidia does not want to open source their driver. If their shader compiler were adapted to AMD hardware, AMD performance should increase significantly. It also does not help that they have an incredible amount of driver fragmentation. Nvidia also has a single unified driver that they use on all platforms, which lets platform independent changes made on one platform advance every platform. AMD has a multitude of drivers for Linux on top of the blob that they use on Windows. They would do much better if they killed off the blob in favor of modifying their sanctioned version of the Linux driver stack for reuse elsewhere and poured all effort into that.

    You can read about one of Nvidia’s hardware techniques here:

    https://yosefk.com/blog/simd-simt-sm...idia-gpus.html

    Nvidia claims that in at least one case, it can do MIMD, which is another level of efficiency entirely:

    https://developer.nvidia.com/gpugems...chapter34.html

    There is also the parallel kernel support mentioned here:

    https://www.anandtech.com/show/2849/5

    I could be mistaken, but as far as I know, AMD GPUs have none of these enhancements. AMD tried something that they called primitive shaders, which they claimed would give them an improvement in efficiency, but it is reportedly so broken that they never released a driver that uses it.
    You are missing a lot of key points...

    1) The thing with AMD's shader architecture is that it has vastly more cores running at a much lower frequency, so we end up with the typical "more cores are harder to optimize for" problem. Sure, graphics workloads are easier to parallelize, but that does not mean core counts can increase indefinitely; there are limits to utilization. I don't think it is an issue of the shader compiler, and I don't believe Nvidia's shader compiler would help AMD cards. Also, the idea that Nvidia keeps the driver closed in order to protect the shader compiler is bullshit; they wouldn't have to open the compiler anyway. AMD also keeps their binary shader compiler secret and uses a separate LLVM-based compiler for the open drivers.

    2) Nvidia does not actually have better performance. At least not *compute* (read: shader) performance. There is a reason bitcoin miners preferred AMD cards, and it is not because they were cheap; if anything, during the mining crisis you couldn't find an AMD RX 580 for under 600-700 euros here in Europe... We are talking about a 200-dollar MSRP card... Nvidia GPUs were much cheaper, because miners only took Nvidia GPUs when they couldn't get AMD...

    3) As for games, AMD is not that far behind in gaming performance, with Vega 64 matching the 1080 on average and Vega 56 mostly being on par with or better than the 1070. The reason most games tend to perform better on Nvidia has to do with Nvidia meddling with key game developers and offering their GameWorks tech, which is optimized for Nvidia hardware. GameWorks is not just a "shader effects" library; it replaces key parts of a game engine. Naturally most game developers took the "gift" since it meant less work for their own developers; some, like Ubisoft, essentially didn't have their own engines, they had only GameWorks... Nvidia tried many shenanigans with GameWorks to minimize AMD performance, from increasing tessellation far more than needed, to tailor-made D3D11 command lists, to PhysX (a part of GameWorks) always providing a boost by executing on the GPU while the CPU version used with AMD cards was crippled (made single-core), etc. You can see how bad "crippleworks" was by benchmarking games that don't use all of its features, like The Witcher 3, which fared much better on AMD cards after CDPR patched the game, or Deus Ex: Mankind Divided, which does not use GameWorks at all.



  • TemplarGR
    replied
    Originally posted by torsionbar28 View Post

    And who do you think is buying all the EPYC and Xeon Gold chips? Hint: It's not single users running desktop apps. These chips are primarily used for the exact purpose you identified in bold font.
    He is right, though. They have to provide an easy way for end users to disable some mitigations without having to recompile the kernel. Sure, those mitigations are really important for some use cases, but for my AMD desktop, for example, I don't think I need anything more than a Spectre V1 mitigation.
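
    For reference, here is a minimal C sketch of what a Spectre V1 gadget and the usual index-masking mitigation look like. The array1/array2/victim_* names are made up for illustration; the masking trick is the same idea behind the kernel's array_index_nospec() helper, which in the real kernel is built with inline asm so the compiler cannot turn it back into a branch.

    #include <stddef.h>
    #include <stdint.h>

    #define ARRAY1_SIZE 16
    uint8_t array1[ARRAY1_SIZE];
    uint8_t array2[256 * 4096];

    /* Vulnerable pattern: the CPU can speculate past the bounds check and
     * leak array1[x] into the cache via the dependent load from array2. */
    uint8_t victim_unsafe(size_t x)
    {
        if (x < ARRAY1_SIZE)
            return array2[array1[x] * 4096];
        return 0;
    }

    /* Spectre V1 mitigation: clamp the index with a branchless mask so that
     * even a mispredicted bounds check cannot read out of bounds. */
    static inline size_t index_nospec(size_t x, size_t size)
    {
        size_t mask = (size_t)0 - (x < size); /* all ones if in bounds, else 0 */
        return x & mask;
    }

    uint8_t victim_safe(size_t x)
    {
        if (x < ARRAY1_SIZE) {
            x = index_nospec(x, ARRAY1_SIZE);
            return array2[array1[x] * 4096];
        }
        return 0;
    }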



  • TemplarGR
    replied
    Originally posted by ryao View Post

    AMD recommends a mitigation be enabled for Spectre v2 in their official response, although they discourage STIBP:

    https://developer.amd.com/wp-content...ch_Control.pdf

    The kernel patch that turns it on does not discriminate between AMD and Intel. It is bizarre that it is not being enabled. If I had access to a recent AMD system, I could figure out why.
    AMD is forced to "officially" recommend a mitigation, since you can't exclude the theoretical possibility of an exploit and they could be liable for heavy penalties if by any chance it happened. So they keep their asses safe by recommending a mitigation. I would do the same. But still, realistically, we don't need it.
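
    For anyone who does have such a system handy, the kernel already reports which mitigations it enabled through sysfs, so it is easy to compare what an AMD box and an Intel box actually get. Here is a small C sketch that just dumps those files (the meltdown/spectre paths have existed since Linux 4.15, the others were added in later releases; everything else is illustrative):

    /* Print the kernel's reported status for the known speculative-execution
     * vulnerabilities. On 4.20, the spectre_v2 line is where IBPB/STIBP show
     * up when they are enabled. */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const char *files[] = {
            "/sys/devices/system/cpu/vulnerabilities/meltdown",
            "/sys/devices/system/cpu/vulnerabilities/spectre_v1",
            "/sys/devices/system/cpu/vulnerabilities/spectre_v2",
            "/sys/devices/system/cpu/vulnerabilities/spec_store_bypass",
            "/sys/devices/system/cpu/vulnerabilities/l1tf",
        };

        for (size_t i = 0; i < sizeof(files) / sizeof(files[0]); i++) {
            char status[256] = "not reported by this kernel";
            FILE *f = fopen(files[i], "r");
            if (f) {
                if (fgets(status, sizeof(status), f))
                    status[strcspn(status, "\n")] = '\0'; /* strip newline */
                fclose(f);
            }
            printf("%s: %s\n", files[i], status);
        }
        return 0;
    }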



  • torsionbar28
    replied
    Originally posted by birdie View Post

    And this is pure BS for over 95% of users out there who only run a web browser, a document processor and a spreadsheet.

    Both Firefox and Chrome have long implemented protections against Meltdown/Spectre class exploits, so there's really no way such users could be hacked.

    Most, if not all, of the vulnerabilities are about shared environments and/or virtualization companies.

    However, for some reason, SOHO users must incur the costs of these workarounds for CPU design errors by default, with no option of disabling them all in one fell swoop.

    I cannot even fathom how much energy will be wasted due to this madness.
    And who do you think is buying all the EPYC and Xeon Gold chips? Hint: It's not single users running desktop apps. These chips are primarily used for the exact purpose you identified in bold font.



  • torsionbar28
    replied
    Originally posted by schmidtbag View Post
    I'm curious how much Windows is affected by this. I haven't seen any benchmarks for that yet.
    Are there many folks running Windows on a chip like this? I would think Xeon Gold and EPYC are more often used as hypervisors in multi-tenant environments, rather than as monolithic hosts.



  • dungeon
    replied
    Originally posted by birdie View Post
    And this is pure BS for over 95% of users out there who only run a web browser, a document processor and a spreadsheet.
    Well, if we imagine that 95% of users are like this, then those users really don't need (nor have) a CPU with HT technology. HT was needed only in scenarios where the user does a lot of tasks at once, but now that we have plenty of multi-core CPUs it is kind of: who cares? So who really cares? Maybe HEDT people nowadays. Are they really mainstream? That is also questionable.

    Intel's current mainstream line is like this: i3 does not have HT, i5 does not have it either, and only some i7 parts have it... so where do you see 95% of users there? Only those at the top of the top will see some performance reduction here and there due to this, and that is pretty much it.

    If you show me data which says that 95% of Intel users only buy HT CPUs, then I may start to believe in this number... but it is really far from the truth.
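
    Side note: STIBP is only relevant when sibling hyperthreads are actually enabled, and since Linux 4.18 the kernel exposes an SMT control interface, so it is trivial to check what your own machine is doing. A small C sketch (the sysfs path is real; the program itself is just illustrative):

    /* Report whether SMT/HyperThreading is active on this machine.
     * /sys/devices/system/cpu/smt/active exists on Linux 4.18+. */
    #include <stdio.h>

    int main(void)
    {
        int active = -1;
        FILE *f = fopen("/sys/devices/system/cpu/smt/active", "r");
        if (f) {
            if (fscanf(f, "%d", &active) != 1)
                active = -1;
            fclose(f);
        }
        if (active < 0)
            printf("SMT status not reported (kernel older than 4.18?)\n");
        else
            printf("SMT/HT is %s\n", active ? "active" : "off");
        return 0;
    }
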
    Last edited by dungeon; 17 November 2018, 10:40 PM.



  • ryao
    replied
    Originally posted by birdie View Post

    And this is pure BS for over 95% of users out there who only run a web browser, a document processor and a spreadsheet.

    Both Firefox and Chrome have long implemented protections against Meltdown/Spectre class exploits, so there's really no way such users could be hacked.

    Most, if not all, of the vulnerabilities are about shared environments and/or virtualization companies.

    However, for some reason, SOHO users must incur the costs of these workarounds for CPU design errors by default, with no option of disabling them all in one fell swoop.

    I cannot even fathom how much energy will be wasted due to this madness.
    The problem with this mentality is that it risks ignoring a situation where a fix is later found to be important.

    If you run a web browser, Skype, games from Steam, or even Steam itself, you are executing code that is a vector through which malware can gain a foothold to use a local vulnerability. It is wrong to dismiss an exploit as not serious because it requires X, since there will be another exploit, also considered not serious, that provides X. The power of chained exploits is quite something. Here is an article showing it:

    https://medium.com/@N/how-i-lost-my-...e-24eb09e026dd

    The concept of chaining exploits is generic enough that you can apply it to anything. This means that every exploit is serious. While you think otherwise right now, the moment some big attack occurs that negatively affects you, you will likely be quick to call people incompetent, even if the root cause is that they thought like you do right now in the first place.

    Quite a few facepalm level intrusions have occurred because of people who underestimate the severity of security flaws. Those people imagine an exploit in vacuo rather than imagining the exploit paired with hypothetical exploits that are likely to be found. When they are found, things go from being okay to being a nightmare situation because the issue was not appropriately handled when it still had a low impact.
    Last edited by ryao; 17 November 2018, 08:50 PM.



  • ThoreauHD
    replied
    It's a good thing we have to be nice to each other as Intel withers away and dies. Thank you, CoC. Job done.



  • ryao
    replied
    Originally posted by dungeon View Post
    OK, HyperThreading technology is now less hyper, but it still does something.

    I am only waiting for someone to start exposing Nvidia Ti card vulnerabilities; it is the same crap - no one cares about security there. Or, better to say, gamers do not care.
    Nvidia GPUs lack speculative execution, so they would be immune to Spectre-style vulnerabilities. The stream-processor paradigm is fundamentally different from how central processing units work, in a way that makes speculative execution a performance penalty (it wastes die area that could go to more processing elements) rather than a performance win, so that is unlikely to ever change. There are probably other vulnerabilities to be found, but none that would require performance-reducing fixes are likely.

    As far as I know, Nvidia’s performance advantage over AMD is mainly from two things:

    1. Superior shader compiler optimization.
    2. Hardware techniques to minimize execution inefficiencies such as branch divergence and penalties from small shaders (such that the GPU is not made to go idle)

    The first is likely a big part of why Nvidia does not want to open source their driver. If their shader compiler were adapted to AMD hardware, AMD performance should increase significantly. It also does not help that they have an incredible amount of driver fragmentation. Nvidia also has a single unified driver that they use on all platforms, which lets platform independent changes made on one platform advance every platform. AMD has a multitude of drivers for Linux on top of the blob that they use on Windows. They would do much better if they killed off the blob in favor of modifying their sanctioned version of the Linux driver stack for reuse elsewhere and poured all effort into that.

    You can read about one of Nvidia’s hardware techniques here:

    https://yosefk.com/blog/simd-simt-sm...idia-gpus.html

    Nvidia claims that in at least one case, it can do MIMD, which is another level of efficiency entirely:

    https://developer.nvidia.com/gpugems...chapter34.html

    There is also the parallel kernel support mentioned here:

    https://www.anandtech.com/show/2849/5

    I could be mistaken, but as far as I know, AMD GPUs have none of these enhancements. AMD tried something that they called primitive shaders, which they claimed would give them an improvement in efficiency, but it is reportedly so broken that they never released a driver that uses it.
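
    As a rough illustration of the branch-divergence point, here is a toy C model (purely illustrative, not how any real GPU is programmed) of how a SIMT machine handles a divergent branch: every lane of a warp marches through both sides of the branch in lockstep, with per-lane masks deciding who keeps a result, so there is nothing for a branch predictor to speculate on.

    /* Toy model of SIMT execution for one "warp" of WIDTH lanes.
     * A divergent branch is handled by running BOTH paths and masking
     * lanes off, which is the branch-divergence penalty the hardware
     * techniques above try to minimize. */
    #include <stdio.h>

    #define WIDTH 8 /* lanes per warp; real Nvidia warps have 32 */

    int main(void)
    {
        int data[WIDTH] = {3, -1, 4, -1, 5, -9, 2, -6};
        int out[WIDTH];
        int taken[WIDTH];

        /* Evaluate the branch condition for every lane. */
        for (int lane = 0; lane < WIDTH; lane++)
            taken[lane] = (data[lane] >= 0);

        /* Pass 1: "then" side; only lanes with the mask set do real work. */
        for (int lane = 0; lane < WIDTH; lane++)
            if (taken[lane])
                out[lane] = data[lane] * 2;

        /* Pass 2: "else" side; the remaining lanes. Every lane pays the
         * cost of both passes. */
        for (int lane = 0; lane < WIDTH; lane++)
            if (!taken[lane])
                out[lane] = 0;

        for (int lane = 0; lane < WIDTH; lane++)
            printf("lane %d: %d\n", lane, out[lane]);
        return 0;
    }
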
    Last edited by ryao; 17 November 2018, 08:06 PM.



  • birdie
    replied
    Originally posted by dungeon View Post
    In this particular case, well, users may like it or not, but no doubt they must and should mitigate these, as these are hardware flaws.
    And this is pure BS for over 95% of users out there who only run a web browser, a document processor and a spreadsheet.

    Both Firefox and Chrome have long implemented protections against Meltdown/Spectre class exploits, so there's really no way such users could be hacked.

    Most, if not all, of the vulnerabilities are about shared environments and/or virtualization companies.

    However, for some reason, SOHO users must incur the costs of these workarounds for CPU design errors by default, with no option of disabling them all in one fell swoop.

    I cannot even fathom how much energy will be wasted due to this madness.

