The Spectre/Meltdown Performance Impact On Linux 4.20, Decimating Benchmarks With New STIBP Overhead


  • creative
    replied
    Originally posted by dungeon View Post

    I know that, and there will be more to come, particularly on the Ti cards, as these are rendered totally insecure
    As opposed to a non-Ti?



  • dungeon
    replied
    I know that, and there will be more to come, particularly on the Ti cards, as these are rendered totally insecure



  • oooverclocker
    replied
    Originally posted by dungeon View Post
    I am only waiting for someone to start exposing nVidia Ti cards' vulnerabilities
    Here you are
    Et Tu, GPU? Researchers Publish Side-Channel Attacks on Nvidia Graphics - TomsHardware

    Originally posted by tuxd3v View Post
    The only way would be to buy AMD... or run Intel CPUs without security, or even run them with Atom-processor performance but the cost/consumption of top-of-the-line processors...
    From the current point of view AMD is a very good alternative, and this will improve further next year with Zen 2. Atom CPUs are pretty secure, but they underperform too much. Running Intel CPUs without mitigations sounds like a risky idea; at least it is less risky on Linux than on Windows, since you have better control over whether programs themselves already mitigate the vulnerabilities.

    However, I have deactivated SMT/HT on every device, whether AMD- or Intel-based, and I will simply compensate for this with more cores in the future. It may cost me up to 40% of the performance with AMD CPUs, but this is affordable, and for gaming purposes it tends to have mostly positive effects. For a big data center it is a really hard hit, of course.
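    For reference, one way to switch SMT off at runtime on Linux (rather than in the firmware setup) is the sysfs control file the kernel has exposed since roughly 4.19. The sketch below is only an illustration of that interface; it assumes root privileges and a new enough kernel, and on older kernels you would have to offline the sibling threads individually or disable SMT/HT in the BIOS.

        /* smt_off.c - sketch: query and disable SMT via the kernel's sysfs
         * control file. Assumes a kernel that provides
         * /sys/devices/system/cpu/smt (circa 4.19+) and root privileges.
         * Build: cc -o smt_off smt_off.c */
        #include <stdio.h>
        #include <string.h>

        int main(void)
        {
            char state[32] = "";
            FILE *f = fopen("/sys/devices/system/cpu/smt/control", "r");
            if (!f) {
                perror("open smt/control");   /* interface missing on older kernels */
                return 1;
            }
            if (fgets(state, sizeof(state), f))
                printf("current SMT state: %s", state);
            fclose(f);

            if (strncmp(state, "off", 3) == 0)
                return 0;                     /* already disabled */

            f = fopen("/sys/devices/system/cpu/smt/control", "w");
            if (!f) {
                perror("open smt/control for writing (need root?)");
                return 1;
            }
            fputs("off", f);                  /* "on" would bring the siblings back */
            fclose(f);
            return 0;
        }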
    Last edited by oooverclocker; 18 November 2018, 06:21 AM.



  • creative
    replied
    Originally posted by schmidtbag View Post
    Well, depending on your display, GPU, and graphics settings, a performance loss of up to 20% on the CPU might not be noticeable.
    i7 7700, GTX 1070: I have not seen any performance loss at all. There is a measurable loss of performance in UserBenchmark's statistics, albeit a very small one. I judge performance loss by conscious perception of bad performance, and I have seen none of that on Linux or Windows. That does not mean it's not driving some people mad, though.

    For my setup I have seen no real, noticeable performance drop-off pre-Meltdown/Spectre versus post all the patches. I run the latest games as well. I feel really unaffected by it, but it is interesting to read about and interesting how people are perceiving it. Perception is an interesting thing.

    I am not worked up about any of this.

    Looking at some benchmarking on YouTube, my two-generation-old i7 still has an edge of over 17 fps at 1080p over even an R7 2700X in some places, and the i7 7700 is a locked chip.

    I mainly game though, and gaming is not really affected.

    Unless someone owns a huge video-editing business or provides cloud services... well, you know the story.
    Last edited by creative; 18 November 2018, 06:17 AM.



  • creative
    replied
    Originally posted by NotMine999 View Post

    sarcasm?
    No, fact. I've been really comfortable with Windows 10 lately.



  • brrrrttttt
    replied
    Originally posted by birdie View Post
    Meanwhile, a request on LKML to be able to disable (sic!) all these mitigations was met with utter indifference, and now if you want to reach previously available performance you have to peruse a ton of documentation, and you also have to recompile the kernel, since some mitigations are compiled in regardless, without a runtime option to disable them.

    Well done, kernel devs, well done!
    Aside from the dubious merit of your actual point, at least you _can_ recompile your kernel, and you _can_ see what the actual mitigations are.



  • ryao
    replied
    Originally posted by TemplarGR View Post

    AMD is forced to "officially" recommend a mitigation, since you can't exclude the theoretical possibility of an exploit and they could be liable for heavy penalties if by any chance it happened. So they keep their asses safe by recommending a mitigation. I would do the same. But still, realistically, we don't need it.
    Here is a visual representation of AMD’s resistance to Spectre v2:



    If AMD is recommending any mitigation, it is because it is vulnerable. Second-guessing them when they admit it, despite a huge incentive to bury the issue by denying it, is absurd. I am not the only one calling AMD fanboys out on that:

    https://www.techarp.com/articles/amd...ble-spectre-2/
    Last edited by ryao; 18 November 2018, 03:51 AM.



  • ryao
    replied
    Originally posted by TemplarGR View Post

    He is right though. They have to provide an easy way for end users to disable some mitigations without having to recompile the kernel. Sure, those mitigations are really important for some use cases, but for my AMD desktop, for example, I don't think I need anything more than a Spectre v1 mitigation.
    The mainline Linux kernel developers do not need to provide such a way, although they do anyway. It does not require recompiling your kernel; it is a command-line flag. If you aren't concerned about the other Spectre issues that affect AMD, you might as well ignore Spectre v1 too.

    Edit: I see that you might be talking about page table isolation and retpolines. The page table isolation feature does not have much performance impact in practice. Retpolines are used so infrequently that the performance impact is fairly negligible; they only apply to indirect calls through function pointers, which are infrequent. If you want the performance from avoiding these, there are likely several other features that you would want to recompile your kernel to disable as well, like the runtime checks in the kernel hacking section.
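    To make "a command-line flag" concrete: the kernel reports each issue and the mitigation it is running under /sys/devices/system/cpu/vulnerabilities/, and documented boot parameters such as nopti, nospectre_v2, or spectre_v2=off turn individual mitigations off without a rebuild (the kernel-parameters documentation has the authoritative list). Below is a minimal sketch, not anyone's actual tooling, that just dumps that sysfs directory so you can see what the running kernel has enabled.

        /* vulns.c - sketch: print every entry under the kernel's CPU
         * vulnerabilities directory (present since roughly 4.15).
         * Build: cc -o vulns vulns.c */
        #include <dirent.h>
        #include <stdio.h>

        #define VULN_DIR "/sys/devices/system/cpu/vulnerabilities"

        int main(void)
        {
            DIR *d = opendir(VULN_DIR);
            if (!d) {
                perror("opendir " VULN_DIR);
                return 1;
            }

            struct dirent *e;
            while ((e = readdir(d)) != NULL) {
                if (e->d_name[0] == '.')
                    continue;                       /* skip "." and ".." */

                char path[512], line[256] = "";
                snprintf(path, sizeof(path), VULN_DIR "/%s", e->d_name);

                FILE *f = fopen(path, "r");
                if (!f)
                    continue;
                if (fgets(line, sizeof(line), f))   /* e.g. "Mitigation: PTI" */
                    printf("%-20s %s", e->d_name, line);
                fclose(f);
            }
            closedir(d);
            return 0;
        }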
    Last edited by ryao; 18 November 2018, 04:22 AM.



  • ryao
    replied
    Originally posted by TemplarGR View Post

    You are missing a lot of key points...

    1) The thing with AMD's shader architecture is that it has vastly more cores that run at a much lower frequency, so we end up with the typical "more cores are harder to optimize for" problem. Sure, graphics workloads are easier to parallelize, but this does not mean core counts can increase exponentially; they have limits in utilization. I don't think it is an issue of the shader compiler, and I don't believe Nvidia's shader compiler would help AMD cards. Also, the thing about not open-sourcing the driver to keep the shader compiler secret is bullshit; they don't have to, since AMD also keeps their binary shader compiler secret and uses another, LLVM-based compiler for the open drivers.
    “More cores are harder to optimize for” is a CPU issue. Graphics processors do not suffer from that because the problem is embarrassingly parallel. You would need to have a few million processing elements before “more cores are harder to optimize for” is a problem for graphics.

    If you don’t think that Nvidia’s compiler could be useful on AMD hardware, then you are not familiar with compiler design. Nvidia has been hiring PhDs that specialize in GPU compiler design the moment that they graduate. AMD on the other hand is not quite so proactive at getting talent. The Nvidia shader compiler’s front end and middle end would almost certainly be better than AMD’s equivalents on AMD hardware if the two were matched with an AMD backend.

    2) Nvidia does not actually have better performance, at least not *compute* (read: shader) performance. There is a reason bitcoin miners preferred AMD cards, and it is not because they were cheap; if anything, during the mining crisis you couldn't find an AMD RX 580 for under 600-700 euros here in Europe... We are talking about a 200-dollar-MSRP card... Nvidia GPUs were much cheaper, because miners only took Nvidia GPUs when they couldn't get AMD...
    Raw GFLOPS numbers are one thing for mining, which is a fairly ideal use case for AMD's architecture. They are another thing for certain compute tasks and most graphics tasks. If you have branching that suffers from divergence, you can easily halve your performance. If the branching is somewhat complex, you can cut it even further.
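    The divergence point is easy to see with a toy lockstep model. This is only an illustration, not real GPU code: a SIMT group of lanes has to execute every branch path that at least one lane takes, masking off the other lanes, so a 50/50 data-dependent split pays for both paths back to back.

        /* divergence.c - toy lockstep (SIMT-like) model: a 32-lane group pays
         * for every branch path taken by at least one lane. Illustration only. */
        #include <stdio.h>

        #define LANES 32
        #define COST_PATH_A 100   /* arbitrary per-path cycle costs */
        #define COST_PATH_B 100

        /* Cycles the whole group spends when take_a[i] says which path lane i takes. */
        static int group_cycles(const int take_a[LANES])
        {
            int any_a = 0, any_b = 0;
            for (int i = 0; i < LANES; i++) {
                if (take_a[i]) any_a = 1; else any_b = 1;
            }
            /* Lockstep execution: each taken path runs serially for the whole group. */
            return any_a * COST_PATH_A + any_b * COST_PATH_B;
        }

        int main(void)
        {
            int uniform[LANES], divergent[LANES];
            for (int i = 0; i < LANES; i++) {
                uniform[i] = 1;         /* every lane takes path A       */
                divergent[i] = i % 2;   /* half the lanes take each path */
            }
            printf("uniform branch:   %d cycles\n", group_cycles(uniform));   /* 100 */
            printf("divergent branch: %d cycles\n", group_cycles(divergent)); /* 200: throughput halved */
            return 0;
        }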

    3) As for games, AMD is not that far behind in gaming performance, with the Vega 64 matching the 1080 on average and the Vega 56 mostly being on par with or better than the 1070. The reason most games tend to perform better on Nvidia has to do with Nvidia meddling with key game developers and offering their GameWorks tech, which is optimized for Nvidia hardware. GameWorks is not just a "shader effects" library; it replaces key parts of a game engine. Naturally most game developers took the "gift", since it meant less work for their developers; some, like Ubisoft, essentially didn't have their own engines, they had only GameWorks... Nvidia tried many shenanigans with GameWorks to minimize AMD performance, from increasing tessellation far more than they should have, to tailor-made D3D11 command lists, to PhysX (a part of GameWorks) always providing a boost by executing on the GPU while being crippled (made single-core) in the CPU version used with AMD cards, etc. You can see how bad "crippleworks" was by benching games that don't use all of its features, like The Witcher 3, which fared much better on AMD cards after CDPR patched the game, or Deus Ex: Mankind Divided, which does not use GameWorks at all.
    I consider a Vega 64 to be inferior to the 1070 outside of special cases where everything aligns to make the shader compiler do a good job. AMD's shader compilers are well known to be lousy; just look at DXVK bug reports, and in that case the DXVK developer is developing on AMD hardware. Furthermore, the hardware has some crippling issue that prevented primitive shaders from being implemented, such that it will never achieve the performance its designers intended. If I thought that it had a decent chance, I would have bought one and then started hacking away at the shader compiler, but I concluded that it was a waste of time because I would never get something as efficient as a 1070, even if I found a way to get it to generate obscenely good assembly code. Speaking of which, AMD does not appear to publish programming documentation for that GPU that could be used to get GPU reset support implemented (I looked) so that hacking on the shader compiler is not a masochistic experience. Make up whatever excuses you want for the issues. At the end of the day, it just is not as good.
    Last edited by ryao; 18 November 2018, 04:27 AM.



  • TemplarGR
    replied
    Originally posted by ryao View Post

    Nvidia GPUs lack speculative execution, so they would be immune to Spectre style vulnerabilities. The stream processor paradigm is fundamentally different from how central processing units work in a way that makes speculative execution a performance penalty (from wasting die area that could go to more processing elements) rather than a performance win, so that is unlikely to ever change. There are probably other vulnerabilities that you might find, but none that require fixes that reduce performance are likely.

    As far as I know, Nvidia’s performance advantage over AMD is mainly from two things:

    1. Superior shader compiler optimization.
    2. Hardware techniques to minimize execution inefficiencies such as branch divergence and penalties from small shaders (such that the GPU is not made to go idle)

    The first is likely a big part of why Nvidia does not want to open source their driver. If their shader compiler were adapted to AMD hardware, AMD performance should increase significantly. It also does not help that AMD has an incredible amount of driver fragmentation. Nvidia, by contrast, has a single unified driver that they use on all platforms, which lets platform-independent changes made on one platform advance every platform. AMD has a multitude of drivers for Linux on top of the blob that they use on Windows. They would do much better if they killed off the blob in favor of modifying their sanctioned version of the Linux driver stack for reuse elsewhere and poured all effort into that.

    You can read about one of Nvidia’s hardware techniques here:

    https://yosefk.com/blog/simd-simt-sm...idia-gpus.html

    Nvidia claims that in at least one case, it can do MIMD, which is another level of efficiency entirely:

    https://developer.nvidia.com/gpugems...chapter34.html

    There is also the parallel kernel support mentioned here:



    I could be mistaken, but as far as I know, AMD GPUs have none of these enhancements. AMD tried something that they called primitive shaders, which they claimed would give them an improvement in efficiency, but it is reportedly so broken that they never released a driver that uses it.
    You are missing a lot of key points...

    1) The thing with AMD's shader architecture is that it has vastly more cores that run at a much lower frequency, so we end up with the typical "more cores are harder to optimize for" problem. Sure, graphics workloads are easier to parallelize, but this does not mean core counts can increase exponentially; they have limits in utilization. I don't think it is an issue of the shader compiler, and I don't believe Nvidia's shader compiler would help AMD cards. Also, the thing about not open-sourcing the driver to keep the shader compiler secret is bullshit; they don't have to, since AMD also keeps their binary shader compiler secret and uses another, LLVM-based compiler for the open drivers.

    2) Nvidia does not actually have better performance, at least not *compute* (read: shader) performance. There is a reason bitcoin miners preferred AMD cards, and it is not because they were cheap; if anything, during the mining crisis you couldn't find an AMD RX 580 for under 600-700 euros here in Europe... We are talking about a 200-dollar-MSRP card... Nvidia GPUs were much cheaper, because miners only took Nvidia GPUs when they couldn't get AMD...

    3) As for games, AMD is not that far behind in gaming performance, with the Vega 64 matching the 1080 on average and the Vega 56 mostly being on par with or better than the 1070. The reason most games tend to perform better on Nvidia has to do with Nvidia meddling with key game developers and offering their GameWorks tech, which is optimized for Nvidia hardware. GameWorks is not just a "shader effects" library; it replaces key parts of a game engine. Naturally most game developers took the "gift", since it meant less work for their developers; some, like Ubisoft, essentially didn't have their own engines, they had only GameWorks... Nvidia tried many shenanigans with GameWorks to minimize AMD performance, from increasing tessellation far more than they should have, to tailor-made D3D11 command lists, to PhysX (a part of GameWorks) always providing a boost by executing on the GPU while being crippled (made single-core) in the CPU version used with AMD cards, etc. You can see how bad "crippleworks" was by benching games that don't use all of its features, like The Witcher 3, which fared much better on AMD cards after CDPR patched the game, or Deus Ex: Mankind Divided, which does not use GameWorks at all.

