
The Spectre/Meltdown Performance Impact On Linux 4.20, Decimating Benchmarks With New STIBP Overhead


  • #21
    Originally posted by NotMine999 View Post

    I find it interesting to note that "impact on kernel performance" was not considered/challenged by the person(s) replying to the original poster (Artem) in the thread.

    Are Linux kernel developers not concerned with performance impacts of their coding?

    I have fragments of memory that suggest that Linus has challenged contributors to justify/document the performance impact of their code, generally when those impacts were claimed to improve performance.

    Some mitigations are included in CPU microcode according to Intel. The only way to revert that is to load older microcode.

    A few kernel boot flags exist that will turn off some of these features. That is useful for testing and for those not interested in these fixes.

    What bothers me is that the replies to the original poster (Artem) seemed to suggest that in some cases no kernel boot flags exist because the mitigations come from GCC compiler flags. What are those flags? Have they been clearly documented? I think the GCC compiler flags being used to mitigate these security issues need to be clearly documented, so that those who want to "revert" those specific changes can do so by recompiling the kernel for themselves. Then a user can rebuild their kernel, leaving out whichever security fixes they choose.

    Seriously, it sounds to me like the responses in that email thread suggest Linux kernel devs are simply accepting that these security fixes will negatively impact Linux kernel performance, taking a "let's not bring up that subject again" approach.

    I say that Michael's testing is clearly ("blatantly" might be a better word) showing the "before & after" impact of these security fixes, especially on Intel processors.

    It is times like this exact situation that prove the value of the work that Michael is doing.

    Why the Linux Foundation or other large entities involved in Linux kernel development do not provide Michael and Phoronix with an annual grant to help defray his costs incurred in doing all of this performance testing is simply beyond me.

    If the corporate world & Linux development communities can get something important like this testing for free, then why should they have to pay to support it? Sad.
    As a non-mainline kernel developer myself, I would not accuse the mainline developers of not caring about the performance of their code. If anything, they care so much that they have done things that developers of other platforms would regard as insane:
    • Mutex unlocking has subtly different semantics from its userland counterpart, which causes bugs if the developers using it are unaware of the differences. This is done just for the sake of a tiny bit more performance: unlocking a mutex keeps accessing memory to process the waiter list, just so a thread on the fast lock path can execute sooner.
    • Ticket spinlocks were invented to reduce SMP contention on NUMA systems. This one is not insane, but the idea that anyone would try to squeeze extra performance out of a fundamental locking mechanism that nobody thought could be made better is just mind boggling.
    • RCU has been used everywhere to improve concurrency, including in certain tree data structures (which are a pain to understand).
    • Efforts have been made to eliminate locking in favor of the absolute minimal memory barriers necessary to make things work as fast as they can, even on the DEC Alpha (which has the most relaxed memory barrier model there is).
    • CPU prefetch has been overused to the point of harming performance in some cases, such as in linked list traversal.
    • Compiler hints in the form of likely and unlikely are peppered throughout the code. Depending on the architecture, these can turn into hints for the CPU branch predictor, or simply result in the code being laid out so that the branch predictor is inclined to predict a certain way. It is an extremely esoteric form of optimization.
    • Various tiny accessor/setter functions have been made into preprocessor definitions to forcibly inline them and avoid function call overhead. It is possible to program without such helper functions, but using them makes things more maintainable. Doing it the way the kernel does gives you the best of both worlds, but it makes debugging more difficult because you cannot instrument preprocessor definitions.
    • container_of was implemented so that a structure embedded inside another structure can be used to recover the containing structure without storing a back-pointer. The implementation uses typeof, a compiler extension that is not part of the C standard (a rough sketch of this macro, along with likely/unlikely, follows this list).
    • plenty of kernel config options have remarks about slight slowdowns or small increases in memory usage.
    • skbufs are incredibly ugly, but they reduce pointer indirections to increase network performance.
    • kernel virtual memory has been crippled to ensure that it is not used very much for the sake of slightly faster execution.
    • direct reclaim has been implemented to make memory allocations complete sooner under low memory situations (although this is debatable).
    • they implemented an ugly hack called ->bmap so that swap files are as performant as swap devices. This is not compatible with anything that is not an overwrite-in-place filesystem, and its value is questionable, but that is a tangent for another thread.
    • they adopted a few hacks from IRIX to speed up performance in certain things. Namely, the non-standardized (and poorly defined) O_DIRECT and short extended attributes (that allow storage in inodes to avoid an extra disk access or two).
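    For anyone who has not dug into those points, here is roughly what the likely/unlikely hints and container_of look like. This is a simplified sketch of the idea, not the exact mainline definitions (the real ones carry extra compiler-version handling), and it needs GCC or a compatible compiler for __builtin_expect, typeof and statement expressions:

        /* Simplified sketches of two kernel idioms mentioned above. */
        #include <stddef.h>   /* offsetof */
        #include <stdio.h>

        /* Branch prediction hints: tell the compiler which way a condition
         * is expected to go, so it can lay out the code accordingly. */
        #define likely(x)   __builtin_expect(!!(x), 1)
        #define unlikely(x) __builtin_expect(!!(x), 0)

        /* Recover the containing structure from a pointer to one of its
         * members, so no back-pointer has to be stored. Relies on the
         * typeof extension, which is not part of standard C. */
        #define container_of(ptr, type, member) ({                         \
                const typeof(((type *)0)->member) *__mptr = (ptr);         \
                (type *)((char *)__mptr - offsetof(type, member)); })

        struct list_head { struct list_head *next, *prev; };

        struct item {
                int value;
                struct list_head node;  /* embedded node, no back-pointer */
        };

        int main(void)
        {
                struct item it = { .value = 42 };
                struct list_head *n = &it.node;

                /* Walk back from the embedded node to the item around it. */
                struct item *owner = container_of(n, struct item, node);

                if (unlikely(owner->value != 42))
                        printf("unexpected\n");
                else
                        printf("value = %d\n", owner->value);
                return 0;
        }

    Nothing exotic happens at runtime: container_of compiles down to a constant subtraction from the member pointer, and the hints mostly just bias how the compiler orders the branches.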
    Those bullet points are just off the top of my head. If you look at the FreeBSD or Illumos source code, you will see that most of these techniques are not used, or are used extremely sparingly. The extent to which the Linux kernel has been over-optimized in certain places is extreme. Developers on other platforms are far more conservative and only make changes when profiling shows a tangible improvement; the Linux mainline developers will often take a micro-optimization even when it can be shown to make only a tiny improvement.

    Of course, there are areas where you will see that not much effort is given (like /dev/urandom until recently), but overall, the Linux kernel’s mainline developers do more for performance than those of just about any other platform. In some benchmarks comparing platforms, the differences show themselves quite prominently.



    • #22
      Originally posted by dungeon View Post
      In this particular case, whether users like it or not, there is no doubt these must and should be mitigated, as they are hardware flaws
      And this is pure BS for over 95% of users out there who only run a web browser, a document processor and a spreadsheet.

      Both Firefox and Chrome have long implemented protections against Meltdown/Spectre class exploits, so there's really no way such users could be hacked.

      Most, if not all, of the vulnerabilities are about shared environments and/or virtualization companies.

      However, for some reason SOHO users must incur the costs of these workarounds for CPU design errors by default, with no option of disabling them all in one fell swoop.

      I cannot even fathom how much energy will be wasted due to this madness.



      • #23
        Originally posted by dungeon View Post
        OK, Hyper-Threading technology is now less hyper, but it still does something



        I am only waiting for someone to start exposing vulnerabilities in Nvidia Ti cards; it is the same crap - no one cares about security there. Or better to say - gamers do not care.
        Nvidia GPUs lack speculative execution, so they should be immune to Spectre-style vulnerabilities. The stream processor paradigm is fundamentally different from how central processing units work, in a way that makes speculative execution a performance penalty (it would waste die area that could go to more processing elements) rather than a performance win, so that is unlikely to ever change. There are probably other vulnerabilities to be found, but none that would require performance-reducing fixes are likely.

        As far as I know, Nvidia’s performance advantage over AMD is mainly from two things:

        1. Superior shader compiler optimization.
        2. Hardware techniques to minimize execution inefficiencies such as branch divergence and penalties from small shaders (such that the GPU is not made to go idle)

        The first is likely a big part of why Nvidia does not want to open source their driver. If their shader compiler were adapted to AMD hardware, AMD performance should increase significantly. It also does not help that AMD has an incredible amount of driver fragmentation. Nvidia has a single unified driver that they use on all platforms, which lets platform-independent changes made on one platform advance every platform. AMD has a multitude of drivers for Linux on top of the blob that they use on Windows. They would do much better if they killed off the blob in favor of modifying their sanctioned version of the Linux driver stack for reuse elsewhere and poured all effort into that.

        You can read about one of Nvidia’s hardware techniques here:

        https://yosefk.com/blog/simd-simt-sm...idia-gpus.html

        Nvidia claims that in at least one case, it can do MIMD, which is another level of efficiency entirely:

        https://developer.nvidia.com/gpugems...chapter34.html

        There is also the parallel kernel support mentioned here:



        I could be mistaken, but as far as I know, AMD GPUs have none of these enhancements. AMD tried something that they called primitive shaders, which they claimed would give them an improvement in efficiency, but it is reportedly so broken that they never released a driver that uses it.
        Last edited by ryao; 17 November 2018, 08:06 PM.



        • #24
          It's a good thing we have to be nice to each other as Intel withers away and dies. Thank you CoC. Job done.



          • #25
            Originally posted by birdie View Post

            And this is pure BS for over 95% of users out there who only run a web browser, a document processor and a spreadsheet.

            Both Firefox and Chrome have long implemented protections against Meltdown/Spectre class exploits, so there's really no way such users could be hacked.

            Most, if not all, of the vulnerabilities are about shared environments and/or virtualization companies.

            However, for some reason SOHO users must incur the costs of these workarounds for CPU design errors by default, with no option of disabling them all in one fell swoop.

            I cannot even fathom how much energy will be wasted due to this madness.
            The problem with this mentality is that it risks ignoring a situation where a fix is later found to be important.

            If you run a web browser, Skype, games from Steam, or even Steam itself, you are executing code that is a vector through which malware can gain a foothold to use a local vulnerability. It is wrong to dismiss an exploit as not serious because it requires X, because there will eventually be another exploit, also considered not serious, that provides X. The power of chained exploits is quite something. Here is an article showing the power of chained exploits:

            https://medium.com/@N/how-i-lost-my-...e-24eb09e026dd

            The concept of chaining exploits is generic enough that you can apply it to anything. This means that every exploit is serious. While you think otherwise right now, the moment some big attack occurs that negatively affects you, you will likely be quick to call people incompetent, even if the root cause is that they thought like you do right now in the first place.

            Quite a few facepalm level intrusions have occurred because of people who underestimate the severity of security flaws. Those people imagine an exploit in vacuo rather than imagining the exploit paired with hypothetical exploits that are likely to be found. When they are found, things go from being okay to being a nightmare situation because the issue was not appropriately handled when it still had a low impact.
            Last edited by ryao; 17 November 2018, 08:50 PM.



            • #26
              Originally posted by birdie View Post
              And this is pure BS for over 95% of users out there who only run a web browser, a document processor and a spreadsheet.
              Well, if we imagine that 95% of users are like this, then they really don't need (nor have) a CPU with HT technology. HT was needed only in scenarios where the user does a lot of tasks at once, but since we now have plenty of multi-core CPUs it is kind of - who cares. So who really cares? Maybe HEDT people nowadays. Are they really mainstream? That is also questionable.

              Intel's current mainstream line is like this: the i3 does not have HT, the i5 does not have it either, and only some i7 parts have it... so where do you see 95% of users there? Only those at the top of the top will see some performance reduction here and there due to this, and that is pretty much it.

              If you show me data which says that 95% of Intel users only buy HT CPUs, then I may start to believe that number... but it is really far from the truth.
              Last edited by dungeon; 17 November 2018, 10:40 PM.



              • #27
                Originally posted by schmidtbag View Post
                I'm curious how much Windows is affected by this. I haven't seen any benchmarks for that yet.
                Are there many folks running Windows on a chip like this? I would think Xeon Gold and EPYC are more often used as hypervisors in multi-tenant environments, rather than as monolithic hosts.



                • #28
                  Originally posted by birdie View Post

                  And this is pure BS for over 95% of users out there who only run a web browser, a document processor and a spreadsheet.

                  Both Firefox and Chrome have long implemented protections against Meltdown/Spectre class exploits, so there's really no way such users could be hacked.

                  Most, if not all, of the vulnerabilities are about shared environments and/or virtualization companies.

                  However, for some reason SOHO users must incur the costs of these workarounds for CPU design errors by default, with no option of disabling them all in one fell swoop.

                  I cannot even fathom how much energy will be wasted due to this madness.
                  And who do you think is buying all the EPYC and Xeon Gold chips? Hint: It's not single users running desktop apps. These chips are primarily used for the exact purpose you identified in bold font.



                  • #29
                    Originally posted by ryao View Post

                    AMD recommends a mitigation be enabled for Spectre v2 in their official response, although they discourage STIBP:

                    https://developer.amd.com/wp-content...ch_Control.pdf

                    The kernel patch that turns it on does not discriminate between AMD and Intel. It is bizarre that it is not being enabled. If I had access to a recent AMD system, I could figure out why.
                    AMD is forced to "officially" recommend a mitigation, since you can't exclude the theoretical possibility of an exploit and they could be liable for heavy penalties if by any chance it happened. So they keep their asses safe by recommending a mitigation. I would do the same. But still, realistically, we don't need it.
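                    That said, anyone who wants to see what their own kernel actually applied can look at the per-issue status files under /sys/devices/system/cpu/vulnerabilities/. A minimal sketch that just dumps them (the entries listed are the ones I would expect on a 4.19/4.20 kernel; ones that do not exist on a given kernel/CPU are simply skipped):

                        /* Dump the kernel's reported CPU vulnerability mitigation status. */
                        #include <stdio.h>

                        int main(void)
                        {
                            const char *base = "/sys/devices/system/cpu/vulnerabilities/";
                            const char *files[] = {
                                "meltdown", "spectre_v1", "spectre_v2",
                                "spec_store_bypass", "l1tf"
                            };
                            char path[256], line[256];

                            for (size_t i = 0; i < sizeof(files) / sizeof(files[0]); i++) {
                                snprintf(path, sizeof(path), "%s%s", base, files[i]);
                                FILE *f = fopen(path, "r");
                                if (!f)
                                    continue;  /* entry not present on this kernel/CPU */
                                if (fgets(line, sizeof(line), f))
                                    printf("%-18s %s", files[i], line);
                                fclose(f);
                            }
                            return 0;
                        }

                    The spectre_v2 line should show whether things like IBPB or STIBP ended up enabled, which would answer whether the patch in question actually takes effect on an AMD box (running "grep . /sys/devices/system/cpu/vulnerabilities/*" from a shell gives the same information).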



                    • #30
                      Originally posted by torsionbar28 View Post

                      And who do you think is buying all the EPYC and Xeon Gold chips? Hint: It's not single users running desktop apps. These chips are primarily used for the exact purpose you identified in bold font.
                      He is right though. They have to provide an easy way for end users to disable some mitigations without having to recompile the kernel. Sure, those mitigations are really important for some use cases, but for my AMD desktop, for example, I don't think I need anything more than a Spectre v1 mitigation.

