The Spectre Mitigation Impact For Intel Ice Lake With Core i7-1065G7


  • #11
    While the results are interesting for comparing the penalty of the applied SW mitigations, they say nothing about the performance loss of HW mitigations vs SW mitigations. If anything, wherever the results are close to each other, it could mean Intel is doing some caching internally to soften the penalty of the SW mitigations; after all, the software ones run on top of the microcode ones.
    Or is the kernel already patched to exclude, per CPUID, the mitigations assumed to be addressed in microcode on Ice Lake?
    Last edited by reavertm; 18 October 2019, 02:46 PM.

    • #12
      Originally posted by reavertm View Post
      While the results are interesting for comparing the penalty of the applied SW mitigations, they say nothing about the performance loss of HW mitigations vs SW mitigations. If anything, wherever the results are close to each other, it could mean Intel is doing some caching internally to soften the penalty of the SW mitigations; after all, the software ones run on top of the microcode ones.
      Or is the kernel already patched to exclude, per CPUID, the mitigations assumed to be addressed in microcode on Ice Lake?
      There are no HW mitigations for most Spectre-class exploits. Any OoOE CPU with a cache is vulnerable. Ryzen CPUs aren't immune either.

      • #13
        Why don't we ever see any benchmarks of full mitigations (spectre_v2=on spec_store_bypass_disable=on l1tf=full mds=full,nosmt) instead of the weaker default mitigations?
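For anyone wanting to try this themselves, the four options quoted above are ordinary kernel boot parameters. A minimal sketch for a GRUB-based system (the file path and variable name follow the common Debian/Ubuntu convention; adjust for your distro):

```shell
# /etc/default/grub -- append the full-mitigation options to the kernel command line:
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash spectre_v2=on spec_store_bypass_disable=on l1tf=full mds=full,nosmt"

# Then regenerate the GRUB config and reboot:
#   sudo update-grub && sudo reboot
```

Note that `mds=full,nosmt` disables Hyper-Threading, so this configuration costs far more than the default mitigations on SMT-heavy workloads.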

        • #14
          Originally posted by hotaru View Post
          Why don't we ever see any benchmarks of full mitigations (spectre_v2=on spec_store_bypass_disable=on l1tf=full mds=full,nosmt) instead of the weaker default mitigations?
          What's the point of enabling mitigations for things that are already fixed in hardware (i.e. there's nothing to mitigate in the first place)?

          • #15
            Originally posted by bug77 View Post

            What's the point of enabling mitigations for things that are already fixed in hardware (i.e. there's nothing to mitigate in the first place)?
            Spectre V2 isn't fixed in hardware on this CPU, and I was asking in general, not specifically about this CPU. The vast majority of Intel CPUs do not have Meltdown, L1TF, MDS, and Spectre V4 fixed in hardware, and it would be interesting to see the performance difference between the default mitigations and full mitigations.
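Whether a given issue is actually fixed in hardware on a particular CPU can be read straight from sysfs: the kernel reports "Not affected" where no mitigation is needed and "Mitigation: ..." where a software workaround is active. A quick check (paths as exposed by mainline kernels):

```shell
# Print the kernel's verdict for each known hardware vulnerability:
for f in /sys/devices/system/cpu/vulnerabilities/*; do
    printf '%-24s %s\n' "$(basename "$f"):" "$(cat "$f")"
done
```

On Ice Lake, several entries show up as "Not affected" that still need software mitigations on older cores.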

            • #16
              BTW, the performance impact of the mitigations on web development is really very low. I spend most of my time in my IDE, which is Java-based, and most others will also be in a Java-based IDE, other than some of the newer-generation programmers who may have adopted Electron, which has a noticeable impact but is by no means a crippling one.

              As for the browser impact... well, as a web developer my browser sits idle most of the time. Load average is 0.61, 1.02, 1.04 on an SKL 6200U, and it clocks down to 500 MHz at idle. That's <25% at <20% of its all-core turbo, so in reality that's somewhere around a 5-10% real load average across 4 threads. That's with Windows 7 + Edge and macOS + Safari open in VirtualBox on different workspaces, Skype/Slack/Thunderbird/Twinkle minimised, and Clementine streaming TheQFM.

              The most heavily mitigated things I do are automatic code completion, real-time static analysis, loading projects, and Git bisecting/merging/blaming. Those are all syscall-heavy, so they trigger the context-switch-related mitigations more than anything. None of them pegs a thread for more than a second or two - not enough to add more than 5 minutes total to my work day - so it's not really worth buying a new work machine quite yet. All of them respond faster than Firefox opening an average website... which is why Firefox is always the last browser I test on... ugh. They really need to get back in the performance game. I guess I do feel it a bit with Firefox's page load times after all. :-/
              Last edited by linuxgeex; 18 October 2019, 07:12 PM.

              • #17
                Originally posted by Michael View Post

                IIRC, the best Cascade Lake latency I've seen was 120~150. Cascade Lake is not using Sunny Cove.
                You make it sound as if these are actually bad values...

                So, if it's not Cascade Lake, which Intel architecture is it then?

                Any educated guesses?

                • #18
                  Originally posted by Linuxxx View Post

                  You make it sound as if these are actually bad values...

                  So, if it's not Cascade Lake, which Intel architecture is it then?

                  Any educated guesses?
                  Kaby Lake and older with mitigations are like 600+?
                  Michael Larabel
                  https://www.michaellarabel.com/

                  • #19
                    Originally posted by Linuxxx View Post
                    That at least would make sense for their goal of achieving the lowest latency possible for gaming (and where AMD's Zen architecture still falls short).
                    I wouldn't expect that to make much of a difference in gaming workloads, and the fact that Intel still largely beats out the competition there while having a ctx_clock result 4x higher than AMD's would seem to back that up.

                    • #20
                      Seriously awaiting the performance results for 10th-gen Intel desktop chips to see how much further ahead they are compared to my 7th-gen setup. Most likely I'll be waiting it out another 3 generations, though.
