Intel's Mitigation For CVE-2019-14615 Graphics Vulnerability Obliterates Gen7 iGPU Performance


  • #61
    Originally posted by Hibbelharry View Post
    If we're loosing more than 100 percent of performance... If we're loosing half the performance...
    If you're loosing performance, you should tighten it.



    • #62
      Originally posted by Michael View Post

      I ran some tests on Whiskeylake yesterday and found no change in power draw on battery.
      Oh, that's good to know. I guess the increase that user experienced was caused by something else, unless other Gen9 models are affected differently somehow.

      Just to confirm, you did test the power draw at idle, right? Not just at load via PTS?

      Originally posted by boxie View Post

      is it possible to get some laptop benchmarks including power usage before/after mitigation too?
      See above; according to Michael there's no difference.
      Last edited by polarathene; 17 January 2020, 11:11 AM.



      • #63
        Originally posted by HEX0 View Post
        And software rendering even with 80 cores is worse than Ivy Bridge GPU
        https://www.phoronix.com/scan.php?pa...swr-xeon&num=2
        They're doing something wrong. Seriously, AVX-512 machines can do 16 x 32-bit ALU operations per cycle. The machines run at 4-5 GHz. That's on the order of 64-80 billion 32-bit operations per second per core. E.g. the HD 4000 has 128 ALUs that run at around 1.1 GHz max. Modern memory bandwidth is also much higher, and CPUs have better branch prediction and caches.
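
        For illustration only (not from the post or the linked article), a minimal C sketch of those 16 x 32-bit lanes per AVX-512 instruction, using the standard immintrin.h intrinsics; assumes a CPU and compiler with AVX-512F support (e.g. build with gcc -mavx512f):

        #include <immintrin.h>
        #include <stdio.h>

        int main(void)
        {
            float a[16], b[16], c[16];
            for (int i = 0; i < 16; i++) { a[i] = (float)i; b[i] = 1.0f; }

            __m512 va = _mm512_loadu_ps(a);    /* load 16 floats */
            __m512 vb = _mm512_loadu_ps(b);
            __m512 vc = _mm512_add_ps(va, vb); /* 16 adds in a single instruction */
            _mm512_storeu_ps(c, vc);

            printf("%f %f\n", c[0], c[15]);    /* prints 1.000000 16.000000 */
            return 0;
        }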



        • #64
          Originally posted by caligula View Post

          They're doing something wrong. Seriously, AVX-512 machines can do 16 x 32-bit ALU operations per cycle. The machines run at 4-5 GHz. That's on the order of 64-80 billion 32-bit operations per second per core. E.g. the HD 4000 has 128 ALUs that run at around 1.1 GHz max. Modern memory bandwidth is also much higher, and CPUs have better branch prediction and caches.
          GPUs have some often-used operations hardwired, e.g. the ROP (Raster Output Processor), which takes the float values of the color channels, clamps them to 8-bit integers and packs them into one 32-bit integer (an RGBA uint32 pixel). That only increases the latency until the pixel is visible (a longer pipeline); it doesn't affect the throughput (framerate). A CPU has to do such things via a series of instructions, roughly as sketched below.
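
          A rough C sketch (illustrative only, function names invented for the example) of the per-pixel work a GPU ROP does in fixed hardware: clamp four float channels to 8-bit and pack them into one 32-bit RGBA value.

          #include <stdint.h>

          /* Clamp a float color channel to [0,1] and convert to 8-bit. */
          static inline uint8_t clamp_u8(float v)
          {
              if (v < 0.0f) v = 0.0f;
              if (v > 1.0f) v = 1.0f;
              return (uint8_t)(v * 255.0f + 0.5f);
          }

          /* Pack four channels into one RGBA uint32 pixel. */
          static inline uint32_t pack_rgba(float r, float g, float b, float a)
          {
              return (uint32_t)clamp_u8(r)
                   | (uint32_t)clamp_u8(g) << 8
                   | (uint32_t)clamp_u8(b) << 16
                   | (uint32_t)clamp_u8(a) << 24;
          }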

          Also, don't forget that you can't take the performance of SIMD CPU instructions as a simple multiple of the lane count. For GPU-style work (usually a pixel shader), SIMD (Single Instruction Multiple Data) often means you don't fill all the data slots: e.g. you run an instruction over four color channels - RGBA - and the rest of the 16 lanes of an AVX-512 register sit unused. It also takes additional instructions to rearrange data between SIMD operations (reshaping the output of the previous instruction, or set of instructions, into the input layout of the next) - GPUs have hardwired "register swizzling" for that. GPUs instead have many small independent pipelines (or groups of pipelines), each with a small number of ALUs, so there is no wasted capacity like in CPU SIMD instructions that work over a large set of lanes you can't always fill (e.g. when the next instruction depends on the output of the previous one - common in pixel shaders - you can't group them into one SIMD instruction). The sketch below shows the difference in lane utilization.
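
          An illustrative C sketch (not from the post above; function names made up, assumes AVX-512F) of that lane-utilization point: shading one pixel's RGBA in a 16-lane AVX-512 register leaves 12 lanes idle, while a struct-of-arrays layout that feeds the same channel from 16 pixels fills every lane.

          #include <immintrin.h>

          /* AoS style: one pixel per call -> only 4 of 16 lanes carry data. */
          static __m512 scale_one_pixel(__m512 rgba_plus_12_unused_lanes, float k)
          {
              return _mm512_mul_ps(rgba_plus_12_unused_lanes, _mm512_set1_ps(k));
          }

          /* SoA style: the red channel of 16 pixels per call -> all 16 lanes used. */
          static void scale_red_of_16_pixels(const float *r_in, float *r_out, float k)
          {
              __m512 r = _mm512_loadu_ps(r_in);
              _mm512_storeu_ps(r_out, _mm512_mul_ps(r, _mm512_set1_ps(k)));
          }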

          PS: My brother has just dodged a bullet - he was thinking about buying a MacBook Pro with Intel Iris Pro 5200 (Haswell, now 40% down on performance). (In my country, MacBooks are expensive compared to salaries, so it's common to buy an older model used.)
