The Gaming Performance Impact From The Intel JCC Erratum Microcode Update
-
- Likes 1
-
Originally posted by atomsymbol
Notice that all of the tests achieved a framerate higher than 60 Hz. With FreeSync, a game needs only about 50 Hz to appear smooth to most gamers. For the most part, anything higher than 60 Hz is an unnecessary waste of electrical energy and increases fan noise, unless the gamer is using a 100+ Hz monitor and gains an actual advantage from the 100+ Hz refresh rate. (I do not own a 144 Hz display, so what I wrote may be completely invalid.)
Because the human eye doesn't perceive discrete frames, artifacts like ghosting and motion blur remain visible even above 60 FPS. See for example https://www.testufo.com/eyetracking - note how different the background looks depending on which UFO you focus your eyes on. And blurbusters.com has much more interesting content explaining why things matter beyond 60 Hz (and even beyond 144 Hz).
Also, benchmarks on current games suggest that a future CPU-bound game would suffer a similar relative hit. So it could be the difference between, say, 57 and 60 FPS, which would matter especially without FreeSync. This will also likely affect CPUs like older i3s and i5s, where the starting frame rate would not be this high. It's also worth pointing out that these changes further decrease performance that has already taken a hit from the numerous earlier patches for flaws in Intel CPUs.
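To make the ratio argument concrete, here is a tiny sketch of my own (illustrative only, assuming a fully CPU-bound title; GPU-bound games would hide the regression) of how a fixed relative CPU slowdown maps onto frame rate:

```python
def regressed_fps(baseline_fps: float, slowdown_pct: float) -> float:
    """A fixed relative CPU slowdown scales the frame rate of a fully
    CPU-bound game by the same ratio."""
    return baseline_fps * (1.0 - slowdown_pct / 100.0)

# a 5% CPU-side regression turns a 60 FPS game into a 57 FPS game
print(regressed_fps(60, 5))  # -> 57.0
```

So the same percentage hit that is invisible at 120+ FPS can push a 60 FPS game below a non-FreeSync display's refresh rate.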
-
Guest replied
Originally posted by atomsymbol
In my opinion, it doesn't matter that not every Intel chip is affected by this. A developer in many cases doesn't have control over which CPU is in the user's machine. From the viewpoint of a developer, if for example just 5% of users are estimated to be running the software on affected Intel CPUs, then this relatively small percentage translates into a necessity to compile the software with the patched toolchain.
Like CONFIG_RETPOLINE, the Jcc erratum mitigation is a compile-time option that, under normal conditions, cannot be dynamically toggled at runtime. This means that distributions like Ubuntu will need to - if Ubuntu maintainers decide the Jcc erratum is significant enough - compile the default Ubuntu kernel with the patched toolchain and the new -mbranches-within-32B-boundaries option. It is possible, though unlikely in my opinion, that Ubuntu etc. will decide to start distributing two distinct x86 kernel images - one for affected Intel CPUs and one for unaffected AMD/Intel CPUs - and let the bootloader (such as GRUB) decide which kernel to load based on probing the CPU (this would of course require patches enabling the bootloader to make such decisions, including the ability to update the CPU's microcode). Another option would be for the Linux kernel itself to contain multiple versions of the whole kernel, each compiled with different compiler options. This would ensure that every user gets the best performance out of the kernel and does not need to resort to a kernel statically compiled for the worst-case scenario.
When it comes to distros, compiling all software twice is a different problem entirely. The best thing would be to tell affected people to install Gentoo with the correct build flags and screw off; that way no one has to babysit people who were unfortunate enough to buy these shitty chips.
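For reference, the mitigation discussed above is applied at build time by passing the new alignment option through to the assembler. A sketch of what that looks like (a build fragment, not a tested recipe; whether the flag is accepted depends on having a patched/new-enough binutils and GCC):

```shell
# Pass the JCC-erratum alignment option through GCC to GNU as
gcc -O2 -Wa,-mbranches-within-32B-boundaries -c foo.c -o foo.o

# Same idea for a kernel build, via the KCFLAGS make variable
make KCFLAGS="-Wa,-mbranches-within-32B-boundaries"
```

Either way, the realignment is baked into the emitted machine code, which is exactly why it cannot be toggled off at runtime for unaffected CPUs.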
-
Guest replied
Originally posted by atomsymbol
In my opinion, Intel made a very serious, completely avoidable mistake in the design of their Skylake-based CPUs. One of their solutions is to patch compiler toolchains. Enabling the patch will affect about 99% of x86 applications, because a jump instruction crossing a 32-byte boundary or ending on a 32-byte boundary appears with probability close to 100% at least once in an x86 application. Developers who are conscious about the quality of their software will be forced to pass special flags to the toolchain, which will slightly slow down or increase the code size of future applications on all non-Intel CPUs and on all Intel CPUs without the erratum. Because this is an avoidable erratum that should have been caught during CPU design validation, patching the toolchain is quite unusual.
The other solution, letting Intel CPU users update the microcode on their machines, is fine because it slows down neither non-Intel CPUs nor Intel CPUs without the erratum.
The fact that Intel thought it would be a good idea to "fix" this problem by posting patches to GAS is ridiculous, and I hope proprietary software won't be built with these mitigations enabled. Not every Intel chip is affected by this, so they're shooting themselves in the foot, and other chips take the hit as well.
I think Intel should consider putting more money into validating their designs before shitting them out to the market, as that would probably cost less than all the effort spent handling the mess they've made for themselves. I guess that layoff they had a couple of years ago really paid off.
- Likes 7
-
Most of the testing was done at 1440p or 4K with an RTX 2080 Super, which means many of the results may actually be GPU-bound. The results would be more informative and reliable if run at 1080p with an RTX 2080 Ti to minimize any GPU bottleneck.
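The commenter's point can be made mechanical. A crude heuristic of my own (not part of the article's methodology): benchmark the same scene at two resolutions; if lowering the resolution barely raises the frame rate, the GPU was not the limit, so the CPU likely was:

```python
def likely_cpu_bound(fps_low_res: float, fps_high_res: float,
                     tolerance: float = 0.05) -> bool:
    """Heuristic: if dropping the resolution barely changes the frame
    rate, the GPU is not the bottleneck, so the CPU likely is."""
    return abs(fps_low_res - fps_high_res) / fps_high_res <= tolerance

# 144 FPS at 1080p vs 142 FPS at 1440p: resolution barely matters -> CPU-bound
print(likely_cpu_bound(144, 142))  # -> True
# 200 FPS at 1080p vs 120 FPS at 1440p: the GPU is clearly the limit
print(likely_cpu_bound(200, 120))  # -> False
```

Only runs that pass such a check would cleanly expose a CPU-side regression like the JCC microcode hit.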
- Likes 6
-
Originally posted by atomsymbol
In my opinion, Intel made a very serious, completely avoidable mistake in the design of their Skylake-based CPUs. One of their solutions is to patch compiler toolchains. Enabling the patch will affect about 99% of x86 applications, because a jump instruction crossing a 32-byte boundary or ending on a 32-byte boundary appears with probability close to 100% at least once in an x86 application. Developers who are conscious about the quality of their software will be forced to pass special flags to the toolchain, which will slightly slow down or increase the code size of future applications on all non-Intel CPUs and on all Intel CPUs without the erratum. Because this is an avoidable erratum that should have been caught during CPU design validation, patching the toolchain is quite unusual.
The other solution, letting Intel CPU users update the microcode on their machines, is fine because it slows down neither non-Intel CPUs nor Intel CPUs without the erratum.
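The "about 99% of x86 applications" claim can be sanity-checked with a toy model (my own back-of-the-envelope sketch, not atomsymbol's numbers): treat each jump as placed uniformly at random and ask how likely it is that at least one of N jumps crosses or ends on a 32-byte boundary:

```python
def p_at_least_one_affected(n_jumps: int, instr_len: int = 6,
                            align: int = 32) -> float:
    """Toy model: a jump of instr_len bytes placed uniformly at random
    crosses or ends on an align-byte boundary with probability
    instr_len / align; over n_jumps independent placements, return the
    probability that at least one jump is affected."""
    p_single = instr_len / align
    return 1.0 - (1.0 - p_single) ** n_jumps

# even a small binary with ~1000 conditional jumps is almost surely affected
print(p_at_least_one_affected(1000))
```

With a 6-byte jcc, a single placement is affected with probability 6/32 ≈ 0.19, so any realistically sized binary is affected with near certainty, which is why the flag effectively touches every application.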
- Likes 13
-
Phoronix: The Gaming Performance Impact From The Intel JCC Erratum Microcode Update
This morning I provided a lengthy look at the performance impact of the CPU microcode update Intel issued for Skylake through Cascade Lake to mitigate the JCC Erratum, potentially unpredictable behavior when jump instructions cross cache lines. Among the many benchmarks shared in that overview, there wasn't time for any gaming tests prior to publishing. Now, with more time passed, here is an initial look at how Linux gaming performance is impacted by the newly-released Intel CPU microcode for this Jump Conditional Code issue.
- Likes 2