The Peculiar State Of CPU Security Mitigation Performance On Intel Tiger Lake
-
Michael, can you please do one with Zen 3? I've seen an article where they were faster with mitigations enabled (on Windows, though).
For anyone wondering, this is what the kernel reports (with them disabled):
Code:
/sys/devices/system/cpu/vulnerabilities/itlb_multihit:Not affected
/sys/devices/system/cpu/vulnerabilities/l1tf:Not affected
/sys/devices/system/cpu/vulnerabilities/mds:Not affected
/sys/devices/system/cpu/vulnerabilities/meltdown:Not affected
/sys/devices/system/cpu/vulnerabilities/spec_store_bypass:Vulnerable
/sys/devices/system/cpu/vulnerabilities/spectre_v1:Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
/sys/devices/system/cpu/vulnerabilities/spectre_v2:Vulnerable, IBPB: disabled, STIBP: disabled
/sys/devices/system/cpu/vulnerabilities/srbds:Not affected
/sys/devices/system/cpu/vulnerabilities/tsx_async_abort:Not affected
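For anyone who wants to reproduce or summarize such a listing, here is a short shell sketch. The here-doc just reuses the output quoted above; on a live system you would read the files under /sys/devices/system/cpu/vulnerabilities/ directly (e.g. with `grep . /sys/devices/system/cpu/vulnerabilities/*`):

```shell
# Summarize a saved sysfs vulnerability report.
# Sample data copied from the post above; on a live system use e.g.:
#   grep . /sys/devices/system/cpu/vulnerabilities/*
report=$(cat <<'EOF'
itlb_multihit:Not affected
l1tf:Not affected
mds:Not affected
meltdown:Not affected
spec_store_bypass:Vulnerable
spectre_v1:Vulnerable: __user pointer sanitization and usercopy barriers only
spectre_v2:Vulnerable, IBPB: disabled, STIBP: disabled
srbds:Not affected
tsx_async_abort:Not affected
EOF
)
# Count entries whose status starts with "Vulnerable"
vulnerable=$(printf '%s\n' "$report" | grep -c ':Vulnerable')
echo "entries reported Vulnerable: $vulnerable"
```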
-
Originally posted by Michael:
Possible but seemingly unlikely. The "mitigations=off" should bypass all mitigations controllable by the kernel -- hardware optimized or not. All the relevant bits were correctly reported as "Vulnerable" via sysfs when the change was made.
2) Some mitigations exist as CPU firmware (microcode) and are likewise not affected by kernel parameters.
3) Mitigations compiled into software still remain.
So, to disable all the mitigations, you would have to, at the very least, recompile the kernel and userspace. There is no pre-vulnerability firmware for TGL, so that part cannot be avoided.
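To check only the first, kernel-controlled layer, one can look for "mitigations=off" on the kernel command line. A minimal sketch, using an assumed sample string rather than a live system (the firmware and userspace layers described above are untouched either way):

```shell
# Sketch: detect "mitigations=off" on a kernel command line.
# "cmdline" here is an assumed sample; on a real system use:
#   cmdline=$(cat /proc/cmdline)
cmdline="BOOT_IMAGE=/boot/vmlinuz-5.9 root=/dev/sda1 mitigations=off quiet"

case " $cmdline " in
  *" mitigations=off "*) status=disabled ;;
  *)                     status=active ;;
esac
echo "kernel-controlled mitigations: $status"
```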
-
mitigations=on == more cycles spent in mitigations == lower performance.
Or in other terms, your performance per watt goes down as the mitigations go up, while the power usage stays the same.
I think he has run benchmarks in the past that also included power usage. To me, the current benchmark type is clear.
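The perf-per-watt point can be put in numbers. All figures below are purely illustrative, not measurements:

```shell
# Illustrative numbers only: same package power, fewer useful
# operations when cycles go to mitigation overhead.
line=$(awk 'BEGIN {
  watts   = 65      # assumed package power, unchanged either way
  ops_off = 1000    # assumed throughput with mitigations=off
  ops_on  = 900     # assumed throughput with mitigations=on
  printf "perf/W off=%.1f on=%.1f (%.0f%% less)",
         ops_off / watts, ops_on / watts, 100 * (1 - ops_on / ops_off)
}')
echo "$line"
```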
-
The only useful point of mitigations=off is to improve performance, so if this regression is confirmed, the kernel should not disable mitigations (or at least the ones that regress) on such systems.
If one still wanted to disable some mitigation (e.g. for debugging purposes), one could force a specific mitigation option.
-
Originally posted by Dr_ST: Waiting for the IT department to allow me to buy AMD for the data center instead of Intel. The once-eternal Wintel king is dead.
-
Originally posted by Qaridarium:
I will never buy Intel products, that's for sure, but this really looks amazing.
This is proof that the Linux kernel should be much stricter in enforcing all kinds of security mitigations,
because in the end the companies that want to sell CPUs will deliver amazing performance even if we force every kind of security mitigation.
-
To me, it looks like another cheat buried somewhere, like the one that made Meltdown possible (it is still not known exactly how it works).
It could be some specific 'optimization' of the mitigation sequences, or the CPU turning something off in firmware (microcode) when mitigations are detected.
Last edited by Alex/AT; 28 November 2020, 12:29 PM.
-
Originally posted by torsionbar28: For an IT department, they sure sound slow. The superiority of EPYC has been published for years now. We did a server refresh last year, all Dell R7415 EPYC servers. These single-socket EPYC servers replaced quad-socket Xeons (Dell R810). Comparing cost, going with AMD saved us over $100,000 per rack on hardware, and another $200,000 per rack on software licensing (enterprise software licensed per socket). It was really a no-brainer; I'm not sure why any IT department would still be choosing Intel in 2020.
If your IT department had chosen EPYC-based servers instead of quad-socket Xeons, then yes, the scenario you described would have saved your company money. But you described spending the cash on the quad-socket Xeons and software, and then adding to that by buying new EPYC-based servers and more software licenses. Your company didn't save any money; they just added to what they had already spent.
-
Originally posted by sophisticles: Most IT departments these days are very conservative.
Originally posted by sophisticles: In your scenario, assuming you're telling the truth, your IT department needs someone who understands basic business, because they didn't save any money; they actually wasted money. If your IT department had chosen EPYC-based servers instead of quad-socket Xeons, then yes, the scenario you described would have saved your company money. But you described spending the cash on the quad-socket Xeons and software, and then adding to that by buying new EPYC-based servers and more software licenses. Your company didn't save any money; they just added to what they had already spent.
- We priced out new servers to replace the old ones, as the old ones (R810) were EOL: Dell R7415s with AMD EPYC processors, and R740s with Intel Xeon processors. For a similar level of performance, based on core count, frequency, and published benchmarks, a 42U rack full of R740s costs more than $100,000 *more* than a 42U rack full of R7415s, at least with the processors and options we selected. Ergo, we saved over $100,000 by selecting AMD EPYC-powered servers rather than similarly spec'd Intel Xeon servers.
- Since it sounds like you're unfamiliar with enterprise IT: most enterprise software is licensed annually, including the technical support contract. For many enterprise software suites, the licensing cost is per core or per socket. Our software is licensed per socket. An R7415 is a single-socket server; an R740 with Intel Xeons and the same core count and memory footprint requires dual sockets. Ergo, the annual cost of our enterprise software is literally cut in half by selecting AMD EPYC over Intel Xeon for our recent tech refresh.
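The per-socket licensing math above can be sketched with assumed unit prices (all numbers below are hypothetical; the post gives totals, not license prices):

```shell
# Hypothetical license pricing to illustrate per-socket licensing:
# a single-socket server halves the bill versus a dual-socket one.
line=$(awk 'BEGIN {
  per_socket = 5000   # assumed annual license cost per socket
  servers    = 20     # assumed servers per 42U rack
  epyc = per_socket * servers * 1   # single-socket R7415
  xeon = per_socket * servers * 2   # dual-socket R740, similar cores
  printf "annual licensing per rack: EPYC %d, Xeon %d", epyc, xeon
}')
echo "$line"
```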
Last edited by torsionbar28; 29 November 2020, 12:36 AM.