GhostRace Detailed - Speculative Race Conditions Affecting All Major CPUs / ISAs
-
Originally posted by Dawn View Post
If you're willing to accept the tradeoffs of a highly static microarchitecture (i.e., being slower, especially on branchy code or code with a lot of dynamic memory behavior) without major speculative side channels, feel free to join us over in Itanium Land.
- Likes 5
Comment
-
Originally posted by energyman View Post
Sometimes I wonder if all those 'security concerns' are even valid. All those side-channel attacks have been happening since the... 90s? No one cared, because in the real world none of it mattered. But then, if you run your software on your own hardware... who cares anyway, even today?
- A lot of software wasn't even running on the internet (or wasn't expected to). Security wasn't as much of an issue back then; the solution was pretty much "let's just take the machines off the internet." But the world is so digitized and interconnected nowadays that this isn't really an option (we have WiFi in fridges now...).
- We weren't in the era of the cloud, running multi-tenant hypervisors in data centers where processes in different VMs belong to completely disparate companies, and for obvious reasons you don't want data from company A to be visible to company B when both are running on the same machine. In the 90s, companies typically had their own bespoke machines in data centers, or even directly on premises.
Last edited by mdedetrich; 13 March 2024, 04:15 AM.
- Likes 2
Comment
-
Originally posted by nranger View Post
Performance is important because it equates to time and energy. Yes, security is critical, but it's a balancing act.
The best estimate I could find for global datacenter energy usage was about 200 terawatt-hours annually. If you made all of them do 1% more work for security mitigations, that's an extra 2,000 gigawatt-hours annually. If my back-of-the-napkin math is correct, that's enough electricity to power roughly 190,000 average US homes for a year.
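The napkin math above is easy to check with a quick script. The per-home figure is an assumption (the EIA puts the average US household at roughly 10,600 kWh per year); with that assumption, the extra energy works out to about 190,000 homes' worth:

```python
# Back-of-the-napkin check of the datacenter energy estimate above.
global_dc_twh = 200   # global datacenter usage, TWh/year (rough estimate)
overhead = 0.01       # 1% extra work from security mitigations
home_kwh = 10_600     # assumed average annual US household usage, kWh

extra_gwh = global_dc_twh * overhead * 1_000   # TWh -> GWh
homes = extra_gwh * 1_000_000 / home_kwh       # GWh -> kWh, then per-home

print(f"extra energy: {extra_gwh:.0f} GWh/year")   # 2000 GWh/year
print(f"equivalent homes: ~{homes:,.0f}")
```

The exact homes figure swings with whatever per-household number you plug in, but the order of magnitude holds either way.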
Let me try to describe it a bit differently: imagine a car driving at a constant speed along the highway, which needs to turn left or right at 10 intersections before reaching its destination. Each time it needs to turn, it reduces its speed by 90% before speeding up again. Keep in mind that the mitigations only affect the turning action, which takes a bit more time; e.g., with mitigations the car may need to reduce its speed by (as just an example) 95% instead of 90%. This does NOT affect the car's top speed on the long straight stretches, which may be the majority of the journey anyway. Only the turning speed, which is itself a minor percentage of the journey, is the "problem."
So in other words, if 9.5 hours of this journey is straight roads and 0.5 hours is turning and waiting at intersections, adding a couple more seconds idling at the stop light at each intersection may not really affect the total time or fuel consumed that much, and depending on the number of intersections you may not even notice it.
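The analogy's arithmetic can be sketched as a toy model (the 3 extra seconds per intersection is a made-up illustrative number, not a measured mitigation cost):

```python
# Toy model of the highway analogy: mitigation overhead hits only the "turns".
straight_h = 9.5       # hours on straight roads (unaffected by mitigations)
intersections = 10
stop_h = 0.05          # hours per intersection without mitigations (0.5 h total)
extra_s = 3            # hypothetical extra seconds idling per intersection

base = straight_h + intersections * stop_h          # 10.0 hours total
mitigated = base + intersections * extra_s / 3600   # add overhead at turns only

print(f"without mitigations: {base:.2f} h")
print(f"with mitigations:    {mitigated:.4f} h")
print(f"slowdown: {100 * (mitigated / base - 1):.3f}%")
```

Even though each turn gets noticeably slower, the end-to-end slowdown is well under a tenth of a percent, because the straights dominate the journey.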
As with so many other mitigations, the real-life impact is very dependent on what workload is being run. If your main workload is synthetic benchmarks, the impact may look very different from that seen by someone running a fileserver, webserver, or database.
http://www.dirtcellar.net
- Likes 5
Comment