GhostRace Detailed - Speculative Race Conditions Affecting All Major CPUs / ISAs

  • #21
    Originally posted by SomeoneElse View Post
    ... You are told that everything is infinite but you might never reach it. It's so frustrating ...
    Which is why we really are alone in the universe.

    • #22
      GhostRace sounds like it's going to be an awesome new version of SuperTuxKart. I'm really looking forward to playing it.

      • #23
        Here we go. Again.

        • #24
          Originally posted by avis View Post

          You're part of it. Secondly, hatred is highly detrimental to your well-being. Thirdly, the universe doesn't care. It's soulless. It just is.
          Well then it seems people aren't part of the universe.

          • #25
            Originally posted by Dawn View Post

            If you're willing to accept the tradeoffs of a highly-static microarchitecture (ie, being slower, especially on branchy code / code with a lot of dynamic memory behavior) without major speculative side-channels, feel free to join us over in Itanium Land.
            I'd bet anyone on here $100 that if Lisa Su got up at Computex 2025 and announced a rebadged version of the EPIC architecture with some extra marketing fluff about "AI-powered instruction scheduling optimization", every single dork on this site who has ragged on Itanium would jump out of his chair applauding her vision to replace the "failed" x86 architecture.

            • #26
              The vulnerability name made me think of The Wraith. Anybody remember that movie?


              • #27
                Originally posted by avis View Post

                You can enjoy mitigations=off but don't forget to use the old microcode as well. Some mitigations are now part of it.
                I do.
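
                For anyone who wants to check what their kernel actually reports before flipping that switch, here's a quick sketch (my own snippet, assuming the standard Linux sysfs layout; the set of files varies by kernel version):

                Code:
                # Print the kernel's view of each known CPU vulnerability and its
                # current mitigation status, as exposed under Linux sysfs.
                from pathlib import Path

                vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")
                for entry in sorted(vuln_dir.iterdir()):
                    print(f"{entry.name}: {entry.read_text().strip()}")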

                • #28
                  Originally posted by Dawn View Post

                  If you're willing to accept the tradeoffs of a highly-static microarchitecture (ie, being slower, especially on branchy code / code with a lot of dynamic memory behavior) without major speculative side-channels, feel free to join us over in Itanium Land.
                  This is a really bad example; Itanium failed for a host of other reasons.

                  • #29
                    Originally posted by energyman View Post
                    sometimes I wonder if all those 'security concerns' are even valid. All those sideband attacks have been happening since the... 90s? No one cared, because in the real world it all did not matter. But then, if you run your software on your own hardware... who cares anyway, even today?
                    The '90s were an extremely different world from now:
                    • A lot of software wasn't even running on the internet (or wasn't expected to). Because of this, security wasn't as much of an issue; the solution was pretty much "let's just take the machines off the internet". But given how digitized/interconnected the world is nowadays, that isn't really an option (we have WiFi in fridges now...)
                    • We weren't in the era of cloud, running multi-tenant hypervisors in data centers where processes in different VMs belong to completely disparate companies, and for obvious reasons you don't want data from company A to be visible to company B when both are running on the same machine. In the '90s, companies typically had their own bespoke machines in datacenters, or even directly on premises.
                    Last edited by mdedetrich; 13 March 2024, 04:15 AM.

                    • #30
                      Originally posted by nranger View Post

                      Performance is important because it equates to time and energy. Yes, security is critical, but it's a balancing act.

                      Best estimates I could find for global datacenter energy usage were about 200 terawatt-hours annually. If you made all of them do 1% more work for security mitigations, that's an extra 2,000 gigawatt-hours annually. If my back-of-the-napkin math is correct, that's enough electricity to power about 80,000 average US homes for a year.
                      Your analogy assumes that the slowdown applies to everything (all code).

                      Let me try to describe it a bit differently: imagine a car driving at a constant speed along the highway that needs to turn left/right at 10 intersections before reaching its destination. Each time it needs to turn, it reduces its speed by 90% before speeding up again. Keep in mind that the mitigation only affects the turning action, which takes a bit more time; e.g. with mitigations the car may need to reduce its speed by (just as an example) 95% instead of 90%. This does NOT affect the car's top speed on the long straight stretches, which may be the majority of the journey anyway. Only the turning speed, which is itself a minor percentage of the journey, is the "problem".

                      So in other words, if 9.5 hours of this journey is straight roads and 0.5 hours is turning/waiting at intersections, adding a couple more seconds of idling at the stop light at each intersection may not really affect the total time/fuel consumed that much, and depending on the number of intersections you may not even notice it.

                      As with so many other mitigations, the real-life impact is very dependent on what workload is being run. If your main workload is synthetic benchmarks, it may look very different from that of someone running a fileserver, webserver or database.
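
                      To put rough numbers on that intuition, here is a minimal sketch (my own illustration, nothing official) of the Amdahl's-law-style arithmetic behind the analogy: the overall slowdown depends on how much of the runtime is spent in the operations the mitigation actually touches.

                      Code:
                      # Illustrative only: total slowdown when a mitigation affects just a
                      # fraction of the workload (the "turns" in the car analogy above).

                      def overall_slowdown(affected_fraction: float, overhead: float) -> float:
                          """Return the total runtime multiplier.

                          affected_fraction: share of runtime spent in mitigated operations.
                          overhead: relative extra cost of those operations
                                    (0.5 means they take 1.5x as long).
                          """
                          return (1 - affected_fraction) + affected_fraction * (1 + overhead)

                      # Mostly "straight road": 5% of runtime affected, each operation 50% slower.
                      print(overall_slowdown(0.05, 0.5))  # 1.025 -> only 2.5% total slowdown

                      # Syscall-heavy fileserver spending 40% of its runtime in affected ops.
                      print(overall_slowdown(0.40, 0.5))  # 1.20  -> 20% total slowdown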

                      http://www.dirtcellar.net
