One-Line Patch For Intel Meteor Lake Yields Up To 72% Better Performance, +7% Geo Mean


  • #41
    I did work on Westmere; Sandy Bridge was about halfway done when I left. Even then we did thousands of hours of validation per server board, per cycle (Alpha/Beta/Silver/etc.), using internal tooling and pattern testing in a ton of configs across a range of mainstream OSes (Win/RHEL/SLES) - performance was part of it but not the focus of most testing... Mainstream benchmarking was actually a separate process that occurred from Alpha onwards but was pretty lightweight compared to actual hardware validation that happened. The entire validation cycle could take 12-18 months.

    There are hundreds of kernel knobs exposed that can tune performance, so it doesn't surprise me that stuff like this falls through the cracks no matter how robust the process gets.



    • #42
      Originally posted by panikal View Post
      Mainstream benchmarking was actually a separate process that occurred from Alpha onwards but was pretty lightweight compared to actual hardware validation that happened. The entire validation cycle could take 12-18 months.
      I like to tell people: "get it right, then make it fast". If the answer's not right (or the system isn't stable), I don't care how fast it is.

      Of course, I work in software. In hardware, I expect you tend to find stability problems that don't crop up until you run the thing at speed. However, if it's not stable at slow speeds, chances are the situation is going to be even worse at high speeds.



      • #43
        Originally posted by coder View Post
        I like to tell people: "get it right, then make it fast". If the answer's not right (or the system isn't stable), I don't care how fast it is.

        Of course, I work in software. In hardware, I expect you tend to find stability problems that don't crop up until you run the thing at speed. However, if it's not stable at slow speeds, chances are the situation is going to be even worse at high speeds.
        The majority of the testing involved pushing specific patterns of 1s and/or 0s (the patterns and codes were super proprietary) through the various buses and electrical connections at very high rates to trigger electromagnetic interference and "worst case" and "best case" data-flow scenarios. By the time it's released, a server board / CPU has had many tens of thousands of hours of testing maxing electrical connections to their corners... and now we get Meltdown and Spectre.

        Testing is a constant struggle to evolve your methods against increasing complexity and change within a profitable time frame.