Linus Torvalds Hits Nasty Performance Regression With Early Linux 6.8 Code


  • #31
    Originally posted by Hans Bull

    Difficult to imagine that kernel compilation time was the only workload affected by this regression.
    Again, how would you prepare such a test in advance? Include some Blender renders in the Linux kernel build process to check for regressions? Time the SPECviewperf suite, maybe?

    Some things are difficult to reproduce in an automated test case. That's why we have feature, dev and release branches: each undergoes a different kind of scrutiny.



    • #32
      Why change a powerful 3970X?
      You won't observe regressions if you keep buying the latest heavily advertised processor.
      In addition to that, programmers won't optimize loops because they are unaware of the gap.
      We should also consider the impact of interpreters on overall build time, Python being the worst.



      • #33
        Originally posted by bug77
        You can't do just "old fashioned hands on testing" without a solid base of automated tests. It simply can't be done.
        This world would sleep if things were run by men who said "it can't be done".



        • #34
          Originally posted by sophisticles

          This world would sleep if things were run by men who said "it can't be done".
          You have no idea what you're talking about, apparently.
          I'm working on a relatively simple piece of software atm. It has over 4,000 automated tests (and they're not enough). You could replicate them with manual testing, but right now a build takes 2 minutes; without automated tests, validating a build would take days.



          • #35
            From that message it is interesting to see Linus Torvalds still rocking an AMD Ryzen Threadripper 3970X workstation
            Hardly. Those 2020 chips still more than do the job, and until the very recent, and still very expensive, Threadripper 7000 series, AMD had basically abandoned the Threadripper platform, releasing the 5000 series only in the guise of the OEM-exclusive "Threadripper Pro" series.

            Me, I still use a Threadripper 1950X-based system at work and it definitely still does the job. The only reason I'm looking into a CPU upgrade is that I work on Windows desktop software and will need to move off Windows 10 before it reaches EOL next year, and Microsoft, in its infinite wisdom, decided not to support Zen 1 in Windows 11. How usable these machines still are is reflected in the fact that second-hand Threadripper 2000- and 3000-series CPUs still go for not much less than they did when new.



            • #36
              I thought this CPU already supported "autonomous frequency scaling", just like Linux has since something like v4.x or so.
              More than that, my Zen 3-based Ryzen, at least on Windows, prefers to run at ~100% frequency ~100% of the time and enter C1(E)/C6 powersave/sleep states whenever possible, a.k.a. "race to idle". I know Zen 1(+) works similarly on Windows, with no real autonomous mode but a custom Windows power plan installed by the chipset driver (which forces a ~95% minimum frequency all the time).

              Also, why is it strange that Linus keeps a ~2-generations-old machine if last time he waited ~15 years to upgrade to this one? Especially since this is a fairly "big" one (a 32-core Threadripper is not a tomato clock).
              Last edited by janos666; 12 January 2024, 11:44 AM.



              • #37
                Originally posted by zloturtle

                It is well known that giving developers fast machines risks them writing less efficient code.
                Agreed. I mostly do automation through scripting languages, with some steps being intensive or time-critical. It always made sense to run tests in deliberately resource-starved VMs, to be more certain they'd be fine in the wild. I found a lot of efficiency bottlenecks that would otherwise have been seemingly random and difficult to trace on more performant systems.
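                A lightweight way to approximate that idea without a full VM (a sketch, not the poster's actual setup) is to run the test command in a subshell with tightened ulimits; the `run_starved` helper name and the specific limit values are illustrative assumptions:

```shell
#!/bin/sh
# Run a command under deliberately tight resource limits. The subshell keeps
# the ulimit changes from leaking into the parent shell. The limit values
# below are arbitrary illustrations, not recommendations.
run_starved() {
    (
        ulimit -t 10      # cap CPU time at 10 seconds
        ulimit -n 64      # cap open file descriptors at 64
        "$@"              # run the test command under those limits
    )
}

# Hypothetical usage with a test entry point of your own:
#   run_starved ./run_tests.sh
run_starved echo "limits applied"
```

                A real VM (or a cgroup via `systemd-run`) can also starve memory and CPU count, which plain ulimits cover less well; the subshell trick is just the cheapest first step.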



                • #38
                  Originally posted by bug77

                  Again, how would you prepare such a test in advance? Include some Blender renders in the Linux kernel build process to check for regressions? Time the SPECviewperf suite, maybe?

                  Some things are difficult to reproduce in an automated test case. That's why we have feature, dev and release branches: each undergoes a different kind of scrutiny.
                  How about automating exactly what Linus did? Build the kernel, boot into it, then compile it again and measure the time.
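                  As a sketch of that idea (the baseline figure, the 20% threshold, and the `time_against_baseline` helper are assumptions, and the boot-into-the-new-kernel step is left out), the timing-and-compare part could look like:

```shell
#!/bin/sh
# Time a command and flag it if it runs noticeably slower than a recorded
# baseline. In a real harness this would run inside the freshly booted
# kernel, with the command being the kernel build itself.
time_against_baseline() {
    baseline=$1; shift              # baseline wall-clock seconds
    start=$(date +%s)
    "$@" >/dev/null 2>&1
    end=$(date +%s)
    elapsed=$((end - start))
    echo "took ${elapsed}s (baseline ${baseline}s)"
    # Fail if more than 20% over baseline (threshold is an assumption).
    [ "$elapsed" -le $((baseline * 12 / 10)) ]
}

# Illustrative stand-in for the build; a real run might be something like:
#   time_against_baseline 90 make -j"$(nproc)"
time_against_baseline 10 sleep 1
```

                  Wall-clock seconds are coarse; a production check would likely average several runs and use finer-grained timing, but the shape of the automation is the same.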

