Intel Xeon Platinum 8380: 2021 vs. 2022 Performance For Ubuntu, Clear Linux, CentOS Stream

    Phoronix: Intel Xeon Platinum 8380: 2021 vs. 2022 Performance For Ubuntu, Clear Linux, CentOS Stream

    With Intel Xeon Sapphire Rapids expected to make more of a splash coming up, it's a good time to revisit the Intel Xeon Platinum 8380 "Ice Lake" performance to see how the Linux software performance has evolved since last year's launch. In this article are benchmarks of the dual Xeon Platinum 8380 server from May 2021 with CentOS Stream, Clear Linux, and Ubuntu compared to fresh installs now of those latest Linux distribution releases.

    https://www.phoronix.com/review/inte...ake-linux-2022

  • #2
    If I remember correctly, Clear Linux was using the powersave governor as well in your last Linux vs. Windows benchmarks.

    • #3
      Originally posted by Volta View Post
      If I remember correctly, Clear Linux was using the powersave governor as well in your last Linux vs. Windows benchmarks.
      It only does on ~laptop hardware.
      Michael Larabel
      https://www.michaellarabel.com/

      • #4
        Originally posted by Michael View Post

        It only does on ~laptop hardware.
        OK, that would explain why Windows wasn't as hopeless as usual.

        • #5
          This once again proves that anyone who is not using the performance governor is doing themselves & their CPU a huge disservice!

          Even on my 200+ Watts Intel Rocket Lake power-sipper, using intel_cpufreq performance does not mean that the fan is constantly spinning in its fastest state, because the cores are idling in the deepest sleep-state 99% of the time.
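The governor being discussed is exposed through sysfs; as a quick sketch (standard Linux cpufreq paths — the loop simply prints nothing on systems without cpufreq support), you can check the active governor per policy like this:

```shell
# Print the active scaling governor for each CPU frequency policy.
# These sysfs paths are the standard Linux cpufreq locations; nothing
# is printed on systems where cpufreq is unavailable.
for p in /sys/devices/system/cpu/cpufreq/policy*/scaling_governor; do
    if [ -r "$p" ]; then
        printf '%s: %s\n' "$p" "$(cat "$p")"
    fi
done

# Switching every policy to the performance governor (requires root):
#   echo performance | sudo tee /sys/devices/system/cpu/cpufreq/policy*/scaling_governor
```

The same switch can also be made with `cpupower frequency-set -g performance` from the kernel's linux-tools package.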

          • #6
            Originally posted by Linuxxx View Post
            This once again proves that anyone who is not using the performance governor is doing themselves & their CPU a huge disservice!

            Even on my 200+ Watts Intel Rocket Lake power-sipper, using intel_cpufreq performance does not mean that the fan is constantly spinning in its fastest state, because the cores are idling in the deepest sleep-state 99% of the time.
            True. Furthermore, it makes Linux's competition look good in some tests.

            • #7
              Originally posted by Volta View Post

              True. Furthermore, it makes Linux's competition look good in some tests.
              Very much true!

              Hopefully Michael will one day test Asahi Linux with the apple_cpufreq performance governor vs. macOS on M1/M2, just to witness Apple lunatics having another mental break-down...

              For anyone who missed it, Digital Foundry recently did a comparison between the "high-end" M1 Ultra and a true high-end Nvidia RTX 3090.
              And honestly, I was shocked to see just how badly these games run on macOS, even when fully native as in the case of World of Warcraft; no idea how they are coping with these results after having spent thousands of $money on those toys.

              • #8
                Originally posted by Linuxxx View Post

                Hopefully Michael will one day test Asahi Linux with the apple_cpufreq performance governor vs. macOS on M1/M2, just to witness Apple lunatics having another mental break-down...
                And against Windows 11 on AMD and Intel Alder Lake with a 'proper' (performance) CPU governor.

                • #9
                  Michael, if you have the time, could you share the output of tuned-adm active on your CentOS Stream 9 deployment? If you installed from the DVD ISO and used the "Minimal Install" group, then TuneD wouldn't be installed by default (it's in the Standard and Base package groups, which are optional for a minimal install). I typically recommend using Server as the default install choice. Otherwise, on bare metal you should end up with one of the following profiles, as detected and selected by default during installation:

                  Code:
                  # /usr/lib/tuned/<profile>/tuned.conf
                  - throughput-performance:
                      governor=performance
                      energy_perf_bias=performance
                  - balanced:
                      governor=conservative|powersave
                      energy_perf_bias=normal
                  There is a third default profile for detected virtual machines (virtual-guest) that inherits throughput-performance and adds some extra sysctl modifications. If I'm understanding the recommended-profile selection process correctly (i.e. check for a VM, then for a portable user-oriented system, and if neither matches, fall back to performance mode), your system should be defaulting to throughput-performance for these tests as a bare-metal server.

                  Code:
                  # /usr/lib/tuned/recommended.d/50-tuned.conf
                  ...
                  [virtual-guest]
                  virt=.+
                  
                  [balanced]
                  syspurpose_role=(.*(desktop|workstation).*)|^$
                  chassis_type=.*(Notebook|Laptop|Portable).*
                  
                  [throughput-performance]
                  This works on a first-match basis, and all parameters within a section must match for that profile to be selected.
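For reference, the corresponding tuned-adm commands (assuming the tuned package is installed and the stock profile set in /usr/lib/tuned) are roughly:

```shell
# Show which profile is currently applied:
tuned-adm active

# Ask TuneD which profile the recommend rules above would select for this host:
tuned-adm recommend

# Apply a profile explicitly, e.g. the bare-metal server default:
tuned-adm profile throughput-performance

# Check that the running system still matches the applied profile:
tuned-adm verify
```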

                  Cheers,
                  Mike
                  Last edited by mroche; 19 August 2022, 02:15 PM.

                  • #10
                    Originally posted by Linuxxx View Post

                    Very much true!

                    Hopefully Michael will one day test Asahi Linux with the apple_cpufreq performance governor vs. macOS on M1/M2, just to witness Apple lunatics having another mental break-down...

                    For anyone who missed it, Digital Foundry recently did a comparison between the "high-end" M1 Ultra and a true high-end Nvidia RTX 3090.
                    And honestly, I was shocked to see just how badly these games run on macOS, even when fully native as in the case of World of Warcraft; no idea how they are coping with these results after having spent thousands of $money on those toys.

                    The problem is that DF didn't include a single big GPU compute benchmark in their comparison. In Blender (CUDA rendering) the Nvidia 3090 is still 5 times faster than the M1 Ultra, and the moment OptiX gets turned on things become even more dramatic. The thing is, in Blender CUDA rendering the RTX 3090 even has better performance per watt than the M1 Ultra, on Samsung's now-obsolete 8 nm node.
