Initial Benchmarks Of The AMD EPYC 7F32 Performance On Ubuntu 20.04 LTS

  • Initial Benchmarks Of The AMD EPYC 7F32 Performance On Ubuntu 20.04 LTS

    Phoronix: Initial Benchmarks Of The AMD EPYC 7F32 Performance On Ubuntu 20.04 LTS

    Announced back on 14 April were AMD's newest members of the EPYC 7002 "Rome" family, the 7Fx2 high-frequency processors. On launch day we posted the AMD EPYC 7F52 Linux benchmarks for that 16-core/32-thread CPU with a staggering 256MB L3 cache and clocks up to 3.9GHz. In this article are our initial benchmarks of the EPYC 7F32, the 8-core/16-thread processor with a 128MB L3 cache and clock speeds up to 3.9GHz.


  • #2
    and the winner is.... LA LA Land.

    • #3
      Michael, would it be possible to add the core count (and possibly frequency) to the charts? I know there is a table on page one, but I have a hard time remembering all the specs given the weird naming schemes.

      • #4
        Frequencies would only make sense if turbo is disabled. Base clock is pretty much pointless for comparison these days; it varies too much and is affected by too many factors.

        • #5
          I am still wondering whether the FPU is limited by the bandwidth of L1/L2 or by the FPU itself.

          • #6
            Originally posted by pegasus View Post
            Frequencies would only make sense if turbo is disabled. Base clock is pretty much pointless for comparison these days; it varies too much and is affected by too many factors.
            Agreed, not sure why 'base clock' is even a thing any more. For years now, CPUs have scaled down well below the base clock when load is low (frequency scaling) and up beyond it when load is high (turbo), with several frequency steps in between.

            In the past this was the default workstation profile, while servers would disable frequency scaling so the CPU runs only at a fixed base or turbo frequency to maximize performance. I've noticed this trend changing in recent years, however, with more and more servers adopting the desktop-style 'ondemand' frequency management to save on datacenter power and cooling. Cloud hosting in particular has moved away from running CPUs at a fixed base frequency. For most business applications the performance difference is negligible; it's only with very heavily loaded servers, more commonly seen in HPTC workloads, that CPU performance becomes more of a factor.
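            A quick way to see which style of frequency management a given machine is using is to read the cpufreq governor out of sysfs. The sketch below assumes a Linux system with the standard cpufreq interface; the cpufreq_info helper name is just for illustration.

            ```python
            # Minimal sketch: read the active cpufreq governor and frequency range
            # from sysfs (assumes Linux with a cpufreq driver loaded).
            from pathlib import Path

            def cpufreq_info(cpu: int = 0) -> dict:
                base = Path(f"/sys/devices/system/cpu/cpu{cpu}/cpufreq")
                fields = ("scaling_governor", "scaling_cur_freq",
                          "cpuinfo_min_freq", "cpuinfo_max_freq")
                # Skip attributes the driver does not expose.
                return {name: (base / name).read_text().strip()
                        for name in fields if (base / name).exists()}

            if __name__ == "__main__":
                for key, value in cpufreq_info().items():
                    print(f"{key}: {value}")
            ```

            A governor of 'ondemand' or 'schedutil' means the box scales frequency with load like a desktop, while 'performance' keeps it pinned near the top of its range.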

            • #7
              Unfortunately, Intel has taken the same direction as the airlines by naming their CPUs after frequent flyer status.

              Just as we sit at the gate while the attendant sounds like a Starbucks barista calling out the levels for boarding, so it will be for Intel Xeons.

              "I will take a Xeon Double Latinum with a decaf cache and triple soy boost clock."
              Last edited by edwaleni; 28 April 2020, 10:57 AM.

              • #8
                Originally posted by tchiwam View Post
                I am still wondering whether the FPU is limited by the bandwidth of L1/L2 or by the FPU itself.
                That's easy to calculate. Let's assume an FPU at 2GHz capable of one AVX2 op every clock cycle. AVX2 is 256 bits, or 32 bytes, so you need to be able to feed it 32B * 2GHz = 64GB/s of data to make full use of it. Now dig up real values for the frequency and IPC of the FPU you're interested in and see what the numbers tell you. Then assume the FPU is not fully occupied every clock cycle; I've seen it used about 20% of the time in "math heavy" code. Find the value for your own problem by observing the code flow.

                In real life you find very few problems that fit into the caches, and the bottlenecks tend to be on the DRAM channels.
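                A rough sketch of that back-of-the-envelope calculation, using the example numbers from the post (one 256-bit AVX2 op per cycle at 2GHz); the function name and the utilisation parameter are purely illustrative.

                ```python
                # Operand bandwidth an FPU needs to stay busy, per the post's example:
                # 32 bytes/op * 2 GHz = 64 GB/s at full occupancy (1 GB = 1e9 bytes).
                def required_bandwidth_gbs(freq_ghz: float, vector_bytes: int = 32,
                                           ops_per_cycle: float = 1.0,
                                           utilisation: float = 1.0) -> float:
                    """Data rate in GB/s needed to feed the FPU."""
                    return freq_ghz * vector_bytes * ops_per_cycle * utilisation

                print(required_bandwidth_gbs(2.0))                   # 64.0 GB/s, fully busy
                print(required_bandwidth_gbs(2.0, utilisation=0.2))  # 12.8 GB/s at ~20% use
                ```

                Comparing that figure against measured L1/L2 and DRAM bandwidth for the part in question shows which side is actually the limit.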

