AMD Ryzen 7 5800X Linux Performance


  • #21
    Originally posted by MadeUpName View Post
Can someone please explain AMD's naming conventions to me? Why aren't all of the new generation of chips Ryzen 9? Why are some labelled Ryzen 7? If they are warmed-over last-gen parts, why aren't they 3000-series chips? I find AMD's naming conventions baffling.
    Not sure what you mean, but, the general naming is as follows:
    Ryzen 3 for low end parts.
    Ryzen 5 for low-mid range parts.
    Ryzen 7 for mid-high end parts.
    Ryzen 9 for high end parts.

The reason they use 9 at all is that Intel introduced "Core i9" around the time Zen came out, and AMD's naming makes it convenient for users to gauge performance relative to the competition.



    • #22
      Thanks leipero.



      • #23
        Originally posted by MadeUpName View Post
Can someone please explain AMD's naming conventions to me? Why aren't all of the new generation of chips Ryzen 9? Why are some labelled Ryzen 7? If they are warmed-over last-gen parts, why aren't they 3000-series chips? I find AMD's naming conventions baffling.

AMD Ryzen 9 parts have two chiplets with somewhere between 12 and 16 cores, which is, yes, 24 to 32 threads.

AMD Ryzen 7 is 8 cores / 16 threads.

AMD Ryzen 5 is 6 cores / 12 threads.

Basically 9, 7, 5 are not generations; they are targeted market segments. There used to be Ryzen 3 as well; those were 4-core parts, and that segment has dropped out of the modern AMD lineup.

AMD's series naming is horrible; hopefully it is fixed with the jump to 5000 this time around.

CPU core designs are named Zen followed by a number; the higher the number, the newer the CPU design (this will become important in a bit). Of course it is not that simple: there is Zen+ between Zen 1 and Zen 2 to add extra confusion.

Everything in the 5000 series, as long as AMD sticks to what they say, will be the Zen 3 CPU design.
4000 series: Zen 2 with an APU (integrated graphics).
3000 series: Zen 2 without an APU, Zen+ with an APU. This is where the great mess starts.
2000 series: Zen+ without an APU, Zen 1 with an APU.
1000 series: Zen 1 without an APU.

There are a handful of exception chips AMD made that are Zen+ sold as 1000-series parts.

AMD got their APU and non-APU parts badly out of sync; the series number was based on the year of release, not the technology inside. From the 5000 series forward, the CPU series will hopefully be aligned with the technology inside. The series mess made picking which CPU works with which motherboard a bit harder than it should have been.

With this fixed naming, if AMD sticks to it, the Zen 4 design should be the 6000 series.
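The series-to-core-design mapping described above can be sketched as a small lookup table. This is a hypothetical helper (names are my own, not anything AMD publishes), and it ignores the handful of exception chips mentioned:

```python
# Hypothetical lookup table for the (series, has_apu) -> Zen core design
# mapping described in the post above. Exception chips (e.g. Zen+ parts
# sold under 1000-series numbers) are deliberately not modeled.
SERIES_TO_ZEN = {
    (5000, False): "Zen 3",
    (4000, True):  "Zen 2",
    (3000, False): "Zen 2",
    (3000, True):  "Zen+",
    (2000, False): "Zen+",
    (2000, True):  "Zen 1",
    (1000, False): "Zen 1",
}

def zen_generation(series: int, has_apu: bool) -> str:
    """Return the Zen core design for a given series number, or 'unknown'."""
    return SERIES_TO_ZEN.get((series, has_apu), "unknown")

print(zen_generation(3000, has_apu=True))   # Zen+
print(zen_generation(3000, has_apu=False))  # Zen 2
```

The point of the table is exactly the mess the post describes: the same series number maps to two different core designs depending on whether the part has an APU.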



        • #24
          Originally posted by atomsymbol

          ? "scale past 4 uops per x86 instruction" ?



This isn't making sense. A Zen 3 core can decode 8 µops/cycle from the µop cache and dispatch 6 ops/cycle. Considering that 90% of the work in applications is concentrated in loops (and half of those loops might be parallelizable), it is pretty much inevitable that we will get to experience a 100 ops/cycle/core x86 CPU during our lifetimes. It doesn't matter that internally the 100 ops/cycle/core CPU will be using an instruction encoding different from the programmer-visible external x86 instruction encoding.

          https://www.anandtech.com/show/16214...5700x-tested/2

          Most high-performance ARM CPUs are utilizing a µop cache as well, so you cannot use the argument that Zen's µop cache is no longer x86, because that would ultimately lead to the conclusion that ARM is no longer ARM.

The last truly-x86 CISC CPU was the i386, introduced in 1985, with an IPC (instructions per clock) of less than 0.5, if I recall the data correctly.

          (I predict that you might try to reply as if you already did write everything the right way in your post and I completely misunderstood it. You didn't.)
          Maybe -you- don't understand...

IPC as a comparison metric -ONLY- works when you are comparing products using the exact same architecture. A Zen instruction is -not- the same as a Tigerlake instruction, which is -not- the same as an A14 instruction... ARM products -NEED- higher IPC because each instruction accomplishes -less- actual work. IPC between different products and architectures is not a comparable metric.

          Sorry, but you are totally wrong. You apparently don't even understand what IPC even is.

EDIT: x86 instructions get decoded into RISC-like micro-instructions. Those micro-instructions are called µops, and they have a minimum complexity that is derived from the microarchitecture itself.
          Last edited by duby229; 12 November 2020, 04:28 PM.
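The non-comparability argument above can be made concrete with a toy sketch. The numbers here are invented purely for illustration: the same source-level work can compile to different instruction counts on different ISAs, so the raw instructions-per-cycle ratio alone says nothing about which chip finished the work faster.

```python
def ipc(instructions: int, cycles: int) -> float:
    """Retired instructions per clock cycle."""
    return instructions / cycles

# Invented numbers: the same loop compiled for two different ISAs.
# A fixed-length RISC encoding may need more, simpler instructions
# for the same work than a CISC encoding with complex instructions.
risc_instructions, risc_cycles = 1500, 500   # hypothetical
cisc_instructions, cisc_cycles = 1000, 500   # hypothetical

print(ipc(risc_instructions, risc_cycles))  # 3.0
print(ipc(cisc_instructions, cisc_cycles))  # 2.0
# The higher IPC here does not mean more work done: both CPUs
# finished the same task in the same number of cycles.
```

This is why cross-architecture "IPC" comparisons in reviews are really comparing benchmark scores per clock, not literal retired instructions per clock.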



          • #25
            Originally posted by leipero View Post

            Not sure what you mean, but, the general naming is as follows:
            Ryzen 3 for low end parts.
            Ryzen 5 for low-mid range parts.
            Ryzen 7 for mid-high end parts.
            Ryzen 9 for high end parts.

            The reason why they use 9 at all is that Intel introduced "Core i9" when Zen got out, and AMD naming is very convenient for user to get the idea of performance metrics relative to the competition.
Oh damn, I feel so stupid, I never realized the Ryzen numbers match the Core ones...



            • #26
              Originally posted by uid313 View Post

              We already know how good the Apple A14 Bionic performs and the M1 performs even better.
              https://browser.geekbench.com/ios-benchmarks/
              and
              https://browser.geekbench.com/v5/cpu/4651916



A single-core score of 1719 means that it is faster than the fastest thing either Intel or AMD offers, while using much, much less power.
Unless we know the operating system used, the compiler, the compilation flags, and the code of the benchmark itself, the score is as valuable as the output of /dev/random.
They might measure useless workloads, the code might be written by Apple, the competing CPU might be running Windows Vista, the different architectures might be using different compilers with different optimizations, and so on.
True comparisons can only be made with open-source benchmarks, compiled with -march=native and equivalent compilers and optimizations. It seems it will take a while until we have that, though (we have neither -march= support for the M1 nor -march=znver3).



              • #27
                RIP Intel...



                • #28
                  Originally posted by geearf View Post

Oh damn, I feel so stupid, I never realized the Ryzen numbers match the Core ones...
Maybe you are not, but most people do not care about such things and do not know that...



                  • #29
                    Originally posted by duby229 View Post

Yup, you'd have to compare ARM instructions to x86 µops. Still, x86 really can't scale past 4 µops per x86 instruction, and on average I think 3 is more common.

EDIT: I really doubt we'll ever see an x86 architecture with something like an 8-wide front end. That is, unless CMT-like architectures get reinvested in.
It's the -ENTIRE- reason why x86 traditionally has only 3 or 4 integer units per pipeline; it's -the- most parallelism that can be extracted when decoding x86 instructions.

                    Originally posted by atomsymbol
This isn't making sense. A Zen 3 core can decode 8 µops/cycle from the µop cache and dispatch 6 ops/cycle. Considering that 90% of the work in applications is concentrated in loops (and half of those loops might be parallelizable), it is pretty much inevitable that we will get to experience a 100 ops/cycle/core x86 CPU during our lifetimes. It doesn't matter that internally the 100 ops/cycle/core CPU will be using an instruction encoding different from the programmer-visible external x86 instruction encoding.
The best you can hope for is 4 integer units per pipeline per core. That's it... Your claim of 100 is just plain asinine.
                    Last edited by duby229; 12 November 2020, 05:36 PM.



                    • #30
                      Originally posted by duby229 View Post

                      Maybe -you- don't understand...

IPC as a comparison metric -ONLY- works when you are comparing products using the exact same architecture. A Zen instruction is -not- the same as a Tigerlake instruction, which is -not- the same as an A14 instruction... ARM products -NEED- higher IPC because each instruction accomplishes -less- actual work. IPC between different products and architectures is not a comparable metric.

                      Sorry, but you are totally wrong. You apparently don't even understand what IPC even is.

EDIT: x86 instructions get decoded into RISC-like micro-instructions. Those micro-instructions are called µops, and they have a minimum complexity that is derived from the microarchitecture itself.
Sorry, you are talking about architecture instructions, which is very shaky stuff even on RISC.

Every comparison between CPUs is done with benchmarks that have their own definition of operations, like arithmetic operations on floats.

That is the topic; that is how CPUs are measured by benchmarks, and the A14 is leading there by a good margin (it varies between benchmarks, of course). The architecture details just give insight into how that is achieved.

On x86, the implicit restrictions hinder parallelism (variable-length instructions, strong memory ordering), which hinders scaling the number of architecture instructions running concurrently. But that is the cause; the effect is that benchmarks show better scores than on x86 at the same frequency, or in other words more "logical/source code" operations per clock.
                      Last edited by discordian; 12 November 2020, 05:42 PM.

