AMD Ryzen 7 CPUs Shipping 2 March, Pre-Order Today


  • Originally posted by duby229 View Post

    All you have to do is look at any block diagram... Literally any one of them.


    You can obviously see how it takes the whole module to be an x86 core.



    Even AMD's own diagrams find it necessary to compare a CMT module to a Zen core because that's what a CMT module actually is.
    It's obvious from the pictures that the whole CMT module is needed, but the basic resources are all duplicated. They've been divided differently and aren't totally independent, but there are plenty of parallel structures for 2 "cores". The whole definition of a core is silly anyway. The operating system doesn't see cores; even SMT threads show up as 'processors'. The way Intel does it is not the only way.
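The claim above, that the OS only enumerates logical 'processors' rather than cores, can be sketched quickly. This is a minimal illustration assuming Linux's /proc/cpuinfo field names; the two-thread sample excerpt below is made up for the example:

```python
# Hypothetical /proc/cpuinfo excerpt (Linux layout) for one core
# exposing two hardware threads: the OS lists two "processors".
SAMPLE_CPUINFO = """\
processor : 0
physical id : 0
core id : 0

processor : 1
physical id : 0
core id : 0
"""

def count_processors(cpuinfo: str):
    """Return (logical processors, distinct (package, core) pairs)."""
    logical = 0
    cores = set()
    pkg = core = None
    for line in cpuinfo.splitlines():
        key, _, value = line.partition(":")
        key = key.strip()
        if key == "processor":
            logical += 1            # each entry is one schedulable "processor"
        elif key == "physical id":
            pkg = value.strip()
        elif key == "core id":
            core = value.strip()
            cores.add((pkg, core))  # physical cores are (package, core) pairs
    return logical, len(cores)

print(count_processors(SAMPLE_CPUINFO))  # → (2, 1): 2 "processors", 1 core
```

The scheduler works with the first number; the second only matters for topology-aware placement.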

    Comment


    • Originally posted by edwaleni View Post

      There are still apps in the world that are thread performance sensitive. Perhaps not in desktop, but they do exist. When the Opteron version of Ryzen sprouts we will put it in our lab to check it out. We have many thread sensitive apps still on Sandy & Ivy Bridge. With 4 year life cycles of server hardware, this version from AMD will be a little late but the cost may be enough to wait for.
      Thread sensitive apps = legacy crap. Even Intel has provided multicore systems for years. Phones have 4 cores, desktops are moving from 2-4 towards 4-8 cores, laptops will have 4-8 cores (though perhaps still focused on 4-6), and servers have multiple sockets with multiple cores per socket. If your algorithm depends on a single 5 GHz core, it simply fails these days. Power consumption grows faster than linearly (f(x)=cx) with clock speed, but roughly linearly with core count. Performance scales well with more cores: algorithms chasing ever-higher GHz get speedups of about 2% per year, while focusing on cores gives you 100% every two years.
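The power argument above can be made concrete with the classic CMOS dynamic-power model P = C * V^2 * f, assuming voltage has to rise roughly in proportion to frequency. All numbers below are illustrative, not measurements of any real chip:

```python
# Back-of-the-envelope sketch of the scaling claim: raising clocks costs
# superlinear power, adding cores costs roughly linear power.

def dynamic_power(cap, voltage, freq):
    """CMOS dynamic power: P = C * V^2 * f."""
    return cap * voltage**2 * freq

base = dynamic_power(cap=1.0, voltage=1.0, freq=1.0)

# Doubling clock speed (with V scaling alongside f): about 2^3 = 8x power.
fast = dynamic_power(cap=1.0, voltage=2.0, freq=2.0)

# Doubling cores at the same clock and voltage: about 2x power.
wide = 2 * base

print(fast / base)  # 8.0 — superlinear in frequency
print(wide / base)  # 2.0 — linear in core count
```

That 8x-vs-2x gap is why the industry shifted from frequency scaling to core counts.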

      Comment


      • "When I use the word Core," Humpty Dumpty is rumored to have said to Alice in a rather a scornful tone, "it means just what I choose it to mean — neither more nor less."

        Comment


        • Originally posted by caligula View Post

          It's obvious from the pictures that the whole CMT module is needed, but the basic resources are all duplicated. They've been divided differently and aren't totally independent, but there are plenty of parallel structures for 2 "cores". The whole definition of a core is silly anyway. The operating system doesn't see cores; even SMT threads show up as 'processors'. The way Intel does it is not the only way.
          An x86 core needs everything required to execute x86 code, and for that you need the front end too. A CMT module is -a- core with two integer processors and a floating point processor. You guys forget all about the FPU and the shared L2 cache on the back end, and then don't consider -anything- at all on the front end; it's completely ridiculous.

          Those integer pipelines never see -any- x86 code, ever; by the time they do any work, instructions have already been decoded into native macro-ops. Individually they are -NOT- x86 cores. No way, no how.

          Now, when you consider what an x86 core ACTUALLY is, you can come to the -reality- that even the crippled and scaled-down CMT architectures available to purchase today perform damn well, not thread for thread, but core for core, even years after they became obsolete. That's a true testament to CMT's multithreading scalability.
          Last edited by duby229; 24 February 2017, 03:11 PM.

          Comment


          • Originally posted by caligula View Post

            Thread sensitive apps = legacy crap. Even Intel has provided multicore systems for years. Phones have 4 cores, desktops are moving from 2-4 towards 4-8 cores, laptops will have 4-8 cores (though perhaps still focused on 4-6), and servers have multiple sockets with multiple cores per socket. If your algorithm depends on a single 5 GHz core, it simply fails these days. Power consumption grows faster than linearly (f(x)=cx) with clock speed, but roughly linearly with core count. Performance scales well with more cores: algorithms chasing ever-higher GHz get speedups of about 2% per year, while focusing on cores gives you 100% every two years.
            Agreed, Caligula: some of it is legacy junk, some isn't.

            Call processing, specifically routing (as in voice & video), and call orchestration are very intolerant of latency on x86. The routing code is legacy; the orchestration code is fairly new. Higher IPC means I can process more CPS and use the other cores for less demanding/forgiving tasks. Is it the most efficient use of a Xeon? No way! These applications can only go so far before context switching begins to impact timing, which can cause calls to drop, and in some industries that is inexcusable. You can design around it, but it's just one example of how, in certain software available today, IPC still matters for single threads.
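One common way to design around the context-switch jitter described above is to pin the latency-sensitive process to a dedicated core, so the scheduler never migrates it. A minimal sketch, assuming Linux (os.sched_setaffinity is Linux-only); core 0 is an arbitrary illustrative choice:

```python
import os

def pin_to_core(core: int) -> None:
    """Restrict the calling process to a single CPU (Linux only)."""
    # pid 0 means "the calling process"
    os.sched_setaffinity(0, {core})

pin_to_core(0)
print(os.sched_getaffinity(0))  # now restricted to core 0
```

In a real deployment this is usually paired with isolcpus or cpusets so nothing else runs on that core.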

            Comment


            • Originally posted by duby229 View Post

              An x86 core needs everything required to execute x86 code, and for that you need the front end too. A CMT module is -a- core with two integer processors and a floating point processor. You guys forget all about the FPU and the shared L2 cache on the back end, and then don't consider -anything- at all on the front end; it's completely ridiculous.

              Those integer pipelines never see -any- x86 code, ever; by the time they do any work, instructions have already been decoded into native macro-ops. Individually they are -NOT- x86 cores. No way, no how.

              Now, when you consider what an x86 core ACTUALLY is, you can come to the -reality- that even the crippled and scaled-down CMT architectures available to purchase today perform damn well, not thread for thread, but core for core, even years after they became obsolete. That's a true testament to CMT's multithreading scalability.
              FPU is not required. You should also look a bit at the terminology there. Back in the days of the 286 you had an x86 core and an x87 co-processor (that was the FPU). You could easily say that the number of integer units = the number of x86 cores. L2 cache is not required either to build an x86 core, and the front end is not even part of the x86 architecture.

              To be frank, there is no such thing as "x86 core" nowadays. Both AMD and Intel just have proprietary RISC cores with x86 emulation.

              Comment


              • Originally posted by arakan94 View Post

                FPU is not required. You should also look a bit at the terminology there. Back in the days of the 286 you had an x86 core and an x87 co-processor (that was the FPU). You could easily say that the number of integer units = the number of x86 cores. L2 cache is not required either to build an x86 core, and the front end is not even part of the x86 architecture.

                To be frank, there is no such thing as "x86 core" nowadays. Both AMD and Intel just have proprietary RISC cores with x86 emulation.
                The reason a front end -is- part of the definition of an x86 core is precisely -because- modern processors are superscalar RISC pipelines. There are 6 essential stages in a superscalar pipeline, and instruction decoding is definitely one of them, along with prefetching, which means cache is definitely part of it too.
                Last edited by duby229; 24 February 2017, 08:03 PM.

                Comment


                • Originally posted by caligula View Post

                  Thread sensitive apps = legacy crap. Even Intel has provided multicore systems for years. Phones have 4 cores, desktops are moving from 2-4 towards 4-8 cores, laptops will have 4-8 cores (though perhaps still focused on 4-6), and servers have multiple sockets with multiple cores per socket. If your algorithm depends on a single 5 GHz core, it simply fails these days. Power consumption grows faster than linearly (f(x)=cx) with clock speed, but roughly linearly with core count. Performance scales well with more cores: algorithms chasing ever-higher GHz get speedups of about 2% per year, while focusing on cores gives you 100% every two years.
                  ARM cores behave quite differently from x86 cores, though. Behave, as in: they may have different clock speeds and capabilities, turn on and off independently depending on load, etc. You may have 4 cores that become active or shut off depending on the particular load, and software may never see all 4 cores at once. On x86, cores are always "there", and they are identical to each other.
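The heterogeneity point above is easy to illustrate: on many big.LITTLE ARM SoCs the per-core maximum frequencies differ, while on a 2017-era x86 desktop they are uniform. The frequency tables below are made-up example values, not data from any specific chip:

```python
# Made-up per-core max frequencies (MHz) for two hypothetical systems.
arm_big_little = {0: 1400, 1: 1400, 2: 1400, 3: 1400,   # "LITTLE" cluster
                  4: 2100, 5: 2100, 6: 2100, 7: 2100}   # "big" cluster
x86_desktop = {0: 3600, 1: 3600, 2: 3600, 3: 3600}

def is_heterogeneous(max_mhz_by_core: dict) -> bool:
    """True if the cores advertise more than one distinct max frequency."""
    return len(set(max_mhz_by_core.values())) > 1

print(is_heterogeneous(arm_big_little))  # True  — two core classes
print(is_heterogeneous(x86_desktop))     # False — all cores identical
```

On a real Linux system the same check could read the cpuinfo_max_freq files under /sys/devices/system/cpu/*/cpufreq/.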

                  You might pre-order Ryzen, but actually getting one will probably take some time. It seems most places are "out of stock" already.

                  Glad AMD seems to be doing fine with Ryzen so far. Some competition for Intel.

                  Comment


                  • Originally posted by starshipeleven View Post
                    Not gonna happen.
                    And here you go: Intel's massive price cuts have arrived.

                    Comment


                    • Can't wait to read the one review and benchmarks I really trust.

                      Comment
