AMD Threadripper 2990WX Linux Benchmarks: The 32-Core / 64-Thread Beast


  • #71
    Originally posted by nils_ View Post
    They could do something like AMD does and just sandwich together multiple CPUs in an MCP design. I don't think they're too worried at the moment.
    That would require them to develop something like AMD's Infinity Fabric, which will take quite some time. Sure, they've got something slightly similar, but that's just for allowing multiple chips to access the same memory, and it's not really suited to the kinds of things Infinity Fabric is for.



    • #72
      Originally posted by L_A_G View Post

      That would require them to develop something like AMD's Infinity Fabric, which will take quite some time. Sure, they've got something slightly similar, but that's just for allowing multiple chips to access the same memory, and it's not really suited to the kinds of things Infinity Fabric is for.
      Have you heard of Intel's EMIB technology? It is superior to anything AMD has on virtually every metric (latency, power consumption, ...), and EMIB has been proposed as a standard for DARPA's chiplets initiative.



      • #73
        Originally posted by L_A_G View Post
        Damn... That's one really high performance CPU.

        Where I work we've been so impressed with the original Threadripper series that it's replaced the Xeons we previously offered to our customers as part of our full software+hardware package (together with a really expensive 4K stereo monitor). Unless Intel comes up with something really crazy, and I don't mean anything like their oh-sh*t-we-need-to-cobble-together-something-really-cool-ASAP demo powered by a 1000W water cooler at Computex this year, I think they've basically lost our business for the next couple of years.
        The Computex demo used a hacked Xeon on a modified server board. Since neither Xeons nor server boards are designed for overclocking (*), Intel had to use a chiller to push clocks to 5 GHz and simulate the performance of the forthcoming 28-core Skylake-SP chip.

        (*) The same reason der8auer used a chiller to overclock an EPYC on an SP3 board to simulate the performance of a 32-core Threadripper.



        • #74
          Originally posted by chithanh View Post
          You can see from Phoronix graphs that the 2990WX doesn't consume much more power than the 7980XE, and Phoronix doesn't use any "AVX powervirus".


          So, compared to the 2990WX, the 7980XE has an 85 W lower TDP but only 66 W lower actual AC system power consumption. The remark about Intel staying within TDP while Zen does not therefore seems quite far-fetched when talking about Threadripper.
          I said that some Zen systems fail to satisfy the marketing TDP, not that all Zen chips do. E.g. '15 W' Ryzen mobile chips are usually 25 W or even 35 W, the 2700X has a real TDP of 140 W, and the older '65 W' 1700 was a 90 W chip; other Zen chips work within the official TDP.

          Early engineering samples of first-gen Threadripper violated the rated TDP. E.g. one of the first 180 W samples had a real TDP above 200 W. Final chips worked within the official TDP of 180 W, but they include a power-limit mechanism in the TR4 socket that underclocks the cores below the base clock under full load. Here you have a 1950X in action:

          I suppose second-gen Threadripper works similarly, since it uses the same socket.
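          The "works within the official TDP" claim above can be sketched numerically. A minimal illustration, using hypothetical power samples rather than real measurements: a TDP rating is meant to bound sustained package power, so averaging a sampled power trace under an all-core load shows whether a chip stays inside its rating.

```python
# Illustrative sketch with hypothetical power samples (not real measurements):
# a TDP rating bounds sustained package power, so the average of a sampled
# power trace under an all-core load tells you whether the chip stays
# within its official rating.

RATED_TDP_W = 180  # official first-gen Threadripper 1950X rating

# hypothetical 1-second package power samples (watts) under full load
samples = [176, 181, 179, 183, 178, 180, 177, 182]

avg_power = sum(samples) / len(samples)
within_tdp = avg_power <= RATED_TDP_W

print(f"average package power: {avg_power:.1f} W, within TDP: {within_tdp}")
```

          On a real Linux box, tools such as turbostat report per-package watts that could feed a trace like `samples`.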



          • #75
            Originally posted by L_A_G View Post

            That would require them to develop something like AMD's Infinity Fabric, which will take quite some time. Sure, they've got something slightly similar, but that's just for allowing multiple chips to access the same memory, and it's not really suited to the kinds of things Infinity Fabric is for.
            Intel already has UPI/QPI, although they are slower than Infinity Fabric (on a single chip) and on par with it when connecting another socket.



            • #76
              Originally posted by juanrga View Post
              Have you heard of Intel's EMIB technology? It is superior to anything AMD has on virtually every metric (latency, power consumption, ...), and EMIB has been proposed as a standard for DARPA's chiplets initiative.
              Originally posted by nils_ View Post
              Intel already has UPI/QPI, although they are slower than Infinity Fabric (on a single chip) and on par with it when connecting another socket.
              You do know that Infinity Fabric isn't just a data bus? Intel's new bus may be way faster, but it's a dedicated data bus meant to connect processor chips to memory chips, not processor chips to each other like Infinity Fabric does. Comparing the two is kind of like comparing a car to a boat. Sure, both get you from point A to point B, but they have some pretty different intended use cases.

              The Computex demo used a hacked Xeon on a modified server board. Since neither Xeons nor server boards are designed for overclocking (*), Intel had to use a chiller to push clocks to 5 GHz and simulate the performance of the forthcoming 28-core Skylake-SP chip.
              Funny how they claimed this was actual hardware they were going to ship soon-ish, until some people noticed and they had to admit it was just something hacked together out of server hardware, and then they tried to blame the presenter of the hack-sold-as-a-real-product for simply forgetting to mention it.

              As for what some overclocker did, it's a very different deal when it's just a third party trying to get a performance estimate for a product that isn't out yet, rather than a company presenting a crazily hacked-together mess as if it were a real product on the way.



              • #77
                Originally posted by L_A_G View Post
                You do know that Infinity Fabric isn't just a data bus? Intel's new bus may be way faster, but it's a dedicated data bus meant to connect processor chips to memory chips, not processor chips to each other like Infinity Fabric does. Comparing the two is kind of like comparing a car to a boat. Sure, both get you from point A to point B, but they have some pretty different intended use cases.
                Intel EMIB, which is short for Embedded Multi-die Interconnect Bridge, is not a bus to connect chips to memory, but a multi-die technology beyond Infinity Fabric, MCMs, and interposers.



                Originally posted by L_A_G View Post
                Funny how they claimed this was actual hardware they were going to ship soon-ish, until some people noticed and they had to admit it was just something hacked together out of server hardware, and then they tried to blame the presenter of the hack-sold-as-a-real-product for simply forgetting to mention it.

                As for what some overclocker did, it's a very different deal when it's just a third party trying to get a performance estimate for a product that isn't out yet, rather than a company presenting a crazily hacked-together mess as if it were a real product on the way.
                Intel didn't claim it was actual hardware, but a demo of future hardware:
                "The 28C demo at the keynote is a real product in development. We are optimizing design and process across products and the demo showcased an upcoming product having the capability of 5.0 GHz overclocking across all 28 cores."



                • #78
                  Originally posted by juanrga View Post
                  Intel EMIB, which is short for Embedded Multi-die Interconnect Bridge, is not a bus to connect chips to memory, but a multi-die technology beyond Infinity Fabric, MCMs, and interposers.

                  https://www.intel.com/content/www/us...ndry/emib.html
                  That's not what EMIB is at all. It's a way to connect two adjacent chips on an MCM with high-speed signals. It's not a bus specification. It's not a protocol for communication. It's just a mechanical design trick that can be used for many purposes. To use it in place of Infinity Fabric or some other similar interconnect would require a lot of development work. Intel may have something in mind or in development, but they haven't demonstrated or announced anything.

                  Further, look at their known designs using EMIB as well as the info page you linked. They always show EMIB being used to link two adjacent chips, never more than that. So it's not going to replace MCMs, which can link non-adjacent chips--like the Threadripper and Epyc modules do.



                  • #79
                    Originally posted by willmore View Post
                    That's not what EMIB is at all. It's a way to connect two adjacent chips on an MCM with high-speed signals. It's not a bus specification. It's not a protocol for communication. It's just a mechanical design trick that can be used for many purposes. To use it in place of Infinity Fabric or some other similar interconnect would require a lot of development work. Intel may have something in mind or in development, but they haven't demonstrated or announced anything.

                    Further, look at their known designs using EMIB as well as the info page you linked. They always show EMIB being used to link two adjacent chips, never more than that. So it's not going to replace MCMs, which can link non-adjacent chips--like the Threadripper and Epyc modules do.
                    As its name indicates, Intel EMIB is an interconnect technology (the "I" is for Interconnect). Infinity Fabric is AMD's interconnect technology.

                    Intel EMIB uses different software/hardware layers; one of them is AIB (Advanced Interface Bus). Intel has licensed the AIB standard to the DARPA chiplets program.

                    EMIB is for dense package integration. The distance between dies is reduced to the minimum, which has two extra advantages: lower latency and lower power consumption. Using EMIB one could build something like Threadripper and EPYC, but less power-hungry and without the terrible latency problems that AMD's IF approach has.
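                    The inter-die latency structure being debated here is visible on Linux as NUMA node distances (`numactl --hardware`, or `/sys/devices/system/node/node*/distance`). A minimal sketch, using a hypothetical 4-node distance matrix of the kind a multi-die part might report -- the numbers below are illustrative, not measured:

```python
# Hypothetical NUMA distance matrix for a 4-node multi-die CPU
# (same format as `numactl --hardware` output; values are illustrative).
# 10 = local node; larger values = remote access over the on-package fabric.
distance_matrix = [
    [10, 16, 16, 16],
    [16, 10, 16, 16],
    [16, 16, 10, 16],
    [16, 16, 16, 10],
]

local = distance_matrix[0][0]
worst = max(max(row) for row in distance_matrix)
penalty = worst / local  # relative cost of the farthest remote access

print(f"local: {local}, worst remote: {worst}, penalty: {penalty:.1f}x")
```

                    Reading the real matrix from a 2990WX and from an EMIB-based part would be the way to settle how large the penalty actually is.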

                    EMIB is used in commercial products. Kaby Lake-G uses EMIB, and the Xeon Advanced Performance processor is rumored to be using EMIB with a three-die configuration.
                    Last edited by juanrga; 20 August 2018, 04:25 AM.



                    • #80
                      Originally posted by juanrga View Post

                      [snip]

                      EMIB is for dense package integration. The distance between dies is reduced to the minimum, which has two extra advantages: lower latency and lower power consumption. Using EMIB one could build something like Threadripper and EPYC, but less power-hungry and without the terrible latency problems that AMD's IF approach has.
                      Which has yet to be proven.

                      'terrible latency problems' --- compare _current_ AMD offerings with Intel...

                      EMIB is used in commercial products. Kaby Lake-G uses EMIB, and the Xeon Advanced Performance processor is rumored to be using EMIB with a three-die configuration.
                      Which has yet to be proven to be faster/better.
