Intel Launches Cooper Lake Xeon CPUs, New Optane Persistent Memory + SSDs

  • #11
    Originally posted by edwaleni View Post
    I am still not a fan of naming CPU types after airlines frequent flyer status.

    I am still waiting for the Diamond and Latinum Xeons to be released. Maybe that is what the 10nm version will be called?

    Perhaps they will come out with a Diamond Xeon "Tiffany Edition" and resell them as jewelry. They cost just as much now.
    Maybe the Latinum edition is already hidden (pressed) inside the gold edition? Everyone knows Latinum is liquid under ambient conditions (on Earth).

    Comment


    • #12
      Originally posted by vladpetric View Post
      Why do you care that it's only 14nm? And do you even know what that means when it comes to semiconductor technology?
      Funny that, I seem to remember Intel saying nm meant a lot when they had the process node technology lead over AMD.

      Comment


      • #13
        Originally posted by Slartifartblast View Post

        Funny that, I seem to remember Intel saying nm meant a lot when they had the process node technology lead over AMD.
        My comment wasn't addressed to Intel.

        Comment


        • #14
          Originally posted by vladpetric View Post
          Why do you care that it's only 14nm? And do you even know what that means when it comes to semiconductor technology?
          It means poor performance per watt vs the competition, which is the key metric for server parts.

          Comment


          • #15
            Originally posted by smitty3268 View Post

            It means poor performance per watt vs the competition, which is the key metric for server parts.
            Not really ...

            10 years ago, yes, one node difference implied considerable power reduction.

            These days, static power (leakage drawn whenever a transistor is powered, even when it is not switching) is much higher and increases considerably at smaller feature sizes.
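The static-vs-dynamic split described above can be sketched with a toy model; every coefficient below is an illustrative placeholder, not measured silicon data:

```python
def total_power(freq_ghz, voltage_v, static_w, switched_cap_nf=1.0, activity=0.3):
    """Toy CMOS power model: P_total = P_static + alpha * C * V^2 * f.

    static_w models leakage, which is paid whenever the chip is powered;
    the dynamic term scales with how fast and how often transistors switch.
    nF * V^2 * GHz works out to watts, so the units are consistent.
    """
    dynamic_w = activity * switched_cap_nf * voltage_v ** 2 * freq_ghz
    return static_w + dynamic_w

# Halving the frequency halves only the dynamic term; the leakage stays.
print(total_power(4.0, 1.2, 10.0))  # static 10 W + dynamic ~1.7 W
print(total_power(2.0, 1.2, 10.0))  # static 10 W + dynamic ~0.9 W
```

The point of the sketch: the larger the static term relative to the dynamic one, the less a frequency (or node) change helps total power.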

            Comment


            • #16
              Originally posted by vladpetric View Post

              Not really ...

              10 years ago, yes, one node difference implied considerable power reduction.

              These days, static power (leakage drawn whenever a transistor is powered, even when it is not switching) is much higher and increases considerably at smaller feature sizes.

              While it is true that at some point in the not-too-distant future it will become impossible to improve performance per watt without switching to different semiconductor materials or to completely different electronic devices, we are not there yet.

              AMD has two great advantages over Intel: lower cost for more cores and more cache (due to its multi-chiplet design), and much better performance per watt (due to the TSMC 7-nm CMOS process).

              If you compare the specifications of any current Intel and AMD CPUs, you will see that at the same power consumption and the same core count, the AMD CPUs always have a much higher base clock frequency.

              The consequence of this is visible in all benchmarks. When an Intel CPU spends most of its time with no more than one or two active cores, so that speed is limited by the maximum turbo frequency, Intel may win. But in every benchmark where enough cores are active that the power limit is reached, at the same core count and the same power consumption the Intel CPUs are much slower than the AMD CPUs, because the AMD parts sustain a higher average clock frequency.
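The effect described above, full turbo with few active cores but much lower sustained clocks once the package power limit binds, can be illustrated with a toy model. Every number here is hypothetical, not a spec of any real CPU:

```python
def sustained_freq_ghz(power_cap_w, active_cores, max_turbo_ghz=4.5,
                       static_w_per_core=2.0, dyn_w_per_core_per_ghz=5.0):
    """All-core frequency once the package power cap binds.

    Assumes dynamic power grows linearly with frequency at a fixed voltage;
    all coefficients are hypothetical, for illustration only.
    """
    budget_w = power_cap_w - active_cores * static_w_per_core
    if budget_w <= 0:
        return 0.0
    return min(max_turbo_ghz, budget_w / (active_cores * dyn_w_per_core_per_ghz))

# Two active cores run at full turbo; 16 active cores are power-limited.
print(sustained_freq_ghz(125, 2))   # 4.5
print(sustained_freq_ghz(125, 16))  # ~1.16
```

A part that spends fewer watts per core per GHz (lower leakage, better process) sustains a higher all-core clock under the same cap, which is the frequency gap the benchmarks show.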


              Static power was worst in the 90-nm Intel CPUs (Prescott/Nocona), where it could exceed half of the total power consumption.

              The CMOS processes that followed, from 2006 onward, had lower static power thanks to innovations such as high-permittivity (high-k) gate dielectrics, FinFETs (introduced by Intel at 22 nm, with Ivy Bridge), and designing CPUs with a mixture of transistor types, some optimized for speed and others for low leakage.

              While designing a CPU with low static power has become quite a complex task in modern processes, it can be done, as should be obvious from the fact that current desktop computers idle at lower power than ever and current laptops have better battery life than ever.


              So the conclusion is that, for now, AMD has a considerable power-efficiency advantage over Intel. Just look at how the best Intel 8-core laptop CPU has a pathetic 2.4 GHz base frequency, while its cheaper AMD competitor has a 3.3 GHz base frequency, and that frequency ratio is confirmed by benchmarks whenever they are run at the same power consumption (many published benchmarks let the Intel CPUs consume up to double the power of the AMD ones).


              Cooper Lake is not competitive with Rome: even with double the AVX-512 throughput, it is cheaper to use twice as many AMD cores, which also gives lower power consumption and better connectivity.

              Nevertheless, there are certain very specialized applications where Cooper Lake can be the best choice. Deep learning is normally better done on GPUs, but professional NVIDIA GPUs are even more expensive than Cooper Lake for a given performance. AMD GPUs are much cheaper than both, but for most applications they require far greater software development effort, due to the lack of suitable libraries and tools, which can be prohibitive.

              There are also other niche applications where Cooper Lake can be best: those that take advantage of features of the Intel server CPUs not yet available on AMD, e.g. direct transfers between the cache and peripherals such as network cards without passing through main memory, or better performance counters for tuning or debugging certain programs.

              Comment


              • #17
                Originally posted by edwaleni View Post
                  I am still waiting for the Diamond and Latinum Xeons to be released. Maybe that is what the 10nm version will be called?
                Intel's 10nm server chips are called Imaginary Lake.

                Comment


                • #18
                  Originally posted by AdrianBc View Post

                  While designing a CPU with a low static power has become quite a complex task in modern processes, it can be done [...] So the conclusion is that for now AMD has a considerable advantage over Intel at power efficiency. [...]
                  I appreciate your comment (seriously), could you kindly provide references to:
                  • static power consumption being considerably better now, with FinFET technology. Ideally, if you had a ratio of static vs dynamic power for current technology, when the processor operates at full speed. Quantum tunneling leakage is still insane when the feature sizes are ~10 nm.
                  • performance per Watt - we're not talking about just frequency here, but frequency * IPC per Watt (yes, I think our beloved Michael Larabel has some on Phoronix, but they're not easy to find).
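The figure of merit in the second bullet, frequency × IPC per watt, is easy to write down; the inputs below are hypothetical, purely to show the comparison:

```python
def perf_per_watt(freq_ghz, ipc, power_w):
    # Billions of instructions retired per second, per watt of package power.
    return freq_ghz * ipc / power_w

# At equal IPC and equal power, the higher sustained clock wins;
# a 20% IPC edge can offset a lower clock (hypothetical inputs).
print(perf_per_watt(3.3, 1.0, 65))  # ~0.0508
print(perf_per_watt(2.4, 1.2, 65))  # ~0.0443
```

This is why comparing frequency alone is not enough: the IPC factor has to be measured on real workloads, and the power figure has to be the actual package draw, not the nominal TDP.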

                  Comment


                  • #19
                    Originally posted by vladpetric View Post

                    I appreciate your comment (seriously), could you kindly provide references to:
                    • static power consumption being considerably better now, with FinFET technology. Ideally, if you had a ratio of static vs dynamic power for current technology, when the processor operates at full speed. Quantum tunneling leakage is still insane when the feature sizes are ~10 nm.
                    • performance per Watt - we're not talking about just frequency here, but frequency * IPC per Watt (yes, I think our beloved Michael Larabel has some on Phoronix, but they're not easy to find).
                    To be absolutely clear, I agree that FinFETs are a necessity: with classical planar transistors, the leakage power at 14nm would be unmanageable. What I'm not convinced of is that they reduce static power consumption to the point that AMD's 7nm provides a so-much-better trade-off than Intel's 14nm++++ (I don't remember how many pluses).

                    In any case, do prove me wrong here (with some references).

                    Comment


                    • #20
                      Originally posted by vladpetric View Post
                      Why do you care that it's only 14nm? And do you even know what that means when it comes to semiconductor technology?
                      At the highest level, Intel launching new processors in 2020 on the same process node as Broadwell, which shipped in September 2014, means that at $10,000 per chip the profit margin is substantial. Fab facilities are extremely expensive to build, and launching a new product on a six-year-old process while charging a premium price for it raises a lot of eyebrows. Launching top-tier CPUs in 2020 that are still on 14 nm also demonstrates how far behind Intel has fallen. As if to obfuscate this fact, look up a few processors on the Intel Ark web site: notice how the "Lithography: 14 nm" line item is omitted from the more recent products, whereas it is present in previous generations? Sounds like deception by omission to me.

                      Comment
