Intel Launches Cooper Lake Xeon CPUs, New Optane Persistent Memory + SSDs


  • vladpetric
    replied
    Originally posted by torsionbar28 View Post
    Yes, the plusses in the '14nm++++' moniker indicate incremental improvements over the original 14nm from 2014; that much is widely understood. However, if there's no appreciable difference in performance per watt, as you claim, why then is Intel pursuing a newer, smaller 10nm process? Clearly there is some significant competitive advantage to be had with the newer, smaller process. The claims that "Intel is behind" and "it doesn't make much difference" are mutually exclusive. In any event, I do agree that Intel has done some impressive refinement of its 14nm process over the years.
    Maybe I wasn't clear enough: I'm saying that the difference in performance per watt between top-of-the-line current Intel processors and AMD processors is not that high, despite the seemingly huge process jump (14nm vs 7nm).

    The difference in power consumption between 14nm and 14nm++ is actually quite high. From the page I cited:

    <<A third improved process, "14nm++", is set to begin in late 2017 and will further allow for +23-24% higher drive current for 52% less power vs the original 14nm process. The 14nm++ process also appear to have slightly relaxed poly pitch of 84 nm (from 70 nm). It's unknown what impact, if any, this will have on the density>>

    52% lower power is not just an incremental improvement; historically, a gain like that required a completely new manufacturing process.
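
    To put that in perspective, here is a back-of-the-envelope calculation (only the 52% figure comes from the wikichip quote above; the assumption that performance stays the same at that operating point is mine, and it is crude):

    Code:
    # Back-of-the-envelope only: taking the quoted "52% less power" at face value
    # and assuming the same performance at that operating point (my assumption,
    # not a measurement), the perf/W ratio of 14nm++ over the original 14nm is:
    power_ratio = 1.0 - 0.52           # relative power of 14nm++ vs original 14nm
    perf_per_watt_gain = 1.0 / power_ratio
    print(f"~{perf_per_watt_gain:.2f}x perf/W at iso-performance")   # ~2.08x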



  • torsionbar28
    replied
    Originally posted by vladpetric View Post
    Is Intel behind AMD in this case? Yes. Is it fair to say that 14nm 2020 technology is the same as 2014 technology? No.

    Does it make that much of a difference to you (in terms of things such as real performance per Watt)? I really don't think so.
    Yes, the plusses in the '14nm++++' moniker indicate incremental improvements over the original 14nm from 2014; that much is widely understood. However, if there's no appreciable difference in performance per watt, as you claim, why then is Intel pursuing a newer, smaller 10nm process? Clearly there is some significant competitive advantage to be had with the newer, smaller process. The claims that "Intel is behind" and "it doesn't make much difference" are mutually exclusive. In any event, I do agree that Intel has done some impressive refinement of its 14nm process over the years.
    Last edited by torsionbar28; 19 June 2020, 03:40 PM.



  • vladpetric
    replied
    Originally posted by torsionbar28 View Post
    At the highest level, Intel launching new processors in 2020 that use the same process node as Broadwell, which shipped in September 2014, means that at $10,000 per chip the profit margin is substantial. Fab facilities are extremely expensive to build. Launching a new product on a six-year-old process and charging a premium price for it raises a lot of eyebrows. Launching top-tier CPUs in 2020 that are still on 14nm also demonstrates how far behind Intel has fallen. In an attempt to obfuscate this fact, look up a few processors on the Intel ARK web site. Notice how they have omitted the "Lithography: 14 nm" line item from their more recent products, whereas this line is present for previous-generation products? Sounds like deception by omission to me.
    See for instance this page:

    https://en.wikichip.org/wiki/14_nm_l..._process#Intel

    Unfortunately (to the best of my knowledge) we don't have an updated density chart. Nonetheless, take a look at the cell-size figures there:

    The High Density (HD) cell at 14nm for Intel, in 2014, was 0.0499 µm²; for IBM/GlobalFoundries it is 0.0810 µm². Same nanometers, yet Intel has roughly 1.6x the density. The gap is not as drastic for High Performance cells, but my point still stands: same nanometers, significantly different densities.
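
    To make the arithmetic explicit (the cell areas are the wikichip figures quoted above; the rest is simple division):

    Code:
    # High Density (HD) cell areas at "14 nm", from the wikichip page linked above
    intel_hd_cell_um2 = 0.0499     # Intel, 2014
    ibm_gf_hd_cell_um2 = 0.0810    # IBM / GlobalFoundries

    # A smaller cell area means more cells per mm^2
    density_ratio = ibm_gf_hd_cell_um2 / intel_hd_cell_um2
    print(f"Intel packs ~{density_ratio:.2f}x more HD cells per unit area")  # ~1.62x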



  • vladpetric
    replied
    Originally posted by torsionbar28 View Post
    At the highest level, Intel launching new processors in 2020 that use the same process node as Broadwell, which shipped in September 2014, means that at $10,000 per chip the profit margin is substantial. Fab facilities are extremely expensive to build. Launching a new product on a six-year-old process and charging a premium price for it raises a lot of eyebrows. Launching top-tier CPUs in 2020 that are still on 14nm also demonstrates how far behind Intel has fallen. In an attempt to obfuscate this fact, look up a few processors on the Intel ARK web site. Notice how they have omitted the "Lithography: 14 nm" line item from their more recent products, whereas this line is present for previous-generation products? Sounds like deception by omission to me.
    14 nm means just one thing: the width of the smallest slits in the photolithography masks (a.k.a. the feature size). Most importantly, it does not represent the size of a transistor (and even the size of a transistor is not a fair benchmark; the size of a functional cell, such as the area of a one-bit memory register, is typically a much better one). You can in fact have widely different densities for the same "nanometers". That wasn't the case a decade ago, but it is definitely the case now.

    Current (2020) 14nm Intel technology is not the same as the 2014 technology. If I remember correctly, it's more than 50% denser, in fact.

    Is Intel behind AMD in this case? Yes. Is it fair to say that 14nm 2020 technology is the same as 2014 technology? No.

    Does it make that much of a difference to you (in terms of things such as real performance per Watt)? I really don't think so.

    Of course, feel free to prove me wrong with actual data.



  • torsionbar28
    replied
    Originally posted by vladpetric View Post
    Why do you care that it's only 14nm? And do you even know what that means when it comes to semiconductor technology?
    At the highest level, Intel launching new processors in 2020 that use the same process node as Broadwell, which shipped in September 2014, means that at $10,000 per chip the profit margin is substantial. Fab facilities are extremely expensive to build. Launching a new product on a six-year-old process and charging a premium price for it raises a lot of eyebrows. Launching top-tier CPUs in 2020 that are still on 14nm also demonstrates how far behind Intel has fallen. In an attempt to obfuscate this fact, look up a few processors on the Intel ARK web site. Notice how they have omitted the "Lithography: 14 nm" line item from their more recent products, whereas this line is present for previous-generation products? Sounds like deception by omission to me.



  • vladpetric
    replied
    Originally posted by vladpetric View Post

    I appreciate your comment (seriously); could you kindly provide references for:
    • static power consumption being considerably better now with FinFET technology. Ideally, a ratio of static vs. dynamic power for current technology when the processor operates at full speed; quantum-tunneling leakage is still insane when feature sizes are ~10 nm.
    • performance per Watt: we're not talking about just frequency here, but frequency * IPC per Watt (yes, I think our beloved Michael Larabel has some numbers on Phoronix, but they're not easy to find).
    To be absolutely clear, I agree that FinFET is a necessity: with classical planar transistors, the leakage power at 14nm would be unmanageable. What I'm not convinced of is that it reduces static power consumption to the point where AMD's 7nm provides a so-much-better trade-off than Intel's 14nm++++ (I don't remember how many pluses).

    In any case, do prove me wrong here (with some references).



  • vladpetric
    replied
    Originally posted by AdrianBc View Post


    While it is true that at some point in the not-too-distant future it will become impossible to improve performance per watt without switching to different semiconductor materials or to completely different electronic devices, we are not there yet.

    AMD has two great advantages over Intel: the lower cost of providing more cores and more cache (thanks to the multi-chiplet design), and the much better performance per watt due to the TSMC 7 nm CMOS process.

    If you compare the specifications of current Intel and AMD CPUs, you will see that at the same power consumption and the same number of cores, the AMD CPUs always have a much higher base clock frequency.

    The consequence of this is visible in all benchmarks. When an Intel CPU spends most of its time with no more than one or two active cores, so that speed is limited by the maximum turbo frequencies, Intel may win the benchmark. But in any benchmark where enough cores are active for the power limits to be reached, the Intel CPUs are much slower than the AMD CPUs at the same core count and the same power consumption, because the AMD parts sustain a higher average clock frequency.

    Static power was worst for the 90 nm Intel CPUs (Prescott/Nocona), where it could exceed half of the total power consumption.

    The CMOS processes that followed, starting around 2006, have had lower static power thanks to innovations such as high-permittivity (high-k) gate dielectrics, FinFETs (introduced by Intel at 22 nm, with Ivy Bridge), and designing CPUs with a mixture of transistor types, some optimized for speed and others for low leakage.

    While designing a CPU with low static power has become quite a complex task on modern processes, it can be done, as should be obvious from the fact that current desktop computers have lower idle power than ever and current laptops have better battery life than ever.

    So the conclusion is that, for now, AMD has a considerable advantage over Intel in power efficiency. Just look at how the best Intel 8-core laptop CPU has a pathetic 2.4 GHz base frequency, while its cheaper AMD competitor has a 3.3 GHz base frequency; the frequency ratio is confirmed by all benchmarks (when they are run at the same power consumption; many published benchmarks let the Intel CPUs consume up to double the power of the AMD parts).

    Cooper Lake is not competitive with Rome, because even with double the AVX-512 throughput it is cheaper to use twice as many AMD cores, which also gives lower power consumption and better connectivity.

    Nevertheless, there are certain very specialized applications where Cooper Lake can be the best choice. While deep learning is normally better done on GPUs, professional NVIDIA GPUs are even more expensive than Cooper Lake for a given performance. AMD GPUs are much cheaper than both Cooper Lake and NVIDIA, but for most applications they require a much greater software development effort, due to the lack of suitable libraries and tools, which can be prohibitive.

    There are also other niche applications where Cooper Lake can be best, ones that take advantage of features of the Intel server CPUs that are not yet available on AMD CPUs, e.g. direct transfers between cache memory and peripherals such as network cards without passing through main memory, or better performance counters for tuning or debugging certain programs.
    I appreciate your comment (seriously); could you kindly provide references for:
    • static power consumption being considerably better now with FinFET technology. Ideally, a ratio of static vs. dynamic power for current technology when the processor operates at full speed; quantum-tunneling leakage is still insane when feature sizes are ~10 nm.
    • performance per Watt: we're not talking about just frequency here, but frequency * IPC per Watt (yes, I think our beloved Michael Larabel has some numbers on Phoronix, but they're not easy to find). A toy sketch of the metric I mean follows below.
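
    For clarity, here is a minimal sketch of that metric; the inputs below are placeholders, not measurements of any real Intel or AMD part:

    Code:
    # Performance per Watt is (IPC * frequency) per Watt, not frequency alone.
    def perf_per_watt(ipc, freq_ghz, power_w):
        """Useful work per Watt, up to a constant scale factor."""
        return ipc * freq_ghz / power_w

    # Hypothetical numbers: a higher-IPC design can beat a higher-clocked one
    # at the same power budget.
    print(perf_per_watt(ipc=2.0, freq_ghz=3.0, power_w=100.0))  # 0.060
    print(perf_per_watt(ipc=1.5, freq_ghz=3.6, power_w=100.0))  # 0.054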



  • Teggs
    replied
    Originally posted by edwaleni View Post
    I am still waiting for the Diamond and Latinum Xeons to be released. Maybe that is what the 10nm version will be called?
    Intel's 10nm server chips are called Imaginary Lake.



  • AdrianBc
    replied
    Originally posted by vladpetric View Post

    Not really ...

    10 years ago, yes, one node difference implied considerable power reduction.

    These days, static power (the leakage a transistor draws simply from being powered, even when it isn't switching) is much higher and increases considerably with smaller feature sizes.

    While it is true that at some point in the not-too-distant future it will become impossible to improve performance per watt without switching to different semiconductor materials or to completely different electronic devices, we are not there yet.

    AMD has two great advantages over Intel: the lower cost of providing more cores and more cache (thanks to the multi-chiplet design), and the much better performance per watt due to the TSMC 7 nm CMOS process.

    If you compare the specifications of current Intel and AMD CPUs, you will see that at the same power consumption and the same number of cores, the AMD CPUs always have a much higher base clock frequency.

    The consequence of this is visible in all benchmarks. When an Intel CPU spends most of its time with no more than one or two active cores, so that speed is limited by the maximum turbo frequencies, Intel may win the benchmark. But in any benchmark where enough cores are active for the power limits to be reached, the Intel CPUs are much slower than the AMD CPUs at the same core count and the same power consumption, because the AMD parts sustain a higher average clock frequency.

    Static power was worst for the 90 nm Intel CPUs (Prescott/Nocona), where it could exceed half of the total power consumption.

    The CMOS processes that followed, starting around 2006, have had lower static power thanks to innovations such as high-permittivity (high-k) gate dielectrics, FinFETs (introduced by Intel at 22 nm, with Ivy Bridge), and designing CPUs with a mixture of transistor types, some optimized for speed and others for low leakage.

    While designing a CPU with low static power has become quite a complex task on modern processes, it can be done, as should be obvious from the fact that current desktop computers have lower idle power than ever and current laptops have better battery life than ever.
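
    As a toy illustration of the split (the coefficients below are made up for illustration, not taken from any datasheet): dynamic power follows the classic alpha * C * V^2 * f switching formula, while static power is leakage that is drawn regardless of activity.

    Code:
    # Toy CMOS power model, illustrative numbers only (not from any real CPU)
    def dynamic_power(alpha, c_farads, v_volts, f_hz):
        # switching power: activity factor * capacitance * V^2 * frequency
        return alpha * c_farads * v_volts**2 * f_hz

    def static_power(i_leak_amps, v_volts):
        # leakage power, present whether or not the logic is switching
        return i_leak_amps * v_volts

    # Hypothetical logic block: at full load dynamic power dominates, but the
    # static term never goes away unless the block is power-gated.
    print(dynamic_power(alpha=0.2, c_farads=1e-9, v_volts=1.0, f_hz=4e9))  # 0.8 W
    print(static_power(i_leak_amps=0.05, v_volts=1.0))                     # 0.05 W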


    So the conclusion is that, for now, AMD has a considerable advantage over Intel in power efficiency. Just look at how the best Intel 8-core laptop CPU has a pathetic 2.4 GHz base frequency, while its cheaper AMD competitor has a 3.3 GHz base frequency; the frequency ratio is confirmed by all benchmarks (when they are run at the same power consumption; many published benchmarks let the Intel CPUs consume up to double the power of the AMD parts).

    Cooper Lake is not competitive with Rome, because even with double the AVX-512 throughput it is cheaper to use twice as many AMD cores, which also gives lower power consumption and better connectivity.

    Nevertheless, there are certain very specialized applications where Cooper Lake can be the best choice. While deep learning is normally better done on GPUs, professional NVIDIA GPUs are even more expensive than Cooper Lake for a given performance. AMD GPUs are much cheaper than both Cooper Lake and NVIDIA, but for most applications they require a much greater software development effort, due to the lack of suitable libraries and tools, which can be prohibitive.

    There are also other niche applications where Cooper Lake can be best, ones that take advantage of features of the Intel server CPUs that are not yet available on AMD CPUs, e.g. direct transfers between cache memory and peripherals such as network cards without passing through main memory, or better performance counters for tuning or debugging certain programs.



  • vladpetric
    replied
    Originally posted by smitty3268 View Post

    It means poor performance per watt vs the competition, which is the key metric for server parts.
    Not really ...

    10 years ago, yes, one node difference implied considerable power reduction.

    These days, static power (the leakage a transistor draws simply from being powered, even when it isn't switching) is much higher and increases considerably with smaller feature sizes.
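
    A rough illustration of why (textbook subthreshold-leakage form; the threshold voltages below are hypothetical, not tied to any specific process):

    Code:
    from math import exp

    # Subthreshold leakage grows exponentially as the threshold voltage drops,
    # and smaller nodes generally come with lower threshold voltages.
    def relative_leakage(vth_volts, n=1.5, thermal_voltage=0.026):
        # I_leak proportional to exp(-Vth / (n * kT/q))
        return exp(-vth_volts / (n * thermal_voltage))

    ratio = relative_leakage(0.30) / relative_leakage(0.40)
    print(f"Lowering Vth from 0.40 V to 0.30 V raises leakage ~{ratio:.0f}x")  # ~13x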

