
AMD Showed Off New Threadrippers, 7nm Vega At Computex 2018


  • oleyska
    replied
    Originally posted by Tomin View Post

    That could easily be done by not populating those memory channels. AMD might have done something to improve the behaviour of Infinity Fabric in this configuration, so it could be a little slower when emulated with an EPYC.
    L2 cache latency is 17 cycles on EPYC while TR2 will have a 12-cycle L2, and together with the other cache improvements that makes it a bad comparison.
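    For what it's worth, latency numbers like these come from pointer-chasing microbenchmarks: every load depends on the result of the previous one, so the latency cannot be hidden. Below is a rough Python sketch of the idea, purely illustrative; the interpreter overhead is far too large to resolve a 12-vs-17-cycle L2 difference, so in practice it only shows the cache-versus-DRAM cliff (real measurements are done in C or with tools such as lmbench).

```python
# Pointer-chase latency sketch: each lookup depends on the previous result,
# so the loads cannot overlap and the average time per step approximates
# memory latency (plus a large, roughly constant interpreter overhead).
import random
import time

def chase_ns_per_step(n_elems, steps=1_000_000):
    # Build a random cyclic permutation so hardware prefetchers are defeated.
    perm = list(range(n_elems))
    random.shuffle(perm)
    nxt = [0] * n_elems
    for i in range(n_elems):
        nxt[perm[i]] = perm[(i + 1) % n_elems]

    idx = 0
    start = time.perf_counter()
    for _ in range(steps):
        idx = nxt[idx]
    return (time.perf_counter() - start) / steps * 1e9

# Small working set (cache resident) vs. a large one that spills to DRAM.
for size in (4_096, 4_194_304):
    print(f"{size:>9} elements: {chase_ns_per_step(size):.1f} ns/step")
```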



  • L_A_G
    replied
    Originally posted by chithanh View Post
    Interesting. Maybe Supermicro does not have an official distributor in your country; that would explain it.
    Considering Supermicro is a white box manufacturer, any "distributor" of their hardware is less of an actual distributor and more of a company that just buys their volume business-to-business products (rather than business-to-end-user) and sells them to end users. The whole idea with white box manufacturers is that they sell directly to companies who in turn make the actual end product and take care of support, certification, sales and other very region-specific functions.



  • chithanh
    replied
    L_A_G
    Interesting. Maybe Supermicro does not have an official distributor in your country; that would explain it.

    Do note that there are several variants of the H11SSL, and only the cheapest (H11SSL-i) is 330€.
    For 500€ there is already the dual-socket H11DSi available.

    https://geizhals.de/?cat=mb940&xf=644_Sockel+SP3



  • L_A_G
    replied
    Originally posted by chithanh View Post
    ...
    Maybe prices are different where you live, but the only place that sells the H11SSL where I live charges €500 each and only sells them in packs of 10. I've also not had particularly good experiences with them (or with any other white box maker when dealing with them directly), so I didn't even think it was worth bringing up.



  • chithanh
    replied
    Originally posted by L_A_G View Post
    Apart from being about twice the price of the Asus X399 Prime used in the workstation offered by my current employer there's also the fact that the CPU and RAM take up so much space those boards can't fit full size PCIe cards, which make them a complete non-starter for our use cases (as we also do some heavy GPU compute).
    Huh? To my knowledge only the Gigabyte MZ30/MZ31 mobos have this problem. Plus the price difference between the X399-Prime and the cheapest socket SP3 mobo is only 10%.

    ASUS X399-Prime (300€ here) has four x16 slots (two of them in x8 configuration), but can fit only 3 dual-slot cards.
    Supermicro H11SSL (330€ here) has 3 PCIe x16 slots that are not blocked by anything behind them.
    ASRock EPYCD8 (price not disclosed yet, but I expect the same ballpark) has 4 PCIe x16 slots, and additionally its 3 PCIe x8 slots are open-ended.

    So if you use dual-slot graphics cards, you can install the same number in the X399-Prime as in the H11SSL today, and one more once the EPYCD8 is released.



  • Niarbeht
    replied
    Originally posted by Tomin View Post

    That could easily be done by not populating those memory channels. AMD might have done something to improve the behaviour of Infinity Fabric in this configuration, so it could be a little slower when emulated with an EPYC.
    Might be interesting to see Michael Larabel give it a spin, then. Maybe he'll be able to cook up some tests for us.



  • Tomin
    replied
    Originally posted by Niarbeht View Post
    Aside from that, I wonder if it's possible to configure a 32-core EPYC chip in such a way as to "simulate" a 32-core Threadripper chip by selectively not using certain memory channels or whatever.
    That could easily be done by not populating those memory channels. AMD might have done something to improve the behaviour of Infinity Fabric in this configuration, so it could be a little slower when emulated with an EPYC.
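    On Linux you can already see (and roughly emulate) that kind of asymmetric layout through the NUMA topology. A minimal sketch, assuming a Linux box with the standard sysfs layout; on an EPYC you could combine it with numactl --membind to keep allocations off selected dies, which is the "not populating those channels" idea in software form (the ./my_benchmark name below is just a placeholder).

```python
# List NUMA nodes and how much memory each one has attached (Linux sysfs).
# Dies whose memory channels are unpopulated would show up with MemTotal 0.
import glob
import re

for node_dir in sorted(glob.glob("/sys/devices/system/node/node[0-9]*")):
    with open(f"{node_dir}/meminfo") as f:
        mem_kb = int(re.search(r"MemTotal:\s+(\d+) kB", f.read()).group(1))
    with open(f"{node_dir}/cpulist") as f:
        cpus = f.read().strip()
    node = node_dir.rsplit("/", 1)[-1]
    print(f"{node}: CPUs {cpus}, MemTotal {mem_kb // 1024} MiB")

# To mimic a Threadripper-like layout on a 4-die EPYC, a benchmark could then
# be run with its memory restricted to two of the four nodes, e.g.:
#   numactl --membind=0,1 ./my_benchmark
```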



  • Niarbeht
    replied
    Originally posted by oleyska View Post

    All dies have DRAM controllers; we don't know whether they will have different wiring to utilize 8-channel memory on the existing TR2 platform or whether they will only use 2 memory controllers.
    I don't know how they could solve it, but I've seen nifty pin rearrangement at the CPU package level from AMD before.
    https://www.anandtech.com/show/12906...w-x399-refresh

    Two of the dies aren't actually directly hooked up to anything except for the other two dies. So, two dies are great for memory-intensive workloads, and the other two dies are great for things that can just spin in place in cache or don't need to really run all that quickly I guess.

    I kinda wonder if die-independent clocking, and die-dependent scheduling, might be a possibility to improve performance and power use.

    Aside from that, I wonder if it's possible to configure a 32-core EPYC chip in such a way as to "simulate" a 32-core Threadripper chip by selectively not using certain memory channels or whatever.
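    Die-aware placement can at least be approximated from user space today by pinning a process to the cores of a single NUMA node, which on these chips roughly maps to a die. A minimal sketch, assuming Linux; the sysfs cpulist path and os.sched_setaffinity are standard, the node-to-die mapping is the assumption.

```python
# Pin the current process to the CPUs of one NUMA node (roughly, one die),
# so its threads stay close to that die's caches and memory controller.
import os

def cpus_of_node(node):
    # Parse a sysfs cpulist such as "0-7,32-39" into a set of CPU ids.
    with open(f"/sys/devices/system/node/node{node}/cpulist") as f:
        parts = f.read().strip().split(",")
    cpus = set()
    for part in parts:
        lo, _, hi = part.partition("-")
        cpus.update(range(int(lo), int(hi or lo) + 1))
    return cpus

os.sched_setaffinity(0, cpus_of_node(0))  # 0 = the calling process
print("Now restricted to CPUs:", sorted(os.sched_getaffinity(0)))
```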



  • Tomin
    replied
    Originally posted by Adarion View Post

    Indeed. Word went around these days that Intel's show and the benchmark were to be taken with a full spoon of salt. Reaching 5 GHz and so on, but hardly on all cores and for no longer than a few seconds. They had some really big hoses installed at the back of the case, which makes you wonder what liquid cooling they were using for the show.
    It may be fancy for a show effect, but hardly usable in daily life. Some people also estimated power draws between 400 and 700 W at that speed. Ouch.
    Anandtech had some pictures. A 29-phase power supply for the CPU and cooling rated for 1770 W suggest the real draw could be well above those estimates. I have no idea how long they could run those things at 5 GHz, but at least they got a picture of all cores* at around 5 GHz.
    https://www.anandtech.com/show/12907...u-need-to-know
    We confirmed that Intel was using a water chiller in the 5 GHz demo, a Hailea HC-1000B, which is a 1 HP water chiller good for 1500-4000 liters per hour and uses the R124 refrigerant to reduce the temperature of the water to 4 degrees Celsius. Technically this unit has a cooling power of 1770W, which correlates to the fact that a Corsair AX1600i power unit was being used for the system.
    *Well, technically that picture just shows some of the cores, so it's hard to say whether the rest were running at such high speeds.
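    The chiller figure is easy to sanity-check with basic heat-transfer arithmetic; the numbers below come from the quoted specs, the water properties are the usual textbook values.

```python
# How much the cooling water warms up if the CPU really dumps ~1770 W into it:
# delta_T = P / (mass_flow * c_p), with water c_p ~ 4186 J/(kg*K) and 1 L ~ 1 kg.
power_w = 1770.0                          # rated cooling capacity of the chiller
c_p = 4186.0                              # specific heat of water, J/(kg*K)
for litres_per_hour in (1500, 4000):      # flow range quoted for the HC-1000B
    mass_flow = litres_per_hour / 3600.0  # kg/s
    delta_t = power_w / (mass_flow * c_p)
    print(f"{litres_per_hour} L/h -> water warms by about {delta_t:.1f} K per pass")
```

    So even at the full rated 1770 W the 4 °C water only warms by about a degree per pass, which is what made the sub-ambient 5 GHz demo possible in the first place.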



  • Adarion
    replied
    Originally posted by oooverclocker View Post
    <3 AMD

    By the way, according to Gamers Nexus the 28-core at 5 GHz that Intel was presenting one day before AMD's 32-core announcement was just a Skylake-X cooled by a chiller.

    Really funny - I can't believe that Intel's marketing people think it's a good idea to fool the tech press. AMD ran their 32-core processor with an air cooler to show a setup that people will likely have at home.
    Indeed. Word went around these days that Intel's show and the benchmark were to be taken with a full spoon of salt. Reaching 5 GHz and so on, but hardly on all cores and for no longer than a few seconds. They had some really big hoses installed at the back of the case, which makes you wonder what liquid cooling they were using for the show.
    It may be fancy for a show effect, but hardly usable in daily life. Some people also estimated power draws between 400 and 700 W at that speed. Ouch.

    It'll be interesting to see what clocks AMD will have when it enters the market. But 32 fast cores scream for programs that can make use of them, and even occasional Gentoo compiling won't really keep that monster busy.

