AMD EPYC 7351P Linux Performance: 16 Core / 32 Thread Server CPU For ~$750

  • #41
    Originally posted by Brane215 View Post
    By which measure would sinking 90W of heat per octacore be an insurmountable problem?
    That's 90 W per octacore in very fucking close proximity to 3 other 90 W octacores; you can't just use a single waterblock, it would be the same as keeping them all under the same heatspreader (i.e. heat from one die will also heat up the other dies).
    It's not as simple as watercooling a GPU, where you have only one die.

    So you either make 4 tiny waterblocks or you increase the pump and pipe size to have more waterflow.

    It's not insurmountable, it just becomes not worth it outside of "rich man's toy" territory. Especially since no sane company will watercool a server anyway.

    For cases where they are desperate enough, they install special server racks inside a tank of mineral oil or some similar substance that won't cause damage in case of leaks.

    But it's still a massive PITA, I only know of professional cryptocurrency mining doing that on a serious scale.



    • #42
      Originally posted by starshipeleven View Post
      That's 90 W per octacore in very fucking close proximity to 3 other 90 W octacores; you can't just use a single waterblock, it would be the same as keeping them all under the same heatspreader (i.e. heat from one die will also heat up the other dies).
      It's not as simple as watercooling a GPU, where you have only one die.
      As long as a single waterblock touches enough of the 4 dies, it is definitely adequate, and is actually more effective than cooling a single large die (like you find on i9s). The greater the surface area, the more heat you can dissipate, and the IHS of Epyc adds a lot of surface area. That being said, if you have a waterblock that can cover most of the surface of an Epyc, it will keep the CPU cooler than an i9 of the same wattage. But, that also means you're heating up the water quicker.

      To clarify, one die under the same IHS will heat up the other dies, but the gigantic surface area of Epyc allows faster heat removal, with an appropriate cooler. When it comes to liquid cooling, you are (in a way, but not literally) expanding the surface area, since the water moves and collects the heat as it passes through the waterblock. That in turn allows Epyc or TR to have pretty consistent temperatures under full load vs the competition.
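      To put rough numbers on how moving water collects heat: the steady-state coolant temperature rise follows ΔT = P / (ṁ · c_p). A minimal sketch, assuming four ~90 W dies (the figure quoted earlier in the thread) and a few typical loop flow rates; the wattage and flow values are illustrative assumptions, not measurements of any real loop:

```python
# Sketch: coolant temperature rise for a four-die Epyc package.
# Assumes 4 dies at ~90 W each (figure from the thread) and plain water;
# a real loop also has pump, radiator, and block thermal resistances.

WATER_SPECIFIC_HEAT = 4186.0  # J/(kg*K)
WATER_DENSITY = 997.0         # kg/m^3

def coolant_delta_t(power_w: float, flow_lpm: float) -> float:
    """Steady-state coolant temperature rise (K) for a heat load (W)
    at a volumetric flow rate given in litres per minute."""
    mass_flow = flow_lpm / 60.0 / 1000.0 * WATER_DENSITY  # kg/s
    return power_w / (mass_flow * WATER_SPECIFIC_HEAT)

total_power = 4 * 90.0  # four ~90 W octacore dies
for flow in (1.0, 2.0, 4.0):  # typical custom-loop flow rates, L/min
    print(f"{flow:.0f} L/min -> coolant warms by "
          f"{coolant_delta_t(total_power, flow):.2f} K")
```

      Even at a modest 1 L/min the bulk coolant only warms a few kelvin; the hard part is the die-to-water interface, not the water's heat capacity.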



      • #43
        Originally posted by starshipeleven View Post
        It's not as simple as watercooling a GPU, there you have only 1 die.

        So you either make 4 tiny waterblocks or you increase the pump and pipe size to have more waterflow.
        Or you make a custom block for Epyc with split per-die waterflow.




        • #44
          Originally posted by edwaleni View Post
          Pretty amazing watching a complete server board with full power, booting while being immersed in what looks like water and a plexiglas tank.
          Yep. There is a niche DIY segment of (reckless) people keeping their systems in an aquarium filled with more mundane mineral oil. https://www.pugetsystems.com/submerged.php
          (and you can find many videos on youtube)

          It is nowhere near as fire-safe or as high-performing as that 3M fluid (probably Novec https://www.3m.com/3M/en_US/novec-us...rsion-cooling/ ), but it works decently for home PCs and is far cheaper.



          • #45
            Originally posted by Brane215 View Post
            Or you make a custom block for Epyc with split per-die waterflow.
            The metal in the block would still conduct heat to the other dies; that's why I said separate waterblocks.



            • #46
              Originally posted by jrch2k8 View Post
              push your motherboard partners to make proper WS class mobos for Epyc with lots of M.2 slots(or at least 3 as the ThreadRipper ones),
              You can use PCIe cards that provide M.2 slots if you want.
              der8auer used those in his Threadripper 8x RAID0 experiment.


              Originally posted by msroadkill612 View Post
              I fancy the 24 core 1P epyc at ~$1050.
              From a price/performance standpoint, I think the most interesting models are:
              Epyc 7281, 16c/32t, for single- or dual-socket systems, ~660 EUR currently
              Epyc 7401P, 24c/48t, for single-socket systems, ~1180 EUR (the one you mentioned)
              Epyc 7551P, 32c/64t, for single-socket systems, ~2280 EUR

              Especially a two-socket Epyc 7281 system is exceptional value, massive parallel processing power for less than 2000 EUR cost of CPU and mobo.
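              The value claim can be made concrete by dividing the quoted street prices by core and thread counts. A quick sketch using the EUR figures listed above (street prices, so they will drift over time):

```python
# Sketch: EUR per core and per thread for the SKUs quoted in the thread.
# Prices are the approximate street prices mentioned above, not MSRPs.
skus = {
    "Epyc 7281":  {"cores": 16, "threads": 32, "eur": 660},
    "Epyc 7401P": {"cores": 24, "threads": 48, "eur": 1180},
    "Epyc 7551P": {"cores": 32, "threads": 64, "eur": 2280},
}
for name, s in skus.items():
    print(f"{name}: {s['eur'] / s['cores']:.2f} EUR/core, "
          f"{s['eur'] / s['threads']:.2f} EUR/thread")
```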



              • #47
                Originally posted by chithanh View Post
                Especially a two-socket Epyc 7281 system is exceptional value, massive parallel processing power for less than 2000 EUR cost of CPU and mobo.
                True. If one is going to plunge for an SP3 board, one might as well make it dual socket. Supermicro's H11DSi-NT is a bit over €600 in the EU, and that's a dual-socket board with 3× PCIe3 x8, 2× PCIe3 x16 and two 10G NICs.



                • #48
                  And it's the memory these days that actually defines the budgets ...



                  • #49
                    Originally posted by jrch2k8 View Post
                    Someone at AMD please please please convince the boneheads at marketing to push your motherboard partners to make proper WS class mobos for Epyc with lots of M.2 slots (or at least 3, as the ThreadRipper ones have), and while you are at it, call those fabs to push on Epyc production, because no e-tailer on earth sells Epyc CPUs yet. All those Xeons on NewEgg and Amazon make me sad.
                    I would say, make a motherboard with 10 PCIe slots, like those that already existed in the past - I think the form factor was called XL-ATX.

                    With four x16 and six x8 slots you would spend only 112 lanes on the slots, leaving 16 for other things on the motherboard.
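                    The lane arithmetic checks out against the 128 PCIe lanes a single-socket Epyc exposes; a trivial sketch of the budget for this hypothetical XL-ATX layout:

```python
# Sketch: PCIe lane budget for the hypothetical 10-slot layout above,
# against the 128 lanes a single-socket Epyc exposes.
TOTAL_LANES = 128
slots = [16] * 4 + [8] * 6  # four x16 slots and six x8 slots
used = sum(slots)
print(f"slot lanes: {used}, left for onboard devices: {TOTAL_LANES - used}")
```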



                    • #50
                      Originally posted by pegasus View Post
                      And it's the memory these days that actually defines the budgets ...
                      But with 512G and up you're able to use Chrome and Firefox at the same time. Under Gnome 3/Cinnamon or KDE, and with a Windows VM in the background.
                      I kid only a bit - you can't do that with 8 gigs, or you can do it and nothing else under controlled conditions.

