Gigabyte MZ31-AR0: EPYC Motherboard With Dual 10Gb/s LAN, 16 SATA Ports, Seven PCI-E Slots


  • #21
    Originally posted by ddriver View Post

    It depends on the circuit that's used. Mobos typically use buck converters, which are decently efficient, but at that power target they would definitely require at least a modest heatsink.

    That being said, there are other switching voltage regulator circuits out there that approach the 99% efficiency range. Such a circuit would not need to dissipate more than 2-3 watts of power and would not need any heatsink whatsoever, although it is highly unlikely Gigabyte pioneered anything like that here.
    I doubt GB went with such high-end parts, but that can be checked easily with a detailed examination of the circuits themselves.
    It's still rather befuddling that they cheaped out on a dollar's worth of aluminium heatsinks.

    I also calculated an approximate power draw figure for the DDR4... my original estimate was way too low (~50W instead of the more correct ~250W).
    Based on ServeTheHome's DDR4 power draw tests on a Xeon server, ~250W is the more accurate figure.
    So to sum up: a most likely conventional VRM supplying 400-450W at full load without cooling... yikes.
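
    The arithmetic behind these figures is simple to sketch; the ~450 W load and the two efficiency points below are illustrative assumptions, not measured values:

    ```python
    def vrm_heat_watts(p_out: float, efficiency: float) -> float:
        """Heat dissipated by a converter delivering p_out watts at the given efficiency."""
        return p_out * (1.0 - efficiency) / efficiency

    # A typical buck VRM vs. a hypothetical ~99%-efficient stage, at ~450 W load
    buck_heat = vrm_heat_watts(450.0, 0.90)    # 50 W -> wants a heatsink
    fancy_heat = vrm_heat_watts(450.0, 0.99)   # ~4.5 W -> could run bare
    print(f"90% efficient: {buck_heat:.1f} W, 99% efficient: {fancy_heat:.1f} W")
    ```

    The division by efficiency matters: the VRM's loss is a fraction of its *input* power, not of the power delivered downstream.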



    • #22
      Remember that every 'well engineered' product is designed to fail as soon as possible after the warranty period runs out, just not any sooner. That is also why there are so many refurbished servers for sale all the time, and at a significant discount: enterprises don't really wait for them to fail; they are retired shortly after their warranty period expires.

      It is all about maxing out sales, and you don't do that with overbuilt products that can last for decades.

      If I get this board, putting a heatsink on the VRMs will be the first thing I do. Stock thermal solutions are usually very mediocre. I got a 10% improvement in battery life, working temps and sustained performance simply by replacing the TIM on my Yoga 720.

      A 400-450 watt converter at around 95% efficiency would dissipate 3-4 watts per phase, which shouldn't be a problem given the component surface area, but it certainly won't be running cool.
      Last edited by ddriver; 06 April 2018, 10:06 AM.



      • #23
        Originally posted by OneBitUser View Post
        Is this some kind of joke?
        Gigabyte is kind of a joke in server-grade stuff anyway, even for FreeNAS users. Same goes for Asus, for that matter.

        I assume they intended this board to be placed in a server with server-grade fans; there it could theoretically make sense, as the sheer amount of air moving around would cool the components even bare.

        In a workstation? No. Just no. That's yet another reason to get a mobo that looks like a gaming one (i.e. with heatsinks everywhere) for a workstation build.



        • #24
          Originally posted by starshipeleven View Post
          Gigabyte is kind of a joke in server-grade stuff anyway, even for FreeNAS users. Same goes for Asus, for that matter.

          I assume they intended this board to be placed in a server with server-grade fans; there it could theoretically make sense, as the sheer amount of air moving around would cool the components even bare.

          In a workstation? No. Just no. That's yet another reason to get a mobo that looks like a gaming one (i.e. with heatsinks everywhere) for a workstation build.
          1 - consumer power supply circuits are usually less efficient and often have to accommodate additional overclocking headroom
          2 - fancy radiators have a wow factor with sillies even if completely unnecessary, RGB LEDs - mandatory!
          3 - barely decent tower cases have decent airflow too
          4 - note that the RAM slots actually have dedicated power circuits, which my previous calculation didn't factor in, so there are 16 VRM phases in total. Those should be able to handle around 500 watts of net power draw for the duration of the warranty period without any cooling assistance, even at a lower 90% efficiency point
          5 - "server grade" is actually not as decent as one might expect. I've had a number of Supermicro boards that are actually quite flaky, and their support isn't stellar - it's quite bad, really. I guess they don't care much about individual builders, presumably putting most of their support effort into volume-production server designs. The "individual build" market for server boards is very small compared to consumer boards, and the latter enjoy far better support on an "individual basis"
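
          Point 4 works out roughly as follows; the 500 W load, 16 phases and 90% efficiency are the figures from the post, while the even split of heat across phases is an assumption:

          ```python
          P_LOAD = 500.0     # W of net draw (CPU + RAM), per the estimate above
          EFFICIENCY = 0.90  # pessimistic conversion efficiency
          PHASES = 16        # CPU + RAM VRM phases combined

          total_heat = P_LOAD * (1.0 - EFFICIENCY) / EFFICIENCY  # heat across the whole VRM
          per_phase = total_heat / PHASES                        # assuming an even split
          print(f"total heat ~{total_heat:.1f} W, ~{per_phase:.1f} W per phase")
          ```

          A few watts per phase is within what a bare power stage can shed into the PCB copper, which is what the "no cooling assistance" claim rests on.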



          • #25
            Did the BIOS problem have anything to do with Linux-friendly UEFI booting? https://superuser.com/questions/1123...r-at-each-boot

            Gentoo was booting fine using UEFI on the F3 BIOS for my Gigabyte AB350M-D3H board; then I upgraded to F4 or F10 and it stopped booting. I reported the bug and the fix, and Gigabyte did nothing about the regression. In a more recent BIOS update, ~F20+, I seem to have lost the ability to control RAM timings per channel; I can only have one set of timings applied to both channels now. I can prove all of this with screenshots too. So yeah, it's one of the most stable boards I've had, but these BIOS regressions make me rethink buying another Gigabyte board; these are problems I likely wouldn't have encountered with an ASUS or ASRock board.
            Last edited by audir8; 06 April 2018, 03:11 PM.



            • #26
              Originally posted by ddriver View Post
              1 - consumer power supply circuits are usually less efficient and often have to accommodate additional overclocking headroom
              This isn't a bad thing; oversized components usually fare better over the long haul (and in crappier conditions), and a 5% efficiency difference is not going to break the bank unless you are deploying these at very large scale in a datacenter (aka Google, Amazon, Facebook and friends, who design their own boards for their own needs).

              2 - fancy radiators have a wow factor with sillies even if completely unnecessary, RGB LEDs - mandatory!
              Fancy radiators are still pieces of metal glued to the things that have to be cooled down, so even if they are shaped like a dick and glow with RGB LEDs, they are still functional heatsinks.

              A naked board is naked, and I don't fancy the idea of wasting time to find/tailor/glue heatsinks myself because some bean counter decided to save 20 cents per board.

              3 - barely decent tower cases have decent airflow too
              Unsure of what this means. In general, cases with decent airflow aren't that common, and again, having a "gaming" board ensures that as long as there is air moving, the heatsinks can handle the heat. A board like the Gigabyte above would just cook itself in a poorly ventilated case.

              4 - note that the RAM slots actually have dedicated power circuits, which my previous calculation didn't factor in, so there are 16 VRM phases in total. Those should be able to handle around 500 watts of net power draw for the duration of the warranty period without any cooling assistance, even at a lower 90% efficiency point
              I still think you are overestimating this a little; even 250 W of heat is a ton.

              But still, "without any cooling assistance" is a very bold statement. I really doubt they can survive serious loads outside of a rack case with server fans (without throttling).

              Server boards not designed by morons have a pretty obvious and large radiator on VRMs (Intel board, but it's the same class of product) http://www.asrockrack.com/general/pr...Specifications , or "smaller" dual-socket 2011 server boards also have radiators on VRMs http://www.asrockrack.com/general/pr...Specifications , or even smaller single-socket 2011 workstation-grade boards again have heatsinks on VRMs. http://www.asrockrack.com/general/pr...Specifications

              Also, Supermicro has heatsinks on the VRMs of its Epyc boards https://www.supermicro.nl/Aplus/moth.../H11SSL-NC.cfm

              5 - "server grade" is actually not as decent as one might expect.
              "Server-grade" is more about "quality" than "brand". Theoretically Gigabyte, Asus, MSI and others each have a line of "server-grade" boards, but are they actually high enough quality for that? Hmmm... maybe yes, maybe no (usually no).

              I've had a number of Supermicro boards that are actually quite flaky,
              Yeah, they aren't a gold standard either, I've had a fair share of bad experiences with them too. Especially their IPMI/BMC/whatever.

              and their support isn't stellar - it's quite bad, really. I guess they don't care much about individual builders, presumably putting most of their support effort into volume-production server designs. The "individual build" market for server boards is very small compared to consumer boards, and the latter enjoy far better support on an "individual basis"
              "Far better support" is still a relative term. In most builds I've seen, the main reason to use a prosumer board was that it is pretty cheap compared to the "server-grade" ones, so if it has any issue it can just be dumped (or sold on eBay) and replaced.

              One of the server-oriented brands I've had good experiences with is ASRock Rack. They did their part accepting RMAs when the Avoton (server Atom) hardware bug caused boards with that SoC to just die after a few years, they provided beta storage firmwares to sidestep stupid issues in Marvell controllers as a stop-gap until they had a firmware that was OK, and their support isn't as focused on large B2B as Supermicro's is.



              • #27
                starshipeleven 500 watts of power delivered to the actual components at 90% conversion efficiency will only cause about 50 watts to be dissipated as heat by the power circuit.

                There are 32 power FETs on that mobo; 50 watts at "full load" (CPU and memory) works out to about 1.5 watts per FET on average, which the FETs can absolutely handle.
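
                As a sanity check on that per-FET figure (note the ~50 W number treats the loss as a flat 10% of the delivered power; with efficiency defined as output over input, the loss comes out slightly higher):

                ```python
                P_LOAD = 500.0  # W delivered to CPU and memory
                N_FETS = 32     # power FETs counted on the board

                approx_heat = P_LOAD * 0.10                # the ~50 W approximation used above
                exact_heat = P_LOAD * (1.0 - 0.90) / 0.90  # ~55.6 W if efficiency = out / in
                print(f"per FET: {approx_heat / N_FETS:.2f} to {exact_heat / N_FETS:.2f} W")
                ```

                Either way it lands well under 2 W per FET, so the conclusion is the same.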





                • #28
                  Originally posted by ddriver View Post
                  starshipeleven 500 watts of power delivered to the actual components at 90% conversion efficiency will only cause about 50 watts to be dissipated as heat by the power circuit.
                  Ah ok, that's the power consumed, not the heat generated.



                  • #29
                    Can you check if single socket EPYC can run with only 4 DIMMs instead of 8? I know that platform supports 8-channel RAM, but can it run with just 4 channels?



                    • #30
                      Originally posted by malakudi View Post
                      Can you check if single socket EPYC can run with only 4 DIMMs instead of 8? I know that platform supports 8-channel RAM, but can it run with just 4 channels?
                      Yes, you don't have to use all the channels available on any motherboard.

