Gigabyte MZ31-AR0: EPYC Motherboard With Dual 10Gb/s LAN, 16 SATA Ports, Seven PCI-E Slots


  • #11
    Originally posted by schmidtbag View Post
    You do know this is meant for large-scale servers, right? That being said, not only is this price point pretty average, but Epyc competes with Xeons, not i9s.
    It's meant for small servers and workstations. It's an E-ATX sized board, so it's suitable for big tower cases that support E-ATX, which means it can literally be used as a desktop. Epyc is a server part in general, and so is the P series with its SMP support cut to a single processor. The mentioned Epyc 7401P is a 24-core part with a recommended price of $1075, which is currently inflated; it can be found at $1150 on Newegg, while the Epyc 7351 is a 16-core part that goes for $776, also on Newegg, which actually makes it the better choice right now. In comparison, the 12-core i9 7920X costs $1090 and the 10-core i9 7900X costs $923, while a decent motherboard for them runs around $300. Keep in mind too that ECC and non-ECC DDR memory modules cost about the same now. So if you need a desktop system that you can throw lots of PCI-E devices into, I don't see why not.
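    To put those street prices on a per-core basis, here is a quick sketch in Python; all figures are early-2018 street prices (treated as assumptions, not current quotes), and the 12-core i9 is assumed to be the 7920X:

```python
# Rough dollars-per-core comparison; prices are early-2018 street
# prices and should be treated as assumptions, not current quotes.
parts = {
    "Epyc 7401P (24c)": (1150, 24),
    "Epyc 7351 (16c)":  (776, 16),
    "i9 7920X (12c)":   (1090, 12),
    "i9 7900X (10c)":   (923, 10),
}

for name, (price, cores) in parts.items():
    print(f"{name}: ${price / cores:.0f} per core")
```

Under these assumed prices the Epyc parts land just under $50 per core, while the i9s are roughly twice that, which is the poster's point.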

    Comment


    • #12
      Yes, it is definitely not meant for "large scale servers". Such boards use proprietary form factors and are tailored to a specific chassis design.

      This one is sort of a bastard child - available SKUs are under-powered for a workstation scenario and over-powered for a home server scenario. It is not useful as a GPU compute system either, with half the PCIE slots obstructed...

      It is a rare breed of synergy - AMD finally has a competitive design, but both they and the mobo vendors struggle to put it to a good prosumer use case.

      And while it is understandable that AMD may want to target big server for its high margins and high volume, it makes no sense to do that exclusively at the expense of other markets, not when they can easily produce SKUs to address them. The problem with big server is that Intel is deeply entrenched in that market, and it is easy for corporations to shell out on poor-value products when they spend other people's money and get to write it off as an expense, so there is not much incentive in that market to jump ship. AMD could easily have gotten more revenue from prosumers who seek high performance at a better price/performance ratio than the piss-poor one Intel provides.

      I would not consider Intel's HEDT a competitor though... As someone who suffered a six-digit loss due to the absence of ECC support, I cannot in good faith recommend a system without ECC support for any task that is even remotely important. The rate of cosmic-ray-induced errors grows proportionally with the amount of data processed, and the rate of gradual silent RAM failure grows proportionally with the number of RAM modules you are running, especially contemporary high-density modules.
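      As a rough illustration of how error exposure scales with capacity and uptime, here is a back-of-envelope sketch in Python; the FIT rate is a made-up round number for illustration only, not a measured figure for any real module:

```python
def expected_flips(gib_of_ram, hours, fit_per_mbit=5000):
    """Expected memory errors, assuming a constant per-bit error rate.

    FIT = failures per 10^9 device-hours; the default of 5000 FIT/Mbit
    is an arbitrary illustrative value, not a vendor spec."""
    mbits = gib_of_ram * 1024 * 8  # GiB -> Mbit
    return mbits * fit_per_mbit * hours / 1e9

# Doubling either the RAM or the uptime doubles the expectation,
# which is the poster's point about high-density, many-module systems.
one_dimm = expected_flips(16, 24 * 365)
full_box = expected_flips(128, 24 * 365)
print(one_dimm, full_box)
```

Whatever the real per-bit rate is, the linear scaling means a 128 GiB box accumulates eight times the expected errors of a 16 GiB one over the same period.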
      Last edited by ddriver; 05 April 2018, 07:21 PM.



      • #13
        Originally posted by ddriver View Post
        As someone who suffered a six digit loss due to absence of ECC support, I cannot in good faith recommend a system without ECC support for any task that is even remotely important.
        Ouch.

        I was unaware of EPYC's ECC issues. I thought AMD said all Ryzens would support ECC.

        IMO, ECC is a must for all servers and mission-critical clients. Clients which modify rich, persistent data residing on servers should use ECC. The only devices where I don't care about it are used only for web browsing, media consumption, and gaming.



        • #14
          Originally posted by ddriver View Post
          The layout is pukatronic. Way to render half of the PCIE slots unusable for long cards. I'd prefer to scrap half the RAM slots; even if you go for UDIMMs you can get 128 gigs of RAM in there, which is plenty for the vast majority of tasks, and you don't lose half of the PCIE slots. And the extra board space could have hosted a couple of extra M.2 slots, which would not interfere with long PCIE cards. A CPU with 128 PCIE lanes, and just a single M.2, really? And half of the PCIE slots blocked out? Great thinking there, Gigabyte...
          The layout is a little bizarre. There are not too many x16 PCI-E cards that are half length. Most enterprise expansion cards are x8. Most x16 cards are GPUs, and all the high-end GPUs, whether for gaming, CAD, or compute, are full length.

          Also as Michael pointed out, a single M.2 is an odd choice. Everyone buying this board will demand redundant storage. They'd have been better off putting some NVMe connectors on the board like SuperMicro did with their Epyc lineup.

          Originally posted by ddriver View Post
          Alas, while EPYC is great for servers, the complete absence of lower core higher clock parts makes it less than ideal for workstations. It is OK for stuff like rendering, but performs very poorly in scenarios that are clock sensitive like for example DAW. It is disappointingly slow at workloads that cannot scale up to the full amount of cores, but even for those that do scale well, you have to buy the very expensive 24 and 32 core parts to get it to make up for the low clocks.

          AMD's EPYC lineup is very lacking in this respect. Ryzen and TR have shown that the core can do 3.4 to 3.6 GHz with a decent amount of efficiency, yet instead of having higher clocks for the lower-core EPYC parts, the clocks are actually even lower.

          An 8-core EPYC at 2.1 GHz base clock, when they also have a 32-core part at 2.2 GHz base clock.
          The top 16-core part has a 2.9 GHz boost clock, while the 32-core part has 3.2 GHz. The 8-core part is also capped at 2.9 GHz boost. Seriously? 8 cores cannot go higher than 32 cores?
          Lower-core server parts are a HUGE market, and AMD is not even trying to address it, with that single 8-core part and its pathetic clocks.
          WTF AMD? There should be a 3.2 GHz base clock 8-core part and a 3 GHz 16-core part. Uniprocessor, at a competitive price.
          I noticed this as well; it is kind of strange, and a first for AMD. The Opteron 4300 and 6300 series had higher-clocked lower-core parts. They must be targeting virtualization, cloud, and supercomputers with the Epyc platform, because those folks want as many cores as possible. Think the AWSes, Googles, and Facebooks of the world.

          The only enterprise application I can think of for low-core high-clock server chips is Oracle and their horrendously obtuse per-core licensing model.

          Also notice there are no low-power Epyc chips; the last generation of Opteron had HE and EE models that were low-TDP parts.

          It seems like the current Epyc offerings are the replacement for server oriented Opteron 6300. Here's hoping AMD releases another series of Epyc more analogous to the workstation oriented Opteron 4300...



          • #15
            One of my first computers was a VIC-20, so for a few of you out there that gives you an idea of my age. I certainly don't have a use for one of these EPYC systems, but I can't help but be impressed with the platform, especially when you can get a system together relatively cheaply. It kinda blows my mind when I think about how far we have come in one generation.



            • #16
              Originally posted by schmidtbag View Post
              You do know this is meant for large-scale servers, right? That being said, not only is this price point pretty average, but Epyc competes with Xeons, not i9s.
              That Epyc only competes with Xeons is what AMD thinks too, and that is why they told Michael not to compare Threadripper against Epyc:
              Originally posted by Michael View Post
              AMD requested I not run any side-by-side tests of Threadripper and EPYC due to them being intended for different markets
              But that is just another example of how AMD marketing is incompetent.

              I think it is mostly the price that decides which parts compete against each other. Single- and dual-socket Epyc mobos are available in E-ATX form factor, and dual Epyc 7281 CPUs plus a Supermicro H11DSi will cost about as much as a Core i9 7980XE CPU alone.
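              The arithmetic behind that claim, sketched in Python; the component prices are approximate 2018 street prices and should be read as assumptions, not quotes:

```python
# Approximate 2018 street prices in USD (assumptions for illustration).
epyc_7281 = 700       # per 16-core Epyc 7281
h11dsi = 600          # Supermicro H11DSi dual-socket board
i9_7980xe = 1999      # Intel's launch list price for the 18-core i9

dual_epyc_platform = 2 * epyc_7281 + h11dsi
print(f"dual Epyc: ${dual_epyc_platform} vs 7980XE alone: ${i9_7980xe}")
```

With these assumed figures, a complete 32-core dual-socket platform lands at roughly the price of the Intel CPU on its own, before the Intel motherboard is even counted.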

              If Michael could at some point get the hardware for a dual 7281 (or dual 7301) vs. 7980XE comparison, that would be great. Unfortunately he chose the MZ31-AR0 over the similarly priced H11DSi, even though he already has a single-socket Epyc test platform.

              Originally posted by Zola View Post
              Keep in mind too that ECC and non-ECC DDR memory modules cost about the same now. So if you need a desktop system that you can throw lots of PCI-E devices into, I don't see why not.
              Epyc also supports LR-DIMMs, which you can actually buy in 32 GB and 64 GB sizes, and which are all ECC.

              Originally posted by coder View Post
              I was unaware of EPYC's ECC issues. I thought AMD said all Ryzens would support ECC.

              IMO, ECC is a must for all servers and mission-critical clients. Clients which modify rich, persistent data residing on servers should use ECC. The only devices where I don't care about it are used only for web browsing, media consumption, and gaming.
              What ECC issues? There are none to my knowledge (there's an article by hardwarecanucks which spreads misinformation, but I digress). Intel's HEDT platform does not even offer ECC support.



              • #17
                This is more for server than workstation. Far more for server than workstation. Lack of ECC is very weird.

                For workstation usage I would seriously think about ASRock's X399M, which is clearly aimed at workstation use and is also micro-ATX. It supports ECC RAM like all other Threadripper and AM4 ASRock AMD boards: https://www.tweaktown.com/reviews/85...ew/index2.html



                • #18
                  Originally posted by coder View Post
                  I was unaware of EPYC's ECC issues. I thought AMD said all Ryzen's would support ECC.
                  You might have gotten the impression I was burned by AMD hardware. Nope, I was burned by Intel HEDT hardware that has ECC disabled. That is the whole reason why today I am reluctant to even touch something that doesn't have proper ECC support, preferably at a decent price.

                  EPYC doesn't have an ECC issue; it has a low-clocks issue. It is Ryzen and TR, the SKUs that do support ECC and provide high clocks, where support is kinda iffy for lack of any commitment from motherboard vendors. On boards that support it, it SEEMS to work on the hardware side, and it is even worse on the software side of things.
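                  On that software side, one quick sanity check is whether the kernel's EDAC driver is loaded and reporting error counters at all. A Linux-only sketch in Python; the sysfs paths are the standard EDAC ones, but whether anything shows up depends entirely on the board, BIOS, and driver:

```python
import glob
import os

def edac_status():
    """List the kernel's EDAC memory controllers and error counters.

    An empty list means no EDAC controller is registered -- on a board
    with 'iffy' ECC support, that absence is itself a useful signal."""
    controllers = []
    for mc in sorted(glob.glob("/sys/devices/system/edac/mc/mc*")):
        entry = {"controller": os.path.basename(mc)}
        # ce_count / ue_count = corrected / uncorrected error totals
        for counter in ("ce_count", "ue_count"):
            path = os.path.join(mc, counter)
            if os.path.isfile(path):
                with open(path) as f:
                    entry[counter] = int(f.read().strip())
        controllers.append(entry)
    return controllers

print(edac_status())
```

A rising ce_count means ECC is catching and correcting flips; if no mc* directory exists despite ECC DIMMs being installed, the "hardware seems to work but software doesn't" situation described above is exactly what you are looking at.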

                  ECC's importance is rather understated, and with ECC modules being in the same ballpark as premium non-ECC RAM, AMD could score HUGE with prosumers.

                  I've heard on and on about how unimportant ECC is from fanboys, especially when Intel launches $2000 CPUs that don't support it.

                  But the fact remains that for an identical CPU chip, Intel charges a 50% price premium just to not have ECC disabled. Because yeah, all of those CPUs come with ECC support, but for most of them Intel would rather disable and throw away the functionality just so it can milk the people who really need it for an additional 50%.

                  AMD could capitalize hugely on that greed, as all their CPUs ship with ECC support enabled. They just need to give a few mobo makers a little push to offer a complete implementation and do some basic validation.
                  Last edited by ddriver; 06 April 2018, 05:46 AM.



                  • #19
                    Is this some kind of joke?

                    A motherboard in this class that is unbootable with recent BIOSes?
                    Clearance issues at E-ATX size?

                    I also find the VRM circuitry and its cooling (or rather the lack thereof) rather underwhelming.

                    I could not pop a 180W TDP CPU into that board in good conscience... and the maximum of 1TB of DDR4 RAM will have a noticeable power draw as well, so we should be talking about something in the ~400-450W region.

                    Even at 90% efficiency, that would result in around 45-50W of heat being generated in the VRM, which is quite a lot for a few bare chips to handle. Even a simple heatsink would have been a godsend, considering that a workstation built on this board will most likely be used for serious work, so it will have to sustain full load for long periods.
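                    The conversion from efficiency to waste heat is simple arithmetic; a sketch in Python (the ~425 W load is the midpoint of the estimate above, and the 90% efficiency is an assumption):

```python
def vrm_loss_watts(load_w, efficiency):
    """Heat dissipated in the VRM: input power minus delivered power."""
    return load_w / efficiency - load_w

# ~425 W delivered at 90% efficiency -> roughly 47 W of waste heat.
print(vrm_loss_watts(425, 0.90))
```

Note the loss is a bit higher than a naive "10% of the load" estimate, because the efficiency applies to the input power rather than the output.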

                    IMHO this is a ridiculous showing for a $650 motherboard.

                    The CPU and RAM sockets should have been placed above the PCIe slots, which would have been possible had Gigabyte sacrificed a PCIe slot, or made two of them single-slot.
                    Also, consumer motherboards often have M.2 slots on their backs... so even 3 or 4 NVMe slots should have been no problem to install.
                    Sure, one can use a PCIe x16 to 4x NVMe adapter, but that already sacrifices an expansion slot, and you "only" have 76 PCIe lanes exposed to you out of EPYC's 128 to start with.

                    All in all, not a nice board.
                    Last edited by OneBitUser; 06 April 2018, 06:57 AM. Reason: I calculated an approximate power draw figure for the DDR4... my original estimate was waay too low (~50W instead of the more correct ~250W).



                    • #20
                      Originally posted by OneBitUser View Post
                      Is this some kind of joke?

                      A motherboard in this class that is unbootable with recent BIOSes?

                      I also find the VRM circuitry and its cooling (or rather the lack thereof) rather underwhelming.

                      I could not pop a 180W TDP CPU into that board in good conscience... also the maximum of 1TB of DDR4 RAM will have a noticeable power draw as well, so we should be talking about something in the 220-230W region. Even with 90% efficiency, that would result in around 20-25W of heat being generated on the VRM, which is quite a lot for a few bare chips to handle. Even a few simple heatsinks would have been a godsend, considering that a workstation built on this board will most likely be used for serious work, so it will have to sustain full load for long times.
                      It depends on the circuit that's used. Mobos typically use buck converters, which are decently efficient, but would definitely require at least a modest radiator at that power target.

                      That being said, there are other switching voltage regulator circuits out there that approach the 99% efficiency range. Such a circuit would not need to dissipate more than 2-3 watts of power, and would not need any heatsink / radiator whatsoever. Although it is highly unlikely Gigabyte went and pioneered that here.

                      A better buck converter implementation could push efficiency to about 95%, but still, at such a price point it wouldn't kill them to add a basic heatspreader at the very least.
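                      How strongly the waste heat depends on converter efficiency can be seen with a quick sweep in Python, using the ~225 W load figure from the quoted post (the efficiency values are the assumptions discussed above):

```python
def vrm_heat(load_w, efficiency):
    """Heat left in the VRM: input power minus delivered power."""
    return load_w / efficiency - load_w

# Sweep the three efficiency figures discussed in the thread.
for eff in (0.90, 0.95, 0.99):
    print(f"{eff:.0%} efficient: {vrm_heat(225, eff):.1f} W of heat")
```

Going from 90% to 99% efficiency cuts the dissipation by roughly an order of magnitude, which is why the exotic near-99% designs would need no heatsink at all while a standard buck converter does.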
                      Last edited by ddriver; 06 April 2018, 07:02 AM.

