13-Way IBM POWER9 Talos II vs. Intel Xeon vs. AMD Linux Benchmarks On Debian


  • #51
    Originally posted by Michael View Post
    Originally posted by willmore View Post
    Gee, if only there were a database of benchmark results you could use to make your own comparisons amongst an arbitrary set of machines.
    Or teach people how to use OpenBenchmarking.org
    Don't blame me, I gave a link back then.
    Originally posted by chithanh View Post
    Try this link: OpenBenchmarking.org
    That was for someone who wanted to buy a TR 1950X and wondered whether the (back then) similarly priced Epyc 7401P would perform better.
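    For anyone who wants to reproduce that kind of comparison themselves: the Phoronix Test Suite can take a public OpenBenchmarking.org result ID and re-run the same test selection locally, merging your numbers alongside the published ones for a side-by-side view. A minimal sketch driving it from Python (the result ID below is a hypothetical placeholder, not a real upload):

```python
# Minimal sketch: compare the local machine against a public
# OpenBenchmarking.org result. Assumes phoronix-test-suite is
# installed and on PATH; the result ID is a hypothetical placeholder.
import subprocess

RESULT_ID = "1806267-EXAMPLE-1234567"  # substitute a real public result ID

# Passing a result ID to `benchmark` re-runs that result's tests on
# this machine and merges the outcomes for side-by-side comparison.
subprocess.run(["phoronix-test-suite", "benchmark", RESULT_ID], check=True)
```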

    Originally posted by bridgman View Post
    most of the articles comparing the 1950X to the 7980XE ended up concluding that while the 7980XE was a bit faster, it wasn't worth the much higher price.

    I still think halo products are worth having but not everyone agrees with that.
    Many reviews concluded that the 7980XE was the fastest CPU you could buy. Almost none mentioned Epyc. Which is of course silly, but if you don't show up to the fight, you lose by default (allegedly that was an ATI mantra back when they designed Cypress). And AMD had a competitor ready at $2000, but they chose not to let it fight.

    Given the OpenBenchmarking.org Epyc 7601 results above, which soundly beat the i9-7980XE in most tests, I would have expected the only slightly lower-clocked Epyc 7551P to reinforce the "AMD beats Intel at every price point" position.

    And this is why I consider AMD marketing incompetent: a failure to recognize an opportunity to look good against Intel's halo product, and/or a failure to act on it.



    • #52
      Originally posted by bridgman View Post

      I'm just saying that your "threshold of evil" might be a bit low.
      Unfortunately AMD set itself up for this by requiring the PSP and related technologies, as well as moving away from the open source coreboot implementations and requiring a binary-only firmware solution. While I do understand the market forces at work here, the simple fact is AMD got trounced by a competitor on openness because AMD appears to be selecting for a different set of market features. There is a very fair comparison against IBM where AMD loses, and that is on actual owner control of the machine.

      Before we finally went with POWER as our CPU of choice for our secure systems, we had multiple conversations with various AMD vendors (2015 timeframe, IIRC) trying to get through to AMD proper with one simple message: allow us to disable the PSP, or run our own code on it. Even if it's a separate SKU. Those attempts went nowhere, otherwise we'd be having a very different conversation right now....



      • #53
        Originally posted by madscientist159 View Post

        Unfortunately AMD set itself up for this by requiring the PSP and related technologies, as well as moving away from the open source coreboot implementations and requiring a binary-only firmware solution. While I do understand the market forces at work here, the simple fact is AMD got trounced by a competitor on openness because AMD appears to be selecting for a different set of market features. There is a very fair comparison against IBM where AMD loses, and that is on actual owner control of the machine.

        Before we finally went with POWER as our CPU of choice for our secure systems, we had multiple conversations with various AMD vendors (2015 timeframe, IIRC) trying to get through to AMD proper with one simple message: allow us to disable the PSP, or run our own code on it. Even if it's a separate SKU. Those attempts went nowhere, otherwise we'd be having a very different conversation right now....
        That is *not* what the "evil" discussion here was about though. The initial comments about AMD being "evil" were because our marketing folks didn't want side-by-side benchmarks between Threadripper and Epyc. Nothing to do with open sourcing microcode.

        That said, I agree that enough people have side-tracked the thread into other areas that maybe we should give up on the original discussion.


        • #54
          Originally posted by bridgman View Post

          That is *not* what the "evil" discussion here was about though. The initial comments about AMD being "evil" were because our marketing folks didn't want side-by-side benchmarks between Threadripper and Epyc. Nothing to do with open sourcing microcode.

          That said, I agree that enough people have side-tracked the thread into other areas that maybe we should give up on the original discussion.
          Yes, I was trying to bring things a bit more on topic....I had originally interpreted the statement a tad differently. Marketing will do what marketing will do, that's a given in most companies and not really worth trying to change IMO.

          For what it's worth, we applaud what AMD has been doing with the open graphics stack on Linux, and have been heavily promoting AMD GPUs. Polaris and Vega both work quite well on POWER, and it looks like the ROCm PCIe atomics requirements are being relaxed. There's a bright future there from what I can tell, though again I would ask: can we get compute cards with DisplayPort output* that don't require signed firmware to operate? For a standard GPU the firmware is not much of a concern, since it sits behind the IOMMU and can't really alter the state of a (well designed) system. For compute cards the requirements are quite different, and in practice we have to be able to audit and modify the firmware of a compute card.

          * I say with DisplayPort output because one of the use cases we see is trying to visualize the large datasets coming out of a compute cluster. We don't want Hollywood DRM anywhere in such cards as we are dealing with high-value results, and really want to be able to control and audit the GPU in this case. Without the ability to audit, the GPU has to be treated as a "dumb" display device with no ability to alter state on the host (compute), restricting its potential.
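          On the isolation point above: the Linux kernel exposes IOMMU group assignments through sysfs, so you can check exactly which devices share an isolation domain with the GPU. A minimal sketch, assuming a system booted with the IOMMU enabled:

```python
# Minimal sketch: print IOMMU groups so you can see which devices
# share an isolation domain with the GPU. Assumes Linux with the
# IOMMU enabled (e.g. amd_iommu=on or intel_iommu=on on x86).
from pathlib import Path

groups = Path("/sys/kernel/iommu_groups")
for group in sorted(groups.iterdir(), key=lambda p: int(p.name)):
    devices = sorted(dev.name for dev in (group / "devices").iterdir())
    print(f"group {group.name}: {', '.join(devices)}")
```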
          Last edited by madscientist159; 27 June 2018, 06:13 PM.



          • #55
            Originally posted by madscientist159 View Post
            Yes, I was trying to bring things a bit more on topic....I had originally interpreted the statement a tad differently. Marketing will do what marketing will do, that's a given in most companies and not really worth trying to change IMO.
            Ahh, OK - I saw it as taking things off topic, which is why we had differing views

            Originally posted by madscientist159 View Post
            For what it's worth, we applaud what AMD has been doing with the open graphics stack on Linux, and have been heavily promoting AMD GPUs. Polaris and Vega both work quite well on POWER, and it looks like the ROCm PCIe atomics requirements are being relaxed. There's a bright future there from what I can tell, though again I would ask: can we get compute cards with DisplayPort output* that don't require signed firmware to operate? For a standard GPU the firmware is not much of a concern, since it sits behind the IOMMU and can't really alter the state of a (well designed) system. For compute cards the requirements are quite different, and in practice we have to be able to audit and modify the firmware of a compute card.

            * I say with DisplayPort output because one of the use cases we see is trying to visualize the large datasets coming out of a compute cluster. We don't want Hollywood DRM anywhere in such cards as we are dealing with high-value results, and really want to be able to control and audit the GPU in this case. Without the ability to audit, the GPU has to be treated as a "dumb" display device with no ability to alter state on the host (compute), restricting its potential.
            DisplayPort (and display in general) only needs microcode on Raven Ridge so far AFAIK - the display block on all earlier GPUs is all hard-wired logic... or are you talking about microcode for any function on the GPU rather than microcode for display ?

            The obvious challenge is that the vast majority of our sales still come from the OEM PC market, which brings a non-negotiable requirement for DRM that can not be tampered with or disabled by the owner, backed by assurances from the HW vendor. Signing the microcode and keeping it closed are two things that help to get us over the (loosely defined and constantly evolving) threshold for "good enough" DRM.

            One option that I have been exploring is whether we could make the business case work for compute-only GPUs that used a different execution environment for the microcode (so that opening it would not put OEM PC products at risk) and which could potentially be offered with open sourced microcode. I say "compute-only" because we would not be able to sell those products into the OEM PC market at all, and could not leverage any of our current video encode/decode technology. That last point is proving to be a problem because even compute applications are making use of video-in / video-out capabilities these days.

            So the short answer is yes we could do it if we could afford to develop a chip with different microcode engines from our OEM PC parts and continue to support both design paths. Obviously the less functionality we have to include (relative to OEM PC parts) the easier that would be, since each HW block that we did require would need significant rework.

            Just curious, other than possible interactions between compute stacks and IOMMU (which I'm sure could be worked through) what is it about compute that makes microcode more of an issue than it is for graphics/video ?
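            As a side note for anyone curious how much signed microcode a card is actually running, the amdgpu driver reports the loaded firmware versions through debugfs. A minimal sketch, assuming root, a mounted debugfs, and that the card is DRM device 0 (the index may differ on multi-GPU systems):

```python
# Minimal sketch: dump the firmware versions an amdgpu card has loaded.
# Assumes Linux with the amdgpu driver, debugfs mounted, and root
# privileges; dri/0 is an assumption and may differ per system.
from pathlib import Path

info = Path("/sys/kernel/debug/dri/0/amdgpu_firmware_info")
print(info.read_text())
```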


            • #56
              Originally posted by Qaridarium
              So you really think that your company AMD gets away with this ?
              Depends on what you mean by "gets away with it". If you mean "can we do it ?" the answer is obviously yes.

              If the question is "can we do it without any downside ?" the answer is arguably no, although there is honest disagreement about how much opportunity we lost by not positioning one or more of the single-package Epyc SKUs against the latest high-end Intel workstation parts. That disagreement in turn comes from different people placing different value on the "halo effect" of having the fastest <whatever> no matter how big or expensive it might be.

              Bottom line is that you probably disagree with some of our marketing folks about the importance of halo effect, which is fair - I just have a tough time translating that to something like "being evil".
              Last edited by bridgman; 27 June 2018, 07:37 PM.


              • #57
                Originally posted by bridgman View Post

                Ahh, OK - I saw it as taking things off topic, which is why we had differing views



                DisplayPort (and display in general) only needs microcode on Raven Ridge so far AFAIK - the display block on all earlier GPUs is all hard-wired logic... or are you talking about microcode for any function on the GPU rather than microcode for display ?

                The obvious challenge is that the vast majority of our sales still come from the OEM PC market, which brings a non-negotiable requirement for DRM that can not be tampered with or disabled by the owner, backed by assurances from the HW vendor. Signing the microcode and keeping it closed are two things that help to get us over the (loosely defined and constantly evolving) threshold for "good enough" DRM.

                One option that I have been exploring is whether we could make the business case work for compute-only GPUs that used a different execution environment for the microcode (so that opening it would not put OEM PC products at risk) and which could potentially be offered with open sourced microcode. I say "compute-only" because we would not be able to sell those products into the OEM PC market at all, and could not leverage any of our current video encode/decode technology. That last point is proving to be a problem because even compute applications are making use of video-in / video-out capabilities these days.

                So the short answer is yes we could do it if we could afford to develop a chip with different microcode engines from our OEM PC parts and continue to support both design paths. Obviously the less functionality we have to include (relative to OEM PC parts) the easier that would be, since each HW block that we did require would need significant rework.

                Just curious, other than possible interactions between compute stacks and IOMMU (which I'm sure could be worked through) what is it about compute that makes microcode more of an issue than it is for graphics/video ?
                Thanks for the in-depth response!

                So what we would be looking for is essentially a pro-grade compute/3D display card. We don't want or need video encode/decode blocks, but would need OpenGL/Vulkan and ROCm support. 6x or 8x DisplayPort outputs would be useful to visualize the data (via OpenGL/Vulkan) coming out of the compute cards / cluster.

                CAPI or BlueLink coupled with HBM would make this an absolute powerhouse as well, but might be harder to do.

                If there's a way to create this kind of card with open firmware, I'll say right now that we can drive significant demand. I think it neatly sidesteps the DRM issues in that the card doesn't have native codec blocks, and without HDCP/PSP it couldn't negotiate access to encrypted (DRM-protected) media anyway. It's an enterprise-grade card without the consumer features, with main use cases in servers and workstations.

                Thoughts?



                • #58
                  Originally posted by Qaridarium
                  allow benchmarks between Threadripper and Epyc
                  allow us to "disable the PSP"
                  and the right to "run our own code on it"
                  allow "open source coreboot implementations"
                  allow "owner control of the machine"

                  so in the end people buy IBM POWER9 instead, because they are sick of this shit.
                  Let's separate the marketing/branding part out because I already responded to another of your posts about that, and separate out the coreboot issue as well.

                  DRM first... the key point here is that you are comparing Ryzen/TR/Epyc to a CPU that will never be sold into the OEM PC market, and so does not have any of the non-negotiable DRM requirements which come with that market.

                  I'm sure I don't have to explain to you that the essence of DRM requirements in the OEM PC market is that the owner must NOT have full control of the machine if that includes being able to tamper with or disable any of the DRM mechanisms. I really like what Raptor is doing - building a PC around a CPU from the datacenter-only world that does not have to satisfy OEM PC market requirements - but that doesn't do anything to relieve the DRM expectations on our products.

                  The real question you should be asking is "what would it take to build a separate line of CPUs and GPUs that could be opened up more fully without putting your existing products and markets at risk ?".

                  At the moment most vendors leverage the same core technology across multiple markets in order to be cost-competitive, but as our business grows it may become possible to make datacenter-only parts which do not share as much technology with OEM PC parts, and which could therefore be opened up more without putting our core products and markets at risk.

                  Going back to coreboot - we *allow* open source coreboot implementations today (unless you think we have been sending cease-and-desist orders to people working on them ?); we are just not doing the work *ourselves* via fully open source AGESA releases at the moment.

                  I still have a tough time seeing any of this as "evil" though.
                  Last edited by bridgman; 27 June 2018, 08:28 PM.


                  • #59
                    Originally posted by madscientist159 View Post
                    If there's a way to create this kind of card with open firmware, I'll say right now that we can drive significant demand. I think it neatly sidesteps the DRM issues in that the card doesn't have native codec blocks, and without HDCP/PSP it couldn't negotiate access to encrypted (DRM-protected) media anyway. It's an enterprise-grade card without the consumer features, with main use cases in servers and workstations.

                    Thoughts?
                    The main challenge is that we would need to re-implement most of the blocks and maintain a parallel stream of GPU designs, so that opening up microcode for these new parts would not put the current parts (and our core markets) at risk.

                    That's the reason I was looking at compute-only parts, for example, because the number of blocks involved (and hence the cost of maintaining two parallel designs) could be significantly reduced, and because there was less overlap between the remaining blocks and activities which were affected by DRM requirements.

                    The question becomes whether we can (a) identify product/market combinations that could consume large numbers of CPUs and GPUs that would never be sold into the OEM PC market, and (b) identify them at a point where we could afford to develop separate, parallel implementations such that opening microcode for the new part(s) would not result in effectively (by making reverse engineering trivial) opening up microcode for current OEM PC parts. I don't think we have good answers for that yet.
                    Last edited by bridgman; 27 June 2018, 08:27 PM.


                    • #60
                      Originally posted by bridgman View Post

                      The main challenge is that we would need to re-implement most of the blocks and maintain a parallel stream of GPU designs, so that opening up microcode for these new parts would not put the current parts (and our core markets) at risk.

                      That's the reason I was looking at compute-only parts, for example, because the number of blocks involved (and hence the cost of maintaining two parallel designs) could be significantly reduced. A couple of years ago it looked even more attractive because there was very little overlap between what we did with compute pipelines and the activities that required DRM protection, but compute shaders have become an important part of multimedia processing.

                      The question becomes whether we can (a) identify product/market combinations that could consume large numbers of CPUs and GPUs that would never be sold into the OEM PC market, and (b) identify them at a point where we could afford to develop separate, parallel implementations such that opening microcode for the new part(s) would not result in effectively (by making reverse engineering trivial) opening up microcode for current OEM PC parts. I don't think we have good answers for that yet.
                      My suggestion is to go after the market currently being served by NVIDIA -- high-performance compute cards running on BlueLink. NVIDIA right now requires a fairly restrictive EULA and fairly high pricing for access to these cards; AMD could disrupt that market fairly easily, I would think.

                      If you want to discuss more privately, feel free to email me directly as well.
