Arcturus No Longer Experimental - AMD Instinct MI100 Linux Support Is Ready

    Phoronix: Arcturus No Longer Experimental - AMD Instinct MI100 Linux Support Is Ready

    Being sent in as a "fix" this week to the Linux 5.10 kernel is the removal of the experimental flag for the Arcturus GPU, days after AMD announced the MI100 accelerator at SC20...


  • #2
    I want one. Not because I have any actual use of it, but because I just think it looks really neat and would make a great shelf piece.



    • #3
      Originally posted by skeevy420:
      I want one. Not because I have any actual use of it, but because I just think it looks really neat and would make a great shelf piece.

      Well, I also want one, but I doubt I will ever have one, because the vendors of GPUs that are good at double-precision computation are no longer willing to sell them at prices competitive with CPUs.

      When NVIDIA started the GPGPU fashion, i.e. using GPUs for general computational tasks, a lot of articles were published with sensational claims that GPUs would let everybody compute 100 times faster than with CPUs, and at a much lower price.

      All those articles proved to be mostly propaganda, with very little truth in them, because the huge speedups came from comparing optimized GPU programs against naive (not to say stupid) CPU implementations.

      After the hype faded, what remained is this: comparing modern CPUs with modern GPUs over the last 5 or 6 years, across several product generations, a GPU really can be about 3 times faster than an Intel or AMD CPU at double-precision computation for the same power consumption.

      3 is not 100, but for large datacenters or supercomputers a 3x reduction in power consumption still means big money, so they are willing to pay a lot for GPUs that can do double-precision computation.
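
      A rough sanity check of that ratio, sketched in Python; the peak-FP64 and TDP numbers below are my own illustrative assumptions, not figures from this thread:

```python
# Rough FP64-per-watt comparison between a server CPU and a compute GPU.
# The peak-TFLOPS and TDP figures are illustrative assumptions, not
# vendor-verified measurements; only the ratio is the point.

def fp64_gflops_per_watt(peak_tflops, tdp_watts):
    """Peak double-precision GFLOPS delivered per watt of TDP."""
    return peak_tflops * 1000 / tdp_watts

# Assumed: a 64-core AVX2 server CPU, ~2.3 TFLOPS FP64 peak at ~225 W TDP.
cpu = fp64_gflops_per_watt(2.3, 225)
# Assumed: a current FP64-capable compute GPU, ~11.5 TFLOPS at ~300 W TDP.
gpu = fp64_gflops_per_watt(11.5, 300)

print(f"CPU: {cpu:.1f} GFLOPS/W  GPU: {gpu:.1f} GFLOPS/W  ratio: {gpu / cpu:.1f}x")
# With these assumptions the GPU ends up roughly 3-4x ahead per watt,
# in line with the "about 3 times" figure above.
```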

      Already many years ago, NVIDIA raised the price of its double-precision GPUs well above the price per gigaflop/s of Intel Xeons or AMD Epycs.


      As long as it remained the underdog, AMD had a better price per gigaflop/s than any alternative, so I am still using a couple of old AMD Hawaii FirePros and a Radeon VII, all of which cost much less per double-precision gigaflop/s than any CPU.


      The new AMD Instinct MI100 is rumored to cost a little less than $7000, while being a little faster than the more expensive NVIDIA Ampere A100.

      At this price it is an excellent, cheaper alternative to NVIDIA for those who are not locked into CUDA, but it is more expensive per gigaflop/s than CPUs. Unless you are a big spender, it is much cheaper to buy several computers with a Ryzen 9 5950X to reach the same computational speed than to buy one AMD Instinct.


      For those who are interested only in graphics rendering or machine learning, which use only 32-bit or lower precision computations, the much cheaper gaming GPUs are fine.

      For most of those who want double-precision computations, it seems that GPUs are no longer a solution.



      • #4
        Are you sure about your calculations? Even ignoring consumer vs pro/datacenter SKUs & pricing, a 5950X gives something under 1 TF in FP64 for $800 US, while a Radeon VII Pro gives ~6.5 TF for $1900 US. At first glance that seems like 1/3 the cost per TF.

        I might be underestimating the 5950X FP64 performance but I had less luck than I expected finding tests that provide anything but a unit-less score. AIDA64 gives a direct readout of single- and double-precision GFLOPS but nobody seems to have run that test.
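
        For what it's worth, the theoretical FP64 peak of a 5950X can be estimated from core count, SIMD width and clock; a quick sketch, where the sustained all-core clock is an assumption and the prices are the ones quoted in this thread:

```python
# Estimate the theoretical FP64 peak of a Ryzen 9 5950X and compare the cost
# per theoretical TFLOP with the Radeon VII Pro figures quoted above.

CORES = 16
FMA_PIPES_PER_CORE = 2      # Zen 3 has two 256-bit FMA pipes per core
FP64_PER_VECTOR = 4         # a 256-bit AVX2 vector holds 4 doubles
FLOPS_PER_FMA = 2           # one fused multiply-add counts as 2 FLOPs
ALL_CORE_CLOCK_GHZ = 3.7    # assumed sustained all-core clock under AVX load

cpu_tflops = (CORES * FMA_PIPES_PER_CORE * FP64_PER_VECTOR
              * FLOPS_PER_FMA * ALL_CORE_CLOCK_GHZ) / 1000
print(f"5950X theoretical FP64 peak: ~{cpu_tflops:.2f} TFLOPS")

# Cost per theoretical TFLOP, using the prices mentioned in this thread.
cpu_usd_per_tflop = 800 / cpu_tflops     # 5950X at ~$800 US
gpu_usd_per_tflop = 1900 / 6.5           # Radeon VII Pro, ~6.5 TFLOPS FP64
print(f"CPU: ~${cpu_usd_per_tflop:.0f}/TFLOP   GPU: ~${gpu_usd_per_tflop:.0f}/TFLOP")
# With these assumptions the CPU lands a bit under 1 TFLOPS and costs roughly
# 3x as much per theoretical FP64 TFLOP as the Radeon VII Pro.
```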

        I'm not sure where the 100x number came from (hope it wasn't us) but if you compare consumer-to-consumer parts and go back to pre-Zen CPU days (which is probably when the statement was made) it's probably not hard to find a 10x advantage, at least for FP32. All of the hype-y articles were talking about FP32 on the GPU as far as I can see.

        That said, one of the things I was uncomfortable with in the early days of GPU compute hype was that some media writers seemed to be comparing FP64 flops on a CPU against FP32 flops on a GPU for the performance part, and comparing server-class CPUs against consumer-grade GPUs for the price part.


        • #5
          Originally posted by bridgman:
          Are you sure about your calculations? Even ignoring consumer vs pro/datacenter SKUs & pricing, a 5950X gives something under 1 TF in FP64 for $800 US, while a Radeon VII Pro gives ~6.5 TF for $1900 US. At first glance that seems like 1/3 the cost per TF.

          I might be underestimating the 5950X FP64 performance but I had less luck than I expected finding tests that provide anything but a unit-less score. AIDA64 gives a direct readout of single- and double-precision GFLOPS but nobody seems to have run that test.

          I'm not sure where the 100x number came from (hope it wasn't us) but if you compare consumer-to-consumer parts and go back to pre-Zen CPU days (which is probably when the statement was made) it's probably not hard to find a 10x advantage, at least for FP32. All of the hype-y articles were talking about FP32 on the GPU as far as I can see.

          That said, one of the things I was uncomfortable with in the early days of GPU compute hype was that some media writers seemed to be comparing FP64 flops on a CPU against FP32 flops on a GPU for the performance part, and comparing server-class CPUs against consumer-grade GPUs for the price part.

          Yes, actually I was wrong.

          When I wrote the previous message I was still shocked by AMD's large price increase and had not actually calculated anything.

          After writing that, I did the calculation, and while the new Instinct MI100 is a much worse value than older AMD GPUs, it is still cheaper than buying multiple CPUs for the same aggregate speed.

          A Radeon VII at $700 provided almost 5 DP GFLOPS per dollar, a Radeon VII Pro at $1900 provided about 3.4, and the new Instinct MI100 at the rumored price provides about 1.7 DP GFLOPS per dollar, i.e. only half the performance per dollar of the previous product.

          On the other hand, the Radeon VII Pro had more than double the performance per watt of the Radeon VII, and the new Instinct MI100 has about 1.5 times the performance per watt of the Radeon VII Pro. So performance per watt has increased steadily, while performance per dollar has decreased steadily.

          For the first time in many years, an AMD card, the Instinct MI100, actually has better performance per watt than the NVIDIA Ampere A100.

          So while I was wrong and it would indeed be cheaper to buy an Instinct MI100 than a bunch of CPUs, unfortunately it does not matter, because it seems it will not be sold to individuals or small businesses, which is presumably why no price was published.
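
          To make those ratios concrete, here is a small sketch reproducing the numbers above; the FP64 ratings, prices and board powers are the approximate figures assumed in this thread, not official list prices:

```python
# Performance per dollar and per watt for the three AMD FP64 cards discussed,
# using the approximate specs, prices and board powers assumed in this thread.

cards = {
    # name:           (FP64 TFLOPS, price USD, board power W)
    "Radeon VII":      (3.5,   700, 300),
    "Radeon Pro VII":  (6.5,  1900, 250),
    "Instinct MI100":  (11.5, 7000, 300),
}

for name, (tflops, price, watts) in cards.items():
    gflops = tflops * 1000
    print(f"{name:15s} {gflops / price:5.1f} GFLOPS/$ {gflops / watts:6.1f} GFLOPS/W")

# Roughly 5.0, 3.4 and 1.6-1.7 GFLOPS per dollar (depending on the exact MI100
# price), while GFLOPS per watt climbs from about 12 to 26 to 38 across the
# same three generations.
```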



          • #6
            If it helps, you don't want one of these cards outside of a data center anyways. The fans you need to keep a server card cool are unpleasantly loud, and we can't even work on the cards ourselves outside of a lab without other people complaining that they can't think with the noise. This isn't specific to our cards, just seems to be the current state of servers... lots of small loud fans per box.


            • #7
              Originally posted by bridgman:
              you don't want one of these cards outside of a data center anyways
              Lots of data center gear finds its way into home labs after a product replacement cycle. It is unfortunate that AMD seems not to care much, if at all, about home labs.

              Originally posted by bridgman:
              The fans you need to keep a server card cool are unpleasantly loud
              That is usually a solvable problem. For NVidia compute cards, you can buy waterblocks e.g. from EKWB: https://www.ekwb.com/shop/ek-fc-gv100-pro-nickel-inox

              I haven't seen any for AMD, so maybe a universal GPU block will have to do, and stick separate heatsinks on the VRM.



              • #8
                Originally posted by bridgman:
                If it helps, you don't want one of these cards outside of a data center anyways. The fans you need to keep a server card cool are unpleasantly loud, and we can't even work on the cards ourselves outside of a lab without other people complaining that they can't think with the noise. This isn't specific to our cards, just seems to be the current state of servers... lots of small loud fans per box.
                Haha, I can relate. Working in an R&D department as well, we have special "fan cables" (basically a big resistor) that lower the RPM of the fans, because whenever our devices sit in the bootloader or some debug state the fans run at 100%, which is, yeah, loud, like booting a 2U server whose fans are dimensioned to handle something like 5-7 kW of heat...
