The Tesla P100 Is NVIDIA's New & Most Powerful Accelerator


  • The Tesla P100 Is NVIDIA's New & Most Powerful Accelerator

    Phoronix: The Tesla P100 Is NVIDIA's New & Most Powerful Accelerator

    From NVIDIA's GPU Technology Conference, the company announced today the Tesla P100 as their most advanced accelerator based upon their Pascal "GP100" GPU...

    http://www.phoronix.com/scan.php?pag...DIA-Tesla-P100

  • #2
Assuming this is stable, this will be some pretty impressive hardware. But considering the sheer number of technical hurdles this accomplished, I wouldn't feel too comfortable using this in any mission-critical datacenter. There is too much risk of failure in something this different.



    • #3
      So is this thing actually programmable with Vulkan or what?



      • #4
        Originally posted by M1kkko View Post
        So is this thing actually programmable with Vulkan or what?
You're saying that as though Michael is obligated to post such information, when in fact it's irrelevant to point out. Why wouldn't it support Vulkan? Anyway, it's probably designed for CUDA, and probably has a greater focus on OpenCL than Vulkan at the moment.



        • #5
This thing is actually programmable with CUDA -- NVIDIA's C++ extended for GPU programming -- and their OpenACC. Keep in mind this particular thing is a compute module, not a graphics card. Its sole reason for being is to be programmed. There will be Pascal graphics cards later in the year. One supposes they will be programmable in OpenGL, DirectX, OpenCL (some version), and Vulkan in addition to CUDA and OpenACC. I don't know when the more graphics-oriented libraries and compilers will be available for this first P100 compute hardware. Perhaps they are already. But anyone with the coin to cash in as a first Pascal adopter almost certainly has CUDA as his first priority, and a large amount of existing code.



          • #6
            Originally posted by schmidtbag View Post
You're saying that as though Michael is obligated to post such information, when in fact it's irrelevant to point out. Why wouldn't it support Vulkan? Anyway, it's probably designed for CUDA, and probably has a greater focus on OpenCL than Vulkan at the moment.

            Sorry, that was not my implication.



            • #7
              Lol. I see what nvidia did there. Why not sell the Tesla P100D next? Can we supercharge that GPU? How about Autopilot?



              • #8
                Originally posted by schmidtbag View Post
Assuming this is stable, this will be some pretty impressive hardware. But considering the sheer number of technical hurdles this accomplished, I wouldn't feel too comfortable using this in any mission-critical datacenter. There is too much risk of failure in something this different.
Huh? Firstly, it's just an FP64 accelerator. Secondly, it's for HPC usage, precisely where the highest-performance, cutting-edge processors are normally employed. And I'm sure it's not being sold without a hardware warranty, LOL.



                • #9
                  Originally posted by torsionbar28 View Post
                  Huh? Firstly, it's just an FP64 accelerator.
TBH, it looks more like an FP16 accelerator to me.
They implemented mixed precision for FP32/FP16 instead of FP64/FP32, so they still need dedicated FP64 ALUs. And all the talk about deep learning (which they've been doing for years) shows the direction (FP16 ought to be enough for neural networks). AMD has had 64/32 mixed precision since Hawaii, three years earlier, and also remarkable FP64 performance per watt for its time. GP100 is a bit of a disappointment for me, TBH - not that I intended to buy one, just the technical side. I hope AMD comes up with FP64/32/16 mixed precision with Vega; that would save die space and allow more ALUs.
                  Last edited by juno; 04-06-2016, 03:38 AM.
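The point about FP16 above can be illustrated without any GPU at all: IEEE 754 half precision has only a 10-bit mantissa (machine epsilon ≈ 9.77e-4, roughly three decimal digits), which is why it's pitched at neural networks rather than FP64-heavy HPC, and why mixed precision typically accumulates in FP32. A minimal NumPy sketch (illustrative values only, not tied to any particular hardware):

```python
import numpy as np

# IEEE 754 half precision: 10-bit mantissa, eps = 2**-10
print(np.finfo(np.float16).eps)      # 0.000977

# An update smaller than eps/2 vanishes entirely in FP16...
w = np.float16(1.0)
print(w + np.float16(1e-4) == w)     # True: the small update is lost

# ...which is why mixed-precision schemes keep an FP32 copy:
# store/multiply in FP16, accumulate in FP32.
acc = np.float32(1.0)
acc += np.float32(np.float16(1e-4))  # the same update survives in FP32
print(acc > 1.0)                     # True
```

Neural-network weight updates tolerate this kind of rounding far better than, say, an FP64 linear solver, which is consistent with the deep-learning positioning of GP100's FP16 rate.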



                  • #10
                    Originally posted by juno View Post
TBH, it looks more like an FP16 accelerator to me.
They implemented mixed precision for FP32/FP16 instead of FP64/FP32, so they still need dedicated FP64 ALUs. And all the talk about deep learning (which they've been doing for years) shows the direction (FP16 ought to be enough for neural networks). AMD has had 64/32 mixed precision since Hawaii, three years earlier, and also remarkable FP64 performance per watt for its time. GP100 is a bit of a disappointment for me, TBH - not that I intended to buy one, just the technical side. I hope AMD comes up with FP64/32/16 mixed precision with Vega; that would save die space and allow more ALUs.
I wonder how many ALU transistors are shared among the FP16, FP32, and FP64 implementations, and how many can be uniquely attributed to just one of FP16/FP32/FP64.
