Intel Acquires The Team Behind ArrayFire GPU Acceleration / Parallel Computing Software


  • #11
    Originally posted by Ironmask View Post
    Well the 8086 architecture was originally made to control traffic lights.
    Should we call all our software "general purpose traffic light processors"?
    GPUs are still architected primarily for graphics, so I wouldn't fault anyone for still calling them that. Nvidia's A100 still has some hardware ROPs and TMUs, while Intel's PVC has hardware ray tracing and I don't know what else. Only AMD's CDNA chips so far have zero graphics hardware engines, marking the first true break from their GPU ancestry.

    Moreover, the tools, languages, and APIs for programming these products are still the very same ones you use to run compute workloads on actual graphics cards. So, for the foreseeable future, we'll still be talking about GPU Compute, or GP GPU.
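
    As a rough illustration, a minimal CUDA sketch (the classic SAXPY example; the names and sizes here are purely illustrative, not tied to any particular product): the same source, kernel syntax, and launch syntax compile and run whether the device is a consumer GeForce or an A100.

    Code:
    #include <cstdio>
    #include <cuda_runtime.h>

    // Illustrative SAXPY kernel: y = a*x + y, the usual GPU-compute example.
    __global__ void saxpy(int n, float a, const float *x, float *y)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            y[i] = a * x[i] + y[i];
    }

    int main()
    {
        const int n = 1 << 20;
        float *x, *y;
        // Unified memory keeps the sketch short; it works on any recent CUDA GPU.
        cudaMallocManaged(&x, n * sizeof(float));
        cudaMallocManaged(&y, n * sizeof(float));
        for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

        // Identical launch syntax on a gaming card or a datacenter part.
        saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
        cudaDeviceSynchronize();

        printf("y[0] = %f\n", y[0]);  // expect 4.0
        cudaFree(x);
        cudaFree(y);
        return 0;
    }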

    BTW, there's nothing about x86 that's tailored for driving traffic lights. If what you say is true, then it was just a case of taking an already-generic CPU and going after a particular application domain, which is entirely commonplace. By contrast, the entire design and evolution of GPUs was oriented towards interactive graphics rendering.
    Last edited by coder; 10 September 2022, 02:30 AM.



    • #12
      The advantage of DirectX and Direct3D is that they also support objects, not just raw shader structures where you have to figure out yourself how to collide things and display them (meanwhile the monkey Vulkan coders have to figure out how to squeeze an extra 1.475 FPS out of their Most Important FPS Shooter Toy on Vulkan, just to get it displaying correctly and deliver some additional gun-ammo action shootage, because making a program parallel is A Hard Thing...)
      Last edited by elbar; 10 September 2022, 05:58 AM.



      • #13
        Originally posted by coder View Post
        I understand that AMD's finances were still a little rough, back then. However, if they were serious about playing in the GPU compute and AI markets, then they should've appreciated the need to make these kinds of investments.
        They can do it now... and they are doing it now... but they cannot jump back in time to do it in the past.

        And even doing it now, breaking the CUDA monopoly is very hard.



        • #14
          Originally posted by coder View Post
          GPUs are still architected primarily for graphics, so I wouldn't fault anyone for still calling them that. Nvidia's A100 still has some hardware ROPs and TMUs, while Intel's PVC has hardware ray tracing and I don't know what else. Only AMD's CDNA chips so far have zero graphics hardware engines, marking the first true break from their GPU ancestry.
          Really, man, honestly this sounds to me like AMD is light-years ahead of the competition.

          I know people in this forum tend to hate AMD for splitting GCN into CDNA and RDNA, but aside from the ROCm/HIP driver problem, this split was a great deal.
          Long before the split happened, we talked with bridgman about this and I told him it was a great idea.

          AMD only needs to do two things: one is to invest more money into the driver, meaning better ROCm/HIP for CDNA and RDNA, and the second is that they really should sell CDNA products in the PC/desktop/workstation markets.

          And it sounds crazy to first make this split and then put both into a chiplet design, but I am sure there is a market for RDNA and CDNA in a single chiplet package...

          And if we just look at how much GPU compute power the CDNA chips give us per mm² of die area, it's just amazing... I call it a fact that on the hardware side Nvidia and Intel cannot compete with this.

          Nvidia only wins on the software side.



          • #15
            Originally posted by coder View Post
            I understand that AMD's finances were still a little rough, back then. However, if they were serious about playing in the GPU compute and AI markets, then they should've appreciated the need to make these kinds of investments.
            I think you don't understand: they were completely broke. They were serious about it, but completely broke. They had entirely different priorities at the time, imo.
            //
            Also, Nvidia is a software company, AMD is not.



            • #16
              Originally posted by gescom View Post
              Also, Nvidia is a software company, AMD is not.
              AMD is a software company, also. They might just not know it, yet.



              • #17
                Originally posted by coder View Post
                AMD is a software company, also. They might just not know it, yet.
                Not by definition, but yes it could happen in the future

                Cheers!
                Last edited by gescom; 11 September 2022, 07:49 AM.



                • #18
                  Originally posted by gescom View Post
                  Not by definition, but yes it could happen in the future
                  To be clear, I mean that in the same sense as Jensen Huang famously said it of Nvidia. Which is to say that, while AMD designs and sells (packaged) silicon, the software portion of the solutions they deliver is essential for unleashing the hardware's potential and should dominate their overall headcount, if it doesn't already (i.e. by practical necessity, not as a value judgement).

                  If you want to take an even more expansive view, you could include the software tools and simulations needed to develop and test modern chips. That would surely tip the balance even more in favor of the software folks.

                  None of that is to diminish the importance of the hardware -- if the hardware is fundamentally noncompetitive, there's not much the software folks can do to compensate. The point is to recognize how much of the value-add comes in the form of software/firmware, and to acknowledge that it's equally vital.



                  • #19
                    Originally posted by coder View Post
                    Probably AMD's biggest failing, in their attempts to borrow Nvidia's compute strategy, was the lack of investment in software on anything like the scale Nvidia has done.
                    IME it's very close to "Probably ANY company's biggest failing, in their attempts to succeed, is the lack of investment in software on anything even remotely resembling an appropriate scale."

                    Software runs the world these days, and has done for decades, yet its impact is still massively under-appreciated because ~nobody understands that. This is *especially* true of hardware companies.

