Radeon ROCm Updates Documentation Reinforcing Focus On Headless, Non-GUI Workloads

  • coder
    Senior Member
    • Nov 2014
    • 8841

    #91
    Originally posted by Qaridarium View Post
    AMD already works on chiplet design for CDNA with shared memory IO crossbar
    believe it or not.
    You don't know that. It could use a mesh (like Nvidia) or a star topology (like first-generation EPYC), where each die has its own memory controller.

    I expect this is what they'll do, not only because it scales better, but also because it mirrors how the multi-GPU setups already used in HPC work.

    Originally posted by Qaridarium View Post
    thats very easy to validate just ask the people who buy 6900XT and MI100 and put it into same pc.
    Okay, please reply back when you have that data.

    Originally posted by Qaridarium View Post
    Wrong, just wrong. You can make an AI-NPC game that has smarter NPCs on a fast GPU and dumber NPCs on slower GPUs.
    I'm not saying it's not possible, but it violates basic principles of game design. It's also a disincentive for people with higher-spec hardware to buy the game or for the game's enthusiasts to upgrade their machine, if doing so makes the game much more difficult.

    Originally posted by Qaridarium View Post
    wrong there is no such minimum specification.
    It's standard for games to publish the minimum and recommended hardware specifications. They don't guarantee the game will run properly on anything below the minimum. And, due to the economics of game development, the minimum spec is usually a very mainstream PC.


    • qarium
      Senior Member
      • Nov 2008
      • 3396

      #92
      Originally posted by coder View Post
      I'm not saying it's not possible, but it violates basic principles of game design. It's also a disincentive for people with higher-spec hardware to buy the game or for the game's enthusiasts to upgrade their machine, if doing so makes the game much more difficult.
      It's standard for games to publish the minimum and recommended hardware specifications. They don't guarantee the game will run properly on anything below the minimum. And, due to the economics of game development, the minimum spec is usually a very mainstream PC.
      it does not violate basic principles of how to make a high-end game. it only violates YOUR principles for how to make such a game.

      and it makes no logical sense to claim that a high-end AAA game made for high-end hardware is a "disincentive for people with higher-spec hardware"

      what you say here is insane... at most it stops people with low-end hardware from buying the game.

      "It's standard for games to publish the minimum and recommended hardware specifications"

      no one stops a game maker from publishing a minimum hardware spec that is higher than any hardware you can buy.

      a game maker can say: the minimum spec is a 64-core Threadripper Pro with 512GB of RAM, a 6900XT plus an MI100, and a 100TB PCIe 5.0 SSD.
      this costs 50,000€ or more, but who cares? you are free to do so.


      • qarium
        Senior Member
        • Nov 2008
        • 3396

        #93
        Originally posted by coder View Post
        You don't know that. It could use a mesh (like Nvidia) or star topology (like the 1st generation EPYC), where each die has its own memory controller.


        it really sounds like the solution i told you about.


        • coder
          Senior Member
          • Nov 2014
          • 8841

          #94
          Originally posted by Qaridarium View Post
          https://www.reddit.com/r/hardware/co...hine_learning/

          it really sounds like the solution i told you.
          It really sounds like you didn't even read it:

          The machine learning accelerator (MLA) chiplet will execute matrix multiplication operations only
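For context, a fixed-function matrix-multiply engine accelerates exactly one operation. Here is a minimal scalar Python sketch of the operation such an MLA chiplet is described as executing (the code is illustrative only; real hardware would run it on tiles of fp16/bf16 data):

```python
def matmul(a, b):
    """Plain matrix multiply, C = A x B: the single operation the
    quoted MLA chiplet is described as executing, in scalar form."""
    n, k, m = len(a), len(b), len(b[0])
    assert all(len(row) == k for row in a), "inner dimensions must match"
    return [[sum(a[i][p] * b[p][j] for p in range(k)) for j in range(m)]
            for i in range(n)]

# 2x2 example: [[1,2],[3,4]] @ [[5,6],[7,8]]
print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```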


          • qarium
            Senior Member
            • Nov 2008
            • 3396

            #95
            Originally posted by coder View Post
            It really sounds like you didn't even read it:
            i have read it. you can claim that, but in the end CDNA was the first step in this direction.

            and it is what i said: in the end you will have a chiplet design with one RDNA chip, plus an AI accelerator and other stuff in the same package.

            and you can be sure that before they ship a "chiplet [that] will execute matrix multiplication operations only" they will put a CDNA chip into the chiplet design first.


            • xcom
              Senior Member
              • Dec 2017
              • 122

              #96
              Maybe Mesa will provide us with a good OpenCL driver.


              • coder
                Senior Member
                • Nov 2014
                • 8841

                #97
                Originally posted by Qaridarium
                but will this be the better way? i think the better way is: do Vulkan Compute...
                Better because why? Simply that Vulkan support is widespread? Because the compute flavor of SPIR-V is not well-supported, at all.

                Originally posted by Qaridarium
                OpenCL was created by "Apple AND Nvidia". do you trust them to be Linux-friendly? i do not believe this.
                This is insane. There's nothing about it that's unfriendly to Linux! It's an open standard that's been around for more than a decade and is used by lots of people (myself included). If there were really anything Linux-unfriendly about it, don't you think it would've come up by now?

                Seriously, enough with this conspiracy-thinking crap. Just look at it and tell me specifically what's Linux-unfriendly about it, or drop the point.

                And you're not even right about the baseline facts. As I posted in the Mesa Matrix thread:

                OpenCL was initially developed by Apple Inc., which holds trademark rights, and refined into an initial proposal in collaboration with technical teams at AMD, IBM, Qualcomm, Intel, and Nvidia. Apple submitted this initial proposal to the Khronos Group. On June 16, 2008, the Khronos Compute Working Group was formed with representatives from CPU, GPU, embedded-processor, and software companies.

                Even in 2008, IBM was mostly Linux-focused. Qualcomm was Android-focused (Linux-based), and it was already the case that HPC (AMD and Nvidia's main interest) was Linux-based.

                Originally posted by Qaridarium
                to develop better OpenCL support in Mesa is in fact a waste of time. the same developers could do Vulkan compute.
                They're not equivalent APIs. Vulkan is low-level, potentially making it a lot more work for apps to use Vulkan compute than OpenCL. This is why people are trying things like building OpenCL support on top of Vulkan:


                Unfortunately, such layered approaches are usually inferior to and more fragile than a native implementation.


                • coder
                  Senior Member
                  • Nov 2014
                  • 8841

                  #98
                  Originally posted by Qaridarium
                  "Simply that Vulkan support is widespread?"

                  that's one reason.
                  Okay, but you understand that support for Vulkan compute SPIR-V is basically nonexistent? It's a different format. Just because you have Vulkan support on a machine does not mean you can run your compute jobs on it, at all.

                  Originally posted by Qaridarium
                  but another one is that if you use Vulkan, companies like AMD cannot make a distinction between gaming cards and compute cards like they do with the 6900XT vs the MI100: the MI100 has ROCm support and the 6900XT has no ROCm support.
                  That's also wrong. If we leave aside the issue of missing compute SPIR-V support and (I assume) the lack of any Vulkan support in the MI100, there's the issue of what makes a GPU good for compute: its fp64 support, its memory bandwidth and capacity, its matrix cores, and its micro-architecture. So, there will always be compute-oriented/optimized cards.

                  The bigger issue is that Vulkan backends do more aggressive optimizations that can sacrifice numerical accuracy. It's a bit like how Direct3D has a reputation for being fast and sloppy, while OpenGL would give you precisely what you asked for. Presumably, the compute SPIR-V could address that, but we don't have compute SPIR-V support.
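The accuracy point is easy to demonstrate even on a CPU. Below is a small sketch (pure Python, using `struct` to emulate fp32 rounding; the setup is illustrative, not taken from any driver) showing how naive single-precision accumulation drifts where double precision stays tight:

```python
import struct

def to_f32(x):
    # Round a Python float (IEEE-754 double) to single precision,
    # emulating what an fp32-only accumulator keeps after each step.
    return struct.unpack('f', struct.pack('f', x))[0]

def naive_sum_f32(values):
    acc = 0.0
    for v in values:
        acc = to_f32(acc + to_f32(v))  # rounding error compounds here
    return acc

vals = [0.1] * 1_000_000
f32_total = naive_sum_f32(vals)  # drifts visibly away from 100000
f64_total = sum(vals)            # stays within ~1e-6 of 100000
print(f32_total, f64_total)
```

The same effect is magnified when a backend silently reorders operations or lowers precision for speed, which is exactly why compute users care about what the driver is allowed to do.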

                  And I'm aware of no case where an OpenCL driver was used to artificially hamper performance. They don't need to, because all of these GPUs run proprietary firmware that can restrict things like fp64 performance while giving the user no workarounds (like using OpenGL compute shaders or Vulkan, etc.).

                  So, that just leaves the ROCm stack issues, which we're being told are getting worked on, though much slower than I'm sure everyone would like.

                  Originally posted by Qaridarium
                  if you do Vulkan compute you can use cheap gaming cards. for this fact alone, decision makers should decide to do Vulkan compute and drop OpenCL.
                  You're asking app developers to rewrite their code and to be at the mercy of whatever optimizations their Vulkan stack decided to make that could damage the accuracy of their results.

                  Originally posted by Qaridarium
                  "This is insane."
                  What I said was insane is that we have this open standard that's been out and in use for 13 years, and because of the involvement of 2 companies that you don't trust (and neither do I, for the record), you've decided this somehow infected the standard itself. If that were true, there ought to have been lots of complaints from the many people using and implementing it on Linux, but I have yet to hear any. Not that there aren't the usual sorts of complaints, but I've seen no evidence anyone tried to poison it for Linux. The Khronos working group has been very forthright about OpenCL's shortcomings, and they appear to be on the path to improving what's already a very viable compute API.


                  • bridgman
                    AMD Linux
                    • Oct 2007
                    • 13183

                    #99
                    Going back to the original article, we have re-opened all the support tickets that were closed based on the initial messaging, and we are working on revised messaging as well.


                    • coder
                      Senior Member
                      • Nov 2014
                      • 8841

                      Originally posted by Qaridarium
                      Nvidia is sabotaging OpenCL to move all the people to CUDA
                      How???

                      You can't just spout this crap without making your case, and I don't mean just regurgitating other forum posts. The standard is out there for everyone to see, so you should be able to cite specifics.

                      Originally posted by Qaridarium
                      Nvidia did sabotage OpenCL 2.0 and they now sabotage OpenCL 3.0 even more, because Nvidia only does 1.2 and they claim they have 3.0, which is a clear lie.
                      Choosing not to support it is not the same as sabotaging it. By that logic, AMD's lagging support also qualifies as sabotaging it.

                      Originally posted by Qaridarium
                      open your eyes: OpenCL is sabotaged by Nvidia, and the motive is to move people to CUDA.
                      And AMD wants people to move to HIP, Apple wants people to move to Metal, Google would like us to use RenderScript, and Microsoft wants DirectCompute. The only one fully backing OpenCL is Intel, and even they would rather people use oneAPI.

                      Sabotaging it would mean doing things to the standard that make it hard to use or implement, not just dragging their feet on supporting it.

                      Originally posted by Qaridarium
                      "(I assume) the lack of any Vulkan support in MI100"

                      i think the MI100 supports Vulkan compute mode.
                      Care to cite any evidence?

                      Originally posted by Qaridarium
                      the standard is in fact infected

                      bug77:

                      "
                      There's no strategy at work here, just a monumental failure from Khronos: they made SVM mandatory for OpenCL 2.x compliance, but SVM is only useful for running OpenCL on APUs. Nvidia doesn't make APUs, so they didn't implement it (they actually did, but not fully)."
                      CUDA has an SVM equivalent, so I don't know why he thinks it's useful only for APUs. And OpenCL 3.0 fixed the SVM issue by making it optional.

