Red Hat Announces RHEL AI


  • #91
    Originally posted by qarium View Post

    There really are people who buy a 4060 and claim that, because of DLSS and frame generation, it is faster than a 7900XTX. And if you point out that you can run FSR 3.1, they claim it is not a valid option because screenshots show it has lower quality, even though in motion you cannot see that anymore; the examples that do show it in motion are zoomed in and slowed down, or else you could not see it.
    For anything with ray tracing it is significantly faster; for everything else you won't notice any difference.

    Originally posted by qarium View Post
    AMD Strix Halo systems will be better and cheaper than an RTX 4060, of course.
    The key word there is "will be".
    Given that RTX 4080 performance will cost $300 at the same time, who will care?

    Originally posted by qarium View Post
    Because for a 4060 you need an AM5 socket system with a Ryzen 7800X3D to get the same performance.
    If you count all of that in, compared to an AMD Strix Halo SOC/APU system AMD will be faster per dollar, that's for sure.
    Originally posted by qarium View Post
    "the software needed to train them is insanely complicated."

    If that is the case, we will wait years to get all this shiny new stuff.
    I saw an AI-based GPU study in the past and the result was not very pleasant,
    but yes, it was very fast on very simple hardware.
    No, because Nvidia is shipping it now, e.g. https://www.nvidia.com/en-eu/geforce/rtx-remix/



    • #92
      Originally posted by mSparks View Post
      For anything with ray tracing it is significantly faster; for everything else you won't notice any difference.
      The key word there is "will be".
      Given that RTX 4080 performance will cost $300 at the same time, who will care?
      If you buy an Nvidia PCIe GPU it will always be more expensive, because of the additional cost of CPU, mainboard and VRAM...
      The AMD Strix Halo SOC/APU will reduce that cost significantly, and it ends the era of home users buying a PCIe GPU...

      Yes, yes, I get it: Nvidia is faster in ray tracing.


      Originally posted by mSparks View Post
      No, because Nvidia is shipping it now, e.g.
      https://www.nvidia.com/en-eu/geforce/rtx-remix/
      Not exactly what you promised, but still impressive. But why exactly does this not run on AMD hardware?



      • #93
        Originally posted by qarium View Post


        Not exactly what you promised, but still impressive. But why exactly does this not run on AMD hardware?
        Nvidia deploys it as CUDA. AMD has been very bleh at anything similar; there is now ROCm, but support and uptake have so far been minimal at best, for fairly subpar performance (AIUI you basically need a 7900XTX card to do any useful development with it, and it's buggy).

        The situation has improved, but there is a pretty big gap between them that will only close with investment from AMD that outpaces Nvidia's...

        That, IMHO, is going to look a lot like AMD vs Intel CPUs since the Zen launch.
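
        As a quick sanity check of the ROCm side, here is a minimal sketch, assuming a ROCm build of PyTorch is installed (the device name in the comment is just an example):

        # Check whether a ROCm build of PyTorch actually sees an AMD GPU.
        import torch

        print(torch.__version__)          # ROCm wheels carry a "+rocm" suffix
        print(torch.version.hip)          # HIP/ROCm version string; None on CUDA builds
        print(torch.cuda.is_available())  # True if a supported AMD GPU is visible
        if torch.cuda.is_available():
            print(torch.cuda.get_device_name(0))  # e.g. "AMD Radeon RX 7900 XTX"

        If this prints False or errors out, the card is either not on the ROCm support list or the stack is not set up correctly, which is the uptake problem described above.
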
        Originally posted by qarium View Post
        AMD Strix Halo SOC/APU
        iGPUs are iGPUs; they don't come close to discrete cards because they are so constrained in space and power budget. Orders of magnitude better than any Intel iGPU, but several generations behind what you can get from even AMD's low-end dGPUs.

        And in the iGPU space, Apple silicon is winning.



        • #94
          Originally posted by mSparks View Post
          Nvidia deploys it as CUDA. AMD has been very bleh at anything similar; there is now ROCm, but support and uptake have so far been minimal at best, for fairly subpar performance (AIUI you basically need a 7900XTX card to do any useful development with it, and it's buggy).
          The situation has improved, but there is a pretty big gap between them that will only close with investment from AMD that outpaces Nvidia's...
          That, IMHO, is going to look a lot like AMD vs Intel CPUs since the Zen launch.
          If Nvidia deploys it as CUDA, why can't you run it with ZLUDA?

          According to news reports, AMD will have this Zen moment in the GPU field with RDNA5.

          According to what I know from bridgman, they will introduce speculative branch prediction and Shader Execution Reordering (SER) to speed up the mesh drawing pipeline and optimise for more modern render pipelines, but the reordering feature means it can also stay performant on legacy render pipelines.
          Because of the speculative branch prediction they can use a shorter pipeline, and that lets them build higher-clocked designs. That means shader cores clocked at 4-5 GHz will become normal; they already have this technology for CPUs, and RDNA5 will bring it into the GPU world.

          Originally posted by mSparks View Post
          iGPUs are iGPUs; they don't come close to discrete cards because they are so constrained in space and power budget. Orders of magnitude better than any Intel iGPU, but several generations behind what you can get from even AMD's low-end dGPUs.
          And in the iGPU space, Apple silicon is winning.
          Yes, this is right for the AMD Strix Halo SOC... it only has RDNA 3.5 shader cores, so no big win in ray tracing.

          There will be a next-gen SOC with RDNA4... and then of course RDNA5...



          • #95
            Good on them; they have an opportunity to capitalize on the market. It will help them continue making great contributions to the open source community.



            • #96
              Originally posted by qarium View Post
              According to what I know from bridgman, they will introduce speculative branch prediction and Shader Execution Reordering (SER) to speed up the mesh drawing pipeline and optimise for more modern render pipelines, but the reordering feature means it can also stay performant on legacy render pipelines.
              Not sure who this might have come from but it certainly wasn't me. I was peripherally involved with RDNA4 to the extent that we leveraged the tech in datacenter GPUs but have had zero involvement with RDNA5.



              • #97
                Originally posted by bridgman View Post
                Not sure who this might have come from but it certainly wasn't me. I was peripherally involved with RDNA4 to the extent that we leveraged the tech in datacenter GPUs but have had zero involvement with RDNA5.
                Well, you maybe know that I know nearly every single one of your 13123 posts here in this forum.

                And one time you talked about the hypothetical possibility of a speculative branch prediction GPU design to increase the utilization of otherwise idle shader cores. Modern CPUs do this all the time to increase performance.
                In the past this was not necessary for GPUs. Of course it was years back that you made this comment, and of course you did not talk about RDNA5...

                But for me this is the logical next step. Modern GPU designs add more and more shader units and other units like matrix cores, and every year the share of units that are actually used goes down, which means the designs become less and less effective at keeping all the units busy. Take the Nvidia 4090: they literally doubled the FP32 shader units, but the utilization of those shader cores in real-world scenarios went down.

                To me this means the person who develops the first speculative branch prediction for GPUs will have great success.



                • #98
                  Originally posted by bridgman View Post
                  Not sure who this might have come from but it certainly wasn't me. I was peripherally involved with RDNA4 to the extent that we leveraged the tech in datacenter GPUs but have had zero involvement with RDNA5.
                  I think years back you talked about in-order vs. out-of-order designs for the shader cores, and those out-of-order designs do speculative branch prediction.

                  More or less, AMD has checked every 3-5 years over the last 30 years whether an out-of-order design with speculative branch prediction is useful.

                  I think the RDNA5 chip design will have such out-of-order shader units with a much higher clock speed...

                  And this could make it possible to have far fewer shader cores, because the utilization of each shader core is higher.



                  • #99
                    Originally posted by qarium View Post

                    If Nvidia deploys it as CUDA, why can't you run it with ZLUDA?
                    Because it needs at least a 7900XTX to be usable, and even then ZLUDA only offers minimal support for most of the things CUDA accelerates.

                    e.g.
                    minimal support for cuDNN, cuBLAS, cuSPARSE, cuFFT, NCCL, NVML
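
                    A rough way to see that gap for yourself is to probe which of those libraries the current environment can actually load, whether it is native CUDA or a ZLUDA-style shim on LD_LIBRARY_PATH. This is a minimal sketch; the soname versions are assumptions and may differ on your distro:

                    # Probe which CUDA-ecosystem libraries are loadable here.
                    import ctypes

                    libs = ["libcuda.so.1", "libcudnn.so", "libcublas.so",
                            "libcusparse.so", "libcufft.so", "libnccl.so", "libnvidia-ml.so.1"]
                    for name in libs:
                        try:
                            ctypes.CDLL(name)
                            print(f"{name}: loadable")
                        except OSError:
                            print(f"{name}: missing")

                    As described above, with ZLUDA most of the accelerated libraries come back missing or only partially covered.
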



                    • Originally posted by mSparks View Post
                      Because it needs at least a 7900XTX to be usable, and even then ZLUDA only offers minimal support for most of the things CUDA accelerates.

                      e.g.
                      minimal support for cuDNN, cuBLAS, cuSPARSE, cuFFT, NCCL, NVML
                      They killed ZLUDA anyway; they abandoned the project and hope that someone else will take it up, even though it's likely Nvidia would go after them?

                      AMD can't, or won't, support even their own tech. ROCm is a mess; from what I hear, even AMD GPU owners are very frustrated using ROCm - it's messy, complicated software - and some of them get fed up and switch to Nvidia.

                      Whether it's Blender or AI projects, AMD struggles to support any of it - especially Blender.

                      RDNA 4 is already a fail - they can't even release a flagship card. Sales of the GPUs have plummeted.
                      (Linked article preview:) The latest version of AMD's open-source GPU compute stack, ROCm, is due for launch soon, according to a Phoronix article; chief author Michael Larabel has been poring over Team Red's public GitHub repositories over the past couple of days. AMD ROCm version 6.0 was released last December...


